As the computing landscape evolves, a new focus has emerged within the industry, driven by the increasing demand for powerful artificial intelligence (AI) capabilities. This shift has led to major changes in the design and development of systems-on-chip (SoCs), particularly those from major chip manufacturers such as AMD and Intel.
The recent surge in AI applications, especially in the PC market, has redefined chip priorities. One catalyst has been Microsoft's Windows Copilot, which requires significant AI processing power. In response, both AMD and Intel have reoriented their SoC designs, devoting more attention to neural processing units (NPUs) at the expense of other traditional chip components.
AMD's Strix Point APUs, slated for launch later this year, exemplify this new direction. These APUs were initially designed with a substantial system-level cache (SLC), which would have notably improved the performance of both the central processing unit (CPU) and the integrated graphics processing unit (iGPU), pairing Zen 5 CPU cores with an RDNA 3+ iGPU for significant performance gains. Plans changed, however, when a larger AI engine block was introduced to enhance NPU capabilities, roughly tripling "XDNA 2" AI performance. To make room, the planned SLC was scaled back, altering the potential performance balance of the APU.
Intel has likewise invested heavily in future AI-centric chips such as Arrow Lake, Lunar Lake, and Panther Lake, all aimed at bolstering AI PC capabilities. Their NPUs are expected to occupy a significant portion of die space traditionally reserved for features such as additional CPU cores, expanded iGPU units, and larger caches.
This reallocation of resources heralds a new era of AI-focused chip development. The current trajectory suggests that while CPU and iGPU components will continue to improve in forthcoming SoC generations, the spotlight will increasingly fall on NPU performance. AMD's Strix Point, for example, touts a threefold NPU gain, reaching up to 50 tera operations per second (TOPS), while Intel's Panther Lake promises to roughly double NPU performance to about 70 TOPS compared to the previous generation.
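As a back-of-the-envelope illustration of where such TOPS figures come from, a peak throughput number can be estimated from an NPU's multiply-accumulate (MAC) unit count and clock speed, with each MAC counted as two operations. The MAC count and clock below are hypothetical values chosen only to show how a roughly 50 TOPS figure could arise; they are not published AMD or Intel specifications.

```python
def theoretical_tops(num_macs: int, clock_hz: float) -> float:
    """Estimate peak throughput in tera operations per second (TOPS).

    Each multiply-accumulate (MAC) unit is counted as two operations
    per cycle (one multiply plus one add), a common marketing convention.
    """
    return 2 * num_macs * clock_hz / 1e12

# Hypothetical NPU: 16,384 MAC units running at 1.5 GHz.
print(theoretical_tops(16_384, 1.5e9))  # ~49.2 TOPS
```

Real peak-TOPS claims also depend on numeric precision (INT8 figures are typically quoted, and lower precisions can double them), so a single headline number hides several design choices.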
Nevertheless, there is a clear trade-off: with a finite amount of die space available, expanding NPU capabilities comes at the cost of gains that could have been realized elsewhere on the chip. The growing emphasis on NPUs seems likely to hold unless the AI trend significantly wanes.
Looking into the future, we can expect a range of exciting new AI platforms in 2024, showcasing advancements in both CPU and AI performance. These developments reflect the industry’s response to the insatiable appetite for AI, setting the stage for a cutting-edge computing experience.
The prioritization of NPUs by chipmakers is a strategic response to market demand for more efficient and powerful AI processing. As these upcoming SoCs take shape, one thing is certain: the computing world is gearing up for an AI-driven transformation, with ongoing debate over how it will affect the balance between traditional performance and AI computational power.
