Apple is quietly putting the most important pieces in place for its next wave of custom silicon, and one project stands out: a dedicated AI server chip reportedly codenamed Baltra. A new Morgan Stanley analysis suggests Apple is already locking in advanced manufacturing capacity at TSMC, signaling that its ambitions extend well beyond Macs and iPhones and into large-scale Apple Intelligence infrastructure.
The key detail is Apple’s growing commitment to TSMC’s SoIC technology, a next-generation 3D chip packaging method. Morgan Stanley says Apple is materially ramping SoIC-related activity, with capacity reservations equivalent to about 36,000 wafers in calendar year 2026 and roughly 60,000 wafers in 2027. Those are substantial numbers, and they imply Apple expects to ship high volumes of advanced chips that benefit from cutting-edge packaging rather than treating SoIC as a niche experiment.
So what is SoIC, and why does it matter for Apple silicon? SoIC, short for System on Integrated Chips, is TSMC's 3D packaging approach that lets dies be stacked vertically as well as placed side by side, effectively allowing multiple dies to behave like a single SoC-class chip. That makes it easier to mix and match components such as CPU, GPU, and Neural Engine blocks within one package. In practical terms, it enables more flexible performance and product segmentation: Apple could, for example, tailor a "Pro" or "Max" class chip with different GPU-heavy or AI-heavy configurations without redesigning everything from scratch.
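To make the segmentation idea concrete, here is a toy sketch of how products could be composed from a shared pool of die designs rather than designed monolithically. Every die name and count below is hypothetical, invented purely for illustration; nothing here reflects actual Apple parts or configurations.

```python
# Toy illustration of chiplet-style product segmentation.
# All die names and counts are hypothetical, not real Apple parts.
from dataclasses import dataclass


@dataclass(frozen=True)
class Die:
    name: str
    kind: str  # "cpu", "gpu", or "npu"


def build_package(name, cpu_dies, gpu_dies, npu_dies):
    """Compose a package from reusable die designs."""
    dies = ([Die("cpu-tile", "cpu")] * cpu_dies
            + [Die("gpu-tile", "gpu")] * gpu_dies
            + [Die("npu-tile", "npu")] * npu_dies)
    return name, dies


# The same three die designs yield differently balanced products:
pro = build_package("Pro", cpu_dies=2, gpu_dies=2, npu_dies=1)
max_ = build_package("Max", cpu_dies=2, gpu_dies=4, npu_dies=2)
server = build_package("AI server", cpu_dies=1, gpu_dies=0, npu_dies=8)

for name, dies in (pro, max_, server):
    counts = {k: sum(d.kind == k for d in dies) for k in ("cpu", "gpu", "npu")}
    print(name, counts)
```

The point of the sketch is the reuse: a GPU-heavy "Max" and an AI-heavy server part draw from the same validated die designs, which is exactly the flexibility advanced packaging like SoIC is meant to unlock.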
Some of the SoIC capacity Apple is reserving is expected to support future high-end consumer chips, including upcoming M5 Pro and M5 Max parts and the M6 Pro/Max generation anticipated afterward. But the larger story is that much of this capacity appears aligned with Baltra, Apple’s rumored custom AI server processor expected to arrive around 2027.
The current expectation is that this server-class ASIC will be built on TSMC’s 3nm N3E process and use a chiplet-based design, where separate blocks are optimized for specific tasks and then combined into a complete processor. This approach is popular for scaling performance efficiently and for accelerating AI workloads, especially in data centers where power and throughput matter as much as raw speed.
The report also indicates that Broadcom could play a role in helping these chiplets communicate and work together inside Apple Intelligence servers, particularly around the interconnect and coordination between processor components under heavy simultaneous workloads. A chiplet strategy can also offer Apple an operational advantage: it may help keep the full architecture opaque, limiting how much any single partner can infer about the final design.
Looking further ahead, Apple reportedly wants to bring more of Baltra's development and production in-house over time, reducing its reliance on outside design support. One hint pointing in that direction is Apple's procurement of T-glass samples from Samsung Electro-Mechanics (SEMCO), which suggests Apple is exploring additional advanced packaging materials that can support high-performance multi-die designs.
If Morgan Stanley’s capacity figures are even close, the takeaway is straightforward: Apple is preparing for a major expansion of its custom silicon strategy, with AI server chips likely becoming a central pillar. Between large SoIC reservations, 3nm chiplet plans, and a long-term push to internalize more of the stack, Apple appears to be building the foundation for running Apple Intelligence at scale on hardware it controls end to end.