Intel Achieves A Phenomenal 90% EMIB Yield As Per Analyst, EMIB-M For Efficiency & EMIB-T For Massive ">12x Reticle" Packages In 2028

Intel’s EMIB Reaches 90% Yield, Fueling Foundry Momentum as EMIB‑T Targets 12+ Reticle Scaling by 2028

Intel’s EMIB advanced packaging technology just hit a major milestone that could reshape how next-generation AI data center chips are built. According to new commentary shared by analyst Jeff Pu, EMIB has now reached an impressive 90% yield rate—an important signal that the technology is not only viable, but also ready for wider adoption by major chip customers.

Why does that matter? Because advanced packaging has become one of the biggest bottlenecks in scaling modern AI hardware. As AI accelerators grow larger and more complex, companies increasingly rely on multi-die designs—mixing compute chiplets and stacks of high-bandwidth memory (HBM) into a single package. EMIB (Embedded Multi-die Interconnect Bridge) is Intel’s approach to connecting these dies efficiently, with the goal of delivering high bandwidth at lower cost and with strong manufacturing scalability.

Interest in EMIB has been building as AI firms look for alternatives that can compete with established 2.5D packaging approaches. EMIB’s core pitch is straightforward: make large, high-performance multi-die systems practical without driving cost and complexity through the roof. That’s also why EMIB has been linked to upcoming deployments in the AI ecosystem, including use in future Google TPU designs, as well as reported plans involving NVIDIA’s next-generation “Feynman” platforms. Meta has also been mentioned as a potential EMIB customer, though that particular timeline is said to be aimed at a late-2028 CPU, meaning details may take a while to surface.

Intel has also been emphasizing the advantages it sees in EMIB for AI and high-performance computing chips. The company highlights improved yields, lower power, lower cost, and better feasibility for building larger mixed-node systems—designs that combine dies made on different manufacturing process nodes in the same package. That flexibility is increasingly important as chipmakers try to optimize performance, supply chain options, and overall economics.

One of the more notable claims Intel has recently made is that EMIB yields are comparable to FCBGA (Flip Chip Ball Grid Array), while offering higher interconnect density between dies. FCBGA is widely used across CPUs, GPUs, and controller dies, connecting a chip directly to a substrate using solder bumps. EMIB takes a different approach by embedding a silicon bridge in the package substrate, enabling dense die-to-die connections without requiring a full silicon interposer.

Two EMIB variants are now central to Intel’s roadmap: EMIB-M and EMIB-T.

EMIB-M is designed with efficiency in mind. It integrates MIM (Metal-Insulator-Metal) capacitors into the silicon bridge to improve power delivery and signal integrity by reducing noise. MIM capacitors can cost more than alternative capacitor structures, but they're valued for stability and low leakage, useful traits when pushing dense chiplet designs. In EMIB-M configurations, chiplets connect through the bridge for high-bandwidth communication, while power is routed around the bridge.

EMIB-T evolves the design for even higher-end scaling. It adds TSVs (through-silicon vias) into the bridge, changing how power delivery is handled. Instead of routing power around the bridge, EMIB-T can route power directly through it. This is positioned as a better fit for the demands of top-tier AI accelerators where power delivery, density, and scaling are critical.

Intel is also outlining how EMIB-T could scale for the hyperscaler era. Today, the company says EMIB-T can enable designs larger than 8x reticle size using 120×120 mm packages, with configurations that can include 12 HBM stacks, four dense chiplets, and more than 20 EMIB-T bridges. Looking ahead to 2028, Intel targets scaling beyond 12x reticle size with packages larger than 120×180 mm, supporting more than 24 HBM stacks and over 38 EMIB-T bridges.
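Those reticle-multiple figures can be sanity-checked with simple area arithmetic. The sketch below assumes the standard 26×33 mm (~858 mm²) lithography reticle field, a common industry figure that the article itself does not state, and treats the "Nx reticle" claims as referring to silicon area, with the package footprint being larger:

```python
# Back-of-the-envelope check of the reticle-scaling figures above.
# Assumption (not from the article): one reticle = the standard 26 mm x 33 mm
# lithography exposure field, ~858 mm^2.
RETICLE_MM2 = 26 * 33  # 858 mm^2 per exposure field

def reticle_multiples(width_mm: float, height_mm: float) -> float:
    """Package footprint expressed as a multiple of one reticle field."""
    return (width_mm * height_mm) / RETICLE_MM2

# Today's claim: >8x reticle of silicon on a 120 x 120 mm package.
# The substrate itself spans roughly 16.8 reticle fields, leaving headroom
# for HBM stacks, bridge sites, and routing around the compute dies.
today = reticle_multiples(120, 120)

# 2028 target: >12x reticle on a package larger than 120 x 180 mm,
# a substrate footprint of roughly 25.2 reticle fields.
target_2028 = reticle_multiples(120, 180)
```

Under that assumed reticle size, the gap between silicon content (>8x and >12x) and substrate footprint (~16.8x and ~25.2x) illustrates why package-level interconnect like EMIB-T, rather than a monolithic interposer, becomes attractive at these dimensions.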

For broader context, competing advanced packaging roadmaps are also pushing toward extremely large designs by 2028, with industry expectations that reticle-scale expansion and HBM integration will continue accelerating. Some ultra-large packaging approaches may go even bigger, but typically at significantly higher cost—making cost-effective scaling a key battleground.

One of EMIB’s biggest strategic strengths is that it’s designed to be IP-agnostic and process-node-agnostic. In practical terms, that means customers can combine multiple dies from different sources and process technologies into one package, optimizing for bandwidth, power integrity, and scale without being locked into a single node strategy. As AI chips increasingly become complex “systems in a package,” that flexibility can be just as important as raw performance.

With yields reportedly reaching 90% and multiple major AI players rumored to be exploring adoption, EMIB is shaping up to be one of Intel’s most important technologies for competing in advanced packaging—and a potentially pivotal lever for Intel’s foundry ambitions as demand for massive AI data center chips keeps rising.