SK hynix Adopts Intel’s EMIB to Bypass TSMC CoWoS Constraints and Keep AI Chip Supply Moving

As the global push to build more AI infrastructure accelerates, one surprisingly stubborn choke point has moved into the spotlight: advanced chip packaging. With demand surging for AI accelerators and high-bandwidth memory, the industry’s 2.5D packaging capacity is feeling the strain. Against that backdrop, memory leader SK hynix is working with Intel on chip packaging technology, signaling how valuable alternative packaging options have become as supply tightness persists.

Intel, which has been staging a notable comeback under CEO Lip-Bu Tan, is also using this moment to broaden its role beyond CPUs. The company is positioning its advanced packaging capabilities as a strategic advantage, and its Embedded Multi-die Interconnect Bridge (EMIB) is increasingly being discussed as a practical alternative when conventional 2.5D packaging routes are constrained.

The collaboration centers on 2.5D packaging approaches and Intel’s EMIB technology. In a typical 2.5D design, a large silicon interposer routes connections between multiple dies, such as a logic die paired with stacks of high-bandwidth memory (HBM), before the package connects to the circuit board. EMIB takes a different route: instead of a full-size interposer, it embeds small silicon bridge dies in the package substrate only where die-to-die links are needed, which reduces dependence on the interposer supply chain. That distinction matters because the supply chain supporting conventional interposer-based packaging has been tight since the AI boom kicked into high gear in late 2022, and bottlenecks have remained difficult to eliminate even as manufacturers race to expand capacity.

That’s helping fuel interest in options beyond the most widely used solutions in the market. Industry chatter has pointed to rising attention around Intel’s EMIB, particularly as some established packaging routes face bottlenecks. According to a report from ZDNet Korea, SK hynix and Intel are researching and developing EMIB-based packaging, with a focus on using EMIB in a 2.5D-like configuration to better support multi-die AI chip designs.

One key area of evaluation is how EMIB could be used to connect SK hynix’s HBM to a chip’s logic die, a critical requirement for modern AI accelerators where memory bandwidth often dictates real-world performance. The report also indicates SK hynix is reviewing what raw materials would be needed if EMIB moves from evaluation into volume production, underscoring that this effort isn’t just theoretical—it’s being examined with manufacturing realities in mind.

As with any advanced packaging technology, the deciding factor may come down to yields. Packaging yield can determine whether a solution is merely promising or actually viable at scale for high-volume AI silicon. Well-known analyst Ming-Chi Kuo has commented on this topic, noting that while a reported 90% yield figure for Intel’s EMIB-T may serve as a validation metric, it isn’t necessarily the same as true production yield. For large customers considering packaging choices for next-generation AI chips, real production yields can make or break adoption decisions.
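To see why the gap between a validation metric and true production yield matters so much commercially, a minimal sketch helps. The numbers below are illustrative assumptions, not figures from the report; the point is simply that packaging yield divides directly into the effective cost of every good unit, so even modest yield shortfalls compound quickly at AI-accelerator volumes:

```python
def cost_per_good_unit(cost_per_attempt: float, yield_rate: float) -> float:
    """Effective cost of one good package when only `yield_rate`
    of packaging attempts produce a usable part."""
    if not 0 < yield_rate <= 1:
        raise ValueError("yield_rate must be in (0, 1]")
    return cost_per_attempt / yield_rate

# Assume a nominal $100 packaging cost per attempt (purely illustrative).
for y in (0.90, 0.75, 0.60):
    print(f"yield {y:.0%}: ${cost_per_good_unit(100, y):.2f} per good unit")
# yield 90%: $111.11 per good unit
# yield 75%: $133.33 per good unit
# yield 60%: $166.67 per good unit
```

For multi-die AI packages the stakes are higher still, since a failed packaging step can scrap known-good logic and HBM dies that were already paid for, not just the packaging attempt itself.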

Sources suggest Intel’s aggressive push to market EMIB, combined with today’s packaging supply constraints, could elevate the technology into a more central position in the AI packaging ecosystem. Intel’s messaging is also tying advanced packaging directly to customer responsiveness. During the company’s latest earnings call, Tan emphasized that Intel’s structure allows it to incorporate customer feedback quickly and adapt products to different AI workloads, highlighting packaging and foundry capabilities as part of the value proposition—not just the CPU business.

With AI chip demand continuing to rise and packaging still acting as a limiting factor, collaborations like the one between Intel and SK hynix show where the market is headed: more competition in advanced packaging, more experimentation with interconnect approaches, and a sharper focus on scalable manufacturing results. If EMIB can prove itself in production conditions—especially for HBM-heavy AI designs—it could become a far more common piece of the AI hardware supply chain in the months and years ahead.