OpenAI’s race to expand its computing power just took a decisive turn. CEO Sam Altman recently visited Taiwan for high-level talks with two of the world’s most critical technology manufacturers—TSMC and Foxconn—signaling a push to accelerate the company’s ambitious Stargate initiative and its plans for custom AI chips.
Stargate is envisioned as a vast buildout of AI infrastructure, with multiple next-generation data centers powered by cutting-edge hardware. The venture reportedly brings together OpenAI, Oracle, and SoftBank, and with an estimated price tag near $500 billion, it is being framed as one of the largest computing projects ever undertaken in the United States, designed to deliver unprecedented AI performance at global scale.
Foxconn’s role appears central to making that vision real. The company is expected to be a major supplier of NVIDIA-based rack-scale solutions that could serve as the backbone for Stargate’s early deployments. Altman’s discussions in Taiwan likely focused on ramping AI server production, supply assurance, and how Foxconn can support the logistics of building and deploying a mega-scale AI infrastructure across multiple sites.
The TSMC visit points to an equally significant move: OpenAI’s pursuit of custom silicon. While the company has been striking multi-billion-dollar agreements for GPUs and cloud capacity with names like NVIDIA and CoreWeave, it’s increasingly clear that relying solely on off-the-shelf hardware won’t be enough to hit long-term cost, performance, and energy targets. Custom ASICs purpose-built for AI could offer higher efficiency and tighter control over the compute roadmap.
Earlier reports suggested OpenAI might co-develop a chip with Broadcom, but that strategy now appears to be evolving. With talent reportedly recruited from Google’s TPU program, OpenAI seems poised to take a larger share of chip design in-house, potentially handing off fabrication to TSMC. While specifications remain under wraps, industry chatter points to TSMC’s 3nm process as a likely candidate, with first integrations targeted around 2026 if timelines hold.
If successful, OpenAI’s ASIC could become one of the first high-profile demonstrations of Big Tech moving beyond general-purpose accelerators toward fully customized AI silicon at scale. That doesn’t mean replacing NVIDIA overnight; rather, it signals a hybrid future where specialized chips handle specific workloads while GPUs continue to power broad training and inference tasks. The endgame is better performance per watt, lower total cost of ownership, and more predictable supply for critical AI models.
Why this matters goes beyond hardware. AI development increasingly depends on economies of scale in compute. Owning a differentiated chip platform can unlock new model architectures, tighter software-hardware co-optimization, and a more resilient supply chain. It also sharpens competitive positioning as demand for AI compute outstrips current capacity.
What to watch next:
– Design milestones: signs that OpenAI has finalized core architectures and is moving toward tape-out
– Foundry commitments: confirmation of node selection and manufacturing windows at TSMC
– Server ramp: Foxconn’s capacity plans for NVIDIA-based racks and how quickly they can scale
– Partner ecosystem: updates from Oracle and SoftBank on data center deployments tied to Stargate
– Timelines: whether the 2026 integration target holds amid intense demand for advanced nodes
Altman’s Taiwan trip underscores a simple reality: the companies that can secure and shape the next generation of AI compute—through both partnerships and custom chips—will set the pace for the industry. Stargate, Foxconn’s manufacturing muscle, and TSMC’s leading-edge processes together form a blueprint for how OpenAI intends to do exactly that.