Anthropic Is Signing a 6GW Compute Capacity Deal With Amazon to Train & Deploy Its Claude Models on Trainium Chips

Amazon Pumps Another $5B Into Anthropic, Securing 6GW of Trainium Power for Claude AI

Anthropic is ramping up its AI ambitions with a major expansion of its partnership with Amazon, securing multi-gigawatt compute capacity to train and deploy future versions of its Claude models through 2026. The agreement adds significant new access to Amazon’s Trainium hardware, signaling how quickly demand for large-scale AI infrastructure is accelerating.

The updated deal gives Anthropic up to 5 gigawatts (GW) of compute capacity in the first half of 2026, focused largely on Amazon Trainium2. On top of that, Anthropic plans to bring nearly an additional 1GW online by the end of 2026 using a mix of Trainium2 and next-generation Trainium3 chips. This expanded runway is designed to help Anthropic scale Claude for both training and real-world deployment as usage continues climbing.

Anthropic and Amazon have been working closely since 2023, and Claude adoption on Amazon’s AI platform has grown to the point where more than 100,000 customers are now running Claude there. The two companies also launched Project Rainier, described as one of the world’s largest compute clusters. Anthropic is already operating at enormous scale within that environment, reportedly using more than one million Trainium2 chips to train and serve Claude. The new agreement effectively supercharges that existing footprint, adding more capacity and newer silicon over time.

Financially, the commitment is just as eye-catching. Anthropic is reportedly committing more than $100 billion over the next decade to Amazon Web Services (AWS). The expansion spans multiple generations of Amazon’s custom AI chips, including Trainium2, Trainium3, and Trainium4, with an option for Anthropic to access future chip generations as they become available. The move aligns with broader industry signals that demand for custom silicon is surging, especially for AI training and inference workloads.

The additional capacity also aims to support Claude’s growing customer needs outside the U.S., including increased demand across Asia and Europe, where AI adoption and enterprise deployment are scaling rapidly.

On the investment side, Amazon is reportedly putting another $5 billion into Anthropic now, with a further $20 billion planned later. That would build on the roughly $8 billion Amazon has already invested, reinforcing how strategically important this partnership has become for both companies as the race to build larger, more capable AI systems intensifies.

Meanwhile, Anthropic’s model roadmap continues to move fast. After recently launching Claude Opus 4.7, positioned as a faster model with support for more agent-like workflows, Anthropic has indicated that Opus 4.7 is also being used in part to test cybersecurity-related capabilities. The company has suggested the model is not as capable as the Mythos preview, but that it serves as an important stepping stone toward broader deployment plans for Mythos.

There have also been recent reports that the NSA has been using Claude Mythos in preview form, despite internal concerns at the agency that reportedly labeled Anthropic a “supply chain risk.” According to those reports, the NSA is among roughly 40 organizations granted access to Mythos during its preview phase.

With massive compute secured, billions in additional investment, and a clear focus on next-generation model deployment, Anthropic’s expanded Amazon partnership highlights a defining reality of modern AI: the frontier isn’t just about smarter models—it’s also about who can lock in the largest, most reliable pools of high-performance compute to build and run them at scale.