
Apple Taps Google’s 1.2-Trillion-Parameter AI to Supercharge Siri

Apple is turning to Google’s AI to supercharge a redesigned Siri, and new details reveal just how big that upgrade could be. According to new reporting, Apple plans to tap a customized version of Google’s Gemini model with an estimated 1.2 trillion parameters—roughly eight times the size of the approximately 150-billion-parameter proprietary model currently handling Siri’s cloud tasks. The result should be a far more capable assistant that can plan complex tasks, understand context, and summarize content with greater accuracy.

Before landing on Gemini, Apple reportedly tested OpenAI’s ChatGPT and Anthropic’s Claude. The final plan is to use Google’s model for the heaviest lifting in the cloud while keeping Apple’s privacy-first approach intact. Requests sent to the cloud will run through the company’s Private Cloud Compute architecture, designed to protect users with encryption and stateless processing so personal data isn’t stored.

The arrangement is expected to cost Apple around $1 billion per year to license Google’s AI. It’s the latest chapter in a long-running business relationship between the two companies, which already includes Google paying Apple a significant annual sum to remain the default search provider on Apple devices.

Inside Apple, the Siri overhaul is codenamed Glenwood and is being driven by leaders including Mike Rockwell, who led the Vision Pro effort, and software engineering chief Craig Federighi. The revamped Siri is built around three key pillars:

– Query planner: A decision-making brain that figures out the best way to complete a request, whether that means running a web search, pulling in personal data like calendar entries or photos, or triggering actions inside third-party apps through App Intents.
– Knowledge search system: A built-in general knowledge layer that lets Siri answer trivia and factual questions without always leaning on external AI or web results.
– Summarizer: A core Apple Intelligence feature that can condense text and audio, generate notification digests, create webpage summaries in Safari, and power writing tools. This can tap third-party AI models, including ChatGPT, when appropriate.

In this hybrid setup, the customized Gemini model will drive the query planner and summarizer, while Apple’s on-device large language models will handle the knowledge search component. Importantly, this doesn’t hand over Apple’s ecosystem to Google’s search AI; instead, it uses Gemini as a specialized engine for select cloud tasks while keeping critical intelligence on the device.
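To make the hybrid split concrete, here is a minimal sketch of how a request dispatcher along these lines might route each of the three pillars to its backend. Everything here is hypothetical: the keyword classifier, the `Pillar` enum, and the backend labels are illustrative stand-ins, not Apple's actual intent logic or API.

```python
from enum import Enum, auto

class Pillar(Enum):
    QUERY_PLANNER = auto()     # multi-step planning (cloud model)
    KNOWLEDGE_SEARCH = auto()  # factual Q&A (on-device model)
    SUMMARIZER = auto()        # text/audio condensing (cloud model)

# Hypothetical routing table mirroring the reported split: the customized
# Gemini model in the cloud for planning and summarization, Apple's own
# on-device models for knowledge search.
ROUTES = {
    Pillar.QUERY_PLANNER: "cloud-gemini",
    Pillar.SUMMARIZER: "cloud-gemini",
    Pillar.KNOWLEDGE_SEARCH: "on-device",
}

def route(request: str) -> tuple[Pillar, str]:
    """Naive keyword classifier standing in for Siri's real intent logic."""
    text = request.lower()
    if "summarize" in text or "summary" in text:
        pillar = Pillar.SUMMARIZER
    elif any(w in text for w in ("who", "what", "when", "where")):
        pillar = Pillar.KNOWLEDGE_SEARCH
    else:
        # Default: treat it as an action request needing a multi-step plan.
        pillar = Pillar.QUERY_PLANNER
    return pillar, ROUTES[pillar]

print(route("Summarize this webpage"))
print(route("What is the capital of France?"))
print(route("Book a table and text Anna the time"))
```

The point of the sketch is the routing table, not the classifier: per the reporting, only the planning and summarization paths would leave the device for Gemini, while factual lookups stay on Apple's own models.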

Apple isn’t planning to rely on Gemini forever. The company continues to develop its own large-scale models and long-term in-house solutions. For now, though, Gemini is the bridge that helps deliver the next-generation Siri, promising smarter in-app actions, richer personal context, and better on-screen awareness when the update rolls out in a future iOS release.