Apple Confirms Google Gemini Will Help Power Its Next-Generation Siri

Apple has now confirmed what many in the tech world suspected: a Google Gemini-powered upgrade is coming to Siri later this year, helping drive Apple’s next wave of Apple Intelligence features. The announcement effectively acknowledges that Apple is leaning on Google’s large-scale AI to accelerate its revamped voice assistant and broader on-device and cloud AI ambitions.

According to details previously shared by industry insiders, Apple plans to use a customized version of Google’s Gemini model to power key parts of the new Siri experience. The Gemini model being tailored for Apple is reportedly enormous at roughly 1.2 trillion parameters, dramatically larger than the cloud model Apple currently uses for Siri, which is said to sit around 150 billion parameters. Reports have also claimed the partnership could cost Apple as much as $1 billion per year, underlining just how central this collaboration may be to Apple’s AI roadmap.

To understand why Apple would bring Gemini into the picture, it helps to know how the revamped Siri is expected to work. The next-generation assistant is described as having three major building blocks.

First is the query planner, essentially Siri’s decision-making brain. This layer determines how Siri should complete a request—whether that means running a web search, pulling from personal information like calendar events and photos, or triggering actions inside third-party apps using App Intents. App Intents is Apple’s framework that makes app features “discoverable” to Siri, allowing you to do things through voice commands without manually opening the app.
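To make this concrete, here is a minimal sketch of what an App Intent looks like from a developer’s side. The intent and the `GroceryStore` data layer are hypothetical stand-ins, not code from any real app; the shape of the `AppIntent` protocol itself comes from Apple’s App Intents framework.

```swift
import AppIntents

// Hypothetical data layer standing in for the app's own storage.
final class GroceryStore {
    static let shared = GroceryStore()
    private(set) var items: [String] = []
    func add(_ item: String) { items.append(item) }
}

// Declaring this action as an AppIntent makes it discoverable to Siri,
// so a request like "Add milk to my grocery list" can run by voice
// without the user opening the app.
struct AddGroceryItemIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Grocery Item"

    @Parameter(title: "Item")
    var itemName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        GroceryStore.shared.add(itemName)
        return .result(dialog: "Added \(itemName) to your grocery list.")
    }
}
```

A query planner deciding to "trigger actions inside third-party apps" is, in effect, choosing to invoke an intent like this with parameters filled in from the user’s request.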

Second is a knowledge search system. This is meant to give Siri a built-in general knowledge capability to answer trivia and everyday questions more directly, without relying on external chatbots or always falling back to standard web results.

Third is the summarizer, a core Apple Intelligence-style tool designed to condense text or audio quickly and clearly. This is the capability behind features like notification summaries, webpage summaries in Safari, and other writing and summarization tools. It can also tap third-party models when needed.

Under Apple’s reported architecture, Google’s customized Gemini would take on the heavy lifting for Siri’s query planner and summarizer in the cloud, while Apple’s own on-device large language models—marketed as Apple Foundation Models within Apple Intelligence—would handle the knowledge search system.
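Apple’s on-device models are not purely internal: developers can already prompt them through the FoundationModels framework introduced with iOS 26. A minimal sketch, assuming a device and OS that support Apple Intelligence (the prompt text is illustrative):

```swift
import FoundationModels

// Prompt Apple's on-device foundation model via a LanguageModelSession.
// Requires iOS 26+ (or macOS 26+) on Apple Intelligence-capable hardware.
func askOnDeviceModel() async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "In one sentence, what is a solar eclipse?"
    )
    return response.content
}
```

Under the reported architecture, it is this class of on-device model that would field knowledge-search queries, while the heavier planning and summarization work runs on the customized Gemini in the cloud.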

Apple is also working toward a major rollout of Apple Intelligence features with a Spring 2026 iOS update (described as iOS 26.4). Those upgrades are expected to make Siri far more useful in everyday situations, especially for hands-free tasks.

One of the biggest changes is In-app Actions, where Siri can perform context-based tasks inside supported apps through voice commands. That could mean adding items to a grocery list, sending a message through a specific app, or playing music—without you having to navigate screens step by step.

Another upgrade is Personal Context Awareness, which would let Siri use information from your personal data more intelligently. For example, it could search your Messages history to find a podcast recommendation someone sent you, or surface details relevant to your plans and conversations.

Then there’s On-Screen Awareness, designed to help Siri understand what you’re looking at on your iPhone or iPad display and take action accordingly—bringing Apple closer to the “agentic” assistant experience users increasingly expect.

Apple’s confirmation came via a statement given to CNBC, where the company said it chose Google’s tech after evaluating options and concluded that Gemini offers the strongest foundation for Apple Foundation Models. Google also followed up by confirming a multi-year collaboration in which the next generation of Apple Foundation Models will be based on Gemini models and cloud technology, supporting future Apple Intelligence capabilities and a more personalized Siri.

Meanwhile, Apple CEO Tim Cook has recently said the company is making good progress on the revamped Siri and remains on track for a 2026 launch. Cook also suggested Apple plans to integrate with more AI model providers over time. At the moment, ChatGPT is the only third-party model integrated with Siri under the Apple Intelligence umbrella, but Apple’s comments point to a broader multi-model future.