Apple is finalizing a deal under which it would pay Google approximately $1 billion annually to license Google's next-generation AI model, Gemini, to power a significant update to Siri. The deal represents a notable shift for Apple, which will temporarily rely on Google's AI while it works to scale up its own models.
The enhanced Siri, projected to launch next spring, will use the Google-powered system to handle complex operations and multi-turn questions. Apple plans to switch to its own model once it reaches the necessary level of performance; until then, the most complicated features will be powered by Google.

Apple turns to Google as a temporary AI supplier for new Siri
The report indicates that Apple will use Google's Gemini model as a temporary solution until its internal models are strong enough to support the new Siri architecture. According to Bloomberg, Apple evaluated OpenAI and Anthropic models earlier this year but ruled out Anthropic over financial disagreements; Google's terms were reportedly more favorable to Apple.
The customized Gemini model has roughly 1.2 trillion parameters, an indicator of the system's complexity and sophistication. The current cloud-based version of Apple Intelligence uses approximately 150 billion parameters, meaning the new Gemini model would operate at roughly eight times that scale. Apple reportedly sees the model as a stopgap while it develops its own 1 trillion-parameter in-house option, which should be ready by next year.
According to Bloomberg, the new Siri upgrade is known internally at Apple as Glenwood, while the larger OS update that carries the new features is called Linwood. The project is reportedly led by Mike Rockwell, creator of the Vision Pro, and Craig Federighi, the head of software engineering.

Gemini model to handle complex planning and summarization
Rather than surfacing Gemini as a chatbot inside Siri, Apple will integrate Google's technology on the back end. The model will handle summarization and multi-step planning, allowing Siri to understand context and perform more complicated tasks within apps. For example, it will condense information, coordinate multi-app workflows, and help Siri carry out more complex instructions.
Some features will still run on Apple's own models. Apple apparently designed the system so that Gemini steps in only when a task is too complicated for the existing Apple Intelligence models to handle.
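The reported behavior, in which simple requests stay on Apple's models and only harder tasks fall through to Gemini, amounts to a threshold-based dispatcher. The sketch below is purely illustrative: the scoring heuristic, threshold, and handler names are invented, since Apple has not published how the routing decision is made.

```python
# Hypothetical sketch of the reported routing: local models handle
# simple requests, and Gemini is invoked only above a complexity
# threshold. All names and heuristics here are stand-ins.

def complexity_score(request: str) -> int:
    # Toy heuristic: estimate how many "steps" the request implies.
    return request.count(" then ") + request.count(" and ") + 1

def handle_with_apple_model(request: str) -> str:
    return f"[on-device model] {request}"

def handle_with_gemini(request: str) -> str:
    return f"[Gemini via Private Cloud Compute] {request}"

def route(request: str, threshold: int = 2) -> str:
    # Gemini steps in only when the task exceeds what the local
    # models are expected to handle well.
    if complexity_score(request) > threshold:
        return handle_with_gemini(request)
    return handle_with_apple_model(request)

simple = route("set a timer for 10 minutes")
complex_req = route("find my Rome photos and make an album and share it")
```

A real implementation would likely score requests with a small classifier rather than string counting, but the shape of the decision, local first with a remote fallback, is the same.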
The Gemini model is based on a Mixture-of-Experts architecture, meaning only a fraction of its parameters is activated for each query. This architecture lets Google build systems with very large parameter counts without consuming proportional processing resources every time a user interacts with the model. Apple hopes this will lower Siri's operating costs and reduce interaction latency.
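The idea behind Mixture-of-Experts can be seen in a minimal sketch: a gating function scores every expert, but only the top-k experts actually run, so per-query compute scales with k rather than with the total parameter count. This toy version uses random weights and tiny dimensions; Gemini's actual architecture is not public.

```python
import math
import random

def moe_forward(x, gate_w, experts, k=2):
    """Toy Mixture-of-Experts layer: run only the top-k experts."""
    # Gating scores: one dot product per expert.
    logits = [sum(g * xi for g, xi in zip(row, x)) for row in gate_w]
    # Select the k best-scoring experts; the rest stay idle.
    top_k = sorted(range(len(logits)), key=lambda i: logits[i])[-k:]
    # Softmax over the selected experts only.
    exps = [math.exp(logits[i]) for i in top_k]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Combine only the chosen experts' outputs, so per-query compute
    # grows with k, not with the total expert count.
    out = [0.0] * len(x)
    for w, i in zip(weights, top_k):
        y = [sum(m * xi for m, xi in zip(row, x)) for row in experts[i]]
        out = [o + w * yi for o, yi in zip(out, y)]
    return out

random.seed(0)
d, n_experts = 4, 8
x = [random.gauss(0, 1) for _ in range(d)]
gate_w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
experts = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
           for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)  # only 2 of 8 experts run
```

With 8 experts and k=2, three quarters of the parameters sit idle on any given query, which is how a 1.2 trillion-parameter model can stay affordable to serve.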
The Gemini model will be hosted on servers within Apple's Private Cloud Compute service, meaning user data will not pass through or be stored on Google's servers. Data security and privacy will remain under Apple's control, and Google will not have access to user information. The report states that the partnership will most likely stay in the background and will not feature in Apple's marketing.
Apple targets a 2026 Siri redesign after restructuring the architecture
Apple had initially planned to release an improved Siri with iOS 18, but internal performance problems forced the company to rework the architecture. According to Bloomberg, building a new platform for the large-language-model-driven version of Siri took longer than expected, delaying the release. Apple now intends to ship the enhanced Siri in spring 2026 with the iOS 26.4 update.
Some initial improvements will arrive earlier, but the most notable ones, including full-fledged conversation, multi-task handling, and deeper connectivity between applications, will be powered by Gemini. The full version of the new Siri is expected to work more like OpenAI's ChatGPT or Anthropic's Claude. Nonetheless, the company has no intention of launching a standalone chatbot app of its own.
The report also notes that earlier talks between Google and Apple explored integrating a Gemini chatbot feature directly into Siri. Those discussions came close to a deal earlier this year, but the feature did not advance. Currently, Siri can route complex requests to ChatGPT, although users cannot select a different chatbot.
Apple and Google already have a substantial business relationship. As it stands, Google pays Apple approximately $20 billion a year to remain the default search engine on iPhones, iPads, and Macs. The new Gemini deal, estimated to cost Apple approximately $1 billion per year, will be independent of that search agreement.
Apple's decision to adopt Google's model does not alter its long-term AI plans. The company is still developing its own internal models, including the 1 trillion-parameter model currently in progress. Apple is said to believe that once the in-house model reaches production quality, it will take over as the main model behind the improved Siri.
What to expect from the new Siri
By next spring, when Apple releases the version of Siri that can reason more effectively and hold multi-turn dialogue, users should find a much more useful assistant. Siri will understand context, manage multi-step workflows, and carry out actions across several applications without manual intervention.
The new model will allow Siri to summarize articles, create detailed plans, and complete tasks that previously required several separate commands. Apple plans to incorporate the new features throughout the operating system while keeping external branding around Google to a minimum.
The company will position Google's technology as an internal supplier rather than a consumer-facing partner.
The new Siri and Apple Intelligence should be a cornerstone improvement for iPhone and iPad users. Apple believes Siri can become a far more sophisticated assistant, one that answers with performance on par with today's leading conversational AI platforms while preserving the privacy focus and device integration Apple is known for.






