Apple is reportedly leaning harder than ever into Claude AI behind the scenes, and new anecdotal details suggest the company is now treating AI usage as a measurable productivity signal inside certain teams.
According to a recent account shared online, some Apple groups on the business side, including global sourcing, have gained access to Claude with a token budget that can reach roughly $300 per day. The more surprising detail is how that budget is being interpreted internally: teams that use far fewer tokens than their allotted amount may be drawing increased scrutiny, and low AI consumption is said to be coming up when managers request additional staffing or backfills. In other words, leadership may be asking whether a team has fully embraced AI tools before approving more headcount.
To understand why this stands out, it helps to compare typical AI coding and assistant costs. Anthropic has indicated that Claude usage for developer-focused workflows can often land in the neighborhood of $100 to $200 per developer per month on certain plans. Put next to a $300-per-day allocation, the message is clear: Apple appears willing to spend aggressively on internal AI if it translates into faster output, smoother operations, and higher productivity per employee.
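For scale, a quick back-of-the-envelope calculation makes the gap concrete. The $300-per-day and $100-to-$200-per-month figures come from the reporting above; the assumption of roughly 21 working days per month is mine, for illustration only:

```python
# Illustrative arithmetic only. The $300/day budget and the $100-$200/month
# typical developer spend are the figures reported above; 21 working days
# per month is an assumption made here for the sake of the comparison.
daily_budget = 300            # reported per-employee daily token budget (USD)
working_days = 21             # assumed working days per month
monthly_equivalent = daily_budget * working_days

typical_low, typical_high = 100, 200  # reported typical monthly spend (USD)

print(f"Monthly equivalent of a $300/day budget: ${monthly_equivalent}")
print(f"That is roughly {monthly_equivalent // typical_high}x to "
      f"{monthly_equivalent // typical_low}x a typical developer plan")
```

Even under conservative assumptions, a fully used $300-per-day allowance would amount to thousands of dollars per employee per month, an order of magnitude beyond the typical per-developer figures cited.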
This internal push arrives as Apple’s consumer AI plans also seem to be moving closer to a more concrete rollout. One major shift on the horizon is a more capable, chatbot-style Siri that is expected to be deeply integrated into Apple’s software experience. The assistant is described as being able to draw on personal context, take actions within apps, browse the web, generate content (including images), help with coding, and summarize or analyze information. Siri is also expected to support uploading files as part of requests, which could broaden its usefulness for work and school tasks.
Apple is also said to be building features that let Siri understand what’s on your screen. That includes the ability to view open windows and on-screen content, adjust device settings, and handle combination requests where multiple actions are bundled into one prompt. If implemented well, this would move Siri beyond quick commands and toward being a more proactive, workflow-oriented assistant.
On the infrastructure side, the revamped Siri is expected to run using cloud compute based on Google’s TPU technology, with Apple maintaining that its privacy standards and safeguards will remain intact under the arrangement. The AI model powering the experience is described as a more advanced evolution aligned with Apple’s latest foundation model work, and it’s expected to be positioned competitively against top-tier AI offerings.
Siri’s interface is also expected to expand beyond voice. A dedicated Siri app is reportedly in the works, designed to store and surface your conversation history in one place. Another notable addition being tested is an “Extensions” concept that could allow Siri to connect with third-party AI agents, potentially including tools like ChatGPT or Claude, so users can tap into specialized capabilities depending on the task.
Apple is also experimenting with new ways to access Siri, including an interface connected to the Dynamic Island, while continuing to support familiar activation methods like voice and the power button. And in a potential change to everyday navigation on iPhone, Apple is reportedly exploring a future where Siri takes on a bigger role in system-wide search, effectively merging assistant features with the core search experience users rely on today for apps, suggestions, settings, and upcoming appointments.
Taken together, these developments paint a picture of a company pushing AI from two directions at once: making it a required, budgeted tool internally while preparing a more powerful Siri experience for consumers. If the internal token budgets and scrutiny around AI usage are any indication, Apple doesn’t just want teams to have access to AI—it wants proof they’re using it.