Microsoft doubles down on AI spend as rivals widen access
- Microsoft paired strong Q3 results with FY26 capital expenditure guidance of $190 billion and Azure growth guidance of 39-40%, confirming that AI infrastructure spending is still accelerating.
- Microsoft 365 Copilot reached 20 million paid enterprise seats, giving Microsoft stronger evidence that AI is becoming embedded in core workplace software usage.
- Microsoft introduced Microsoft IQ and Agent 365 while expanding Azure Local for Microsoft Sovereign Private Cloud, broadening its pitch from AI assistant to governed enterprise AI platform.
- OpenAI brought GPT-5.5, Codex, and Managed Agents to Amazon Bedrock just after Microsoft and OpenAI loosened their partnership structure, reducing Azure's practical exclusivity.
- AWS, Anthropic, and Google all expanded AI distribution into work software, creative tools, and cars, increasing pressure on Microsoft's own Copilot-led ecosystem.
Microsoft's Q3 results made one point unmistakable: the company is still in build-out mode for AI. It reported Q3 revenue of $82.89 billion and EPS of $4.27, while guiding Azure growth at 39-40% and forecasting FY26 capital expenditures of $190 billion. That matters because Microsoft is telling the market that demand for AI and cloud capacity remains strong enough to justify another year of unusually heavy spending, even as investors keep asking when that spend turns into cleaner margins.
Azure growth and Copilot usage are the practical answer Microsoft offered to that scrutiny. Microsoft 365 Copilot reached 20 million paid enterprise seats, up from 15 million in January, with weekly engagement per user matching Outlook and queries per user growing nearly 20% quarter-over-quarter. For Microsoft's position, this is the strongest near-term evidence that AI is becoming a habit inside existing software franchises rather than a side experiment.
Microsoft is broadening its enterprise AI stack beyond Copilot chat
Microsoft also used the week to show that it wants to own more of the control layer around enterprise AI deployment. It introduced Microsoft IQ for contextual AI and Agent 365 for governance, and it had already made Microsoft 365 Copilot's agentic capabilities generally available, including multi-agent orchestration and Copilot Cowork. The significance lies less in the product names than in the direction: Microsoft is trying to move from selling an assistant to selling an operating model for AI inside business software, with governance as part of the pitch rather than an afterthought.
That strategy extends to infrastructure for customers who cannot simply move everything into a standard public cloud setup. Azure Local now supports deployments of up to thousands of servers within a single sovereign environment for Microsoft Sovereign Private Cloud. This strengthens Microsoft's hand in regulated and national markets, where AI demand exists but control over data, models, and operations is often as important as model quality.
OpenAI's AWS move weakens Azure exclusivity without ending the partnership
The most important external development for Microsoft was not a new model, but a change in distribution. OpenAI launched a limited preview of frontier models including GPT-5.5, Codex, and Managed Agents via Amazon Bedrock, just after Microsoft and OpenAI announced an amended partnership that gives Microsoft continued primary cloud rights and a non-exclusive license to OpenAI's IP through 2032 while allowing OpenAI broader multi-cloud flexibility. In plain terms, Azure keeps a privileged relationship, but Microsoft no longer has the same clean exclusivity story.
This matters on two levels. First, AWS can now use OpenAI access to narrow a competitive gap in enterprise AI. Second, Microsoft has to prove that customers stay for the full stack - Azure, Copilot, security, governance, and deployment options - not just because Azure was once the easiest place to reach OpenAI's models.
Competitors are attacking Microsoft's AI distribution from multiple angles
The wider market also moved against any assumption that Microsoft's current lead will go unchallenged by default. AWS launched Amazon Quick as an AI assistant for work, Anthropic added creative connectors for Claude across Adobe Creative Cloud, Blender, Ableton, SketchUp, and others, and Google said Gemini will roll out to cars with Google built-in. These are different markets, but they share the same pattern: competitors are embedding AI directly inside established workflows and devices instead of waiting for users to come to a standalone chatbot.
For Microsoft, that raises the bar for Copilot and Azure alike. The real contest is increasingly about distribution, default placement, and workflow integration, not just model benchmarks. It also means Microsoft's own push into agents and governance is timely, because rivals are no longer competing only on model quality but on how deeply their AI becomes part of everyday work and use.