Microsoft broadens Foundry as AI competition shifts to capacity and agents
- Microsoft expanded Azure AI Foundry with GPT-chat-latest (GPT-5.5 Instant), new third-party models, and three new realtime voice models.
- Microsoft highlighted Azure datacenter region expansion across Europe as reports raised doubts about its 2030 hourly renewable energy target.
- Anthropic and AWS pushed the agent race forward with finance-specific templates and Amazon Bedrock AgentCore Payments.
- Compute and power constraints moved closer to the center of AI competition as Anthropic secured over 300 MW of GPU capacity and Hut 8 signed a $9.8 billion data-center lease.
- Microsoft deepened its role in AI oversight through new evaluation agreements with U.S. and U.K. agencies and model access for CAISI testing.
Microsoft used the week to make Azure AI Foundry both broader and more current. It added OpenAI’s GPT-chat-latest (GPT-5.5 Instant) as the new default chat model and expanded the catalog with IBM Granite 4.1, NVIDIA Nemotron Nano Omni and Qwen 3.6-35B-A3B. That matters because Microsoft is leaning harder into Foundry as a model marketplace and control layer, not just a route to one flagship model. If enterprise customers want choice rather than a single-stack bet, that is the more defensible position.
The same pattern showed up in voice. Microsoft introduced GPT-realtime-translate, GPT-realtime-whisper and GPT-realtime-2 through the Realtime API, pushing Foundry further into low-latency translation, transcription and reasoning use cases. For Microsoft, this is less about a flashy demo than about making Azure AI Foundry useful for production workloads where response speed and multimodal input decide whether an AI feature is actually deployable.
Microsoft is expanding Azure capacity in Europe while questions grow around the energy bill for AI. A new Azure post highlighted datacenter region expansion across Austria, Belgium, Denmark, Greece and Finland, framed around sovereign and compliant infrastructure for cloud and AI demand. In practical terms, this is Microsoft responding to a simple enterprise constraint: many customers want advanced AI, but they also want it in-region and under familiar compliance rules.
That expansion sits awkwardly next to reports that Microsoft is discussing whether to delay or abandon its 2030 goal of matching 100% of its hourly electricity use with renewable energy purchases. The tension is now harder to ignore, because AI build-out is no longer an abstract future need but a current infrastructure race. For readers tracking Microsoft’s AI position, the message is clear: capacity is still strategic, but the cost is showing up not only in capital intensity but also in sustainability commitments.
Competitors are moving from generic assistants toward industry-specific and transaction-capable agents. Anthropic released ten agent templates for financial services, including workflows such as pitchbook creation, KYC screening and month-end closing, and paired that push with Microsoft 365 add-ins, connectors and an MCP app. That is a direct attempt to meet enterprise buyers where they already work, which is exactly the terrain Microsoft has tried to own with Copilot.
AWS went a step further on agent infrastructure by previewing Amazon Bedrock AgentCore Payments with Coinbase and Stripe, aimed at letting AI agents pay for APIs, web content, MCP servers and other agents in real time. Even if early, it points to a more concrete battleground: the winners may not be the firms with the best chatbot alone, but the ones that supply the rails for agents to do work, call tools and complete transactions inside business systems.
AI infrastructure competition is becoming as much about megawatts as models. Anthropic said it had doubled Claude Code rate limits, removed peak-hour caps, raised API rate limits for Claude Opus, and secured over 300 MW of GPU capacity through a SpaceX partnership at the Colossus 1 data center. That is important because service quality at enterprise scale increasingly depends on reserved compute, not just model design.
The same week brought more evidence that the build-out is spreading far beyond the model vendors themselves. Hut 8 signed a 15-year, $9.8 billion lease for the first phase of its Beacon Point AI data-center campus, adding to a stream of financing and capacity moves across the sector. For Microsoft, this wider market signal matters because Azure is competing in an environment where access to power, land and GPUs is becoming a strategic constraint for everyone, not a background detail.
Microsoft is also spending political capital on AI evaluation and security frameworks. It announced new agreements with the U.S. Center for AI Standards and Innovation and the U.K.’s AI Security Institute, while also joining an arrangement to give the U.S. Commerce Department’s CAISI early access to frontier AI models for security testing. This does not create immediate product revenue, but it strengthens Microsoft’s position with governments and large regulated buyers that increasingly want AI systems to arrive with formal testing and oversight, not just marketing claims.