Empty conference breakout room after the final keynote session at Google Cloud Next '26 - printed agenda on a chair, projection screen dimming down

Google Cloud Next '26: Google's New Control Layer for Enterprise AI

Four days in Las Vegas, one message for enterprise AI: Google is building the control plane for agentic AI and inviting companies to run their agents on it

At Google Cloud Next '26 in Las Vegas (April 22 to 24, 2026), Google unveiled the Gemini Enterprise Agent Platform alongside an 8th-generation TPU family. With 75 percent of Google Cloud customers now using AI products and 16 billion tokens processed per minute via API, the announcements carry real operational weight. For European enterprises, the key questions are not about feature sets. They are about who controls the agent infrastructure and what the sovereignty claims actually deliver.

Summary

Google Cloud Next '26 marked a strategic move from AI model provider to agent infrastructure platform. The Gemini Enterprise Agent Platform introduces four structured pillars covering development, runtime, governance and optimisation. The 8th-generation TPU family splits into a training-optimised variant (TPU 8t) and an inference-optimised variant (TPU 8i), each with substantial performance gains. Enterprise adoption figures are credible: KPMG deployed over 100 agents with 90 percent staff uptake in one month, and Vodafone projects 100 million euro in savings. For European decision-makers, the critical issue is the gap between Google's sovereignty marketing and the actual SEAL-2 certification that its S3NS joint venture achieved, while the three competing providers reached SEAL-3.

What Happened at Google Cloud Next '26

Google Cloud Next '26 took place in Las Vegas from April 22 to 24, 2026. The event served as the annual showcase for Google Cloud's enterprise product direction, and this year's central theme was clear from the opening keynote: AI agents are moving from experimentation to production infrastructure, and Google wants to own the control layer.

The headline announcement was the Gemini Enterprise Agent Platform, a structured four-pillar framework designed to take companies from building individual agents to running, governing and measuring agent fleets at scale. Alongside it, Google introduced the 8th generation of its Tensor Processing Units, split into two specialised variants optimised for training and inference respectively. Workspace received an AI-enabled collaboration layer, and Google announced a 5x speed improvement in Microsoft 365-to-Workspace migration tooling.

The conference numbers reflect genuine adoption momentum rather than aspirational positioning. 75 percent of Google Cloud customers now use at least one AI product. 330 customers processed more than one trillion tokens over the past 12 months. API throughput reached 16 billion tokens per minute in Q1 2026, up from 10 billion the previous quarter. Paid Gemini Enterprise users grew 40 percent quarter-on-quarter in Q1 2026.

Key Takeaway

Google Cloud Next '26 was not primarily a product launch event. It was a signal that Google intends to be the infrastructure layer for enterprise AI agents, not just a model API provider. That distinction carries significant strategic and commercial implications for every organisation currently evaluating its AI platform choices.

The Gemini Enterprise Agent Platform: Four Pillars

Google did not announce a single product. It announced a structured platform with four distinct pillars, each addressing a different stage of the enterprise agent lifecycle. The four pillars are BUILD, SCALE and ORCHESTRATE, GOVERN, and OPTIMIZE. Taken together, they represent Google's attempt to make itself the operational foundation for enterprise AI agent fleets.

BUILD: Development Tools for Agent Creation

Agent Development Kit (ADK)

A graph-based development framework for building multi-step agent workflows. Graph-based orchestration provides more explicit control over agent decision paths than purely sequential or reactive approaches.

Agent Studio

A low-code environment that allows non-developer teams to build and configure agents using visual tooling. Intended to extend agent creation beyond engineering departments into business units.

Agent Registry

A central directory for cataloguing all deployed agents within an organisation. Provides visibility into what agents exist, what they do and who owns them, which is a basic governance prerequisite.

Enterprise Integrations

Native integrations with Atlassian, Box, Oracle, ServiceNow and Workday via MCP (Model Context Protocol), the open standard for tool interoperability. MCP reduces custom integration work and supports portability.

SCALE AND ORCHESTRATE: Runtime Infrastructure

Sub-Second Cold Starts

Agent instances initialise fast enough to respond to user-triggered events without perceptible delay. This matters for interactive use cases where agents must feel responsive rather than batch-like.

Long-Running Agents

Support for agent workflows that persist across hours or days rather than completing within a single session. Enables agents that monitor, wait for conditions and resume, covering use cases like procurement approval or compliance monitoring.

Memory Bank

Persistent cross-session memory that agents can read and write. Allows agents to maintain context about a user, project or business process across separate interactions, rather than starting from scratch each time.

Secure Sandboxes

Isolated execution environments that prevent agents from accessing systems or data outside their defined scope. A foundational control for deploying agents in environments with sensitive data.

GOVERN: Security and Identity Controls

Agent Identity

Cryptographic identifiers assigned to each agent, enabling audit trails that record which agent took which action. Without agent-level identity, forensic analysis of AI-driven incidents is significantly harder.

Agent Gateway

A traffic control layer that sits between agents and downstream systems. Includes prompt injection protection, a critical defence against attempts to manipulate agent behaviour through malicious input in retrieved content.

Agent Anomaly Detection

Automated monitoring that flags unusual agent behaviour patterns. Complements human oversight by providing a first-pass filter for agents acting outside expected parameters.

Security Command Center

Centralised security visibility across the agent fleet, integrated with Google Cloud's existing security monitoring infrastructure. Provides a single view for security teams rather than agent-by-agent monitoring.

OPTIMIZE: Observability and Evaluation

Agent Observability

OpenTelemetry-compliant instrumentation for agent traces, logs and metrics. OTel compliance is important because it allows agent telemetry to flow into existing monitoring stacks without custom integrations.

Agent Simulation

A stress-testing environment that runs agents against synthetic load and edge-case scenarios before production deployment. Reduces the risk of production incidents caused by untested agent behaviour under load.

Agent Evaluation

Real-time scoring of agent output quality, enabling continuous measurement of whether agents are performing as intended across changing conditions.

The four-pillar structure is deliberate. Each pillar addresses a phase where enterprise agent deployments currently fail: undisciplined build, unreliable runtime, absent governance, and no objective quality measurement.

The platform's depth is notable, but so is its implications. Every pillar adds value and adds dependency. Organisations that adopt Agent Registry, Memory Bank and Agent Identity across a fleet of production agents are building significant switching costs into their infrastructure. This is not unique to Google, but it is important to name clearly before committing.

For further context on how Google's agent platform compares to Microsoft and AWS approaches, see our analysis of AI Agent Governance across the major cloud providers .

8th-Gen TPUs: Training and Inference, Two Separate Chips

Google's 8th-generation TPU family abandons the single-chip approach in favour of two specialised variants, each optimised for a fundamentally different computational workload. This reflects a maturation of the hardware market for AI: the gap between training and inference requirements has widened to the point where optimising for both simultaneously produces worse results than optimising for each separately.

Dimension TPU 8t (Training) TPU 8i (Inference)
Primary workload Large model training and fine-tuning Production inference serving
Performance vs. previous gen 3x vs. Ironwood generation 80% better performance per dollar
Energy efficiency 2x better performance per watt 3x more on-chip SRAM reduces memory bandwidth pressure
Scale 9,600 TPUs with 2 PB shared memory in a single superpod 1,152 TPUs per pod
Interconnect Virgo network, capable of connecting 1 million TPUs Boardfly topology within a pod
Reliability metric ~97% goodput across training runs Collectives Acceleration Engine for inter-chip coordination

For most European enterprises, the TPU announcements are relevant as a signal rather than a direct procurement decision. Very few European companies train frontier models. What matters is that inference costs are dropping materially (80 percent better performance per dollar for TPU 8i), which will flow through to lower API costs for Gemini-based workloads over time. The training numbers matter for the handful of European organisations with large-scale fine-tuning programmes.

Key Takeaway

The 80 percent inference cost improvement on TPU 8i is the number that matters most for enterprise budgets. As that improvement reaches API pricing, the economics of running production AI agents at scale will change. Factor this into multi-year platform cost projections rather than using current API pricing as a baseline.

Enterprise Adoption by the Numbers

The adoption metrics Google presented at Next '26 are among the most concrete enterprise AI numbers published by any hyperscaler this year. They deserve attention because they move the conversation from "AI is coming" to "AI is operational at scale."

75%
of Google Cloud customers now use at least one AI product
330
customers processed over 1 trillion tokens in 12 months
16B
tokens per minute processed via API in Q1 2026 (up from 10B)
40%
QoQ growth in Gemini Enterprise paid users in Q1 2026

Three customer cases illustrate what early enterprise adoption looks like in practice.

Vodafone

Projected 100 million euro in savings over the programme lifetime through AI-powered self-healing network diagnostics. Agents monitor network performance, identify anomalies and trigger remediation workflows without human intervention for routine faults.

KPMG

Achieved 90 percent employee adoption within the first month of deploying over 100 specialist agents. This is an unusually high adoption figure and suggests the agents were deployed in workflows where employees had immediate, practical motivation to use them.

WPP

Running AI-led marketing campaigns at twice the previous speed, producing one full campaign approximately every four days. The speed gain comes from agent-assisted content generation, adaptation and approval workflows rather than pure automation.

327%
growth in multi-agent adoption on Databricks in 4 months
5x
faster Microsoft 365 to Workspace migration with new tooling
100M+
euro projected Vodafone savings from AI diagnostics

The Databricks multi-agent adoption figure (327 percent growth in four months) is perhaps the most instructive. It suggests that once organisations begin deploying agents, they deploy more of them quickly, which has direct implications for the pace at which platform dependencies accumulate. An organisation that starts with two or three agents on a specific platform is likely to have twenty or thirty within a year.

Sources: Google Cloud Next '26 keynote presentations, April 2026. Customer figures as reported by Google and verified through separate company disclosures where available.

Sovereignty

European Perspective: Data Sovereignty and the S3NS Controversy

European data sovereignty in cloud computing remains an unresolved problem, and Google Cloud Next '26 provided a useful illustration of why. The gap between marketed sovereignty and certified sovereignty is significant, and European enterprises making platform decisions based on marketing claims rather than certification outcomes are taking on risk they may not have accurately assessed.

The EU Sovereign Cloud Qualification: What Actually Happened

The European Commission awarded sovereign cloud contracts worth 180 million euros over six years to four providers. Three of those providers achieved SEAL-3 certification, the highest level in the evaluation framework. The fourth, S3NS, the joint venture between Thales and Google, achieved only SEAL-2.

CISPE Secretary General Francisco Mingorance did not soften his assessment. He called the outcome "sovereignty washing" and described it as "an own goal" for the providers involved. The charge is specific: presenting a product as sovereign cloud infrastructure while failing to achieve the certification level that three competitors reached in the same evaluation is a credibility problem, not just a technical shortfall.

The structural issue beneath the certification gap is the US CLOUD Act. This law allows US government authorities to compel US-headquartered companies to produce data held abroad, including in EU data centres. The CLOUD Act applies to Google regardless of where data is physically stored. This is not a hypothetical risk: it is a legal mechanism that exists and has been used.

Google Cloud does offer EU-only data processing options, and the company has invested in technical and contractual controls designed to limit exposure. But these controls operate within the CLOUD Act framework rather than outside it. Organisations in regulated industries or handling data subject to strict residency requirements should treat this as a legal question requiring qualified advice, not a marketing question answerable by reading a product page.

S3NS certification gap in context: Three of four EU sovereign cloud providers qualified at SEAL-3. S3NS (Thales plus Google) qualified at SEAL-2. For organisations using the EU sovereign cloud programme as a proxy for sovereignty assurance, SEAL-2 means the assurance level is lower than what competitors offer in the same programme. Verify the actual certification level before using sovereign cloud status as a compliance or procurement justification.

For a broader view on how European countries are navigating digital sovereignty differently, see our analysis of Digital Sovereignty in Europe: France acts while Germany hesitates . The Microsoft EU datacenter transparency issues discussed in Microsoft's EU datacenter lobbying provide additional context on how US cloud providers navigate European sovereignty requirements.

The Critical View: Vendor Lock-in and the Agent Control Plane

The most consequential observation from Google Cloud Next '26 was not in Google's keynote. It came from SiliconAngle's conference coverage: whoever controls the agent control plane wins. This is a simple statement with significant implications.

"Whoever controls the agent control plane wins." The race to own enterprise AI agent infrastructure is accelerating, and Google Cloud Next '26 made clear that Google intends to be that control plane.

Technology strategist Kai Waehner has mapped the specific layers at which agentic AI lock-in accumulates. The list is longer than most organisations expect: the model itself, the orchestration framework used to coordinate agents, the runtime environment where agents execute, and the operational patterns embedded in engineering and business teams over time. Each layer creates its own switching cost. The combination creates compounding dependency.

Lock-in Factors
Memory Bank stores persistent agent context in Google-managed infrastructure, making agent history non-portable
Agent Registry and Agent Identity create proprietary catalogues and audit trails tied to the Google Cloud platform
ADK graph structures and Agent Studio configurations may not transfer to alternative orchestration frameworks
Teams trained on Agent Studio's low-code tooling develop Google-specific operational knowledge
327% multi-agent growth on Databricks shows how quickly the agent fleet, and therefore the dependency, expands
Portability Options
MCP (Model Context Protocol) as an open standard for tool integrations reduces custom connector lock-in
Apache Iceberg-based Cross-Cloud Lakehouse provides data portability via an open table format
OTel-compliant Agent Observability allows telemetry to flow into vendor-neutral monitoring stacks
Enterprise integrations with Atlassian, Box, Oracle, ServiceNow and Workday are accessible via open MCP protocol
Gemini models are available via API without requiring the full Gemini Enterprise Agent Platform

The portability mitigations are real. MCP is a genuine open standard, Apache Iceberg is widely supported, and OTel compliance matters. But these open-standard components exist alongside proprietary ones. The practical question is not whether Google offers portability options, but whether the specific combination of features an organisation adopts leaves its most valuable assets, the agent logic, the accumulated memory, the governance configurations, in a portable state.

For a deeper analysis of how to approach agent platform selection with lock-in in mind, see our article on AI Agent Platforms: Vendor Lock-in and Enterprise Strategy .

Action

What Companies Should Do Now

Google Cloud Next '26 confirmed that enterprise AI agent infrastructure is no longer a future concern. It is a present decision. Organisations that defer platform strategy decisions are not avoiding lock-in: they are accumulating it by default as teams make local choices without a governing framework.

  1. Map your agent workloads against the four lock-in layers. Before evaluating any platform, establish what you are actually building. Classify planned and existing agent use cases by model dependency, orchestration complexity, runtime persistence requirements and operational knowledge investment. This map determines where your actual lock-in exposure will accumulate.
  2. Establish data residency and sovereignty requirements before selecting a platform. Do not accept marketing language about sovereign cloud as a compliance substitute. Verify the certification level of any claimed sovereign cloud product against the actual EU programme outcomes. If SEAL-3 is the standard and a provider achieved SEAL-2, that gap requires a specific risk assessment, not a marketing reframe.
  3. Prioritise open standards for integration and data layers. Use MCP for tool integrations where available. Store agent-relevant data in Apache Iceberg or equivalent open formats. Configure observability via OTel-compatible instrumentation from the start. These decisions are much cheaper to make on day one than to retrofit after a fleet of agents has been running for twelve months.
  4. Treat governance as a design requirement, not a late addition. Agent Identity, audit trails and anomaly detection are not optional features for regulated environments. If you are deploying agents in HR, finance, customer data or critical operations, governance infrastructure must exist before the agents go to production, not after an incident prompts it.
  5. Evaluate the Memory Bank decision carefully. Persistent cross-session agent memory is a powerful capability and a significant portability risk. Before adopting Memory Bank for production workloads, understand exactly what data is stored, in what format, under what terms, and what the process for exporting or migrating that data would be. This is a contractual and architectural question, not just a feature evaluation.
  6. Use the Stanford AI Index benchmarks to calibrate your timeline. European enterprise AI adoption is tracking behind US adoption in most sectors. The Stanford AI Index 2026 documents the trust gap and its causes. Understanding where your sector stands relative to the adoption curve helps set realistic expectations for what is achievable in 2026 versus 2027.
Key Takeaway

The organisations that will have the most strategic options in 2028 are those that make deliberate, documented architecture decisions in 2026 rather than allowing vendor convenience to determine their platform structure by default. Google Cloud Next '26 raised the stakes. It did not change the principles for making good decisions.

Further Reading

Frequently Asked Questions

What is the Gemini Enterprise Agent Platform? +

The Gemini Enterprise Agent Platform is Google's unified infrastructure for building, running, governing and optimising AI agents at enterprise scale. Unveiled at Google Cloud Next '26 in Las Vegas, it rests on four pillars: BUILD (development tools including ADK, Agent Studio and Agent Registry), SCALE and ORCHESTRATE (runtime features including sub-second cold starts and persistent Memory Bank), GOVERN (security tools including cryptographic Agent Identity and prompt injection protection via Agent Gateway), and OPTIMIZE (observability and evaluation tooling). It integrates with major enterprise software vendors including Atlassian, Box, Oracle, ServiceNow and Workday.

What is the difference between TPU 8t and TPU 8i? +

TPU 8t is optimised for model training: it delivers three times the compute power of the previous Ironwood generation, doubles performance per watt, and can scale to 9,600 TPUs sharing two petabytes of memory in a single superpod connected by the Virgo network, with around 97 percent goodput. TPU 8i is optimised for inference: it delivers 80 percent better performance per dollar, contains three times more on-chip SRAM than its predecessor, uses a Collectives Acceleration Engine, and connects 1,152 TPUs per pod via the Boardfly topology. The 8t chip maximises scale and throughput for training, while the 8i chip maximises cost efficiency for inference workloads.

What does Google Cloud Next '26 mean for European data sovereignty? +

The picture is mixed. Google offers EU-only data processing options and participates in the European sovereign cloud programme. However, S3NS, the Thales-Google joint venture competing for European sovereign cloud contracts worth 180 million euros over six years, achieved only SEAL-2 certification, not the SEAL-3 that the other three qualified providers reached. CISPE Secretary General Francisco Mingorance called this outcome sovereignty washing. The US CLOUD Act remains a structural concern: it allows US authorities to demand access to data held by US-headquartered companies abroad regardless of where that data is physically stored.

How significant is the vendor lock-in risk with Gemini Enterprise Agent Platform? +

The risk is real and accumulates across multiple layers simultaneously. Technology strategist Kai Waehner identifies four lock-in dimensions for agentic AI: the model itself, the orchestration framework, the runtime environment, and the operational patterns embedded in teams over time. Whoever controls the agent control plane gains a durable advantage. Google partially mitigates this by offering MCP as an open interoperability standard and an Apache Iceberg-based Cross-Cloud Lakehouse for data portability. But the deeper an organisation integrates with Memory Bank, Agent Registry and proprietary orchestration, the higher the exit cost becomes.

What should European companies do now? +

European companies should take four steps. First, map current and planned AI agent workloads against the four lock-in layers: model, orchestration, runtime and team patterns. Second, establish data residency and processing requirements before selecting any agent platform, and verify that chosen providers can actually meet SEAL-3-equivalent sovereignty standards. Third, use open standards where available: MCP for tool integration and Apache Iceberg for data portability reduce switching costs without sacrificing capability. Fourth, treat agent governance as a day-one requirement rather than a later addition, since retroactive governance is significantly more expensive and disruptive.

Which European companies are already using Google Cloud AI? +

Vodafone is projecting 100 million euro in savings through AI-powered self-healing diagnostics built on Google Cloud. KPMG achieved 90 percent employee adoption across more than 100 agents within a single month of deployment. WPP, the global marketing group, is running AI-led campaigns at twice the previous speed, producing one campaign roughly every four days. These represent early, high-profile deployments rather than broad European adoption. The sovereign cloud and CLOUD Act questions remain unresolved for regulated industries and public sector organisations across Europe.