S-Bahn platform from above: dozens of commuters standing individually on the platform looking at their smartphones, captured as a documentary image of collective AI use and fragmented public attention

Stanford AI Index 2026: Record AI Adoption, Crumbling Public Trust

The most important annual AI report delivers numbers every decision-maker needs to know

88% of organisations use AI. Generative AI reached 53% of the population in three years, faster than the PC or the internet. At the same time, public trust has fallen to a historic low. What the Stanford AI Index 2026 means for your AI strategy.

Summary

The Stanford AI Index 2026 documents record organisational AI adoption (88%) alongside a widening trust gap: 73% of AI experts are optimistic about the labour market, but only 23% of the public share that view. Technically, AI models have surpassed human PhD-level baselines in science, and the performance gap between US and Chinese models has narrowed to 2.7%. For European organisations, the EU AI Act provides a measurable trust advantage: the EU is trusted more as an AI regulator than the US, where trust in domestic AI regulation stands at just 31%, or China. AI agents remain in single-digit adoption across most business functions, creating a clear early-mover opportunity.

What is the Stanford AI Index 2026?

The Stanford AI Index is the annual report of Stanford's Human-Centered AI Institute (HAI), published since 2018. It is the most important annual reference on the state of AI worldwide because it measures not only model performance but also documents investment flows, organisational adoption, education, labour markets, regulation, and public trust.

The 2026 edition, published on 13 April 2026, is the first to fully document the wave of broadly deployed generative AI since 2023. It shows that technical progress and social adaptation are moving in opposite directions.

In short: the Stanford AI Index combines performance data, investment statistics, labour market data, and global citizen surveys into a comprehensive picture of AI development worldwide.

Technical performance exceeds earlier forecasts

AI models achieved capability leaps in 2025 that were considered five years away just twelve months prior. On the SWE-bench Verified benchmark for automated software engineering, the success rate rose from 60% to nearly 100% in a single year. Frontier models now solve PhD-level science questions and won a gold medal at the International Mathematics Olympiad.

At the same time, the report offers a deliberate counterexample: the same top models read an analogue clock correctly only 50.1% of the time. AI capability is context-dependent and resists sweeping generalisations.

January 2025

SWE-bench at 60%

Automated software engineering correctly solves 60% of standard coding tasks.

Spring 2025

Maths Olympiad: Gold medal

Google Gemini Deep Think wins a gold medal at the International Mathematics Olympiad.

Early 2026

SWE-bench near 100%

Coding benchmark rises from 60% to nearly 100% in one year. PhD-level baselines surpassed.

~100%
SWE-bench Verified (2026)
Prior year: 60% - a roughly 40-point rise in twelve months
2.7%
US lead over China
In 2024 the gap was 9.26% - now effectively tied
50.1%
Success rate reading analogue clocks
Same model solves PhD questions, fails at simple everyday tasks
Key insight

Avoid blanket judgements about AI capabilities. The Index shows: models are exceptional in specific domains and weak in others. Pilot projects should target exactly the areas where strengths are documented.

Enterprise adoption at record levels

Eighty-eight percent of organisations now use AI - the highest value ever measured. Generative AI reached 53% of the population in three years, faster than the PC or internet did in comparable periods. Four in five university students use AI for coursework.

Despite these figures, the use of AI agents in organisations remains nascent: adoption rates across nearly all departments are still in single digits. That is not a lag - it is an opportunity.

88%
Organisational AI adoption
Highest value ever recorded in the Stanford AI Index
53%
Population share GenAI (3 years)
Faster than PC and internet in comparable periods
$172bn
Consumer value GenAI (US annual)
Median value per user tripled between 2025 and 2026

Generative AI reached 53% of the population within three years - faster than the PC or the internet did in comparable timeframes.

Stanford AI Index 2026, Stanford HAI

AI agents as the early-mover theme: While broad AI adoption has crossed the 88% mark, AI agent adoption rates across nearly all business functions remain in single digits. Organisations that invest in structured agent pilots now are building a lead that late movers will find difficult to close within twelve months.

Productivity gains are real and unevenly distributed

The documented productivity gains from AI are real and based on primary sources: 14 to 26 percent efficiency improvements in customer support and software development, and up to 72 percent in marketing. The Stanford AI Index 2026 provides the strongest data foundation to date for internal business cases.

At the same time, the report reveals a structural labour market shift: employment among US developers aged 22-25 has declined by nearly 20% since 2024, while employment among older developers continues to grow. AI is not replacing professions wholesale, but it is changing which entry points remain viable.

Marketing 72%
Software development 26%
Customer support 14%

AI adoption is not a success story unless a trust foundation has been built within the workforce.

Analysis based on Stanford AI Index 2026

For European talent strategies, the question is no longer whether AI changes jobs, but how fast and in which segments. Junior roles in areas where AI excels are under more pressure. Targeted upskilling and a clear internal competency development programme are relevant now, not in two years.

The trust gap: the underestimated strategic problem

Seventy-three percent of AI specialists expect positive labour market impacts from AI. Only 23 percent of the public share that view. This is not a communication gap that a newsletter can close. It reflects a structural perception difference between those actively shaping AI and those facing it from the outside.

For executives in European organisations, this means: AI projects do not fail because of the technology. They fail because of legitimacy and acceptance. Internal AI communication is a leadership responsibility.

AI Experts
73% see positive labour market impacts
Experience AI as a tool and productivity lever
Direct access to models and results
Know what AI can and cannot do
General Public
Only 23% see positive labour market impacts
Encounter AI through media coverage and rumour
Limited experience with real-world applications
Dominant perception: job threat
31%
US trust in AI regulation
Last place among all surveyed countries worldwide
362
Documented AI incidents 2025
+55% compared to 233 incidents in 2024
#1
EU regulatory trust globally
EU ranked above US and China in global trust survey

Europe's strategic position in the global comparison

According to the Stanford AI Index 2026, the EU is the most trusted AI regulator globally. This is not a marketing claim - it is the result of a global citizen survey. For European organisations, this means: the regulatory burden of the EU AI Act comes with a measurable trust advantage that companies in the EU can use in international markets.

On the investment side, the US saw USD 285.9 billion in private AI investment in 2025, 23 times more than China. Yet the inflow of AI researchers to the US has fallen by 89% since 2017. In the long run, this weakens the US innovation base and opens opportunities for other regions to build AI capacity.

$285.9bn
US private AI investment 2025
23 times more than China's $12.4bn
-89%
AI researcher inflow to US since 2017
US talent base erodes despite investment lead
#1
EU regulatory trust globally
EU above US and China in global trust rankings
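The headline multiples above are simple arithmetic on the report's raw figures. A quick sanity check, using only numbers quoted in this article, reproduces them:

```python
# Reproduce two headline ratios from figures quoted in this article
# (Stanford AI Index 2026, as cited above).

us_investment_bn = 285.9   # US private AI investment, 2025 (USD bn)
cn_investment_bn = 12.4    # China private AI investment, 2025 (USD bn)
incidents_2025 = 362       # documented AI incidents, 2025
incidents_2024 = 233       # documented AI incidents, 2024

investment_ratio = us_investment_bn / cn_investment_bn
incident_growth = (incidents_2025 - incidents_2024) / incidents_2024

print(f"US-to-China investment ratio: {investment_ratio:.1f}x")  # ~23.1x
print(f"AI incident growth 2024->2025: {incident_growth:.0%}")   # ~55%
```

Both figures check out against the report's claims of "23 times more than China" and "+55% year-on-year".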

EU AI Act as a differentiator: European companies that use the EU AI Act as a basis for transparency and documentation can actively communicate this trust advantage to customers and international partners. This is not a compliance exercise - it is a positioning opportunity.

Challenges and risks

The Stanford AI Index 2026 also documents what is going wrong. Documented AI incidents rose 55% to 362 cases. The most capable models are now the least transparent: training parameters, dataset sizes, and architecture details are increasingly withheld. This directly conflicts with the transparency requirements of the EU AI Act.

Risk | Evidence | Relevance for Europe
Opacity of large models | Leading models less documented than in 2023 | Conflict with EU AI Act transparency requirements
Rising AI incidents | 362 incidents in 2025, +55% year-on-year | Increased liability risk without adequate governance
Education gap | Only 6% of teachers report clear AI guidelines | Shortage of AI-competent graduates in the pipeline
Junior developers under pressure | -20% employment for US developers aged 22-25 since 2024 | Onboarding pathways need to be rethought

What to do now

The Stanford AI Index 2026 shows that adoption alone is not a measure of success. 88% organisational adoption sounds like a win. But as long as AI agents are barely deployed and workforce trust has not been established, the benefit stays at the surface. Organisations that invest now in structured adoption with transparent communication and early agent deployment are building a lead.

1

Start AI agent pilots

Broad adoption is at 88%. AI agents remain in single digits. The early-mover window is open right now.

2

Close the trust gap

Treat internal AI communication as a leadership task. Employees need facts, not polished promises.

3

Use the EU AI Act as an asset

Communicate compliance documentation externally. The EU trust lead is a measurable competitive advantage.

4

Build junior AI competency

Develop younger staff in AI skills before entry-level positions come under further pressure.

5

Use productivity data

14-72% productivity gains by area. The report provides the data foundation for internal business cases.

6

Document AI governance

AI incidents rose 55%. Build internal AI governance and incident documentation now, not later.

Conclusion

The Stanford AI Index 2026 is the strongest data foundation yet for AI strategy decisions. Technical progress, adoption data, and the trust gap all point in the same direction: the next competitive shift will not happen at model rollout, but at the point where organisations build trust and move into AI agents.

Frequently Asked Questions

What is the Stanford AI Index 2026?

The Stanford AI Index is the annual report of Stanford's Human-Centered AI Institute (HAI), published since 2018. It documents technical AI progress, investment flows, organisational adoption, labour market effects, regulation, and public sentiment worldwide. The 2026 edition, published in April 2026, is the first comprehensive documentation after the broad rollout of generative AI since 2023.

Why is organisational AI adoption at 88% while public trust is falling?

88% of organisations use AI, but only 23% of the US public views the labour market impact positively. This reflects a perception gap: decision-makers and specialists experience AI as a productivity tool, while workers outside tech see primarily threats, particularly given the documented 20% employment decline among developers aged 22-25. Internal AI communication is therefore a strategic leadership responsibility.

What does China closing the US AI performance gap mean for Europe?

China leads in AI publication volume, citations, and industrial robotics, while the US leads in private investment (USD 285.9 billion, 23 times more than China) and frontier models. For European organisations, this means that dependence on US models persists, but alternative sources and European AI capacity gain strategic importance. The EU AI Act gives European providers a measurable trust advantage.

What is the economic value of generative AI according to the Stanford report?

Estimated annual value to US consumers reached USD 172 billion by early 2026, with the median value per user tripling between 2025 and 2026. At the enterprise level, productivity gains of 14-26% were documented in customer support and software development, and up to 72% in marketing.

Why does the EU have a measurable trust advantage in AI regulation?

According to the Stanford AI Index 2026, more people globally trust the EU as an AI regulator than the US or China. The US scores just 31% trust in its own government's ability to regulate AI - the lowest of all surveyed countries. For European companies, this means the EU AI Act is not just a compliance burden, but a measurable trust anchor with customers, partners, and employees.

What should European businesses take from the Stanford AI Index 2026?

Three points are particularly relevant: First, AI agents remain in single-digit adoption across most business functions, creating a clear early-mover window. Second, the trust gap between experts and the public makes internal AI communication a leadership responsibility. Third, the EU AI Act provides European companies with a reputation advantage that should be actively communicated.