Stanford's 2026 AI Index Report is 423 pages long. Its most revealing finding fits in a single sentence: AI experts and the American public are describing two different technologies.
Only 10% of Americans say they're more excited than concerned about AI in daily life. Among AI experts surveyed for the same report, 56% believe AI will have a positive impact on the US over the next 20 years. That's not a gap in opinion. That's two groups answering different questions.
The numbers don't lie, but they don't flatter anyone either
The survey data published by Stanford HAI on April 13 is worth sitting with. On jobs: 73% of experts say AI will positively impact how people do their work. The public agrees at 23%. On healthcare: 84% of experts see AI improving medical care over the next two decades, versus 44% of the general public. On the economy: experts are optimistic at 69%, the public at 21%.
Nearly two-thirds of Americans, 64%, believe AI will lead to fewer jobs over the next 20 years. The experts who study this technology for a living are looking at the same evidence and reaching the opposite conclusion.
Neither group is irrational. They're just standing in different places.
Gen Z is the signal people keep misreading
The sharpest data point in the report isn't a percentage gap between experts and the public. It's what's happening inside Gen Z, the generation using AI more than anyone else.
A Gallup poll conducted in February and March 2026, surveying 1,572 people aged 14 to 29, found that Gen Z excitement about AI fell from 36% to 22% in a single year. The share feeling hopeful dropped from 27% to 18%. Those feeling angry rose from 22% to 31%. This is happening while roughly half of Gen Z uses AI daily or weekly.
They aren't rejecting AI out of ignorance. They're using the tools and arriving at a different conclusion than the people selling them. That's a credibility problem, not a communications problem.
The fracture goes all the way down
The expert-public divide isn't only visible in surveys. It shows up inside companies too. On Hacker News, one developer put it plainly after the report dropped:
"People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. Results rarely live up to the extremely rosy promises. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring, especially of junior engineers, has slowed to a trickle. You can only fool people for so long."
This isn't anecdotal noise. The Decoder's analysis of the report notes that employment among US software developers aged 22 to 25 has dropped nearly 20% since 2024. The productivity gains the industry touts, 14 to 26% in customer support and software development, aren't landing evenly. Junior workers are absorbing the displacement while senior leaders announce the benefits.
The trust deficit that should alarm everyone
The US ranks last among all surveyed nations in trusting its government to regulate AI responsibly: 31%. Singapore leads at 81%. A separate Gallup survey found that 80% of US adults want the government to maintain AI safety rules even if it slows development, and only 2% fully trust AI to make fair and unbiased decisions.
Meanwhile, documented AI incidents, defined as harms or near-harms from deployed systems, rose from 233 in 2024 to 362 in 2025. The Stanford report's own conclusion is that "responsible AI is not keeping pace with AI capability."
It's worth noting that the report itself was produced with assistance from ChatGPT and Claude, and its funders include Google and OpenAI. Stanford acknowledges this openly. Whether that's admirable transparency or an illustration of exactly why the public doesn't trust the conversation is a question the report doesn't answer.
Why it matters
The usual response from the industry is that the public needs better AI literacy. More education. Better messaging. If people just understood what the technology could do, they'd feel differently.
That framing misses the point. Gen Z has the literacy. Engineers outside ML teams have the access. The people growing more skeptical aren't confused about what AI does. They're responding to what it's doing to their hiring prospects, their job security, and an industry that keeps telling them to trust the outcome while controlling who benefits from it.
The experts aren't wrong about what AI can do in controlled conditions. The public isn't wrong about what AI is doing to them in practice. Stanford's conclusion, that responsible AI isn't keeping pace with capability, is the most important thing in the report, and the least likely to change anything.
One side has the benchmarks. The other side has the lived experience. Only one of them gets to decide what ships next.
If the generation using AI the most is also the one growing angriest about it, is the problem that they don't understand the technology, or that they understand it better than the people building it want to admit?
Originally published as an Instagram carousel on @recul.ai.