Google leaders including Demis Hassabis push back on claim of uneven AI adoption internally
A viral post on X from veteran programmer and former Google engineer Steve Yegge set off a rhetorical firestorm this week, drawing sharp public rebuttals from some of Google’s most prominent AI leaders and reopening a sensitive question for the company: how deeply are its own engineers really using the latest generation of AI coding tools?
The debate began after Yegge summarized what he said was the view of his friend, a current and longtime Google employee (or Googler), who claimed the Gemini maker's internal AI adoption looks far more ordinary and less cutting-edge than outsiders might expect.
Yegge said his Googler friend claimed Google engineering mirrors an "average" industry pattern of a 20%-60%-20% split: a small group of outright AI refusers (20%), a much larger middle still relying mainly on simpler chat and coding-assistant workflows (60%), and another small group of AI-first, cutting-edge engineers using agentic tools extensively and mastering them (20%).
A VentureBeat search of X using its parent company’s AI assistant Grok found that Yegge’s April 13 post spread quickly, topping 4,500 likes, 205 quote posts, 458 replies and 1.9 million views as of April 14.
We've reached out to Google for comment on the claims and will update when we receive a response.
A veteran, outspoken ex-Googler voice
Why did the opinion of Yegge's unnamed Googler friend land so hard? In part because Yegge is not just another commentator taking shots from the sidelines.
He spent about 13 years at Google after earlier stints at Amazon and GeoWorks, later joined Grab, and then became head of engineering at Sourcegraph in 2022. He has long been known in software circles for widely read essays on programming and engineering culture, and for an earlier internal Google memo that accidentally became public in 2011 and drew broad media attention.
That history helps explain why engineers and executives still take his critiques seriously, even when they reject them.
Yegge has built a reputation over many years as a blunt insider-outsider voice on software culture, someone with enough standing in the industry that his judgments can travel fast, especially when they touch nerves inside big technology companies.
Wikipedia’s summary of his career notes his long Google tenure and the outsized attention his blog posts and prior Google critiques have received.
Unpacking Yegge's friend's argument
In this case, Yegge’s argument was not simply that Google uses too little AI. It was that the company’s adoption may be uneven, culturally constrained and less transformed than its branding implies.
His friend reportedly argued that some Googlers could not use Anthropic's Claude Code because it was framed as "the enemy," and that Gemini was not yet sufficient for the fullest agentic coding workflows. The friend also contrasted Google with what he described as a smaller set of companies moving much faster.
Pushback from Hassabis and current Googlers
The first major pushback came from Demis Hassabis, the co-founder and CEO of Google DeepMind, who replied directly and forcefully. “Maybe tell your buddy to do some actual work and to stop spreading absolute nonsense. This post is completely false and just pure clickbait,” Hassabis wrote.
Other Google leaders followed with lengthier defenses.
Addy Osmani, a director at Google Cloud AI, wrote that Yegge’s account “doesn’t match the state of agentic coding at our company.” He added, “Over 40K SWEs use agentic coding weekly here.”
Osmani said Googlers have access to internal tools and systems including “custom models, skills, CLIs and MCPs,” and pushed back on the idea that Google employees are sealed off from outside models, writing that “folks can even use @AnthropicAI’s models on Vertex” and concluding that “Google is anything but average.”
Other current Google employees reinforced that message. Jaana Dogan, a software engineer at Google, wrote in a quote tweet: “Everyone I work with uses @antigravity like every second of the day,” later following up with another X post stating: "Unpopular opinion: If you think tokens burned is a productivity metric, no one should take you seriously. Imagine you are a top 0.0001% writer and they are only counting the tokens you produce."
Paige Bailey, a DevX engineering lead at Google DeepMind, said teams had agents “running 24/7.”
Several other Google and DeepMind figures also challenged Yegge’s characterization, some disputing the factual basis of his claims and others suggesting he lacked visibility into current internal usage.
Yegge's rebuttal
Yegge, for his part, did not retreat. In a follow-up to Hassabis, he wrote, “I’m not trying to misrepresent anyone,” but argued that by his own standard for advanced AI adoption, Google still does not appear to be doing especially well.
He pointed to token usage and the replacement of older development habits with truly agentic workflows as the more meaningful benchmark, and said he would be willing to retract his criticism if Google could show its engineers were operating at that level.
AI adoption vs. AI transformation
That leaves the core dispute unresolved, but clearer. This is less a fight over whether Google engineers use AI at all than a fight over what should count as meaningful adoption.
Googlers are pointing to scale, weekly usage and the availability of internal and external tools. Yegge is arguing that those measures may capture broad exposure without proving a deeper change, an AI transformation, in how engineering work gets done. The clash reflects a wider industry split between visible usage metrics and more transformative, power-user behavior.
For Google, the subject is especially sensitive. Yegge has criticized the company before, including in a 2018 essay explaining why he left, where he argued Google had become too risk-averse and had lost much of its ability to innovate.
If his latest critique had come from a lesser-known poster, it might have faded. Coming from a former longtime Google engineer with a record of memorable public criticism, it instead drew direct responses from some of the company’s top AI figures — and turned a single post into a broader public argument about whether Google’s AI leadership is as deep internally as it looks from the outside.
