The internet fractured reality. AI might put it back together.

[Photo: A smartphone displaying AI apps including ChatGPT, Claude, Gemini, Perplexity, Microsoft Copilot, Meta AI, Grok, and DeepSeek. Philip Dulian/picture alliance via Getty Images]

For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.

In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.

Journalistic programs weren’t just limited in number, but also in ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.

This media environment cultivated broad public agreement over basic facts and widespread trust in mainstream institutions. It also helped the government wage a barbaric war in the name of lies.

Key takeaways

  • There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
  • LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
  • Unlike social media companies, AI labs have an economic incentive to spread accurate information.
  • Still, there are reasons to fear that AI will nonetheless make public discourse worse.

For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities. 

But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.

The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.

Yet it has also helped pro-Hitler podcasters reach an audience of millions, enabled influencers with body dysmorphia to sell teenagers on self-mutilation, elevated crackpots to the commanding heights of American public health — and, more generally, eroded the intellectual standards, shared understandings, social trust, and (small-l) liberalism on which rational self-government depends. 

Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.

But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.

Are you there, Grok? It’s me, the demos

At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.

Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.

In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting. 

The bot replied: [Screenshot of Grok’s response]

In reaching this assessment, Grok was affirming the consensus among mainstream journalistic institutions — and among other chatbots as well.

For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.

Of course, this would be a lot to read into a single Grok reply; if you had glanced at that bot’s outputs last July, when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler,” you might have concluded that AI is a “Nazifying” technology.

But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.

One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.

The researchers also compared the bots’ answers against those of professional fact-checkers, and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.

What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.

Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.

Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.

AI might combat misinformation in practice. But does it in theory? 

A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative. 

But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:

1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes). 

But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”

For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).

2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.

After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.

That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: to recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.

But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.

The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.

Grok, is this true?

It seems clear, then, that LLMs possess some “converging” and “technocratizing” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.

Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:

1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.

Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has persisted even as AI companies have tried to combat it.

The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model around consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to make its model “sycophancy-max,” pursuing the same engagement-optimization tactics as YouTube or Facebook.

A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices — might ensue.

2) Artificial intelligence has radically reduced the cost of generating propaganda. AI has already flooded social media with unlabeled “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever-more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.

In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.

3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.

4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully-automated demagogy and top-down manipulation.

5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.

Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source for AI advice, are increasingly flooded with product plugs meant to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.

LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished. 

For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.

Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.
