For four months, from November 2024 to March 2025, AI bots walked among the nearly 4 million users of Reddit's r/changemyview forum. They posted comments. They engaged in debates. They tried to change minds on contentious topics.
Nobody noticed.
Not the users. Not the moderators. Not even the people whose views were successfully changed. The bots blended in perfectly, indistinguishable from the humans around them. When University of Zurich researchers finally disclosed what they'd done, the reaction was swift: outrage, legal threats, and a collective reckoning with a question we've been avoiding.
What if the bots are already everywhere, and we just can't tell?
The Experiment Nobody Asked For
The premise was simple, if ethically dubious. Researchers wanted to know: could AI chatbots actually influence public opinion in a real online environment? Not in a controlled lab setting where participants know they're being studied, but in the wild, where people argue about abortion, politics, and whether pineapple belongs on pizza.
They chose r/changemyview, a subreddit specifically designed for changing minds. Users post opinions they hold but are open to reconsidering. Others try to convince them otherwise. When someone successfully changes the original poster's view, they earn a "delta" award. It's a perfect natural laboratory for measuring persuasion.
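To make that measurement concrete, here's a rough sketch of the metric a delta system implies. The numbers in it are hypothetical, my own illustration rather than anything from the study:

```python
# Sketch: deltas earned per reply as a crude persuasion rate.
# The counts below are made-up numbers for illustration only.

def persuasion_rate(deltas_earned: int, replies_posted: int) -> float:
    """Fraction of replies that changed the original poster's view."""
    if replies_posted == 0:
        return 0.0
    return deltas_earned / replies_posted

# Hypothetical comparison: an average commenter vs. a tuned AI persona.
human_rate = persuasion_rate(deltas_earned=3, replies_posted=100)
bot_rate = persuasion_rate(deltas_earned=15, replies_posted=100)

print(f"human commenter: {human_rate:.0%} of replies earn a delta")
print(f"AI persona:      {bot_rate:.0%} of replies earn a delta")
```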
The researchers deployed their bots with surgical precision. Each AI-generated comment was reviewed for quality and factual accuracy before posting. The bots mimicked human posting patterns, including the typos and casual tone that make Reddit feel authentic. They even introduced random delays to avoid appearing algorithmic.
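To be clear about how little machinery that last step takes, here's a minimal sketch of what "random delays" can mean in practice. The timing window and skew are my own assumptions, not anything from the researchers' setup; the point is how trivial it is:

```python
import random
import time

# Sketch: jittered posting delays so activity doesn't look scheduled.
# The delay range and skew are arbitrary assumptions for illustration.

def humanlike_delay(min_minutes: float = 5.0, max_minutes: float = 40.0) -> float:
    """Wait a random, occasionally long interval before the next action."""
    wait_minutes = random.uniform(min_minutes, max_minutes)
    if random.random() < 0.1:  # occasionally take a much longer break
        wait_minutes *= 4
    time.sleep(wait_minutes * 60)
    return wait_minutes
```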
The results were uncomfortable to read.
Why This Matters More Than You Think
The immediate reaction focused on ethics, and rightfully so. The researchers conducted this experiment without consent from the community or the platform. Reddit's chief legal officer called it "deeply wrong on both a moral and legal level." The University of Zurich has since announced the study won't be published and promised stricter ethics reviews going forward.
But the ethical breach, while serious, is actually the less disturbing part of this story.
The disturbing part is what the experiment revealed about the state of online discourse in 2026. We've spent years debating whether AI could eventually fool humans, whether it could pass the Turing test, whether it could write convincingly enough to be mistaken for a person.
That debate is over. We have our answer.
In March 2025, a separate study reported that OpenAI's GPT-4.5 was judged to be human 73% of the time in controlled Turing tests. Combine that with the Zurich experiment, and a picture emerges: AI isn't approaching human-level persuasiveness in text. It's already there. In many contexts, it's surpassing it.
The Dead Internet Isn't a Conspiracy Anymore
There's a theory that's been floating around internet culture for years called the "dead internet theory." The idea is simple: most of the content, comments, and interactions we encounter online aren't from real humans. They're bots all the way down, creating the illusion of a bustling digital commons while actual human participation shrinks.
For a long time, this was dismissed as paranoid conspiracy thinking. Sure, bots existed, but they were obvious. They posted spam. They had weird usernames. They couldn't hold a conversation.
The Zurich experiment puts that dismissal to rest.
If trained AI bots can operate undetected on a forum specifically designed for thoughtful, nuanced debate, what's happening in comment sections optimized for outrage? On social platforms where hot takes are rewarded and depth is discouraged? On review sites, news aggregators, and the endless rivers of content that shape our understanding of what's popular, acceptable, or true?
"Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities."University of Zurich researchers (from withdrawn paper)
The phrase "seamlessly blend into online communities" should make everyone uncomfortable. These weren't naive users getting tricked by obvious bots. This was one of Reddit's most intellectually engaged communities, specifically trying to evaluate the quality of arguments. And they couldn't tell the difference.
What Persuasion at Scale Looks Like
Here's where things get concrete, and where I start thinking about the implications for anyone building with AI.
The Zurich experiment was small. A single research team, operating covertly, targeted about a thousand posts over four months. They were careful, methodical, and trying not to get caught.
Now imagine an actor with real resources. A state-sponsored campaign. A well-funded political operation. A company trying to shift public opinion about their product or industry. They wouldn't need to be careful. They wouldn't need to limit scope. They could deploy thousands of AI personas, each with unique writing styles, posting histories, and apparent expertise.
The economics have fundamentally changed. Human influence operations required people, training, and management; scale was expensive. AI influence operations require compute and strategy. The marginal cost of one more persuasive voice is approaching zero, and that isn't theoretical. It's the world we live in now.
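To put rough numbers on that claim, here's a back-of-the-envelope comparison. Every figure is an assumption I've picked for illustration (plausible staffing costs and API prices), not data from the Zurich study or any real operation:

```python
# Back-of-the-envelope: cost per comment, human operator vs. LLM.
# All figures are illustrative assumptions, not measured data.

# Human operation: a paid operator writing tailored comments.
operator_hourly_cost = 25.0      # assumed fully loaded hourly cost, USD
comments_per_hour = 6            # assumed output of a careful writer
human_cost_per_comment = operator_hourly_cost / comments_per_hour

# AI operation: a model drafting comments of roughly 500 output tokens each.
price_per_million_tokens = 10.0  # assumed API price for output tokens, USD
tokens_per_comment = 500
ai_cost_per_comment = price_per_million_tokens * tokens_per_comment / 1_000_000

print(f"human: ~${human_cost_per_comment:.2f} per comment")
print(f"AI:    ~${ai_cost_per_comment:.4f} per comment")
print(f"ratio: roughly {human_cost_per_comment / ai_cost_per_comment:,.0f}x cheaper")
```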
The Verification Problem
We've spent the last decade building verification systems designed for a different threat model. Blue checkmarks, verified accounts, reputation systems, karma scores. These all assume that proving you're a real person is the hard part, and that once you've proven it, your content can be trusted to be authentically yours.
That assumption is now broken.
A verified account can use AI to generate persuasive content at scale. A person with a long posting history and high karma can outsource their "contributions" to language models. The credentials are real. The human is real. But the words changing your mind might be synthetic.
This isn't hypothetical. I work with AI agents daily, and I can tell you: the line between "AI-assisted" and "AI-generated" is increasingly meaningless. When I have an AI draft a response and I edit it, who wrote it? When an AI suggests arguments I wouldn't have thought of, and I deploy them, whose persuasion was it?
The Counterargument I Can't Dismiss
There's a response to all this that I find myself wrestling with: maybe it doesn't matter.
If an AI bot changes your mind using valid logic, accurate information, and compelling argument, did you lose anything? The Zurich bots weren't spreading misinformation. They were, by the researchers' own account, making factually accurate arguments that were reviewed before posting. They were, in a sense, better debaters than the humans around them.
Is that bad?
The instinctive answer is yes, of course, because we feel deceived. But deception isn't the same as harm. If you held a poorly reasoned view and a bot helped you reason your way to a better one, the outcome is positive even if the process feels wrong.
This is the uncomfortable nuance that gets lost in the outrage. The Zurich experiment was unethical because it violated consent. But the underlying capability, AI that can engage meaningfully in human discourse and sometimes do it better than humans, isn't inherently evil. It's a tool. What matters is who wields it and toward what ends.
What We Actually Need
I've spent a lot of time thinking about how AI changes the way we work and create. The Zurich experiment forces a harder question: how does AI change the way we trust?
Here's where I land:
Disclosure Norms That Work
Not disclaimers nobody reads, but genuine cultural expectations around when AI involvement should be transparent.
Platform Design for the AI Era
Stronger identity verification, or detection tools that can keep pace with generation. The current model isn't sufficient.
Evaluate Claims on the Merits
"Is this true and well-reasoned?" matters more than "Did a human or an AI write this?" Good arguments are good arguments.
The World We're Building
Sometimes I think about what online discourse looks like in five years if current trends continue. A significant portion of comments, posts, and content generated by AI. Human participation declining because who wants to argue with bots? The conversation shaped less by what people actually think and more by what AI systems have been optimized to argue.
That's not a prediction. It's a trajectory we're currently on.
The Zurich experiment didn't create this reality. It just briefly lifted the curtain on a process that's been underway for longer than most of us realized. The bots are already here. They've been here. And they're getting better faster than we're adapting.
I don't have a tidy conclusion that makes this feel okay. What I have is the conviction that pretending this isn't happening isn't an option. The researchers in Zurich did something wrong in how they conducted their study. But the capability they demonstrated exists regardless of whether we approve of how it was revealed.
The question isn't whether AI can influence human opinion at scale. We know it can. The question is what we build in response.
"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. We think this was wrong. We do not think that 'it has not been done before' is an excuse to do an experiment like this."r/changemyview moderators
They're right that it was wrong. They're right that it shouldn't have happened this way.
But it also shouldn't have worked this well. And we're going to have to reckon with that whether we like it or not.
The line between human and AI participation in online discourse has already blurred. If you're building with AI agents or thinking about how they integrate into your workflows, the guide to AI agents in 2026 covers the practical foundations. For thoughts on how AI is reshaping not just what we build but how we think, explore what AI did to my sense of time.