A woman wore a billboard on her head. The Os in "TOOL" were cut out as eye holes, so she could see through the word. The sign read: "WHO WILL BE WHOSE TOOL?"
That image, strange and defiant, captures the mood of what happened on a Saturday afternoon in late February when hundreds of people walked through London's King's Cross neighborhood chanting "Pull the plug" and "Stop the slop." They stopped at the UK offices of OpenAI, Meta, and Google DeepMind. They carried signs that said "EXTINCTION=BAD" and "Demis the Menace" and the straightforward "Stop using AI."
It was, by every credible count, the largest anti-AI protest the world has seen. And it matters more than the tech industry wants to admit.
From Three Hecklers to Five Hundred Marchers
The growth curve tells a story on its own. In May 2023, when Sam Altman visited London, two or three protesters stood outside the lecture hall, heckling as an audience of hundreds filed in. In June 2025, Pause AI managed a few dozen people outside Google DeepMind's offices. By February 2026, the number had jumped to 500.
"We've been growing very rapidly," Joseph Miller, director of Pause AI's UK branch and co-organizer of the march, told MIT Technology Review. "We appear to be on a somewhat exponential path, matching the progress of AI itself."
Miller is a PhD student at Oxford studying mechanistic interpretability, a field that tries to understand what actually happens inside large language models when they process information. His research has led him somewhere uncomfortable: the technology may never be fully controllable.
The march was organized by a coalition of groups. Pause AI, which operates in more than 14 countries, calls for an international moratorium on frontier AI development. Pull the Plug, a newer UK campaign, wants binding citizens' assemblies where ordinary people, informed by experts, decide how AI gets used in British society. Mad Youth Organise focuses on AI's role in the mental health crisis among young people. Blaksox raises concerns about racial bias in automated systems.
These are not fringe voices. A survey by the Ada Lovelace Institute found that 84% of British people fear the government will prioritize its partnerships with large technology companies over the public interest when regulating AI. The marchers numbered in the hundreds. The sentiment they represent runs into the millions.
What the Protesters Actually Want
The demands from the march were more specific than "stop AI," though some signs said exactly that. They fell into three broad categories.
Democratic control through citizens' assemblies. Pull the Plug's central proposal is borrowed from climate activism: convene randomly selected citizens, brief them with expert testimony, and let their recommendations become binding government policy. It is the same model Ireland used to break its deadlock on abortion law and marriage equality. The argument is that AI policy is too consequential to be left to the companies building it or the politicians those companies lobby.
A pause on frontier development. Pause AI wants governments to coordinate an international halt on training models more powerful than current frontier systems until safety research catches up. "Companies and countries are racing to create superhuman AI that will be able to dominate and destroy humanity," Miller said. "No company can stop the race by themselves. We need governments to coordinate."
Whistleblower protections and talent pipeline pressure. Maxime Fournes, the global head of Pause AI who spent 12 years working in the AI industry before joining the organization, told MIT Technology Review he wants to make it socially and professionally costly to build dangerous systems. "We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job," he said. The goal is to dry up recruitment, which explains why the march targeted company headquarters rather than government buildings.
That last point connects directly to a trend we have covered here at FutureHumanism. AI safety researchers have been quitting major labs at an accelerating rate, citing concerns that commercial pressure overrides safety work. The protesters and the departing researchers are, in many ways, different expressions of the same frustration.
The Timing Was Not Accidental
The march happened during the same week that the Pentagon attempted to force Anthropic to let the US military use Claude for "all lawful purposes." Anthropic refused and drew a line, but OpenAI signed a deal with the Department of Defense instead. For protesters worried about AI being used in weapons systems, the news read like real-time confirmation of their worst fears.
Parallel protests over data centers add a material dimension to what might otherwise look like an abstract philosophical debate. AI training runs consume enormous amounts of electricity and water. When a company builds a hyperscale data center in your neighborhood, the costs are immediate and visible: noise, water diversion, strain on the electrical grid, construction disruption. The benefits, meanwhile, flow somewhere else entirely.
Global Action Plan, the environmental charity coordinating the data center protests, frames this as a climate justice issue. The communities bearing the physical burden of AI infrastructure are rarely the ones profiting from it. That argument resonates with people who might not care about superintelligence risk but care deeply about their water bill.
Why the Tech Industry Should Pay Attention
To their credit, most people at the march, including the organizers, admitted the protest was unlikely to change company behavior directly. "I don't think that the pressure on companies will ever work," Fournes said. "They are optimized to just not care about this problem."
So why march at all? Because they're not really trying to convince the companies. They're trying to shift what politicians think they can get away with ignoring.
Every major technology regulation in history started with public pressure that companies initially dismissed. The GDPR did not appear out of nowhere. The California Consumer Privacy Act did not write itself. Years of privacy advocacy, data breach scandals, and organized public anger created the political conditions for legislators to act. The anti-AI movement is trying to create those conditions now, and the 84% figure from the Ada Lovelace Institute suggests they have fertile ground to work with.
The tech industry's usual response to this kind of thing is to point at the benefits. AI helps doctors, speeds up research, makes knowledge work cheaper. Sure. Nobody at the march was arguing AI does nothing useful. What they were saying is that letting private companies race to build increasingly powerful systems with basically no oversight is reckless, regardless of the upsides.
Try dismissing that as "technology is bad." You can't, really. And that's why it's gaining traction.
What Happens Next
The movement has momentum. Pause AI's chapter count continues to grow. Pull the Plug announced an open call for new members the week after the march. The data center resistance is expanding, and local planning battles over AI infrastructure will give the movement concrete, winnable fights that build organizational capacity.
The question is whether governments move fast enough to channel this energy into policy before it curdles into something less constructive. Right now, most AI regulation globally is voluntary, industry-drafted, and toothless. The EU AI Act is the notable exception, but even it focuses on risk classification rather than the fundamental question the protesters are asking: who gets to decide how powerful AI becomes, and what it is used for?
If you work in AI, watching clips of Saturday's march probably feels uncomfortable. A sign that says "Stop using AI" doesn't exactly invite nuanced conversation. But if you look past the bluntest slogans, the underlying demand is pretty straightforward: accountability, transparency, and a seat at the table for the people whose lives these systems affect.
That's not radical. In most democracies, that's just how things are supposed to work.
The march ended in a church hall in Bloomsbury. The protesters wrote their names on stickers, stuck them to their chests, and sat down in rows to figure out how to save the world. Will their methods work? Hard to say. But 500 people showing up to try tells you something about where this is all heading.
The technology is moving fast. The backlash is starting to keep up.
Related reading: AI Safety Researchers Are Quitting, and We Should Pay Attention | The Line AI Companies Drew with the Pentagon | AI Art, Ethics, and the Ownership Question