
500 People Marched Against AI in London. Here Is What They Actually Want.

London's March Against the Machines drew hundreds to King's Cross, targeting OpenAI, Meta, and DeepMind. What the anti-AI protest movement really demands.
March 5, 2026 · 9 min read
TL;DR: On February 28, up to 500 people marched through London's King's Cross, stopping at the offices of OpenAI, Meta, and Google DeepMind. It was the largest anti-AI protest on record. Their demands ranged from binding citizens' assemblies on AI policy to a full pause on frontier model development. The movement is growing fast, and the technology industry would be unwise to ignore it.

A woman wore a billboard on her head. The Os in "TOOL" were cut out as eye holes, so she could see through the word. The sign read: "WHO WILL BE WHOSE TOOL?"

That image, strange and defiant, captures the mood of what happened on a Saturday afternoon in late February when hundreds of people walked through London's King's Cross neighborhood chanting "Pull the plug" and "Stop the slop." They stopped at the UK offices of OpenAI, Meta, and Google DeepMind. They carried signs that said "EXTINCTION=BAD" and "Demis the Menace" and the straightforward "Stop using AI."

It was, by every credible count, the largest anti-AI protest the world has seen. And it matters more than the tech industry wants to admit.

~500 protesters at the march
84% of the UK public fear the government prioritizes Big Tech over people
14+ countries with Pause AI chapters

From Three Hecklers to Five Hundred Marchers

The growth curve tells a story on its own. In May 2023, when Sam Altman visited London, two or three protesters stood outside the lecture hall, heckling an audience of hundreds. In June 2025, Pause AI managed a few dozen people outside Google DeepMind's offices. By February 2026, the number had jumped to 500.

"We've been growing very rapidly," Joseph Miller, director of Pause AI's UK branch and co-organizer of the march, told MIT Technology Review. "We appear to be on a somewhat exponential path, matching the progress of AI itself."

Miller is a PhD student at Oxford studying mechanistic interpretability, a field that tries to understand what actually happens inside large language models when they process information. His research has led him somewhere uncomfortable: the technology may never be fully controllable.

The march was organized by a coalition of groups. Pause AI, which operates in more than 14 countries, calls for an international moratorium on frontier AI development. Pull the Plug, a newer UK campaign, wants binding citizens' assemblies where ordinary people, informed by experts, decide how AI gets used in British society. Mad Youth Organise focuses on AI's role in the mental health crisis among young people. Blaksox raises concerns about racial bias in automated systems.

These are not fringe voices. A survey by the Ada Lovelace Institute found that 84% of British people fear the government will prioritize its partnerships with large technology companies over public interest when regulating AI. The marchers may have been hundreds. The sentiment they represent is millions.

What the Protesters Actually Want

The demands from the march were more specific than "stop AI," though some signs said exactly that. They fell into three broad categories.

The core demand isn't anti-technology. It's anti-unaccountable technology. Most protesters told reporters they want democratic oversight, actual safety regulation, and transparency - not a blanket ban on all AI research.

Democratic control through citizens' assemblies. Pull the Plug's central proposal is borrowed from climate activism: convene randomly selected citizens, brief them with expert testimony, and let their recommendations become binding government policy. It is the same model Ireland used to break its deadlock on abortion law and marriage equality. The argument is that AI policy is too consequential to be left to the companies building it or the politicians those companies lobby.

A pause on frontier development. Pause AI wants governments to coordinate an international halt on training models more powerful than current frontier systems until safety research catches up. "Companies and countries are racing to create superhuman AI that will be able to dominate and destroy humanity," Miller said. "No company can stop the race by themselves. We need governments to coordinate."

Whistleblower protections and talent pipeline pressure. Maxime Fournes, the global head of Pause AI who spent 12 years working in the AI industry before joining the organization, told MIT Technology Review he wants to make it socially and professionally costly to build dangerous systems. "We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job," he said. The goal is to dry up recruitment, which explains why the march targeted company headquarters rather than government buildings.

That last point connects directly to a trend we have covered here at FutureHumanism. AI safety researchers have been quitting major labs at an accelerating rate, citing concerns that commercial pressure overrides safety work. The protesters and the departing researchers are, in many ways, different expressions of the same frustration.

The Timing Was Not Accidental

The march happened during the same week that the Pentagon attempted to force Anthropic to let the US military use Claude for "all lawful purposes." Anthropic refused and drew a line, but OpenAI signed a deal with the Department of Defense instead. For protesters worried about AI being used in weapons systems, the news was confirmation of their worst fears in real time.

Context matters: The same week as the London march, coordinated protests took place at data centers across the UK and Germany. In Berlin, the FAIrness Now campaign demonstrated outside the Federal Ministry for Economic Affairs. In East London, residents opposed a proposed hyperscale data center in Havering. This isn't one protest. It's a movement opening up on multiple fronts.

The data center angle adds a material dimension to what might otherwise look like an abstract philosophical debate. AI training runs consume enormous amounts of electricity and water. When a company builds a hyperscale data center in your neighborhood, the costs are immediate and visible: noise, water diversion, strain on the electrical grid, construction disruption. The benefits, meanwhile, flow somewhere else entirely.

Global Action Plan, the environmental charity coordinating the data center protests, frames this as a climate justice issue. The communities bearing the physical burden of AI infrastructure are rarely the ones profiting from it. That argument resonates with people who might not care about superintelligence risk but care deeply about their water bill.

Why the Tech Industry Should Pay Attention

Here is the honest part: most people at the march, including the organizers, admitted the protest was unlikely to change company behavior directly. "I don't think that the pressure on companies will ever work," Fournes said. "They are optimized to just not care about this problem."

So why march at all? Because they're not really trying to convince the companies. They're trying to shift what politicians think they can get away with ignoring.

The strategic play: Anti-AI activism is borrowing from climate activism and anti-tobacco campaigns - movements that spent years losing before the wins started stacking up. Nobody thinks they'll shut down OpenAI next month. The play is to make AI regulation politically inevitable, one protest and one headline at a time.

Every major technology regulation in history started with public pressure that companies initially dismissed. The GDPR did not appear out of nowhere. The California Consumer Privacy Act did not write itself. Years of privacy advocacy, data breach scandals, and organized public anger created the political conditions for legislators to act. The anti-AI movement is trying to create those conditions now, and the 84% figure from the Ada Lovelace Institute suggests they have fertile ground to work with.

The tech industry's usual response to this kind of thing is to point at the benefits. AI helps doctors, speeds up research, makes knowledge work cheaper. Sure. Nobody at the march was arguing AI does nothing useful. What they were saying is that letting private companies race to build increasingly powerful systems with basically no oversight is reckless, regardless of the upsides.

Try dismissing that as "technology is bad." You can't, really. And that's why it's gaining traction.

What Happens Next

The movement has momentum. Pause AI's chapter count continues to grow. Pull the Plug announced an open call for new members the week after the march. The data center resistance is expanding, and local planning battles over AI infrastructure will give the movement concrete, winnable fights that build organizational capacity.

The question is whether governments move fast enough to channel this energy into policy before it curdles into something less constructive. Right now, most AI regulation globally is voluntary, industry-drafted, and toothless. The EU AI Act is the notable exception, but even it focuses on risk classification rather than the fundamental question the protesters are asking: who gets to decide how powerful AI becomes, and what it is used for?

If you work in AI, watching clips of Saturday's march probably feels uncomfortable. A sign that says "Stop using AI" doesn't exactly invite nuanced conversation. But if you look past the bluntest slogans, the underlying demand is pretty straightforward: accountability, transparency, and a seat at the table for the people whose lives these systems affect.

That's not radical. In most democracies, that's just how things are supposed to work.

500 today, 5,000 tomorrow? Pause AI says its growth is exponential. Whether that holds up or not, the direction is obvious. This isn't a handful of people with signs anymore. It's an organized, international coalition with specific demands and the ability to get people in the street. The tech industry can engage with it now on reasonable terms, or deal with it later when the terms are a lot worse.

The march ended in a church hall in Bloomsbury. The protesters wrote their names on stickers, stuck them to their chests, and sat down in rows to figure out how to save the world. Will their methods work? Hard to say. But 500 people showing up to try tells you something about where this is all heading.

The technology is moving fast. The backlash is starting to keep up.

Related reading: AI Safety Researchers Are Quitting, and We Should Pay Attention | The Line AI Companies Drew with the Pentagon | AI Art, Ethics, and the Ownership Question
