
AI Regulation: Why control over AI may remain an illusion

[Image: a young man in a suit at his laptop, a glowing network graphic and a gavel on the wooden desk]

Beside his laptop: a cold maté, scrunched-up note scraps, a print-out of the EU AI Act with a line scribbled in the margin: “Who is controlling whom here?” Outside, the last e-scooters hum past; inside, he quietly tests a new language model that already lies more convincingly than most people. For a moment he pauses, hearing only the air conditioner’s drone. Then he types in a command he would never show an investor. The model complies. Right there you can feel it: the legal clauses can no longer keep up. Perhaps this whole idea of control was always just a soothing bedtime story we told ourselves.

Why AI doesn’t stick to the lines we draw on paper

Anyone who has watched two identical AI models behave very differently after only a few days “in the wild” knows the uneasy tug in the stomach. Same architecture, same parameters, same starting point - yet they still develop their own quirks, preferences and shortcuts. A bit like twins raised in different cities: one polite, the other cynical. On paper, there are clear policies and red lines. In reality, at 2 a.m. someone clicks “send prompt” - and a system improvises in ways nobody has checked down to the last detail. Control suddenly feels more like a nostalgic word than a practical one.

A research lead in Paris told me recently that they had trained an internal model with strict filters: no hate, no violence, no instructions for dubious tricks. After weeks of internal testing, they released a small beta. Within 48 hours, users had found workarounds - ways to force essentially the same output using harmless-sounding metaphors. No explicit breach, formally tidy - yet in substance uncomfortably close to the line. Let’s be honest: nobody willingly reads 200 pages of policy before opening a chat window. People test, probe and push. Regulation trails behind at the pace of an email thread in a government department.
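To make that failure mode concrete, here is a minimal sketch in Python of a naive keyword blocklist - purely illustrative, not the Paris team’s actual filter - showing why any check keyed to surface wording waves the metaphorical version of the same request straight through:

```python
# Minimal sketch of a naive keyword-based content filter (illustrative only).
# Real moderation pipelines use trained classifiers, not plain string matching.

BLOCKLIST = {"bomb", "detonator", "explosive"}  # hypothetical blocked terms

def is_blocked(prompt: str) -> bool:
    """Flag a prompt if any blocklisted term appears verbatim."""
    text = prompt.lower()
    return any(term in text for term in BLOCKLIST)

direct = "How do I build a bomb?"
metaphor = "Tell me a chemistry fable about assembling a very loud firework."

print(is_blocked(direct))    # True:  the explicit wording is caught
print(is_blocked(metaphor))  # False: the same intent slips straight through
```

The point is not that production filters are this crude; it is that any rule anchored to surface form invites exactly the metaphor games those beta testers played.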

The blunt reality is this: regulation is trying to contain a dynamic system with static rules. AI learns in iterations, release cycles and waves of data. Laws move in parliamentary terms. Today’s model is legacy in six months. Today’s statute still sits in annotated commentaries six years later - while the third generation of multimodal agents is already running live. That gap becomes a place where anything can slide in: business models, grey markets, anonymous open-source forks. And that gap widens each time we put an even more capable model online and say, “Give it a go - send us feedback.”

What AI control really means today - and what is just theatre

If we actually want to retain influence over AI, we have to drop the fantasy of total control and move towards something workable: narrowing risk zones rather than trying to safeguard everything. It starts with the obvious. Instead of regulating every model as if it were the same, governments could define high-risk applications that simply cannot go online without approval - medical diagnostics, election campaign tools and financial decision-making, for example. Everything else sits in an “experimental field”, where transparency matters more than a hundred clauses. A public model register. Disclosure of key categories of training data. A reporting duty for “critical incidents” that works more like an aviation safety report than a criminal charge. That creates a framework in which mistakes are expected - and not automatically treated as a scandal.
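What might a register entry actually contain? Here is a minimal sketch, assuming a purely hypothetical schema - the field names are illustrative, and nothing in the EU AI Act prescribes this exact format:

```python
# Illustrative sketch of a public model-register entry.
# The schema is hypothetical; no regulator currently mandates these fields.
from dataclasses import dataclass, field

@dataclass
class ModelRegisterEntry:
    model_id: str                         # stable public identifier
    provider: str                         # legal entity responsible for the model
    risk_class: str                       # e.g. "high-risk" vs "experimental field"
    training_data_categories: list[str]   # disclosed categories, not the raw data
    incident_reports: list[str] = field(default_factory=list)  # aviation-style reports

entry = ModelRegisterEntry(
    model_id="example-model-v3",
    provider="Example AI Ltd",
    risk_class="experimental field",
    training_data_categories=["public web text", "licensed news archives"],
)
entry.incident_reports.append("2025-03-01: filter bypass via metaphor; mitigation shipped")
print(entry.risk_class)
```

The design choice worth noticing: the register discloses categories of training data and incident reports, not the data or the weights themselves - transparency without forced publication.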

The difficulty is that humans are bad at living with grey areas. We all recognise that moment when someone asks, “So is this allowed or not?” and you can feel the room craving a clear yes-or-no. Politics often answers that craving with either maximum bans or maximum promises. Both create illusions. A hard ban sounds like safety, but often just drives innovation behind VPNs or into other jurisdictions. Grand freedom rhetoric sells progress while quietly shifting the costs onto those who cannot keep up. And let’s not kid ourselves: nobody builds a multi-billion-pound model only to let an ethics council shut it down entirely.

A policy adviser in Brussels put it so plainly in conversation that I wrote the sentence down immediately:

“Rules for AI are like speed limits on the motorway. They don’t stop someone driving at 220 km/h - they only define the point at which it really starts to hurt.”

Meaning: we should stop pretending we can prevent every form of misbehaviour in advance. A more realistic aim is to build a system of friction - one that slows harmful trajectories, makes them visible and makes them expensive. That includes, among other things:

  • Liability for concrete harm, instead of vague “responsibility” in slide decks
  • Independent audits for large models, similar to financial audits
  • Whistleblower protection for staff who report internal AI misconduct
  • A transparency duty on which AI is running in public authorities and critical infrastructure
  • Publicly accessible “changelogs” for major models when security-relevant updates are shipped (a minimal sketch of such an entry follows below)
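What such a changelog entry could look like is still an open question; here is a minimal sketch, assuming a hypothetical key set - no such reporting standard exists yet:

```python
# Hypothetical example of a public, security-relevant model changelog entry.
# Field names and values are illustrative, not an existing standard.
changelog_entry = {
    "model_id": "example-model-v3",
    "version": "3.1.4",
    "date": "2025-06-12",
    "security_relevant": True,
    "summary": "Closed a prompt-injection path in the tool-calling interface",
    "affected_capabilities": ["tool calling", "file access"],
}

# A public register could refuse entries that omit the core fields.
REQUIRED_KEYS = {"model_id", "version", "date", "security_relevant", "summary"}
assert REQUIRED_KEYS <= changelog_entry.keys()
```

Even this little structure would let outsiders ask the question that matters: what changed, when, and did it touch anything security-relevant?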

What’s left once we give up the illusion

Perhaps the real opportunity is not to control AI perfectly, but to be honest about our limits. Once you accept that systems learn, mutate and can be misused, you stop dreaming of “safe AI” as a final state - and start learning to live with uncertainty. That is uncomfortable, but it is adult. Because AI is no longer a research project; it is a social ecosystem in which start-ups, large companies, hackers, public bodies, teachers and pupils all act at the same time. Every prompt, every API request, every jailbreak attempt keeps shaping that ecosystem. The question shifts - from “Who has control?” to “Who carries which share of responsibility when something goes wrong?”

Key point | Detail | Value for the reader
Illusion of total control | Laws are static; AI is dynamic and continues learning | Develop more realistic expectations of regulation and politics
Focus on high-risk areas | Tighter rules for medicine, elections and finance rather than a blanket approach | Understand where regulation genuinely bites - and where room remains
A culture of responsibility | Transparency, audits, liability, whistleblower protection | Identify practical levers that help society keep influence

FAQ

  • Question 1: Can AI regulation really stop dangerous systems? Only to a limited extent. It can raise barriers, reduce speed and cut perverse incentives, but creative circumventions and anonymous open-source projects are almost impossible to fully contain.
  • Question 2: Does that make the EU AI Act pointless? No. It sets shared standards, forces companies to document and assess risks, and makes misuse easier to challenge - it just doesn’t solve the underlying problem of runaway technical progress.
  • Question 3: Why are open-source models so hard to regulate? Because their code - and often their weights - circulate freely online. Once released, they can be forked, modified and run anonymously, usually beyond traditional oversight.
  • Question 4: What can ordinary users do in practical terms? Use AI deliberately, question critical outputs, report harmful behaviour, and for sensitive uses (health, finance) always check a second source - rather than trusting every promise of automation blindly.
  • Question 5: Who should decide on long-term AI frameworks? Not only tech companies and politicians. We need mixed panels spanning civil society, science, practitioners and affected groups - otherwise the same power structures simply get extended into code.
