Something profound is happening in the world’s democratic capitals, and almost nobody wants to talk about it honestly. While headlines scream about AI breakthroughs and chatbots going mainstream, a quieter — and far more consequential — debate is unfolding in the corridors of power from Brussels to Beijing, from Washington to Geneva. The question is not whether governments will regulate artificial intelligence. They will. The question is: who will write those rules, and whose vision of humanity will they encode into law?

This is the central contest of the coming decade, and it is already well underway.

The Brussels Effect — and Its Limits

For years, techno-optimists told themselves a comforting story: Silicon Valley sets the global standard, and the rest of the world follows. The internet was born American, and American values — free expression, light-touch regulation, permissionless innovation — defined the digital age. That story is now thoroughly obsolete. The most ambitious attempt to regulate AI is not coming from Washington. It is coming from Brussels, where the European Union has spent four years crafting the world’s first comprehensive AI legal framework.

The EU AI Act, which entered into force in August 2024, classifies AI systems by risk level and imposes strict requirements on everything from facial recognition in public spaces to large language models trained on vast corpora of human text. It requires transparency. It demands accountability. It bans a narrow category of practices deemed fundamentally incompatible with human dignity. It is imperfect — criticized by civil liberties groups for exceptions that swallow the rules, and by industry for compliance costs that could strangle innovation before it breathes. But it exists. It is real. And it is already shaping how companies build AI systems, because the logic of the Brussels Effect holds as powerfully for AI as it did for data privacy: if you want to sell in the world’s largest market, you play by its rules.

“The EU AI Act is not perfect regulation. But it is the first serious attempt by any major power to answer the question that matters most: what kind of intelligence do we want in our societies, and what limits should it respect?”

America’s Regulatory Paradox

The United States, meanwhile, remains trapped in a regulatory paradox that grows more acute with every month. The Trump administration’s rollback of Biden-era AI executive orders in early 2025 sent a clear signal: the United States intends to lead on AI through industry, not government. The message resonated with investors and developers who feared that Brussels-style regulation would import European anxieties about technology into the one place where AI has always been allowed to run fast.

But the signal also created a vacuum. A country that declines to regulate AI is not a country without AI policy — it is a country whose AI policy is made entirely by private actors. And private actors, however well-intentioned, optimize for what their shareholders want. The social contract implications of AI — who wins, who loses, who gets surveilled, who gets displaced — do not resolve themselves in the absence of rules. They resolve in favor of whoever has the most power.

What is striking is how the American posture has already begun to fracture. State legislatures, particularly in California and New York, have begun enacting their own AI laws covering election deepfakes, algorithmic discrimination in hiring, and autonomous vehicles. What Washington declined to do nationally, the states are doing locally, producing a patchwork that is, paradoxically, more restrictive in some respects than anything Brussels has imposed. The federal government’s refusal to set coherent national standards is not producing a light-touch regime. It is producing a chaotic one.

“The question is not whether governments will regulate artificial intelligence. They will. The question is who will write those rules, and whose vision of humanity will they encode into law?”

China’s State-Driven Model

China has taken yet another approach — one that is both more coherent and more alarming. Beijing does not pretend that AI regulation is unnecessary, or that the market will sort out its excesses. China regulates AI vigorously, but the purpose is not consumer protection or democratic accountability. It is regime security. The goal is to ensure that artificial intelligence strengthens, rather than destabilizes, the Chinese Communist Party’s monopoly on political power.

This sounds straightforwardly dystopian, and in some respects it is. Chinese AI companies operate under a web of content restrictions, algorithmic accountability requirements, and mandatory government access provisions that would be unconstitutional in any liberal democracy. The generative AI regulations China enacted in 2023 require that AI-generated content promote “socialist core values” and that companies maintain logs for government inspection. These are not peripheral constraints — they are architectural features of how AI is built in China.

But to dismiss China’s approach as simply authoritarian is to miss something important. China is taking the challenge of AI governance seriously at the level of national strategy. The Chinese government has published detailed national AI development plans, invested billions in AI infrastructure, and built an institutional architecture for AI governance that coordinates across multiple government agencies. Whether or not you approve of the goals, the rigor of the planning is remarkable. The CCP has decided what role AI will play in Chinese society and is building the regulatory scaffolding to enforce that decision. That is governance. It is governance we should find troubling, but it is governance nonetheless.

The Missing Voice: The Global South

What is most striking about this emerging global competition to regulate AI is who is not at the table. The debate is almost entirely a conversation among wealthy nations — the EU, the United States, China, the United Kingdom, Japan, South Korea. These are the places where AI is being built, and they are the places where the rules are being written. But AI is not a technology that respects borders, and its effects will not be contained by them.

For much of the Global South, the AI revolution arrives not as a tool of liberation but as a fait accompli. The algorithms that will decide who gets credit, who gets a job, who gets a visa, and who gets a phone number are being trained on data from wealthy countries and optimized for the preferences of wealthy users. The regulatory frameworks being developed in Brussels and Washington will shape what products are safe to deploy in Lagos or Dhaka, but the people of Lagos and Dhaka had no say in writing them.

This is the colonial dimension of the AI governance debate that nobody wants to discuss. The nations that built the internet — and profited from it enormously — are now writing the rules for the next generation of the internet. The nations that were colonized by the telegraph and the printing press are being told to accept the chatbot and the recommendation engine on terms set by others. This is not a conspiracy. It is a predictable consequence of power, and it will reproduce the hierarchies of the previous technological era unless deliberate efforts are made to diversify who sits at the table.

“The nations that built the internet — and profited from it enormously — are now writing the rules for the next generation. This is not a conspiracy. It is a predictable consequence of power.”

What a Humane AI Framework Would Look Like

If we take seriously the idea that AI regulation is the defining policy challenge of the coming decade, we should be honest about what good regulation would actually require. It would start from the premise that artificial intelligence is a social technology — one whose effects flow through institutions, labor markets, and democratic processes — not merely a product to be certified for safety before deployment.

Good AI regulation would require meaningful transparency: not just disclosure of training data and model weights, but real explainability for consequential decisions. When an AI system denies someone a loan, or flags them for additional scrutiny at an airport, or deprioritizes their resume, the person affected should be able to understand why and to challenge the decision. This is not a radical demand. It is the minimum condition for a just society.

Good AI regulation would protect labor, not just with boilerplate provisions about retraining, but with real mechanisms for worker power in the age of automation. The history of technological displacement is not a story of inevitable progress — it is a story of choices. Societies that chose to share the gains from technological change managed it. Societies that chose to let the market distribute the gains found themselves with gilded elites and hollowed-out middle classes. AI is the next choice point, and the window for making it is closing.

Good AI regulation would insist on democratic accountability: not just advisory boards of ethicists, but actual legislative oversight, judicial review, and the hard work of elections and public deliberation. The idea that AI companies can govern themselves through internal ethics teams is a comforting fiction. Ethics without enforcement is not ethics — it is branding.

The Stakes Could Not Be Higher

The race to regulate AI is, at bottom, a race to decide what kind of world artificial intelligence will produce. Will it be a world where the technology serves human flourishing — where AI expands human capability, liberates people from drudge work, and brings down the cost of healthcare and education? Or will it be a world where AI entrenches existing power, automates authoritarianism, and turns the remaining commons of information and attention into new frontiers of extraction?

The answer to that question will not be determined by engineers or investors alone. It will be determined by governments, by courts, by civil society, and by citizens who demand a voice in a technology that will reshape their lives. The rules are being written now. The only question is who will be at the table when they are finalized.

That is the race worth paying attention to — and the one that deserves more than comfortable silence.

Anna Schmidt is a Senior Opinion Writer for Media Hook, offering sharp commentary on politics, culture, and the ideas that define our times.
