The Regulatory Race to Govern AI
Artificial intelligence is developing faster than the regulatory frameworks designed to govern it. From generative AI tools that can produce realistic text, images, and video in seconds, to autonomous systems making consequential decisions in healthcare, finance, and law enforcement, policymakers across the world are grappling with a common dilemma: how do you regulate a technology whose full capabilities — and risks — are still being discovered?
The answer, so far, looks very different depending on where you are in the world.
The European Union: Risk-Based Regulation
The EU has taken the most comprehensive regulatory approach with its AI Act, which entered into force in August 2024 and is being phased in over several years. The Act classifies AI systems by risk level:
- Unacceptable risk: Systems such as social scoring by governments are banned outright, and real-time remote biometric identification in public spaces is prohibited except in narrow law-enforcement circumstances.
- High risk: AI used in critical infrastructure, education, employment, and law enforcement faces strict requirements for transparency, human oversight, and data quality.
- Limited and minimal risk: Systems face lighter transparency obligations, such as disclosing that a user is interacting with an AI, or no specific requirements at all.
The EU's framework is the most structured in the world, but critics warn it could put European companies at a disadvantage against less-regulated competitors, particularly in the United States and China.
The United States: Sector-by-Sector Approach
The US has not passed a comprehensive federal AI law. Instead, regulation is happening sector by sector — through existing bodies like the FDA (for AI in medical devices), the FTC (for consumer protection), and financial regulators overseeing AI in lending and trading.
Executive orders have directed federal agencies to develop AI guidelines, and several states — most notably California — have passed or are considering their own AI legislation. This fragmented approach allows for flexibility but creates a patchwork that businesses operating nationally find difficult to navigate.
China: Innovation with State Control
China has moved quickly to regulate specific AI applications, particularly generative AI: under interim measures that took effect in 2023, providers must register their services and submit to security assessments. At the same time, the Chinese government views AI as a strategic national priority and is investing heavily in its development through state-backed programmes.
The regulatory environment is less focused on individual rights and transparency in the Western sense, and more on ensuring AI development serves national security objectives and does not threaten social stability.
Key Issues That Every Regulator Faces
Regardless of approach, governments everywhere are wrestling with the same fundamental questions:
- Accountability: When an AI system causes harm, who is responsible — the developer, the deployer, or the user?
- Transparency: Should AI systems be required to explain their decisions? How do you mandate explainability for complex models?
- Bias and fairness: AI trained on historical data can encode and amplify existing social biases. How should regulators address this?
- Existential risk: Some researchers argue that sufficiently advanced AI could pose risks to humanity at a civilisational scale. Most regulators are not yet engaging with this level of risk.
International Coordination: Still in Early Stages
Multilateral efforts to coordinate AI governance are underway but nascent. The G7's Hiroshima AI Process and the UN Secretary-General's High-level Advisory Body on AI have published principles and recommendations, but these are non-binding. A binding international treaty on AI remains a distant prospect given geopolitical divisions.
What is certain is that AI governance will be one of the defining policy challenges of this decade. Getting it right — balancing innovation, safety, and fundamental rights — will require sustained, serious, and genuinely global effort.