The Race to Regulate AI Has Entered Its Most Dangerous Phase

As governments worldwide scramble to contain artificial intelligence, the gap between policy and capability grows wider by the day.

The article describes an inflection point in global AI governance: major powers have moved past debating whether to regulate and are now struggling with how fast and how hard to rein in a technology that is evolving at unprecedented speed.

Key points:

Diverging Regulatory Models

EU: The AI Act is the most comprehensive, risk-based framework so far, imposing strict obligations on high-risk systems (e.g., hiring, policing, critical infrastructure) through audits and transparency rules.

US: Relies on executive orders and voluntary guidelines, with no unified federal law. Regulation is emerging piecemeal at the state level, creating a fragmented landscape.

China: Has moved quickly with binding rules on generative AI, deepfakes, and recommendation algorithms, but critics see these as tools of information control as much as citizen protection.

Innovation vs. Safety Tension

Industry leaders warn that heavy-handed rules could push AI research and deployment to more permissive jurisdictions, concentrating power where safeguards are weakest.

Advocates for stronger regulation argue that unregulated AI poses escalating risks—misinformation, deepfakes, and autonomous weapons—making delay more dangerous than overreach.

Global Coordination Problem

AI systems are borderless: models trained in one country can be deployed worldwide almost instantly.

Without harmonized standards, companies can engage in regulatory arbitrage, relocating to the least restrictive regimes.

Upcoming international summits aim to build cross-border frameworks, but disagreements over enforcement and IP protections threaten progress.

The Next 12 Months Are Pivotal

The EU will test whether its AI Act can be enforced effectively in practice.

The US election will shape whether federal AI legislation advances or remains stalled.

The open-source AI movement complicates all governance efforts by making powerful models widely accessible, challenging traditional control mechanisms.

Overall, the world is entering a decisive phase in AI regulation, caught between precaution and permissiveness, and it remains unclear whether any regulatory framework can keep pace with, let alone meaningfully constrain, a technology that is improving rapidly and distributed globally.

Outlook

The next phase of AI regulation will likely be defined by three intersecting dynamics: regulatory experimentation, geopolitical competition, and the struggle to operationalize enforcement at scale.

First, the EU AI Act will function as a global reference model, much as the GDPR did for data protection. Because access to the EU market is commercially indispensable, many multinational firms will default to EU-style compliance globally rather than maintain fragmented standards. Over the next year, expect:

A rapid build-out of internal compliance teams focused on risk classification, documentation, and auditability (a minimal sketch of such risk tiering follows this list).

A surge in third-party "AI assurance" providers offering model evaluations, red-teaming, and conformity assessments.

Early test cases in high‑risk domains like hiring, credit scoring, and biometric identification, which will clarify how aggressively regulators interpret the Act.
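
To make that classification work concrete, the sketch below shows one hypothetical way a compliance team might encode the AI Act's published risk tiers and map internal use cases onto them. The tier names and the high-risk obligations come from the Act itself; the use-case mapping and every identifier (RiskTier, obligations_for, and so on) are illustrative assumptions, not an official taxonomy or legal advice.

    from enum import Enum

    class RiskTier(Enum):
        """The EU AI Act's four risk tiers."""
        UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
        HIGH = "high"                  # e.g. hiring, credit scoring, biometrics
        LIMITED = "limited"            # transparency duties, e.g. chatbots
        MINIMAL = "minimal"            # no specific obligations

    # Hypothetical internal mapping; real classification needs legal review.
    USE_CASE_TIERS = {
        "cv_screening": RiskTier.HIGH,
        "credit_scoring": RiskTier.HIGH,
        "biometric_id": RiskTier.HIGH,
        "customer_chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
    }

    def obligations_for(use_case: str) -> list[str]:
        """Return an illustrative obligation checklist for a use case."""
        tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
        if tier is RiskTier.UNACCEPTABLE:
            return ["do not deploy"]
        if tier is RiskTier.HIGH:
            return ["risk management system", "technical documentation",
                    "logging and traceability", "human oversight",
                    "conformity assessment"]
        if tier is RiskTier.LIMITED:
            return ["disclose AI use to end users"]
        return []

    print(obligations_for("cv_screening"))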

Second, the United States will continue to regulate through a mix of soft law and sectoral rules rather than a single omnibus statute. In practice, this means:

Federal agencies (FTC, CFPB, EEOC, FDA, SEC, etc.) using existing authorities to treat AI as an extension of already-regulated activities (consumer protection, employment discrimination, medical devices, financial risk).

States experimenting with targeted laws on deepfakes, biometric data, and automated decision systems, creating compliance friction for nationwide deployments.

Stronger expectations around transparency, incident reporting, and safety evaluations for frontier models, even if formal obligations remain patchy.

Third, China’s approach will continue to blend safety, social stability, and information control. Its generative AI and recommendation-system rules will:

Push providers toward pre‑deployment content controls, watermarking, and traceability of generated outputs.

Embed political and ideological constraints directly into technical and organizational requirements.

Serve as a template for other states that prioritize sovereignty and information control over open innovation.

Internationally, coordination will lag behind deployment. Forums like the AI Seoul Summit, the OECD AI Policy Observatory, and G7/G20 processes will likely converge on high‑level principles—transparency, accountability, human oversight, and safeguards for critical infrastructure—but struggle to agree on:

Binding enforcement mechanisms across borders.

Shared thresholds for what counts as a "frontier" or "high‑risk" model.

Common rules for cross‑border model evaluation, data flows, and incident disclosure.

Open‑source and widely distributed models will be the hardest to govern. Regulators will increasingly shift from trying to control who can access models to controlling how they are used in specific contexts. Expect:

More use‑case‑specific rules (e.g., for political advertising, biometric identification, medical advice, financial recommendations) rather than blanket bans on model classes.

Liability frameworks that focus on deployers and integrators—those embedding models into products and workflows—rather than solely on model developers.

Growing emphasis on technical safety measures: watermarking, provenance metadata, usage logging, and standardized evaluation benchmarks (a rough sketch of the provenance-and-logging idea follows).
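
To make that direction concrete, here is a minimal sketch of stamping a generated output with a content hash, model identifier, and timestamp, and appending each event to an audit log. The function and field names (stamp_provenance, model_id, and so on) are assumptions made up for this example; production systems would more likely adopt an emerging provenance standard such as C2PA content credentials than an ad hoc record like this.

    import hashlib
    import json
    import logging
    from datetime import datetime, timezone

    # Append-only audit trail of generation events (usage logging).
    logging.basicConfig(filename="generation_audit.log", level=logging.INFO)

    def stamp_provenance(output_text: str, model_id: str) -> dict:
        """Attach minimal provenance metadata to one generated output."""
        record = {
            "content_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }
        logging.info(json.dumps(record))  # one audit line per generation
        return record

    meta = stamp_provenance("Example generated text.", model_id="example-model-v1")
    print(meta["content_sha256"][:16])  # short fingerprint for display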

Over the next twelve months, the most plausible trajectory is a cautious but steadily tightening regime:

The EU will set the de facto global floor for high‑risk systems.

The US will rely on enforcement through existing laws plus executive and agency guidance, with a few states pushing more aggressive rules.

China will continue to refine a highly centralized, state‑centric model of AI governance.

Cross‑border frameworks will remain principle‑heavy and enforcement‑light, but they will start to converge on shared expectations for frontier model testing and incident reporting.

For innovators, this environment rewards:

Building documentation, evaluation, and monitoring into the development lifecycle from the outset.

Designing systems for context‑sensitive deployment, with configurable safeguards and audit trails (sketched after this list).

Treating compliance not as a bolt‑on cost but as a competitive differentiator in markets where trust and safety are becoming core to adoption.
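
As a rough illustration of configurable, context-sensitive safeguards, the sketch below defines per-context deployment settings that toggle output filtering, human review, and request logging. The context names and flags are invented for this example and do not correspond to any particular framework's API.

    from dataclasses import dataclass

    @dataclass
    class DeploymentConfig:
        """Safeguard settings for one deployment context (illustrative)."""
        context: str
        filter_output: bool = True          # run outputs through a content filter
        require_human_review: bool = False  # queue outputs for human sign-off
        log_requests: bool = True           # keep an audit trail of all calls

    # Stricter safeguards in higher-risk contexts, lighter ones elsewhere.
    CONFIGS = {
        "medical_advice": DeploymentConfig("medical_advice", require_human_review=True),
        "internal_search": DeploymentConfig("internal_search", filter_output=False),
    }

    def safeguards_for(context: str) -> DeploymentConfig:
        # Unknown contexts fall back to the strictest settings.
        return CONFIGS.get(context, DeploymentConfig(context, require_human_review=True))

    print(safeguards_for("medical_advice"))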

The central tension will persist: regulators must move fast enough to mitigate systemic risks without freezing beneficial experimentation. No jurisdiction is likely to "solve" this in the near term; instead, we will see a rolling process of adjustment, with high‑profile failures and enforcement actions periodically resetting the political appetite for stricter controls.
