I participated in the Prague Process simulation at the University of Michigan, working with real case material from Operátor ICT (OICT), Prague's city-owned digital infrastructure company. As artificial intelligence systems become increasingly integrated into critical sectors—from healthcare and finance to criminal justice and employment—governments worldwide are grappling with how to regulate these powerful technologies. The challenge lies in balancing innovation with protection, ensuring AI systems are safe, fair, and transparent while not stifling technological progress.
Working with OICT's real-world smart city challenges provided concrete context for understanding how AI governance plays out in practice. This comparative analysis examines how five major jurisdictions—the European Union, United States, China, United Kingdom, and India—are approaching AI governance. Each represents a different regulatory philosophy, from the EU's comprehensive risk-based framework to the UK's pro-innovation approach, creating a complex global landscape that organizations like OICT must navigate when deploying AI in public infrastructure.
The Prague Process simulation was an academic workshop at the University of Michigan where students took on roles of international delegates and stakeholders to simulate AI governance negotiations—similar to Model UN but focused on AI policy. The simulation was framed around real-world smart city AI governance challenges that organizations like OICT face in practice.
Operátor ICT a.s. is Prague's city-owned digital infrastructure company that builds and operates smart city technology for the Czech capital. While not a pure AI governance organization, OICT is deeply involved in questions of algorithmic governance, smart city AI ethics, and data governance as Prague's primary tech operator.
Key Systems:
The Prague Process used OICT's real-world challenges as case material, with students representing different stakeholder perspectives in AI governance negotiations:
My Role: Delegate representing [jurisdiction] in negotiations on governing AI in smart city contexts, contributing to policy recommendations for algorithmic accountability in public infrastructure.
Working with OICT's case material highlighted the unique challenges of governing AI in smart city contexts, where algorithmic systems directly impact citizens' daily lives through public services and infrastructure.
Challenge: Balancing open city data initiatives with privacy protection and algorithmic transparency requirements (one concrete privacy check is sketched below).
AI Governance Issues:
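One concrete way this balance shows up in practice is checking re-identification risk before a dataset is released. The sketch below is a minimal, hypothetical k-anonymity check: the column names and the threshold k = 5 are illustrative assumptions, not OICT policy.

```python
from collections import Counter

# Hypothetical quasi-identifiers for a city mobility dataset; the column
# names and the k = 5 threshold are illustrative assumptions, not OICT policy.
QUASI_IDENTIFIERS = ("home_district", "age_band", "vehicle_type")
K = 5

def k_anonymity_violations(records, quasi_identifiers=QUASI_IDENTIFIERS, k=K):
    """Return quasi-identifier combinations that appear fewer than k times."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return {combo: n for combo, n in counts.items() if n < k}

def safe_to_publish(records):
    """Naive release gate: publish only if no combination is rare enough to re-identify someone."""
    return not k_anonymity_violations(records)

if __name__ == "__main__":
    sample = [
        {"home_district": "Praha 3", "age_band": "30-39", "vehicle_type": "bike"},
        {"home_district": "Praha 3", "age_band": "30-39", "vehicle_type": "bike"},
    ]
    print(safe_to_publish(sample))  # False: only two records share this combination
```

A real release process would pair a check like this with aggregation or suppression of the rare combinations, but even the naive version makes the transparency-versus-privacy trade-off tangible.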
Challenge: Implementing AI-powered routing and accessibility features while ensuring equitable access and service quality (a simple service-equity audit is sketched below).
AI Governance Issues:
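An equity question like this can be made measurable with a simple service-parity audit across districts. The sketch below is a toy example: the district names, the wait-time metric, and the 0.8 parity threshold are assumptions for illustration, not figures from OICT systems.

```python
from statistics import mean

# Hypothetical per-district trip records: (district, wait_minutes).
# Districts, the metric, and the 0.8 threshold are illustrative assumptions.
PARITY_THRESHOLD = 0.8

def service_parity(trips):
    """Return each district's service level relative to the best-served district."""
    by_district = {}
    for district, wait in trips:
        by_district.setdefault(district, []).append(wait)
    # Higher score = better service (shorter average wait).
    scores = {d: 1.0 / mean(waits) for d, waits in by_district.items()}
    best = max(scores.values())
    return {d: s / best for d, s in scores.items()}

def flag_underserved(trips, threshold=PARITY_THRESHOLD):
    """Districts whose relative service level falls below the parity threshold."""
    return [d for d, ratio in service_parity(trips).items() if ratio < threshold]

if __name__ == "__main__":
    trips = [("Praha 1", 4), ("Praha 1", 6), ("Praha 14", 9), ("Praha 14", 11)]
    print(flag_underserved(trips))  # ['Praha 14'] under these toy numbers
```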
Challenge: Governing AI-powered camera systems and potential facial recognition deployment in public spaces.
AI Governance Issues:
Challenge: Establishing vendor accountability and compliance requirements for AI systems purchased by city government.
AI Governance Issues:
Framework: Comprehensive horizontal regulation with four risk categories: Unacceptable Risk (banned), High Risk (strict requirements), Limited Risk (transparency obligations), and Minimal Risk (voluntary codes); the sketch after this section maps illustrative smart-city systems onto these tiers.
Enforcement: Fines of up to €35 million or 7% of global annual turnover, whichever is higher, with market surveillance authorities in each member state handling oversight.
Key Features: GDPR integration, conformity assessments, CE marking, fundamental rights impact assessments, human oversight requirements.
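To make the four tiers concrete in a smart-city setting, the sketch below maps a few hypothetical system types onto them. The mapping reflects my reading of the Act's categories applied to illustrative examples; it is not legal advice or OICT's actual classification.

```python
from enum import Enum

class EUAIActRisk(Enum):
    UNACCEPTABLE = "prohibited (narrow exceptions only)"
    HIGH = "conformity assessment, human oversight, logging"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "voluntary codes of conduct"

# Illustrative mapping of hypothetical smart-city systems to risk tiers.
# This reflects my reading of the Act's categories, not an official classification.
SMART_CITY_EXAMPLES = {
    "real-time remote biometric identification in public spaces": EUAIActRisk.UNACCEPTABLE,
    "AI that scores citizens' access to essential public services": EUAIActRisk.HIGH,
    "citizen-facing chatbot for city service requests": EUAIActRisk.LIMITED,
    "traffic-flow forecasting for planning dashboards": EUAIActRisk.MINIMAL,
}

if __name__ == "__main__":
    for system, tier in SMART_CITY_EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")
```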
Framework: Executive Order 14110 + NIST AI Risk Management Framework. Sector-specific regulation through existing agencies (FDA for medical devices, NHTSA for autonomous vehicles).
Enforcement: Distributed across federal agencies. Voluntary standards with mandatory reporting for large AI systems trained with more than 10^26 FLOPs of compute; the sketch after this section gives a rough sense of that scale.
Key Features: Safety and security standards, algorithmic impact assessments, civil rights protections, international cooperation emphasis.
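To put the 10^26 FLOPs reporting threshold in perspective, the sketch below uses the common rule of thumb that training compute is roughly 6 FLOPs per parameter per training token; the model and dataset sizes are hypothetical.

```python
REPORTING_THRESHOLD_FLOPS = 1e26  # the compute threshold cited under EO 14110

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def requires_reporting(n_parameters: float, n_training_tokens: float) -> bool:
    """Whether a hypothetical training run crosses the reporting threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= REPORTING_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    print(estimate_training_flops(1e12, 20e12))  # ~1.2e26 FLOPs
    print(requires_reporting(1e12, 20e12))       # True under this rough estimate
    # A much smaller run stays far below the threshold (~8.4e22 FLOPs).
    print(requires_reporting(7e9, 2e12))         # False
```

Under this rough estimate, the threshold is aimed at frontier-scale training runs rather than typical deployments.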
Framework: Algorithm Recommendation Management Provisions, Deep Synthesis Provisions, Draft AI Measures. Focus on algorithmic transparency and national security.
Enforcement: Cyberspace Administration of China (CAC) oversight. Algorithm registries, security assessments, content moderation requirements.
Key Features: Algorithmic transparency, data localization, content control, social stability considerations, state oversight.
Framework: AI White Paper emphasizing existing regulator empowerment. Context-specific regulation through ICO, FCA, CQC, and other sector regulators.
Enforcement: Existing regulators adapt current frameworks. Principles-based approach with regulatory sandboxes and innovation-friendly policies.
Key Features: Regulatory flexibility, innovation sandboxes, international leadership ambitions, risk-proportionate responses.
Framework: Digital Personal Data Protection Act 2023 as foundation. Emerging AI-specific regulations balancing innovation with protection.
Enforcement: Data Protection Board establishment. Consultation-based approach with industry engagement and phased implementation.
Key Features: Data protection integration, digital economy focus, innovation promotion, stakeholder consultation, gradual implementation.
Case Study: An AI system used by multinational corporations to screen job applications, analyzing resumes and predicting candidate success.
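Algorithmic accountability for a screening system like this often starts with a selection-rate audit. The sketch below applies the four-fifths rule, a common heuristic from US employment practice, to made-up numbers; it flags potential adverse impact rather than proving or disproving discrimination.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were advanced by the screener."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """groups maps group name -> (selected, applicants).

    Flags groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule' heuristic)."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

if __name__ == "__main__":
    # Made-up screening outcomes for illustration only.
    outcomes = {"group_a": (90, 300), "group_b": (45, 250)}
    print(four_fifths_check(outcomes))  # {'group_b': 0.6} -> potential adverse impact
```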
Case Study: An AI system that analyzes medical imaging to detect early-stage cancer, deployed in hospitals across multiple countries.
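Deploying one diagnostic system across all five jurisdictions means satisfying several regimes at once. The sketch below records my reading of which framework would most likely govern it in each jurisdiction; it is an illustration of the compliance surface, not a legal determination.

```python
# Illustrative mapping only: my reading of which regime would most likely
# govern an AI cancer-detection tool in each jurisdiction, not legal advice.
APPLICABLE_REGIMES = {
    "EU": "AI Act high-risk obligations alongside Medical Device Regulation conformity assessment",
    "US": "FDA premarket review for software as a medical device, plus NIST AI RMF practices",
    "China": "NMPA medical device approval plus applicable algorithm and data-security rules",
    "UK": "MHRA medical device route, with sector regulators applying the AI principles",
    "India": "CDSCO medical device rules with DPDP Act 2023 obligations for patient data",
}

if __name__ == "__main__":
    for jurisdiction, regime in APPLICABLE_REGIMES.items():
        print(f"{jurisdiction}: {regime}")
```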
The divergent approaches across jurisdictions create compliance complexity for global AI deployments. Organizations must navigate multiple, sometimes conflicting, regulatory frameworks simultaneously.
A clear tension exists between promoting AI innovation and ensuring adequate protection: the UK prioritizes innovation while the EU emphasizes comprehensive protection, with others finding a middle ground.
Many jurisdictions lack sufficient technical expertise and resources for effective AI regulation enforcement, particularly for complex algorithmic systems and emerging technologies.
Despite different approaches, common themes emerge around transparency, accountability, and risk management, suggesting potential for international harmonization efforts.
Small and medium enterprises face disproportionate compliance costs, potentially stifling innovation in the AI ecosystem and favoring large technology companies.
AI systems often require cross-border data flows, creating tension between local data protection requirements and global AI system functionality.
Developed comprehensive framework for comparing AI governance approaches across five major jurisdictions, identifying key policy dimensions and regulatory strategies.
Proposed actionable recommendations for harmonizing international AI governance while respecting jurisdictional sovereignty and cultural differences.
Evaluated the technical feasibility and implementation challenges of different regulatory approaches, considering current AI technology capabilities.
Analyzed the differential impacts of various regulatory approaches on different stakeholder groups, from tech companies to civil society organizations.