Prague Process Simulation in partnership with Operátor ICT (OICT), Prague's city-owned digital infrastructure company

AI Governance Framework

Comparative Analysis of International AI Regulation
University of Michigan • 2023-2024

The Global AI Governance Challenge

I participated in the Prague Process simulation at the University of Michigan, working with real case material from Operátor ICT (OICT), Prague's city-owned digital infrastructure company. As artificial intelligence systems become increasingly integrated into critical sectors—from healthcare and finance to criminal justice and employment—governments worldwide are grappling with how to regulate these powerful technologies. The challenge lies in balancing innovation with protection, ensuring AI systems are safe, fair, and transparent while not stifling technological progress.

Working with OICT's real-world smart city challenges provided concrete context for understanding how AI governance plays out in practice. This comparative analysis examines how five major jurisdictions—the European Union, United States, China, United Kingdom, and India—are approaching AI governance. Each represents a different regulatory philosophy, from the EU's comprehensive risk-based framework to the UK's pro-innovation approach, creating a complex global landscape that organizations like OICT must navigate when deploying AI in public infrastructure.

About the Prague Process & OICT Partnership

The Prague Process simulation was an academic workshop at the University of Michigan where students took on roles of international delegates and stakeholders to simulate AI governance negotiations—similar to Model UN but focused on AI policy. The simulation was framed around real-world smart city AI governance challenges that organizations like OICT face in practice.

OICT: Prague's Digital Infrastructure

Operátor ICT a.s. is Prague's city-owned digital infrastructure company that builds and operates smart city technology for the Czech capital. While not a pure AI governance organization, OICT is deeply involved in questions of algorithmic governance, smart city AI ethics, and data governance as Prague's primary tech operator.

Key Systems:

  • PID Lítačka: Prague's integrated transit app with AI-powered routing and accessibility features
  • Golemio: City data platform providing open city data with privacy and algorithmic transparency considerations
  • Smart City Infrastructure: Municipal sensors, digitalization projects, and algorithmic decision-making systems

Simulation Framework

The Prague Process used OICT's real-world challenges as case material, with students representing different stakeholder perspectives in AI governance negotiations:

  • City Government: Municipal officials balancing public service delivery with citizen rights
  • Civil Society: Privacy advocates and digital rights organizations
  • Tech Vendors: Companies providing AI solutions to city governments
  • EU Regulators: Policy makers implementing AI Act compliance
  • Citizens: Residents affected by algorithmic decision-making in public services

My Role: Delegate representing [jurisdiction] in negotiations on governing AI in smart city contexts, contributing to policy recommendations for algorithmic accountability in public infrastructure.

Smart City AI Governance

Working with OICT's case material highlighted the unique challenges of governing AI in smart city contexts, where algorithmic systems directly impact citizens' daily lives through public services and infrastructure.

Golemio Data Platform

Challenge: Balancing open city data initiatives with privacy protection and algorithmic transparency requirements.

AI Governance Issues:

  • Data anonymization and re-identification risks
  • Algorithmic bias in city service recommendations
  • Transparency obligations for automated decision-making
  • Cross-border data flows for EU-wide city comparisons
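The re-identification concern in the first bullet is often framed as k-anonymity: every combination of quasi-identifiers in a published dataset should match at least k records. A minimal check, using a made-up rider dataset (the column names and values are illustrative, not Golemio's actual schema):

```python
from collections import Counter

def k_anonymity(records, quasi_ids):
    """Smallest equivalence-class size over the given quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

riders = [
    {"district": "Praha 1", "age_band": "30-39", "trip": "A"},
    {"district": "Praha 1", "age_band": "30-39", "trip": "B"},
    {"district": "Praha 7", "age_band": "60-69", "trip": "C"},
]

# k = 1 means at least one rider is unique on these columns
# and therefore at risk of re-identification.
print(k_anonymity(riders, ["district", "age_band"]))  # 1
```

A release policy might require, say, k ≥ 5 before any aggregate is published, with rows suppressed or generalized (coarser districts, wider age bands) until the threshold holds.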

PID Lítačka Transit AI

Challenge: Implementing AI-powered routing and accessibility features while ensuring equitable access and service quality.

AI Governance Issues:

  • Algorithmic fairness in route optimization
  • Accessibility compliance for disabled users
  • Real-time decision-making transparency
  • Performance monitoring and bias detection
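Bias detection of the kind listed above is often operationalized with the four-fifths rule: the favorable-outcome rate for a protected group should be at least 80% of the reference group's rate. A minimal sketch with invented sample data (1 = rider was offered an accessible route):

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of favorable-outcome rates; values below 0.8 flag potential bias."""
    return selection_rate(protected) / selection_rate(reference)

wheelchair_users = [1, 1, 0, 1]  # 75% served with an accessible route
other_riders = [1, 1, 1, 1]      # 100% served

ratio = disparate_impact_ratio(wheelchair_users, other_riders)
print(f"{ratio:.2f}")  # 0.75, below the 0.8 four-fifths threshold
```

In a monitoring pipeline this ratio would be recomputed on each batch of routing decisions, with breaches triggering review rather than automatic blame, since small samples can dip below 0.8 by chance.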

Municipal Surveillance Systems

Challenge: Governing AI-powered camera systems and potential facial recognition deployment in public spaces.

AI Governance Issues:

  • Facial recognition bans and exceptions
  • Public space privacy expectations
  • Law enforcement AI tool accountability
  • Democratic oversight of surveillance AI

AI Procurement Governance

Challenge: Establishing vendor accountability and compliance requirements for AI systems purchased by city government.

AI Governance Issues:

  • Vendor transparency and explainability requirements
  • Ongoing monitoring and audit obligations
  • Liability allocation for AI system failures
  • Public procurement compliance with AI Act

Jurisdiction Comparison

🇪🇺 European Union - AI Act (2024)

Risk-Based Approach

Framework: Comprehensive horizontal regulation with four risk categories: Unacceptable Risk (banned), High Risk (strict requirements), Limited Risk (transparency obligations), and Minimal Risk (voluntary codes).

Enforcement: Fines up to €35 million or 7% of global annual turnover. Market surveillance authorities in each member state.

Key Features: GDPR integration, conformity assessments, CE marking, fundamental rights impact assessments, human oversight requirements.

Regulatory Strength: 9/10
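The four-tier structure can be sketched as a simple lookup. The example use cases and rule ordering below are simplified illustrations of the Act's categories, not legal classifications:

```python
# Illustrative subsets of each tier; "hiring" and "credit scoring" track
# Annex III high-risk categories, "social scoring" the Article 5 prohibitions.
BANNED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"biometric identification", "hiring", "credit scoring"}
LIMITED_RISK = {"chatbot", "deepfake generator"}

def risk_tier(use_case: str) -> str:
    """Map a use case to its AI Act tier, checking the strictest tier first."""
    if use_case in BANNED:
        return "unacceptable risk (prohibited)"
    if use_case in HIGH_RISK:
        return "high risk (conformity assessment, human oversight)"
    if use_case in LIMITED_RISK:
        return "limited risk (transparency obligations)"
    return "minimal risk (voluntary codes)"

print(risk_tier("hiring"))  # high risk (conformity assessment, human oversight)
```

In practice classification turns on context of deployment, not just system type, which is why Article 6 and Annex III run to pages rather than a dictionary.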

🇺🇸 United States - Executive Order 14110

Sector-Specific Approach

Framework: Executive Order 14110 + NIST AI Risk Management Framework. Sector-specific regulation through existing agencies (FDA for medical devices, NHTSA for autonomous vehicles).

Enforcement: Distributed across federal agencies. Voluntary standards, with mandatory reporting for frontier models trained with more than 10^26 operations of compute.

Key Features: Safety and security standards, algorithmic impact assessments, civil rights protections, international cooperation emphasis.

Regulatory Strength: 6/10
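The 10^26 reporting threshold can be sanity-checked with the widely used 6·N·D estimate of training compute (FLOPs ≈ 6 × parameters × training tokens). The model size and token count below are hypothetical:

```python
# EO 14110 reporting threshold: 10^26 operations of training compute.
THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic."""
    return 6 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e}")           # 6.3e+24
print(flops >= THRESHOLD)       # False: more than an order of magnitude under
```

The exercise shows why the threshold was set where it was: as of the EO's signing it captured only a handful of frontier-scale training runs, not the broader ecosystem.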

🇨🇳 China - Algorithm Governance

Security-Focused Approach

Framework: Algorithm Recommendation Management Provisions, Deep Synthesis Provisions, and the Interim Measures for Generative AI Services. Focus on algorithmic transparency and national security.

Enforcement: Cyberspace Administration of China (CAC) oversight. Algorithm registries, security assessments, content moderation requirements.

Key Features: Algorithmic transparency, data localization, content control, social stability considerations, state oversight.

Regulatory Strength: 8.5/10

🇬🇧 United Kingdom - AI White Paper

Pro-Innovation Approach

Framework: The 2023 AI White Paper empowers existing regulators rather than creating a new AI authority, with context-specific regulation through the ICO, FCA, CQC, and other sector regulators.

Enforcement: Existing regulators adapt current frameworks. Principles-based approach with regulatory sandboxes and innovation-friendly policies.

Key Features: Regulatory flexibility, innovation sandboxes, international leadership ambitions, risk-proportionate responses.

Regulatory Strength: 4/10

🇮🇳 India - Emerging Framework

Balanced Approach

Framework: Digital Personal Data Protection Act 2023 as foundation. Emerging AI-specific regulations balancing innovation with protection.

Enforcement: Data Protection Board establishment. Consultation-based approach with industry engagement and phased implementation.

Key Features: Data protection integration, digital economy focus, innovation promotion, stakeholder consultation, gradual implementation.

Regulatory Strength: 5.5/10

Regulatory Spectrum

From Light-Touch to Prescriptive Regulation

  • 🇬🇧 UK: Pro-innovation, principles-based approach
  • 🇺🇸 US: Sector-specific, voluntary standards
  • 🇮🇳 India: Balanced, consultation-driven framework
  • 🇨🇳 China: State oversight, security-focused
  • 🇪🇺 EU: Comprehensive, risk-based regulation

Jurisdictions are ordered from the lightest-touch regime to the most prescriptive.

AI Regulation Timeline

  • 2016: GDPR adopted, laying the EU data protection foundation
  • 2018: GDPR enforcement begins, setting the global privacy standard
  • 2021: EU AI Act proposed, the first comprehensive horizontal AI law
  • 2022: US NIST AI Risk Management Framework released
  • 2023: US Executive Order 14110 signed and India's DPDP Act passed
  • 2024: EU AI Act enters into force, a global regulatory milestone
  • 2025+: Prospects for global harmonization and international cooperation

Case Studies

AI Hiring Algorithm

An AI system used by multinational corporations to screen job applications, analyzing resumes and predicting candidate success.

  • 🇪🇺 EU: High-risk AI system requiring conformity assessment, human oversight, bias testing, and transparency obligations under AI Act Articles 6–15.
  • 🇺🇸 US: Subject to EEOC guidance on algorithmic hiring tools, potential civil rights audits, and state-level algorithmic accountability laws.
  • 🇨🇳 China: Algorithm registration required, plus transparency reports and compliance with labor law provisions on fair employment.
  • 🇬🇧 UK: ICO guidance on automated decision-making, equality considerations, and potential regulatory sandbox participation.
  • 🇮🇳 India: Data protection compliance under the DPDP Act, an emerging AI governance framework, and consultation with industry stakeholders.

Medical Diagnosis AI

An AI system that analyzes medical imaging to detect early-stage cancer, deployed in hospitals across multiple countries.

  • 🇪🇺 EU: High-risk medical device requiring AI Act plus MDR compliance, clinical evaluation, post-market surveillance, and notified body assessment.
  • 🇺🇸 US: FDA premarket review as Software as a Medical Device (SaMD), typically via the 510(k) clearance pathway, with ongoing safety monitoring.
  • 🇨🇳 China: NMPA medical device approval, algorithm transparency requirements, and integration with national health data systems.
  • 🇬🇧 UK: MHRA medical device regulation, CQC quality standards, and potential participation in regulatory innovation programs.
  • 🇮🇳 India: CDSCO medical device approval, data localization requirements, and alignment with Digital India healthcare initiatives.

Policy Dimensions Analysis

Transparency: 🇪🇺 4.5/5 · 🇺🇸 3/5 · 🇨🇳 4/5 · 🇬🇧 2.5/5 · 🇮🇳 2.75/5
Accountability: 🇪🇺 4.25/5 · 🇺🇸 3.25/5 · 🇨🇳 3.75/5 · 🇬🇧 2.25/5 · 🇮🇳 2.5/5
Privacy: 🇪🇺 4.75/5 · 🇺🇸 2.75/5 · 🇨🇳 3.5/5 · 🇬🇧 4/5 · 🇮🇳 3.75/5
Safety: 🇪🇺 4.5/5 · 🇺🇸 3.5/5 · 🇨🇳 4.25/5 · 🇬🇧 3/5 · 🇮🇳 3.25/5
Innovation: 🇪🇺 2/5 · 🇺🇸 4/5 · 🇨🇳 3/5 · 🇬🇧 4.5/5 · 🇮🇳 4.25/5
Ethics: 🇪🇺 4.75/5 · 🇺🇸 3.25/5 · 🇨🇳 2.75/5 · 🇬🇧 3.5/5 · 🇮🇳 3.75/5

Key Findings

Regulatory Fragmentation Risk

The divergent approaches across jurisdictions create compliance complexity for global AI deployments. Organizations must navigate multiple, sometimes conflicting, regulatory frameworks simultaneously.

Innovation vs Protection Trade-offs

Clear tension between promoting AI innovation and ensuring adequate protection. The UK prioritizes innovation while the EU emphasizes comprehensive protection, with others finding middle ground.

Enforcement Capability Gaps

Many jurisdictions lack sufficient technical expertise and resources for effective AI regulation enforcement, particularly for complex algorithmic systems and emerging technologies.

Standards Convergence Potential

Despite different approaches, common themes emerge around transparency, accountability, and risk management, suggesting potential for international harmonization efforts.

SME Compliance Burden

Small and medium enterprises face disproportionate compliance costs, potentially stifling innovation in the AI ecosystem and favoring large technology companies.

Cross-Border Data Challenges

AI systems often require cross-border data flows, creating tension between local data protection requirements and global AI system functionality.

Key Contributions

Comparative Policy Analysis

Developed comprehensive framework for comparing AI governance approaches across five major jurisdictions, identifying key policy dimensions and regulatory strategies.

Methodology · Policy Research · Comparative Law

Policy Recommendations

Proposed actionable recommendations for harmonizing international AI governance while respecting jurisdictional sovereignty and cultural differences.

Policy Design · International Relations · Governance

Technical Feasibility Assessment

Evaluated the technical feasibility and implementation challenges of different regulatory approaches, considering current AI technology capabilities.

Technical Analysis · Implementation · Risk Assessment

Stakeholder Impact Analysis

Analyzed the differential impacts of various regulatory approaches on different stakeholder groups, from tech companies to civil society organizations.

Stakeholder Analysis · Impact Assessment · Policy Evaluation

Skills Demonstrated

Research & Analysis

Policy Analysis · Comparative Law · Legal Research · Data Analysis

Technical Skills

Risk Assessment · Technical Writing · Systems Thinking · AI/ML Understanding

Communication

Stakeholder Analysis · Policy Communication · Cross-Cultural Analysis · International Relations