Introduction

Artificial intelligence regulation is evolving rapidly, with the EU AI Act establishing the world's first comprehensive framework. This article examines the EU AI Act alongside emerging regulatory approaches in other jurisdictions.

EU AI Act: Overview

Scope

  • First comprehensive AI regulation globally; applies to providers, deployers, importers, and distributors of AI systems in the EU
  • Extraterritorial effect: applies to AI systems placed on the EU market or whose output is used in the EU
  • Risk-based approach: unacceptable risk, high-risk, limited risk, minimal risk

Risk Categories

Unacceptable Risk (Prohibited)

  • Subliminal manipulation (causing harm)
  • Exploitation of vulnerabilities (age, disability, socioeconomic)
  • Social scoring (by public or private actors)
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (limited exceptions)
  • Predictive policing based on profiling
  • Emotion recognition in workplace/education (limited exceptions)

High-Risk AI Systems

  • Biometric identification and categorization
  • Critical infrastructure management
  • Education and vocational training
  • Employment and worker management
  • Access to essential services (credit, healthcare, housing)
  • Law enforcement, immigration, asylum
  • Administration of justice

High-Risk Requirements

  • Risk management system
  • Data governance (training, validation, testing data)
  • Technical documentation (conformity assessment)
  • Record-keeping (logging)
  • Transparency and information to users
  • Human oversight
  • Accuracy, robustness, cybersecurity
  • Conformity assessment (self-assessment or notified body)
  • Registration in EU database
  • Post-market monitoring
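The record-keeping requirement above can be made concrete with a small sketch. The schema below (system ID, UTC timestamp, event type, details) is hypothetical and far simpler than what real conformity documentation will demand; it only illustrates automatic, structured logging of system events:

```python
from datetime import datetime, timezone

def log_event(records: list, system_id: str, event: str, details: dict) -> dict:
    """Append a timestamped, structured log entry (hypothetical schema)."""
    entry = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "details": details,
    }
    records.append(entry)
    return entry

# Example: record one decision made by a (hypothetical) credit-scoring system.
audit_log = []
log_event(audit_log, "credit-scoring-v2", "inference",
          {"decision": "declined", "reviewed_by": "analyst-7"})
```

Immutable, append-only storage for such logs is a common design choice, since regulators may ask for the history behind any individual decision.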

General Purpose AI (GPAI)

  • All GPAI models: transparency requirements (technical documentation, training data summary, copyright-compliance policy)
  • Systemic-risk models (high-impact GPAI, presumed above 10^25 FLOPs of training compute): additional requirements (model evaluation, adversarial testing, incident reporting)
  • Codes of practice for GPAI compliance

Limited Risk (Transparency Requirements)

  • AI systems interacting with humans (disclosure)
  • Emotion recognition systems (disclosure)
  • Deep fakes (disclosure)
  • Biometric categorization (disclosure)

Implementation Timeline

  • August 2024: Entry into force (published in the Official Journal in July 2024)
  • February 2025: Prohibitions apply (unacceptable risk)
  • August 2025: GPAI rules apply
  • August 2026: High-risk system requirements apply (for most systems)
  • August 2027: High-risk system requirements for certain regulated products

United Kingdom: Pro-Innovation Approach

Framework

  • White Paper (2023): Pro-innovation approach; sector-specific regulation
  • Principles: Safety, security, transparency, fairness, accountability, contestability, redress
  • Existing Regulators: FCA (financial services), MHRA (healthcare), ICO (data protection), Ofcom (communications) developing sector-specific approaches
  • Central coordination functions (proposed): Cross-regulator coordination and monitoring rather than a new AI regulator

United States: Sectoral Approach

Executive Order on AI (2023)

  • Executive Order 14110 (October 2023): comprehensive order addressing AI safety, security, and trust
  • Requirements for developers of high-impact AI models (reporting, red-teaming)
  • NIST AI Risk Management Framework (voluntary guidance)
  • Agencies: FTC (consumer protection), CFPB (financial services), EEOC (employment), FDA (healthcare), etc.

Proposed Legislation

  • AI Foundation Model Transparency Act (disclosure of training data)
  • Algorithmic Accountability Act (impact assessments)
  • State-level initiatives (California, Colorado, others)

China: Generative AI Regulation

Key Regulations

  • Interim Measures for Generative AI (2023): Content moderation, security assessments, training data compliance, user information protection, algorithmic registration
  • Deep Synthesis Provisions (2023): Deep fake regulation, disclosure requirements, security assessments
  • Algorithmic Recommendation Provisions (2022): Algorithmic transparency, user rights

Canada: Artificial Intelligence and Data Act (AIDA)

Proposed Framework

  • Comprehensive AI regulation proposed as part of Bill C-27 (introduced 2022)
  • High-impact AI systems: risk-based requirements (similar to EU AI Act)
  • Prohibition on reckless deployment causing harm
  • Enforcement by Innovation, Science and Economic Development Canada (ISED), through a proposed AI and Data Commissioner

Singapore: Model AI Governance Framework

Approach

  • Voluntary governance framework (2019, 2020, 2024 updates)
  • Generative AI governance framework (2024)
  • Focus on accountability, transparency, safety, fairness, explainability
  • Sector-specific guidance from MAS (finance), PDPC (data protection)

Key Compliance Considerations

Risk Assessment

  • Classify AI systems by risk level (EU AI Act classification)
  • Conduct impact assessments (algorithmic impact assessment, fundamental rights impact assessment)
  • Document risk management processes
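As an illustration of the classification step, here is a minimal sketch assuming a hypothetical keyword-to-tier mapping. Real classification requires legal analysis against the Act's prohibited-practice list and Annex III, not string matching:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping only; the real tier depends on context of use,
# not just the use-case label.
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "recruitment screening", "exam proctoring"}
LIMITED = {"customer chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to a provisional EU AI Act risk tier."""
    u = use_case.strip().lower()
    if u in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if u in HIGH_RISK:
        return RiskTier.HIGH
    if u in LIMITED:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A provisional tier like this is best treated as a triage output that routes the system to legal review, not as a final determination.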

Data Governance

  • Training data: provenance, quality, bias assessment
  • Copyright compliance: EU AI Act requires GPAI providers to publish a sufficiently detailed summary of the content used for training
  • Personal data: GDPR compliance for training and inference
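The three data governance items above can be tracked per dataset. The `DatasetRecord` fields and gap checks below are hypothetical, a sketch of how provenance, copyright, personal-data, and bias items might be recorded in an internal register:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str
    license: str                 # "unknown" if copyright status unresolved
    contains_personal_data: bool
    bias_assessment_done: bool = False

def governance_gaps(record: DatasetRecord) -> list:
    """Return the open compliance items for one training dataset."""
    gaps = []
    if record.license == "unknown":
        gaps.append("copyright status unresolved")
    if record.contains_personal_data:
        gaps.append("document GDPR lawful basis")
    if not record.bias_assessment_done:
        gaps.append("bias assessment pending")
    return gaps

# Hypothetical datasets: one scraped and unreviewed, one fully cleared.
scraped = DatasetRecord("web-crawl-2024", "public web", "unknown", True)
cleared = DatasetRecord("synthetic-loans", "internal generator", "CC0", False, True)
```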

Transparency and Documentation

  • Technical documentation (model cards, datasheets)
  • Transparency to users (AI disclosure)
  • Human oversight mechanisms for high-risk systems
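A model card is one common form this documentation takes. The fields below are a hypothetical minimal subset; published model card templates include many more sections (limitations, ethical considerations, caveats):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: str
    training_data_summary: str
    evaluation_metrics: dict
    human_oversight: str

# Hypothetical card for an illustrative high-risk system.
card = ModelCard(
    model_name="loan-default-predictor",
    version="2.1.0",
    intended_use="Credit risk scoring with human review of every decision",
    out_of_scope_uses="Fully automated decisions without review",
    training_data_summary="Anonymised loan records, 2015-2023",
    evaluation_metrics={"auc": 0.83, "demographic_parity_gap": 0.04},
    human_oversight="Analyst must approve any declined application",
)

# Serialise for publication alongside the system's technical documentation.
card_json = json.dumps(asdict(card), indent=2)
```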

Monitoring and Incident Response

  • Post-market monitoring for high-risk systems
  • Incident reporting (EU AI Act: serious incidents must be reported to regulators)
  • Continuous improvement based on performance metrics
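Post-market monitoring can start as simple drift checks against a validated baseline. The 5-point accuracy tolerance below is an arbitrary illustrative threshold, not a regulatory one; what counts as a reportable serious incident is defined by the regulation, not by this check:

```python
def check_drift(baseline_accuracy: float, window_accuracies: list,
                tolerance: float = 0.05) -> list:
    """Flag monitoring windows where accuracy drops below baseline - tolerance.

    Returns (window_index, accuracy) pairs for review as incident candidates.
    """
    alerts = []
    for i, acc in enumerate(window_accuracies):
        if acc < baseline_accuracy - tolerance:
            alerts.append((i, acc))
    return alerts

# Hypothetical weekly accuracy measurements against a 0.90 baseline.
alerts = check_drift(0.90, [0.91, 0.89, 0.82])  # only the third week trips
```

Flagged windows would feed a human review step that decides whether the degradation meets the reporting threshold.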

Practical Recommendations

  1. Conduct inventory of AI systems and classify by risk
  2. Develop AI governance framework (policies, roles, responsibilities)
  3. Implement AI risk management processes (NIST AI RMF, ISO/IEC 42001)
  4. Document AI systems (technical documentation, model cards)
  5. Monitor regulatory developments across operating jurisdictions
  6. Engage with regulators on emerging AI issues
  7. Consider AI conformity assessment and certification (EU AI Act)
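Recommendation 1, the inventory, can be bootstrapped with even a trivial script. The systems and tiers below are hypothetical; the point is that an inventory immediately shows which systems carry conformity-assessment obligations:

```python
from collections import Counter

# Hypothetical inventory: (system name, jurisdictions, assessed risk tier).
inventory = [
    ("support-chatbot", ["EU", "UK"], "limited"),
    ("cv-screening",    ["EU"],       "high"),
    ("spam-filter",     ["EU", "US"], "minimal"),
]

# Count systems per tier and list those needing conformity assessment.
by_tier = Counter(tier for _, _, tier in inventory)
needs_conformity = [name for name, _, tier in inventory if tier == "high"]
```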