Summary of the EU AI Act
The EU AI Act establishes a comprehensive regulatory framework for artificial intelligence by classifying AI systems according to their risk level. Unacceptable-risk systems (such as social scoring and manipulative AI) are prohibited outright. High-risk AI systems face substantial regulation, while limited-risk systems (such as chatbots and deepfakes) must meet transparency requirements. Minimal-risk AI applications remain largely unregulated.
The Act places the majority of compliance obligations on providers (developers) of high-risk AI systems, whether they are established in the EU or in a third country, so long as the system's output is used in the EU. Deployers (professional users) of high-risk systems face fewer obligations. High-risk classifications include systems used in critical infrastructure, education, employment, essential services, law enforcement, migration control, and the administration of justice, with specific provisions for each category.
For General Purpose AI (GPAI) models, the Act creates tiered requirements. All GPAI providers must maintain technical documentation, supply instructions for use to downstream providers, put in place a policy to comply with the EU Copyright Directive, and publish summaries of the content used for training. Providers of free and open-source GPAI models face lighter obligations unless their models present systemic risks. GPAI models trained with cumulative compute above a specific threshold (10^25 FLOPs) are presumed to present systemic risks and must meet additional requirements, including adversarial testing and incident reporting.
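The compute-based presumption above can be expressed as a simple check. This is an illustrative sketch only: the function name and structure are hypothetical and not taken from the Act, which sets the threshold as cumulative training compute greater than 10^25 floating-point operations.

```python
# Hypothetical sketch of the Act's systemic-risk presumption for GPAI models.
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training-compute threshold in the Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model is presumed to present systemic risk
    based solely on cumulative training compute (other designation routes
    in the Act are ignored here)."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(5e25))  # True: above the threshold
print(presumed_systemic_risk(1e24))  # False: below the threshold
```

Note that exceeding the threshold only creates a presumption; the Commission can also designate models as systemic-risk on other grounds.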
Implementation and enforcement will be overseen by a new AI Office within the European Commission, which will monitor compliance particularly for GPAI providers. The office can conduct evaluations of models to assess compliance and investigate systemic risks. The Act allows for codes of practice as a means of demonstrating compliance until European harmonized standards are published.
The AI Act will be implemented in phases after entry into force: 6 months for the prohibitions, 12 months for GPAI obligations, 24 months for high-risk systems under Annex III, and 36 months for high-risk systems under Annex I. Codes of practice must be ready within 9 months of entry into force. These requirements apply to both EU and non-EU entities whose AI systems affect people in the EU.
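The phased deadlines are simple month offsets from the date of entry into force. The sketch below computes them with the Python standard library; the entry-into-force date of 1 August 2024 is supplied as an assumption for illustration, and the `add_months` helper is a naive date shift, not part of any official tooling.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months.
    Naive: assumes the target month contains the same day-of-month,
    which holds for the first of the month used here."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

# Assumed entry-into-force date, used only to illustrate the arithmetic.
entry_into_force = date(2024, 8, 1)

phases = {
    "prohibited systems": 6,
    "codes of practice ready": 9,
    "GPAI obligations": 12,
    "high-risk systems (Annex III)": 24,
    "high-risk systems (Annex I)": 36,
}

for phase, months in phases.items():
    print(f"{phase}: {add_months(entry_into_force, months)}")
```

Under the assumed start date, for example, the prohibitions would apply from 2025-02-01 and the Annex I high-risk rules from 2027-08-01.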