A summary of the EU AI Act is available at https://artificialintelligenceact.eu/high-level-summary.

The AI Act classifies AI systems according to the level of risk they pose.
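The four tiers of this risk-based classification (prohibited, high-risk, limited/transparency risk, minimal risk) can be sketched as a simple enumeration. The labels below are illustrative paraphrases, not the Act's legal wording:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the AI Act's risk tiers
    (paraphrased labels, not the Act's legal text)."""
    UNACCEPTABLE = "prohibited AI practices (Art. 5)"
    HIGH = "high-risk AI systems (Chapter III)"
    LIMITED = "transparency obligations only (Art. 50)"
    MINIMAL = "no specific obligations under the Act"

# Obligations scale with the tier: prohibited practices are banned outright,
# while high-risk systems carry the bulk of provider obligations.
for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```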

EU AI Act and research

The majority of obligations fall on providers (developers) of high-risk AI systems.

‘provider’ (developer): develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

‘deployer’ (user): uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

Prohibited AI Systems (AI Act, Art. 5)

High-Risk AI Systems (AI Act – Chapter III)

The scope is complex; see Art. 6.

General-purpose AI (GPAI)

All providers of GPAI models must (see https://artificialintelligenceact.eu/chapter/5):

  1. Draw up technical documentation, including training and testing process and evaluation results.

  2. Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI systems, so that they understand the model’s capabilities and limitations and are able to comply with their own obligations.

  3. Establish a policy to respect the Copyright Directive.

  4. Publish a sufficiently detailed summary about the content used for training the GPAI model.

Exception for free and open-licence GPAI models: their providers only have to comply with the latter two obligations above, unless the model presents a systemic risk.

GPAI models with systemic risks

GPAI models present systemic risk when the cumulative amount of compute used for their training is greater than 10²⁵ floating-point operations (FLOPs).
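As a back-of-the-envelope check against that threshold, training compute is often approximated with the common ~6·N·D heuristic (roughly 6 FLOPs per parameter per training token). This is an illustrative approximation, not a methodology prescribed by the Act; the model size and token count below are hypothetical:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6*N*D heuristic
    (about 6 FLOPs per parameter per training token)."""
    return 6.0 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # 10^25 FLOPs, the Act's presumption threshold

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)  # 6.3e24, below the 10^25 threshold
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
```

Under this heuristic, models above roughly 10²⁵ FLOPs of training compute would be presumed to present systemic risk and trigger the additional obligations below.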

Providers of GPAI models with systemic risk must additionally perform model evaluations (including adversarial testing), assess and mitigate possible systemic risks, track and report serious incidents, and ensure adequate cybersecurity protection. See https://artificialintelligenceact.eu/chapter/5.

Specific obligations on the providers of general-purpose AI (GPAI) models (https://artificialintelligenceact.eu/article/53):

The General-Purpose AI Code of Practice

See https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai.

EU AI Act and GPAI Code of Practice: Transparency and Model Documentation Form
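The Model Documentation Form asks providers to record structured information about the model in one place. A minimal sketch of such a record is below; the section and field names are paraphrased for illustration and are not the official form’s wording:

```python
# Illustrative sketch only: section and field names are paraphrased,
# not the official Model Documentation Form.
model_documentation = {
    "general_information": {"provider": "...", "model_name": "...", "release_date": "..."},
    "model_properties": {"architecture": "...", "parameters": "...", "modalities": "..."},
    "distribution_and_licensing": {"channels": "...", "licence": "..."},
    "training_process": {"methods": "...", "data_summary": "...", "compute": "..."},
    "intended_and_prohibited_uses": {"intended": "...", "out_of_scope": "..."},
}

def missing_fields(doc: dict) -> list[str]:
    """Return dotted paths of fields still left as placeholders."""
    return [f"{section}.{key}"
            for section, fields in doc.items()
            for key, value in fields.items()
            if value == "..."]

print(f"{len(missing_fields(model_documentation))} fields still to fill in")
```

A completeness check like `missing_fields` is one simple way to track which parts of the documentation still need to be supplied before publication.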

The “Copyright” chapter has one Commitment on “Copyright policy” (https://ec.europa.eu/newsroom/dae/redirection/document/118115):

EU AI Act and GPAI Code of Practice: Safety and Security

The “Safety and Security” chapter has ten Commitments (https://ec.europa.eu/newsroom/dae/redirection/document/118119).

  1. Commitment 1 “Safety and Security Framework”

    • Measure 1.1 Creating the Framework

    • Measure 1.2 Implementing the Framework

    • Measure 1.3 Updating the Framework

    • Measure 1.4 Framework notifications

  2. Commitment 2 “Systemic risk identification”

    • Measure 2.1 Systemic risk identification process

    • Measure 2.2 Systemic risk scenarios

  3. Commitment 3 “Systemic risk analysis”

    • Measure 3.1 Model-independent information

    • Measure 3.2 Model evaluations

    • Measure 3.3 Systemic risk modelling

    • Measure 3.4 Systemic risk estimation

    • Measure 3.5 Post-market monitoring

  4. Commitment 4 “Systemic risk acceptance determination”

    • Measure 4.1 Systemic risk acceptance criteria and acceptance determination

    • Measure 4.2 Proceeding or not proceeding based on systemic risk acceptance determination

  5. Commitment 5 “Safety mitigations”

    • Measure 5.1 Appropriate safety mitigations

  6. Commitment 6 “Security mitigations”

    • Measure 6.1 Security Goal

    • Measure 6.2 Appropriate security mitigations

  7. Commitment 7 “Safety and Security Model Reports”

    • Measure 7.1 Model description and behaviour

    • Measure 7.2 Reasons for proceeding

    • Measure 7.3 Documentation of systemic risk identification, analysis, and mitigation

    • Measure 7.4 External reports

    • Measure 7.5 Material changes to the systemic risk landscape

    • Measure 7.6 Model Report updates

    • Measure 7.7 Model Report notifications

  8. Commitment 8 “Systemic risk responsibility allocation”

    • Measure 8.1 Definition of clear responsibilities

    • Measure 8.2 Allocation of appropriate resources

    • Measure 8.3 Promotion of a healthy risk culture

  9. Commitment 9 “Serious incident reporting”

    • Measure 9.1 Methods for serious incident identification

    • Measure 9.2 Relevant information for serious incident tracking, documentation, and reporting

    • Measure 9.3 Reporting timelines

    • Measure 9.4 Retention period

  10. Commitment 10 “Additional documentation and transparency”
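The flow through Commitments 2–4 (identify systemic risks, analyse and estimate them, then make an acceptance determination before proceeding) can be sketched as a small decision helper. The severity-times-likelihood metric and the acceptance threshold below are hypothetical simplifications, not criteria from the Code of Practice:

```python
from dataclasses import dataclass

@dataclass
class SystemicRisk:
    """A risk scenario identified under Commitment 2 (illustrative)."""
    name: str
    estimated_severity: float    # 0..1, hypothetical scale (Commitment 3)
    estimated_likelihood: float  # 0..1, hypothetical scale (Commitment 3)

def acceptance_determination(risk: SystemicRisk, threshold: float = 0.2) -> bool:
    """Commitment 4-style determination: proceed only if the estimated
    risk (severity x likelihood, a hypothetical metric) falls under an
    acceptance criterion. Real criteria are set by the provider's
    Safety and Security Framework."""
    return risk.estimated_severity * risk.estimated_likelihood < threshold

# A risk that is severe but judged very unlikely may be acceptable;
# the same severity at high likelihood would block proceeding (Measure 4.2).
risk = SystemicRisk("hypothetical loss-of-control scenario", 0.9, 0.05)
print(f"Proceed with deployment: {acceptance_determination(risk)}")
```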

Tools:

  • EU AI Act Compliance Checker 1

  • EU AI Act Compliance Checker 2