The EU AI Act (Regulation (EU) 2024/1689) regulates artificial intelligence systems within the EU. It creates obligations for AI providers (developers and suppliers) and deployers (organizations using AI systems) based on the risk level of the AI system. High-risk AI systems face substantial compliance requirements, including technical documentation, conformity assessments, and human oversight procedures. Agreements between providers and deployers must clearly allocate these compliance responsibilities and specify how each party will meet its obligations under the AI Act.

Organizations using AI need to understand whether their systems fall into the high-risk, limited-risk, or minimal-risk category, what compliance requirements apply, and how responsibilities are allocated in provider-deployer agreements. Non-compliance can result in substantial fines: up to 7% of global annual turnover (or EUR 35 million, whichever is higher) for the most serious violations. Understanding the AI Act's requirements and ensuring agreements properly allocate responsibilities is essential for legal and operational compliance.
What is the EU AI Act?
The EU AI Act is legislation regulating AI systems based on risk level. Providers must develop high-risk AI systems with technical documentation, conformity assessments, and transparency measures. Deployers must implement human oversight, maintain performance monitoring, and report significant incidents. Agreements between providers and deployers must clearly assign AI Act compliance responsibilities, including documentation, testing, monitoring, and incident reporting.
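For teams tracking who owns which obligation, the allocation described above can be captured as a simple responsibility matrix. The sketch below is purely illustrative: the obligation names paraphrase AI Act themes, and the party assignments are examples to adapt to your agreement, not legal defaults.

```python
# Illustrative sketch only: an example responsibility matrix for a
# provider-deployer agreement covering a high-risk AI system.
# Obligation names and assignments are hypothetical examples,
# not a statement of what the AI Act assigns to each party.

RESPONSIBILITY_MATRIX = {
    "technical_documentation": {"owner": "provider", "shared_with": "deployer"},
    "conformity_assessment":   {"owner": "provider", "shared_with": None},
    "human_oversight":         {"owner": "deployer", "shared_with": None},
    "performance_monitoring":  {"owner": "deployer", "shared_with": "provider"},
    "incident_reporting":      {"owner": "deployer", "shared_with": "provider"},
}

def obligations_for(party: str) -> list[str]:
    """Return the obligations a given party owns in this example matrix."""
    return [
        name
        for name, entry in RESPONSIBILITY_MATRIX.items()
        if entry["owner"] == party
    ]

print(obligations_for("provider"))
print(obligations_for("deployer"))
```

A matrix like this is useful as an annex to the agreement itself: each row maps to a contract clause, so gaps (obligations with no owner) surface before signing rather than during an audit.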
Red flags to watch for
- Risk classification determines compliance obligations; agreements must identify the system's risk category and the requirements that follow from it.
- The AI Act requires providers to maintain technical documentation and make it available to deployers and authorities; its absence is a compliance gap.
- Providers bear primary responsibility for certain AI Act obligations; clauses that shift these wholesale onto the deployer are a red flag.
- High-risk systems must have transparency measures and human oversight; agreements should specify how each is implemented.
- The AI Act requires reporting of serious incidents; agreements must specify incident notification and escalation procedures.
- AI systems require ongoing compliance; agreements should allow for updates to address regulatory changes.
Your legal rights
The EU AI Act (Regulation (EU) 2024/1689) is a directly applicable EU regulation. Providers of high-risk AI systems must comply with documentation, conformity assessment, and transparency requirements. Deployers must implement human oversight, performance monitoring, and incident reporting. Non-compliance can result in fines of up to EUR 35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, and up to EUR 15 million or 3% for breaches of most other obligations. National authorities enforce the AI Act; disputes between providers and deployers are governed by contract law.
Questions to ask before you sign
1. What is the risk classification of this AI system under the EU AI Act (high-risk, limited-risk, minimal-risk)?
2. What technical documentation will you provide, and does it comply with AI Act requirements?
3. How will human oversight and transparency be implemented for this AI system?
4. What incident reporting procedures must I follow, and what support will you provide in maintaining compliance?
5. What happens if the AI system's risk classification changes or new AI Act requirements emerge?
6. How will we address compliance responsibilities if the AI system is updated or modified?
Disclaimer: This guide is for educational purposes only and does not constitute legal advice. Contract law varies by jurisdiction and individual circumstances. Always consult a qualified legal professional before making decisions based on this information.