Article 8

Compliance with the requirements

1.   High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.

2.   Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I.

Frequently Asked Questions

What does compliance with the requirements mean for high-risk AI systems?

It means that AI systems classified as high-risk must meet the requirements laid down in this Section of the AI Act, taking into account their intended purpose and the generally acknowledged state of the art in AI and AI-related technologies. Providers must also take the risk management system referred to in Article 9 into account when ensuring compliance, so that the systems are safe, reliable and meet all legal obligations.

Who is responsible for ensuring compliance?

Responsibility rests with the providers of products incorporating high-risk AI systems. They must meet the requirements set out in the AI Act and also comply with the other applicable Union harmonisation legislation, integrating the necessary testing, reporting and documentation in a systematic way to simplify the process.

Can providers integrate compliance processes into existing procedures?

Yes. Providers may integrate the testing and reporting processes, information and documentation required for AI compliance into procedures that already exist under other Union harmonisation legislation, thereby avoiding duplication, limiting additional burdens and keeping compliance practices consistent.

What role does the risk management system play?

The risk management system referred to in Article 9 helps providers identify, assess and control the risks posed by high-risk AI systems. It must be taken into account when ensuring compliance with the requirements of this Section and assists in demonstrating that the AI system is effectively managed, safe and meets the applicable legislative standards in light of its intended purpose and technical capabilities.
