Article 27

Fundamental rights impact assessment for high-risk AI systems

1.   Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of:

(a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;

(c) the categories of natural persons and groups likely to be affected by its use in the specific context;

(d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;

(e) a description of the implementation of human oversight measures, according to the instructions for use;

(f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.

2.   The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.

3.   Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.

4.   If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.

5.   The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner.
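
Paragraph 5 anticipates an automated questionnaire tool for the assessment. As a purely illustrative sketch of what a machine-readable version of the paragraph 1 elements might look like, the six points (a) to (f) could be captured in a record along the lines below, in Python. No official template exists yet, and every name here is hypothetical, not the AI Office’s format:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class FRIARecord:
        """Hypothetical record mirroring Article 27(1), points (a) to (f).

        All field names are illustrative; the binding structure will be the
        template developed by the AI Office under Article 27(5).
        """
        deployer_processes: str          # (a) processes using the system, per its intended purpose
        period_and_frequency: str        # (b) intended period and frequency of use
        affected_categories: list[str]   # (c) persons and groups likely to be affected
        specific_risks: list[str]        # (d) risks of harm to those categories (cf. Article 13 information)
        human_oversight_measures: str    # (e) oversight measures per the instructions for use
        mitigation_and_complaints: str   # (f) measures if risks materialise, incl. complaint mechanisms

        def missing_elements(self) -> list[str]:
            # Flag any element left empty before treating the assessment as complete.
            return [name for name, value in asdict(self).items() if not value]

        def to_notification_json(self) -> str:
            # Serialise for the paragraph 3 notification to the market surveillance
            # authority; the actual submission format is not yet defined.
            return json.dumps(asdict(self), indent=2)

A compliance workflow could call missing_elements() before first use and to_notification_json() when notifying under paragraph 3; both are sketches of a possible workflow, not anything prescribed by the Regulation.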

Frequently Asked Questions

What is a fundamental rights impact assessment?
It is an evaluation that bodies governed by public law, and private entities providing public services, must carry out before using certain high-risk AI systems. It identifies the harmful effects such a system may have on people’s fundamental rights so they can be minimised, supporting responsible use and effective human oversight during deployment.

When must the assessment be carried out?
Before the first use of the high-risk AI system. The obligation applies to public-law bodies, private entities providing public services, and deployers of the specific systems listed in points 5(b) and (c) of Annex III, except for systems intended for use in the area listed in point 2 of Annex III. This ensures that risks to fundamental rights are considered and managed from the very beginning of deployment.

What must the assessment describe?
The deployer’s processes in which the AI system will be used and its intended purpose; how long and how often it will be used; who is likely to be affected; the specific risks of harm to those people; the human oversight measures to be implemented; and the measures to be taken if harms arise, including internal governance arrangements and complaint mechanisms.

Can an existing assessment be reused?
Yes. In similar cases, a deployer may rely on a previously conducted fundamental rights impact assessment or on one carried out by the provider. If any element of the assessment changes or is no longer up to date during use, the deployer must update it so the assessment remains current.
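
To make the reuse-and-update rule of paragraph 2 concrete, a minimal sketch (reusing the hypothetical FRIARecord above; whether two cases are "similar" is a legal judgement that no field comparison can replace) might flag which elements of a prior assessment have changed:

    from dataclasses import asdict

    def elements_to_update(prior: FRIARecord, current: FRIARecord) -> list[str]:
        # List the paragraph 1 elements that differ between a prior assessment
        # and the current facts of use, i.e. those the deployer would need to
        # update under Article 27(2). Illustrative only.
        before, after = asdict(prior), asdict(current)
        return [name for name in before if before[name] != after[name]]

An empty result would suggest the prior assessment can still be relied on; any listed element signals that the information must be updated before continued use.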
