Article 20

Corrective actions and duty of information

1.   Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly.

2.   Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken.

Frequently Asked Questions

What must providers do if their high-risk AI system does not conform to the AI Act?

If a provider considers, or has reason to consider, that a high-risk AI system they have placed on the market or put into service does not meet the requirements of the AI Act, they must immediately take the necessary corrective actions: bring the system into conformity, withdraw it, disable it, or recall it, as appropriate. They must also inform the parties concerned, namely the distributors and, where applicable, the deployers, the authorised representative and the importers.

Whom must providers notify when a high-risk AI system presents a risk?

Where the provider becomes aware that the system presents a risk within the meaning of Article 79(1), they must immediately inform the market surveillance authorities competent for that system and, where applicable, the notified body that issued a certificate for it in accordance with Article 44. They must communicate, in particular, the nature of the non-compliance and any relevant corrective action taken.

What corrective actions are available?

Corrective actions include repairing or modifying the AI system so that it complies with the Regulation, temporarily or permanently disabling it, withdrawing it from distribution channels, or recalling it altogether. Providers must choose the action that appropriately addresses the particular non-compliance or risk identified.

Must providers collaborate with deployers when investigating risks?

Yes. Providers must investigate the causes of an identified risk in collaboration with the reporting deployer, where applicable. Working together helps ensure that the issues are thoroughly understood and effectively managed, minimising the potential negative impact of the system.
