Article 76

Supervision of testing in real world conditions by market surveillance authorities

1.   Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in accordance with this Regulation.

2.   Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 58, the market surveillance authorities shall verify the compliance with Article 60 as part of their supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article 60(4), points (f) and (g).

3.   Where a market surveillance authority has been informed by the prospective provider, the provider or any third party of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it may take either of the following decisions on its territory, as appropriate:

(a) to suspend or terminate the testing in real world conditions;

(b) to require the provider or prospective provider and the deployer or prospective deployer to modify any aspect of the testing in real world conditions.

4.   Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor and how the provider or prospective provider can challenge the decision or objection.

5.   Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system has been tested in accordance with the testing plan.

Frequently Asked Questions

What role do market surveillance authorities play in supervising testing in real world conditions?

Market surveillance authorities are responsible for supervising real-world testing of AI systems and ensuring it complies with the AI Act. They can monitor tests, verify compliance, suspend or terminate testing, and require changes where the test conditions or safety rules are not being followed.

What can authorities do if a test does not comply with the rules?

If an AI system being tested in real world conditions does not meet the applicable conditions, market surveillance authorities can suspend or terminate the testing, or require the provider or prospective provider and the deployer or prospective deployer to modify any aspect of how the testing is carried out, so that the conditions laid down in the AI Act are fulfilled.

Can exceptions be made for AI systems tested within an AI regulatory sandbox?

Yes. For AI systems supervised within an AI regulatory sandbox, market surveillance authorities may allow the testing in real world conditions to be conducted in derogation from the conditions set out in Article 60(4), points (f) and (g), provided compliance is otherwise maintained.

Must authorities justify a decision to stop or change testing?

Yes. A decision to suspend, terminate, or modify testing, or an objection within the meaning of Article 60(4), point (b), must state the grounds for it and explain how the provider or prospective provider can challenge the decision or objection.
