Image: Gerd Altmann, Pixabay

Recent EC interventions on AI safety and liability

Following the clarifying interventions of other national privacy supervisory authorities, the European Commission has also published a White Paper on how to address safety and liability issues in the field of Artificial Intelligence.

In order to create a “Europe fit for the digital age”, the European Commission has set out its guidelines through two rather decisive publications: the White Paper “On Artificial Intelligence – A European approach to excellence and trust”, and the “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics”, which addresses the interaction between safety and liability in the still-experimental field of AI.

Until now, under the EU Product Liability Directive, an injured party can be compensated if they prove the causal link between the damage and the defect in the product, unless the defect manifested itself only after the product in question was put into circulation.

The Report mentioned above sets out seven key points on safety and product liability:

  • SOFTWARE: software, which used to be merely “bundled” with a product before it was placed on the market, now represents a fundamental part of AI development, and should therefore be designed according to safety requirements tailored to the software itself.
  • “HIGH RISK” AI: a differentiated treatment of AI systems is recommended, recognising that some of them have a greater impact on the safety and rights of EU citizens (such as those that may affect equal treatment, or intrusive technologies such as biometric identification); precisely for this reason, a stricter “conformity assessment” is recommended, including
    - Repeated checks
    - Verification of training data
    - Systems to “repair” the damage detected.
  • SHARED RISK: in AI systems, liability currently tends to be shared between the producer and the importer. The EC recommends a broader, multi-stakeholder allocation, from the developer to the service provider, the distributor and even the user, so that each obligation can be directed to the actors best placed to address the potential risk.
  • MONITORED RISK: a risk assessment is recommended not only when the product is “put into circulation”, but throughout its entire life cycle, given the continuous evolution of AI systems.
  • TRAINING DATA RISK: the EC notes that current risk assessments do not account for errors caused by defective training data (such as a computer vision system that has not been trained to detect objects in poor-visibility conditions), and therefore proposes an assessment both beforehand, on the data used for training, and afterwards, during the deployment of the AI system.
  • DEVELOPERS’ COLLABORATION: here the idea of requiring developers to disclose the design parameters of their algorithms and the metadata of their datasets in the event of incidents has been suggested. This solution aims to curb the so-called “black box effect”, which makes it extremely difficult to understand on what basis an AI system has reached a decision, and to increase transparency through the active collaboration of developers.
  • REVERSAL OF THE BURDEN OF PROOF, STRICT LIABILITY AND INSURANCE: these are new “avenues” that the European Commission is exploring in order to increase the safety of AI systems and the prospects of compensation, especially for those systems presenting a “specific risk”.

You can consult the full document at this link:
https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf

All Rights Reserved

Raffaella Aghemo, Lawyer

Lawyer and consultant in innovation technology, IP, copyright, communication & marketing; loves movies and books, and writes legal features and book reviews.