The European Commission has proposed rules for the responsible use of artificial intelligence, saying the aim is “to turn Europe into the global hub for trustworthy Artificial Intelligence (AI)”. Concerning the controversial practice of remote biometric identification, the Commission says such systems are considered high risk and subject to strict requirements: “Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined”.
The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU, the Commission said.
AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.
HIGH-RISK AREAS
AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
- Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
- Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
High-risk AI systems will be subject to strict obligations before they can be put on the market.
In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence).
Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the data bases searched.
Limited risk, i.e. AI systems with specific transparency obligations: when using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Minimal risk: the legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk to citizens’ rights or safety.
The Commission proposes the creation of a European Artificial Intelligence Board to supervise implementation and drive the development of standards for AI.
The next step is for the European Parliament to discuss and approve the proposals before the rules are implemented in the national legislation of the Member States.