EU legislators agree on world’s first law on artificial intelligence
After days of delayed discussions, the EU Parliament and EU governments in the European Council have reached an agreement on what is described as the world’s first law on how to handle artificial intelligence. The law includes agreed safeguards on the future development of general-purpose artificial intelligence, which became a security focus in discussions following the turbulence around AI-development company OpenAI. It also limits the use of biometric identification systems by police and courts. Violations can result in fines ranging from 7.5 million euros, or 1.5% of global turnover, up to 35 million euros, or 7% of turnover, depending on the infringement and the size of the company. The law is not expected to be implemented before 2025.
The law includes bans on social scoring and AI used to manipulate or exploit user vulnerabilities. The Parliament in a statement described the agreement as a political deal “on a bill to ensure AI in Europe is safe, respects fundamental rights and democracy, while businesses can thrive and expand”.
“This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact.”
EU Commission President Ursula von der Leyen said: “the EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era.”
“Until the Act will be fully applicable, we will support businesses and developers to anticipate the new rules. Around 100 companies have already expressed their interest to join our AI Pact, by which they would commit on a voluntary basis to implement key obligations of the Act ahead of the legal deadline.”
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the law will prohibit:
- biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people’s free will;
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
Negotiators agreed on rules for the controversial use of remote biometric identification (RBI) systems in public spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
“Real-time” RBI would comply with strict conditions and its use would be limited in time and location, for the purposes of:
- targeted searches of victims (abduction, trafficking, sexual exploitation),
- prevention of a specific and present terrorist threat, or
- the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).
For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), obligations were agreed. The Parliament says that “MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors.”
“AI systems used to influence the outcome of elections and voter behaviour, are also classified as high-risk. Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”
To account for the wide range of tasks AI systems can accomplish and the quick expansion of its capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
High-impact GPAI models with systemic risk that meet certain criteria will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the Commission, ensure cybersecurity and report on their energy efficiency. Until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.
The agreement promotes so-called regulatory sandboxes and real-world testing, established by national authorities to develop and train innovative AI before placement on the market.
The agreed text will now have to be formally adopted by both Parliament and Council to become EU law.
Moonshot News is an independent European news website for all IT, Media and Advertising professionals, powered by women and with a focus on driving the narrative for diversity, inclusion and gender equality in the industry.