What is likely to become the world’s first set of rules on artificial intelligence has been approved by the European Parliament, with 499 votes in favour, 28 against and 93 abstentions. Following the booming interest in, but also worries about, AI, big tech companies such as Microsoft, Google and OpenAI have called for rules on the responsible use of AI. The next step is for the Parliament and ministers in the European Council to agree on the final legislation; negotiations will start immediately.
After some experts warned that AI is being introduced too fast and could lead to the extinction of humanity, the EU Commission’s Margrethe Vestager said in a BBC interview that discrimination is a more pressing concern from advancing AI than human extinction.
The EU rules include the right to file complaints about AI systems and require providers to disclose when content has been generated by AI.
Generative AI that can produce texts looking as if they were written by humans would have to comply with transparency requirements, such as disclosing that the content was generated by AI. Providers would also need to design their models to prevent them from generating illegal content, and to publish summaries of the copyrighted data used for training.
The draft transparency and risk-management rules for AI systems are intended to “ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe”.
The draft approved by the Parliament calls for a uniform, technology-neutral definition of AI, so that the rules can apply to the AI systems of today and tomorrow.
The rules follow a risk-based approach, with obligations for providers and users depending on the level of risk the AI can generate.
AI systems posing an unacceptable level of risk to people’s safety would be prohibited, including systems that deploy manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
Bans on intrusive and discriminatory uses of AI include:
- “Real-time” and “post” remote biometric identification systems in publicly accessible spaces;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).