
Introducing transparency rules for artificial intelligence

What could become the world’s first rules on artificial intelligence have been adopted by the European Parliament’s committees for the internal market and civil liberties. They include the right to file complaints about AI systems and a requirement that providers disclose when content has been generated by AI. The draft comes at a time when the race to introduce AI shows no sign of slowing, a day after Google owner Alphabet announced AI support for a large number of its services.

Before negotiations with governments in the European Council on the final form of the law can start, the now approved draft needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

Generative foundation models, like GPT, would have to comply with transparency requirements, such as disclosing that content was generated by AI. Providers would need to design their models to prevent the generation of illegal content and to publish summaries of the copyrighted data used for training.

The draft transparency and risk-management rules for AI systems are intended to “ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe”.


The committees’ draft negotiating mandate was adopted with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission’s original proposal, MEPs aim “to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly.”

They also want a uniform, technology-neutral definition of AI, so that it can apply to the AI systems of today and tomorrow.

The rules follow a risk-based approach, with obligations for providers and users depending on the level of risk the AI can generate.

AI systems posing an unacceptable level of risk to people’s safety would be prohibited, including systems that deploy manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).

MEPs amended the original list to include bans on intrusive and discriminatory uses of AI systems such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

The classification of high-risk areas was amended to include harm to people’s health, safety, fundamental rights or the environment. MEPs also added to the high-risk list AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms with more than 45 million users under the Digital Services Act.

The draft includes obligations for providers to guarantee robust protection of fundamental rights, health, safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law would promote regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.

Citizens should have the right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems.


“It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe”, says the committees’ co-rapporteur Brando Benifei (S&D, Italy).

Co-rapporteur Dragos Tudorache (Renew, Romania) said: “Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe.”


Moonshot News is an independent European news website for all IT, Media and Advertising professionals, powered by women and with a focus on driving the narrative for diversity, inclusion and gender equality in the industry.
