How to balance the benefits and risks of using artificial intelligence will be on the European Parliament’s agenda after the holidays. On the table is the Commission’s proposal on how to regulate the use of AI.
“The uptake of AI systems has a strong potential to bring societal benefits, economic growth and enhance EU innovation and global competitiveness. At the same time, it is commonly acknowledged that the specific characteristics of certain AI systems raise some concerns, especially with regard to safety, security and fundamental rights protection,” the Parliament says in a statement about its 2023 agenda.
The European Commission has said it wants to boost private and public investment in AI technologies to €20 billion per year. An estimate published earlier by the Parliament suggests that AI and robotics could create 60 million new jobs worldwide by 2025.
The Parliament has previously stated that “the EU has fallen behind in the global race for tech leadership”. “There is a risk that standards will be developed elsewhere, often by non-democratic actors, while MEPs believe the EU needs to act as a global standard-setter in AI”, the Parliament said.
The MEPs have identified policy options that could unlock AI’s potential in health, the environment and climate change, help combat pandemics and global hunger, and enhance people’s quality of life through personalised medicine.
MEPs said that, combined with the necessary support infrastructure, education and training, AI can increase capital and labour productivity, innovation, sustainable growth and job creation.
The European Commission’s proposal for a new Artificial Intelligence Act (AI Act), presented in April 2021, includes a technology-neutral definition of AI systems. The Commission proposes a risk-based approach with four levels of rules:
- Unacceptable risk AI. Harmful uses of AI that violate EU values (such as social scoring by governments) will be banned because of the unacceptable risk they create;
- High-risk AI. A number of AI systems that create an adverse impact on people’s safety or their fundamental rights are considered high-risk. To ensure trust and a consistently high level of protection of safety and fundamental rights, a range of mandatory requirements would apply to all high-risk systems;
- Limited risk AI. Some AI systems will be subject to a limited set of obligations (e.g. transparency);
- Minimal risk AI. All other AI systems can be developed and used in the EU without legal obligations beyond existing legislation.
The proposal is now being discussed by the co-legislators, the European Parliament and the Council. Shortly before Christmas, the Council adopted its ‘general approach’ on the AI Act, which would, for instance, extend to private actors the prohibition on using AI for social scoring.
Global adoption of artificial intelligence has more than doubled since 2017, though the proportion of organisations using AI has plateaued between 50 and 60 per cent for the past few years. The companies seeing the highest financial returns from AI continue to pull ahead of their competitors, according to consultancy McKinsey’s Global Survey on AI.
Addressing the need for a global standard on ethics for AI, UNESCO’s member states have adopted a Recommendation that the organisation describes as the first of its kind.
“Emerging technologies such as AI have proven their immense capacity to deliver for good. However, their negative impacts, which are exacerbating an already divided and unequal world, should be controlled”, UNESCO says.
“AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redress mechanisms are at hand for those affected.”
“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.”
The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.
The Recommendation calls for action beyond what tech firms and governments are already doing, to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should be able to access, and even erase, records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it strengthens the ability of regulatory bodies around the world to enforce this.