Artificial intelligence (AI) plays a major role in digital transformation. This autumn the European Parliament will vote on new rules on the use of AI, saying the Artificial Intelligence Act should unlock AI’s potential in fields such as health, the environment and climate change. The European Commission wants to boost private and public investment in AI technologies to €20 billion per year. An estimate published by the Parliament says that AI and robotics could create 60 million new jobs worldwide by 2025.
The Parliament said in a statement that “the EU has fallen behind in the global race for tech leadership” and that “there is a risk that standards will be developed elsewhere, often by non-democratic actors”. MEPs believe the EU needs to act as a global standard-setter in AI.
The MEPs identified policy options that could unlock AI’s potential in health, the environment and climate change, to help combat pandemics and global hunger, and enhance people’s quality of life through personalised medicine. MEPs say that, combined with the necessary support infrastructure, education and training, AI can increase capital and labour productivity, innovation, sustainable growth and job creation.
However, several studies show that citizens are hesitant, and sometimes fearful, about the potential of artificial intelligence.
“The EU should not always regulate AI as a technology and the level of regulatory intervention should be proportionate to the type of risk associated with the particular use of an AI system”, MEPs said.
Noting the EU’s push for a global agreement on common standards for the responsible use of AI, MEPs encouraged like-minded democracies to work together to jointly shape this international debate. They also stressed that AI technologies raise important ethical and legal questions, and voiced concerns about military research and technological developments into lethal autonomous weapon systems.
Parliament pointed out that certain AI technologies enable the automation of information processing at an unprecedented scale, paving the way for potential mass surveillance and other unlawful interference with fundamental rights. MEPs warned that authoritarian regimes can use AI systems to control their citizens, conduct mass surveillance, rank citizens or restrict their freedom of movement, while dominant tech platforms use AI to obtain more personal information. For MEPs, this profiling poses risks to democratic systems.
The EU should therefore “prioritise international cooperation with like-minded partners in order to safeguard fundamental rights and at the same time cooperate on minimising new technological threats.”
A recent study by the US-based Pew Research Center showed that US women are more sceptical than men about some uses of artificial intelligence (AI). This is the case for driverless cars and for the use of AI to find false information on social media.
The survey said 34% of women are unsure about whether social media algorithms to find false information are a good or bad idea, compared with 26% of men. When it comes to the use of face recognition by police, 31% of women are not certain whether it is a good or bad idea, compared with 22% of men.
Women are more likely to support the inclusion of a wider variety of groups in AI design, the Pew survey says. 67% of women say it’s extremely or very important for social media companies to include people of different genders when designing social media algorithms to find false information, compared with 58% of men. Women are also more likely to say it is important that different racial and ethnic groups are included in the same AI design process (71% vs. 63%).
Additionally, women are more doubtful than men that it is possible to design AI computer programs that can consistently make fair decisions in complex situations. Only around two-in-ten women (22%) think it is possible to design AI programs that can consistently make fair decisions, while a larger share of men (38%) say the same. A plurality of women (46%) say they are not sure whether this is possible, compared with 35% of men.
“Overall, women in the U.S. are less likely than men to say that technology has had a mostly positive effect on society (42% vs. 54%) and more likely to say technology has had equally positive and negative impacts (45% vs. 37%). In addition, women are less likely than men to say they feel more excited than concerned about the increased use of AI computer programs in daily life (13% vs. 22%).”
“Gender remains a factor in views about AI and technology’s impact when accounting for other variables, such as respondents’ political partisanship, education and race and ethnicity.”
The analysis says women are consistently more likely than men to express concern about computer programs executing tasks. 43% of women say they would be very or somewhat concerned if AI programs could diagnose medical problems, while 27% of men say the same.
Pew also studied how citizens in general view the use of AI. The survey shows Americans see promise in the way artificial intelligence and human enhancement technologies could improve daily life and human abilities.
“Yet public views are also defined by the context of how these technologies would be used, what constraints would be in place and who would stand to benefit – or lose – if these advances become widespread.”
Ambivalence is a theme in the survey data: 45% say they are equally excited and concerned about the increased use of AI programs in daily life, compared with 37% who say they are more concerned than excited and 18% who say they are more excited than concerned.
It shows that more US adults oppose than favour the idea of social media sites using facial recognition to automatically identify people in photos (57% vs. 19%) and more oppose than favour the idea that companies might use facial recognition to automatically track the attendance of their employees (48% vs. 30%).
Another concern is the potential impact of these emerging technologies on social equity.
“People are far more likely to say the widespread use of several of these technologies would increase rather than decrease the gap between higher- and lower-income Americans. For instance, 57% say the widespread use of brain chips for enhanced cognitive function would increase the gap between higher- and lower-income Americans; just 10% say it would decrease the gap. There are similar patterns in views about the widespread use of driverless cars and gene editing for babies to greatly reduce the risk of serious disease during their lifetime.”
“About six-in-ten Americans think the use of computer chip implants in the brain would be more acceptable if people could turn on and off the effects, and 53% would find the brain implants more acceptable if the computer chips could be put in place without surgery.”
About half or more also see mitigating steps that would make the use of robotic exoskeletons, facial recognition technology by police and gene editing in babies to greatly reduce the risk of serious disease during their lifetime more acceptable.
Addressing the issue of a global standard on ethics for AI, UNESCO’s member states adopted a Recommendation described as the first of its kind.
RULE OF LAW
“Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled”, UNESCO says.
“AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected.”
“We see increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance, and increased use of unreliable AI technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these issues.”
The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.
The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should be able to access, and even erase, records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data, and it strengthens the ability of regulatory bodies around the world to enforce this.
The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technology are highly invasive, infringe on human rights and fundamental freedoms, and are deployed on a broad scale. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not themselves be given legal personality.
The Recommendation also sets the ground for tools that will assist in its implementation. The Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems assess the impact of those systems on individuals, society and the environment. The Readiness Assessment Methodology helps Member States assess how ready they are in terms of legal and technical infrastructure.
This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.