Experts agree that companies and government authorities will expand the role of AI in people’s daily lives in useful ways. But they are split over how much control humans will retain over essential decision-making as more AI comes into use. Some experts worry that these AI systems will diminish individuals’ ability to control their own choices, according to a canvassing of experts by the Pew Research Center. A test of the well-known AI chatbot ChatGPT suggests that it is somewhat less worried than the experts.
“There is general agreement that smart machines, bots and systems powered mostly by machine learning and artificial intelligence will quickly increase in speed and sophistication between now and 2035”, the report says.
“Some analysts have concerns about how business, government and social systems are becoming more automated. They fear humans are losing the ability to exercise judgment and make decisions independent of these systems.”
“Others optimistically assert that throughout history humans have generally benefited from technological advances. They say that when problems arise, new regulations, norms and literacies help ameliorate the technology’s shortcomings. And they believe these harnessing forces will take hold, even as automated digital systems become more deeply woven into daily life.”
Pew Research Center and Elon University’s Imagining the Internet Center asked 540 technology innovators, developers, business and policy leaders, researchers, academics and activists:
By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?
- 56% agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
- 44% agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.
The majority pointed to reasons such as:
- powerful interests like companies and governments have little incentive to honour human control;
- people are comfortable letting algorithms decide for them, and many already do;
- AI is highly complex and built for centralised control rather than personalised control.
Those who predict that AI will be designed so that humans can easily control tech-aided decisions pointed to:
- humans and tech always positively evolve together;
- businesses will protect human control because marketplaces demand it;
- the future will feature both more and less human control.
The researchers also put the question to ChatGPT, which OpenAI launched late last year. Microsoft has announced that ChatGPT will be used for search in Bing, saying it will make searching easier because the bot can produce human-like written answers rather than just links.
Google has launched a similar bot. Discussion has centred on how reliable the bots’ answers are.
The Pew researchers asked ChatGPT:
By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives? Why do you think humans will or will not be in control of important decision-making in the year 2035?
“Yes, it is highly likely that by 2035, smart machines, bots, and AI systems will be designed to allow humans to easily control important tech-aided decision-making in their lives. As AI technology continues to advance, it will be important for developers to prioritize user control and transparency in their AI systems. This could include features such as adjustable privacy settings, the ability to explain AI decisions and recommendations, and the option for human overrides.
“An example of this trend can already be seen in the development of voice assistants, such as Amazon’s Alexa or Google Home. These devices were designed to allow users to control their home environment with voice commands, but they also give users the ability to adjust their privacy settings and control what information is shared with the AI system. This type of user control is becoming increasingly important as AI is integrated into more aspects of our lives and decision-making.
“However, it is possible that some AI systems may not prioritize human control in the future. For example, in certain industries, AI systems may be designed to make autonomous decisions without human intervention for efficiency or safety reasons. In these cases, it will be important for developers to ensure that these AI systems are transparent in their decision-making and include oversight mechanisms to ensure that they are acting ethically. Overall, the extent to which humans will be in control of AI-powered decision-making will depend on the values and priorities of AI developers and the regulations put in place to govern AI use.”
Notably, while the majority of experts were worried, ChatGPT first answered in favour of continued human control, only later adding that there are also signs of a less positive development for humans.