Don’t pause the development of artificial intelligence to prioritize responsible AI; instead, it’s time to get serious about the AI ethics standards and guardrails all of us must continue adopting and refining. Government regulation should not be broad but smart: precision regulation that applies the strongest rules to AI use cases with the highest risk of societal harm. That is the message from Christina Montgomery, VP and chief privacy officer, and Francesca Rossi, AI ethics global leader, both at IBM, in a blog post disagreeing with leaders who have argued for a six-month pause in AI development.
“The recent focus on AI in our society is a reminder of the old line that with any great power comes great responsibility. Instead of a blanket pause on the development of AI systems, let’s continue to break down barriers to collaboration and work together on advancing responsible AI—from an idea born in a meeting room all the way to its training, development, and deployment in the real world,” they argue in a blog post for the World Economic Forum.
“The stakes are simply too high, and our society deserves nothing less.”
58% of the public are familiar with ChatGPT, but 42% have heard nothing at all about it, and relatively few have tried it themselves, according to a Pew Research Center US survey conducted in March. Among those who have tried ChatGPT, a majority report it has been at least somewhat useful. OpenAI, which developed ChatGPT, has made a version open for the public to use.
Montgomery and Rossi suggest three actions:
- First, we urge others across the private sector to put ethics and responsibility at the forefront of their AI agendas. A blanket pause on AI’s training, together with existing trends that seem to be de-prioritizing investment in industry AI ethics efforts, will only lead to additional harm and setbacks.
- Second, governments should avoid broadly regulating AI at the technology level. Otherwise, we’ll end up with a whack-a-mole approach that hampers beneficial innovation and is not future-proof. We urge lawmakers worldwide to instead adopt smart, precision regulation that applies the strongest regulatory controls to AI use cases with the highest risk of societal harm.
- Finally, there still is not enough transparency around how companies are protecting the privacy of data that interacts with their AI systems. That’s why we need a consistent, national privacy law in the US. An individual’s privacy protections shouldn’t change just because they cross a state line.
“When ethically designed and responsibly brought to market, generative AI capabilities support unprecedented opportunities to benefit business and society. They can help create better customer service and improve healthcare systems and legal services. They also can support and augment human creativity, expedite scientific discoveries, and mobilize more effective ways to address climate challenges.”
As AI also comes with risks, they stress that it is critical to question what AI could mean for the future of the workforce, democracy, creativity, and the overall well-being of humans and our planet.
“Some tech leaders recently called for a six-month pause in the training of more powerful AI systems to allow for the creation of new ethics standards. While the intentions and motivations of the letter were undoubtedly good, it misses a fundamental point: these systems are within our control today, as are the solutions.”
The Pew Research survey shows men are more likely than women to have heard at least a little about ChatGPT, as are adults under 30 when compared with those 30 and older.
Just 14% in the survey say they have used ChatGPT for entertainment, to learn something new, or for their work.
“This lack of uptake is in line with a Pew Research Center survey from 2021 that found that Americans were more likely to express concerns than excitement about increased use of artificial intelligence in daily life,” the Center says.
Roughly four-in-ten who have tried ChatGPT say it has been extremely (15%) or very (20%) useful, while 39% say it has been somewhat useful. The remainder say it has been not very (21%) or not at all (6%) useful.