Just a few months ago, Microsoft and Alphabet-owned Google kicked off the ongoing race to introduce generative artificial intelligence, each saying it would dramatically improve search on Bing and Google respectively. Now they also appear to be competing to introduce rules for responsible use of AI, with both calling for governments and authorities to regulate it.
This comes after Sam Altman, CEO of OpenAI, the startup behind ChatGPT, recently told a US Senate hearing that the use of AI needs regulation. His company cooperates closely with Microsoft, which has invested heavily in OpenAI.
Other tech leaders have called for a six-month moratorium on AI development for safety reasons, but there is no indication that this call will lead to anything. The race to introduce AI, and to look responsible while doing so, is on.
In a speech in Washington, Microsoft’s president Brad Smith said that his biggest concern around artificial intelligence is deepfakes: realistic-looking but false content. In a blog post he presents a five-point proposal for how governments and authorities should regulate AI use.
Kent Walker, president of global affairs and chief legal officer at Alphabet and Google, repeated the company’s position in a blog post: “AI is too important not to regulate, and too important not to regulate well.”
“We all must be clear-eyed that AI will come with risks and challenges. Against this backdrop, we’re committed to moving forward boldly, responsibly, and in partnership with others,” Walker writes.
Brad Smith says: “It’s not enough to focus only on the many opportunities to use AI to improve people’s lives.”
“This is perhaps one of the most important lessons from the role of social media. Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet, five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool – in this case aimed at democracy itself,” Smith writes.
His five-point proposal for regulating AI use:
- First, implement and build upon new government-led AI safety frameworks. He points to the U.S. National Institute of Standards and Technology (NIST), which has launched a new AI Risk Management Framework.
- Second, require effective safety brakes for AI systems that control critical infrastructure. The government would define the class of high-risk AI systems that control critical infrastructure. New laws would require operators of these systems to build safety brakes into high-risk AI systems by design.
- Third, develop a broad legal and regulatory framework based on the technology architecture for AI. Microsoft believes there will need to be a legal and regulatory architecture for AI that reflects the technology architecture for AI itself.
- Fourth, promote transparency and ensure academic and nonprofit access to AI. A critical public goal is to advance transparency and broaden access to AI resources. While there are some important tensions between transparency and the need for security, there are many opportunities to make AI systems more transparent in a responsible way. It is also critical to expand access to AI resources for academic research and the nonprofit community.
- Fifth, pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.
Smith adds: “Ultimately, every organization that creates or uses advanced AI systems will need to develop and implement its own governance systems.”
Alphabet/Google is publishing a white paper with policy recommendations for AI, encouraging governments to focus on three key areas — unlocking opportunity, promoting responsibility, and enhancing security:
- 1. Unlocking opportunity by maximizing AI’s economic promise. Policymakers should invest in innovation and competitiveness, promote legal frameworks that support responsible AI innovation, and prepare workforces for AI-driven job transitions. Governments should explore foundational AI research through national labs and research institutions, adopt policies that support responsible AI development (including privacy laws that protect personal information and enable trusted data flows across national borders), and promote continuing education, upskilling programs, movement of key talent across borders, and research on the evolving future of work.
- 2. Promoting responsibility while reducing risks of misuse. This calls for a multi-stakeholder approach to governance. Learning from the experience of the internet, stakeholders will come to the table with a healthy grasp of both the potential benefits and the challenges. Some challenges will require fundamental research to better understand AI’s benefits and risks and how to manage them, along with new technical innovations in areas like interpretability and watermarking. Others will be best addressed by developing common standards, shared best practices, and proportional, risk-based regulation to ensure that AI technologies are developed and deployed responsibly.
- 3. Enhancing global security while preventing malicious actors from exploiting this technology. The first step is to put technical and commercial guardrails in place to prevent malicious use of AI and to work collectively to address bad actors, while maximizing the potential benefits of AI. For example, governments should explore next-generation trade control policies for specific applications of AI-powered software that are deemed security risks, and for specific entities that provide support to AI-related research and development in ways that could threaten global security.