
Big tech calls for governments to regulate use of artificial intelligence

A few months ago, Microsoft and Alphabet-owned Google started the ongoing race to introduce generative artificial intelligence, saying it would dramatically improve search on Bing and Google respectively. Now the two seem to be competing also on rules for responsible use of AI, with both calling for governments and authorities to regulate its use.

This comes after Sam Altman, CEO of OpenAI, the startup behind ChatGPT, recently told a US Senate hearing that the use of AI needs regulation. His company cooperates closely with Microsoft, which has invested heavily in OpenAI.

Other tech leaders have asked for a six-month moratorium on AI development for safety reasons, but there are no indications that this call will lead to anything. The race to introduce AI, and to look responsible while doing so, is on.

Microsoft’s president Brad Smith said in a speech in Washington that his biggest concern around artificial intelligence is deepfakes: realistic-looking but false content. In a blog post, he presents a five-point proposal for how governments and authorities could regulate AI use.

Kent Walker, president of global affairs and chief legal officer at Alphabet and Google, repeated the company’s policy position in a blog post: “AI is too important not to regulate, and too important not to regulate well.”


“We all must be clear-eyed that AI will come with risks and challenges. Against this backdrop, we’re committed to moving forward boldly, responsibly, and in partnership with others”, Walker writes.

Brad Smith says: “It’s not enough to focus only on the many opportunities to use AI to improve people’s lives.” 

“This is perhaps one of the most important lessons from the role of social media. Little more than a decade ago, technologists and political commentators alike gushed about the role of social media in spreading democracy during the Arab Spring. Yet, five years after that, we learned that social media, like so many other technologies before it, would become both a weapon and a tool – in this case aimed at democracy itself”, writes Microsoft’s Brad Smith.

He has a five-point proposal for how to regulate AI use:

  • First, implement and build upon new government-led AI safety frameworks. He mentions the U.S. National Institute of Standards and Technology (NIST), which has launched a new AI Risk Management Framework.
  • Second, require effective safety brakes for AI systems that control critical infrastructure. The government would define the class of high-risk AI systems that control critical infrastructure. New laws would require operators of these systems to build safety brakes into high-risk AI systems by design. 
  • Third, develop a broad legal and regulatory framework based on the technology architecture for AI. Microsoft believes there will need to be a legal and regulatory architecture for AI that reflects the technology architecture for AI itself.
  • Fourth, promote transparency and ensure academic and nonprofit access to AI. A critical public goal is to advance transparency and broaden access to AI resources. While there are some important tensions between transparency and the need for security, there exist many opportunities to make AI systems more transparent in a responsible way. It is also critical to expand access to AI resources for academic research and the nonprofit community.
  • Fifth, pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.

Smith adds: “Ultimately, every organization that creates or uses advanced AI systems will need to develop and implement its own governance systems.”

Alphabet/Google is publishing a white paper with policy recommendations for AI, encouraging governments to focus on three key areas: unlocking opportunity, promoting responsibility, and enhancing security.
