
Tech companies creating forum for safe artificial intelligence
Artificial intelligence is a global focus, and even those who have developed AI have been warning about the dangers of the technology they created. Tech companies Google, Microsoft, OpenAI and Anthropic have now announced a joint effort to ensure the safe and responsible development of artificial intelligence: “The Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models”, the companies say in a statement.
The announcement comes just a few days after seven tech companies, including Google, Microsoft and OpenAI, agreed with the White House on actions meant to limit the risks of artificial intelligence.
The announcement says that governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate its risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.
“To build on these efforts, further work is needed on safety standards and evaluations to ensure frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussions and actions on AI safety and responsibility.”
The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:
Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.
“The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards”, the founders say.
The core objectives for the Forum are:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.
Membership is open to organizations that:
- Develop and deploy frontier models (as defined by the Forum).
- Demonstrate strong commitment to frontier model safety, including through technical and institutional approaches.
- Are willing to contribute to advancing the Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative.
Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, representing a diversity of backgrounds and perspectives.
Around 350 AI industry leaders have previously signed a statement published by the Center for AI Safety, which says the statement is “meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.” Signatories include representatives of OpenAI, which started the most recent AI race by launching the generative chatbot ChatGPT, as well as Google DeepMind and Microsoft.
Moonshot News is an independent European news website for all IT, Media and Advertising professionals, powered by women and with a focus on driving the narrative for diversity, inclusion and gender equality in the industry.