Anthropic, Google, Microsoft and OpenAI team up in Frontier Model Forum for safe AI development. A bluff?

The four AI industry leaders have launched the Frontier Model Forum, a body meant to ensure the "safe and responsible development" of the most powerful artificial intelligence models. Critics, however, see it as a ploy to avoid regulation.

Tech heavyweights are joining forces on artificial intelligence. Yesterday Anthropic, Google, Microsoft and OpenAI announced the establishment of the Frontier Model Forum, a body created to ensure the “safe and responsible development of AI models.”

The group will focus on coordinating safety research and articulating best practices for so-called “frontier AI models,” those that exceed the capabilities of existing state-of-the-art models. The Frontier Model Forum also intends to serve as a channel of communication between industry and policymakers.

As Axios recalls, the four companies are among the seven that committed to an agreement with the White House to minimize AI risks, conduct further AI safety research, and share safety best practices, among other pledges.

At the same time, the Financial Times points out that similar groups already exist. The Partnership on AI, of which Google and Microsoft were also founding members, was formed in 2016 with members from across civil society, academia and industry and with a mission to promote the responsible use of artificial intelligence.

What will change now with the Frontier Model Forum? Some critics wonder, pointing to the absence of concrete goals or measurable outcomes in the four companies’ commitments.

Read also: Will there really be common rules between the EU and the US on artificial intelligence?

What the Frontier Model Forum will do

The Frontier Model Forum will also work with policymakers and academics, facilitating information sharing between companies and governments.

“The body,” a note explains, “will draw on the technical and operational expertise of member companies to benefit the entire artificial intelligence ecosystem,” for example by advancing technical assessments and developing a public library of solutions to support industry best practices and standards.

The Frontier Model Forum’s goals

The Forum’s stated goals are to promote AI safety research; to identify best practices for responsible development; to collaborate with policymakers, academics, civil society and companies to share knowledge about safety opportunities and risks; and to support efforts to develop applications that can help address society’s greatest challenges.

Read also: ChatGPT worries Biden: CEOs of Microsoft, Google and OpenAI summoned to the White House
