July 28, 2023
2 mins read

Tech firms form body to ensure safe development of AI models

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members…reports Asian Lite News

Four major tech companies — Google, OpenAI, Microsoft, and Anthropic — have come together to form a new industry body designed to ensure the “safe and responsible development” of “frontier AI” models.

In response to growing calls for regulatory oversight, these tech firms have announced the formation of the “Frontier Model Forum”, which will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, including by developing a public library of solutions to support industry best practices and standards.

The Forum aims to:

- advance AI safety research to promote the responsible development of frontier models and minimise potential risks;
- identify safety best practices for frontier models;
- share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and
- support efforts to leverage AI to address society’s biggest challenges.

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members.

Qualifying organisations must be developing and deploying frontier AI models, as well as showing a “strong commitment to frontier model safety”.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President, Global Affairs, Google & Alphabet.

Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, representing a diversity of backgrounds and perspectives.

Moreover, the founding companies will establish key institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these efforts.

“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement on Wednesday.

The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:

- promoting knowledge sharing and best practices among industry, governments, civil society, and academia;
- supporting the AI safety ecosystem by identifying the most important open research questions on AI safety; and
- facilitating information sharing among companies and governments.


