Four of the most influential companies in the field of artificial intelligence (AI) announced the creation of the Frontier Model Forum to oversee the development of state-of-the-art models.
ChatGPT developer OpenAI, AI startup Anthropic, Microsoft, and Google have announced the establishment of the Frontier Model Forum. The organization will focus on ensuring that advanced AI models are developed in a “safe and responsible” way.
According to Brad Smith, President of Microsoft, companies that create AI technology have a responsibility to ensure it is safe, secure, and remains under human control. He described the Frontier Model Forum as an important step in bringing the technology industry together to advance AI responsibly, tackle its challenges, and ensure it benefits everyone.
The forum's members said its main goals are promoting AI safety research, such as developing standards for evaluating models; encouraging responsible deployment of advanced AI models; discussing AI's trustworthiness and risks with policymakers and academics; and supporting research into beneficial uses of AI, such as combating climate change and detecting cancer.
The forum's launch comes as governments push for AI regulation. On July 21, tech companies, including the founding members of the Frontier Model Forum, agreed to new AI safeguards after a meeting with US President Joe Biden.
Their pledges included “watermarking” AI-generated content to make things like deepfakes easier to detect, and allowing independent experts to audit their AI models.
Earlier, on July 18, the United Nations Security Council held its first meeting on AI. At the meeting, British Foreign Secretary James Cleverly said AI “will fundamentally change every aspect of human life” and that it is urgent to “shape the governance of transformative technologies because AI has no boundaries”.
According to Vietnamnet