OpenAI's new AI safety plan establishes a framework for addressing risks and allows its board of directors to overturn decisions made by executives.
In a plan announced on its official website on December 18, OpenAI laid out a new framework for addressing safety in its most advanced artificial intelligence (AI) models, including allowing the board to reverse decisions that executives have already approved.
OpenAI will deploy its latest technology only if it is deemed safe in specific risk areas such as cybersecurity and nuclear threats.
The company will also establish a dedicated advisory group to review safety reports before they are sent to its executives and board of directors. While executives will make the decisions, the board of directors retains the power to reverse them.
Since ChatGPT launched more than a year ago, the potential risks of AI have been a top concern for both AI researchers and the general public.
Generative AI technology has surprised users with its capabilities, but has also raised safety concerns because of its potential to spread misinformation and manipulate people.
In April 2023, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, so that the potential risks to society could be thoroughly studied.
In May 2023, a Reuters/Ipsos poll found that more than two-thirds of Americans were concerned about the potential negative effects of AI technology, while 61% believed it could threaten civilization.
According to Vietnamnet