OpenAI works to keep superintelligent AI in check

July 7, 2023 08:27

OpenAI announced on July 5 that it has established a new research team dedicated to ensuring that artificial intelligence (AI) systems remain safe for humans.


OpenAI and ChatGPT logos

“The vast power of superintelligence could lead to the disempowerment of humanity, or even human extinction,” wrote OpenAI co-founder Ilya Sutskever and alignment lead Jan Leike, predicting that superintelligent AI (systems smarter than humans) could arrive within this decade.

The two team leads said there is currently no solution for steering or controlling a superintelligent AI, so breakthroughs in alignment research are needed to ensure AI remains beneficial to humans.

Over the next four years, OpenAI will dedicate 20% of the computing power it has secured to solving this problem. The company is also forming a new team, called Superalignment, whose goal is to build a roughly human-level AI alignment researcher and then scale it up with massive amounts of compute. In practice, this means training AI systems with human feedback, then using those systems to help evaluate other AI systems and, eventually, to conduct alignment research on behalf of humans.

Connor Leahy, founder of the AI safety company Conjecture, said the plan has serious shortcomings, because a human-level AI could spin out of control before the alignment problem is solved.

The potential dangers of AI remain a top concern for both tech researchers and the public. In April, a group of top tech executives signed an open letter calling for regulation of AI and raising awareness of the “extinction threat” posed by the technology. Additionally, a May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the negative impacts of AI, with 61% of respondents believing that AI could threaten human civilization.

According to VNA
