ChatGPT maker OpenAI will launch tools to combat misinformation ahead of elections in countries around the world in 2024.
The explosive success of the text-generating chatbot ChatGPT has fueled a global artificial intelligence revolution. But experts warn that such tools could also be used to create fake news and sway voters in the many national elections taking place in 2024.
OpenAI said it will no longer allow its products, including ChatGPT and the DALL-E 3 image generator, to be used for political campaigning.
In a report last week, the World Economic Forum (WEF) warned that AI-generated misinformation is the biggest short-term global risk and could undermine newly elected governments in major economies.
Concerns about election-related fake news have been raised for years, but the advent of AI text and image generators has sharpened the threat, experts say, especially when users cannot tell whether the content they see is fabricated or manipulated.
Worries about AI being used to interfere in elections have grown since OpenAI released ChatGPT, which generates human-like text, and DALL-E, which can produce "deepfakes" (AI-fabricated audio, images and video).
OpenAI CEO Sam Altman himself expressed concern, at a hearing before the US Congress in May 2023, that generative AI could be used to interfere with the electoral process.
OpenAI also plans to attach a "cr" icon to AI-generated images, in line with the guidelines of the Coalition for Content Provenance and Authenticity (C2PA), which was founded to combat misinformation, and to develop ways to identify DALL-E content even after an image has been edited.
OpenAI said that when users ask ChatGPT procedural questions about US elections, such as where to vote, it will direct them to authoritative websites. ChatGPT will also provide real-time news with attribution and links.
OpenAI also reiterated its existing policies against deepfakes, chatbots that impersonate real people or institutions, and content designed to mislead voters about the voting process or discourage them from voting.
The company further pledged to restrict political applications of its custom GPTs and to provide a mechanism for reporting potential violations.