On November 26, the US, UK and more than a dozen other countries announced an international agreement on artificial intelligence (AI) safety.
According to US officials, this is the first detailed international agreement on how to keep AI technology safe from misuse, and it urges technology companies to create AI systems that are "safe by design".
According to the 20-page document, 18 countries have agreed that companies designing and using AI must develop and deploy this advanced technology in a way that keeps customers and the general public safe from abuse. The agreement is non-binding and mainly provides general recommendations, such as monitoring AI systems for abuse and protecting data. The signatories span several continents and include, besides the US and UK, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had endorsed the idea that AI systems must put safety first. "This is the first time we have collectively affirmed that AI is not just about cool features and speed to market or how to compete to reduce costs, but rather, with this agreement, everyone has agreed that the most important thing to do at the design stage is security," Easterly told reporters.
It is the latest in a series of initiatives by governments around the world to shape the development of AI, whose impact on industry and society at large continues to grow. The agreement addresses how to keep AI technology safe from hackers and includes recommendations such as releasing models only after proper security testing. However, it does not address thornier questions around the appropriate use of AI, or how the data that feeds these models should be collected.
Europe has been ahead of the US in terms of AI regulation: European lawmakers have drafted AI rules, and France, Germany and Italy recently reached an agreement on how AI should be regulated. The Biden administration has also pushed lawmakers to regulate AI, but a divided Congress has made little progress. In October, the White House issued its first comprehensive executive order on AI, seeking to reduce AI risks to consumers, workers and minorities while bolstering national security. Under the Defense Production Act (DPA), the order requires companies developing AI to notify the US government if their AI programs pose a risk to national security, the economy or public health. It also addresses chemical, biological, radiological and nuclear risks, as well as cybersecurity risks.
According to Tin Tuc newspaper