Snapchat may not have fully assessed the risks that its artificial intelligence (AI) chatbot feature poses to children's privacy, the UK's data watchdog, the Information Commissioner's Office (ICO), said on October 6.
In April, Snap, the company behind the photo and video messaging platform Snapchat, opened access to its custom chatbot, “My AI”, to all users. The feature, built on OpenAI's ChatGPT technology, was previously available only to paid Snapchat+ subscribers. My AI can make recommendations, answer questions, help users plan, and even compose a haiku in seconds. Snap said all conversations with My AI are stored by Snapchat and may be analyzed to improve product features, and it cautioned users not to share confidential information with the chatbot.
The ICO said its finding followed an investigation into whether Snap had adequately assessed the privacy risks to children and other users before launching My AI, and warned that it could ban the feature in the UK if the US-based company fails to address its concerns.
A Snap spokesperson said the company is reviewing the ICO's findings and maintained that My AI had gone through Snap's security and legal review process.
The ICO is investigating how Snapchat's My AI feature processed the personal data of around 21 million users in the UK, including children aged 13-17.
Snap is the latest social media company to launch an AI chat tool for its users. Since early February, several other tech companies, including Microsoft and Google, have also launched AI chatbots for their search engines and browsers.
According to VNA