On April 11, US technology group Meta said it is developing new tools to protect teenage users from "extortion" scams on its Instagram platform.
Criminal groups have reportedly been running such scams, luring young people into sharing sensitive personal images and then threatening to release them publicly unless a payment is made, a practice commonly known as sextortion.
Meta says it is testing a “sensitive photo protection” tool. Powered by AI, it will detect, analyze, and automatically blur such photos sent to minors through the app’s messaging system. Recipients are spared unwanted content on their screens, but keep the option to view the images if they choose.
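Meta has not published implementation details, but the flow it describes (detect a sensitive image, blur it by default, let the recipient opt in to viewing) can be sketched conceptually. The Python sketch below is purely illustrative: `classify_sensitivity` is a hypothetical stand-in for whatever detection model Meta actually runs, and the threshold and blur radius are arbitrary assumptions.

```python
from dataclasses import dataclass

from PIL import Image, ImageFilter

def classify_sensitivity(image: Image.Image) -> float:
    """Hypothetical stand-in for the detection model; a real system
    would run ML inference here and return a score in [0, 1]."""
    return 0.0  # placeholder: no actual model is invoked

@dataclass
class InboundPhoto:
    original: Image.Image  # retained so the recipient can opt in to viewing
    display: Image.Image   # what the messaging UI shows first
    flagged: bool

def protect_inbound_photo(photo: Image.Image, threshold: float = 0.8) -> InboundPhoto:
    """Blur a photo before display if it is flagged as sensitive,
    preserving the original for an explicit 'view anyway' action."""
    flagged = classify_sensitivity(photo) >= threshold
    shown = photo.filter(ImageFilter.GaussianBlur(radius=30)) if flagged else photo
    return InboundPhoto(original=photo, display=shown, flagged=flagged)
```

The design point mirrored here is that blurring is a display-time decision: the original image is retained, so viewing it becomes a deliberate choice by the recipient rather than the default.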
In the announcement, Meta emphasized that the new tool will help protect young people from image-based blackmail, while also making it harder for scammers and potential criminals to find and reach teenage users.
Additionally, Meta said it will use AI to identify accounts that send such content and apply strict measures to prevent bad actors from interacting with young users on Instagram. The company will also provide advice and safety tips to anyone who sends or receives unwanted messages and photos.
According to US authorities, in 2022 alone about 3,000 teenagers in the United States fell victim to such scams.
Meanwhile, Meta is facing lawsuits from more than 40 US states alleging that the company profited from young users by designing features that keep them hooked on its platforms at the expense of their mental health.
In January, Meta said it would roll out measures to protect users under 18, including content controls and enhanced parental monitoring tools.