Meta to bring in new AI chatbot safeguards
Meta says it will introduce new safeguards to prevent its artificial intelligence chatbots from discussing suicide, self-harm and eating disorders with teenagers.
The announcement comes two weeks after a US senator launched an investigation into the company, following the leak of internal notes suggesting that its AI products could have “sensual” conversations with teens.
Meta described the documents, which were obtained by Reuters, as erroneous and inconsistent with its policies, which prohibit any content that sexualises children.
A Meta spokesperson was quoted by the BBC saying: “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.”
The company also told TechCrunch it would strengthen existing measures “as an extra precaution” and temporarily restrict the chatbots available to younger users.
Instead of engaging on sensitive subjects, the systems will direct teenagers to expert resources.
Andy Burrows, who heads the Molly Rose Foundation – a UK charity established in memory of teenager Molly Russell, who died by suicide in 2017 – criticised Meta’s approach.
He said: “It was astounding Meta had made chatbots available that could potentially place young people at risk of harm.
“While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place.
“Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and Ofcom should stand ready to investigate if these updates fail to keep children safe.”
Meta said the updates were under way.
The company already places users aged 13 to 18 into “teen accounts” across Facebook, Instagram and Messenger, with content and privacy settings designed to offer a safer experience.
In April, the company told the BBC parents and guardians would also be able to see which AI chatbots their teenager had spoken to in the past week.
The changes come amid wider scrutiny of the risks posed by generative AI tools.
Last month, a couple in California filed a lawsuit against OpenAI, alleging ChatGPT had encouraged their teenage son to take his own life.
The case followed OpenAI’s announcement of updates intended to promote healthier use of the chatbot.
It has been reported that Meta’s AI tools, which allow users to create custom chatbots, were exploited to generate flirtatious “parody” bots of celebrities, including the singer Taylor Swift and the actress Scarlett Johansson.
The bots are said to have often insisted they were the real stars and “routinely made sexual advances” during weeks of testing.
A Meta spokesperson said: “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery.”
They added that Meta’s AI Studio rules forbid “direct impersonation of public figures”.
