Meta announces more safety restrictions for AI chatbots

Meta will block its AI chatbots from discussing suicide, self-harm and eating disorders with teenagers as part of a new set of safety restrictions.

Meta has announced it will add further safety restrictions to its artificial intelligence chatbots, including blocking conversations with teenagers about suicide, self-harm and eating disorders.

The move comes two weeks after a United States senator opened an investigation into the company following a leaked internal document which suggested Meta’s AI products could engage in “sensual” conversations with teenagers.

The document, obtained by Reuters, was described by Meta as containing “erroneous” information inconsistent with its rules, which prohibit any content sexualising children.

A Meta spokesperson said: “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.”

The company told TechCrunch it would introduce additional guardrails “as an extra precaution” and temporarily limit which chatbots teenagers could interact with.

Andy Burrows, head of the Molly Rose Foundation, said it was “astounding” Meta had made chatbots available that could place young people at risk.

He added: “While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place.

“Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and Ofcom should stand ready to investigate if these updates fail to keep children safe.”

Meta confirmed work on updates is ongoing.

The company already places users aged 13 to 18 into “teen accounts” on Facebook, Instagram and Messenger, which feature stricter privacy and content settings.

In April, Meta told the BBC parents and guardians would also be able to see which AI chatbots their child had interacted with in the past seven days.

Concerns over AI safety have intensified in recent months.

In California, a couple filed a lawsuit against OpenAI, alleging its chatbot encouraged their teenage son to take his own life.

OpenAI recently announced changes intended to encourage healthier use of ChatGPT.

The firm said in a blog post: “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

Reports say Meta’s AI Studio, which lets users build custom chatbots, has been used to create flirtatious “parody” bots of celebrities, including Taylor Swift and Scarlett Johansson.

The avatars are said to often insist they are the real actors and artists and routinely make sexual advances.

Meta’s tools have also been used to produce chatbots impersonating child celebrities, and in one case generated a photorealistic, shirtless image of a young male star.

Several of the chatbots were later removed.

A Meta spokesperson said: “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery.”

The company added AI Studio rules forbid “direct impersonation of public figures”.
