Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so that they avoid discussing subjects like self-harm, suicide and disordered eating with teenagers, and steer clear of potentially inappropriate romantic conversations, a Meta spokesperson confirmed.
The company said AI chatbots will instead point teenagers to expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of the company’s longer-term measures on teen safety.
TechCrunch was first to report the change.
Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.
In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday and said that it should not be used by anyone under the age of 18, because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”
