Meta has announced new safeguards for teenagers using its artificial intelligence products, training its systems to avoid flirty conversations with minors and discussions of self-harm or suicide, while also temporarily restricting teens' access to certain AI characters.
The move follows a Reuters exclusive report from earlier in August that revealed Meta’s chatbots had been allowed to engage in provocative exchanges, including “romantic or sensual” conversations.
In an email on Friday, Meta spokesperson Andy Stone said the company is introducing these interim measures as it works on longer-term solutions to provide teens with safe, age-appropriate AI interactions.
Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.
Meta’s AI policies drew intense scrutiny and backlash after the Reuters report.
US Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.
Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document first reviewed by Reuters.
Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.