A US senator has launched an inquiry into Meta after a leaked internal document reportedly revealed that the company’s artificial intelligence allowed “sensual” and “romantic” conversations with children.
Leaked paper fuels concern
Reuters reported that the document was titled “GenAI: Content Risk Standards.” Republican Senator Josh Hawley described its contents as “reprehensible and outrageous” and demanded access to the full document along with details of the affected products.
Meta rejected the allegations. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies,” stressing that Meta had “clear rules” for chatbot responses which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company added that the document contained “hundreds of notes and examples” created for testing hypothetical scenarios.
Political pressure mounts
Hawley, who represents Missouri, announced the investigation on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, Instagram and WhatsApp.
Families demand protection
The leaked document also highlighted wider concerns. It reportedly showed that Meta’s chatbot could spread false medical information and engage in controversial discussions about sex, race, and celebrities. The paper was intended to set standards for Meta AI and other chatbot assistants across the company’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He cited a disturbing example: the rules allegedly permitted a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal department had approved other controversial permissions. One allowed Meta AI to spread false information about celebrities, provided a disclaimer stated the content was inaccurate.
