Investors Business Daily
Technology
RYAN DEFFENBAUGH

Meta, Alphabet, OpenAI Face FTC Probe Over Safety Of Children Using AI Chatbots

The Federal Trade Commission is ordering Alphabet's Google, Instagram and Facebook parent Meta Platforms, ChatGPT creator OpenAI and three other companies to provide information on how children are interacting with their AI chatbots.

The FTC said Thursday that it is seeking information on how the firms are monitoring the potential negative impacts of AI-powered chatbots when used by children and teens. The request was also sent to Elon Musk's xAI, Snapchat parent Snap and chatbot startup Character.AI.

"The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products," the FTC said in a news release.

The FTC is conducting the probe under its 6(b) authority, which allows it to conduct wide-ranging studies that do not have a specific law-enforcement purpose.

Meta stock was trading flat on the stock market today, near 753.33. Google stock was flat as well, near 240.02. Snap stock was up more than 3% to 7.29.

AI Chatbot FTC Inquiry Adds To Scrutiny

The action from the FTC adds to growing scrutiny over how young users are interacting with AI chatbots.

Last month, Sen. Josh Hawley said he would start an investigation into whether Meta's gen AI chatbots pose a threat to children. The announcement followed a Reuters report that Meta had permitted its chatbots to "engage a child in conversations that are romantic or sensual." A Meta spokesperson told Reuters that the company is revising the internal AI policy document that the report cited.

Meta declined to comment on the FTC inquiry. The company is training its chatbots to no longer engage with teenage users on "self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations," TechCrunch reported late last month.

The New York Times recently reported on a lawsuit from parents who said conversations with ChatGPT contributed to their 16-year-old son's suicide. OpenAI told The New York Times that "ChatGPT includes safeguards such as directing people to crisis help lines and referring them to real-world resources."

Asked about the FTC announcement, an OpenAI spokesperson said the company is "committed to engaging constructively" with the inquiry.

"As we shared last week, we will soon introduce expanded protections for teens, including parental controls and the ability for parents to be notified when the system detects their teen is in a moment of acute distress," the company said in an emailed statement.
