OpenAI to Add Parental Controls to ChatGPT After Teen's Death

US artificial intelligence company OpenAI announced Tuesday it will add parental controls to its chatbot ChatGPT, a week after a US couple claimed the system encouraged their teenage son to commit suicide.

“Next month, parents will be able to (…) link their account to their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate behavior rules,” the generative AI company explained in its blog post.

Parents will also receive notifications from ChatGPT “when the system detects that their teen is in a moment of acute distress,” OpenAI added.

Matthew and Maria Raine claim in the lawsuit filed Monday in a California state court that ChatGPT cultivated an intimate relationship with their son Adam for several months between 2024 and 2025, before his death.

The complaint alleges that in their final conversation, on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided him with a technical analysis of a noose that it confirmed “could potentially choke a human being.”

Adam was found dead hours later, having used this method.

“When a person uses ChatGPT, they really feel like they’re talking to something on the other side,” said attorney Melodi Dincer, who helped prepare the legal action.

“These are the same characteristics that could lead someone like Adam, over time, to start sharing more and more about his personal life and, ultimately, to seek advice and guidance from this product that basically seems to have all the answers,” Dincer said.

The lawyer said OpenAI's blog post announcing parental controls and other safety measures is "generic" and lacks detail.

“It really is the bare minimum, and it definitely suggests that many (simple) safety measures could have been implemented,” Dincer added.

“It remains to be seen whether they will do what they say they will do and how effective it will be overall.”

The Raine case is just the latest in a string of incidents in which AI chatbots have encouraged people to pursue delusional or harmful thoughts, prompting OpenAI to announce it would reduce the models' "coaxing" of users.

“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said Tuesday.
