OpenAI has rolled out some long-awaited parental controls for ChatGPT to all web users, with mobile coming “soon,” according to the company.

The controls, announced last month, allow parents to reduce or remove certain content and capabilities, such as sexual roleplay and image generation, and to reduce personalization in ChatGPT conversations by turning off its memory of past transcripts.

Parents must have their own accounts to access the controls, and teens must opt in, either by inviting a parent to link their account or by accepting a parent’s invitation. Teens can disconnect their accounts at any time, though parents will be notified if that happens. Parents don’t have access to their teen’s conversations, even with a linked account. The only potential exception: “in rare cases where our system and trained reviewers detect possible signs of serious safety risk, parents may be notified — but only with the information needed to support their teen’s safety,” per OpenAI.

OpenAI laid out most of these features back in August when it said parental controls were coming. Notably, one feature it was “exploring” seems not to have materialized: the ability to set an emergency contact who is reachable with “one-click messages or calls” within the chatbot. It’s possible OpenAI hopes to cover some of that ground with the automatic parental notification feature. “We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong,” OpenAI wrote.

OpenAI’s original announcement came after the death of Adam Raine, the 16-year-old who died by suicide after months of confiding in ChatGPT. OpenAI was hit with a lawsuit, and within weeks, ChatGPT was being discussed at a Senate panel hearing on the potential harms chatbots pose to minors, where parents of teens who died by suicide testified.

Hours before the Senate panel, OpenAI CEO Sam Altman published a blog post in which he said the company was attempting to balance teen safety with both privacy and freedom, and that it is working on an “age-prediction system to estimate age based on how people use ChatGPT.”

Matthew Raine, the father of the late Adam, said during the Senate panel hearing earlier this month, “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

During the hearing, Raine also criticized OpenAI’s past approach to safety. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, going on to add that Altman said OpenAI should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”
