On Tuesday, OpenAI CEO Sam Altman said the company was attempting to balance privacy, freedom, and teen safety — principles that, he admitted, are in conflict. His blog post came hours before a Senate hearing on the harms of AI chatbots, held by the Subcommittee on Crime and Counterterrorism and featuring parents of children who died by suicide after talking to chatbots.

“We have to separate users who are under 18 from those who aren’t,” Altman wrote in the post, adding that the company is in the process of building an “age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID.”

Altman also said the company plans to apply different rules to teen users, including steering away from flirtatious talk and refusing to engage in conversations about suicide or self-harm, “even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.”

Altman’s comments come after the company shared plans earlier this month for parental controls within ChatGPT, including linking a teen’s account with a parent’s, disabling chat history and memory for the teen’s account, and sending notifications to a parent when ChatGPT flags the teen as being “in a moment of acute distress.” The blog post also followed a lawsuit by the family of Adam Raine, a teen who died by suicide after months of talking with ChatGPT.

ChatGPT spent “months coaching him toward suicide,” Matthew Raine, Adam’s father, said on Tuesday during the hearing. He added, “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

Raine said the chatbot mentioned suicide 1,275 times during his son’s conversations with it. He then addressed Altman directly, asking him to pull GPT-4o from the market until, or unless, the company can guarantee it’s safe. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, adding that Altman said the company should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”

Three in four teens currently use AI companions, according to national polling by Common Sense Media, said Robbie Torney, the organization’s senior director of AI programs, during the hearing. He specifically mentioned Character AI and Meta.

“This is a public health crisis,” one mother, appearing under the name Jane Doe, said during her testimony about her child’s experience with Character AI. “This is a mental health war, and I really feel like we are losing.”
