OpenAI Study Reveals Prevalence of AI Psychosis Among ChatGPT Users
What? In response to a growing number of cases blaming conversations with ChatGPT for hospitalization, divorce and even death, OpenAI (the company behind the artificial intelligence chatbot) conducted a study to find out how many of its users were suffering from “AI psychosis.”
So What? According to Wired, OpenAI estimates that about 560,000 of its users may be experiencing mania or psychosis (conditions that can be fueled by conversations with user-affirming AI chatbots): “About 1.2 million more are possibly expressing suicidal ideations, and another 1.2 million may be prioritizing talking to ChatGPT over their loved ones, school or work.”
Now What? OpenAI has addressed the concern by working with more than 170 mental health professionals to train ChatGPT to respond to these cases by expressing empathy “while avoiding affirming beliefs that don’t have basis in reality.” That said, parents should remind young AI users that most chatbots, including ChatGPT, tend to tell people what they want to hear. So if your child is wrestling with troubling thoughts or emotions, encourage them to speak to a trusted adult, however difficult that conversation may feel, rather than to a chatbot at all.
Teens Voluntarily Participate in Tech-Free Bedroom Challenge
What? Four teens from Bradford, England, volunteered to participate in a tech-free bedroom challenge as part of a BBC study. For five days, their phones, gaming consoles and laptops were removed from their rooms, and they were given analog alarm clocks to help them wake up on time.
So What? All four teens found the challenge difficult but rewarding. Among the benefits, they listed better sleep, more family time, more activities with friends and less procrastination.
Now What? The volunteers happily took their devices back once the challenge was over. However, several decided that some limits, such as leaving their phones outside their rooms when they go to bed, were worth keeping. Consider posing such a challenge to your own kids and see what good habits they might form on their own as a result.
The AI Slur ‘Clanker’ Has Turned Racist
What? “Clanker” has long been used in media and entertainment (including Star Wars) as a derogatory term for robots and droids. But according to Wired, the “slur” really took off in July after TikTok creator Harrison Stewart went viral for his comedic skits calling AI agents, AI boyfriends and AI girlfriends “clankers.”
So What? Unfortunately, just one month later, the creator announced he would no longer be using the word since commenters started appropriating the term as a racial slur against him (Stewart is Black), calling him a “cligger” (a mashup of “clanker” and the n-word). Elsewhere on TikTok, users began creating their own “clanker” skits that seem to use robots as stand-ins for Black people. In one very obvious example, a man dressed up as a police officer tells an AI, “Don’t you know clankers sit in the back of the bus, Rosa Sparks?”
Now What? Hopefully, many of the people creating these videos aren’t trying to be racist. But the trend demonstrates how quickly satirical videos can get out of hand. Talk to your teens about the impact these videos (and the comments they spark) can have. As Christians, we’re told to use our words to build others up (Ephesians 4:29). So if creating a “clanker” video, sharing one or even commenting on one serves to tear someone down, then perhaps teens shouldn’t be watching those videos in the first place, no matter how funny they seem.