You may have heard that artificial intelligence isn’t always our friend. Well, it turns out that AI and some of its users definitely aren’t all that friendly to our kids.

According to WeProtect Global Alliance’s Global Threat Assessment 2023, reported child sexual abuse material (CSAM) has increased online by some 87% since 2019, with 32 million reports of CSAM analyzed in 2022 alone. Another report from the U.K.-based Internet Watch Foundation suggests that the volume of AI-generated CSAM online is rising as well: During a 30-day review, analysts found a total of 3,512 explicit images and videos created with AI, a 17% increase over a similar review conducted in fall 2023.

Oh, but that’s not all that’s being laid at AI’s feet. According to NBC News, AI users are, in a sense, performing double duty in the abuse department.

Terrible people with the right tech can create some nasty deepfake material from just a single photo or short video of any person in your life. But AI’s ability to create convincing deepfakes entirely from scratch is still far from perfect. So for something truly lifelike, these offenders take old footage of real abuse snatched from the dark corners of the web and blend it with new faces. So not only are today’s kids being abused, but abuse victims from decades ago (grown-up survivors) are once again being pulled into that poisonous web.

It’s all so revolting and corrupt, and it seems to only be getting worse as AI technology improves.

“Realism is improving. Severity is improving. It’s a trend that we wouldn’t want to see,” said Dan Sexton, the Internet Watch Foundation’s chief technology officer.

Of course, all of the above raises the question: If we can somehow measure how much of this AI-generated muck is being produced, why can’t we just shut it all down? People are being ostracized and kicked off Big Tech platforms all the time, right? Why not this?

Well, experts say it’s because abusive materials are a different animal. And so are their makers. Social platforms are regularly working to eliminate such illegal material, and law enforcement always has an ear out for tips on its whereabouts. But the bad people and their bad stuff like to hide in dark corners.  

These deepfakes, along with the original imagery used to create them, are often kept on overseas servers in places where laws against such material don’t exist or where local authorities are ill-equipped to enforce them. Many child abuse distributors also exploit gaps in social media security that allow them to post videos “privately” (thus eluding automated detectors of explicit material) and then share login info with CSAM consumers.

Then there’s the dark web, a part of the internet that’s hidden from traditional search engines and that bounces connections back and forth in untraceable patterns. It’s like a dank back alley hidden in the cloud somewhere.

The only absolute solution for barring this sort of material would be to either shut the internet down altogether or take a page from a country like China and create a nationwide firewall of iron-fisted government control. Of course, even with the best of intentions, you can imagine how quickly that latter choice could veer toward a terrible outcome all its own.

So, what do we do about these abusive videos that we can’t seem to get rid of? Even synthetic, less-than-perfect videos of teens can be used to cyberbully. Fake explicit images have been used for sextortion and blackmail. We’ve even seen stories of teens being pushed to suicide.

What’s to be done?

Well, keeping an eye on news stories, studies and suggestions from abuse-help organizations can be helpful. It never hurts to be as well-informed as possible when you’re making decisions for your family.

However, the first step for your family’s personal safety is probably the easiest: Limit the number of pictures and videos you or your children post online. Even if your accounts are set to private, that’s no guarantee: WeProtect’s report found that 60% of online abuse cases involved perpetrators known to the child, people who may already have access to what you share. Bad actors can use a variety of AI tools to alter benign family photos and videos, and they don’t care whether those pics come from your child’s social media account … or yours.

Next, work to understand the apps, social networks and any online services your older kids are using. In fact, just take a stroll on YouTube and see how easy it is to make a deepfake. (Not that I’m recommending you try your hand at it.)

Then talk early and often to your children about this new world we live in and the dangers that are out there. Have some conversations about online safety and how that takes priority over sharing pics from the weekend pool party.

Finally, create a home environment where all of your family members feel safe talking to you about anything they might encounter in this area. If they become a victim of abuse or some scam, or some unexpected picture lands in their DMs, you want them to feel comfortable enough to share and talk it out.

Hey, sexual abuse may not be super easy for anyone to talk about around the kitchen table. But you can bet that AI won’t be shy about it.
