A screenshot provided by Meta of restrictions on 18-plus content. The company will begin limiting the photos, videos and accounts teens can view based on the PG-13 movie rating system. Supplied/Meta
Instagram has announced sweeping new rules restricting the type of content its teenage users can see on the platform, after growing concerns from parents and politicians over child safety on social media.
Meta, the parent company of Instagram, will begin limiting the photos, videos and accounts teens can view based on the PG-13 movie rating system, by blocking posts containing strong language, dangerous stunts and other content that could encourage harmful behaviour, such as cannabis paraphernalia. Sexual content, as well as graphic and disturbing images, will continue to be restricted for users under 18.
The new restrictions began rolling out Tuesday in Canada and will be in full effect by the end of the year.
Anyone under 18 who has an Instagram account will be automatically placed into the more restrictive settings and won’t be able to opt out without a parent’s permission. Parents will also be able to place their child’s account under even more limited controls, which remove a teen’s ability to see, leave or receive comments under posts. Teenage users will also be blocked from searching a wider range of terms, such as alcohol or gore.
Last year, Instagram first introduced restrictions on its teenage users internationally, making their accounts private by default, limiting the content shown in their feeds and blocking the ability to search for posts related to self-harm, eating disorders and suicide.
A screenshot from Meta showing search results filtered for users under 18. Supplied/Meta
As large tech companies face increasing criticism over child safety online, many have introduced new safety precautions on their apps. In the past year, TikTok has announced enhanced parental controls that give parents the ability to block specific accounts and receive notifications when their teen uploads a public video. OpenAI has launched a new alert system that notifies a parent if ChatGPT detects a teen may be in distress.
But researchers are skeptical of the effectiveness of these parental controls. A September report led by a former Meta senior engineer-turned-whistleblower found that 64 per cent of safety tools Instagram launched in 2024 were “woefully ineffective.”
The Globe spoke with Antigone Davis, the global head of safety at Meta, about Instagram’s new safety guidelines and how they will change.
Why use the PG-13 movie rating as the guideline?
We wanted to set up something that gave parents a framework closely aligned with an external set of standards they already understood, so they’d have a way to gauge what their teen’s experience was with something they’re familiar with. When their teens are on Instagram, they would basically see the kind of content they would expect to see in a PG-13 movie.
Can you give an example of the type of content that would now be restricted under the new PG-13 guideline that would have been accessible in the past?
Take something like profanity. If you were to go to a PG-13 movie, you might hear some swear words, but they limit the number of them. It’s not apples to apples – movies to social media – there’s a little bit of a difference in the experience, but how do we get close to that bar? They limit the amount of profanity in PG-13 movies; we won’t recommend something that contains profanity.
How does Meta actually look at the content and then determine if it’s appropriate or not?
We have a set of community guidelines, which are used in a number of different ways. One of the ways is to help develop classifiers, which can be thought of as filters, that scan content and identify if it might violate those policies. And if it does, it’ll be filtered out of that teen’s experience. We also have those guidelines for our reviewers. So if somebody reports something to us, a reviewer uses those guidelines to determine if it violates our policies, and we would remove it that way.
The one thing I would say is no system is perfect, and just like you might see some suggestive content in a PG-13 movie or you might hear some swear words, teens may occasionally see that on our platform too. But we’re going to continue to work on that and keep refining our policies to make sure we stay within that bar and that parents have a good framework.
Teens are very savvy when it comes to finding ways to get around these kinds of rules online. They use code words to talk about certain things, such as saying “unalive” instead of “die.” How will the filters look out for this kind of circumvention?
Yes, you’re absolutely right. People who are trying to get around a particular rule are going to adjust their behaviour, which is why we are constantly iterating what we do as well. Many of our filters use AI, so they also accept feedback. If we start to see, for example, a particular type of trend for a way of speaking around a rule, we can incorporate that information to help our classifiers learn and find that kind of content.
Parents have a lot of fears around AI chatbots and companions. These new guidelines also apply to Meta’s AI tools. What will those changes look like?
For AI chatbots, we will be using the same guidelines for the content teens would experience when engaging with a chatbot. In addition, I think one important thing to know is that the existing teen account settings on Instagram apply to chatbots as well.
So, for example, parents who have supervisory tools on can actually see which chatbots their teen is engaging with. I think that’s very important, particularly for parents who are concerned or worried or don’t think their teen is going to tell them. This is an opportunity to be able to ask your teen questions based on what you see.