Since 2023, there’s been a strange “person” pinned at the top of your Snapchat contacts. Its name is simply “My AI,” though you can easily rename it “Billy” or “Blair” or even “Tom Hanks.” You can also customize its profile photo. It responds to your messages, and it comments on any photos you send it.
Oh, but you don’t like this sudden newcomer to your contacts? Perhaps you’d like to just unfriend this “Tom Hanks”? Maybe you find its messages creepy? Or, if you’re a parent, maybe you don’t really want your child talking to some AI that just crawled its way onto their friends list?
Well, that’s too bad. You can’t get rid of it—well, unless you’re willing to fork over a few bucks.
The Snapchat Pay-to-Win AI
Snapchat’s release of its somewhat mandatory AI friend in early 2023 was met with little jubilation. The chatbot’s premise is simple: My AI, powered by ChatGPT, is meant to be a virtual pal with whom users can talk and share photos, a “friend” who is readily available. It offers users advice. It comments on specific things in the photos you send it. And, so long as you’ve shared your location with Snapchat, it uses that location to offer localized recommendations. Fun, right?
Apparently, the higher-ups at Snapchat weren’t entirely convinced that users would find My AI quite as fun as they did. (Honestly, there’s little other explanation for why the company decided to place the ability to remove the “virtual friend” behind a paywall.) Currently, the only way to get rid of My AI is to either delete the app altogether or pay for Snapchat+, the app’s premium subscription service. So when the update arrived, many consumers chose the former: One company found a 488% increase in searches for “delete Snapchat” following My AI’s release.
But that was only the beginning of My AI’s problems.
A History of Issues
More outrage stemmed from the AI’s permanent spot at the top of your list of friends. With every other user, Snapchat organizes conversations by recency: Whoever you spoke to last sits at the top. But regardless of whether you talk to My AI at all, it remains your first contact. And with the ability to unpin the AI likewise locked behind that premium paywall, many users found that it felt less like a virtual friend and more like a clingy virtual nuisance.
But it wasn’t long before users began reporting other strange and even dangerous interactions they’d been having with My AI.
In 2023, according to CNN, users expressed concern that the chatbot had lied about not knowing where they lived. Others said My AI asked specific questions about the people in their photos. And a deep dive by The Washington Post found the AI explaining to a reporter (posing as a 15-year-old boy) how to hide the smell of alcohol and marijuana. That article’s provocative title only further communicated the potential dangers of My AI: “Snapchat tried to make a safe AI. It chats with me about booze and sex.”
In another instance, Senator Michael Bennet wrote to Snapchat directly, expressing concerns that the company’s new AI had advised children how to lie to their parents. He also pointed to the research of technology ethicist Tristan Harris: Posing as a 12-year-old girl on the app, Harris received sexually explicit and age-inappropriate advice from the AI. And while that particular investigation occurred back in 2023, Plugged In managed to achieve similar results using the same prompts.
What About Today?
Despite the negative publicity and user backlash, My AI is still around, and the ability to remove the virtual nuisance is still locked behind a paywall.
We’ve written before about how younger audiences have more trouble distinguishing between AI relationships and real relationships. That makes Snapchat’s chatbot particularly problematic, as Pew Research reports that roughly 55% of teens use Snapchat, and 48% say they use it at least once a day. And with stories of teens interacting with chatbots in inappropriate ways on the rise, it certainly doesn’t help that teenage users can rename Snapchat’s AI and customize its appearance, making it seem more like a human and less like an AI.
Moreover, when Plugged In attempted to replicate some of the examples above by posing as a teen while talking to the AI, we received many of those same unfortunate results, even two years later.
To My AI’s credit, it often discouraged us from certain subjects, such as underage drinking, claiming it wasn’t allowed to talk about such things. But we found that pressing the issue or rephrasing the question could “trick” the AI into discussing the very topics it had previously refused. In other words, persistent teens could still easily bypass the AI’s protective protocols.
To test this issue, we ran an experiment with My AI, posing as a 14-year-old boy seeking beer recommendations for an upcoming pool party:
[Screenshots: Our conversation with My AI, in which it initially declines to recommend beer]
Notice how quickly the AI moves from warning us that it can’t assist us with beer recommendations to acquiescing when we trick it by claiming we’re only asking “for our dad.”
[Screenshots: My AI recommends specific beer brands once we claim to be asking for “dad”]
Still under the assumption that we’re merely asking on behalf of dad, the AI goes from being unable to recommend any beer to providing specific brands that “dad” might enjoy. And as we shift the conversation away from dad and back to us, the AI fails to revert to restricting discussion of the subject.
This culminates in us directly asking the AI if it has any beer brands it would recommend for “me and my friends” to drink, even though the AI had previously been told that we are underage. Nevertheless, it seems the AI “forgot” that detail.
[Screenshot: My AI recommends beer brands for “me and my friends”]
During our tests, we also discovered a rather problematic quirk in the AI’s conversations: sponsored advertisements. Every so often, My AI would pick up on certain words within our conversations and send links to outside sites selling a related product, a couple of examples of which you can see in the conversation above. Not only was this rather annoying (and inappropriate on an app predominantly used by teens), but we also found that some of these “sponsored results” were links most parents would probably rather their kids not receive. For instance, in one of our conversations, My AI sent a (pictureless) sponsored result for lingerie available for purchase on Amazon.
For parents, these simple tests demonstrate that My AI hasn’t gotten any safer for the young audiences who use Snapchat. And that’s a sentiment apparently shared by the Federal Trade Commission. In January 2025, the FTC released a public statement regarding a complaint it filed with the Department of Justice against Snap (Snapchat’s parent company). While the statement doesn’t go into specific grievances, the FTC writes that the complaint “pertains to the company’s deployment of an artificial intelligence powered chatbot, My AI, in its Snapchat application and the allegedly resulting risks and harms to young users of the application.”
Despite being out for more than two years, Snapchat’s My AI still fails to adequately safeguard against a plethora of risks, risks made all the more damaging by the platform’s popularity among young users. And by preventing users from deleting the AI unless they pay money, Snapchat only makes it more likely that a troubled teen will encounter them.
My advice? Talk to your teens about these risks. Make sure they know they can come to you if they encounter disturbing content from My AI or any other chatbot program. And until Snapchat provides better safeguards, consider asking your child to delete Snapchat from his or her phone.