Techniques: “These TikTok Accounts Are Hiding Child Sexual Abuse Material In Plain Sight”

Today’s techniques assessment examines the article “These TikTok Accounts Are Hiding Child Sexual Abuse Material In Plain Sight” by Alexandra Levine for Forbes on 14 Nov 2022. This article discusses how threat actors post child sexual exploitation (CSE) content on TikTok and how they get away with it, which makes for harrowing reading. There is a lot of useful information here on how people post banned content on social media platforms while evading detection, and that will be the main focus of this assessment.

This article also helped me solidify “Calibrate” as a stage of an influence operation; I think that “Calibrate” should refer to actions which don’t actively help information spread, but instead prevent its removal or detection. For example, algospeak is used to avoid automated content moderation. Harmful information would likely be easier to find if people didn’t censor keywords, but without that censorship the information would be removed altogether by the platforms hosting it.

Calibrate: Create invite-only community

Don’t be shy, girl.

Come and join my post in private.


The posts are easy to find on TikTok. They typically read like advertisements and come from seemingly innocuous accounts.

But often, they’re portals to illegal child sexual abuse material quite literally hidden in plain sight—posted in private accounts using a setting that makes it visible only to the person logged in. From the outside, there’s nothing to see; on the inside, there are graphic videos of minors stripping naked, masturbating, and engaging in other exploitative acts. Getting in is as simple as asking a stranger on TikTok for the password.

Invite-only communities appear to be a method of avoiding content moderation (despite TikTok’s claims that these videos are reviewed by AI). The intention may also be to avoid reports from other platform users; the following quote shows how harmful content was reported by users only once it became visible outside the private community (even if it still wasn’t removed).

Adair first discovered the “posting-in-private” issue in March, when someone who was logged into the private TikTok account @My.Privvs.R.Open made public a video of a pre-teen “completely naked and doing inappropriate things” and tagged Adair. Adair immediately used TikTok’s reporting tools to flag the video for “pornography and nudity.” Later that day, she received an in-app alert saying “we didn’t find any violations.”

Calibrate: Algospeak

This “posting-in-private” phenomenon—which some refer to as posting in “Only Me” mode—isn’t hard to find on TikTok. While a straightforward search for “post in private” returns a message saying “this phrase may be associated with behavior or content that violates our guidelines,” the warning is easily evaded by algospeak. Deliberate typos like “prvt,” slang like “priv,” jumbled phrases like “postprivt” and hashtags like #postinprvts are just some of the search terms that returned hundreds of seemingly violative accounts and invitations to join. Some posts also include #viral or #fyp (short for “For You Page,” the feed TikTok’s more than a billion users see when they open the app) to attract more eyeballs. TikTok told Forbes it prohibits accounts and content mentioning “post to private” or variations of that phrase. Only after Forbes flagged examples of problematic algospeak did TikTok block some hashtags and searches that now pull up a warning: “This content may be associated with sexualized content of minors. Creating, viewing, or sharing this content is illegal and can lead to severe consequences.”

Within days of an active TikTok user following a small number of these private accounts, the app’s algorithm began recommending dozens more bearing similar bios like “pos.t.i.n.privs” and “logintoseeprivatevids.”

This quote illustrates how quickly algospeak can adapt: within days of TikTok blocking the terms Forbes flagged, new moderation-evading spellings were already in circulation.
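To see why simple blocklists struggle here, consider what a moderation-side keyword filter has to contend with: punctuation insertion (“pos.t.i.n.privs”), vowel-stripping (“prvt”), and word-joining (“postprivt”). A common defensive counter is to normalize text before matching. The sketch below is purely illustrative; the blocklist terms and matching strategy are my assumptions, not TikTok’s actual system:

```python
import re

# Hypothetical blocklist terms, loosely based on phrases the article says
# TikTok prohibits ("post to private" and variations).
BLOCKLIST = {"postinprivate", "postinpriv", "prvt"}

def normalize(text: str) -> str:
    """Lowercase and strip separators, so 'pos.t.i.n.privs' -> 'postinprivs'."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def strip_vowels(text: str) -> str:
    """Collapse vowels, so 'private'-style and 'prvt'-style spellings align."""
    return re.sub(r"[aeiou]", "", text)

def is_flagged(query: str) -> bool:
    """Match a query against the blocklist in both normalized forms."""
    norm = normalize(query)
    devoweled = strip_vowels(norm)
    return any(term in norm or strip_vowels(term) in devoweled
               for term in BLOCKLIST)
```

Even this normalization only catches mechanical variants of known terms; it does nothing against genuinely new coinages or slang, which is exactly the adaptation speed the article documents.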

Vulnerability: Inadequate Platform Moderation

“I apologize if this is crossing a boundary however I am desperate to get this the attention it needs,” [Adair] wrote to one TikTok employee, describing the “private posting” and the way she believes users are gaming the AI “by posting a black screen for the first few seconds” of these videos.

“I personally saw one of the videos that had been unprivated and it was a child completely naked and doing indecent things. I reported the video and it came back no violation,” she continued. “Since posting my video concerning this I’ve had two children come forward and share how they were groomed by one of these accounts and were later made aware that it was an adult behind the accounts. Please. Is there anything you can do to help?”

Adair “never heard back from anybody,” she told Forbes. “Not a single person.”

Platforms claim that harmful content will be removed, but it can still proliferate if a platform’s employees are unable to enforce those policies.

In this example we see that TikTok’s moderators were unable to prevent child sexual exploitation content from being hosted on their platform.
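The “black screen for the first few seconds” trick Adair describes suggests a classifier that over-weights a clip’s opening frames. A standard defensive counter is to sample frames spread across the whole video before classifying anything. This is a hypothetical sketch, assuming a generic frame list and classifier callback, not a description of TikTok’s actual pipeline:

```python
def sample_frames(frames, num_samples=8):
    """Pick frames spread across the whole clip, not just the opening,
    so a black intro segment cannot mask later content."""
    if not frames:
        return []
    step = max(1, len(frames) // num_samples)
    return frames[::step][:num_samples]

def review_clip(frames, classify):
    """Flag the clip if the classifier fires on any sampled frame."""
    return any(classify(frame) for frame in sample_frames(frames))
```

The design point is simply that a uniform sample over the clip’s duration makes the cost of the evasion proportional to the clip length, rather than letting a few blank seconds defeat review entirely.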

The sheer volume of post-in-private accounts that Forbes identified—and the frequency with which new ones pop up as quickly as old ones are banned—highlight a major blind spot where moderation is falling short and TikTok is struggling to enforce its own guidelines, despite a “zero tolerance” policy for child sexual abuse material.

We know that private videos are reviewed by TikTok’s AI moderation, but that review was clearly not sufficient to prevent TikTok from hosting child sexual exploitation content.

The company also said that all videos posted to the platform—both public and private, including those viewable only to the person inside the account—are subject to TikTok’s AI moderation and in some cases, additional human review.