Technique Spotlight: Impersonating an existing news organisation

Threat actors can impersonate existing news organisations when producing inauthentic content, which has the following strategic benefits:

Exploiting people’s trust in a news organisation to make an inauthentic message appear legitimate

When we encounter a new source of information we must decide whether we trust it to be truthful (and can therefore rely on the information it provides). Our trust in a source builds over time, based on our experience of the information it has delivered to us. A threat actor can exploit people’s existing trust in a source by pretending to be it when disseminating falsehoods.

Occupying fact-checking resources of the target organisation while they debunk the impersonation

When a news organisation is impersonated to spread inauthentic content, the organisation must invest resources in reducing the damage of the impersonation; leaving the fake unchecked could result in many people believing false content, or losing faith in the targeted organisation’s reporting capability.

From an operational perspective, members of the organisation with the skills required to safely and effectively debunk an impersonation must divert attention from their ongoing investigations, which may benefit threat actors who want other truths to remain hidden.

BBC disinformation reporter Alistair Coleman discusses the impact that an impersonation had on his team’s resources during the 2022 Russian invasion of Ukraine.

For transparency, it’s worth noting that I haven’t seen reporting which claims threat actors use this tactic with the intention of occupying a target organisation’s resources; it is nonetheless a real impact, so I’ve included this section for completeness.

Real World Example:

In this example a Twitter account impersonated RapTV (a real news organisation reporting on developments in the music industry) when publishing the following inauthentic story:

Tweet posted on 03 Apr 22 by @rabtvcom: “DaBaby microwaved his ramen for 3 minutes, but when he took it out the noodles were still kind of hard, so he’s going to put it back in for another minute ‼️😳” // “DABABY: “Usually 3 minutes is enough time to soften the noodles the way I like, but for whatever reason it didn’t work out this time. It’s possible I didn’t place it enough in the center of the tray, but l don’t know, I’m usually pretty good about doing that. It is a somewhat new microwave, but I’m 90% sure it has the same wattage as the old one, so I feel like the brand of microwave wouldn’t make this much of a difference as far as noodle cook time goes. Regardless, not that big of a deal. I can just pull up a youtube video while I wait.””

Some users replying to the post appeared to believe it was from the legitimate RapTV account, asking why the content was newsworthy:

Users responding to the impersonating post asking why the content shared was newsworthy.

It makes sense that people were convinced by the impersonation; the illegitimate account uses the same profile picture and mimics the writing style of the real RapTV:

Left: @raptvmcom. Right: @rabtvcom.

A typical user quickly scrolling through their feed may not notice the discrepancies, and may accept the posted content as genuine. In this scenario the threat actor was merely parodying RapTV, but other examples show the same tactic being used to achieve malicious goals.
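Lookalike handles like these are typically only a character or two away from the real thing. As a hedged illustration (this is my own sketch, not a tool used in the original analysis), a few lines of Python with the standard library’s difflib can quantify how close an impersonating handle is to the genuine one:

```python
from difflib import SequenceMatcher

def handle_similarity(real: str, fake: str) -> float:
    """Return a similarity ratio in [0, 1].

    Values near 1.0 suggest a lookalike handle; exact matches score 1.0.
    Comparison is case-insensitive, since handles are.
    """
    return SequenceMatcher(None, real.lower(), fake.lower()).ratio()

# The handles from the example above.
score = handle_similarity("@raptvmcom", "@rabtvcom")
print(f"similarity: {score:.2f}")  # high similarity (above 0.8)
```

A simple threshold on this ratio (combined with checks like account age or a shared profile picture) is one crude way a moderation pipeline might surface candidate impersonation accounts for human review.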

Further Reading:

First Draft’s categorisation of information disorder includes a section on Imposter Content, which details a variety of cases where legitimate news organisations were impersonated to deceive victims

I first came across imposter content designed to cause harm when I worked in 2014 for UNHCR, the UN Refugee Agency. We were constantly battling Facebook posts where smugglers were creating pages with the UNHCR logo and posting images of beautiful yachts and telling refugees that they should “call this number to get a place on one of these boats that would take them safely across the Mediterranean.”

Understanding Information disorder – Imposter Content by Claire Wardle in October 2019


Mapping to the DISARM framework

DISARM has the technique “Prepare Assets Impersonating Legitimate Entities” (T0099), in which “an influence operation may prepare assets impersonating legitimate entities to further conceal its network identity and add a layer of legitimacy to its operation content”.

In this post I specified the type of Legitimate Entity (a news organisation), which would make this a sub-technique of T0099. It remains to be seen whether there’s justification for different entities having their own categories; as we continue to uncover more impersonations used in influence operations, we can assess whether such sub-techniques should exist.

Mapping DISARM’s phases to DDB’s stages

Threat actors would create impersonations while Preparing for an influence operation. This could happen during Positioning (if creating something like a spoofed website to host false information) or during Production (if making something like a fake image bearing the target’s branding).

This is a Stage 2 profile. You can read more about Profile Stages here.