Threat actors use a variety of Techniques to achieve Tactical goals when conducting an influence operation. DDB’s Spotlights provide accessible explanations of what these techniques are, and collate reporting on their use.

On this page, Spotlights are organised by the stage of an influence operation they apply to. Not all stages have a relevant Spotlight yet; this is to be expected, as threat actors’ activities are more visible at some stages than at others.

Position
Aggressors need accounts ready to publish and amplify disinformation; a Twitter user with no followers isn’t going to get much traction on any of their Tweets. And if you want to use bot accounts to inauthentically boost posts into trending, effort needs to be put into getting them into position first, and into keeping them undetected by platforms calibrating their defences.
Technique Name | Description |
---|---|
Compromise Legitimate Accounts (T0011) | A threat actor can hijack someone’s credibility by accessing their social media accounts (for example by using stolen passwords), and posting as them. |
Impersonating a Fact Checker | Impersonating a fact checker to spread fake fact checks makes people think it’s hard to get accurate information about a topic. It also exploits people’s existing faith in fact checkers, increasing the probability that people believe posted content. |
Impersonating an Existing News Organisation | Threat actors can exploit people’s existing trust in a news organisation by posing as it when disseminating falsehoods. This also wastes the resources of the targeted organisation. |
Using AI-generated profile pictures in inauthentic social media accounts | Computer programs can now create realistic-looking pictures of people who don’t physically exist. Threat actors use these to make inauthentic social media accounts appear more legitimate. |

Produce
Content to be published needs to be created or obtained. Disinformation can take a variety of forms, from entirely fabricated content to legitimate information shared with false context.
Technique Name | Description |
---|---|
Presenting legitimate media with false context to support a desired narrative | It can be hard to falsify a convincing piece of media, so one strategy is to take real content and create a false context for it which matches the desired narrative. |
Using Online Harassment to Reduce the Spread of Unwanted Information | Aggressors can reduce the spread of information detrimental to their goals by harassing or threatening the users producing it, reducing the probability that those users will do so again in the future. |
Using ‘Paper Terrorism’ to coerce targets and waste their resources | Paper Terrorism is a harassment tactic in which people coordinate to hit a target with many illegitimate lawsuits at the same time, threatening the target’s money or freedom unless they acquiesce to a set of demands. |
Using Whataboutism to avoid addressing accusations | To avoid addressing an accusation, the accused can return fire with a whataboutism (“You’re criticising my actions? But what about the bad actions you’ve taken?”), shifting the conversational focus onto the accuser’s imperfections. |
Vulnerability Name | Description |
---|---|
Low quality moderation of non-English language content | A former Google leader who worked on trust and safety issues acknowledged that the company focuses heavily on English-language enforcement and is weaker across other languages and smaller markets. |

Amplify
Any steps taken which increase the audience of published content count towards amplification. This can include bot accounts boosting posts, public figures or celebrities sharing to their followers, or even mainstream media broadcasting to their viewers.
Technique Name | Description |
---|---|
Influencers being paid to amplify inauthentic messages to their audience | Building an audience that will reliably read the content you post is difficult, so threat actors can pay others who have already built an audience to share what they’ve produced. This has the added benefit of appearing more authentic. |
Boosting Disinformation’s Audience while Debunking it | Sometimes people come across an inauthentic narrative that they don’t believe, or that they disagree with morally. They re-share it with the intent of mocking, debunking, or expressing incredulity, but risk amplifying its reach. |
Inauthentically Boosting Content to Trending Sections of Social Media Sites | Aggressors who want to engineer discussion around a topic can use inauthentic methods to get it trending. This exposes more victims to their narratives, and makes those narratives appear more popular than they are. |
Manufacturing a Tactical Infodemic to make true information harder to find | Let’s say there is someone who wants to find out about a topic, but you don’t want them to succeed. One way you could make it harder for them is to intentionally flood the pool of information surrounding said topic with false data. |
Vulnerability Name | Description |
---|---|
The Illusory Truth Effect | If you see something repeated enough times, it seems more true. Multiple studies show that the effect works on 85% of people. Worse, it still happens even if the information isn’t plausible, and even if you know better. |

Calibrate
Online platforms try to stop influence operations. Threat actors don’t want to be stopped, so they take steps to avoid detection and disruption. Techniques are put in the ‘calibrate’ category when they are something a threat actor does to adapt to defences against their work.
Technique Name | Description |
---|---|
Algospeak | Algospeak replaces words likely to trigger moderation with coded substitutes or lookalike spellings (for example, ‘unalive’ instead of ‘kill’), making it harder for automated moderation to detect harmful content. |
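
To see why algospeak works, consider a minimal sketch of a keyword-based moderation filter. The blocklist and example posts below are invented for illustration, and real moderation systems are far more sophisticated, but the underlying weakness is the same: exact-match filtering breaks down as soon as a word’s spelling is perturbed.

```python
import re

# Hypothetical blocklist a naive automated moderator might check against.
BLOCKLIST = {"attack", "weapon"}

def naive_filter(post: str) -> bool:
    """Flag a post if any word in it exactly matches the blocklist."""
    words = re.findall(r"[a-z]+", post.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_filter("plan the attack at dawn"))   # True  -> flagged
print(naive_filter("plan the att4ck at dawn"))   # False -> digit substitution slips through
print(naive_filter("plan the a.t.t.a.c.k"))      # False -> punctuation splits the word apart
```

Platforms respond by expanding their matchers to cover known substitutions, and algospeak evolves in turn; it is this adaptation to platform defences that places the technique in the calibrate stage.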