People are trying to manipulate you online. Let's stop that happening!
Threat actors use a variety of Techniques to achieve Tactical goals when conducting an influence operation. DDB’s Spotlights provide accessible explanations of what these techniques are and collate reporting on their use.
On this page, Spotlights are organised by which stage of an influence operation they apply to. Not every stage has a relevant Spotlight yet; threat actors’ activities are simply more visible at some stages than at others.
Aggressors need accounts ready to publish and amplify disinformation. A Twitter account with no followers isn’t going to get much traction on any of its Tweets, and anyone who wants to use bot accounts to inauthentically boost posts into trending needs to put effort into getting those accounts into position first (and keeping them undetected by platforms calibrating their defences).
Impersonating a Fact Checker to spread fake fact checks makes people think it’s hard to get accurate information about the topic. It also exploits people’s existing faith in Fact Checkers to increase the probability that people believe the posted content.
People have made computer programs (generative adversarial networks, or GANs) that can create realistic-looking pictures of people who don’t physically exist. Threat actors use these to make inauthentic social media accounts appear more legitimate.
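To make this concrete, here is a minimal sketch of how such faces are generated, using the pretrained progressive GAN (PGAN) that facebookresearch publishes on PyTorch Hub. The loading call and model name follow the Hub’s documentation; the batch size and output filename are our own choices.

```python
import torch
import torchvision

# Load a progressive GAN pretrained on the CelebA-HQ face dataset,
# as documented on PyTorch Hub.
use_gpu = torch.cuda.is_available()
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-512', pretrained=True,
                       useGPU=use_gpu)

# Sample random latent vectors; each one maps to a photorealistic
# face that belongs to no real person.
noise, _ = model.buildNoiseData(4)
with torch.no_grad():
    faces = model.test(noise)

# Save the batch as a single image grid for inspection.
torchvision.utils.save_image(faces.clamp(-1, 1), 'fake_faces.png',
                             normalize=True)
```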
Content that is to be published first needs to be created or obtained. Disinformation can take a variety of forms, from entirely fabricated content to legitimate information shared with false context.
Paper Terrorism is a harassment tactic in which people coordinate to hit a target with many illegitimate lawsuits at the same time, threatening to use them to take away the target’s money or freedom unless the target acquiesces to a set of demands.
To avoid addressing an accusation, the accused can return fire with a whataboutism: “You’re criticising my actions? But what about the bad actions you’ve taken?”, moving the conversational focus onto the accuser’s imperfections.
A former Google leader who worked on trust and safety issues acknowledged that the company focuses heavily on English-language enforcement and is weaker across other languages and smaller markets.
Any step taken to increase the audience of published content counts towards amplification. This can include methods such as bot accounts boosting posts, public figures or celebrities sharing to their followers, or even mainstream media broadcasting to their viewers.
Building an audience that will reliably read the content you post is difficult, so threat actors can pay others who have already built an audience to share what they’ve produced. This has the added benefit of appearing more authentic.
Sometimes people come across an inauthentic narrative that they don’t believe, or that they find morally objectionable. They re-share it with the intent of mocking, debunking, or expressing incredulity, but in doing so they risk amplifying its reach, as the sketch below illustrates.
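Why does a mocking share still help the hoax? Many feeds rank posts by raw engagement, with no notion of whether an interaction endorses or condemns the content. The following toy model illustrates this; the scoring weights and numbers are invented for the example, not taken from any real platform.

```python
# Toy feed-ranking model: a post's score is its total engagement,
# regardless of whether each interaction endorses or mocks it.
# The weights here are invented for illustration.
def feed_score(likes, shares, replies):
    return likes + 3 * shares + 2 * replies

hoax = {'likes': 40, 'shares': 10, 'replies': 5}
print(feed_score(**hoax))  # 80 -- shown to relatively few feeds

# 200 people quote-share the hoax to debunk it, sparking 120 replies...
hoax['shares'] += 200
hoax['replies'] += 120

# ...and the ranking model now surfaces the hoax far more widely.
print(feed_score(**hoax))  # 920
```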
Aggressors who want to engineer discussion around a topic can use inauthentic methods to get it trending. This exposes more victims to their narratives, and makes those narratives appear more popular than they really are.
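Here is a simplified simulation of that boost, under the assumption that a platform marks a hashtag as trending once it crosses a fixed posts-per-hour threshold. Real trending algorithms weigh velocity, novelty, and account quality; the threshold and all counts below are invented.

```python
import random
from collections import Counter

TRENDING_THRESHOLD = 800  # posts per hour; an invented cut-off

def posts_this_hour(organic_users, bot_accounts, tag):
    posts = Counter()
    # Organic users post about a spread of topics; only some mention the tag.
    for _ in range(organic_users):
        posts[random.choice(['#news', '#sports', '#music', tag])] += 1
    # Each bot posts the target tag repeatedly, mimicking a burst of interest.
    for _ in range(bot_accounts):
        posts[tag] += random.randint(3, 6)
    return posts

tag = '#manufactured'
for label, bots in [('organic only', 0), ('with 150 bots', 150)]:
    count = posts_this_hour(2000, bots, tag)[tag]
    verdict = 'TRENDING' if count >= TRENDING_THRESHOLD else 'not trending'
    print(f'{label}: {count} posts -> {verdict}')
```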
Let’s say there is someone who wants to find out about a topic, and you don’t want them to succeed. One way you could make it harder for them is to intentionally flood the pool of information surrounding that topic with false data.
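A toy model of the effect: imagine a researcher who skims search results and trusts whichever claim appears most often. Every claim and count below is invented for illustration.

```python
from collections import Counter

def apparent_consensus(search_results):
    """Return the claim that appears most often, and its count."""
    return Counter(search_results).most_common(1)[0]

# Before the flood: accurate reporting dominates what the researcher sees.
genuine = ['the spill was contained'] * 12 + ['the spill is spreading'] * 3
print(apparent_consensus(genuine))    # ('the spill was contained', 12)

# The aggressor mass-publishes the false claim across many outlets...
flood = ['the spill is spreading'] * 40

# ...and a naive reader's "consensus" flips to the false claim.
print(apparent_consensus(genuine + flood))  # ('the spill is spreading', 43)
```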
If you see something repeated enough times, it seems more true. Multiple studies of this ‘illusory truth effect’ show that it works on around 85% of people. Worse, repetition still works even if the information isn’t plausible, and even if you know better.
Online platforms tend to try to stop influence operations. Threat actors don’t want to be stopped, so they take steps to evade those efforts. Techniques are put in the ‘calibrate’ category if they are something a threat actor does to adapt to defences against their work.