The Disinformation Cycle

Disinformation is slippery and hard to understand; people’s intentions and beliefs are hidden behind smoke and mirrors – and it’s not always possible to see the whole story.

Some articles related to disinformation focus on debunking illegitimate information found online, while others detail strategies used to amplify disinformation – or even examine its real-world impact. To help find exactly what you’re looking for, content submitted to the archives is categorised based on which stages of the Disinformation Cycle it covers.

Disinformation Cycle Stages

Classifying the actions involved in the lifecycle of disinformation helps us understand the behaviours of aggressors (i.e. people or groups aligned with a piece of disinformation), and standardises how those actions are described across different instances of disinformation.

The Disinformation Cycle, adapted from The Malign Foreign Influence Campaign Cycle

The steps to spreading disinformation usually happen in order (from research to calibrate), with the exceptions of deciding to act (people can decide to join in spreading disinformation at any stage) and impact (which can happen at any point after someone sees disinformation which has been published). This process is cyclical; aggressors calibrate and repeat their efforts until they’ve achieved the desired impact.
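The ordering described above can be sketched as a small data model. This is a hypothetical illustration – the stage names come from this article, but the code itself isn’t part of any real system:

```python
from enum import Enum

class Stage(Enum):
    """Stages of the Disinformation Cycle (names from the article)."""
    DECIDE_TO_ACT = "decide to act"   # can happen at any stage
    RESEARCH = "research"
    POSITION = "position"
    PRODUCE = "produce"
    PUBLISH = "publish"
    AMPLIFY = "amplify"
    CALIBRATE = "calibrate"
    IMPACT = "impact"                 # any point after publication

# The ordered loop from research to calibrate; aggressors cycle
# through it until they achieve the desired impact.
ORDERED_LOOP = [
    Stage.RESEARCH, Stage.POSITION, Stage.PRODUCE,
    Stage.PUBLISH, Stage.AMPLIFY, Stage.CALIBRATE,
]

def next_stage(stage: Stage) -> Stage:
    """Return the stage that usually follows `stage` in the loop,
    wrapping calibrate back around to research."""
    i = ORDERED_LOOP.index(stage)
    return ORDERED_LOOP[(i + 1) % len(ORDERED_LOOP)]
```

Note that “decide to act” and “impact” are deliberately excluded from the ordered loop, since the article places them outside the usual sequence.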

Decide to Act

People decide to act when they join a disinformation campaign. There are lots of different reasons people might do that – perhaps they’ve been instructed to conduct a campaign that moves the Overton window in a particular country, or maybe they want to trick people into believing bullshit about minority groups. Making the decision to take part in a campaign can happen at any stage of the cycle; you don’t need to have personally produced disinformation to go on to amplify it.


Research

Effective disinformation campaigns require a deep understanding of a variety of areas. Awareness of existing fissures in a target audience (e.g. tension between citizens ahead of an election) provides avenues for producing emotive content likely to be shared, and knowledge of platforms’ features (like Facebook’s Groups or Instagram’s hashtags) helps aggressors target their positioning and amplification efforts in the right places.


Position

Aggressors need accounts ready to publish and amplify disinformation; a Twitter user with no followers isn’t going to get much traction on any of their Tweets. And if you want to use bot accounts to inauthentically boost posts into the trending lists, effort needs to go into getting those accounts in position first (and into keeping them undetected by platforms calibrating their defences).


Produce

Content to be published needs to be created or obtained. Disinformation can take a variety of forms, from entirely fabricated content to legitimate information shared with false context, or with ulterior motives.

Related Tags: Content Type; Content Context


Publish

Publication refers to when produced content is initially shared. This can happen online (e.g. on a YouTube channel) or offline (e.g. in a televised interview with mainstream media). During publication, effort is typically made to obfuscate the identity of the aggressor.

Related Tags: Dissemination Platform


Amplify

Any steps taken which increase the audience of published content count towards amplification. This can include methods such as bot accounts boosting posts, public figures or celebrities sharing it with their followers, or even mainstream media broadcasting it to their viewers.

Related Tags: Amplification Method


Calibrate

There are lots of different ways that we can defend ourselves and others against disinformation. Unfortunately, aggressors are aware of this – and when steps are taken to counter their efforts, they recalibrate their strategies and carry on.


Impact

Effective disinformation can convince defenders to change their opinions about a subject, or confuse them enough to sow mistrust in critical institutions.

It’s impossible to know the full extent of the impact of a piece of disinformation – it could have influenced any defender exposed to it – but there are cases where we can link real-world consequences to particular campaigns or theories.

Aggressors and Defenders

The Disinformation Cycle can be viewed from the perspective of the aggressor or the defender (i.e. someone aligned with the target of disinformation).

Articles are categorised based on the perspective (aggressor / defender) in addition to their Disinformation Cycle stage.
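As a sketch of how that two-axis categorisation could be represented – the `Article`, `Perspective`, and `matches` names below are illustrative, not the archive’s actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Perspective(Enum):
    """The two perspectives from which a cycle stage can be viewed."""
    AGGRESSOR = "aggressor"
    DEFENDER = "defender"

@dataclass
class Article:
    title: str
    stages: set                # Disinformation Cycle stages the article covers
    perspective: Perspective
    # Related tags, e.g. {"Amplification Method": "bot accounts"}
    tags: dict = field(default_factory=dict)

def matches(article: Article, stage: str, perspective: Perspective) -> bool:
    """Would this article turn up when filtering by stage and perspective?"""
    return stage in article.stages and article.perspective == perspective
```

For example, an article about bot-driven boosting might be stored as `Article("Bot boosting", {"amplify"}, Perspective.AGGRESSOR, {"Amplification Method": "bot accounts"})`, and would match a filter for the aggressor-side “amplify” stage.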

Decide to Act

Increasing the cost of participating in a disinformation campaign (e.g. through legal action) can deter people from joining in.


Research

There’s not much you can do to stop aggressors conducting research, but you can make it harder for them to get the information they need – for example, by hiding data related to user demographics, or by encouraging users not to reveal unnecessary amounts of information about themselves online.

Defenders also benefit from learning more about the communities and platforms that aggressors target; that understanding can be used to decide which preventative measures to prioritise.


Position

This covers steps taken to purge aggressors who have positioned themselves within platforms (such as removing inauthentic accounts, or accounts which have repeatedly shared dangerous misinformation), as well as measures that make it harder for aggressors to position themselves in the first place (such as rigorous Turing tests at account creation).


Produce

Actions taken to increase the cost or difficulty of producing inauthentic content. Think of counterfeit money: laws were introduced to punish those caught making it, and steps were taken to make notes and coins harder to imitate.

It’s more difficult to frustrate the creation of fake online content than fake physical content, so this doesn’t come up too much.


Publish

Publication of disinformation is technically a very small step of the Disinformation Cycle; aggressors have already positioned their accounts and produced the content – now they just need to hit upload!

As such, most defence against publishing is mainly tied into countering the aggressors’ positioning efforts.


Amplify

Any steps taken which decrease the probability that published content will be successfully amplified, such as encouraging people to read past inaccurate headlines before sharing with their followers.


Calibrate

Another difficult stage to defend against; how do you stop aggressors from updating their strategies?

“Not much” is the unfortunate answer. Countering the inception and spread of lies is a Sisyphean task, which can be a bit of a bummer. Hey, at least it’s interesting!


Impact

You can take solace in the fact that, while we can’t stop aggressors from calibrating their techniques, there’s a lot we can do to mitigate the impact of their efforts.

This typically involves educating potential targets, providing context and enabling scrutiny.

Disinformation Cycle Origins

The Disinformation Cycle is adapted from The Malign Foreign Influence Campaign Cycle, a framework produced by the U.S. Attorney General’s Cyber Digital Task Force to describe disinformation efforts by foreign influences. I’ve rehosted it below; check out page 26 of the PDF (page 13 by the document’s own page numbering).

To keep things nice and simple, the only changes I’ve made are shortening the title and adding the “Decide to Act” and “Impact” stages; the rest maps perfectly onto the MFICC (please don’t sue me, U.S. Attorney General – keeping everything pretty much the same was the least annoying way for me to make a new standard).