The DISARM Foundation has created a version of MITRE ATT&CK for disinformation: it catalogues the tactics used by threat actors in each stage of an influence operation.
Introduction to DISARM
DISARM is a framework which we can use to better understand the tactics and techniques used by threat actors during an influence operation (IO). DISARM breaks down IOs into four phases: Plan, Prepare, Execute, and Assess.
Threat actors achieve one or more broader Tactical goals as they progress through each of these Phases. Tactical goals can be met using a variety of different Techniques.
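This Phase > Tactic > Technique hierarchy can be sketched as plain data. Below is a minimal illustration in Python, pre-loaded with the Phase 1 IDs catalogued later in this post; the dataclasses are my own modelling for illustration, not part of DISARM itself:

```python
# Sketch of DISARM's Phase > Tactic > Technique hierarchy.
# IDs and names are the ones cited in this post; the data
# structures are illustrative, not an official DISARM format.
from dataclasses import dataclass, field

@dataclass
class Technique:
    id: str    # e.g. "T0073"; sub-techniques use dotted IDs like "T0084.004"
    name: str

@dataclass
class Tactic:
    id: str    # e.g. "TA01"
    name: str
    techniques: list[Technique] = field(default_factory=list)

@dataclass
class Phase:
    name: str
    tactics: list[Tactic] = field(default_factory=list)

plan = Phase("Plan", tactics=[
    Tactic("TA01", "Plan Strategy", [
        Technique("T0073", "Determine Target Audience"),
        Technique("T0074", "Determine Strategic Ends"),
    ]),
    Tactic("TA13", "Target Audience Analysis", [
        Technique("T0080", "Map Target Audience Information Environment"),
        Technique("T0081", "Identify Social and Technical Vulnerabilities"),
    ]),
])

# A threat actor progresses through Phases, achieving each Tactical
# goal via one or more Techniques:
for tactic in plan.tactics:
    print(tactic.id, tactic.name, "->", [t.id for t in tactic.techniques])
```

Modelling it this way makes the later analysis mechanical: every observed behaviour in an IO gets tagged with a technique ID under the tactic it served.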
Using DISARM to make sense of a real-world Influence Operation
I think it’s useful to have real-world examples of the steps people take to manipulate us online, so I’m going to map elements of an IO to DISARM’s tactics and techniques. Since this is my first time using DISARM I’m going to examine an IO I’m already familiar with: Reddit Mind-Hacktivists conduct a Disinformation Campaign against Kelloggs. This was an atypical campaign in that it was planned on a publicly accessible social media platform, and the threat actors shared recordings and write-ups of their participation (most influence operations are very secretive about their actions). This gave a behind-the-scenes view which we aren’t usually privy to, and is fun to analyse!
If you’d like to learn more about the background to this campaign (why are Redditors attacking Kelloggs?), you can read this summary from my initial reporting. To provide a very quick recap here: Kelloggs underpaid their unionised employees, who striked (struck?) in response. Kelloggs didn’t want to pay more, so they tried to hire replacement staff instead. Members of the AntiWork Reddit community flooded Kelloggs’ online job application portal with false data, making it more difficult to replace the strikers (the strucken?).
In this post I’ll provide an overview of which DISARM techniques were used to achieve tactical goals within each of the four phases introduced above. If you like, you can play along at home! Check out DISARM’s technique explorer and see if you can find any that were used in this attack (or others you’re familiar with).
Phase 1: Plan
“Envision the desired outcome. Lay out effective ways of achieving it. Communicate the vision, intent, and decisions, focusing on expected results.”
Planning for this IO was done by user BloominFunions (“BF”) prior to posting their call for collective action on Reddit. We can tell this from their post; they had figured out that swarming Kelloggs’ job application portal would be a good way to make it hard for them to find new employees. Although BF likely didn’t think of it in these terms, they had conducted Target Audience Analysis (a Tactic identified within DISARM) by Mapping their Information Environment and Identifying Vulnerabilities within it (both DISARM defined Techniques which can be used to achieve the Audience Analysis Tactical goal).
Throughout each Phase I’ve identified the Tactics and associated Techniques which were exploited as part of the IO:
Plan Strategy (Tactic 01, or “TA01”; all Phases, Tactics, and Techniques have identifying tags in DISARM): “Define the desired end state, i.e. the set of required conditions that defines achievement of all objectives”
- Determine Target Audience (T0073): User BF identified Kelloggs’ hiring managers as targets for the IO.
- Determine Strategic Ends (T0074): BF provided a clear strategic goal in their post; “It’s time to clog their toilet of an application pipeline” – i.e. make it harder for Kelloggs to replace striking staff.
Target Audience Analysis (TA13): “Identifying and analyzing the target audience member locations, political affiliations, financial situations, and other attributes that an influence operation may incorporate into its messaging strategy”.
- Map Target Audience Information Environment (T0080): BF identified that Kelloggs got their information on “who wants to take over striking employees’ jobs” from the source “Kelloggs’ online job application portal”, making said portal the best place to spread false information to achieve their strategic goals.
- Identify Social and Technical Vulnerabilities (T0081): Online job application portals aren’t designed to prevent inauthentic job applications (because most people wouldn’t go to the effort of applying for a job they don’t want or need), so there were few barriers to producing false applications. By design they also allow users in any location to apply for the job, giving a way for people to support strikers remotely.
Other users later joining the IO identified technical vulnerabilities which enabled automation of the application process.
Phase 2: Prepare
“Activities conducted before execution to improve the ability to conduct the action. Examples include: development of the ecosystem needed to support the action: people, network, channels, content etc.”
Develop Content (TA06): “Create or acquire text, images, and other content”
- Reuse Existing Content (T0084): Some techniques have sub-techniques. T0084 has several for re-using existing content, so we can focus on the one which best fits this campaign:
- Plagiarize Content (T0084.004): The application process required uploading a resume, and BF instructed participants to use “a sample resume from google images”. This counts as Plagiarised Content (taking content and pretending you made it) rather than Appropriated Content (re-using content but saying where you got it from), since properly attributing the resumes to Google Images would give the game away.
Establish Social Assets (TA15): “Social Assets” cover a variety of resources, including establishing “operational personnel”.
- Acquire/Recruit Network (T0093): Reddit’s AntiWork community had ~1.4m members at the time of BF’s post, and they were already angry at Kelloggs’ actions. BF recruited participants from an existing network of ideologically aligned users, who were capable of performing the actions required for a successful campaign (i.e. inauthentically applying for jobs at Kelloggs).
Microtarget (TA05): “Target very specific populations of people”
- Create Localized Content (T0101) “Localized content refers to content that appeals to a specific community of individuals, often in defined geographic areas [.., and] may help an operation increase legitimacy, avoid detection, and complicate external attribution.”: BF produced a variety of ZIP codes for applicants to use in their fake address for each hiring factory, so that “they can’t filter out our apps easily”.
Phase 3: Execute
“Run the action, from initial exposure to wrap-up and/or maintaining presence etc”
Maximize Exposure (TA17): “Maximize exposure of the target audience to incident/campaign content via flooding, amplifying, and cross-posting.”
- Flooding the Information Space (T0049):
- Conduct Swarming (T0049.005) “Swarming refers to the coordinated use of accounts to overwhelm the information space with operation content. Unlike information flooding, swarming centers exclusively around a specific event or actor rather than a general narrative”: As planned, AntiWork members ‘swarmed’ the Kelloggs job application portal, posting false information.
- Bots Amplify via Automated Forwarding and Reposting (T0049.003): Users joining the IO created bots which could automatically apply for Kelloggs’ advertised roles.
- Incentivize Sharing (T0120) “Incentivizing content sharing refers to actions that encourage users to share content themselves, reducing the need for the operation itself to post and promote its own content”: In BF’s original post their call to action incentivised community participation; “It’s time for r/antiwork to make the news as a formidable fighter for the average worker. Nothing is scarier to a business than organized labor. I submitted four applications. How many did you submit?”
- Bypass Content Blocking (T0121.001): Kelloggs added captcha technology to prevent automated applications, but users were able to bypass these defences.
Phase 4: Assess
“Evaluate effectiveness of action, for use in future plans”
BF claimed success (“We did it Reddit!”), citing mainstream media reporting on the campaign and noting that Kelloggs had added anti-bot technology to their application portal. Neither of these was stated as a goal in the Plan phase; since participants couldn’t view the flooded information environment itself (presumably the inboxes of Kelloggs’ hiring managers), it was difficult to assess the success of their content swarming directly.
- Measure Effectiveness (T0133):
- Awareness (T0133.003): BF claimed that “By making the news for something like this, we send shockwaves. Not only to Kelloggs execs, but other companies who are afraid of their own workforce”, linking several different articles on the campaign, and why it was taking place (raising awareness of Kelloggs’ poor employment practices, and potentially having a negative impact on the brand).
- Behaviour changes (T0133.001): Kelloggs took action to reduce users’ capability to automate flooding of their information environment, indicating that there was enough of a negative impact that preventative measures were required.
Aligning the DISARM framework to DDB’s framework (which is itself aligned with the MFICC (Malign Foreign Influence Campaign Cycle) framework)
When I created DDB I took the stages (‘phases’) of The Malign Foreign Influence Campaign Cycle, and added two for fun to help me catalogue motivations for IOs, and the impacts they have on people. It looks like this:
Since they all follow the same logic of “each IO goes through the same broad steps, and there are different things you do in each step”, we can pretty easily map DDB onto DISARM:
And since DISARM uses codes to uniquely identify Techniques, I can link up things I’ve already written to their work. In DDB we have “Influencers being paid to amplify inauthentic messages to their audience”, which maps to “Co-Opt Trusted Sources (T0100)” > “Co-Opt Influencers (T0100.003)” in DISARM. More closely related to this IO, DDB has “Manufacturing a Tactical Infodemic to make true information harder to find” where DISARM has “Flooding the Information Space (T0049)” > “Conduct Swarming (T0049.005)”. I love that I can re-use my existing work to help people understand DISARM’s techniques, and in the future I’d like to map more of DDB to them.
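Because DISARM technique IDs are stable, this DDB-to-DISARM mapping can live as plain data. A minimal sketch (the dictionary structure and lookup helper are my own illustration; the entry names and IDs are the pairs cited above):

```python
# Sketch: mapping DDB entries to DISARM technique IDs.
# The DDB names and DISARM IDs are the pairs cited in this post;
# the data structure itself is illustrative, not an official format.
DDB_TO_DISARM = {
    "Influencers being paid to amplify inauthentic messages to their audience": [
        ("T0100", "Co-Opt Trusted Sources"),
        ("T0100.003", "Co-Opt Influencers"),
    ],
    "Manufacturing a Tactical Infodemic to make true information harder to find": [
        ("T0049", "Flooding the Information Space"),
        ("T0049.005", "Conduct Swarming"),
    ],
}

def disarm_ids(ddb_entry: str) -> list[str]:
    """Return the DISARM technique IDs mapped to a DDB entry."""
    return [tid for tid, _name in DDB_TO_DISARM.get(ddb_entry, [])]

print(disarm_ids("Manufacturing a Tactical Infodemic to make true information harder to find"))
# → ['T0049', 'T0049.005']  (sub-technique T0049.005 sits under its parent T0049)
```

Keeping the parent and sub-technique together in each entry preserves DISARM’s hierarchy, so a DDB profile can be cited at whichever level of specificity the evidence supports.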
Looking back at the Reddit v Kelloggs IO, I claimed that BF’s call to action for members of the community to Conduct Swarming counted as Incentivize Sharing (T0120). However, on closer inspection this Technique may refer to cases where the inauthentic content itself incentivises sharing (e.g. false information about emergency services including the text “please share, this may save a life”), rather than encouraging users to knowingly post false content in the first place. If I were able to make additions to DISARM I would add ‘Encourage Participation in Swarming’ as a sub-technique of ‘Flooding the Information Space’ or ‘Conduct Swarming’ (depending on whether sub-sub-techniques are allowed) and tag that instead of ‘Incentivize Sharing’.
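If sub-sub-techniques aren’t allowed, the proposed addition would simply take the next unused dotted ID under T0049. A sketch of that bookkeeping, where the helper function and the resulting ID are entirely hypothetical (only DISARM assigns real IDs), seeded with the two T0049 sub-techniques cited in this post:

```python
# Sketch: slotting a proposed sub-technique under its parent technique.
# The parent ID and existing sub-technique IDs are from this post; the
# numbering produced here is hypothetical until DISARM assigns a real ID.
SUBTECHNIQUES = {
    "T0049.003": "Bots Amplify via Automated Forwarding and Reposting",
    "T0049.005": "Conduct Swarming",
}

def propose_subtechnique(parent: str, existing: dict, name: str) -> str:
    """Pick the next unused dotted suffix under a parent (illustrative only)."""
    used = {int(tid.split(".")[1]) for tid in existing if tid.startswith(parent + ".")}
    new_id = f"{parent}.{max(used, default=0) + 1:03d}"
    existing[new_id] = name
    return new_id

new_id = propose_subtechnique("T0049", SUBTECHNIQUES, "Encourage Participation in Swarming")
print(new_id)  # → "T0049.006", the next free suffix given the entries above
```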
There’s also the possibility for new Tactics and Techniques to be added to DISARM based on DDB content. DISARM’s “Prepare Assets Impersonating Legitimate Entities (T0099)” Tactic covers a variety of impersonations, including “news outlets, public figures, organizations, or state entities”. DDB’s existing profiles of “Impersonating a Fact Checker” and “Impersonating a News Organisation” could be added as sub-techniques to T0099, letting us be more specific in our understanding of how techniques have been used in the past.