Let’s kick things off with some of the good stuff: a video someone made showing themselves creating a bot which automates the production and publication of disinformation:
It’s worth stating at this point that I am not taking a moral stance when I call this disinformation; it simply meets the criteria – i.e. knowingly spreading information which is false (like the claim that one is willing and able to fill the positions advertised by Kellogg's). It’s fun when we get to see behind the scenes of a disinformation campaign; often when assessing disinformation we need to work backwards from the end result, but in this case we don’t even get to see the end result (i.e. the inauthentic job applications). I also like to think about weird or edge-case examples of disinformation, as these help us better test our understanding of how it works (rather than assessing the same old stuff we’ve seen before).
Some Backstory
Back to the Kellogg's thing – here is a basic summary and some background. Online community Reddit allows users to join sub-communities (called “subreddits”) based around pretty much any topic. One such subreddit, called AntiWork (where this story takes place), has over a million subscribers and is growing rapidly. It’s hard to properly encapsulate the ideology of AntiWork users (here is one person’s take, and here is AntiWork’s own FAQ), but for now we can focus on the part of the community that gets angry when corporations screw over their employees. At the moment they have their eyes on Kellogg's.
A pinned megathread in AntiWork tells us why this is the case: 1,400 unionised Kellogg's workers began a strike on 05 Oct, stating that they deserved fair pay, as they often worked over 80 hours a week and kept the company going during the pandemic. On 05 Dec the union “rejected a 3% (below inflation!) pay deal”, and on 07 Dec Kellogg's announced they would replace all 1,400 striking workers and were accepting online applications for replacements.
The Disinformation Campaign
AntiWork user BloominFunions responded to this news by encouraging other members of the community to send inauthentic applications for the new positions in order to “clog their toilet of an application pipeline”:

The thread contained advice on how to appear legitimate when applying, providing a variety of zip codes matching the advertised cities, and recommending uploading a CV downloaded from Google Images. Other users built upon this advice by showing that specific web browsers speed up repeat applications, creating and sharing code which automates the work, and even improving the program to bypass anti-bot detection systems later put in place by Kellogg's.
While Kellogg's haven’t shared details on how this campaign has impacted them, we know enough applications were sent that the job portal crashed.
What can we learn from this?
I’m trying to build out a database of different techniques, narratives, personas, and general “things” related to disinformation. A catalogue of previously seen disinformation tactics could be helpful when facing them again, or for predicting how people might try to disinform us in the future. But to get to that point we need to pick apart campaigns that have already happened, so we can identify useful traits and commonalities. I think the Kelloggs campaign is a nice example we can learn from to prevent more nefarious attacks in the future.
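To make that idea concrete, here is a minimal sketch of how entries in such a catalogue might be structured. This is my own speculative schema, not an existing database: the field names and categories are assumptions, and the two seed entries abridge the definitions developed later in this post.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    """One 'thing' in the disinformation database: a persona, strategy, technique, or narrative."""
    name: str
    kind: str         # e.g. "persona", "strategy", "technique", "narrative"
    definition: str
    indicators: list[str] = field(default_factory=list)  # traits suggesting the entry applies
    examples: list[str] = field(default_factory=list)    # campaigns where it was observed

# Two seed entries, abridged from the definitions developed later in this post.
MIND_HACKTIVIST = CatalogueEntry(
    name="Mind-Hacktivist",
    kind="persona",
    definition="Intentionally spreads disinformation for ideological reasons, "
               "with no short-term prospect of material gain.",
    indicators=["organises openly in public communities",
                "unguarded about techniques"],
    examples=["Kellogg's application flood (2021)"],
)

TACTICAL_INFODEMIC = CatalogueEntry(
    name="Tactical Infodemic",
    kind="strategy",
    definition="Production of an overwhelming volume of information targeted at a "
               "topic or community, making legitimate information harder to identify.",
    indicators=["high volume, low refinement",
                "targets a specific audience's workflow"],
    examples=["Kellogg's application flood (2021)"],
)

# Example lookup: everything previously catalogued under a given kind.
if __name__ == "__main__":
    catalogue = [MIND_HACKTIVIST, TACTICAL_INFODEMIC]
    print([e.name for e in catalogue if e.kind == "persona"])
```

The point of structuring entries this way is the `indicators` field: when a new campaign turns up, matching its observable traits against previously catalogued ones is what would let the database predict behaviour rather than just record history.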
1) Mind-Hacktivists as a disinformation persona
In cybersecurity, when someone conducts a cyber attack as a way to protest online or effect change for ideological reasons (rather than to make money or steal information), we call them a Hacktivist. Users in this campaign are similarly participating for ideological reasons, so I’m calling them Mind-Hacktivists (I thought it was important to differentiate them from Cyber Hacktivists, but couldn’t think of a pithy way to mix “Disinformation” and “Activist” – “Disinformactivist” has too many syllables, and “Factivist” is kind of the opposite of what we’re going for). It’s useful to have terms for the different kinds of personas that might run a disinformation campaign, so that we can predict how each might behave based on the tactics they’ve used or the demographics they’ve targeted.
Similar to Hacktivists, I define Mind-Hacktivists as groups or individuals who intentionally participate in the spread of disinformation for ideological reasons (i.e. with no short-term prospect of material gain for their efforts). This persona would likely benefit from being further developed into sub-categories based on driving ideology.
I’ve written about Mind-Hacktivists in the past (but didn’t use the name at the time):
One commonality I’ve noticed is that they appear much less guarded about their techniques than other personas. In the Kellogg's example this was by necessity (given that they needed to organise in a publicly accessible online community), but in the example linked above Ken Klippenstein revealed his tactics as part of the payoff for his deception.
2) Tactical Infodemic as a disinformation strategy
“Infodemic” has primarily been used to refer to the influx of information (false and valid) related to Coronavirus, which has made it difficult for people to find accurate information; but Infodemics can spread around other topics too. In this case users coordinated to manufacture a Tactical Infodemic specifically targeting Kellogg's hiring managers, rendering them unable to identify legitimate job applications amongst the wave of inauthentic ones and reducing the company's capability to replace striking staff.
Looking back I can think of other examples that could meet the same classification; in 2016 members of the alt-right circulated false voting information targeting Hillary Clinton supporters:


There are similarities, but overall I don’t think this counts as a Tactical Infodemic. While the participants were ideologically driven (matching the Mind-Hacktivist criteria), my impression is that the objective of this campaign was to increase the probability that a person would believe the incorrect information and fail to vote, rather than to make it harder to find correct information about voting in the first place. In the Kellogg's example, hiring managers are likely well aware that there are a lot of false applicants for their newly posted positions (and would be able to identify most fake applications once reviewed), but this still doesn’t help them find the legitimate applications in the first place.
We can take away from this that content used in a Tactical Infodemic needs less refinement than typical disinformation. This is reflected in the disparity between content produced for the Kellogg's campaign and that produced for the Hillary voting campaign: the former required only some relevant data during the application (with Reddit users supplying zip codes “applicants” could use for each job, but recommending uploading an image of a CV downloaded from Google), while the latter had a lot of effort put into achieving visual parity with legitimate Clinton ads:


This helps us generate a definition of a Tactical Infodemic: the production of an overwhelming volume of information targeted at a particular topic or community, with the intention of making legitimate information harder to identify.
3) Successfully generating Coordinated User Action
A proper semantic analysis of this and other posts which successfully led to Coordinated User Action would be needed to get a reliable understanding of how these posts take off. I don’t have the capacity to do that at the moment, but I do have the capacity to post some caveated armchair observations on what I believe helped this post succeed:
- An already agitated community ready for retribution
  - People were rightly angry about the mistreatment of workers. As self-identified members of the AntiWork community, it’s likely they also had their own war scars helping them empathise.
- A simple call to action with a step-by-step guide
  - BloominFunions provided links to every job posting, listed a variety of matching zip codes, and gave a recommendation for where to get a passable CV. This lowered the barrier to entry for participation.
- Use of emotive language putting the reader in the role of the hero
  - I quote: “It’s time for r/antiwork to make the news as a formidable fighter for the average worker. Nothing is scarier to a business than organized labor. I submitted four applications. How many did you submit?”. How inspiring is that? Certainly the kind of rhetoric that would get me to invest 5-10 minutes in hindering The Enemy, if I were so inclined.
It’s interesting to think about how much more likely these qualities made it that other members of the AntiWork community would collaborate with the original poster, compared to some rubbish low-effort post.
4) Seeing a disinformative bot in action
By now I suspect the average person is aware that bots can help amplify the spread of disinformation online, but I think it’s unlikely many have seen one in action. Although the videos linked above (and here for ease of reference) show bot usage atypical of most disinformation campaigns, they may help people unfamiliar with the topic better visualise how bots can be used to impact online discussions. For example, one could easily imagine these form-completing bots leaving fake reviews for products in online stores, or posting comments on news articles.
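For readers who want something more concrete than the videos, here is a deliberately generic sketch of the pattern they demonstrate: a script that repeatedly submits a web form with plausible-looking canned values. The URL, field names, and values below are all hypothetical placeholders for illustration, not the actual script from the videos.

```python
import random
import time

import requests

# Hypothetical endpoint and field names -- placeholders for illustration only.
FORM_URL = "https://example.com/apply"
ZIP_CODES = ["00001", "00002", "00003"]  # stand-ins for the zip codes shared in the thread
NAMES = ["Alex Smith", "Sam Jones", "Jordan Lee"]

def fake_application() -> dict:
    """Build one plausible-looking form submission from canned values."""
    return {
        "name": random.choice(NAMES),
        "zip": random.choice(ZIP_CODES),
        # A real application form would also upload a CV file; the Reddit
        # advice was to use an image of a CV pulled from Google Images.
    }

def run(n_submissions: int) -> None:
    """POST the form repeatedly, pausing between submissions."""
    for i in range(n_submissions):
        response = requests.post(FORM_URL, data=fake_application(), timeout=10)
        print(f"submission {i + 1}: HTTP {response.status_code}")
        time.sleep(random.uniform(1, 5))  # pacing, so the traffic isn't one burst

if __name__ == "__main__":
    run(5)
```

The entire trick is the loop: the bot needs no understanding of the form's purpose, which is why the same pattern transfers so easily to reviews, comment sections, and polls, and why defences tend to focus on the repetition (rate limiting, captchas) rather than on the content itself.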
OK that’s it
We’ve wrung this campaign dry, but we got some good information from it. The people of AntiWork have not only helped the strikers fight for a fair wage, but have also helped improve our understanding of disinformation. I think Mind-Hacktivists and Tactical Infodemics are good categorisations which will likely turn up again in the future, and I’m looking forward to improving my understanding of each when they do.
Do you think these categorisations are useful? Once I had them, I recognised previous bits of disinformation I’d read about that met the criteria. Has that happened with you? Let me know on Twitter @DisinfoDB! I’d love to hear your thoughts, and I am forever in need of social media serotonin.
For posterity, here are a couple of threads within the AntiWork subreddit that were popular at the time of writing:
- 63.5k upvotes, 3.7k comments: Apply now! Kellogg is hiring scabs online. Let’s drown their union busting. Mods please sticky!
- 51.8k upvotes, 1.7k comments: Kellogs: adds captcha to applications. My version of the bot that guy made: That sign can’t stop me because I can read!
- 28.2k upvotes, 1.6k comments: Kellogg’s CEO, refuses to increase wages. Gets paid 11.6M a year.
- 26.2k upvotes, 1.4k comments: for those of you who want to apply to kelloggs, MS Edge auto fills 90% of the form after you fill it once.
- 24.2k upvotes, 1.1k comments: This is what it’s like to work at Kellogg
- 18.9k upvotes, 660 comments: This tiktoker built a code to apply to all Kellogg’s job postings