The free online game Bad News has been shown to help people resist common techniques used to convince them that false information is true. This is good, because free online things don’t have the barriers to entry that most activities have right now (i.e. “I can’t afford to exist” and “I can’t leave my house because of a deadly pandemic ravaging the planet”). There’s also a lot of dangerous inauthenticity spreading on the internet, so hearing that people can be inoculated against Fake News using Bad News is good news.
In this post we look at how Bad News works, and think about ways we can build upon its success.
- How does Bad News (technically) work?
- How does Bad News (psychologically) work?
- Building on Bad News
  - Can we develop a similar tool to combat Misinformation as well as Disinformation?
  - Do different “strains” of disinformation use different tactics, and can we create inoculations targeted at them?
  - Will games like Bad News resonate with critical, vulnerable demographics?
  - Do we need to make a whole game to see these beneficial effects?
- The six techniques covered in Bad News
How does Bad News (technically) work?
Bad News is a simple, lighthearted game that takes about five minutes to complete. It focuses on six common techniques used to manipulate people into believing disinformation: Impersonation, Polarisation, Conspiracy, Emotion, Discrediting and Trolling (see below for more detailed definitions). It asks players to take the role of an aggressor producing, publishing and amplifying disinformation.
To succeed, players must use each of the six techniques: creating an inauthentic persona, producing engaging, emotive content, and illegitimately boosting its reach. At each step of their journey as an aggressor, the player is shown responses from users who have ‘fallen’ for the disinformation, and earns badges based on the techniques they have mastered.
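As a rough illustration of the mechanic described above, here is a minimal sketch of how a badge system tied to the six techniques could work. All names here are hypothetical – Bad News’ actual implementation isn’t something we have access to:

```python
# Hypothetical sketch of a Bad News-style badge mechanic: the player
# earns one badge per manipulation technique they successfully use,
# and "wins" once all six have been used.
TECHNIQUES = {"impersonation", "polarisation", "conspiracy",
              "emotion", "discrediting", "trolling"}

class Player:
    def __init__(self):
        self.badges = set()

    def use_technique(self, technique):
        """Record a technique; return the badge name if newly earned."""
        if technique not in TECHNIQUES:
            raise ValueError(f"unknown technique: {technique}")
        if technique in self.badges:
            return None  # badge already earned, nothing new awarded
        self.badges.add(technique)
        return f"{technique} badge"

    def has_won(self):
        # Succeeding requires mastering all six techniques.
        return self.badges == TECHNIQUES

player = Player()
print(player.use_technique("impersonation"))  # -> impersonation badge
print(player.has_won())                       # -> False (five to go)
```

The point of the design is visible even in this toy version: progress is framed as collecting mastery of aggressor tactics, which is exactly the content the inoculation needs the player to absorb.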
How does Bad News (psychologically) work?
One way we can help prevent people from being manipulated is by exposing them to weaker versions of specific inauthentic arguments – this way they will have already reasoned out why dangerous narratives are incorrect before meeting them from skilled manipulators in the wild (and as such they will have been ‘inoculated’ against them). However, online narratives are constantly changing, which presents a unique challenge for applying inoculation theory to disinformation protection: by the time you’ve crafted one ‘vaccine’, many other narratives may have sprung up requiring attention.
To get around this issue Bad News exposes players to the underlying tactics used by aggressors, rather than to individual inauthentic stories – this is why players are asked to use each of the six techniques to overcome obstacles in the game. Taking the role of an aggressor in a safe environment is a great way to help people learn how they themselves may be targeted in the future.
Successful interventions that focus on tactics would likely remain usable for longer than those targeting individual stories; many methods used by aggressors appear across unrelated campaigns, and crafting new techniques for deception takes much more effort than rehashing existing ones. This means defenders could dedicate time to developing effective inoculations without worrying about them becoming obsolete – and Bad News’ positive results tell us that this is worth exploring further.
Of course, Bad News does more than just expose tactics used by aggressors: it gamifies this content. Gamification refers to the use of game-like elements to motivate desired behaviour; in this case that behaviour was “learn about disinformation tactics”, but gamification is used everywhere to achieve a variety of goals. To better understand gamification we can take an element from an actual game (like a points system), see how it’s used to encourage specific player behaviour – and then look at how similar designs are used in corporate settings.
In Pac-Man to earn points players must evade deadly ghosts to eat … things. Some things are worth more points, and players are encouraged to change up their plans if they spot a high-scoring fruit appear – even if they risk crossing paths with a ghost that could otherwise easily be avoided. This challenges the player to alter their behaviour based on the moment-to-moment context of the game, and leads to a more diverse, fun experience.
Facebook and Twitter use similar designs to influence user behaviour with their own points system: likes. When a user shares content to their timeline, others can ‘like’ that content – and everyone can see a count of how many likes it got. It feels good to know other people like what you post – especially when something you posted gets a ‘high score’ with lots of likes – so people are encouraged to keep posting to the feed. This gamelike feature encourages users to provide data that fuels ad algorithms, and keeps them coming back to the platform.
This is just one small example of how we can take lessons from game designers and use them in our capitalist hellworld to increase profits. In fact, there are a myriad of gamelike elements we can take advantage of to motivate specific behaviour. Bad News used some (including achievements, levels, storytelling, progression, goals and challenges) but not all of them – understandably, since competitive elements like leaderboards wouldn’t fit that context.
Using pre- and post-gameplay surveys, we know that Bad News had a positive, statistically significant (i.e. we know this result didn’t just happen by chance) impact on people’s ability to assess whether online posts are reliable. It looks like combining an evolved version of inoculation theory with the motivational benefits afforded by gamification is a good strategy for defending people against online manipulation.
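To make “statistically significant” concrete: a pre/post design like this compares each participant’s reliability ratings before and after playing, then asks how unlikely the average change would be if the game had no effect. The numbers below are made up for illustration – they are not the study’s data, and the study’s own analysis may differ – but the paired-comparison logic is the standard one:

```python
# Toy pre/post comparison, with made-up data. Each value is one
# participant's rating of how reliable a fake headline seems
# (higher = more reliable); a drop after playing suggests the
# inoculation worked.
import statistics

pre  = [5.1, 4.8, 5.5, 4.9, 5.2, 5.0, 4.7, 5.3]
post = [4.2, 4.0, 4.6, 4.1, 4.5, 4.3, 3.9, 4.4]

# Paired design: work with each participant's own before/after change.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_diff = statistics.mean(diffs)
std_error = statistics.stdev(diffs) / n ** 0.5
t = mean_diff / std_error  # paired t-statistic

print(f"mean change: {mean_diff:.2f}, t = {t:.2f}")
# A |t| well beyond ~2.36 (the 5% critical value for n-1 = 7 degrees
# of freedom) means a drop this consistent is very unlikely to be
# chance - which is what "statistically significant" asserts.
```

With these illustrative numbers every participant’s rating falls, so the t-statistic is enormous; real survey data is noisier, which is exactly why the significance test is needed.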
Building on Bad News
The results of the Bad News study are incredibly encouraging, and naturally lead us to ask how we can build upon their findings to develop even better interventions. These are a few questions I’d like to explore more, given the opportunity:
Can we develop a similar tool to combat Misinformation as well as Disinformation?
In Bad News players take on the role of a malicious aggressor spreading Disinformation – i.e. someone knowingly producing and spreading inauthentic content with the intent to deceive. It’s not clear to me whether the game would also confer significant resistance against Misinformation – i.e. when well-meaning people unintentionally share potentially harmful content they don’t know is false.
Misinformation has played a big part in the Coronavirus pandemic, with many public figures boosting inauthentic content to their millions of followers. Staying with the Bad News format, could a section be added to the game in which the player attempts to bait highly influential figures into sharing an inauthentic piece of content to their audience?
Do different “strains” of disinformation use different tactics, and can we create inoculations targeted at them?
Bad News deliberately did away with inoculating against individual arguments, targeting underlying techniques instead. But perhaps a combination of the two methods could be beneficial for inoculating against certain categories of disinformation.
For example it seems logical that tactics used in disinformation related to healthcare would differ from tactics related to governmental conspiracy. In the coming years it will be more critical than ever to stop vaccine misinformation from infecting the masses (so that we can stop Coronavirus doing the same thing), so perhaps it would be valuable to design an intervention specifically targeted at countering the techniques used by anti-vaccine campaigners.
To better understand whether this would be an effective path we’d first need to know whether there are significant differences in disinformation tactics between topics, and then assess whether we could make a more targeted version of Bad News that was both more rooted in healthcare (i.e. getting the player to create an inauthentic healthcare account, post emotive anti-vaccine tweets, etc.) and which better addressed the specific tactics anti-vaccine campaigners use.
Will games like Bad News resonate with critical, vulnerable demographics?
Recent research revealed that, without intervention, participants in the vaccine discussion on Facebook will be majority anti-vaccine by 2030 – so the obvious question after learning about Bad News is whether it can help inoculate those who are undecided about vaccines and could be exploited by illegitimate arguments.
The beneficial impact of Bad News appeared to be consistent across cultures (having been translated and tested in Germany, Greece, Poland and Sweden), and across political ideologies. These consistent results give hope that vulnerable audiences (i.e. those who are most likely to be exposed to disinformation) would benefit from Bad News – however, it would be valuable to gain a better understanding of the demographics of those audiences, and to see whether there is anything we can do to improve the intervention for them.
It may be that a game designed with specific target groups in mind (e.g. by asking players to create fake news that is more likely to resemble disinformation they had seen in the past) could lead to a greater resistance to similar content in the future.
Do we need to make a whole game to see these beneficial effects?
Making a game is time-consuming and takes a decent amount of resources. Delivering inoculation in the form of a game could also alienate audiences who don’t enjoy learning in that format.
It would be worth understanding how much of Bad News’ success came from each of its gamelike elements – potentially enabling us to streamline the intervention by cutting out aspects that are inconsequential to the positive results. This could bring many benefits, such as reducing the time needed both to develop and to consume the inoculation.
The six techniques covered in Bad News
Impersonation: Impersonating online accounts of real people or organisations. Think posts from “@RAELDONELDTRUMP”, “@BBCMews” or “@factcheckUK” – accounts you might not notice as being inauthentic at a glance.
Polarisation: Artificially amplifying existing grievances and tensions between different groups in society, for example political differences, in order to garner support for or antagonism towards partisan viewpoints and policies.
Conspiracy: Creating or amplifying alternative explanations for traditional news events which assume that these events are controlled by a small, usually malicious, secret elite group of people.
Emotion: Producing content that plays into basic, raw emotions to frame an issue in a chosen light, or to gain attention. Emotional content attracts more engagement, so the best results come from emotive posts.
Discrediting: Deflecting attention away from accusations of bias by attacking or delegitimising the source of the criticism, or by denying accusations of wrongdoing altogether.
Trolling: Deliberately inciting a reaction from a target audience using a variety of strategies mentioned above (e.g. emotive, polarising content).