Using Online Harassment to Reduce the Spread of Unwanted Information

While online harassment is often rooted in misogyny, racism and other forms of discrimination, it can also be used tactically, as part of a disinformation campaign, to achieve larger goals. Aggressors can minimise the spread of information they want kept out of the public eye by using online platforms to threaten those producing or amplifying it with real-world violence. If people are afraid of being killed or raped, they are less likely to participate in public discourse; in this sense the harassment contributes to Information Disorder by deliberately reducing the availability of accurate, true information.

The ad-hoc abuse experienced by minorities likely has a similar demotivating effect on their participation in online communities. However, such ad-hoc abuse does not fall under this profile: the distinguishing factor is the decision to abuse as a means of reducing the availability of unwanted information, rather than out of any ‘genuine’ hatred. For example, a political organisation may automate the abuse of journalists critical of its regime without having any personal grievance against those journalists.

The probability that this tactic successfully reduces targets’ participation in online communities increases with the volume of harassment; an investigation by Nature found that scientists who reported higher frequencies of trolling or personal attacks were also the most likely to say that their experiences had greatly affected their willingness to speak to the media in the future:

[Figure] Chilling effect: being attacked after speaking to the media affected scientists’ willingness to speak to the media in future. Graph from the article “‘I hope you die’: how the COVID pandemic unleashed attacks on scientists”.

Because of this, more impactful campaigns may employ amplification techniques to inauthentically increase the volume of harassment their targets receive.

TL;DR:

Objective: Reduce the spread of information which is detrimental to an aggressor’s goals
Tactic: Harass or threaten users producing detrimental information, reducing the probability that they will do so again in the future.
Associated Method(s):

  • Coordinated User Action
  • Automated Responses to Targeted Groups or Individuals
    • A database of private citizens categorised by “occupation, religion, language, age, gender, political inclination and even physical attributes” allowed a small organisation to use inauthentic social media accounts managed via the app Tek Fog to automatically respond to posts made by target groups.

      One segmentation which could be targeted in the Tek Fog app was titled “Female Reporters”. Researchers found that 18% of replies to posts made by the top 280 female journalists on Twitter came from accounts managed by Tek Fog, and many of these replies contained abusive keywords (a rough sketch of how such a share can be computed follows this list).
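
The 18% figure is a simple aggregate: of all replies collected for the monitored journalists, the share sent by accounts already attributed to Tek Fog. Below is a minimal sketch of that final aggregation step in Python; the reply records, handles and pre-built set of flagged accounts are all hypothetical, and the attribution step that produces the flagged set (the hard part of the research) is not shown.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author_handle: str   # account that sent the reply
    target_handle: str   # journalist the reply was addressed to

def flagged_reply_share(replies: list[Reply], flagged_accounts: set[str]) -> float:
    """Fraction of collected replies sent by accounts attributed to the tool."""
    if not replies:
        return 0.0
    flagged = sum(1 for r in replies if r.author_handle in flagged_accounts)
    return flagged / len(replies)

# Hypothetical usage: the study's 18% corresponds to a value of ~0.18
# computed over every reply to the 280 monitored journalists.
replies = [
    Reply("troll_01", "journalist_a"),
    Reply("reader_42", "journalist_a"),
    Reply("troll_01", "journalist_b"),
]
share = flagged_reply_share(replies, flagged_accounts={"troll_01"})
print(f"{share:.0%} of replies came from flagged accounts")  # 67% in this toy data
```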

Further Reading: