Infectious-diseases physician Krutika Kuppalli had been in her new job for barely a week in September 2020, when someone phoned her at home and threatened to kill her.
Kuppalli, who had just moved from California to the Medical University of South Carolina in Charleston, had been dealing with online abuse for months after she’d given high-profile media interviews on COVID-19, and had recently testified to a US congressional committee on how to hold safe elections during the pandemic. But the phone call was a scary escalation. “It made me very anxious, nervous and upset,” says Kuppalli, who now works at the World Health Organization (WHO) in Geneva, Switzerland.
From “‘I hope you die’: how the COVID pandemic unleashed attacks on scientists” by Bianca Nogrady on 13 Oct 2021
While online harassment is often rooted in misogyny, racism and other forms of discrimination, it can also be used tactically, as part of a disinformation campaign, to achieve larger goals. Aggressors can suppress information they want out of the public eye by using online platforms to threaten those producing or amplifying it with real-world violence. If people are afraid of being killed or raped, they are less likely to participate in public discourse; in this sense the harassment contributes to Information Disorder by deliberately reducing the availability of accurate, true information.
The ad-hoc abuse experienced by minorities likely has a similar demotivating effect on their participation in online communities. However, ad-hoc online abuse would not fall under this profile; the distinguishing factor is the decision to abuse as a way to reduce the availability of unwanted information, rather than any ‘legitimate’ hate. For example, a political organisation may automate the abuse of journalists critical of its regime without holding any personal grievance against those journalists.
The probability that this tactic will successfully reduce targets’ participation in online communities increases with the volume of harassment; an investigation by Nature found that scientists who reported higher frequencies of trolling or personal attacks were also the most likely to say that their experiences had greatly affected their willingness to speak to the media in the future.

Because of this, more impactful campaigns may employ amplification techniques to inauthentically increase the volume of harassment a target receives.
Real World Example: Climate Disinformation
Organisations have used the harassment of journalists to discourage the publication of narratives which may harm their profits:
Emails obtained by the Committee show Exxon privately pressured journalists about unfavorable news stories implicating the industry. In October 2015, the Los Angeles Times published two articles written by students as part of the Energy and Environmental Reporting Project at the Columbia Journalism School. The articles, which drew on historic documents and more recent interviews and statements from current and former Exxon employees, showed that Exxon’s work to prepare its facilities for climate change belied its public statements downplaying or denying climate science.
Documents uncovered by the Committee suggest that Exxon baselessly accused a Columbia Journalism instructor and a post-graduate fellow of wrongdoing in connection with these stories.
From “Investigation of Fossil Fuel Industry Disinformation” by the US House Committee on Oversight and Reform on 09 Dec 2022
TL;DR:
Objective: Reduce the spread of information which is detrimental to an aggressor’s goals
Technique: Harass or threaten users producing detrimental information, to reduce the probability that they will do so again in the future.
Further Reading:
- Real-life stories of online harassment — and how scientists got through it by Virginia Gewin on 16 Oct 2018
- Tek Fog: An App With BJP Footprints for Cyber Troops to Automate Hate, Manipulate Trends by Ayushman Kaul and Devesh Kumar on 06 Jan 2022
- A database of private citizens categorised by “occupation, religion, language, age, gender, political inclination and even physical attributes” allowed a small organisation to use inauthentic social media accounts managed via the app Tek Fog to automatically respond to posts made by target groups.
One segmentation which could be targeted in the Tek Fog app was titled “Female Reporters”. Researchers found that 18% of replies to posts made by the top 280 female journalists on Twitter came from accounts managed by Tek Fog, and many of these replies included abusive keywords.
- “‘I hope you die’: how the COVID pandemic unleashed attacks on scientists” by Bianca Nogrady on 13 Oct 2021
- Five Narratives Used in Climate Disinformation by Marianna Spring on 23 Oct 2021