Algospeak is coded language (deliberate misspellings, substitutions, and euphemisms) used to make it harder for automated moderation systems to detect harmful content.
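A minimal sketch of why algospeak works against automated moderation, assuming the simplest possible filter (a hypothetical exact-match keyword blocklist; real moderation systems are far more sophisticated, but face the same cat-and-mouse problem):

```python
# Hypothetical naive keyword filter -- the kind of matching
# that algospeak substitutions are designed to slip past.
BLOCKLIST = {"kill", "dead"}

def is_flagged(post: str) -> bool:
    """Flag a post if any word exactly matches a blocklisted term."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_flagged("they want to kill him"))     # True: exact match is caught
print(is_flagged("they want to unalive him"))  # False: the substitution evades it
```

Because the filter only recognises the exact blocklisted strings, swapping in a coded variant ("unalive" for "kill") defeats it without obscuring the meaning to human readers.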
A threat actor can hijack someone’s credibility by accessing their social media accounts (for example, by using stolen passwords) and posting as them.
Paper Terrorism is a harassment tactic in which people coordinate to hit a target with many illegitimate lawsuits at once, threatening to use them to take away the target’s money or freedom unless they acquiesce to a set of demands.
Threat actors can exploit people’s existing trust in a news organisation by pretending to be them when disseminating falsehoods. This also wastes the resources of the targeted organisation.
Impersonating a Fact Checker to spread fake fact checks makes people think it’s hard to get accurate information about the topic. It also exploits people’s existing faith in Fact Checkers to increase the probability that people believe the posted content.
Building an audience that will reliably read the content you post is difficult, so threat actors can pay others who have already built an audience to share what they’ve produced. This has the added benefit that the content appears more authentic, since it comes from an established voice.
To avoid addressing an accusation, the accused can return fire with a whataboutism: “You’re criticising my actions? But what about the bad actions you’ve taken?”, shifting the conversational focus onto the accuser’s imperfections.
People have made computer programs that can create realistic-looking pictures of people who don’t physically exist. Threat actors use these to make inauthentic social media accounts appear more legitimate.
It can be hard to fabricate a convincing piece of media, so one strategy is to take real content and present it in a false context that matches the desired narrative.
Let’s say there is someone who wants to find out about a topic, but you don’t want them to succeed. One way to make it harder for them is to deliberately flood the pool of information surrounding that topic with false data.