Tactics & Techniques: “How Google’s Ad Business Funds Disinformation Around the World”

“If the world’s largest online advertising platform doesn’t care that it has made false information, hate speech and toxic propaganda profitable in societies like ours, and has no intention to do anything to change because it wouldn’t financially pay off, that is devastating,”

Tijana Cvjetićanin, a member of the editorial board of Bosnian fact-checking site Raskrinkavanje

Today’s tactics and techniques assessment examines “How Google’s Ad Business Funds Disinformation Around the World” by Craig Silverman, Ruth Talbot, Jeff Kao and Anna Klühspies, published by ProPublica on 29 Oct 2022.


Impact: Threat Actors Get Money

This article helps us understand that Google places ads on articles containing misinformation, even though the company says it does not.

Google is funneling revenue to some of the web’s most prolific purveyors of false information in Europe, Latin America and Africa, a ProPublica investigation has found. […]

Google placed ads on 41% of roughly 800 active online articles [with] false claims about COVID-19. The company also served ads on 20% of articles about climate change that Science Feedback, an IFCN-accredited fact-checking organization, has rated false.

This is a long-standing problem, something that Google has had plenty of time to address:

A number of Google ads viewed by ProPublica appeared on articles published months or years ago, suggesting that the company’s failure to block ads on content that appears to violate its rules is a long-standing and ongoing problem.

I think that this ‘Technique’ is best placed in the Impact section of an Influence Operation, rather than in the Planning phase, as we have no evidence that these articles were posted in order to earn ad revenue (only that they ended up doing so).

In 2019, the Global Disinformation Index, a nonprofit that analyzes websites for false and misleading content, estimated that disinformation websites earned $250 million per year in revenue, of which Google was responsible for 40% and the rest came from other ad tech companies. NewsGuard, which employs human reviewers to evaluate and rate websites based on a set of criteria including accuracy, estimated in 2021 the annual ad revenue earned by sites spreading false or misleading claims is $2.6 billion. The report did not say how much of that Google might be responsible for.

Marie Santini, director of the Netlab at the Federal University of Rio de Janeiro, said the number of junk news and disinformation sites like Terra Brasil Notícias, as well as their audience, exploded in Brazil in part because Google ads make it easy for people to earn money from this type of content. She likened it to people who might drive for Uber to earn extra cash.

“You don’t need to make quality content or really work with journalists. You can copy things, you can use bots, you can recycle news, and you do it from your house and you receive some money,” she said. “It’s a way to make money for people that are without opportunity. But who is making money on a large scale? Of course, it’s the platform, Google.”

Impact: Platform Gets Money

Google earns money from the ads it puts online (including ads on websites hosting false or harmful content).

In 2019, the Global Disinformation Index, a nonprofit that analyzes websites for false and misleading content, estimated that disinformation websites earned $250 million per year in revenue, of which Google was responsible for 40% and the rest came from other ad tech companies.

Google already highlights fact-checks in search and Google News results to direct people to trustworthy information. But the company does not use fact-checks to keep ads off of pages with unreliable or harmful claims. And unlike Meta and TikTok, it does not pay fact-checkers for the results of their research.

“When it comes to ads, they obviously monetize disinformation. Whether it’s without knowing or knowing, it doesn’t matter,” said Baybars Örsek, director of the International Fact-Checking Network. “There has never been a public announcement from Google’s side that has acknowledged fact-checking as a signal for their ads monetization business.”

Platform Vulnerability: Low quality moderation of non-English language content

Some platforms have vulnerabilities that can be exploited by threat actors to increase the probability that their influence operation will succeed.

In this example, threat actors are able to earn Google ad revenue on websites hosting non-English-language disinformation, where the same content posted in English would be less likely to be monetised.

ProPublica’s examination showed that ads from Google are more likely to appear on misleading articles and websites that are in languages other than English, and that Google profits from advertising that appears next to false stories on subjects not explicitly addressed in its policy, including crime, politics, and such conspiracy theories as chemtrails.

A former Google leader who worked on trust and safety issues acknowledged that the company focuses heavily on English-language enforcement and is weaker across other languages and smaller markets. They told ProPublica it’s because Google invests in oversight based on three key concerns.

“The number one is bad PR — they are very sensitive to that. The second one is trying to avoid regulatory scrutiny or potentially regulatory action that could impact their business. And number three is revenue,” said the former leader, who agreed to speak on the condition that their name not be used in order not to hurt their business and career prospects. “For all these three, English-speaking markets primarily have the biggest impact. And that’s why most of the efforts are going into those.”

“These companies have decided to go global in their services, and that was their own decision for growth and to make revenue,” [Alexandre Alaphilippe, executive director of the EU Disinfo Lab] said. “It’s not possible to make this choice and not face the accountability needed to be in all of these countries at the same time.”

Whoa, this is the first time I’ve categorised something as a Platform Vulnerability! I think it’s going to be useful to have a way to acknowledge specific weaknesses which can be (1) exploited by threat actors and (2) fixed. Perhaps Platform Vulnerabilities will go nowhere, but that’s ok; we try things, sometimes they work.

Narrative Theme: COVID-19

In one example, Google recently placed ads for clothing brand St. John on a two-year-old Serbian article falsely claiming that cat owners don’t catch COVID-19. Google placed an ad for the American Red Cross on a May 2021 article from a far-right German site that claimed COVID-19 is comparable in danger to the flu. An ad for luxury retailer Coach was recently attached to an April article in Serbian that repeated the false claim that the COVID-19 vaccines change people’s DNA.

Narrative Theme: Climate Crisis

Last August, the Greek edition of the Epoch Times, a far-right U.S. publication connected to the Falun Gong spiritual movement, published an article that falsely claimed the sun, and not increased levels of carbon dioxide, could be responsible for global warming. That story had multiple Google ads when ProPublica viewed it, even though it appears to clearly violate Google’s policy against climate disinformation.


The intention of this series is to make it easier to understand why the article has been tagged with particular tactics or techniques. Associating reporting of real-world attacks with DISARM tactics and techniques helps us better understand how they have been used in practice, who has used them, and who they have been used against. To do this, a relevant quote from the article is provided under the title of the associated technique. If the technique exists in DISARM, its identifier is included too.