How Facebook’s Algorithm amplifies extremist disinformation to new members of the platform

There has been a LOT(!!) of information about Facebook published in the last couple of weeks. Whistleblower and former Facebook product manager Frances Haugen testified to Parliament regarding Facebook’s prioritisation of profits over safety, and news organisations published many, many damning stories based on leaked internal documents. There is a lot of evidence of the harm Facebook has caused, and a lot of evidence that they knew this was the case. This Google Doc contains over 100 individual stories written in response to the testimony / leak:

There is too much information for any one person (including me) to read, so I’m going to try to pick out content that adds new information to DDB, specifically related to disinformation. I have not yet read these articles (I have been saving them for a weekend treat; lucky me!), but from what I’ve gleaned I think there will be information related to how Facebook’s features helped amplify inauthentic content, and how people trying to mitigate these problems were stopped by management who did not want to risk profits. Once I’ve finished writing these posts, there will be another couple of drops in the sea of nonsense written about Facebook, and we will have more information on the techniques used to spread disinformation.

I’m starting with an article looking at Facebook’s own research into how its algorithm radicalised new users:

Relevant stage of the Disinformation Cycle

‘Carol’s Journey’: What Facebook knew about how it radicalized users 

TL;DR: Facebook’s algorithms actively promote groups dedicated to extremist disinformation narratives (more than you would expect such groups to be promoted), which amplifies their reach.

What are algorithms and how does Facebook use them?

Facebook uses algorithms to decide what to show us (things like posts at the top of our feed, adverts we should see, groups we could join) based on information it has on us (things like age, location, porn preference). It does this because it would be inefficient to have a human person looking at the things we like, pulling up a spreadsheet of “things people who like this stuff also like”, and then deciding what we should be looking at; robots are quicker and much cheaper to employ for this purpose.
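
To make that concrete, here’s a toy sketch of the “people who like this stuff also like” idea. This is my own illustration with made-up users and group names, not Facebook’s actual system: count which groups tend to appear alongside the groups someone already belongs to, and suggest the most common ones.

```python
from collections import Counter

# Toy "people who like this also like..." recommender.
# Entirely illustrative: invented users and groups, nothing from Facebook.
memberships = {
    "alice": {"pizza lovers", "sourdough bakers"},
    "bob":   {"pizza lovers", "home brewing"},
    "carla": {"pizza lovers", "sourdough bakers", "home brewing"},
}

def recommend_groups(user_groups, all_memberships, top_n=3):
    """Score each group by how often it appears alongside the user's groups."""
    scores = Counter()
    for other_groups in all_memberships.values():
        if user_groups & other_groups:              # shares at least one group with us
            for group in other_groups - user_groups:
                scores[group] += 1
    return [group for group, _ in scores.most_common(top_n)]

print(recommend_groups({"pizza lovers"}, memberships))
# ['sourdough bakers', 'home brewing']
```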

We ask algorithms to achieve objectives like “maximise the amount of time a user spends on this website, so that they look at as many ads as possible”. If someone signed up to Facebook and told it they were interested in pizza, a good way to keep them on Facebook would be to suggest they join pizza-related groups so they could have lovely conversations with other pizza fans. If we were concerned that our algorithm was pushing people towards unhealthy diets, we could ask the algorithm to still try to maximise the amount of time someone spent on Facebook, but to not recommend so many unhealthy food groups in the process. Maybe have a side salad group with your pizza group.
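
As another toy sketch (again my own invention, with made-up group names and numbers, not anything leaked from Facebook), here is what “keep maximising time on site, but recommend fewer unhealthy food groups” could look like: the objective stays the same, and flagged categories just get their scores knocked down before ranking.

```python
# Toy re-weighted ranking: still sorting by predicted engagement,
# but with a penalty applied to categories we want to recommend less often.
# All names, categories and numbers here are invented for illustration.

PENALTY = {"unhealthy food": 0.5}   # hypothetical downweight for flagged categories

candidates = [
    {"name": "Deep Dish Fanatics", "category": "unhealthy food", "predicted_minutes": 9.0},
    {"name": "Side Salad Society", "category": "healthy food",   "predicted_minutes": 6.5},
    {"name": "Pizza Photography",  "category": "hobbies",        "predicted_minutes": 7.0},
]

def score(group):
    penalty = PENALTY.get(group["category"], 0.0)
    return group["predicted_minutes"] * (1 - penalty)

for group in sorted(candidates, key=score, reverse=True):
    print(f'{group["name"]}: {score(group):.2f}')
# Pizza Photography: 7.00
# Side Salad Society: 6.50
# Deep Dish Fanatics: 4.50
```

Without the penalty, the group with the highest predicted engagement would always win, however unhealthy it was.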

How do we know Facebook’s algorithms amplify disinformation?

Algorithms are pretty complicated and can be influenced by lots of different data points, so it’s hard to be sure exactly what recommendations they’re going to make. Employees at Facebook were worried their algorithm was recommending extremist disinformation content to new users while trying to maximise their time on the site, radicalising them in the process. A new account was created to test this, giving the algorithm the following data points:

  • Location: Wilmington, North Carolina. 
  • Interests: Politics, Parenting, Christianity. 
  • Followed Brands: Fox News, Donald Trump.

With just this data, within two days Facebook tried to get the user to join groups dedicated to the right-wing extremist conspiracy theory QAnon (the theory and movement that claimed President Trump was secretly saving the world from a cabal of pedophiles and Satanists). While the user didn’t act on these suggestions, the algorithm still pushed this content, and within a week their feed was full of groups and pages violating Facebook’s stated rules (including those against hate speech and disinformation). Facebook’s own researcher said the user’s experience of Facebook was “a barrage of extreme, conspiratorial, and graphic content”.

What does this tell us about disinformation?

This helps us understand a bit more about ‘environmental factors’ influencing the spread of disinformation. Facebook’s researchers were able to identify a situation where their algorithm tried to push a newly created user towards consuming extremist disinformation. It’s reasonable to assume that other users joining the platform with similar interests would also be pushed towards extremist views, and as such there is less need for existing members to actively recruit people into extremist disinformation groups; Facebook’s algorithm draws in potential candidates in an effort to maximise users’ time spent viewing adverts.