Use AI to create profile pictures

Relevant Disinformation Cycle Stages: Position

Technique description:

What is it: People have built AI models that can look at hundreds of thousands of photos of real people, and use that information to create realistic-looking pictures of people who don’t actually exist. Threat actors use these to make inauthentic social media accounts appear more legitimate.

How do people do it: The barrier to entry for getting an AI-generated face is pretty low; companies like generated.photos provide the service for free online (I don’t feel bad linking to this because it was the first result on DuckDuckGo for “AI generated face”, which is probably what people would search when looking to acquire an AI-generated face).
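To give a sense of just how low that barrier is, here is a minimal sketch of scripting the acquisition of a generated face. It assumes a service that returns a freshly generated face in response to a plain HTTP GET; thispersondoesnotexist.com is a well-known example of such a service, but it isn’t mentioned above, so treat the URL and behaviour here as assumptions rather than a description of any particular provider’s API.

```python
# Minimal sketch: download an AI-generated face with a plain HTTP request.
# Assumes a service that returns a fresh generated face image for each GET
# (the URL below is an assumption, not an endorsement of any provider).
import requests

FACE_SERVICE_URL = "https://thispersondoesnotexist.com"  # assumed endpoint

def download_generated_face(path: str = "profile.jpg") -> str:
    """Fetch one generated face image and save it to `path`."""
    response = requests.get(FACE_SERVICE_URL, timeout=30)
    response.raise_for_status()
    with open(path, "wb") as f:
        f.write(response.content)  # the response body is the image itself
    return path

if __name__ == "__main__":
    print("Saved generated face to", download_generated_face())
```

The point isn’t the specific service: it’s that a single request (or a loop over many requests) yields unique, never-before-seen faces at effectively no cost.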

Why do they do it: The availability of realistic-looking photos of people is really useful for threat actors who want to create inauthentic social media accounts, as they provide an easy way to increase perceived authenticity without revealing the account owner’s true identity.

Here you can see three identical tweets with different profile pictures (no photo, a cartoon avatar, and an AI-generated face). People are more likely to believe a post is genuine if they can see a photo of the person they believe made it.

It’s also more effective than using a photo someone else has posted of themselves online; search engines allow people to check whether an image they’ve found appears elsewhere online (reverse image search), meaning they could find out that it had been stolen (and that the account wasn’t authentic). With a unique AI-generated face, that won’t happen.
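Real reverse image search engines use far more sophisticated matching than this, but a perceptual-hash comparison is a minimal sketch of the underlying idea: two copies of the same photo hash to nearly identical values even after resizing or recompression, while a unique AI-generated face won’t be close to anything already published. The imagehash library, the threshold, and the file names below are illustrative choices, not part of the technique described above.

```python
# Minimal sketch of near-duplicate image matching with perceptual hashes.
# Requires Pillow and imagehash: pip install pillow imagehash
from PIL import Image
import imagehash

def looks_like_same_photo(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if the two images appear to be copies of the same photo.

    Perceptual hashes of the same photo stay close even after resizing or
    recompression, while unrelated images are usually much further apart
    than the threshold.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold  # '-' gives the Hamming distance

if __name__ == "__main__":
    # A stolen profile photo would match its original somewhere online;
    # a freshly generated face has no original to match against.
    print(looks_like_same_photo("suspect_profile.jpg", "known_original.jpg"))
```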

Read More:

Designed to Deceive: Do These People Look Real to You?; New York Times – Nov 2020

This article gives a pretty amazing introduction to what can be achieved with AI faces, and also provides some tips on how you might spot a face which has been generated by a computer.