Best ways to spot AI-generated photos and deepfakes


Fake images generated using artificial intelligence and “deepfake” videos where celebrities’ heads are superimposed onto the bodies of other people are becoming ten a penny nowadays, and there are rising concerns that this will lead to an epidemic of misinformation over social media.

Earlier this week, a verified Twitter account posing as a Bloomberg news feed posted a fake image purporting to show an explosion at the Pentagon in the US. Since anyone can now pay Twitter a monthly subscription fee to become “verified”, other legacy verified accounts reshared the image, thinking it was real.

While the image was soon confirmed to be fake by the Arlington Fire and Emergency Medical Services, the issue has raised concerns about how easy it is to deceive the general public and the press with AI-generated images.

Here’s our guide on how you can spot and verify both AI-generated images and deepfake videos.

How to detect an AI-generated image

Twins, or just one face that has been distorted and replicated to make an image that is rather creepy on second look?

/ 1tamara2/Pixabay

If you see an image on the internet of an unusual event or a person in a compromising position, here are some steps you should take:

1. Verify the source

First, check out the source of the information you have received. Are they a news organisation, a government organisation or the verified account of a celebrity? Or are they just impersonating a legitimate-sounding organisation?

And is this something the famous figure would usually do? This image of Pope Francis wearing a Balenciaga jacket looks real, but press images have never shown him wearing anything other than papal robes.

2. Turn on the news

If something major has happened in the world, chances are the international news organisations will know about it first, to say nothing of the local channels near you. So you should be able to see breaking news alerts on TV, radio, news websites and even news aggregator websites like Reddit.

But if the only people talking about the issue are on social media and there’s no live video footage, it could be a scam, like these fake photos that went viral in March, depicting Donald Trump being chased and arrested.

If you’re ever unsure about an image that has gone viral, or about the news story or incident around it, check Snopes — the oldest and largest fact-checking website, which publishes investigative journalism debunking hoaxes, urban legends and claims by public figures.

3. Look at the image critically

An AI-generated image of a rabbit and some carrots that looks noticeably fake if you look closely

/ Susan Cipriano/Pixabay

It’s also a good idea to look at the image critically. With traditional photo editing, a telltale sign of tampering is a random stray arm, leg or hand where one shouldn’t be.

Although AI-generated images are usually more subtle than that, there’s often still a certain quality about them that looks unreal on close inspection. Take this rabbit, for instance.

Look up “rabbit” on Google Images and you will see many different breeds of bunny. Take another look at this AI-generated image, and you will notice that the rabbit’s face and body appear to blend features from several different rabbits.

The carrots next to it also look artificial, each with an almost identical shape. If you’ve ever chopped up a bag of carrots, you’ll know that no two usually look exactly alike.

4. Use an AI image detector

When in doubt, run the image through an AI image detector. On a PC, you can save the image by right-clicking it on Twitter and choosing “Save image as…” from the menu that appears.

Then go to one of these free AI image detection services: Illuminarty, Optic’s AI or Not, or Everypixel Aesthetics.

This photo of a newborn infant, which looks quite believable, is actually an AI-generated image too

/ Vicki Hamilton/Pixabay

These services use a type of AI called a neural network to analyse images, comparing them against the known traits, patterns and characteristics of various AI models, as well as those of typical human-made images, to determine the content’s origin.

The AI image detectors will scan the image you upload and, within seconds, give you an estimate of how likely it is that the image is AI-generated. You can try out the services by uploading one of your own photos and comparing its score with that of an AI-generated picture: there should be quite a big percentage difference between the “real” photo and the “fake” one.

How to spot deepfake videos

As with spotting fake AI images, the first rule is to use your common sense.

If a celebrity is well known to be dead, like John Lennon from the Beatles, he’s unlikely to be talking to a high-definition camera in a modern TV studio, like this example shared by generative AI video platform HeyGen.

It’s a bit less obvious when people use deepfake technology to create videos of politicians, but it’s still possible to tell that it’s fake.

First, check whether the lips sync up with what the figure on screen is saying. Then listen to the voice: do the accent and cadence sound just like a real recording of the person? Usually, there will be a slight difference.

You should also look closely at the video to see whether the lighting and shadows look strange, whether the head movement matches the body it is attached to, or whether the skin tones look unusual.

Although some people have used deepfake technology for trickery, other content creators are now using AI to help them create new entertaining content.

A YouTube channel called Fantasy Images, which has almost 59,000 subscribers, primarily makes humorous parody videos showing characters from popular fantasy films reimagined as fitness-mad gym addicts.

The channel’s “Harry Spotter – The boy who lifted” video, which has been viewed 3.8 million times, was made using the Midjourney generative AI art service, together with speech generated by the voice-cloning software ElevenLabs.
