One Tech Tip: How to spot AI-generated deepfake images

LONDON — AI fakery is quickly becoming one of the biggest problems we face online. Deceptive images, videos and audio are spreading due to the rise and misuse of generative artificial intelligence tools.

With AI deepfakes popping up almost every day, depicting everyone from Taylor Swift to Donald Trump to Katy Perry attending the Met Gala, it's becoming increasingly difficult to distinguish what's real from what's not. Video and image generators like DALL-E, Midjourney and OpenAI's Sora make it easy for people without any technical skills to create deepfakes — just type a request and the system spits one out.

These fake images may seem innocent. But they can be used for fraud, identity theft, propaganda and election manipulation.

Here's how to avoid being duped by deepfakes:

In the early days of deepfakes, the technology was far from perfect and often left clear signs of tampering. Fact-checkers have pointed out images with obvious errors, such as hands with six fingers or glasses with differently shaped lenses.

But as AI has improved, spotting fakes has become a lot harder. Some commonly shared advice — such as looking for unnatural blinking patterns in people in deepfake videos — no longer applies, says Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert on generative AI.

Still, there are some things we need to look for, he said.

Many AI deepfake photos, especially of people, have an electronic glow, “an aesthetic kind of smoothing effect” that makes the skin “look incredibly polished,” Ajder said.

However, he cautioned that creative prompting can sometimes eliminate these and many other signs of AI manipulation.

Check the consistency of shadows and lighting. Often the subject is clearly in focus and appears convincingly lifelike, but elements in the background may not be as realistic or polished.

Face swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial skin tone match the rest of the head or body? Are the edges of the face sharp or blurry?

If you suspect that a video of a person speaking has been manipulated, look closely at their mouth. Do the lip movements match the audio exactly?

Ajder suggests looking at the teeth. Are they clear, or are they blurry and somehow inconsistent with how teeth look in real life?

Cybersecurity company Norton says algorithms may not yet be sophisticated enough to generate individual teeth, so a lack of contours for individual teeth could be a clue.

Sometimes context matters. Take a moment to consider whether what you see is plausible.

The journalism nonprofit Poynter advises that if you see a public figure doing something that seems “exaggerated, unrealistic or off-base,” it could be a deepfake.

For example, could the Pope really be wearing a luxurious puffer jacket, as shown in an infamous fake photo? If he were, wouldn't additional photos or videos have been published by legitimate sources?

At the Met Gala, over-the-top costumes are the whole point, adding to the confusion. But such major events are usually captured by officially licensed photographers who take plenty of photos that can help with verification. One clue that the Perry photo was a fake is the carpeting on the stairs, which some eagle-eyed social media users noticed was from the 2018 event.

Another approach is to use AI to combat AI.

OpenAI said Tuesday it is releasing a tool to detect content created with DALL-E 3, the latest version of its AI image generator. Microsoft has developed an authentication tool that can analyze photos or videos to provide a confidence score on whether they have been tampered with. Chipmaker Intel's FakeCatcher uses algorithms to analyze the pixels of an image to determine whether it is real or fake.

There are online tools that promise to detect fakes if you upload a file or paste a link to the suspicious material. But some, like OpenAI's tool and Microsoft's authenticator, are only available to select partners and not to the public. That's partly because researchers don't want to tip off bad actors and give them a bigger advantage in the deepfake arms race.

Open access to detection tools could also give people the impression that they are “divine technologies that can outsource critical thinking for us,” when instead we should be aware of their limitations, Ajder said.
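For readers who want a rough sense of what these detection tools do, the sketch below runs a suspect image through an off-the-shelf image classifier using Python and the open-source Hugging Face transformers library. The model name is a placeholder, not one of the tools named above, and any score such a classifier produces should be treated as a hint rather than a verdict.

```python
# Minimal sketch: screen an image with a "real vs. AI-generated" classifier.
# The model identifier below is hypothetical -- substitute a publicly shared
# deepfake-detection checkpoint. Treat the output as one signal among many.
from transformers import pipeline

# Load an image-classification pipeline with a placeholder model name.
detector = pipeline("image-classification", model="example-org/deepfake-detector")

# The pipeline accepts a file path, URL or PIL image.
results = detector("suspicious_photo.jpg")

for result in results:
    # Each entry pairs a label (e.g. "real" / "fake") with a confidence score.
    print(f"{result['label']}: {result['score']:.2%}")
```

Even a well-trained classifier can be fooled by images unlike anything in its training data, which is one reason experts caution against treating any single score as proof.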

All this said, artificial intelligence has been advancing rapidly, and AI models are being trained on internet data to produce increasingly high-quality content with fewer flaws.

This means that there is no guarantee that this advice will still be valid even a year from now.

Experts say it could even be dangerous to put the burden on ordinary people to become digital Sherlocks, because it risks giving them a false sense of confidence as deepfakes become increasingly difficult to spot, even for trained eyes.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to improve its explanatory reporting on elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.
