Deepfakes: What they are and how to spot them

(NewsNation) — Technology that can learn and adapt is growing more sophisticated, producing doctored photos and videos that are increasingly hard to tell from the real thing.

The advent of deepfakes, a type of so-called synthetic media, allows creators to make anyone, including world leaders, appear to say or do anything. The FBI’s cyber division issued a release last March warning that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.” Already, the technology has been used to produce fake videos of Ukrainian President Volodymyr Zelenskyy and Russian President Vladimir Putin amid their nations’ ongoing war.

Deepfakes are made with AI algorithms that learn from large collections of photos or audio clips and then generate similar, but artificial, content.

“So for instance, if we want to create realistic-looking human faces, or human voices, what we do is we give the neural network tons of real face images, videos, or human voices, audio signals,” said Siwei Lyu, a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo. “So the model will learn, will improve with time.”
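To make that concrete, the adversarial recipe behind many deepfake generators can be sketched in a few lines of PyTorch. This is a hedged toy illustration, not any real deepfake system: random tensors stand in for a dataset of real face images, and the network sizes and training length are arbitrary.

```python
# Toy generative adversarial network (GAN): a generator learns to produce
# fakes while a discriminator learns to tell real from fake, so both
# improve together, which is the dynamic Lyu describes. Random tensors
# stand in for real face images; nothing here is a production pipeline.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 grayscale image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # one logit: real vs. fake
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, data_dim) * 2 - 1   # stand-in for real images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call the fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each side’s progress forces the other to improve, which is why, as Lyu puts it, “the model will learn, will improve with time,” and why the fakes keep getting harder to spot.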

One recent example targeted Ukraine’s president. The deepfake video showed what appeared to be Zelenskyy urging Ukrainians to surrender in their fight against invading Russian troops. The Ukrainian leader’s body appears stiff in the video, his head too large, and his face a different color than his neck.

Other deepfakes are strictly entertainment, and often even more sophisticated. The TikTok parody account @deeptomcruise posts hyper-realistic videos of what appears to be Tom Cruise going about his day. The illusion holds even as the actor in the video obscures his face by putting on and removing sunglasses or a hat. In one on-the-nose video, he performs a magic trick before staring into the camera and repeating, “it’s all the real thing.”

The possibilities of doctored media go beyond deepfakes, though.

Recently, a hoaxer posing as Ukrainian Prime Minister Denys Shmyhal landed on a March 17 video call with U.K. Secretary of State for Defence Ben Wallace.

“Today an attempt was made by an imposter claiming to be Ukrainian PM to speak with me. He posed several misleading questions and after becoming suspicious I terminated the call,” Wallace said on Twitter after the incident.

The British government has said Russia was behind the hoax and claimed that clips of the call, which were shared on YouTube, were edited to distort and misrepresent the truth.

In another video, posted in 2020, Speaker of the House Nancy Pelosi’s speech was slowed down to make it sound slurred. The audio’s pitch was also adjusted to match Pelosi’s regular speaking voice, making the fake harder to detect.

The most effective way to identify a deepfake is to cross-check media against other sources, Lyu said. If a quick search turns up overwhelmingly contradictory information, it’s more likely the media is fake, he said.

Human eyes and ears are equally useful tools. Irregularities such as unusual blinking patterns, speech that isn’t synced with mouth movements, and inconsistent reflections in the subject’s eyes are all red flags, Lyu said.

“My favorite part to look at is the teeth,” Lyu said. “The generation algorithms, they usually have some trouble creating realistic-looking teeth. You cannot identify individual teeth, you always see something that is like a blob of white.”

Those clues aren’t smoking-gun evidence, but they point to a higher likelihood that a piece of media is fake, he said.
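The blinking cue can even be checked by machine. Below is a minimal sketch of the standard eye-aspect-ratio (EAR) heuristic from Soukupova and Cech (2016); it assumes you have already extracted six landmark points around each eye, per frame, with an off-the-shelf detector such as dlib’s 68-point model or MediaPipe Face Mesh. The 0.2 threshold and the blink-rate rule of thumb are illustrative choices, not forensic standards.

```python
# Hedged sketch: flag clips with implausibly few blinks using the
# eye-aspect-ratio (EAR). The EAR of six eye landmarks collapses toward
# zero when the eye closes; people blink roughly 15-20 times per minute,
# so a minute of video with almost no EAR dips deserves a closer look.
# Landmark extraction is not shown and must come from a separate detector.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) points around one eye, ordered as in dlib's
    68-point convention (eye corners at indices 0 and 3)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(per_frame_eye_landmarks, closed_threshold=0.2):
    """Count open-to-closed transitions across a sequence of frames."""
    blinks, eye_was_open = 0, True
    for eye in per_frame_eye_landmarks:
        eye_is_open = eye_aspect_ratio(eye) > closed_threshold
        if eye_was_open and not eye_is_open:
            blinks += 1
        eye_was_open = eye_is_open
    return blinks

def looks_suspicious(per_frame_eye_landmarks, fps=30.0, min_blinks_per_minute=5):
    """A clip far below normal blink rates is a red flag, not proof."""
    minutes = len(per_frame_eye_landmarks) / fps / 60.0
    if minutes == 0:
        return False
    rate = count_blinks(per_frame_eye_landmarks) / minutes
    return rate < min_blinks_per_minute
```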

Some deepfakes fall into the uncanny valley, where it can be harder to pinpoint what’s not quite right.

In those situations, tools built with the same techniques used to make deepfakes can help detect synthetic media. Lyu was part of a team that created DeepFake-o-meter, an online platform that lets users upload content and tells them how likely it is to be fake. Similar tools are also being used to help interpret speech for stroke victims, Lyu said.

For better or worse, the technology is here to stay, he said.

“It’s like nuclear technology. We can use it for generating power for homes and for people, but also it could be used to create atomic bombs to destroy the world,” Lyu said. “So it depends on who is using the technology.”

In the long term, however, deepfakes and doctored media could jeopardize the public’s trust in the information it consumes.

“My personal view of this is, generally speaking, the very existence of deepfakes, or any kind of manipulated media, is the fundamental erosion of our trust of media we see,” Lyu said.

At the same time, people who aren’t tech-savvy are likely at greater risk of falling prey to manipulated media, he said.

“When they are bombarded with this kind of information, they’ll be more easily influenced by that information and that can also cause consequences,” Lyu said.

Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.