Deepfake videos will be “perfectly real” in six months

Within less than a year, developers may be able to make deepfake videos render more “perfectly” and with far stronger believability, since they already know how to do it and only need to train the machines to improve the output, deepfake video pioneer Hao Li said in an interview at a conference this week.

Hao Li, an associate professor of computer science at the University of Southern California, said during an appearance on CNBC’s “Power Lunch” that he estimates deepfake videos will become “perfectly real” in six months to a year.

“It’s still very easy, you can tell from the naked eye most of the deepfakes,” Li said. But for those who have the patience and the tools to do it, “there also are examples that are really, really convincing.” He added that such convincing videos require “sufficient effort.”

Li foresees a time when the naked eye will no longer be able to tell the difference between a real video and a deepfake because of the sophistication of the techniques now emerging to create them. “Soon, it’s going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,” he added.

Deepfake videos are videos manipulated through computers and machine learning. Several deepfake A.I. tools already exist, but they are accessible only to people with technical backgrounds and the money to afford them. Li said, however, that deepfake A.I. would soon be available to everyday people.

The emergence of deepfake videos has numerous implications, especially in the hands of malicious individuals who feed off proliferating fake news and misinformation. Deepfake videos have recently come under scrutiny for their possible impact on the upcoming 2020 elections.

At an MIT tech conference this week, Li demonstrated a deepfake video showing the face of Russian President Vladimir Putin. He explained that the video was created to show the current state of deepfake algorithms and what the future holds for the technology.

Li told conference attendees that deepfake technology is growing rapidly, faster than he had expected. Earlier, during the MIT conference, he had told the MIT Technology Review that “perfect and virtually undetectable” deepfakes were “a few years” away.

When asked to clarify the discrepancy between his statements to CNBC and the MIT Technology Review, Li said recent developments had made him “recalibrate” his timeline.

The tech expert was referring to the emergence of Zao, a China-based app that allows anyone with a smartphone to superimpose the faces of celebrities and other people onto someone’s video. Some news outlets have called the Zao app’s output “terrifyingly convincing.”

“In some ways, we already know how to do it,” he said in an email to CNBC. “[It’s] only a matter of training with more data and implementing it.”

To combat the ill effects of deepfake videos, Li said regulatory bodies need to look at the issue differently in order to find a solution, noting specifically that academic research is very important in this area. He also referenced his work on deepfake detection with Hany Farid, a professor at the University of California, Berkeley.

“If you want to be able to detect deepfakes, you also have to see what the limits are,” Li said. “If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways, it’s impossible to detect those if you don’t know how they work.”

About the Author

Al Restar
A consumer tech and cybersecurity journalist who does content marketing while daydreaming about having unlimited coffee for life and getting a pet llama. He also owns a cybersecurity blog called Zero Day.
