Deepfake Videos: Is There Anything You Can Do to Protect Yourself?

Written by Colin Asher
Published on June 24, 2019

Producing convincing fake videos, known as "deepfakes," has recently become remarkably easy. Depending on your point of view, that's either cool, scary, or perhaps both. What is easier to assert is that digital fakery of you, at least without your knowledge or consent, is decidedly not cool. This article explores the emerging phenomenon of deepfake videos and how they mark the next step in the evolution of the Disinformation Age we now find ourselves in.

    Origins

    The "deep" of deepfake refers to deep learning, a method by which artificial intelligence learns from data to become smarter. In this case, the AI uses its data, including recordings of facial movements, to superimpose a new face onto an existing face and body. While the term "deepfake" is a very recent coinage, the underlying concept is nothing new: people have been contending with the sometimes problematic, sometimes funny results of Photoshopped (or otherwise digitally altered) images for at least the last 15 years. What makes the current state of altered video particularly distressing is that this once expensive, exceptional technology is now easily accessible, freakishly smooth, and no longer requires Hollywood special-effects artists. And unfortunately, this newfound ease coincides with the recent flood of fake news that the aforementioned Disinformation Age has us struggling with.

    It probably makes sense that a high-profile 2018 warning about impending deepfakery came from a film actor and director, Jordan Peele, whose viral video pairs his voice impression of Obama with deepfaked footage of Obama mouthing the words. Peele's message is that convincing fake political speeches are now a reality, though his conclusion amounts to little more than "watch out that we don't become a dystopia."

    Since this whole deepfake thing is hot and fresh, and its apparent negative possibilities haven’t fully manifested, we don’t yet know the exact nature of the havoc it could cause. Early reports cannot offer much more than a “we’ll have to wait and see” (though we may not have to wait very long, since nowadays new deepfake cases are appearing more frequently, so certain points in this article may quickly become obsolete).

    The uses: personal and political?

    Like fake news, deepfake videos serve a variety of purposes. Some are made simply because they're amusing, such as the YouTube channels dedicated to swapping actors' faces in movies. That's the more benign side.

    Then there are the political fakes. A recent video of Nancy Pelosi wasn't even deepfaked, merely slowed down so that she appeared drunk; it was nonetheless shared by President Trump. Another instance, believed to be a deepfake, involved Gabonese president Ali Bongo, whose health had been a source of speculation. The video was followed shortly by an attempted coup by the Gabonese military. Whether or not that video was a deepfake, the point stands: we now have to question what we see like never before.

    Currently, though, the most prevalent negative use of deepfake videos hasn’t been the upheaval of the social order, but unfortunate instances of everyday revenge. Most often it’s been women who have fallen victim to having their faces morphed onto bodies of pornographic actresses. This also happens to celebrities, but some believe their own high-profile nature protects them from anyone actually falling for these false videos.

    The sad truth

    The sad truth is that the more videos and images of you exist online, the easier you are to deepfake. And the technology keeps improving: a fake can now be produced from just one photo of you, though the more footage available, the smoother the result. The latest techniques don't even require a voice-over from another actor; instead, editing a transcript can put new words directly into someone's mouth.

    Regulation and the freedom to fake

    Internet regulation is a damned tough thing. The issue remains thoroughly muddled in the wake of the 2016 US Presidential election when the extent of fake news’ prevalence and power became known. Now the problem of online deepfakery is getting attention in high places. Recently, a hearing was held by the House Intelligence Committee to investigate the subject.

    Even in conception, regulating online fakes steps on a lot of free speech's toes. First, platforms like Facebook and YouTube, where much of the fakery lives, are private corporations. Second, neither platform is a journalistic entity; they are merely places online where content can be posted.

    These sites can choose to exclude content when it violates copyright laws. True pornographic content is also excluded. In the case of the latter, this blocking isn’t taken to be problematic censorship, but is accepted as the right that both companies have to not allow the publishing of certain explicit material.

    How has this played out? While this article was being written, a deepfake of Facebook CEO Mark Zuckerberg appeared, in response to the company's earlier declaration that it would not police such content. One response social media platforms have used against fake content is de-prioritizing it so that it doesn't appear in recommendations, lowering its visibility. But fact-checking may prevent mass panic and global upheaval; it won't prevent interest. The rather predictable bottom line is that debunking a video as fake doesn't stop people from wanting to see it. Facebook noted that the Pelosi fake was still being viewed and shared even after it had been debunked.

    They can be used. But will they?

    There is fear that the next logical step for deepfake videos will be the influencing of public thought in the next US presidential election. Some have rebutted that deepfake videos have been around long enough to have already become a political tool, and the fact they haven’t yet is an indication that good ol’ fake news – which, unlike video, doesn’t always contain digital signatures to reveal the fake – still beats out more advanced tech. But this optimism is countered by fearful researchers who are scrambling to come up with finer methods of detection. As one computer forensics expert states, “we are outgunned 100 to 1” by those attempting to improve deepfake videos.

    How to spot them

    As we've said, deepfakes are improving constantly, so they will only get harder to spot. However, if you'd like to train your eye, here are a few tells to look for.

    • Strange blinking. Faked faces don't have real eyes or tear ducts, so their blinking often looks unnatural: too infrequent, too regular, or poorly timed.

    • Facial movements and musculature may be jerky, for instance, the mouth of a faked head may move robotically.

    • Shifts in skin tone and lighting. The video may flicker as the head turns, almost like bad video-game graphics.

    • A weird mixture of two faces. This can be revealed particularly during complex movements, because the more movement, the more footage and original angles are needed to make a convincing fake.
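    The blinking tell above is the basis of one real detection approach: track the eye aspect ratio (EAR), a measure from Soukupová and Čech's blink-detection work, over the frames of a video and check whether the subject blinks at a plausible rate. As a rough, simplified sketch (real systems would obtain the six eye landmark points per frame from a detector such as dlib or MediaPipe, which this example assumes as input):

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio (EAR) from six eye landmark points (x, y).

    p1 and p4 are the eye corners; p2, p3 lie on the upper lid;
    p6, p5 on the lower lid. EAR drops sharply toward zero when the
    eye closes, so thresholding it over time gives a crude blink signal.
    """
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    """Count dips below the blink threshold in a per-frame EAR series."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks
```

    A video of a talking head in which `count_blinks` stays near zero for minutes on end would be one (weak, heuristic) signal of fakery; the threshold and landmark layout here are illustrative, not tuned values.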

    With great technology comes great responsibility. And, if we’re to keep our heads and our free speech, we’ll need to come up with more clever ways to prevent a post-truth digital world.
