Soteryx

Deepfake...a New Online Trend of Misinformation

Deepfake: Where Personal and Cyber Security Intersect? Part 1

In Part 1, Soteryx examines the technology of deepfakes: phony video and audio that look real. In Part 2 we will examine the legal and political ramifications, though one can easily imagine them: a digital video that looks authentic impugns a product or a firm's business reputation, purports to release a customer's or client's personal, private information, or shows an elected official, courtesy of a foreign power or rival political party, sleeping through critical policy meetings. Deepfakes even engage the dead. The technology has been used not only to resurrect deceased actors in film and television fiction but also in factual presentations: the late celebrity chef Anthony Bourdain's voice was manufactured for a recent documentary, and the avant-garde artist Salvador Dalí greets visitors to the museum dedicated to him in St. Petersburg, Florida.

The Tech of “Real” Fakery and Misinformation

Over the past ten years, the tech industry has focused on developing increasingly capable artificial intelligence (A.I.) technologies to expedite processes that would previously have required human involvement or would not have been possible at all. The results have been impressive, as A.I. algorithms have made their way into everything from website search engines to virtual assistant devices (e.g., Google Home or Amazon Echo smart speakers). By learning from the wealth of information present on the internet, as well as from user inputs, A.I. technologies are consistently becoming smarter and more lifelike. However, this ability to learn from the information it is fed is increasingly being exploited for malicious purposes, especially the spread of misinformation on social media platforms.

Here is a scary example featuring Facebook chairman and CEO Mark Zuckerberg supposedly confessing to the platform's nefarious aim: greed.

No recent A.I. technology has posed as great a threat to legitimate news and information online as deepfakes. This kind of A.I. works by processing video and audio data of one or more people in order to learn their appearances and mannerisms, which it then uses to create a highly realistic fake video of the targeted person. In the case of celebrities or politicians, who tend to have a wealth of video and audio recordings publicly available online, deepfake programs have far more data to work with and can thus create increasingly convincing media featuring these targets. Though the concept of swapping the faces and voices of public figures to create fake media is not new, the neural networks that power deepfakes are, and their ability to effortlessly learn from existing data is both powerful and problematic.
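For readers curious about the mechanics, the face-swap approach popularized by early consumer tools pairs one shared encoder with a separate decoder per identity: the encoder learns a compact representation of any face, and each decoder learns to reconstruct one specific person from it. Below is a minimal sketch of that structure for intuition only. It uses random arrays in place of real face images and a PCA/least-squares stand-in for the deep convolutional networks real systems use; the function names (`fit_linear_autoencoder`, `fit_decoder`) are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale patches for two people
# (real systems train on thousands of aligned face crops).
faces_a = rng.random((200, 64))
faces_b = rng.random((200, 64))

def fit_linear_autoencoder(X, k=16):
    """PCA as a stand-in for the shared encoder: project faces to k dims."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]  # (mean face, encoder weights)

# One encoder is trained on BOTH identities (pooled data), so the code
# captures pose and expression in a way both decoders understand.
mean, enc = fit_linear_autoencoder(np.vstack([faces_a, faces_b]))

def encode(X):
    return (X - mean) @ enc.T

def fit_decoder(X):
    """Per-identity decoder: least-squares map from codes back to pixels."""
    W, *_ = np.linalg.lstsq(encode(X), X - mean, rcond=None)
    return W

dec_a = fit_decoder(faces_a)  # reconstructs person A
dec_b = fit_decoder(faces_b)  # reconstructs person B

# The "swap": encode a frame of person A, then decode it with B's
# decoder, yielding B's appearance driven by A's pose and expression.
swapped = encode(faces_a[:1]) @ dec_b + mean
print(swapped.shape)  # (1, 64)
```

The design point carried over from real deepfake software is the asymmetry: sharing the encoder while splitting the decoders is what lets one person's expressions drive another person's face.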

According to Mika Westerlund, the technology behind deepfakes can benefit film, advertising, and other industries; for example, it has allowed film companies to map a deceased actor's face onto a stand-in performer. However, the greater threat deepfakes pose to society cannot be ignored. For one, Westerlund explains that misinformative deepfakes can threaten individuals by serving as material for blackmail. More broadly, the misinformative capability of deepfakes poses an especially serious threat to the legitimacy of news sources and governments. Konstantin A. Pantserev adds that deepfake technology is following directly in the footsteps of fake news as a veritable psychological weapon that can likewise be used to sway the beliefs of online communities. Additionally, access to deepfake creation programs such as FakeApp means that creating and spreading these kinds of videos is only becoming easier for the average online user.

Though deepfakes will likely play an important role in new online trends of misinformation, solutions are being developed to combat their proliferation. In the case of deepfakes created as personal blackmail (a trend that began in 2017), many social media websites have banned such content altogether. However, identifying deepfakes in order to ban them appears to be the next challenge in the fight against the risks of this technology. In their study on exposing deepfakes, Li and Lyu propose using their own A.I. program to detect whether a video is a deepfake by analyzing whether it contains visual traces, or "warping artifacts," that stem from imperfections in the A.I. algorithms that produce deepfakes. Many researchers propose following such a model, as current deepfake-producing programs leave visual artifacts that countering A.I. programs can readily identify. However, new methods will need to be developed as this type of A.I. improves.
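The intuition behind warping artifacts is that deepfake pipelines synthesize a face at limited resolution and then warp it into the target frame, so the pasted region carries less fine detail than its untouched surroundings. The toy simulation below illustrates that cue. It is a hedged sketch, not Li and Lyu's actual detector (which trains a convolutional network on such artifacts); the random "frame," the nearest-neighbor warp, and the `high_freq_energy` heuristic are our own illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_energy(img):
    """Mean absolute horizontal + vertical gradient: a crude sharpness score."""
    gx = np.abs(np.diff(img, axis=1)).mean()
    gy = np.abs(np.diff(img, axis=0)).mean()
    return gx + gy

def simulate_warped_face(img, factor=4):
    """Mimic the warping artifact: downscale, then upscale (nearest neighbor)."""
    small = img[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

frame = rng.random((64, 64))   # stand-in for one video frame
face = frame[16:48, 16:48]     # the region a deepfake would replace
warped = simulate_warped_face(face)

# The warped (deepfaked) region retains far less high-frequency detail
# than the original, which is the cue an artifact-based detector learns.
ratio = high_freq_energy(warped) / high_freq_energy(face)
print(ratio < 1)  # True
```

In practice a detector compares statistics like these between the face region and the rest of the frame, or learns the comparison end to end; as the article notes, each such cue lasts only until generation methods improve enough to erase it.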

How can I protect myself, my family, my business?
        
Contact us at https://www.soteryx.com/contact and we will break down how we can help secure you, your reputation, your business’s physical and cyber infrastructure, adding both value and peace of mind.

*Christopher Chambers is EVP and General Counsel at Soteryx Corp. Tristan Schentzler is a senior at The University of British Columbia-Vancouver, majoring in International Relations and Journalism.