Summary
Today, technology can be used to create convincing but fabricated videos known as deepfakes. Typically, these feature prominent figures making unusual statements or claims and have been used to lampoon various celebrities. As a widely circulated deepfake video of Barack Obama demonstrates, though, this technology could just as easily be used to discredit any conspicuous individual.
Concerns amongst those aware of this technology are so pronounced that it has featured in two well-received television dramas within the past 12 months. The first, Years and Years, a joint production between the BBC and HBO, highlighted (albeit briefly) how deepfakes could be used to destabilise political institutions. The second, the conspiracy thriller The Capture, explored how this tech could be used by corrupt authorities as a means of falsifying evidence.
Little, however, has been written about the threat this technology poses to the business community. Here, we consider these threats and, vitally, how businesses can combat them.
Why deepfake videos are highly persuasive
At ROCK, we pride ourselves on our ability to think differently; to find solutions that, whilst less obvious, are considerably more effective. Doing this successfully typically requires an inquisitive – possibly even sceptical – mindset. Eventually, this becomes second nature.
Open-mindedness and a willingness to question assumptions become the norm. Even when you have trained your mind to subject everything to analysis, though, you still don’t tend to question what you see or hear.
We place such implicit trust in our senses that questioning whether what we see or hear is anything other than reality is simply too challenging – too far removed from the status quo – for most of us to pursue any investigation at all, let alone an effective one.
Consider, for a moment, that an individual who is deemed to have difficulty distinguishing reality from fiction is believed to be suffering from some form of psychosis – a blanket term associated with conditions such as schizophrenia.
Also pertinent are the findings of a 2010 study entitled The Bias Against Creativity: Why People Desire but Reject Creative Ideas,1 conducted at Cornell University. The study’s authors discovered that, whilst people typically value and desire creativity, atypical ideas or responses are consistently rejected when assessors find themselves motivated by a desire to reduce uncertainty.
Essentially, we, as human beings, are hardwired to accept conventional wisdom when faced with something that, to all intents and purposes, appears incontrovertible. Those who are viewed as being unable to differentiate between what is certain and uncertain are deemed psychotic and face considerable stigma.
It is only logical to conclude, then, that any material that could make us question our senses – particularly sight and hearing, the two senses many would argue we rely upon most – will be rejected. It is because of this that deepfake technology is truly terrifying.
Deepfake tech and PR crises
Having established that people are in no way inclined to question what they see or hear, we can easily see how deepfake technology and manipulated audio and video could be used to discredit virtually any individual in the public eye. When combined with confirmation bias – people’s willingness to accept materials that support their established beliefs – it becomes clear why people trust and spread falsified materials.
Consider how, in May 2019, a crudely edited video of the Speaker of the United States House of Representatives, Nancy Pelosi, gained traction on social media. The video showed Pelosi addressing a group of people whilst seemingly inebriated, and it received considerable coverage, with even President Trump sharing it on his Twitter feed.
Alarmingly, this video was not edited with sophisticated deepfake technology; it was simply existing footage slowed down. This, though, was sufficient, with Trump supporters and critics of Pelosi sharing the video millions of times. Imagine the traction, then, that a sophisticated deepfake could gain. With the technology becoming more accessible and, vitally, easier to use, it’s likely we’ll find out very soon.
It is this increasing convenience that is likely to see anxiety concerning deepfakes extend beyond the worlds of politics and celebrity to business and industry. Just imagine, for example, a video of Arron Banks in which he states, perhaps mockingly, that he knows Brexit will harm livelihoods but that he funded the Leave.EU campaign because he knew it would benefit him directly.
Alternatively, consider a deepfake featuring Mark Zuckerberg claiming that he’s happy for Facebook users’ data to be sold to the highest bidder. Both of these topics and individuals are highly divisive, and it’s quite reasonable to assume that each would find an audience willing to believe and propagate such content as a result.
These examples, though, are likely to affect large organisations and renowned public figures. Could deepfake tech be leveraged against the little guy, the millions of SMEs operating across the globe? The answer is an unequivocal and resolute ‘yes’.
Deepfake tech and cybercrime
The vast majority of coverage – and, indeed, this article so far – has focused on how deepfake technology can be used to create video material. The audio component of audio-visual is consistently overlooked, yet it is this that presents cybercriminals with a considerable opportunity and businesses with a significant threat.
Rather than relying on highly advanced techniques, the majority of cybercrime depends upon what is referred to as social engineering. This sees a hacker mimic an authoritative individual or institution in an attempt to gain trust and deceive users into providing credentials or other sensitive information. The most obvious example is phishing.
Imagine, for just a moment, if a cybercriminal were able to mimic the voice of an employee’s line manager or, worse yet, a company director or owner. They could call this individual directly and request logins, banking information or anything else and, as far as the employee was concerned, the person they were speaking to would have every right to the information they were requesting.
This person would, after all, be someone they trusted – a person that they would recognise and who has authority. Just how likely would they be to question the caller’s motivations?
The example cited above, whilst hypothetical, has multiple potential applications. The call could, for example, be directed to an IT support provider, financial institution, services provider and so on. All of these provide services to clients and, even where staff are bound by a strict policy of not disclosing information over the telephone, it is not hard to imagine them being cajoled into handing over materials by what appears to be an irate client.
How businesses can protect themselves
As with most matters concerning cyber security strategy, improving awareness and providing adequate training is the most logical first step – particularly as the vast majority of data breaches come about as a result of human error.2
Ensuring that employees are conscious of the threat of cybercrime and the common vectors cybercriminals will use is vital. Pairing this with a company-wide policy stipulating that credentials, financial information and the like are never disclosed over the telephone will go a long way towards preventing such an attack from succeeding.
In terms of countering deepfakes designed to discredit key stakeholders and, therefore, businesses, time will be of the essence. Discovering such content and proclaiming it to be fabricated as quickly as possible will be vital: the sooner denials are circulated, the more convincing they will be. To identify the existence of such materials as quickly as possible, ROCK recommends using automation to scan the web frequently for new materials relating to the company, as the sketch below illustrates.
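As a minimal sketch of what that automation might look like – assuming Python, the hypothetical search term "ROCK" and Google News’ public RSS search feed, none of which are prescribed above – a script along these lines could poll for new mentions and flag anything not previously seen:

```python
import time
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SEARCH_TERM = "ROCK"  # hypothetical: your company or director's name
FEED = "https://news.google.com/rss/search?q=" + urllib.parse.quote(SEARCH_TERM)
POLL_SECONDS = 3600   # scan hourly

seen_links = set()    # in production, persist this between runs

def scan_once():
    """Fetch the RSS feed and report any items not seen before."""
    with urllib.request.urlopen(FEED) as response:
        tree = ET.parse(response)
    for item in tree.iterfind(".//item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            seen_links.add(link)
            # In production, route this to email/Slack and prioritise video content
            print(f"New mention: {title}\n  {link}")

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(POLL_SECONDS)
```

In practice, such a monitor would persist the set of seen links between runs, cover social platforms and video-hosting sites as well as news feeds, and route alerts to whoever handles the company’s PR response.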
Finally, digital forensics techniques capable of identifying when deepfake tech has been used to create materials are nearing fruition. These could, in extreme circumstances, be used to disprove and discredit harmful materials but, at the time of writing, are yet to be fully developed.
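Those mature tools cannot be demonstrated here, but the underlying forensic principle – that manipulated media leaves statistical traces – can be illustrated with a much older technique: error level analysis on a still image. The sketch below is purely illustrative, assuming Python with the Pillow library and a hypothetical frame.jpg extracted from a suspect video; it is not a deepfake detector, merely an example of the family of techniques involved.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=20):
    """Re-save the image as JPEG at a known quality and amplify the
    per-pixel difference; edited regions often compress differently."""
    original = Image.open(path).convert("RGB")

    # Re-compress to an in-memory JPEG at a fixed quality level
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Difference image: brighter areas compressed differently
    diff = ImageChops.difference(original, resaved)
    return diff.point(lambda value: min(255, value * scale))

if __name__ == "__main__":
    # 'frame.jpg' is a hypothetical still taken from a suspect video
    error_level_analysis("frame.jpg").save("frame_ela.png")
```

Regions that appear markedly brighter than their surroundings in the output compressed differently from the rest of the image, which can indicate editing and would warrant closer inspection.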
Of course, others will soon recognise this growing threat and begin the process of developing tools and techniques to counter it. Expect a few high-profile casualties before robust solutions are widely available, though.
References
1. The Bias Against Creativity: Why People Desire but Reject Creative Ideas, Cornell University (2010)
2. 90 per cent of data breaches are caused by human error (2019)