Imagine you are on a conference call with your colleagues, discussing the latest sales numbers and information your competitors would love to get hold of. All of a sudden, your colleague Steve’s image flickers, and you notice something odd. It sounds like Steve, but his image doesn’t look quite right.
On closer inspection, you see that the area around his face shimmers and the edges appear blurry. You write it off as a technical glitch and continue the meeting as normal. A week later, however, you discover that your organisation has suffered a data leak and the information you discussed during the meeting is now in the hands of your biggest competitor.
Granted, this sounds like the plot of a bad Hollywood movie. But with today’s technological advances, like artificial intelligence (AI) and deepfakes, it could really happen.
Deepfakes, a blend of “deep learning” and “fake”, take the form of videos, images or audio. They are created with AI using a deep learning technique called generative adversarial networks (GANs), which can superimpose synthesised content over real content or create entirely new, highly realistic material.
And as GANs grow more sophisticated, deepfakes can be incredibly realistic and convincing. Designed to deceive their audience, they are often used by bad actors in cyberattacks, fraud, extortion and other scams.
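To make the adversarial idea behind GANs concrete, the sketch below trains a toy one-dimensional “generator” against a logistic “discriminator”: the generator learns to shift random noise until its samples look like the real data, while the discriminator tries to tell the two apart. All names, numbers and the gradient updates here are illustrative assumptions, not taken from any actual deepfake tool.

```python
import numpy as np

# Toy GAN sketch: real data comes from N(4, 1); the generator g(z) = theta + z
# learns theta so its output mimics the real distribution, while a logistic
# discriminator D(x) = sigmoid(w*x + b) learns to separate real from fake.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = 0.0          # generator parameter
w, b = 0.0, 0.0      # discriminator parameters
lr_d, lr_g = 0.05, 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)          # "authentic" samples
    fake = theta + rng.normal(0.0, 1.0, size=64)  # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # via one gradient step on the cross-entropy loss.
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(64), np.zeros(64)])
    err = sigmoid(w * x + b) - y
    w -= lr_d * np.mean(err * x)
    b -= lr_d * np.mean(err)

    # Generator update: push D(fake) toward 1 (i.e. fool the discriminator)
    # via one gradient step on -log D(fake).
    d_fake = sigmoid(w * fake + b)
    theta += lr_g * np.mean(1.0 - d_fake) * w

print(round(theta, 2))  # typically settles near the real mean of 4
```

The two players improve in lockstep: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic output, which is exactly why mature deepfakes are so convincing.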
The technology behind deepfakes has been around for a couple of years and was already being used to create fake graphic content featuring celebrities. Initially, creating a deepfake was a complicated endeavour: you needed hours and hours of existing material.
But it has now advanced to the point where everyone can use it, even those without much technical knowledge.
Anyone with a powerful computer can use programmes like DeepFaceLive and Nvidia’s Maxine to fake their identity in real time. And for audio, programmes like Adobe VoCo (first demonstrated back in 2016) can imitate someone’s voice very well.
This means you can join a Zoom or Microsoft Teams meeting looking and sounding like almost anyone. All you have to do is install the programme, choose a pre-generated identity or input one you created yourself, and you are good to go. It is that simple.
Combining this ease of use with realistic content leads to trust issues. In today’s digital age, where business is routinely done over a phone or video call, can you trust that you are speaking to the real, intended person and not a scammer hiding behind a deepfake of someone’s identity?
This is one of the fundamental dangers of deepfakes. When used in an enhanced social engineering attack, they are intended to instil a level of trust in the victim. Because of this danger, the US Federal Bureau of Investigation has issued a public service announcement warning about the rising threat of synthetic content, even going as far as giving the attacks a new name: Business Identity Compromise.
So, what can you do to protect yourself from deepfakes specifically designed to fool you? Here are some indicators to look out for.
Deepfakes can be very well made but often display defects such as distortion, warping or other inconsistencies. Telltale signs include inconsistent eye spacing and strange-looking hair, especially around the edges. You can also watch for syncing problems between lip, audio and face movements.
Lighting problems are another good giveaway. Consider whether the lighting and shadows look realistic. If the material is a video, try slowing it down or pausing in certain spots; this can make a deepfake easier to spot.
Another way to identify a deepfake is to consider the source. Where was it posted? And do you know if this is a reliable source that has vetted the material before putting it online?
Security awareness training is a must-have in any good security programme. If you don’t train people to detect threats and respond to them appropriately, how else are you going to shape the right security behaviour?
With deepfakes being such a new form of attack, and many people still unaware of them, it is even more important to get employees up to speed quickly.
While there are technologies that can help organisations identify deepfakes, they are expensive and, as it is still early days, can often only be used to find deepfakes among a set of existing media. This makes such solutions unsuited to the real-time communications tools a modern workforce uses daily, like Zoom.
Security best practices and zero trust
A proven rule in security is to verify what you do not trust. Examples include asking verification questions of someone you don’t trust on a conference call, or checking the digital fingerprints or watermarks on images.
Verification procedures are a compelling way to defend against deepfakes. Which ones you use depends on the security requirements of your organisation, but whichever procedure you choose, test it regularly. And when you do spot a deepfake, always inform your organisation and security team. You may not be the only one the bad actors are trying to fool.
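As a minimal illustration of the fingerprint idea mentioned above, one simple approach is to record a cryptographic hash of a trusted media file and re-check it later. The function names and sample bytes below are hypothetical, and a plain SHA-256 only detects byte-level changes; real media provenance schemes use far more robust perceptual hashes and watermarks.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

def is_untampered(data: bytes, trusted_digest: str) -> bool:
    """True if the media bytes still match the trusted fingerprint."""
    return fingerprint(data) == trusted_digest

# Register a fingerprint for the original clip (placeholder bytes here)...
original = b"\x00\x01video-frame-bytes\x02"
trusted = fingerprint(original)

# ...then later verify a received copy against it.
print(is_untampered(original, trusted))                 # True
print(is_untampered(original + b"tampered", trusted))   # False
```

The design choice is deliberate: any single changed byte produces a completely different digest, so a mismatch is a reliable signal that the file is not the one you originally trusted.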
And remember, trust is a fundamental requirement for interaction, so don’t overdo it and become distrustful of everything. Be mindful of the signs and, if you spot them, act accordingly.
Another best practice is making conference calls private. Ensure all videos, conference calls and webinars are (at least) password-protected so that only trusted individuals have access to them.
Understand the threat
Deepfake videos are probably the best-known application because Hollywood blockbusters like The Irishman employ the technology. In reality, deepfakes are a multi-faceted technology with many applications. Be aware that bad actors can also use voice deepfakes to scam you.
Don’t give them any ammunition
To create a deepfake, a bad actor needs existing content featuring the victim. Given our desire to share just about every little aspect of our personal and work lives on social media, we are making that very easy.
So, protect yourself by limiting your public presence on social media. Don’t make it easy for a bad actor to recreate your likeness or clone your voice from publicly available data.
Although the technology behind deepfakes is advancing, it is still in the early stages as an attack vector. But we must prepare now, as it is inevitable that bad actors will use deepfakes to fool and scam people more often in the future. Deepfakes are simply a threat we cannot afford to ignore.
Jacqueline Jayne is a security awareness advocate at KnowBe4