The use of artificial intelligence (AI) is becoming ubiquitous thanks to the dramatically lower cost of leveraging it (e.g., cheaper computing and on-demand cloud services), the growth of toolsets and APIs (e.g., Google Cloud AI Building Blocks), and the wide range of solutions it enables (e.g., speech synthesis, conversational experiences, and content discovery in images and videos). While these technologies are incredibly useful for innovators developing the next generation of productivity and security products, they are undoubtedly being used for nefarious purposes as well. For a cybercriminal, the technology doesn't have to be perfect if it lets them cast a very wide net cheaply: AI-assisted attacks can deliver a much higher return for the level of effort invested.
One technology that leverages AI and machine learning (ML) in a nefarious way is the deepfake: a machine learning technique that can generate new audio and video of a person from a relatively small sample of legitimate recordings and photos. Its application is to convince an unsuspecting individual that someone has said or done something they haven't. Couple this with the ability to engage in conversations in real time, and many people can be tricked into believing they are having a legitimate conversation (or even a video chat) with a trusted individual instead of a cybercriminal. Several deepfake videos have surfaced, with much publicity, over the past couple of years as a way of raising awareness of this technology's potential to steal money, erode public trust, and create political turmoil.
“As information security leaders, it’s always important to remember that basic hygiene and simple security practices can often be the most effective”
It’s not just a theoretical risk, either. Last March, nearly a quarter of a million dollars was wired out of a British energy company after its managing director received a phone call from a thief using deepfake technology to mimic his boss’s voice. The managing director said the request struck him as a bit awkward, but the ruse was convincing enough that he believed it was a legitimate request from his boss. The attacker also called the victim several times, which further increased the victim’s trust.
Despite the introduction of this new and powerful technology, the fundamental attack strategy is nothing new. In this case, the attacker attempts to gain trust by convincing the victim that they are someone they are not, someone the victim believes they should trust. Once that trust is established, the attacker leverages it for monetary benefit. Phishing emails, virtual kidnapping calls, and tailgating through your company’s front door all rely on this same strategy.
One of the oldest, yet most effective, security practices that can still mitigate this risk is to use some form of multi-factor authentication. For many enterprise users, this is already a familiar concept when accessing corporate computer systems. However, the idea of multi-factor authentication goes far beyond carrying a token in your pocket. The fundamental principle, especially in the age of deepfake technology, is that people need to insist on additional channels to verify identity before making or allowing any transaction or security change. If that sounds a bit complicated, it’s because it certainly can be. And for security leaders trying to shape the best practices of our enterprise users, both in the office and outside of it, it is quite a challenge.
For example, if a user receives an email from a person claiming to be their boss that says something along the lines of “I will be out of the office tomorrow,” the message on its own is a relatively low-risk communication. The recipient does not need to expend tremendous effort to determine whether it is legitimate. But imagine the same email also adds, “and I need you to wire money to X account.” If your role at the company never involves wiring money, you would likely be immediately suspicious and refuse to comply. If, however, your role occasionally requires you to wire money for your boss, the best security practice is to authenticate the request through some other means before executing it. That could be a face-to-face conversation, a phone call or text message to a known number, or a direct conversation with a third person who can verify the request; it can take almost any form. The important point is that a separate attempt was made to validate the person or request, and that the method used was not dictated or influenced by the attacker themselves.
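The out-of-band check described above can be expressed as a simple policy rule: verification must use a contact drawn from a directory established before the request ever arrived, never a callback number or channel supplied in the request itself, since the attacker controls those. The sketch below is purely illustrative; the function names, the request fields, and the directory data are all hypothetical, not taken from any real system.

```python
# Illustrative sketch of an out-of-band verification policy.
# Core rule: the verification channel comes from a pre-established
# trusted directory, never from the incoming request.

TRUSTED_DIRECTORY = {
    # Contacts agreed on in advance (hypothetical data)
    "boss@example.com": {"name": "A. Manager", "phone": "+1-555-0100"},
}

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_change"}

def requires_out_of_band_check(request):
    """High-risk actions always need verification over a second channel."""
    return request.get("action") in HIGH_RISK_ACTIONS

def verification_contact(request):
    """Return a trusted phone number for the claimed sender.

    Deliberately ignores any callback info inside the request itself;
    an attacker who wrote the request also controls that channel.
    """
    entry = TRUSTED_DIRECTORY.get(request.get("sender"))
    return entry["phone"] if entry else None

request = {
    "sender": "boss@example.com",
    "action": "wire_transfer",
    "callback": "+1-555-9999",  # attacker-supplied; intentionally unused
}

if requires_out_of_band_check(request):
    contact = verification_contact(request)
    # Call or text the *directory* number, not request["callback"]
    print(f"Verify request via known number: {contact}")
```

Note the design choice: when the claimed sender is not in the directory at all, the lookup returns nothing and the transaction simply cannot be verified, which is the safe default.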
As information security leaders, it’s always important to remember that basic hygiene and simple security practices can often be the most effective. Today’s increasingly sophisticated cyber threats, whether they rely on older tricks or powerful AI technology, can often be thwarted with the same simple security checks. When I was a young child, my parents told me never to talk to a stranger unless that person could tell me a secret word only my parents knew. This basic form of multi-factor authentication, which many of us have used for generations with our own children in a non-technical context, is still just as powerful and effective against these new, emerging threats.
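The childhood "secret word" maps directly onto a shared-secret challenge-response. A minimal sketch using only Python's standard library is shown below; the secret value and function names are hypothetical, and the fresh per-conversation challenge is what keeps a recorded (or deepfaked) answer from being replayed later.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch: the "secret word" as challenge-response.
# Both parties agree on a secret out of band, in advance. The verifier
# issues a fresh random challenge each time, so replaying an old
# (possibly deepfaked) response fails.

SHARED_SECRET = b"family-code-phrase"  # hypothetical; agreed in advance

def respond(challenge: bytes, secret: bytes) -> str:
    """Prove knowledge of the secret for this specific challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes) -> bool:
    """Constant-time comparison against the expected response."""
    expected = respond(challenge, secret)
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)     # fresh per conversation
good = respond(challenge, SHARED_SECRET)

print(verify(challenge, good, SHARED_SECRET))                         # True
print(verify(challenge, respond(challenge, b"wrong"), SHARED_SECRET)) # False
```

HMAC is used here rather than sending the secret itself so the secret never crosses the channel, only proof of knowing it.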