
The Deepfake Threat Is Here: It’s Time to Start Building It into Enterprise Risk Management

When UK consumer rights champion Martin Lewis appeared to take to social media to promote a new ‘Quantum AI’ investment scheme, many were surprised at the move. After all, Lewis had built up trust with the public over many years by refusing to endorse commercial projects of any sort. In fact, the video ad featuring Lewis turned out to be a deepfake. The quality of the likeness was branded “frightening” by the man himself, who warned that such fakes could “ruin lives”.

That’s undoubtedly true. But when used maliciously, the technology not only presents a fraud threat to consumers but also poses a massive financial and reputational risk to enterprises. It’s a risk that a sound information security management system (ISMS) could help to mitigate.

What is Deepfake Tech?

Deepfakes use a type of AI technology known as deep learning to create spoofed video and audio of real people that is hard to tell apart from genuine content. The technology can be used to synthesise speech and convincingly manipulate facial expressions to make it look like an individual is saying something they aren’t. While there are legitimate uses for the technology, for example in the film industry, it’s increasingly being deployed for malign purposes.

The Martin Lewis fake is one of the first known cases of deepfake video being used to promote investment fraud. However, deepfake audio has been around for some years and has been used to trick recipients into wiring funds to accounts belonging to fraudsters. In one case, a British CEO was fooled by deepfake audio of his German boss into wiring over $240,000 to a third-party account. In another, a Japanese manager was conned into believing the director of his parent business had requested a $35m transfer.

What Are The Potential Threats?

The above cases are a kind of riff on the business email compromise (BEC) scam, in which fraudsters impersonate a trusted figure to trick victims into making large transfers to attacker-controlled accounts. However, there are other emerging threats to enterprises. These include:

Supercharging phishing tactics: A deepfake audio or video of a trusted figure within the company, say the head of IT or of a business unit, could trick the victim into handing over their credentials or sensitive business information. The FBI has already warned about deepfake audio being used in conjunction with video conferencing platforms.

Fraudulently applying for remote jobs: The FBI has also warned of deepfakes being used alongside stolen personal information to help scammers secure remote working jobs. The resulting corporate network access could then be used to steal sensitive customer and company information.

Bypassing identity checks: Facial or voice recognition is an increasingly popular way for organisations to authenticate users, especially customers. As detailed by Europol, deepfakes could be deployed to trick these systems into granting account access. Although this isn’t a direct threat to enterprise security, it could expose organisations to serious reputational and financial risk.

Defrauding customers directly: As Tanium chief security advisor Timothy Morris argues, audio deepfakes could be used as a follow-up to smishing (SMS phishing) campaigns, in which customers are told to call a number if they don’t recognise a specific charge on their account.

“If you call, a friendly deepfake representing your bank is waiting to take your credentials and money,” he explains to ISMS.online. “Similar methods can utilise deepfakes for tech support and romance scams.”

Spreading disinformation about the company: For example, a company might seek to hit a rival’s sales and share price by posting a fake video of its CEO claiming the firm’s products are faulty.

“The circulation of deepfake content aimed at defaming organisations or key personnel can cause substantial reputational damage,” Incode CEO Ricardo Amper tells ISMS.online. “By manipulating public perception, deepfakes can have far-reaching societal implications, impacting market perception, customer trust, and even political environments.”

How Can An ISMS Help?

Fortunately, there are systematic steps organisations can take to mitigate the risks posed by deepfakes, according to Eze Adighibe, ISO consultant at Bulletproof.

“Deepfakes are associated with various cyber risks that an effective information security management system (ISMS), via a compliance framework such as ISO 27001, can support in mitigating. Examples of these risks include social engineering attacks, scams, identity theft, fake profiles and automated misinformation,” he tells ISMS.online.

“An ISMS requires a comprehensive information security risk assessment with an analysis of impact and the selection of controls to treat identified risks. Therefore, organisations should consider including deepfakes in their risk assessment activities considering the threat they pose today.”

Specifically, an ISMS can help to mitigate deepfake risk in three areas, according to Adighibe:

  • Raising awareness of deepfake-related risk among employees
  • Implementing the proper technical controls to detect and prevent deepfakes
  • Developing incident response/management procedures for dealing with deepfake attacks

He explains that ISO 27001 controls which could help here include security awareness training and phishing simulations, strong access controls, and security monitoring tools. Others cover privacy and protection of PII, information deletion, security event reporting and threat intelligence.

“Organisations well versed in mitigating emerging threats will already have measures in place to tackle deepfakes, but it is high time all businesses, regardless of size, take stock of the risks and implement the right security controls,” Defense.com CEO Oliver Pinson-Roxburgh tells ISMS.online.

He is right. Deepfake audio and video is not only increasingly realistic; it is also becoming more affordable to individuals with nefarious intent. There are also signs that these capabilities are being offered as a service on the cybercrime underground, a sure-fire way to democratise the technology among even more malicious actors.

 

ISMS.online now supports ISO 42001 - the world's first AI Management System. Click to find out more