
BLOG

Beyond the illusion—unmasking the real threats of deepfakes

Essential strategies to protect leaders from AI-driven extortion, fraud, and sabotage

10-MINUTE READ

July 30, 2024

The term ‘deepfake’ was coined in 2017. When I saw the first examples, such as videos where a celebrity's face was seamlessly superimposed onto someone else's body, I felt both awe and fear at this new computing capability. Those early deepfakes were so convincing that they were hard to distinguish from real footage, highlighting both the potential and the risks of the technology. Fast forward seven years and deepfakes are all grown up, and it's not looking pretty for companies. Deepfakes can cause enormous harm to the business world; in fact, they are already being used for extortion targeting senior executives. Victims are conned into transferring money and intellectual property straight to the threat actor, believing they are acting legitimately on instructions from their leaders.

I have been around computers since the days of floppy disks. Back then, security was about stopping data from being stolen or locked up (ransomware) and entire systems from being taken down (for example, via zero-day exploits). As technologies have evolved, so have the security risks, and I've never encountered threats as complex and sophisticated as deepfakes. Following my conversations with numerous seasoned experts who help companies respond to and recover from these types of attacks, one clear consensus emerges:

Deepfake extortion, which uses manipulated videos or calls that appear to feature company executives, presents a unique challenge: even while security systems and data remain intact, the company's money and intellectual property can walk out the door.

Trust me, these deepfakes are so good that your employees will not be able to tell that the person on the screen is not who they think it is. We must therefore rely on human factors, protocols and separation of duties to help keep businesses and people safe.
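To make the "protocols and separation of duties" defense concrete, here is a minimal illustrative sketch in Python. The class names, the two-approver policy and the out-of-band callback step are my own assumptions for illustration, not any specific company's protocol: the idea is simply that a high-value transfer is released only after verification through a separate, trusted channel and sign-off by two people other than the requester, so a convincing deepfake call alone is never enough.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """A high-value transfer request awaiting release (illustrative only)."""
    amount: float
    requester: str                      # who asked for it, e.g. the "CFO" on a video call
    out_of_band_verified: bool = False  # confirmed via a separate, known-good channel
    approvers: set = field(default_factory=set)

REQUIRED_APPROVERS = 2  # hypothetical policy: two distinct people must sign off

def verify_out_of_band(req: TransferRequest) -> None:
    # Call the requester back on a number from the corporate directory,
    # never on a number supplied in the request itself.
    req.out_of_band_verified = True

def approve(req: TransferRequest, employee: str) -> None:
    # Separation of duties: the requester cannot approve their own transfer.
    if employee == req.requester:
        raise PermissionError("Requester may not approve their own transfer")
    req.approvers.add(employee)

def release_transfer(req: TransferRequest) -> bool:
    # Funds move only when both controls are satisfied, regardless of how
    # convincing the original video call or voice message was.
    return req.out_of_band_verified and len(req.approvers) >= REQUIRED_APPROVERS
```

The design point is that no single channel and no single person can trigger the payment: a deepfake would have to defeat the callback verification and two independent approvers, not just one employee's eyes and ears.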

How deepfakes are redefining cybersecurity

Deepfakes, crafted using the latest generative AI technologies, represent a new breed of deception. By leveraging artificial intelligence, they produce hyper-realistic videos, audio and text that can fool even the most discerning eyes, potentially resulting in multimillion-dollar losses for businesses. And their capabilities are only going to become more advanced. According to Accenture’s Cyber Intelligence (ACI) researchers, threat actors are willing to spend more for higher-quality deepfakes, with prices reaching up to $20,000 per minute for high-quality video. What is more, researchers observed a 223% increase from Q1 2023 to Q1 2024 in the buying and selling of deepfake-related tools on major dark web forums.1

The implications for corporate leaders are profound. Deepfakes can be weaponized to create disturbances not only within organizations, but within entire markets and even governments. Mitigating human vulnerabilities has long been a critical aspect of cybersecurity. Yet, the focus has often been scattered, with warnings to guard against various threats. 

Now, with the emergence of deepfakes, there's an added layer of uncertainty. It's becoming increasingly difficult to discern whether communications are genuine—be it a call from a supervisor, text message from a colleague or a scam attempt.

For example, a multinational firm in Hong Kong reportedly suffered a $25 million loss in a sophisticated deepfake scam. The scammers digitally recreated the company’s chief financial officer, along with other employees, on a conference call, instructing a colleague to transfer money, which they did.2 As the technology becomes more sophisticated, distinguishing between authentic and falsified identities will become harder, complicating security protocols. The ACI team expects a rise in AI-driven cyberattacks and stresses that organizations must adopt advanced AI-based cybersecurity measures that detect, respond to, predict and prevent threats in real time. We are going to see all sorts of new solutions, such as infrared and various scanning technologies, popping up.

But the key takeaway is that people, especially those at the board level, need to understand that they are now being targeted.

How can leaders navigate the mirage?

Investing in proactive cybersecurity measures is not only a strategic move but also a cost-effective one. The financial burden associated with rebuilding an organization's reputation and regaining customer trust after a deepfake attack significantly surpasses the expenses of implementing robust cybersecurity protocols ahead of time. You might be wondering, "What steps should I take to ensure the appropriate level of protection?" Here is my suggestion. You need to act swiftly to strengthen your organization’s defenses. It is essential to integrate advanced security features, stringent controls and comprehensive employee training and awareness.

What are the key elements you should consider initially?

  • Educate and align the leadership team immediately, and ensure they understand the individual threat each of them faces.

  • Enhance policies, procedures and governance to secure the digital core against AI-enhanced risks.

  • Run tabletop exercises, penetration testing and crisis management drills for leadership and finance teams.

This may seem an ambitious strategy; however, working with a trusted partner gives you access to comprehensive advisory services and advanced technology designed to mitigate the escalating risks associated with deepfakes.

Don’t wait for the first crisis to occur; the time to act is now!

The threat of deepfakes extends beyond individual harm, posing significant risks to the integrity and stability of corporate enterprises and global markets. As leaders, we must be at the forefront of adopting innovative solutions and advocating for stronger protections to safeguard our businesses from this emerging form of cybercrime.

It is up to us as an industry to end the cat-and-mouse game between attackers and organizations and ensure that the threat of deepfakes does not become pervasive.

For more information, see how we're helping our clients with deepfakes.

References

1 Accenture Cyber Threat Intelligence Research

2 Deepfake scammer walks off with $25 million in first-of-its-kind AI heist

WRITTEN BY

Flick March

Managing Director – EMEA Cyber Strategy Lead & Global Cyber C-Suite and Board Lead