Deepfakes are now one of the fastest-growing forms of adversarial AI, with associated losses expected to rise from $12.3 billion in 2023 to $40 billion by 2027, an astonishing compound annual growth rate of 32%. Deloitte expects deepfakes to keep increasing in the coming years, with banks and financial services providers as the primary targets.
Deepfakes typify the cutting edge of adversarial AI attacks, registering a 3,000% increase last year alone. Deepfake incidents are predicted to rise by 50% to 60% in 2024, with 140,000 to 150,000 cases expected worldwide this year.
The latest generation of generative AI apps, tools, and platforms provides attackers with everything they need to quickly and cost-effectively create deepfake videos, mimicked voices, and fraudulent documents. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud targeting call centers costs roughly $5 billion annually. The report underscores the serious threat deepfake technology poses to banking and financial services.
Bloomberg reported last year that there is “already an entire cottage industry on the dark web selling scam software from $20 to thousands of dollars.” A recent infographic based on Sumsub's Identity Fraud Report 2023 provides a global view of the rapid growth of AI-driven fraud.
Source: Statista, How Dangerous Are Deepfakes and Other AI-Driven Fraud? March 13, 2024
Companies are unprepared for deepfakes and adversarial AI
Adversarial AI creates new attack vectors that no one sees coming, creating a more complex, nuanced threat landscape that prioritizes identity-driven attacks.
It’s no surprise that one in three enterprises has no strategy for addressing the risk of an adversarial AI attack, which will likely begin with deepfakes of their key executives. Ivanti’s recent research shows that 30% of enterprises have no plans to identify and defend against adversarial AI attacks.
The Ivanti 2024 State of Cybersecurity Report found that 74% of enterprises surveyed are already seeing evidence of AI-driven threats. The vast majority, 89%, believe AI-driven threats are just getting started. Sixty percent of the CISOs, CIOs and IT leaders Ivanti interviewed fear their enterprises are unprepared to defend against AI-driven threats and attacks. Using a deepfake as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware and API-related vulnerabilities is becoming increasingly common. This aligns with the threats security professionals expect to grow more dangerous with generative AI.
Source: Ivanti 2024 State of Cybersecurity Report
Attackers focus their deepfake efforts on CEOs
VentureBeat regularly hears from cybersecurity CEOs of enterprise software companies, who prefer to remain anonymous, about how deepfakes have evolved from easily identifiable fakes to recent videos that look legitimate. Voice and video deepfakes impersonating industry executives appear to be a favorite attack strategy, aimed at scamming their companies out of millions of dollars. The threat is compounded by how aggressively nation states and large-scale cybercriminal organizations are moving to develop, hire, and grow their expertise with generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts that have taken place this year alone, the one targeting the CEO of the world's largest advertising agency shows how sophisticated attackers are becoming.
In a recent Tech News Briefing with the Wall Street Journal, CrowdStrike CEO George Kurtz explained how advancements in AI are helping cybersecurity professionals defend systems and also commented on how attackers are using it. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. election, and threats from China and Russia.
“The deepfake technology is so good these days. I think that's one of the areas where you really worry about it. I mean, in 2016 we were tracking this, and you saw people actually having conversations with bots, and that was in 2016. And they were literally arguing or they were promoting their cause, and they were having an interactive conversation, and it was like there was nobody behind the thing. So I think it's pretty easy for people to get caught up in that's real, or there's a narrative that we want to get behind, but a lot of it can be driven and has been driven by other nation states,” Kurtz said.
CrowdStrike’s Intelligence team has invested significant time into understanding the nuances of what makes a convincing deepfake and where the technology is headed to achieve maximum impact on viewers.
Kurtz continued: “And what we've seen in the past, we've spent a lot of time on this with our CrowdStrike intelligence team, is that it's kind of like a pebble in a pond. You take a topic or you hear a topic, anything that has to do with the geopolitical environment, and the pebble gets thrown into the pond, and then all these ripples come out. And it's this amplification that happens.”
CrowdStrike is known for its deep expertise in AI and machine learning (ML) and its unique single-agent model, which has proven effective in driving its platform strategy. With such deep expertise at the company, it’s understandable how its teams would experiment with deepfake technologies.
“And if now, in 2024, we can do deepfakes and some of our internal guys have made funny parody videos with me to show me how scary it is, you wouldn't be able to tell that it wasn't me in the video. So I think that's one of the areas that I'm really concerned about,” Kurtz said. “There's always concern about infrastructure and things like that. Those areas, a lot of it is still paper voting and things like that. Some of it isn't, but how you create the false narrative to get people to do things that a nation state wants them to do, that's the area that I'm really concerned about.”
Companies must rise to the challenge
Companies run the risk of losing the AI war if they don't keep pace with attackers' rapid weaponization of AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so commonplace that the Department of Homeland Security has published a guide, Increasing Threats of Deepfake Identities.