
Written by Amit Yossi Siva Levi, Principal Researcher at Forter.

(Previously CTO and co-founder of bot detection company Immue).

As generative AI tools continue to proliferate, they open new doors for fraudsters to exploit. Have you experimented with generative AI tools like ChatGPT yet? From beating writer’s block to composing ad copy, creating travel itineraries, and kickstarting code snippets, there’s something for everyone. Unfortunately, “everyone” includes criminals.

Generative AI & Cybercrime

Cybercriminals are early adopters. If there’s a shiny new technology to try, you can bet that crooks will explore how to use it to commit crimes. The earlier they can exploit this technology, the better – this will give them a head start on defences being put in place to block their nefarious activities. If tech helps boost the scale or sophistication of criminal attacks, it’s extra attractive. It’s no wonder cybercriminals have been loving tools like ChatGPT.

There are many examples of problematic uses of generative AI, from malicious code to phishing email composition to CAPTCHA solving. Yes, CAPTCHAs — the checks that are supposed to confirm that you’re human and not a bot. Ironic, isn’t it?

Recently, ChatGPT was used in a social engineering ploy to trick a human into solving a CAPTCHA on its behalf. In this case, the AI pretended to have a visual impairment that made it difficult to see the images, supposedly legitimising its need for assistance.

That means we’re not just up against AI; we’re up against AI using humans to overcome barriers against it.

The use of CAPTCHAs, mobile PINs and more introduces unnecessary friction for legitimate customers, not just bad actors. We tend to see that for every dollar a business loses to fraudulent transactions, it turns away $30 from good customers who are wrongfully blocked or rejected. That’s why digital commerce leaders are abandoning traditional and legacy technologies in favour of AI and machine learning systems that can pinpoint the exact identity (good customer, bad actor, or bot) behind every interaction.

This approach ensures legitimate customers have a frictionless experience across the whole buying journey all while blocking malicious activity from bots and fraudsters.
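To make that idea concrete, here is a minimal sketch of how such a classifier might work. This is not Forter’s actual system; the behavioural features, training data and labels are purely illustrative assumptions.

```python
# A minimal sketch (illustrative only) of an ML model that classifies each
# interaction as a good customer, bad actor, or bot, instead of challenging
# everyone with CAPTCHAs. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-session behavioural features:
# [typing_speed_cps, mouse_path_entropy, account_age_days, failed_logins]
X_train = np.array([
    [4.2, 0.81, 730, 0],   # typical good customer
    [3.9, 0.77, 365, 1],   # good customer
    [25.0, 0.05, 0, 0],    # scripted bot: inhumanly fast, no mouse noise
    [5.1, 0.62, 2, 14],    # bad actor: fresh account, many failed logins
])
y_train = ["good", "good", "bot", "bad_actor"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new interaction without adding friction for the user.
session = np.array([[4.5, 0.74, 400, 0]])
print(model.predict(session)[0])  # e.g. "good" -> let through frictionlessly
```

The design point is that the decision happens invisibly, from signals the user already generates, so the good customer never sees a challenge at all.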

Dealing With the Challenge of the Present

The combination of generative AI and social engineering is a worrying one. Fraudsters, who are already used to incorporating social engineering into their schemes, may pick up on this possibility and attempt to use generative AI to scale up their attacks. Any weaknesses in the AI could be offset by tricking humans into covering for it where necessary.

Before you get excited about using AI to beat AI (on the principle of setting a thief to catch a thief), look at another example of a CAPTCHA failure to see what can go wrong.

AI doesn’t yet have human intelligence; its strength lies instead in processing and using vast amounts of information. Solutions need to be creative, not formulaic, and new, not amalgamations of old ideas.

My perspective, based on working with and against bots for many years, is that short-term solutions need to take innovative technological angles: finding specific weaknesses in the generative AI services that currently exist, exploiting those weaknesses, and blocking the AIs from being misused for fraud.
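As a toy illustration of one such narrow angle, the sketch below flags form fills whose keystroke cadence is inhumanly fast or inhumanly uniform, a classic tell of automated input. The thresholds are assumptions for illustration, not production values.

```python
# A minimal sketch, assuming inter-keystroke timings can be collected
# client-side. Scripted or AI-driven form fills often type faster and far
# more uniformly than humans; we flag both signals here.
from statistics import mean, stdev

def looks_automated(inter_key_ms: list[float]) -> bool:
    """Flag a keystroke-timing sequence as likely bot-driven."""
    if len(inter_key_ms) < 5:
        return False  # not enough signal to judge
    avg = mean(inter_key_ms)
    spread = stdev(inter_key_ms)
    # Humans rarely sustain <40 ms between keys, and their timing jitters;
    # a near-zero spread suggests a script replaying keystrokes.
    return avg < 40 or spread < 5

human = [120, 95, 210, 88, 160, 140]   # irregular, human-like cadence
bot = [10, 10, 11, 10, 10, 10]         # fast and uniform
print(looks_automated(human), looks_automated(bot))  # False True
```

Signals this narrow are easy for attackers to adapt to, which is exactly why they belong in the short-term toolbox rather than the long-term strategy.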

Prepare Now for What’s Ahead

Now is the time to begin thinking about where things might go, and to start exploring the tools, processes and attitude changes we might need to protect against new attack types and methods.

It’s already time to look beyond the attacks we’re used to. Deepfakes using the voices of CEOs on phone calls and WhatsApp calls have been seen in the wild. Deepfake images and videos have entered the mainstream. With the wealth of data on the internet at its disposal, generative AI has all the material it could need to help fraudsters scale attacks using these sorts of tricks before most people are even aware of the risk.

Since fraudsters have been branching out across the entire customer purchasing journey, before, during and after checkout (targeting post-checkout flows such as returns fraud and pre-checkout areas such as ad tech), it seems probable that generative AI will also play a role there.

I think the most effective solutions will be relatively narrow in the short term. That’s the fastest and most effective way to deal with the threat before it becomes a severe problem.

We will need to be far more strategic in the long term. Generative AI will change fraudsters’ options and daily work, just as it is changing other industries. And just as in other industries, that change is not far away.

When it comes to crime, the possibilities for generative AI tools seem to be limitless. Fortunately, so is the human capacity for ingenuity in preventing crime. It’s time to get our thinking caps on.

Cyber Security Unity

Cyber Security Unity is a global community and content hub that is dedicated to bringing individuals and organisations together who actively work in cyber security. The aim of Cyber Security Unity is to foster greater collaboration in the industry to help combat the growing cyber threat. Our work is showcased through the provision of strong thought leadership via blogs, articles, white papers, videos, events, podcasts and more. For more information visit www.csu.org.uk.
