Intelligent CISO Issue 73 | Page 37

feature

The contest between security experts and criminals is a test of endurance, one that evolves continuously and demands strategic adaptation. Rob Pocock, Technology Director, Red Helix, explores the impact of AI-generated attacks and how to build a defence system capable of fighting advanced cyberthreats.

Artificial Intelligence (AI) is by no means a new technology in the realm of cybersecurity. It has been around for years, built into security solutions to help prevent breaches by detecting anomalies in user behaviour. Recently, however, we have seen a change in the tide. Considerable advancements in the capabilities of AI, particularly that of Generative AI (GenAI) and Large Language Models (LLMs), have opened the door to new possibilities – for security teams and cybercriminals alike.
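The behavioural anomaly detection mentioned above can be illustrated with a minimal sketch: flag any observation that deviates sharply from a user's baseline. The z-score rule, the 2-sigma cutoff and the sample login counts are illustrative assumptions, not how any particular security product works.

```python
# Minimal behavioural-anomaly sketch: flag hourly login counts that sit
# more than `threshold` standard deviations from the user's baseline.
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of observations whose z-score exceeds the threshold."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Typical hourly login counts for one user, with a suspicious burst at index 5.
logins = [3, 4, 2, 3, 4, 40, 3, 2]
print(flag_anomalies(logins))  # the burst at index 5 is flagged
```

Real products model far richer signals (time of day, geography, device), but the principle is the same: learn a baseline, then alert on deviation.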

For those working to protect organisations, these developments mean improved detection and triaging of cyberattacks. More advanced AI can be used to better recognise patterns and relationships between data, spotting phishing attacks faster and clustering them together to identify campaigns.
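The campaign-clustering idea can be sketched simply: group phishing messages whose wording overlaps heavily, so near-duplicate lures surface as one campaign. The Jaccard similarity over word sets, the 0.5 cutoff and the greedy assignment below are illustrative choices, not any vendor's actual method.

```python
# Sketch of phishing-campaign clustering: messages whose word sets overlap
# strongly (Jaccard similarity >= cutoff) are grouped as one campaign.

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def cluster_campaigns(messages, cutoff=0.5):
    """Greedily assign each message to the first cluster whose
    representative it resembles; otherwise start a new cluster."""
    reps, clusters = [], []
    for i, msg in enumerate(messages):
        tokens = set(msg.lower().split())
        for j, rep in enumerate(reps):
            if jaccard(tokens, rep) >= cutoff:
                clusters[j].append(i)
                break
        else:
            reps.append(tokens)
            clusters.append([i])
    return clusters

emails = [
    "your account is locked click here to verify now",
    "your account is locked click this link to verify now",
    "quarterly invoice attached please review",
]
print(cluster_campaigns(emails))  # the two near-duplicate lures group together
```

Production systems use embeddings and far more robust similarity measures, but the payoff is the one the article describes: hundreds of individual lures collapse into a handful of identifiable campaigns.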
Conversely, cybercriminals have been handed a new tool to increase the speed, sophistication and reach of their attacks. GenAI and LLMs can help them automate processes and draft increasingly convincing emails and messages, written in a wide range of languages. It is no coincidence that the widespread adoption of ChatGPT over January and February 2024 was accompanied by a 135% increase in ‘novel social engineering’ attacks. As the technology advances, so do the threats, with ever more convincing deepfakes beginning to pose a greater danger.
With AI technology showing no sign of slowing down, the race between cybersecurity professionals and criminals has stepped up a gear – both sides looking to use the evolving capabilities of AI to thwart the advances of the other.
The danger of AI-enabled attacks
GenAI tools like ChatGPT, Bard and LLaMA are all readily available and, in many cases, can be used completely free of charge. While they may have built-in restrictions to prevent them from being used for unethical purposes, those restrictions are far from airtight. There are several examples of them being bypassed with certain ‘jailbreak’ prompts – and even a WikiHow page offering tips on how this can be achieved.