Intelligent CISO Issue 73 | Page 38

f

e

a

t

u

r

e

Rob Pocock , Technology Director , Red Helix
In essence , it means that nearly anyone with nefarious intentions can craft malicious code or wellstructured phishing or smishing messages at a rapid pace .
Not only does this put highly advanced , quickto-respond GenAI in the hands of cybercriminals , but it also considerably lowers the bar to entry for cybercrime . In essence , it means that nearly anyone with nefarious intentions can craft malicious code or well-structured phishing or smishing messages at a rapid pace , opening the door to an increased number of potential attackers .
The use of GenAI in social engineering attacks also makes these types of threat harder to detect , creating more linguistically convincing messages that can be tailored to specific targets . Furthermore , the adaptive nature of AI means these threats will continuously evolve , bypassing conventional detection methods that rely on recognising patterns of attack .
The rise of deepfake technology , a by-product of advanced AI , presents an additional concern . It can be used to create highly realistic and convincing forgeries of audio and video content , with the potential to mimic individuals , or create scenarios that never occurred . The implications of this are profound , extending from personal security breaches to the manipulation of public opinion and political discourse – with the upcoming UK and US elections being particular risk areas .
Prompt injection and data poisoning
In addition to malicious actors using AI and LLMs in the development of attacks , there is the threat of criminals exploiting vulnerabilities innate to the tools themselves .
One method of achieving this is through prompt injection attacks , which are used to ‘ trick ’ LLMs into behaving in an unexpected way . Similar to the aforementioned ‘ jailbreaking ’, these attacks use
38 WWW . INTELLIGENTCISO . COM