PREDICTIVE INTELLIGENCE
Imperva expert on the rise of
AI-enabled
cybercriminals
AI has proven to be a highly effective tool in helping
to defend organisations from modern sophisticated
attacks. But what happens when the criminals start to utilise
this technology against the defenders? Terry Ray, Senior
Vice President and Fellow at Imperva, tells us how AI-enabled
cybercriminals will alter the threat landscape – and what this means
for CISOs and their teams.
Artificial Intelligence (AI) – essentially
advanced analytical models – is not
uncommon in the cybersecurity
landscape. It has provided IT
professionals with the ability to predict
and react to cyberthreats more quickly
and efficiently than ever before. Surprisingly,
the ‘good guys’ now have the edge
over the criminals. AI is being used to
defend against cybercrime, but not yet
to perpetrate it. This won’t last forever
– AI will turn on us in the hands of
cybercriminals eventually. Before then, the
industry has some time to prepare itself
for the rise of AI-enabled cybercriminals.
www.intelligentciso.com | Issue 21

AI can allow companies to take large
volumes of information and find clusters
of similarity. This has always been a
focus of cybersecurity to a degree, but
organisations are often unequipped to
do so in sufficient depth because of time
and resourcing constraints.
By contrast, AI can whittle down vast
quantities of seemingly unrelated data
into a few actionable incidents or
outputs at speed, giving companies
the ability to quickly pick out potential
threats in a huge haystack.
Replicating human hacking tactics
The ability to quickly turn large amounts
of data into actionable insights is
something that cybersecurity teams
are going to need in the coming years,
because AI could become a formidable
enemy. Unlike malware, which is purely
automated, AI is beginning to mimic
humans to a worryingly accurate
degree. It can draw pictures, age
photographs of people, write well
enough to persuade people of truths –
or lies. Just recently, it has been found
to impersonate human voices.
This means that AI could potentially
replicate human hacking tactics, which
are currently the most damaging but also
the most time-consuming form of attack
for hackers. The best, most difficult
hacks to detect are those performed by
humans – digging into systems, watching
user behaviour and finding or installing
backdoors. Attacks performed with tools
are much easier to detect. They bang
around, they hit things, they find the
backdoor by knocking on every wall.
Hackers aren’t yet creating ‘AI-driven
sneaky thieves’, but they could. AI could