PREDICTIVE INTELLIGENCE
be used to build an independent, patient,
intelligent and targeted attacker that
waits and watches: an automated APT, if
you will. That would be far more difficult
to defend against than automated
’splash’ tactics and it could be executed
or industrialised on a very large scale.
What an AI-cybercriminal
would entail
The good news is that any such
automated APTs will arrive slowly,
because AI is complicated. An AI
algorithm isn’t usually designed to be
user friendly. Instead of pointing and
clicking, you have to customise the
hacking tool to a degree that needs
AI expertise. Those skills are in short
supply in the industry, let alone the
hackersphere, so we’re likely to see
this achieved first by nation-states, not
by hobbyists. This means that the first
likely targets are organisations with
national interest.
Let’s look at some public examples.
A while ago there were hacks on
major healthcare providers in the
US, all of which worked with a lot of
federal employees. At the same time,
organisations which handle Class
5 security clearance were hacked,
losing fingerprint and personal data for
thousands of people.
One theory about these hacks was
that a nation state stole the data. As it
didn’t turn up on the Dark Web for sale,
where did it end up? If this nation does
now possess it, they have terabytes
of healthcare, HR, federal background
check and contractor data at their
command. The sheer volume of such
data would make relating one set of
data to another very difficult and time-
consuming if done by hand.
But an AI program could find clusters
and patterns in the data set and use
them to work out who could be a good
target for a future attack. You could
connect their families, their health
problems, their usernames, their federal
projects – there are lots of ways to use
that information. Nation states steal
data for a reason – they want to achieve
something. So as AI matures, we could
see far more highly-targeted attacks
taking place.

Terry Ray, Senior Vice President and
Fellow at Imperva
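The kind of cross-referencing described above is trivial to automate. A minimal sketch, assuming two hypothetical leaked datasets keyed on a shared identifier (all records, field names and the `link_records` helper are invented for illustration):

```python
# Hypothetical illustration: joining two leaked datasets on a shared key
# to build richer target profiles. All data and field names are invented.

healthcare = [
    {"ssn": "111-22-3333", "condition": "diabetes"},
    {"ssn": "444-55-6666", "condition": "hypertension"},
]

hr = [
    {"ssn": "111-22-3333", "name": "A. Example", "project": "federal-contract-x"},
    {"ssn": "777-88-9999", "name": "B. Sample", "project": "internal"},
]

def link_records(left, right, key):
    """Join two record sets on a shared key, merging matching rows."""
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

profiles = link_records(healthcare, hr, "ssn")
# Each merged profile now ties a health condition to a name and a project.
for p in profiles:
    print(p["name"], p["condition"], p["project"])
```

Scale the same join up to terabytes of stolen healthcare, HR and background-check data, add clustering on the merged profiles, and target selection becomes a batch job rather than analyst work.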
AI phishing
While it’s likely that AI-powered hacking
will begin its life as the preserve of
nation-states, it’s only a matter of time
before this sort of attack becomes
commonplace in the regular market.
Let’s consider phishing as a case study
for how this might look.
At the moment, it's often easy to tell that
an email is a phishing attempt from the
way it's written: misspelled words and
odd grammar give it away. AI could
eliminate that.
Let’s say that AI can write better than
60% of people, using colloquialisms and
idiomatic phrasing – it’d be pretty hard
to spot. And even if AI is only ‘as good’
as humans, it can be much faster and
therefore more effective.
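To see why, consider the crude kind of check that today's telltale signs make possible: flag mail whose wording looks wrong. A toy version, with an invented word list and threshold (real filters are far more sophisticated), might look like:

```python
# Toy heuristic: flag an email as suspicious if too many of its words
# fall outside a known vocabulary. The word list and threshold are
# invented for illustration only.

VOCAB = {"your", "account", "has", "been", "suspended", "please",
         "verify", "details", "to", "restore", "access"}

def misspelling_ratio(text):
    """Fraction of words not found in the vocabulary."""
    words = [w.strip(".,!").lower() for w in text.split()]
    unknown = [w for w in words if w not in VOCAB]
    return len(unknown) / max(len(words), 1)

clumsy = "Youre acount has ben suspend, plese verifye detials"
fluent = "Your account has been suspended, please verify details to restore access"

print(misspelling_ratio(clumsy) > 0.5)   # clumsy text trips the heuristic
print(misspelling_ratio(fluent) == 0.0)  # fluent text sails through
```

An AI that writes fluent, idiomatic English defeats this entire class of cue: the signal defenders rely on simply disappears.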
Phishing is one of the most lucrative
forms of hacking – if AI can raise the rate
of success from 12% to 15%, say, with
half the human effort, then it could be
worth it for hackers. We haven't seen any
truly malicious, AI-crafted spearphishing
attempts yet, but it's likely to be a very
effective first step for AI cybercrime.
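The economics of that example are worth making concrete. Taking the illustrative figures above at face value (12% success today versus 15% with AI at half the human effort), the attacker's return per unit of effort rises by a factor of 2.5:

```python
# Back-of-the-envelope economics using the illustrative figures from
# the text: success rate rises 12% -> 15% while human effort halves.
baseline_rate, ai_rate = 0.12, 0.15
baseline_effort, ai_effort = 1.0, 0.5

baseline_yield = baseline_rate / baseline_effort  # successes per unit effort
ai_yield = ai_rate / ai_effort

print(ai_yield / baseline_yield)  # 2.5x return per unit of attacker effort
```

Even a modest lift in success rate, combined with a large cut in labour, changes the cost-benefit calculation decisively in the attacker's favour.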
Building an effective defence

An effective defence comes down
to having the right people and the
right tools in place. It's been several
years now that organisations have
been working to solve the information-
overload problem in cybersecurity, yet
most security teams still have difficulty
weeding out data theft incidents from
the chaff.

Organisations have realised that the
collection of user and application
access to data is a responsibility of
cybersecurity. Now security is feeling
the pain of trying to understand this
vast data. The most successful teams
Issue 21 | www.intelligentciso.com