CHRIS JORDAN,
CEO, FLUENCY
The problem of AI/ML hype is that CISOs who have bad or missing policies, processes and products will think an AI/ML solution will resolve them. AI/ML is not going to be a replacement for good structure and know-how.
Years ago, when I was at McAfee, there was a project that demonstrated better protection than signature-based systems. The problem was that in an AV test it would finish last, for it detected known viruses poorly compared to signature-based engines.
McAfee had to complement its AV engine, applying the statistical approach after performing signature detection in order to get the best results.
Using AI/ML is the same thing. The best efficiency comes from mature security, filled in with AI/ML.
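The layering described above can be sketched as a simple pipeline: run the precise, cheap signature check first, and only fall back to a statistical score when no signature matches. This is an illustrative sketch, not McAfee's implementation; all names, signatures and the scoring heuristic are hypothetical.

```python
# Hypothetical sketch of layered detection: signatures first,
# then a statistical fallback. All names and data are illustrative.

KNOWN_SIGNATURES = {"eicar-test-string", "malware-xyz-hash"}

def signature_match(sample: str) -> bool:
    """Exact match against a known-bad signature set."""
    return sample in KNOWN_SIGNATURES

def statistical_score(sample: str) -> float:
    """Toy stand-in for a statistical/ML classifier.

    Returns a score in [0, 1]; here we simply treat high character
    diversity as suspicious, purely for illustration.
    """
    return min(1.0, len(set(sample)) / 40.0)

def classify(sample: str, threshold: float = 0.8) -> str:
    if signature_match(sample):                 # precise check first
        return "malicious (signature)"
    if statistical_score(sample) >= threshold:  # fill in with statistics
        return "suspicious (statistical)"
    return "benign"
```

The ordering matters: known viruses are caught by the signature path with no false positives, and the probabilistic path only handles what signatures miss.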
www.intelligentciso.com | Issue 06
Is the ‘hype’ from AI and ML putting
business at risk? The answer is ‘yes’.
Hype always puts businesses at risk, for
hype is greater than the immediate value.
The winners of a hype game are those
that cash in on the hype, not the value of
what is being hyped.
But, if you are buying AI/ML products, does that add risk to your business? Again, the answer is 'yes'. AI and ML are probability algorithms. That means there is a guarantee that they will sometimes be wrong, which is the very meaning of risk. As security professionals, we understand that accepting risk is our job. Which brings us to the most meaningful question: 'When is AI/ML an acceptable risk?'
• AI/ML requires pre-existing structured processes
• AI/ML requires an expert to determine reason and correctness
editor’s question
While AI and ML have vastly different performance characteristics and implementations, it's safe to bunch them together. That's because what we care about is that we are given an answer from a defined set of data.
This is the characteristic that makes AI/
ML valuable, for when the amount of data
becomes large, computer analysis can
scale where human analysis is limited.
At Fluency we focus on data retention and compliance, and this has led us to have vast data sets. We have been applying AI/ML algorithms to the data we capture. Our experience is that while AI and ML detect anomalies well, they do not explain why an anomaly occurred. An expert was still required, but that expert could scale to cover larger data sets by allowing the system to point out the anomalies.
Why do we talk about anomalies and not intrusion detection? AI and ML approach the problem by expecting a 'next' state. When the next element is statistically unlikely, they trigger. Over time, a repeated behaviour that triggers becomes expected and no longer triggers. But an anomaly is an event that should not have occurred. If the activity pattern that is learned is normal, malicious activity will appear as abnormal: an anomaly.
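The triggering behaviour described above can be sketched with a minimal frequency-based detector: an event triggers while it is rare relative to the learned baseline, and once it repeats often enough it becomes expected and stops triggering. This is a toy illustration of the idea, not Fluency's product; the class name and threshold are assumptions.

```python
from collections import Counter

class NextStateAnomalyDetector:
    """Toy model of the behaviour described above: trigger on events
    that are statistically unlikely given what has been observed, and
    stop triggering once a repeated event joins the baseline."""

    def __init__(self, min_count: int = 3):
        self.counts = Counter()
        self.min_count = min_count  # observations before an event is 'expected'

    def observe(self, event: str) -> bool:
        """Record the event; return True if it is an anomaly."""
        anomalous = self.counts[event] < self.min_count
        self.counts[event] += 1    # repeated triggers become expected
        return anomalous
```

For example, the first three sightings of a new login pattern trigger, and the fourth does not. This also shows why an expert is still needed: the detector says only that the event was unexpected, never why it happened.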
So, when is AI/ML an acceptable risk? AI/ML should be used when you are replacing a mature and measurable process. The enhancement of AI/ML can then be measured against an existing solution to demonstrate that it is, indeed, reducing risk.
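Measuring the enhancement against an existing solution can be as simple as scoring both the mature process and the AI/ML-enhanced one against the same labelled events and comparing precision and recall. A minimal sketch, with made-up labels purely for illustration:

```python
def precision_recall(predictions, labels):
    """Precision and recall for boolean predictions vs ground truth."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# The same labelled events, scored by the mature baseline process
# and by the AI/ML-enhanced process (illustrative data only)
labels   = [True, True, False, False, True]
baseline = [True, False, False, False, True]
enhanced = [True, True, False, True, True]

p_base, r_base = precision_recall(baseline, labels)
p_new, r_new = precision_recall(enhanced, labels)
```

Only when the enhanced numbers measurably beat the baseline on the same data can you claim the AI/ML addition is reducing risk rather than merely adding a probabilistic layer.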