Editor's question

MIKE MACINTYRE, CHIEF SCIENTIST, PANASEER

Let's start by calling out AI for what it really is – marketing hype applied to a small subset of Machine Learning techniques. Much of the hype surrounding AI comes from how enterprise security products have adopted the term and the misconception (wilful or otherwise) about what constitutes AI.

The algorithms embedded in many modern security products could, at best, be called narrow (or weak) AI. They perform highly specialised tasks in a single (narrow) field and have been trained on large volumes of data specific to a single domain. This is a far cry from general (or strong) AI: a system that can perform any generalised task and answer questions across multiple domains. Who knows how far away such a system is (there is much debate, ranging from the next decade to never), but no CISO should be factoring such a tool into their three-to-five-year strategy.
With that in mind, the debate needs to focus on whether narrow/weak AI tools are a problem for security. To address this, we need to look at how we can empower CISOs and their teams with the right information to challenge security vendors about the efficacy of their 'AI powered' products. The current selection of products typically focuses on threat detection use cases. Therefore, it seems reasonable to ask the question 'what use cases does your product cover?'. As we established before, generalised AI doesn't exist, so the product will have a focused purpose.
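The 'focused purpose' point can be made concrete with a deliberately simple sketch. Everything below is invented for illustration (the feature names, the data, and the nearest-centroid approach are assumptions, not any vendor's method): a 'narrow AI' detector is just a statistical model fitted to one domain's data, able to answer one question and nothing else.

```python
# Hypothetical sketch of a narrow, single-task "threat detector":
# a nearest-centroid classifier over two invented features.
# It cannot answer anything outside this one narrow task.

def train_centroids(samples):
    """Compute the mean feature vector per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented domain-specific training data: [login_failures, mb_sent] per event.
training = [
    ([1.0, 10.0], "benign"), ([2.0, 12.0], "benign"),
    ([9.0, 90.0], "threat"), ([8.0, 95.0], "threat"),
]
model = train_centroids(training)
print(classify(model, [8.5, 88.0]))  # → threat
```

However sophisticated the statistics behind a real product, the shape is the same: training data from one domain in, one narrow judgement out – which is exactly why the 'what use cases does your product cover?' question has a definite answer.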
Next, 'what data does your product need?'. All Machine Learning algorithms need to be fed data and, more often than not, the algorithms perform better with more data. The response to this question will ei