FEATURE
RACHEL ROUMELIOTIS,
VICE PRESIDENT OF CONTENT
STRATEGY AT O’REILLY
By now we should all be accustomed
to Artificial Intelligence (AI) being in
our everyday lives and hearing how
its advancements can change how we
work, interact and learn in the long term.
Newspapers and magazines are
littered with articles about the latest
advancements and new projects being
launched because of AI and Machine
Learning (ML) technology.
In the last year it seems like all of the
necessary ingredients – powerful,
affordable computer technologies,
advanced algorithms and the huge
amounts of data required – have come
together. We’re even at the point of
mutual acceptance for this technology
from consumers, businesses and
regulators alike. It has been speculated
that over the next few decades, AI could
be the biggest commercial driver for
companies and even entire nations.
In fact, AI is changing more than
what computers can do and how
we communicate and interact with
technology. AI is changing the very nature
of work and of hiring, and is serving as a
catalyst for organisation-wide change.
However, as with any new technology,
adoption must be thoughtful, both
in how it is designed and how it is
used. Organisations also need to
make sure that they have the people
to manage it, which can often be an
afterthought in the rush to achieve the
promised benefits. Before jumping on
the bandwagon, it is worth taking a step
back, looking more closely at where AI
blind spots might develop, and what can
be done to counteract them.
Security, privacy and ethics
As the pace of AI and ML development
intensifies alongside heightened
awareness of cybercrime, organisations
must ensure they take into account
any potential liabilities. Despite this, it
has been proven that security, privacy
and ethics are low-priority issues
for developers when modelling their
Machine Learning solutions.
According to O’Reilly’s recent AI
Adoption in the Enterprise survey,
security is the most concerning
blind spot within organisations. In
fact, 73% of senior business
leaders admit that they don’t check for
security vulnerabilities during model
building. Additionally, more than half
of organisations also don’t consider
fairness, bias, or ethical issues during
Machine Learning development. Privacy
is similarly neglected, with only 35%
keeping this top of mind during model
building and deployment.
The buck stops with businesses on this
issue. They need to adjust and honour
the agreement set out when they start
compiling and analysing data. This can
be tricky as businesses don’t always
have security and privacy ingrained as
www.intelligentciso.com | Issue 25