Securing the data in these systems is
integral to a successful application.
Consumers are increasingly voting with
their feet when it comes to data security.
If users don’t trust an organisation’s
systems and services to properly secure
their data, they will go elsewhere.
In fact, the Global State of Online Digital
Trust Index 2018 report found that nearly
half of European consumers would
stop using a business if their personal
information was breached. The rapid
demise of Code Spaces is a case in
point. The SaaS provider went out of
business in 2014 following a colossal
hack of its AWS environment.
Organisations increasingly understand
the critical risk that cyberattacks present
to their business, with brand damage
and a loss of returning customers
recognised as being just as significant a threat as lost
intellectual property.
Indeed, cybersecurity has undoubtedly
become a boardroom issue over the
past couple of years, with businesses
working around the clock to ensure
their networks, servers and applications
are as secure as humanly – and
machine-ly – possible.
However, as many organisations are
only now beginning to explore AI
solutions, the expertise and tools
needed to secure AI systems are still in
their infancy. There are nevertheless a few
things that businesses should consider
to help them secure their customer data
in AI systems.
The issue of data in transit
Data at rest is widely considered to
be vastly more secure than data in
transit. As a result, some organisations
opt for AI solutions that are deployed
wholly on-premises, as these require fewer
data transfers than an AI solution
deployed in the public cloud. However,
this in turn sacrifices the secure
backups and redundancy that the
cloud enables.
That said, if the right security steps are
taken by both the cloud provider and
the customer, implementing AI systems
in the cloud is still considered a safe
endeavour. But no matter where the
solution is hosted, all data should
be encrypted to ensure that, in the
event of a breach, the information
would not be readily accessible
to a malicious actor.
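
As a simple illustration, the sketch below encrypts a customer record before it is written to disk, using the Fernet recipe from the Python cryptography library. The record contents, file name and key handling are simplified assumptions for illustration, not a production design.

```python
# A minimal sketch of encrypting customer data before storage, using the
# Fernet recipe (AES-CBC + HMAC) from the Python "cryptography" package.
# The record contents and file path are hypothetical examples.
from cryptography.fernet import Fernet

# In production the key would come from a key-management service, never
# hard-coded or stored alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"customer_id": 1042, "balance": "1,200.00 GBP"}'
token = fernet.encrypt(record)  # ciphertext, safe to persist

with open("customer_1042.enc", "wb") as f:
    f.write(token)

# Even if the stored file is exfiltrated, the plaintext is only
# recoverable with the key.
assert fernet.decrypt(token) == record
```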
For AI systems in particular, differential
privacy is an important trend: it adds
statistical noise to large data sets to
mask any single individual's data, while
still making the right aggregate
information accessible to Machine
Learning algorithms.
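
To sketch the idea, the toy example below answers a counting query with the classic Laplace mechanism; the data set and epsilon value here are purely illustrative assumptions.

```python
# A toy sketch of the Laplace mechanism behind differential privacy:
# noise calibrated to a query's sensitivity masks any single person's
# contribution while keeping aggregates useful for Machine Learning.
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Answer a counting query under epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes
    the true answer by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 31, 44, 52, 29, 61, 38, 47]
print(private_count(ages, lambda age: age > 40))  # e.g. 3.7, not the exact 4
```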
Correct access
Conversational AI is currently one of
the most exciting applications of the
technology. Digital assistants enable
people to interact with a system as if it
were a human.
But, while revolutionising the user
experience (UX) of such digital systems,
the ease of access introduces new
security considerations, such
as authentication.
While there are clear benefits for users
in being able to access their bank
account information through an Amazon
Echo or Google Home, few would want
their visitors to be able to access this
sensitive information just by asking ‘Alexa,
what is the balance of my debit account?’
Conversational AI technologies must
have access barriers as strong as one
would expect when using a mobile or
web interface. Whether it's biometrics,
passwords or two-factor authentication,
if the information is sensitive, a resilient
barrier is required and expected.
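
As a rough sketch of such a barrier, the hypothetical handler below refuses to answer sensitive intents until the user supplies a one-time passcode, using the pyotp library. The intent names and session handling are assumptions for illustration, not any particular assistant's API.

```python
# A minimal sketch of gating a sensitive voice-assistant intent behind a
# second factor. Intent names and the fulfilment stub are hypothetical.
import pyotp

SENSITIVE_INTENTS = {"get_account_balance", "transfer_funds"}

def fulfil(intent):
    # Placeholder for the real business logic behind each intent.
    return f"Fulfilled: {intent}"

def handle_intent(intent, user_totp_secret, spoken_code=None):
    if intent not in SENSITIVE_INTENTS:
        return fulfil(intent)

    # Voice access alone is not a strong enough barrier: require a
    # one-time passcode from the user's enrolled authenticator app.
    totp = pyotp.TOTP(user_totp_secret)
    if spoken_code is not None and totp.verify(spoken_code):
        return fulfil(intent)
    return "For security, please read out the six-digit code from your authenticator app."

# Example: a balance request without a valid code is challenged.
secret = pyotp.random_base32()
print(handle_intent("get_account_balance", secret))          # challenge
print(handle_intent("get_account_balance", secret,
                    spoken_code=pyotp.TOTP(secret).now()))   # fulfilled
```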
Stay up to date
AI systems – in particular, conversational
AI systems – are still in their infancy.
And like any new technology, there are
going to be evolving security concerns
as new iterations and applications of the
technology are explored.
As many of these security considerations
are well beyond our current experience,
it is critical that IT and security decision
makers keep up to date on information
security trends and consider their
application to AI systems.
Voice authentication solutions, for
example, were previously considered
a robust biometric security technology:
they could identify a unique voice
pattern for every person, so a user's
voice effectively became their password
when using these solutions.
However, last year BBC Click reporter
Dan Simmons teamed up with his
twin brother to fool HSBC’s voice
authentication software, with his twin
successfully accessing Dan’s bank
account after mimicking his voice.
Further to this example, researchers
recently showed how it is possible
to trick voice authentication security
solutions by using mimicking software.
One of the researchers involved
recommended that voice be used,
at best, as just one aspect of a
multi-factor authentication process.
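
One way to act on that recommendation is to treat the voice-match score as necessary but never sufficient, as in the sketch below; the threshold and factor names are hypothetical assumptions for illustration.

```python
# An illustrative sketch of using a voice match as only one factor in an
# authentication decision. Threshold, score source and factors are
# hypothetical, not any specific vendor's scheme.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    voice_match_score: float   # 0.0-1.0 from a speaker-verification model
    device_recognised: bool    # known, previously enrolled device
    otp_verified: bool         # one-time passcode checked out

def authenticate(signals: AuthSignals, voice_threshold: float = 0.9) -> bool:
    # A strong voice match is necessary but never sufficient on its own:
    # mimicry attacks mean at least one independent factor must also pass.
    if signals.voice_match_score < voice_threshold:
        return False
    return signals.device_recognised or signals.otp_verified

# A convincing mimic on an unknown device with no passcode is rejected.
print(authenticate(AuthSignals(0.95, device_recognised=False,
                               otp_verified=False)))  # False
```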
Innovate at the rate of security
Businesses should undoubtedly be
looking to new AI systems to innovate
their customer experiences and align
with the rapid shifts in customer
expectations for seamless, quick
engagement with brands. Nevertheless,
they should keep in mind that the rate of
innovation should only be as fast as their
ability to secure it.
Hackers are constantly looking to
breach AI systems for the wealth of
user data that they hold. It's critical that
businesses stay ahead of them and take
considered steps to secure user data in
AI systems.