Continuing advancements in Artificial
Intelligence and Machine Learning have
led to invaluable technological gains,
but threat actors are also learning to
leverage AI and ML in increasingly
sinister ways. AI technology has extended the ability to produce convincing deepfake video to a less-skilled class of threat actors attempting to manipulate individual and public opinion.
AI-driven facial recognition, a growing
security asset, is also being used to
produce deepfake media capable of
fooling humans and machines.
Our researchers also foresee more threat
actors targeting corporate networks to
exfiltrate corporate information in two-stage ransomware campaigns.
With more and more enterprises
adopting cloud services to accelerate
their business and promote
collaboration, the need for cloud
security is greater than ever.
As a result, the number of organisations
prioritising the adoption of container
technologies will likely continue to
increase in 2020. Which products will
they rely on to help reduce container-related risk and accelerate DevSecOps?
The threatscape of 2020 and beyond
promises to be interesting for the
cybersecurity community.
Broader deepfake capabilities for less-skilled threat actors
The ability to create manipulated content
is not new. Manipulated images were
used as far back as World War II in
campaigns designed to make people
believe things that weren’t true.
Raj Samani, Chief Scientist and McAfee
Fellow, Advanced Threat Research
What’s changed with the advances in Artificial Intelligence is that you can now build a very convincing deepfake without being an expert in technology. There are websites where you can upload a video and receive a deepfake video in return. Very compelling capabilities in the public domain can deliver both deepfake audio and video to hundreds of thousands of potential threat actors, equipping them to create persuasive phoney content.
Deepfake video or text can be weaponised to enhance information warfare. Freely available video of public comments can be used to train a Machine Learning model that produces deepfake video depicting one person’s words coming out of another’s mouth.
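The common face-swap approach behind such tools pairs one shared encoder with a separate decoder per identity: each decoder learns to reconstruct its own person’s face, and swapping decoders at inference performs the transplant. The sketch below is a minimal, illustrative PyTorch version of that idea only; the layer sizes, random stand-in tensors and tiny training loop are hypothetical simplifications, not any real tool’s implementation.

```python
# Minimal sketch: shared-encoder / two-decoder autoencoder, the classic
# face-swap design. All shapes and data here are hypothetical stand-ins.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()    # shared: learns pose/expression features
decoder_a = Decoder()  # learns to render person A's face
decoder_b = Decoder()  # learns to render person B's face
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for aligned face crops harvested from public video.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training: each decoder reconstructs its own person through the
# shared encoder (real runs take many thousands of steps).
for _ in range(10):
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode A's expressions, render them with B's decoder,
# producing B's face appearing to speak A's words.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared across both identities, it is pushed to learn features such as pose and expression that are common to any face; swapping in the other person’s decoder at inference is what puts one person’s words in another’s mouth.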
Attackers can now create automated,
targeted content to increase the
probability that an individual or group
falls for a campaign. In this way, AI and
Machine Learning can be combined to
create massive chaos.
In general, adversaries are going to
use the best technology to accomplish
their goals, so if we think about nation-state actors attempting to manipulate
an election, using deepfake video to
manipulate an audience makes a lot
of sense. Adversaries will try to create
wedges and divides in society. Or a
cybercriminal can have a CEO make
what appears to be a compelling
statement that a company missed
earnings or that there’s a fatal flaw in a
product that’s going to require a massive
recall. Such a video can be distributed
to manipulate a stock price or enable
other financial crimes.
We predict that the ability of an untrained class of actors to create deepfakes will fuel an increase in the quantity of misinformation.