EDITOR'S QUESTION
BEATRIZ SANZ SÁIZ, GLOBAL CONSULTING DATA AND AI LEADER, EY
Generative AI is a disruptive technology. It's innovative and helpful, but also dangerous if it ends up in the wrong hands. CIOs are deploying AI at scale to find new solutions that help their organisations; meanwhile, threat actors are using the same technology to evade detection and commit cybercrime.
The open-source nature of AI has levelled the playing field and is driving a wave of cybercrime – the global cost of which EY has forecast to reach US$10.5 trillion by 2025. So, how should CIOs react now that the 'bad guys' also have such a powerful tool? And what strategies can be deployed to get ahead of the criminals?
Synthetic threats are very real
Deepfakes are a standout example of how GenAI is being used maliciously to devastating effect. The rapid improvement of AI-generated audio and video means media of all kinds can now be created and manipulated with minimal editing skill. Much of this technology is open-source, meaning it's evolving at a rate that is near-impossible for information and security officers to keep up with. In fact, Europol has estimated that 90% of online content may be synthetically generated by 2026.
Criminals are using AI deepfakes at scale to steal from organisations and their employees through spear-phishing, vishing and social engineering campaigns. Campaigns that impersonate or target prominent individuals are becoming far more successful with this technology, driving a rise in financial scams and commercial fraud. Alongside this, sophisticated deepfake recordings are capable of tricking verification tools, such as two-factor authentication or voice recognition, enabling threat actors to bypass the basic security controls most businesses have in place.
Improving infrastructure, educating people
Almost 80% of companies report that voice and video deepfakes now represent a significant threat, especially through the impersonation of high-level executives. The onus is on CIOs and security teams to adapt and to protect not only their business but also their people from hackers and fraudsters.
Upgrading the technology behind existing security systems certainly helps. Stronger computing power and more sophisticated security infrastructure will greatly improve the detection and response mechanisms needed to spot deepfake-enhanced phishing scams. AI-powered tools have become essential both in automating these complex processes and in creating a positive feedback loop that improves the tools over time. This enables security teams to predict threats in advance and to dynamically spot vulnerabilities before an exploit can take place.
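To make that idea concrete, here is a minimal sketch of how a security team might flag anomalous inbound calls against a baseline of normal behaviour, using an off-the-shelf anomaly detector in Python. The features and the baseline data are hypothetical placeholders, not a production design:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per inbound voice call: duration (seconds),
# age of the caller relationship (days), a spectral-flatness score
# (synthetic audio often sounds unnaturally 'flat') and an urgency
# score derived from the transcript. All values here are simulated.
baseline_calls = rng.normal(loc=[180, 400, 0.3, 0.2],
                            scale=[60, 120, 0.05, 0.1],
                            size=(500, 4))

# Learn what 'normal' looks like from historical, trusted calls.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_calls)

# Score a suspicious call: short, from a brand-new contact, with
# flat audio and an extremely urgent request.
suspect = [[45, 1, 0.8, 0.95]]
print(detector.decision_function(suspect))  # lower = more anomalous
print(detector.predict(suspect))            # -1 = flag for human review

In practice, flagged calls would feed a human review queue, and confirmed cases would be added back into the training data – the feedback loop described above.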
Upskilling employees to have a basic understanding of AI – and how to spot it – is essential in raising awareness of AI fraud. Meanwhile, it's important to get ahead of cybercrime by hiring people with AI and cybersecurity expertise to lead proof-of-concept projects and to deploy and train deep learning models that detect deepfake campaigns. Additionally, some responsibility falls on the shoulders of developers who build deepfake tools without considering how they're applied. Policymakers must work closely with AI experts to ensure these tools are properly controlled and regulated, and greater top-down governance is key to cultivating a healthy security environment.
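As a starting point for such a proof of concept, a small binary classifier can be trained over audio features labelled genuine or synthetic. The sketch below uses placeholder data purely for illustration; a real project would train on curated corpora such as the ASVspoof datasets and a considerably larger model:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data: 256 'clips', each reduced to 128 spectral features.
# In a real proof of concept these would come from labelled recordings.
X = torch.randn(256, 128)
y = torch.randint(0, 2, (256,)).float()  # 1 = synthetic, 0 = genuine

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

# At inference time, a sigmoid over the logit gives a 'likely deepfake'
# score that can gate incoming media before it reaches an employee.
print(torch.sigmoid(model(X[:1])).item())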