AI AND RANSOMWARE: CUTTING THROUGH THE HYPE
Rick Vanover, Vice President of Product Strategy at Veeam, explains that AI is transforming most digital industries, including cybercrime.
It might be the great paradox: AI. Everyone’s bored of hearing it, but can’t stop talking about it. It’s not going away, so we had better get used to it. AI is disrupting most digital industries and cybercrime is no exception.
However, cutting through the hype and getting to the facts is worth it. Much has been made of AI’s potential impact on the global ransomware threat, but how much does it really change the picture?
AI-cops and AI-robbers
While the future potential of AI, on cybercrime and society in general, is immense (and a little scary), it’s more helpful to focus on the here and now. Currently, AI is just another tool at threat actors’ disposal, but it is quite a significant one because it lowers the barrier to entry for criminals.
Using AI to assist with coding is already common among legitimate programmers. Even if it’s just reviewing broken code or answering specific questions faster than Google, AI will support people hacking systems just as much as those developing them. But while this might make ransomware gangs’ lives easier, it won’t make things any worse for security teams. The end result hasn’t changed; depending on who you ask, the AI-assisted product might even be worse.
However, the other current use cases are more consequential. AI algorithms can scan networks or environments to map architecture and endpoints and, crucially, spot vulnerabilities. Threat actors already do this manually, but AI will make it much easier and more effective. AI can also be used to automate information gathering for more targeted attacks. These tools can scrape the Internet (particularly social media) to collect as much information on a target as possible for phishing and social engineering.
This brings us to the last typical use of AI by cybercriminals. In a conversation where hype is plentiful, describing AI as ‘supporting phishing’ is probably underselling it. At its most basic, even the most readily available AI tools can be used to craft better phishing emails – bridging the language barrier that often makes such scams spottable. That’s another example of AI improving malicious activity that already exists, but the voice cloning (deepfakes) of specific people is something else entirely. When combined with automated information gathering on a target, we’re looking at the next generation of social engineering.
What it means for security
While cybercriminals having more tools at their disposal is never going to feel great, there are two things to bear in mind: one, security teams have access to these tools as well, and two, while AI is going to make attacks more sophisticated and effective, for now it isn’t introducing any brand-new or entirely novel threats. There’s no need to tear up the playbook.