Intelligent CISO Issue 85 | Page 42

EXPERT OPINION
Large language models have the unique ability to sift through vast amounts of data by analysing network traffic and user behaviour at a scale and speed that would be impossible for human analysts to replicate.
This is particularly effective at identifying new attack vectors which might otherwise go unnoticed until exploited by an attacker.
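The kind of behavioural analysis described above often boils down to spotting statistical outliers in traffic or activity data. As a minimal sketch (not any vendor's actual implementation), the hypothetical function below flags a new observation whose z-score against a historical baseline exceeds a chosen threshold:

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Return True if `value` deviates from the historical baseline by
    more than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A perfectly flat baseline: any deviation at all is suspicious.
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hourly request counts for one user -- a quiet baseline, then a spike
# of the kind an automated scan or credential-stuffing run might produce.
baseline = [102, 98, 105, 99, 101, 97, 100]
print(is_anomalous(baseline, 950))  # the spike stands out
print(is_anomalous(baseline, 103))  # normal variation does not
```

Real systems layer far richer models on top of this idea, but the principle is the same: establish a baseline, then surface deviations at a volume no human team could review by hand.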
AI also plays a critical role in vulnerability management. By analysing data from various sources, such as security scans, system logs and threat intelligence reports, AI can automatically prioritise security vulnerabilities based on potential risk.
It can simulate attack scenarios to assess their potential impact, recommending patches and fixes with a level of precision and speed unmatched by human teams.
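Risk-based prioritisation of this sort typically weighs severity against context, such as whether an exploit is publicly available and how critical the affected asset is. The sketch below illustrates the idea with a hypothetical scoring function; the field names, weights, and CVE identifiers are all placeholders for illustration, not a production scoring model:

```python
def priority_score(vuln):
    """Combine severity, exploit availability, and asset criticality
    into a single risk score (all weights are illustrative)."""
    score = vuln["cvss"] / 10.0                        # normalise CVSS to 0..1
    score *= 1.5 if vuln["exploit_public"] else 1.0    # boost known-exploited flaws
    score *= {"low": 0.5, "medium": 1.0, "high": 2.0}[vuln["asset_criticality"]]
    return round(score, 2)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_public": True,  "asset_criticality": "high"},
    {"id": "CVE-B", "cvss": 7.5, "exploit_public": False, "asset_criticality": "medium"},
    {"id": "CVE-C", "cvss": 9.1, "exploit_public": False, "asset_criticality": "low"},
]
ranked = sorted(vulns, key=priority_score, reverse=True)
print([v["id"] for v in ranked])
```

Note how the ranking departs from raw CVSS order: a slightly lower-scored flaw with a public exploit on a critical asset outranks a higher-scored one on a low-value system, which is exactly the contextual judgement the article describes AI automating.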
The dark side: How AI fuels cyberthreats
Despite its transformative potential, AI also poses significant risks. Just as it helps defenders, it can equally empower attackers, leading to more sophisticated cyberattacks. AI's ability to process social media profiles and personal information means that malicious attackers can create personalised phishing scams or convincing deepfake videos. The realism of these scams makes them far harder to detect than conventional phishing attempts.
In the cybersecurity landscape, we've seen an increase in new types of malware. Traditional malware development requires manual coding, but with AI, attackers can rapidly generate malicious code that mutates its form to evade detection by security systems. This adaptability makes it much harder for traditional security teams to keep up with evolving threats.
AI can also be used to automate attack strategies to exploit known vulnerabilities across multiple targets in a matter of minutes. Attackers can launch widespread campaigns with greater speed and efficiency, targeting critical infrastructure to significantly increase the risk to businesses.
Even as these threats become more sophisticated, traditional cybersecurity methods are struggling to keep pace. The speed at which attackers can create and deploy new types of malware strategies leaves businesses constantly playing catch-up. The result is an ever-escalating arms race between defenders and attackers, with each side leveraging increasingly advanced AI tools.
The risks of over-reliance and ethical considerations
As this technology has become more integrated into cybersecurity over the last two years, some businesses have started to over-rely on AI. While these systems' capabilities are impressive, they remain fallible.
AI models are only as good as the data they are trained on, and they can make mistakes or fail to recognise attack methods. Human oversight remains essential to ensure that AI's responses are in line with best practices and ethical standards.
Data privacy remains a significant concern when deploying AI in cybersecurity. While it can be used positively, for example, to develop anonymised data security models, the risks are substantial: training datasets may contain sensitive personal information, and if those datasets carry bias, that bias will be reflected in the model's outputs.
To mitigate these risks, organisations must implement strong data privacy and security guidelines, covering data encryption and access controls, and conduct regular security audits. Striking the balance between innovation and security can be difficult; the way forward is to embed security in the development process, adopt a risk-based approach, and foster a culture of security and continuous monitoring.
Navigating the balance: How to leverage AI safely
While the risks associated with AI are significant, they can be mitigated with a thoughtful approach.