BUILDING A CULTURE OF VIGILANCE THROUGH CYBERSECURITY AWARENESS
Email is essential for business, but it's also a prime target for cybercriminals, particularly as they increasingly leverage AI to craft convincing phishing attempts. Matt Cooke, Director of Cybersecurity Strategy at Proofpoint, tells Intelligent CISO how organisations can boost their defences, build AI risk awareness and foster a strong security culture that is tailored to specific user needs.
There is no doubt that email is a business-critical tool. It's what helps make first impressions, and long-lasting ones too. But it's also a primary target for cybercriminals.
In fact, email remains the number one threat vector, and it's getting increasingly difficult for people to differentiate a genuine email from a malicious one.
The role of AI in phishing attacks
One of the drivers behind this is the adoption of AI tools within the cybercriminal's arsenal. AI gives attackers the means to craft more believable emails designed to trick users. It also allows them to scale their attacks and localise them in different languages.
AI is helping threat actors craft compelling emails that recipients are more likely to believe are legitimate, written in a style that does not suggest foul play. The more believable this content is, the more likely a user is to engage, interact and click through to malicious links.
One example of a particularly lucrative email attack for cybercriminals is Business Email Compromise (BEC) which, according to Proofpoint's State of the Phish research, is benefiting from AI. The research highlights that attack volume grew in countries such as Japan (35% year-over-year increase), South Korea (+31%) and the UAE (+29%).
These countries may have previously seen fewer BEC attacks due to cultural or language barriers, but Generative AI allows attackers to create more convincing and personalised emails in multiple languages. Proofpoint detects an average of 66 million targeted BEC attacks every month.
AI also poses challenges around the loss of sensitive data. If, for example, an individual were to paste sales figures or personal information into a public AI platform, there is a possibility that this information could later be regurgitated and exposed to someone else.
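To make that risk concrete, a simple guardrail many organisations consider is a pre-submission check that scans a prompt for obviously sensitive content before it ever reaches a public GenAI service. The sketch below is purely illustrative and not a description of Proofpoint's tooling: the pattern names, regexes, thresholds and the flag_sensitive helper are all hypothetical assumptions.

```python
import re

# Illustrative patterns only: a real deployment would use an
# organisation's own data classification rules, not these examples.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarise Q3 sales for jane.doe@example.com, card 4111 1111 1111 1111"
    hits = flag_sensitive(prompt)
    if hits:
        print(f"Blocked: prompt contains {', '.join(hits)}")
    else:
        print("Prompt appears safe to send")
```

A check like this only catches the most obvious leaks; it is a complement to, not a substitute for, user education and formal data loss prevention controls.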
In 2024, 44% of UK CISOs surveyed by Proofpoint in our Voice of the CISO research believed that Generative AI poses a security risk to their organisation. The top three systems CISOs view as introducing risk to their organisations are: ChatGPT and other GenAI tools (40%), perimeter network devices (33%) and Slack, Teams, Zoom and other collaboration tools (31%).
However, only 26% of UK organisations educate their users on Generative AI safety.
Leveraging AI for enhanced protection
As cybercriminals increasingly use AI in their attacks, cyberdefenders must do the same.
One way to begin is for CISOs to build their own AI programmes to ensure staff understand the risks associated with it. The more awareness an individual