
Editor’s Question

Following a data leak of the ChatGPT app that compromised stored user conversations and payment information, an Italian watchdog immediately – but temporarily – banned the service in the country. The watchdog expressed concerns about whether ChatGPT had the right to store the data and, additionally, what it was doing to protect children from accidentally accessing age-inappropriate results.

A primary security implication of using ChatGPT, other AI chatbots or any application that asks for personal information involves your privacy. The concern is that malicious actors will exploit a vulnerability in the software or gain access to your data another way. After the ChatGPT breach, the Italian watchdog saw the data that was exposed and worried the company was not doing enough to educate users about what happens with the data they provide. The makers of ChatGPT were given the opportunity to have the ban lifted by publishing information on their website about how and why contributed data is used, gaining consent to use the data and placing age restrictions on the website.

Users sharing their information online – whether with an AI tool or on any other website – need to take their own precautions by considering what they’re sharing and where. Anytime you’re asked for information – whether by a website, a chatbot, friends, family, co-workers or your doctor – consider whether the recipient actually needs that information, whether this is the safest way to share it and how it will be stored. Just as you wouldn’t hand out your personal information to a stranger over the Internet, you should also consider what information you’re giving to an AI chatbot, especially a nosey one.

As more websites and apps utilise chatbots, there’s also the potential for abuse from bad actors posing as chatbots who can manipulate you into revealing information you wouldn’t normally. The more you educate yourself about spotting phishing attempts and malicious websites, the less likely you are to become a victim.
While specific concerns with AI will continue to be addressed as they arise, bad actors will attempt to use chatbots for social engineering and phishing. Some of the most common signs of a phishing email are poor grammar and spelling. By asking AI to write the email, bad actors can not only rectify these mistakes but also potentially include prompts to make the language more persuasive, as if an actual marketing professional had written it. The same technique can be used when creating written copy or user testimonials on a malicious website to make it appear more realistic.

DARREN GUCCIONE, CEO, KEEPER SECURITY