editor’s question
WHAT ARE THE SECURITY IMPLICATIONS
OF USING CHATGPT?
Countries across the globe are one by one banning the use of the latest tech phenomenon that is ChatGPT.
In light of recent ChatGPT concerns in the news, Richard Forrest, Legal Director at UK law firm Hayes Connor, has expressed major concern that a considerable proportion of the population lacks a proper understanding of how generative AI, such as ChatGPT, operates. This, he fears, could lead to the inadvertent disclosure of private information and, therefore, a breach of GDPR.
As such, he urges businesses to implement compliance measures to ensure employees in all sectors, including healthcare and education, remain compliant.
This comes after a recent investigation by Cyberhaven revealed that sensitive data makes up 11% of what employees copy and paste into ChatGPT. In one instance, the investigation detailed a medical practitioner who input private patient details into the chatbot, the repercussions of which are still unknown. Forrest says this raises serious GDPR compliance and confidentiality concerns.
Recent praise for the chatbot’s ability to assist business growth and efficiency has led to an increase in users across many sectors. However, concerns have arisen after a number of employees were found to be negligently submitting sensitive corporate data to the chatbot, as well as sensitive patient and client information.
As a result of these ongoing privacy fears, several large-scale companies, including JP Morgan, Amazon and Accenture, have since restricted the use of ChatGPT by employees.
Forrest weighs in on the matter: ‘ChatGPT and other similar Large Language Models (LLMs) are still very much in their infancy. This means we are in uncharted territory in terms of business compliance and regulations surrounding their usage.
‘The nature of LLMs, like ChatGPT, has sparked ongoing discussions about the integration and retrieval of data within these systems. If these services do not have appropriate data protection and security measures in place, then sensitive data could become unintentionally compromised.
‘The issue at hand is that a significant proportion of the population lacks a clear understanding of how LLMs function, which can result in the inadvertent submission of private information. What’s more, the interfaces themselves may not necessarily be GDPR-compliant. If company or client data becomes compromised due to its usage, current laws are blurred in terms of which party may be liable.
‘Businesses that use chatbots like ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and legal action. As such, usage as a workplace tool without proper training and regulatory measures is ill-advised.
‘The onus is on businesses to take action to ensure regulations are drawn up within their business and to educate employees on how AI chatbots integrate and retrieve data. It is also imperative that the UK engages in discussions on the development of a pro-innovation approach to AI regulation.’