TechRadar
Craig Hale

Lots of sensitive data is still being posted to ChatGPT


New research from Netskope claims employees are continuing to share sensitive company information with AI writing assistants and chatbots such as ChatGPT, despite the clear risk of leaks or breaches.

The research, which covers some 1.7 million users across 70 global organizations, found an average of 158 incidents of source code being posted to ChatGPT per 10,000 users each month, making source code the most significant exposure, ahead of other types of sensitive data.

While cases of regulated data (18 incidents per 10,000 users per month) and intellectual property (four incidents per 10,000 users per month) being posted to ChatGPT are much less common, it’s clear that many developers simply do not realize the damage that leaked source code can cause.

Be careful what you post on ChatGPT

Alongside these continued exposures, which create potential weak points for enterprises, Netskope also highlighted the boom in interest in artificial intelligence. Its figures point to 22.5% growth in GenAI app usage over the past two months, with large enterprises of more than 10,000 users running an average of five AI apps daily.

ChatGPT takes the lead, accounting for eight times as many daily active users as any other GenAI app. With each user submitting an average of six prompts daily, every employee has the potential to cause considerable damage to their employer.

Rounding out the top three generative AI apps in use by organizations globally are ChatGPT (84%), Grammarly (9.9%), and Bard (4.5%), with Bard experiencing healthy growth of 7.1% per week, compared with 1.6% per week for ChatGPT.

Many will argue that uploading source code or other sensitive information can be avoided, but Netskope’s Threat Research Director, Ray Canzanese, says such behavior is “inevitable.” Instead, Canzanese places the responsibility on organizations to implement controls around AI.

James Robinson, the company’s Deputy Chief Information Security Officer, added: “Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively.”

For admins and IT teams, the company suggests blocking access to unnecessary apps or those that pose a disproportionate risk, providing frequent user coaching, and adopting sufficiently modern data loss prevention (DLP) technologies.
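Netskope does not describe how its DLP tooling works, but a minimal sketch of the kind of outbound prompt check such technologies perform might look like the following Python. The patterns, function name, and sample prompts here are hypothetical illustrations, not Netskope's implementation:

```python
import re

# Hypothetical patterns a DLP filter might use to flag prompts that
# appear to contain source code or credentials before they reach a
# GenAI app. Real products use far more sophisticated classifiers.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(def|class|import|function|public\s+static)\b"),  # code keywords
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),           # private keys
    re.compile(r"\b(api[_-]?key|secret|password)\s*[:=]", re.IGNORECASE),  # credentials
]

def looks_sensitive(prompt: str) -> bool:
    """Return True if the prompt matches any pattern associated with
    source code or secrets."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    samples = [
        "Summarise this meeting for me.",
        "def connect(): api_key = 'sk-123'  # why does this fail?",
    ]
    for s in samples:
        print("BLOCK" if looks_sensitive(s) else "ALLOW", "->", s)
```

In practice, such pattern matching would sit alongside app-blocking policies and user coaching rather than replace them, since simple regexes inevitably miss some sensitive content and flag some harmless prompts.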
