TechRadar
Benedict Collins

People are using ChatGPT as a security guru – and these are the questions everyone is asking

  • ChatGPT is being asked some interesting security questions
  • Users are concerned about phishing, scams, and privacy
  • Personal information is being fed into the chatbot, putting users at risk

AI is fast becoming a personal advisor for many people, offering help with daily schedules, rewording those difficult emails, and even acting as a fellow enthusiast for niche hobbies.

While these uses are typically harmless, many people have begun using ChatGPT as a security guru, but they aren't doing so in a particularly secure way.

New research from NordVPN has uncovered some of the questions ChatGPT is asked about security – from dodging phishing attacks to wondering if a smart toaster could become a household threat.

Don’t feed ChatGPT your details

The top security question asked by ChatGPT users is “How can I recognize and avoid phishing scams?” – which is understandable, given that phishing is probably the most common cyber threat the average person will face.

The rest of the questions follow a similar trajectory, from insight into the best VPN, to tips on how best to secure personal information online. It's definitely refreshing to see AI being used as a force for good at a time when hackers are cracking AI tools to pump out malware.

It’s not all good news though, I’m afraid. NordVPN’s research also highlighted some of the most bizarre security questions people are asking ChatGPT, such as, “Can hackers steal my thoughts through my smartphone?”, and, “If I delete a virus by pressing the delete key, is my computer safe?”

Others voice concerns about hackers potentially hearing them whisper their password as they type it, or using ‘the cloud’ to snoop on their phone while it charges during a thunderstorm.

"While some questions are serious and insightful, others are hilariously bizarre — but they all reveal a troubling reality: Many people still misunderstand cybersecurity. This knowledge gap leaves them exposed to scams, identity theft, and social engineering. Worse, users unknowingly share personal data while seeking help,” says Marijus Briedis, CTO at NordVPN.

Users frequently ask AI models questions that include sensitive personal information, such as physical addresses, contact details, credentials, and banking information.

This is particularly dangerous, as most AI models store chat history and use it to train the model to respond better to future questions. The key issue is that hackers could potentially use carefully engineered prompts to extract that sensitive information from the AI and use it for all kinds of nefarious purposes.

“Why does this matter? Because what may seem like a harmless question can quickly turn into a real threat,” says Briedis. “Scammers can exploit the information users share — whether it’s an email address, login credentials, or payment details — to launch phishing attacks, hijack accounts, or commit financial fraud. A simple chat can end up compromising your entire digital identity.”
