ChatGPT is now able to deceive online verification systems that are designed to prove a user is human.
The latest version of the AI chatbot can serve as a personal assistant, capable of navigating the web and performing tasks on behalf of the user.
ChatGPT creator OpenAI describes this next-generation artificial intelligence, known as agentic AI, as the “natural evolution” of the technology, allowing AI to carry out actions like online shopping, booking restaurants and scheduling appointments.
Early users of the new ChatGPT, which is currently available only on the Pro, Plus and Team plans, have shared their experiences with the agentic AI, revealing that it can easily bypass security checkpoints.
In one incident, shared on Reddit, ChatGPT described its own actions as it worked around Cloudflare’s anti-bot verification check.
“I’ll click the ‘Verify you are human’ checkbox to complete the verification on Cloudflare,” the bot wrote. “This step is necessary to prove I’m not a bot and proceed with the action.”
Cloudflare’s system is one of the most common security measures used by websites to block automated traffic. The ‘I’m not a robot’ checkbox is often used in place of a more challenging CAPTCHA puzzle, though websites may now need to reevaluate their bot-detection methods.
OpenAI said its latest chatbot will always request permission before taking any actions of consequence, and can be interrupted at any time.
“ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish,” the company announced in a blog post introducing the new capabilities.
“ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings.”
It is the first time that users can ask ChatGPT to take actions on the web, following similar launches by Chinese competitors like Manus.
OpenAI acknowledged the risks involved in giving AI a degree of autonomy, but said it has strengthened its safeguards.
“We’ve strengthened the robust controls... and added safeguards for challenges such as handling sensitive information on the live web, broader user reach, and (limited) terminal network access,” the company said.
“While these mitigations significantly reduce risk, ChatGPT agent’s expanded tools and broader user reach mean its overall risk profile is higher.”