Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Windows Central
Windows Central
Technology
Kevin Okemwa

Microsoft Copilot's own default configuration exposed users to the first-ever "zero-click" AI attack, but there was no data breach

In this photo illustration, Microsoft Copilot AI logo is seen on a smartphone screen.

Security researchers from Aim Labs uncovered a critical attack dubbed 'EchoLeak' impacting Microsoft 365 Copilot. The vulnerability could potentially allow bad actors to gain unauthorized access to sensitive data from Microsoft 365 Copilot users without any interaction.

The security researchers presented their findings to Microsoft, prompting the tech giant to assign the vulnerability the identifier CVE-2025-32711. EchoLeak marks the first known zero-click attack on an AI agent (via Fortune).

The cybersecurity firm presented its findings to Microsoft earlier this year in January. The tech giant rated the vulnerability as critical, but it has since fixed the issue on sever-side in May.

Additionally, the tech giant indicated that no user action is required in the resolution of this issue, further indicating that there's no evidence of any real-world exploitation from bad actors.

This vulnerability represents a significant breakthrough in AI security research because it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever.

Aim Security co-founder and CTO, Adir Gruss

According to the researchers, the vulnerability constituted "an LLM scope violation," which allows bad actors can leverage an AI model to access sensitive data, including chat histories, OneDrive documents, Sharepoint content, Teams conversations, and more.

Perhaps more concerning, Gruss indicated that Microsoft Copilot's default configuration made most organizations more susceptible to malicious attacks before the tech giant fixed the issue. However, the executive indicated that evidence gathered suggested that no customers were impacted by the vulnerability.

According to a Microsoft spokesman on the matter:

“We appreciate Aim Labs for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted.”

To that end, Microsoft says it has updated its products to mitigate the issue. It has also integrated elaborate defense mechanisms to bolster Microsoft 365 Copilot's security.

It will be interestingly to see how Microsoft combats security threats threatening its AI tools, especially after former Microsoft security architect Michael Bargury demonstrated 15 different ways to breach Copilot's security guardrails.

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.