TechRadar
Craig Hale

Over half of UK businesses have no idea how fast they could stop AI in a crisis


  • The EU AI Act requires AI explainability and accountability
  • Only 38% of workers can accurately pinpoint who's accountable in their business
  • More than half (59%) aren't even sure how quickly they could shut down AI in a crisis

Despite rapid AI adoption, new research from ISACA suggests many businesses may be flying blind – more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.

Only around one in five (21%) say they'd feel confident stopping an AI system within 30 minutes, highlighting major safety gaps.

And it's not just shutting systems down that's a problem – fewer than half (42%) say they could explain an AI failure to leadership or regulators.

Are businesses blind about the risks of AI?

ISACA explained that the gaps aren't just concerning for business operations and reputation, but also from a regulatory standpoint: the EU AI Act requires AI explainability and accountability.

Part of the failure comes down to unclear accountability, with 20% of workers unsure who is responsible for AI failures. Poor visibility is also a contributing factor, with one in three organizations not requiring the use of AI at work to be disclosed, which ISACA says creates serious blind spots.

The report explains that businesses are currently treating this as a technical problem, when they should instead be addressing it as an organization-wide governance challenge. "Truly closing the gap can’t be done by process changes alone," Chief Global Strategy Officer Chris Dimitriadis wrote. "Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle."

Looking ahead, businesses are being urged to define accountability at the senior level and to start rolling out better visibility and auditing. Besides this, they must also build AI incident response into their strategies and factor it into their broader cybersecurity postures.

With only 38% of respondents identifying the board or an executive as accountable in the event of an AI incident, it's clear more needs to be done to disseminate information and processes throughout the workforce.

