International Business Times
Callum Turner

360 Advanced on the Rise of Accountable AI: Navigating ISO 42001 and HITRUST® AI

John Kadechka (Credit: Jonathan Fanning Studio Photography)

360 Advanced, a cybersecurity and compliance firm, recognizes that artificial intelligence is progressing from a phase of rapid innovation into one defined by accountability and structured oversight.

As AI capabilities integrate more deeply into enterprise operations, the conversation has expanded to include governance, risk alignment, and demonstrable assurance. Within this evolving environment, 360 Advanced works with organizations to clarify expectations, implement durable compliance frameworks, and define what responsible, compliant AI development entails in practice.

"AI adoption continues to grow across enterprise environments, finding its way into SaaS platforms, internal productivity tools, customer applications, and automated decision processes," says John Kadechka, Practice Director at 360 Advanced. "As this momentum builds, boards and executive teams are spending more time considering how they want to shape governance. The question becomes: how can organizations approach oversight, transparency, and control validation responsibly?"

This shift coincides with regulatory momentum. In the United States, the White House Executive Order on AI issued in October 2023 articulated guiding principles on safety, civil rights, consumer protection, and privacy. Meanwhile, a December 2025 Executive Order emphasized consolidation and coordination across federal agencies, signaling that a more structured enforcement environment is possible in the year ahead.

Across global markets, similar developments are underway, including the implementation of the EU AI Act, alongside sector-specific guidance in healthcare, finance, and defense. "Organizations bringing AI into their daily operations are seeing a more coordinated regulatory environment. Agencies are aligning their oversight priorities. We see governance guidance shifting toward clearer expectations that can be reviewed and validated as this continues," Kadechka states.

This transition has implications for enterprise strategy. 360 Advanced emphasizes that while policies provide internal direction, regulators, customers, insurers, and investors increasingly seek evidence of operationalized controls. "AI governance is moving into a phase where organizations are expected to show their work," Kadechka explains. "Documentation, testing, and accountability mechanisms are becoming part of routine due diligence."

Several market forces appear to be shaping this shift, according to 360 Advanced. The firm notes that AI agents now operate with varying levels of system access and delegated authority, which introduces new control considerations. It also observes that data protection concerns are widening to include training data sources, model outputs, and third-party API dependencies.

The firm adds that sector-specific oversight bodies are refining expectations for AI-enabled clinical tools, credit decisioning systems, and fraud detection models. At the same time, 360 Advanced points out that enterprise program management teams are circulating more AI-focused risk questionnaires, while insurers and investment partners are beginning to factor governance maturity into underwriting and capital discussions.


In this environment, 360 Advanced sees AI compliance certification emerging as one structured way to respond to growing operational complexity. The firm suggests that the line between principle-based guidance and certifiable management standards is becoming more defined, with formal frameworks offering a path to translate governance intent into clearer controls, measurable risk assessments, and ongoing improvement cycles.

One example, 360 Advanced notes, is ISO/IEC 42001, which offers organizations a structured way to govern AI responsibly. It outlines expectations for governance models, risk assessment methods, lifecycle controls, and continuous evaluation. For companies already familiar with standards like ISO/IEC 27001, the framework can provide a recognizable structure while extending it to AI-specific risks.

"We support organizations in implementing these requirements by helping them document AI inventories, define risk treatment plans, and establish monitoring processes that integrate with internal audit functions," Kadechka states. The firm can also support audit preparation, covering policies, governance roles, deployment practices, and monitoring evidence, to help ensure alignment with the standard's guidance.

Similarly, HITRUST offers two AI-focused assessment frameworks that address both the AI systems organizations build and their business use of AI agents. 360 Advanced notes that this is especially relevant to healthcare and other regulated sectors that manage sensitive data. The HITRUST AI Security Assessment introduces controls for algorithmic transparency, data governance, and model risk categorization. Its AI Risk Management Assessment focuses on governance, oversight, and enterprise-level risk management for AI use. 360 Advanced assists organizations in aligning to the appropriate assessment, preparing evidence, and supporting the maturity progression across e1, i1, and r2 reporting levels.

360 Advanced notes that across these frameworks, many organizations are facing rising expectations around AI audit readiness, from maintaining clear AI system inventories to documenting data provenance, managing third-party dependencies, monitoring models for drift, and preparing for AI-related incidents. Oversight often blends human-in-the-loop structures with automated monitoring to show that AI operations are being managed responsibly. 360 Advanced supports companies in developing these capabilities in ways that reflect their risk profile and operational maturity, while also helping them evaluate regulatory exposure, industry requirements, and deployment stage.

This approach can enable them to choose a compliance path that fits their environment, whether they are experimenting with early pilots or operating production-level AI systems within critical workflows. Kadechka says, "There isn't a single template that applies universally. The effective approach begins with understanding how AI interacts with enterprise systems and stakeholders, then aligning certification strategy and framework choice with that operational reality."

Overall, AI innovation continues to create new efficiencies and service models, and compliance frameworks offer a structured way to show responsible oversight, manage risk, and maintain stakeholder trust. Through its integrated cybersecurity and compliance capabilities, 360 Advanced helps organizations translate emerging AI governance standards into practical, certifiable programs that aim to balance innovation with accountability.

Disclaimer:

360 Advanced operates under an alternative practice structure in accordance with all applicable laws, regulations, standards, and codes of conduct of the AICPA.
