International Business Times
Adam Bent

Cognitive Explainable AI Challenges The "Bigger Is Better" Narrative In AI

The current trajectory of artificial intelligence is often framed as a race defined by scale, where larger models, greater computational power, and expanding datasets are seen as indicators of progress. According to research, artificial intelligence could contribute approximately $13 trillion to the global economy by 2030, an increase of about 16% in cumulative GDP, with some estimates placing the potential boost to global GDP as high as 26%.

According to Rob Sobhani, PhD, an early investor and board member of Z Advanced Computing, Inc. (ZAC), this narrative, while influential, is beginning to reveal structural limitations, particularly as costs escalate and questions around transparency remain unresolved. He observes that billions are being directed toward infrastructure-heavy approaches that depend on vast data aggregation, yet the outcomes do not always translate into systems that are understandable or inherently trustworthy.

This challenge is closely tied to what is commonly referred to as the black box problem in AI. Sobhani, who is an Adjunct Professor at Georgetown University, explains that many existing systems produce outputs without offering clarity on how decisions are made, and are prone to "hallucinations," which introduces friction in environments where accountability is essential. "In sectors such as healthcare, defense, and critical infrastructure, this lack of explainability can limit adoption, regardless of performance benchmarks," he says. "The issue is both technical and philosophical, raising broader questions about how intelligence should function within human systems."

ZAC operates within this context as an artificial intelligence company developing a cognitive, explainable AI platform known as CXAI. According to Sobhani, the platform is designed to reflect how humans learn concepts rather than relying on pattern recognition across massive datasets. This approach, he notes, enables systems to generalize from significantly smaller inputs while maintaining the ability to articulate reasoning. "We are delivering AI that learns the way a human brain learns, and that changes how decisions are understood, not just how they are produced," he says.

This distinction carries implications beyond performance. Sobhani points out that reducing dependence on large-scale data processing inherently lowers computational requirements, which in turn reduces energy consumption. As global attention turns toward the environmental footprint of data centers and AI infrastructure, he notes that this efficiency becomes increasingly relevant. He emphasizes that the lower carbon footprint associated with CXAI is not a separate objective but a direct outcome of its underlying architecture.

The operational flexibility of the platform is another dimension highlighted in his perspective. "Because CXAI does not require extensive cloud-based infrastructure, it can be deployed on a wide range of devices, including edge environments," Sobhani explains. "This opens pathways for broader accessibility, particularly in regions where advanced computing resources are limited. In essence, this is the democratization of AI." He references applications in healthcare where individuals could use image-based analysis tools to identify potential medical conditions, illustrating how distributed AI could extend services to underserved populations.

Validation of this approach, he notes, is reflected in engagements with government institutions, including a $25 million sole-source contract with the U.S. Air Force to develop cognitive explainable AI capabilities. While such partnerships indicate technical viability, Sobhani frames them more broadly as evidence that alternative models of AI development are being recognized within mission-critical environments. According to him, these settings prioritize reliability and clarity, reinforcing the importance of systems that can explain their outputs.

The company's origins also contribute to its positioning within the broader AI landscape. Sobhani references the backgrounds of ZAC's founders, brothers Bijan and Saied Tadayon, and Mahnaz Dean, immigrants of Iranian descent, framing their journey as one shaped by opportunity and scientific ambition. He suggests that this context informs the company's ethos, particularly its focus on building technology that aligns with human values. "Our foundation is rooted in truthfulness and responsibility, and that is reflected in how we approach AI," he explains.

This emphasis extends to how the role of AI is defined in relation to human work. Sobhani consistently frames the technology as an augmentation tool rather than a replacement mechanism. In his view, the objective is to enhance human capability while preserving the dignity associated with work. He highlights potential applications such as assistive mobility for individuals with disabilities and early detection systems for environmental risks, illustrating how AI can be integrated into everyday life without displacing its human context.

At a macro level, the direction of AI development is increasingly intertwined with economic and geopolitical considerations. Analysis on AI infrastructure suggests that AI is no longer viewed solely as a technological layer but as critical infrastructure, with rising energy demands and capital costs prompting calls to treat it as a national utility for economic resilience.

Sobhani notes that infrastructure costs continue to rise as nations invest heavily in computational capacity, creating a landscape where efficiency becomes a strategic advantage. He suggests that systems capable of delivering performance without requiring extensive resources may offer a more sustainable path forward, particularly as AI adoption expands across industries.

This perspective ultimately challenges the assumption that scale alone defines intelligence. Sobhani argues that effectiveness should be measured by how well systems integrate with human needs, how clearly they communicate decisions, and how responsibly they operate within society. "Bigger datasets do not automatically create better intelligence. What matters is whether the system understands, explains, and serves," he says.

As AI continues to evolve, the distinction between computational power and conceptual understanding is likely to shape the next phase of innovation. "The future will not be defined by how powerful our systems become, but by how clearly they can be understood and trusted," Sobhani says. "True progress lies in building technology that delivers performance with purpose, where explainability is not an afterthought, but the standard that shapes how trust is earned in the digital age. Technology must serve humanity."
