The Economic Times
Subhashis Banerjee and Debayan Gupta

AI won’t fix broken systems: India needs secure-by-design approach

The conversation about AI and cybersecurity has settled into a familiar pattern: AI systems, including large language model platforms, are accelerating attacks. The solution: we must deploy AI in defence.

This framing isn't wrong. But it is incomplete. It skips a prior question: whether the systems we are defending are designed to be secure. Computer science has had the tools to answer this for decades. Skipping the question risks building the next layer of vulnerability on top of the last.

India's digital infrastructure spans every aspect of our existence. The integrity of that infrastructure depends not just on how well we detect attacks but on how well we design against them in the first place. Sound information security begins with threat modelling - a systematic articulation of what needs protecting, from whom, under what assumptions and with what guarantees.

It requires clarity about threat actors and their capabilities, because a nation-state adversary, a ransomware group and a malicious insider demand different postures. It requires provable mitigation, not merely precautionary decoration or forceful proclamations of security. It also requires that trust points be explicitly identified and minimised. Every assumption that a user, a process, a certificate or an insider is trustworthy is a potential point of failure.
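
To make this concrete, here is a minimal sketch of one entry in such a threat model, expressed as data. The schema, the actor and the mitigation are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ThreatModelEntry:
    """One row of a threat model. Field names are illustrative, not a standard."""
    asset: str                     # what needs protecting
    threat_actor: str              # from whom
    capabilities: list[str]        # what that actor can plausibly do
    trust_assumptions: list[str]   # explicit trust points, to be minimised
    mitigation: str                # a traceable, ideally provable, control

# A hypothetical entry: every value below is invented for illustration.
entry = ThreatModelEntry(
    asset="citizen records database",
    threat_actor="malicious insider",
    capabilities=["valid credentials", "bulk read access"],
    trust_assumptions=["database administrator acts honestly"],
    mitigation="per-record access logging with independent audit",
)
print(entry.trust_assumptions)  # each listed assumption is a potential failure point
```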

These are foundational principles of security engineering, applicable with or without AI. The reality is that they are often sacrificed to deployment speed, procurement cycles, and the temptation to bolt security on after the fact rather than build it in from the start. Even where security design is taken seriously, a critical distinction is often collapsed: the difference between a design threat model, a use case threat model and an implementation threat model.

A system can be architecturally sound, and still be exploited through unintended workflows - legitimate functionality weaponised against its design assumptions. This is the use case layer, where business logic attacks, API abuse, rights violations and social engineering tend to live.
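
A hypothetical sketch of such a flaw, in Python: every line is individually correct, yet replaying a legitimate endpoint violates the unstated design assumption that a referral credit is redeemed once per customer:

```python
CREDIT = 50.0

def redeem_referral(balance: float) -> float:
    # Design assumption, nowhere enforced in code: called once per customer.
    return balance + CREDIT

# Legitimate functionality, unintended workflow: an attacker simply
# scripts the endpoint. No implementation bug is involved.
balance = 0.0
for _ in range(1000):
    balance = redeem_referral(balance)
print(balance)  # 50000.0 in credits from a line-by-line "correct" system
```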

More consequentially, a system can be correctly specified and catastrophically implemented. The gap between what a system is intended to do and what the deployed, running system actually does is where a disproportionate share of real-world breaches originate.
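
The off-by-one error is the classic miniature of this gap. In the hypothetical sketch below, the specification says copy at most 16 bytes; the running code disagrees by exactly one:

```python
BUF_SIZE = 16  # specification: copy at most BUF_SIZE bytes into the buffer

def copy_message(msg: bytes) -> bytearray:
    buf = bytearray(BUF_SIZE)
    for i in range(len(msg)):
        if i <= BUF_SIZE:      # bug: admits index 16; the spec requires i < BUF_SIZE
            buf[i] = msg[i]    # raises IndexError here in Python; in C, the same
                               # mistake is a silent buffer overflow
    return buf
```

The design document is right and the deployed code is wrong, and nothing in the architecture diagram will say so.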

These three layers - design, use case, implementation - are analytically distinct. Verification at one provides no guarantee at another. Treating them as a single problem produces neither clarity nor effective defence. This is where the AI conversation becomes productive. Not in the familiar framing of AI-enabled attack vs AI-enabled defence, but in a more foundational application: using automated methods to verify that systems exhibit the security properties their designs intend.

Formal verification - mathematical proof that an implementation satisfies its specification - has been successfully applied to critical components in aerospace, medical devices and secure operating systems. It remains demanding. But for systems above a defined criticality threshold, the argument for it is proportionate and pressing. Static analysis, fuzzing and model checking offer complementary approaches, each capable of surfacing implementation-level failures that human review will inevitably miss at scale.
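
Property-based testing, a lightweight relative of fuzzing, shows what mechanical search buys. The sketch below uses the Hypothesis library for Python; the wire format and its truncation bug are invented for illustration:

```python
from hypothesis import given, strategies as st

def encode(msg: bytes) -> bytes:
    # Hypothetical wire format: a 4-bit length field, then the payload.
    return bytes([len(msg) & 0x0F]) + msg   # bug: lengths >= 16 silently truncate

def decode(frame: bytes) -> bytes:
    return frame[1 : 1 + frame[0]]

# Property: decoding an encoded message returns the original message.
# Hypothesis searches the input space for a counterexample and shrinks it
# to a minimal failing case (here, any 16-byte message).
@given(st.binary(max_size=64))
def test_round_trip(msg: bytes):
    assert decode(encode(msg)) == msg
```

Run under pytest, the test fails with an automatically minimised counterexample: exactly the kind of implementation-level failure that eyeballing the code tends to miss.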

Emerging AI-assisted verification tools address a bottleneck: the scarcity of human expertise in formal methods. This is AI applied not to the arms race of attack and defence, but to the harder problem of building systems that are verifiably correct before they are deployed.

India's policy discourse on cybersecurity has focused heavily on incident response and threat intelligence-sharing. These matter. But they are downstream of a more fundamental requirement: that systems deployed at scale are designed and verified to a standard that makes their security properties demonstrable rather than assumed.

This means mandating layered threat models as a condition of deployment for critical infrastructure. Not design documentation as a compliance artefact, but live, maintained models reflecting actual threat actor profiles and traceable mitigations. It means requiring implementation-level verification evidence for systems above a defined risk threshold. And it means building national capability in cryptography and formal and automated verification as a strategic asset.

Regulatory frameworks that drive genuine security uplift, rather than checkbox compliance, would do more structural good than most reactive measures currently under discussion.

AI is reshaping the threat landscape. But speed isn't the core problem. The real failure is our persistent lack of design discipline to make attacks harder in the first place. Machine-speed threats don't just demand faster detection but also systems built from the outset to be provably harder to break. The tools exist. The principles are clear. What's missing is the will to apply them before the next breach makes the case for us.

Banerjee is professor, and Gupta is assistant professor, computer science, Ashoka University
