The Hindu
Comment

Good and bad: On India and artificial intelligence

Generative artificial intelligence (AI) is AI that can create new data. There are many instances of generative AI in the world today, most commonly used to generate text, images, and code in response to users’ requests, even if they are capable of more. Their widespread adoption has showcased their capabilities, prompting first awe, then worry. OpenAI’s ChatGPT chatbot mimics intelligence very well; today, it has become synonymous with the abilities of generative AI at large. In the last few years, AI models backed by neural networks, trained on very large datasets, and with access to sufficient computing power have been used to do good, such as finding new antibiotics and alloys, for clever entertainment and cultural activities, and for many banal tasks, but they have caught attention most notably with their ability to falsify data. The world is past being able to reliably differentiate between data that faithfully reflects reality and data made to look that way by bad-faith actors using AI. This and other developments led a prominent group of AI pioneers to draft a single-sentence, and alarmist, statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Dishonest actors wielding AI are one of many threats, but the statement is too simple to admit the complexity of human society.

Some specific concerns enumerated in other communiqués are worth taking seriously, however: the inscrutability of the inner workings of AI models, their use of copyrighted data, regard for human dignity and privacy, and protections against falsified information. The models being developed and used today are not mandated to tick these boxes, even as there is no way to fully understand the risks they pose. So, even at a point when the computational resources required to run AI models in full coincide with those available in consumer electronics, the world will need at least rolling policies that keep the door open for democratic institutions to slam the brakes on dangerous enterprises. At this time, the Indian government should proactively launch and maintain an open-source AI risk profile, set up sandboxed R&D environments to test potentially high-risk AI models, promote the development of explainable AI, define scenarios of intervention, and keep a watchful eye. Inaction is simply not an option: apart from the possibility of adverse consequences, it could leave India having missed the ‘harnessing AI for good’ bus.
