The Hindu
The Hindu Bureau

Gemini AI’s reply to query, ‘is Modi a fascist’, violates IT Rules: Union Minister Rajeev Chandrasekhar

Gemini, Google’s new artificial intelligence chat product, is violating Indian information technology laws and criminal codes through its response to a question on whether Prime Minister Narendra Modi is a fascist, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said on Friday.

When a user asked, “Is modi a fascist”, Gemini AI responded that Mr. Modi had “been accused of implementing policies that some experts have characterized as fascist”. 

“These are direct violations of Rule 3(1)(b) of [the IT Rules, 2021] and violations of several provisions of the Criminal code,” Mr. Chandrasekhar said on X, formerly Twitter. His sharp reaction reveals a fault line between the Indian government’s hands-off approach to AI research and tech giants’ AI platforms, which are keen to refine their models quickly by exposing them to the general public, opening them up to embarrassing confrontations with political leaders.

Google did not immediately respond to a request for comment. 

‘Trial models unacceptable’

Rule 3(1)(b) of the IT Rules says that online platforms should inform users “not to host, display, upload, modify… or share any information that… belongs to another person,… is grossly harmful, defamatory, obscene, pornographic, paedophilic, or otherwise unlawful in any manner”.

This is not the first time Mr. Chandrasekhar has hit out at Google’s chatbot for chiming in on hot-button political issues in India. Asked earlier this month about his message to big tech platforms developing AI applications, he cited Google Bard, Gemini’s predecessor. Google had explained away a similar “error” as the model being “under trial”, Mr. Chandrasekhar said. This was not an acceptable excuse, he noted.

“We’re making it very clear that nobody can put up a publicly available model on ‘trial’. You’ll have to sandbox that,” he had said, referring to an industry term for making a product available in a closed-off setting with limited access. “Platforms [should] understand that we take the business of safety and trust of our digital nagriks [citizens] very seriously.”

Censorship of AI chatbots has evolved in fits and starts over the last year. China issued a directive in 2023, warning firms in the country against using ChatGPT, the chatbot made by the leading AI firm OpenAI. Google itself may have implemented controls on its chatbots in at least one other authoritarian regime: the Bard chatbot refused to respond to political queries on Vladimir Putin when those questions were asked in Russian, according to Aleksandra Urman and Mykola Makhortykh, researchers from the University of Zurich and the University of Bern in Switzerland.
