The Hindu
Comment
John Xavier

Should generative Artificial Intelligence be regulated?

Generative Artificial Intelligence (AI) is like the proverbial genie out of the bottle. In less than a year, chatbots like ChatGPT, Bard, Claude, and Pi have shown what gen AI-powered applications can do. These tools have also revealed their vulnerabilities, which has pushed policymakers and scientists to think deeply about these new systems. Should generative AI be regulated? Arul George Scaria and Trisha Ray discuss the question in a conversation moderated by John Xavier. Edited excerpts:

What is the legal framework on which generative AI rests, and who owns content?

Arul George Scaria: This is an issue being discussed in jurisdictions across the globe, and different jurisdictions may eventually take different positions on it. So, let’s start from the jurisdiction that has the most clarity on it, which is the U.S. If you look at the practices of the U.S. Copyright Office, as well as the approach taken by one of the U.S. courts in a recent decision, only human beings can own copyright. This means that most of the output generated by AI tools today is outside copyright protection. There is some noise around the need for copyright protection to be given to companies involved in generative AI. But the position the U.S. Copyright Office has taken is that there will be no copyright over these [AI-developed] works when they are not authored by a human.


This is in contrast with India’s position. A couple of months ago, an intellectual property lawyer in India filed a copyright registration application for a painting. The initial application, claiming that the painting was generated by AI alone, was rejected. Subsequently, when he filed it as a jointly authored work, the copyright office accepted the application. This is a bizarre situation because we have not had any in-depth deliberations on whether AI-generated works are subject to copyright protection. So, the copyright office was jumping the gun when it granted joint authorship to a work generated by AI. And when the matter became a controversy, it issued a withdrawal notice to the human co-author. But when I checked the copyright office’s website, it showed that the registration had not yet been revoked. This is a problematic situation. These two jurisdictions illustrate the kind of complexity in this area.

Trisha Ray: The U.S. is quite far ahead, at least in starting to think about different approaches to how AI would interact with existing copyright law. The U.S. Copyright Office’s guidance on generative AI only recognises copyright for works created by people. But AI is a little different, so we can see it in different ways. One is where I’m just giving a very basic prompt to a generative AI model. For example, ‘write me a 300-word essay on copyright and generative AI’. Here, the AI is doing most of the ‘creative labour’. But when I, as a prompt engineer, give more detailed inputs and transform what the model has produced, I can arguably apply for copyright. So, it’s still an evolving debate and there is no clear ‘yes’ or ‘no’ answer. Generative AI is new in the public consciousness, though not new as a technology. We’re still in the first few years of this debate entering legal and policy circles, and we are likely to see a more nuanced interpretation over time.


How do you see the European Union’s AI Act in the context of what is happening in the U.S. and India?

Trisha Ray: In general, the way the EU has been regulating emerging tech and AI has been very focused on protecting individuals against large platforms and companies that dominate the market. I expect that to be an aspect taken into account in the EU’s AI Act as well. Another legal tool, the EU Digital Markets Act, is designed to level the playing field by imposing interoperability requirements on the so-called ‘gatekeeper’ platforms. In the current landscape, when we think about who is building and investing in large language models and generative AI models, it is heavily concentrated among large entities. ChatGPT is backed by Microsoft; Llama is backed by Meta. This is certainly one challenge that the EU AI Act might look into.

Arul George Scaria: I would like to highlight two points from the EU AI Act. One, the transparency-related obligations it is trying to bring in for generative AI. For example, if something is generated through generative AI tools, it needs to be tagged as material generated by an AI tool. That’s important. Two, the suggestion to provide at least a short summary of the training material used, which is important from a copyright perspective. Whether all this will be successful, only time will tell.


The EU is taking a risk-based approach wherein they are prohibiting certain kinds of practices and suggesting ex-ante assessments for certain others. With respect to limited-risk ones, they are bringing in transparency requirements. This kind of graded approach towards risk is important in the current context. The EU is taking bold initiatives and initiating discussions at the global level, which is remarkable.

Could you contrast the EU’s graded approach with the U.S.’s legal framework?

Arul George Scaria: The U.S. is taking a far more relaxed approach. I don’t know whether it’s because they’re underestimating the risk involved or because of their general outlook towards regulation. In the specific context of generative AI, I think we are underestimating the diverse risks. For example, in the education sector, you will notice that there is no control on how generative AI tools are used by students. Are there any age restrictions? Content restrictions? And where platforms do have age restrictions, are they enforced? The answer is no. Also, there is hardly any awareness initiative on the potential risks of using generative AI tools in education. To me, these tools can have severe long-term negative effects on the critical thinking and creative capacities of students. And we are working without any kind of guardrails. So, we should at least initiate a discussion on a risk-based approach. Maybe we should develop our own indigenous approach.


Trisha Ray: To add to the risks, generative AI is compounding or can compound some existing online threats like the use of deepfakes for disinformation campaigns. This can include simple things like using ChatGPT to make phishing emails sound convincing. There are multiple ways in which cheaper and more accessible generative AI models can compound issues that we’re still struggling to regulate, especially in cybersecurity and online harms.

Arul George Scaria: And these can threaten the basic foundations of our democracy. India is going to have a national election in 2024. Are we even discussing what the impact of these generative AI tools will be on fair and transparent elections? There are hardly any discussions.

How do we approach AI through an Indian legal lens?

Arul George Scaria: Constitutional law provides certain safeguards against discrimination. But to address this specific issue, we require two things. One, a comprehensive regulatory framework, by which I mean both horizontal regulations that would be applicable across sectors and vertical regulations that are sector-specific. Two, we need more clarity on data protection. If you look at the Digital Personal Data Protection (DPDP) Act, 2023, you will notice that it does not apply to any personal data that was made publicly available by the user to whom the data relates. This, in effect, legitimises all the scraping that was done by these AI companies. These are the areas where we need a more nuanced approach.


Trisha Ray: The DPDP Act went through many lives before it was finally passed. It started as something that was supposed to protect individuals from data collection by both the private sector and the government. It has since taken on a new life as something more focused on generating economic value. Though it does hold private entities a little more accountable, individual rights have weakened considerably. Now, the expectation is that the proposed Digital India Act is going to fill some of the gaps left by the DPDP Act. At least that’s what we’ve been hearing from the IT Ministry. We’ll have to see what shape it takes. It would be more advisable to have a leaner regulation rather than over-regulation.


What happens when companies say that explaining their language models will lead to trade secret exposure?

Arul George Scaria: They rely on the trade secrets regime to guard against disclosure. They are scraping data from across the globe, but they don’t want to show the details of the training data or of their models. But when there is enormous social harm, I don’t see any reason why we shouldn’t force them to disclose it. They would try to argue that it is a trade secret, but during the pandemic we saw discussions on whether there should be a kind of compulsory licensing against trade secrets. I would say that maybe this is one of those instances where a compulsory licensing-like regime is a must, in view of the broader social consequences.

Arul George Scaria is an Associate Professor of Law at the National Law School of India; Trisha Ray is a resident fellow at the Atlantic Council’s GeoTech Centre
