TechRadar
Rosanne Kincaid-Smith

Google’s AI paywall and the ethics of access


Reports that Google is considering a paywall for some of its AI-powered search features have caused a stir across the industry. From a commercial perspective, the rationale is understandable: unlimited access has wider implications, and it reduces the opportunity to curate information. AI searches are also comparatively energy-intensive and require specific infrastructure, an increasing focus for tech brands. But the negative outcomes of introducing barriers to access may outweigh the positives. Implementing paywalls risks reducing equal access to information, tools, and resources for everyone, regardless of socioeconomic background, sparking a wider ethical dilemma that should be considered before action is taken.

The digital divide

Google has long been a valuable contributor of accessible tech, including a search engine used around the world. Given its self-proclaimed mission to make the world's information "useful and accessible", it would be hard to imagine the company charging for its content. But other tech giants, too, may look to introduce charges for AI-generated content in future. The question is: should information of this nature come with a price tag? The debate comes at a time of economic volatility, with inflation and cost-of-living crises affecting many. With some people struggling even to afford broadband access, it's unlikely that the majority of those interested in premium AI tech would be able to pay a subscription fee. If introduced, paywalls could widen the digital divide, perhaps even fueling skepticism or unease around AI and its role in society.

Arguably, it is the role of tech giants to ensure that advancements in technology promote inclusivity rather than create exclusivity. Choosing not to democratise access to AI could mean limiting opportunities for learning and education, preventing innovators from powering progress in key areas – from healthcare tech to environmental research – and stalling the development of solutions that could help overcome today's biggest challenges.

Roles and responsibilities

The ethical management of AI is undoubtedly one of the most pressing issues faced by those in the space. The same can be said of regulation. And though ethics and regulation are separate topics, each presenting its own unique challenges, there are similarities in how they could be approached. For instance, when it comes to implementing appropriate regulation around AI, government bodies are looking to one another to enact positive policies, but there is little evidence that they have the level of understanding necessary to do so. The regulations that will power the next era of AI must be placed in the hands of both technical and societal bodies that understand its impact.

Managing AI – in terms of both regulation and ethics – is the direct responsibility of the companies that produce high-compute hardware. These are the businesses that hold the key to generative AI, today and in the future. It's essential that they vet those who wish to access its capabilities, assessing their credentials, their intentions, and – ultimately – how these could impact employees and consumers.

An ethical ecosystem

From data privacy violations to the distribution of harmful content, there are many risks associated with the improper management of AI. It’s no wonder that tech giants are worried about doing the right thing, at a time when there is still much to be learned about how AI might evolve over the years to come.

Moderation has been a cause for concern for many companies, some of which have come under sharp scrutiny for the information output by their own AI products. Nonetheless, these businesses are still well placed to achieve ethical implementation, because much of the content we already see from online searches is moderated. And though work must be done to enhance the moderation of AI search results, ethical oversight roles and AI algorithms can be leveraged to power progress in this space.

Those in the tech arena who understand AI and its implications are best placed to uncover what is still needed to build an ethical ecosystem that serves and safeguards those who use it. This can be achieved without hindering innovation or hampering accessibility. And it can certainly be achieved without funding from a paywall.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
