The Guardian - AU
Technology
Josh Taylor

Australia is looking to regulate AI – what might it be used for and what could go wrong?

The Australian government has launched a consultation paper on the responsible and safe use of generative AI such as ChatGPT by companies such as OpenAI. Photograph: salarko/Alamy

The Australian government is looking to regulate artificial intelligence applications, but which uses are concerning and what are the fears if it goes unregulated?

On Thursday, the industry and science minister, Ed Husic, released a consultation paper on measures that can be put in place to ensure AI is used responsibly and safely in Australia.

Husic noted that since the release of generative AI applications such as ChatGPT, there has been a “growing sense” that the technology is developing at an accelerated pace and represents a big leap forward.

“People want to think about whether or not that technology and the risks that might be presented have been thought through and responded to in a way that gives people assurance and comfort about what is going on around them,” he said.

“Ultimately, what we want is modern laws for modern technology, and that is what we have been working on.”

What types of AI are they concerned about?

Generative AI – that is, AI built on large datasets that generates text, images, audio and code in response to prompts – underpins much of the public debate around the future of AI.

The applications using generative AI include large language models (LLMs), such as ChatGPT, that generate text, and multimodal foundation models (MfMs) that can output text, audio or images.

Applications that allow AI to make decisions, called automated decision making, are also within the scope of the review.

What are the fears?

Fake images, misinformation and disinformation are at the top of the pile of concerns.

The paper says there are fears generative AI could be used to create deepfakes – fake images, video or audio that people confuse for real – that could influence democratic processes or “cause other deceit”.

So far this has played out mostly innocently – an AI-generated image of the Pope in a Balenciaga jacket is the most cited example – but last month an AI-generated image of an explosion next to the Pentagon in the United States circulated widely on social media before being debunked.

There is also concern about what are termed “hallucinations” from generative AI, where the output cites sources, information or quotes that do not exist. Some generative AI firms are trying to prevent this by providing links to sources in generated text.
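To make that concrete, here is a minimal, hypothetical sketch – not from the consultation paper or any firm’s actual system – of one simple guard against hallucinated citations: extract any URLs from generated text and check that they resolve. The function name and sample text are invented for illustration, and the check catches only fabricated links, not fabricated quotes attributed to real pages.

```python
# Illustrative sketch only: flag generated citations whose URLs do not resolve.
import re

import requests

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def check_cited_urls(generated_text: str) -> dict:
    """Map each URL cited in the text to whether it resolved (HTTP status < 400)."""
    results = {}
    for url in URL_PATTERN.findall(generated_text):
        try:
            response = requests.head(url, timeout=5, allow_redirects=True)
            results[url] = response.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# A fabricated source should fail the check.
sample = "As one study found (https://example.com/made-up-study-2023), the effect is large."
print(check_cited_urls(sample))
```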

There is also a major fear that in areas where AI makes decisions, algorithmic bias could lead to bad outcomes. Where the datasets used to train an AI are not comprehensive, its decisions can discriminate against minority groups or, for example, prioritise male candidates over female candidates in recruitment.
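As a toy illustration of that mechanism – hypothetical, and not drawn from the paper – the sketch below trains a simple model on invented recruitment data in which men were historically hired regardless of skill. The model then reproduces that bias for two equally skilled new candidates.

```python
# Illustrative sketch only: a model trained on biased hiring data inherits the bias.
from sklearn.linear_model import LogisticRegression

# Features: [is_male, skill_score]; label 1 means the candidate was hired.
X_train = [
    [1, 0.9], [1, 0.5], [1, 0.3], [1, 0.2],  # men hired at every skill level
    [0, 0.9], [0, 0.8], [0, 0.4], [0, 0.3],  # only the top-skilled woman hired
]
y_train = [1, 1, 1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates with identical skill, differing only by recorded gender.
print(model.predict_proba([[1, 0.6]])[0][1])  # man: high predicted hire probability
print(model.predict_proba([[0, 0.6]])[0][1])  # woman: markedly lower
```

The point is not this particular model but that nothing in training flags the skew: the bias only becomes visible when predictions for comparable candidates are compared side by side.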

How can we know if the AI is going wrong?

The paper suggests the best way to anticipate how an AI might respond is for its developers to be as transparent as possible about how it works, including by providing full details of the dataset the AI was trained on.

Will new laws be needed?

The Australian government acknowledges in the paper that many of the risks associated with AI are already covered by existing regulation, including privacy law, Australian consumer law, online safety law, competition law, copyright law and discrimination law. The paper suggests any changes should focus on closing gaps, once regulators have determined that gaps exist in their existing powers.

For example, the Office of the Australian Information Commissioner had already used its powers under the Privacy Act to take action against Clearview AI for using people’s photos scraped from social media without permission.

The Australian Competition and Consumer Commission (ACCC) also won a lawsuit against the travel booking site Trivago under existing Australian consumer law over misleading hotel booking results that were generated by an algorithm.

Is it all bad news?

While much of the current discussion around AI is geared towards its dangers, the paper does recognise the technology will also bring benefits for society. The Productivity Commission has said AI will be one of the technologies helping to drive productivity growth in Australia. The paper states that AI will also be used by hospitals to consolidate large amounts of patient data and analyse medical images, and says it can be used to optimise engineering designs and cut the cost of providing legal services.
