The Guardian - AU
World
Raphael Rashid in Seoul

South Korea’s ‘world-first’ AI laws face pushback amid bid to become leading tech power

South Korea has launched what it calls ‘world-first’ laws aimed at regulating artificial intelligence. Photograph: Jung Yeon-Je/AFP/Getty Images

South Korea has launched what has been billed as the most comprehensive set of AI laws anywhere in the world, a framework that could serve as a model for other countries, but the new legislation has already encountered pushback.

The laws, which will force companies to label AI-generated content, have been criticised by local tech startups, which say they go too far, and civil society groups, which say they don’t go far enough.

The AI basic act, which took effect on Thursday last week, comes amid growing global unease over artificially created media and automated decision-making, as governments struggle to keep pace with rapidly advancing technologies.

Under the act, companies providing AI services must:

  • Label AI-generated content: invisible digital watermarks for clearly artificial outputs such as cartoons or artwork, and visible labels for realistic deepfakes.

  • Conduct risk assessments and document how decisions are made for “high-impact AI”, including systems used for medical diagnosis, hiring and loan approvals. If a human makes the final decision, the system may fall outside the category.

  • Submit safety reports for extremely powerful AI models, though the threshold is set so high that government officials acknowledge no models worldwide currently meet it.

Companies that violate the rules face fines of up to 30m won (£15,000), but the government has promised a grace period of at least a year before penalties are imposed.

The legislation is being billed as the “world’s first” to be fully enforced by a country, and central to South Korea’s ambition to become one of the world’s three leading AI powers alongside the US and China. Government officials maintain the law is 80-90% focused on promoting industry rather than restricting it.

Alice Oh, a computer science professor at the Korea Advanced Institute of Science and Technology (KAIST), said that while the law was not perfect, it was intended to evolve without stifling innovation. However, a survey in December by the Startup Alliance found that 98% of AI startups were unprepared for compliance. The alliance’s co-head, Lim Jung-wook, said frustration was widespread. “There’s a bit of resentment,” he said. “Why do we have to be the first to do this?”

Companies must self-determine whether their systems qualify as high-impact AI, a process critics say is lengthy and creates uncertainty.

They also warn of a competitive imbalance: all Korean companies face regulation regardless of size, while foreign firms must comply only if they meet certain thresholds, as giants such as Google and OpenAI do.

The push for regulation has unfolded against a uniquely charged domestic backdrop that has left civil society groups worried the legislation does not go far enough.

South Korea accounts for 53% of all global deepfake pornography victims, according to a 2023 report by Security Hero, a US-based identity protection firm. In August 2024, an investigation exposed massive networks of Telegram chatrooms creating and distributing AI-generated sexual imagery of women and girls, foreshadowing the scandal that would later erupt around Elon Musk’s Grok chatbot.

The law’s origins, however, predate this crisis, with the first AI-related bill submitted to parliament in July 2020. It stalled repeatedly in part due to provisions that were accused of prioritising industry interests over citizen protection.

Civil society groups maintain that the new legislation provides limited protection for people harmed by AI systems.

Four organisations, including Minbyun, a collective of human rights lawyers, issued a joint statement the day after the law took effect, arguing it contained almost no provisions to protect citizens from AI risks.

The groups noted that while the law stipulated protection for “users”, those users were hospitals, financial companies and public institutions that use AI systems, not people affected by AI. The law established no prohibited AI systems, they argued, and exemptions for “human involvement” created significant loopholes.

The country’s human rights commission has criticised the enforcement decree for lacking clear definitions of high-impact AI, noting that those most likely to suffer rights violations remain in regulatory blind spots.

In a statement, the ministry of science and ICT said it expected the law to “remove legal uncertainty” and build “a healthy and safe domestic AI ecosystem”, adding that it would continue to clarify the rules through revised guidelines.

Experts said South Korea had deliberately chosen a different path from other jurisdictions.

Unlike the EU’s strict risk-based regulatory model, the US and UK’s largely sector-specific, market-driven approaches, or China’s combination of state-led industrial policy and detailed service-specific regulation, South Korea has opted for a more flexible, principles-based framework, said Melissa Hyesun Yoon, a law professor at Hanyang University who specialises in AI governance.

That approach is centred on what Yoon describes as “trust-based promotion and regulation”.

“Korea’s framework will serve as a useful reference point in global AI governance discussions,” she said.
