The Conversation
Antonio Aloisi, Associate Professor of European and Comparative Labour Law, IE University

We’re criticising GDPR for all the wrong reasons


“Simplify”, “Streamline”, “Scale back”. While EU communiqués often find creative ways to avoid uttering the word “deregulation”, this new European Commission is all about boosting the bloc’s competitiveness by “cutting red tape”. The intention to stimulate the continent’s economy might be laudable, but there is a real risk of throwing the baby out with the bathwater.

The Draghi Report, presented in September 2024, laid the foundation for a shake-up of one of the EU’s crown jewels in digital regulation – the General Data Protection Regulation (GDPR). According to the report, certain regulations present “overlaps and inconsistencies”, leading to fragmentation.

Draghi pinpointed the GDPR as a particular source of headaches, thanks largely to its complexity, burdensome national implementation, inconsistent local enforcement, and disproportionately high compliance costs for small and medium enterprises compared to larger corporations. Now the whispers are over: the GDPR seems headed for the chop, much like sustainability reporting rules before it.

Yet the world has changed dramatically in recent months, meaning many of Draghi’s proposals are tailor-made for a context that no longer exists. Additionally, the US’ disastrous DOGE experiment offers a stark cautionary tale of deregulation leading to chaos rather than efficiency. Legal institutions, after all, are complex systems designed for the critical purpose of protecting people’s rights.


Read more: A need for chaos powers some Americans' support for Elon Musk taking a chainsaw to the US government


Regulation is not the problem

Robust rules are essential to guaranteeing clarity and transparency. Especially in the digital sector, setting clear guardrails is vital to containing both the excesses of tech oligarchs and the erraticism of their satellites-in-chief. Far from slashing red tape, the EU would be wise to take this opportunity to refocus its energies on delivering and enforcing better regulations.

EU regulations are often cast as stifling the continent’s innovation, but EU trade law professor Anu Bradford argues that this narrative is, at best, oversimplified. Europe’s sluggish dynamism can instead be attributed to a wide range of structural issues, including a fragmented digital single market, underdeveloped capital markets, and harsh bankruptcy laws that punish failure rather than encourage experimentation.

Looking beyond these economic factors, European cultural attitudes tend to be more risk-averse, and the bloc lacks the proactive immigration policies needed to attract international tech talent.

Experts have also clarified that if fragmentation truly impedes innovation, trimming regulation without serious harmonisation of domestic frameworks will achieve little.

Where the GDPR falls short: AI at work

While regulation like the GDPR is often unfairly scapegoated for the continent’s woes, it is not exempt from criticism.

Consider the algorithmic management (AM) and AI systems that have steadily infiltrated workplaces in recent years. Recent OECD figures reveal that in France, Germany, Italy, and Spain, around 79% of managers across diverse sectors report that their firms already use AM software to hire, organise and monitor their workforces.

Algorithms and AI are not just assisting managers either – in some cases they are replacing them altogether. This ushers in new risks, and entrenches or amplifies old, unresolved problems such as unfairness, opacity, incontestability, dysfunctionality and distrust.

The boom in decision-making digital tools perfectly illustrates the GDPR’s ambivalent role. On paper, it remains a gold-standard shield for personal data, including the data used to fuel Generative AI applications. Yet in practice, the GDPR struggles to fully address the challenges posed by machines making decisions, either independently or on behalf of human managers.

In one recent study commissioned by the EU Directorate-General for Employment, Social Affairs and Inclusion, data protection frameworks were put under the microscope to see whether they can tame AM systems. The verdict was mixed, leaning towards pessimistic. While it is undeniable that the GDPR can be mobilised to limit data processing and avoid repurposing, most of its headline provisions have wide gaps when it comes to the workplace.

The study flags the indeterminacy, ambiguity, and open-textured nature of the rules on automated decision-making, among other things. For instance, semi-automated decisions – hybrid systems with human intervention at the last stage of the executive chain – often slip beneath the radar, reducing the chances for workers to be informed about their existence and reasoning, or to have a real shot at contesting and changing their outcomes.

In a similar vein, uncertainty about the interpretation of grounds for lawful processing and the application of the proportionality principle is leading to a patchwork of discordant decisions made by Data Protection Authorities. As the case law on data controllers’ “legitimate interest” shows, compliance risks becoming a postcode lottery.

Fine-tuning the GDPR

None of this should come as a surprise, as the GDPR was designed to be general, not workplace-specific. Nevertheless, its exceptions and loopholes disadvantage workers, and create uncertainties that affect companies.

In a different political climate, EU institutions were contemplating the introduction of a work-specific instrument to govern algorithms, a proposition that was also included in the mission letter of Roxana Mînzatu, Executive Vice-President for Social Rights and Skills, Quality Jobs and Preparedness. The current deregulatory drumbeat, fuelled by US fury at the EU’s regulatory powers, has cooled that talk, but the idea is not dead.

Workplace technologies are still largely governed by consumer-oriented data protection principles, even though employment contexts differ profoundly. Employers routinely collect sensitive data that extends managerial control into workers’ emotional domains, and AM systems intensify these dynamics by automating decisions and generating detailed profiles.


Read more: 3 ways 'algorithmic management' makes work more stressful and less satisfying


The persistent and asymmetrical nature of workplace surveillance undermines autonomy and erodes mutual trust. Unlike consumers, workers cannot meaningfully refuse these intrusive practices, making power imbalances more acute. Moreover, data harms are often collective, threatening solidarity and enabling anti-union practices.

The Platform Work Directive (PWD) offers a ready-made compass to reorient action on workers’ digital rights. Indeed, a whole chapter is devoted to fine-tuning the GDPR to better govern AM at work. As argued in a policy brief, several PWD provisions appear to be deliberately drafted to fill the gaps left by the omnibus framework.

The PWD covers “decisions supported by” algorithms (not just fully automated ones), extends workers’ information and access rights, re-establishes a right to explanation, and bans robo-firing outright.

It is, however, crucially limited: its sectoral scope stops at the gig economy’s edge, leaving all other workers unprotected. If the GDPR is not good enough for delivery couriers and click-workers, why is it still being applied to everyone else?

Put the chainsaw away

Blaming the GDPR for Europe’s growth woes makes for great clickbait, LinkedIn memes and after-dinner quips, but it ignores the real issues. Looser privacy rules will not fix our problems. On the contrary, a smarter framework for workers’ digital rights could serve as a robust counterbalance, ensuring that AM operates as a tool for efficiency rather than unchecked command-and-control.

By all means, critique the GDPR, but aim at the right target. Its abstract, transactional, individualistic DNA is ill-suited to the collective, lopsided reality of modern workplaces where employees’ data is fed into black-box AI systems.

In those environments the answer is not to prune protections, but to reinforce them by clarifying legal bases, establishing red lines, hard-wiring collective rights, and closing enforcement loopholes. Reform, yes. Regression, no.


Antonio Aloisi does not receive a salary, carry out consultancy work, own shares, or receive funding from any company or organisation that could benefit from this article, and has declared no relevant affiliations beyond the academic position cited.

This article was originally published on The Conversation. Read the original article.
