A new report published by MIT’s NANDA initiative, “The GenAI Divide: State of AI in Business 2025”, finds that 95 percent of AI “pilots” show no measurable gains in productivity. (Photo: Unsplash)

How the EU could make AI an engine of workplace democracy

Adoption is high, disruption is low. What if the real problem with artificial intelligence in organisations isn’t the technology, but how it’s introduced?

A new report published by MIT’s NANDA initiative, “The GenAI Divide: State of AI in Business 2025”, finds that 95 percent of AI “pilots” show no measurable gains in productivity. Not only because the models misfire, but because they’re dropped in from above, never grafted onto the living fabric of work and everyday processes.

One explanation for these zero returns is obvious to anyone who’s actually done the job: providers, consultants and programmers don’t know work methods as well as the people doing the work.

Work, in practice, runs on exceptions, tacit know-how, micro-decisions, and operational compromises; things no top-down solution can fully capture. When AI arrives as a “turnkey” system, it often stiffens workflows, piles on bureaucracy, and stalls in pilot purgatory. The frenzy pushing many managers to move fast risks backfiring, bubble or no bubble.

Participation is key

There is another path, documented by comparative research: build participation in from the start and through rollout. A new ILO working paper edited by Virginia Doellgast and colleagues presents cases from around the world where social dialogue and collective bargaining shifted AI from replacing to complementing work, from control to empowerment, from disruption to embedding innovations within safeguards and reskilling pathways.

Where representative bodies exist, where consultation is real, and where employers face limits on “exit” strategies (automation/outsourcing), AI delivers more, with fewer conflicts and higher-quality results.

This point is crucial – obvious to anyone who has wrestled with organisational complexity – yet too often ignored.

Shadow adoption

There is more. Workers across a wide range of clerical and cognitive jobs are quietly folding generative AI into their daily routines. Early signs suggest this “shadow adoption” is creating time dividends and pockets of autonomy, making the workday feel better. But liberation is hardly guaranteed. Too often, the gains don’t translate into agency or job quality; they’re absorbed by the system as a mandate to do more, but without more control, flexibility or reward.

From a legal standpoint, the debate has focused almost exclusively on privacy as the antidote to excesses of automation. But the heart of “algorithmic management” (a set of systems and practices to automate managerial functions) isn’t (just) data processing. It’s the expansion – and the obfuscation – of employer power over hiring, shift allocation, pace and intensity of work, surveillance, evaluations, bonuses and sanctions, up to and including dismissals. Reduce it all to GDPR consent and disclosure, and the command structure stays intact. 

Many of the harms associated with algorithmic systems, such as opacity, intensification of control, or discriminatory outcomes, do not necessarily stem from privacy infringements nor only violate the right to respect for workers’ private lives. To properly address the risks stemming from automated decision-making systems, privacy must be situated within the wider context of workplaces, where managerial authority pervades the worker’s entire personhood.

As we contend, what’s needed are rules that rebalance power: functional transparency (how and for what AI is used), decisions that can be audited, a right to challenge outcomes, and the co-design of systems with the people who actually do the work. This approach reflects the foundational principle that power must be both authorised and constrained to remain legitimate.

Algorithmic management

Against this backdrop, the European Parliament will debate a report on algorithmic management. It’s a meaningful step: it recognises that algorithmic tools now organise, monitor, and evaluate work. But to meet the moment, policymakers have to accept that privacy alone won’t do. The centre of gravity must shift from mere “information and consultation” (which leaves the last word to the employer) to collective bargaining over algorithmic systems—because what’s at stake are basics: hours, pay, staffing, progression, health and safety, job security.

In practice, that means: no unilateral adoption of tools that affect schedules, wages, or duties; negotiated, transparent impact assessments of how work is organised; collective access to algorithms (decision logic, metrics, data inputs) and pilot tests for new systems; the right to halt or retool systems that generate discrimination, intensify work, or create health risks; guaranteed, funded reskilling when automation changes roles. This doesn’t “hold back innovation.” It steers it toward high-quality productivity and social legitimacy—as the ILO cases show.

The European Union now faces a choice: keep rolling out showcase projects that inflate slide decks, campaigns, and balance sheets but fail on the ground, or make AI an engine of workplace democracy.

The second path doesn’t require miracles; it requires time for shared design, institutions of participation (works councils, health and safety reps, joint committees on data and algorithms), and contracts that set purposes, limits and responsibilities. That’s how you avoid joining the 95 percent of failed pilots—and deliver on AI’s real promise: to strengthen work, not replace it; to improve quality, not erode rights.

If innovation is truly meant to serve society, involving workers and affected people is what makes the technology intelligent. The EU has the chance – now – to write that into law. The rest of us should insist on it.

Disclaimer

The views expressed in this opinion piece are the authors’, not those of EUobserver

Author Bio

Antonio Aloisi is an associate professor at IE University Law School in Madrid. Co-author of “Your Boss Is an Algorithm. Artificial Intelligence, Platform Work and Labour”. Valerio De Stefano is a Professor of Law and Canada Research Chair in Innovation, Law, and Society at Osgoode Hall Law School, York University, Toronto.