
Opinion

The new EU Digital Fairness Act needs teeth to tackle Big Tech platforms


On Wednesday (19 November), the EU will present a digital package intended to reduce regulatory burden and cut red tape.

Yet even as Brussels promises simplification, it plans another regulation: the Digital Fairness Act (DFA).

Its development promises to be intensely scrutinised and heavily debated. The idea for a DFA was first introduced in 2024 by EU Commission president Ursula von der Leyen. The initiative aims to tackle unethical techniques and exploitative commercial practices carried out through digital platforms.

Industry united in opposition to the proposal in the public consultation.

Submissions from Apple, Google, TikTok and others carried the same message: existing EU law already addresses these harms.

This argument has surface appeal, and some goals of the DFA are indeed covered by existing legislation. Article 25 of the Digital Services Act (DSA) prohibits interfaces that 'deceive or manipulate' users, and the Unfair Commercial Practices Directive bans practices that mislead consumers.

Multiple pieces of individually reasonable legislation can create undesirable interactions.

For example, the AI Act (Article 10(5)) incentivises companies to use sensitive data to test AI systems for bias, yet they may struggle to find a valid legal basis for doing so under data protection rules (Article 9 GDPR). Adding another law risks making things worse rather than better, directly undermining Brussels' simplification agenda.


4,000 citizens tell a different story

Yet more than 4,000 citizens responded to the public consultation, far outstripping the consultations on affordable housing (60) and digital simplification (50).

Scraping and analysing these responses reveals the extent of consumer harm that current regulation fails to address.

Many participated through the 'Stop Killing Games' campaign, demanding legal protection against server shutdowns that render purchased games unplayable. Others reported spending money on digital currencies that obscure prices, and raised concerns about children hooked by infinite-scroll features and loot boxes engineered to be addictive.

The commission's 2024 Fitness Check quantified what citizens experience: harmful commercial practices online cost EU consumers at least €7.9bn annually.

Research links the addictive design of social media platforms to significant increases in depression and anxiety among young people, with each additional hour of daily social media use raising adolescents' depression risk by 13 percent.

AI systems threaten to amplify these harms. Algorithms personalise prices to exploit your vulnerabilities. Chatbots frequently draw users who only wanted an assistant into personal and intimate relationships, then deploy emotional tricks to keep them engaged.

When companies fund these services through advertising or data sales, they face strong commercial incentives to design them to be as addictive and manipulative as possible. In one documented case, a 14-year-old Florida boy died by suicide after months of intensive interaction with an AI companion. 

Why enforcement fails, and what this means for the DFA

Existing regulation has not been able to stop the harm citizens experience because enforcement is too complicated and slow.

Proving manipulation in court requires years of procedural steps, a reality well understood by platform operators. During these lengthy proceedings, harms continue and new manipulative features appear faster than regulators can address existing ones. 

The TikTok case illustrates this problem. Since 2024, the European Commission has opened four separate investigations into the platform, as a tracker compiled by Think Tank Europa shows.

(Figure: Think Tank Europa's DSA Tracker, based on European Commission data; reproduced with permission from the author)


While facing scrutiny for addictive design practices, TikTok launched a programme paying users for screen time, directly incentivising the very behaviour under investigation.

This prompted a new investigation (TikTok III in the tracker), and TikTok quickly withdrew the feature.

However, this remains the only one of the four cases to reach a final decision. The addictive features that prompted the initial investigation remain in place, and a ruling on them is still pending.

When every intervention remains subject to multi-year litigation over whether harm was 'significant' enough or manipulation 'material' enough, vulnerable users stay exposed throughout the proceedings.

However, when companies understand clearly that certain practices cross the line, compliance becomes more straightforward and enforcement becomes feasible.

The DFA offers an opportunity to impose clear prohibitions on practices that are demonstrably harmful. The consultation responses point towards obvious candidates: loot boxes targeting children, server shutdowns that render purchased games unplayable, and AI companions funded by advertising models that incentivise addiction.

If instead the DFA becomes another broad framework of abstract principles, attempting to be perfectly targeted while giving companies maximum flexibility, it will just create additional tension with existing legislation and strain enforcement capacity further.

The commission must choose prohibitions over principles, or watch harmful practices multiply faster than courts can act.

