In the past few years, the EU has introduced new laws to regulate large online platforms and the broader digital ecosystem, including the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act. Now that these rules are coming into force, Big Tech is using its enormous power and influence – not least over the US government under President Trump – to avoid compliance and shirk its responsibilities.
In our new geopolitical reality, what hurdles must the EU clear to ensure its legislation is effective? And how can the EU stand up for itself, protect its founding democratic values, and uphold its reputation in the process?
Tech under Trump
The European digital ecosystem has changed since the re-election of President Trump, who has already criticised the EU’s decisions to fine and investigate US tech firms. His goal is to push the EU to water down enforcement of its new rules for digital platforms, such as the DSA and DMA.
At the same time, Big Tech has made a point of ensuring citizens and governments know it supports Trump and his administration. At the inauguration, tech CEOs had better seats than Trump’s cabinet picks – seats usually reserved for family, past presidents, and other honoured guests. Elon Musk voiced his support for Trump during his campaign, donated more than $250 million, and was appointed to co-lead the Department of Government Efficiency (DOGE): a new initiative that is currently dismantling USAID, among other government agencies. Mark Zuckerberg’s Meta donated $1 million to Trump’s inauguration fund, reshuffled its lobbying staff, and changed its content-moderation policies to better resonate with the new administration’s narrative that such moderation restricts free speech.
The tech market was already highly concentrated, opaque, and unaccountable before Trump’s return to power – a situation that only worsened in recent years due to aggressive acquisitions and anti-competitive behaviours – but now, these tech giants are telling the world that their enormous power has the explicit backing of the US government.
The consequences of this concentration of power extend far beyond market distortions. Allowing a few unelected individuals who own private corporations to control entire markets and to set the rules for people, businesses, and even public services is a threat to democracy itself.
One very clear example is how online platforms have contributed to the spread of harmful content, such as hate speech and disinformation. Their advertising-based business model has a significant impact on civic discourse and electoral processes. Algorithms that display tailored content to users are designed to favour the items that generate the most engagement, so that online ads get the most visibility. But the content that generates the most engagement is often polemical and controversial, which ends up amplifying disinformation and extremism (sometimes purposefully, sometimes not). This, in turn, pollutes online political debate and paves the way for election interference. It is obviously fair for a company to make business decisions based on profit maximisation, but it should not be acceptable for those decisions to negatively impact democracy. Furthermore, this model places public discourse at the mercy of those who control the platforms, who can skew it in any direction they wish; and the lack of transparency and accountability around how the algorithms function means they can do so with no oversight or consequences.
Another example is the impact that collecting and using personal data has on users’ fundamental rights. The massive amount of data generated by online activities is used to profile individuals so as to display tailored content to them – including political ads – and to train AI applications that might replicate that personal data in chatbots, deepfakes, or targeted ads. Once again, decisions on what protections are granted to personal data, and whether the data fed to AI systems should be aligned with democratic values or not, should not be left in the hands of companies, which will just choose the cheapest option. This is why rules enacted by bodies with democratic legitimacy are needed.
EU regulations
The EU has taken Big Tech’s threat to democracy very seriously. The DSA, DMA, and AI Act were well-thought-out solutions: they impose duties and responsibilities on tech companies to mitigate the harms they can cause while upholding the founding values of the EU, such as respect for human dignity, democracy, the rule of law, freedom of expression, equality, and human rights more broadly – including the rights of persons belonging to minorities.
Getting to this level of regulation was not an easy ride. Tech companies have been severely underregulated for a long time because of policymakers’ struggle to fully grasp how new technologies function and their impact on users. The massive economic power of these companies also enabled expensive lobbying campaigns to water down the development of new rules, including on the delicate issue of adopting proportionate and dissuasive fines to ensure compliance.
Yet even when new rules have been adopted, effective implementation and enforcement can be an uphill battle – as we have seen in the case of the DSA.
Big Tech’s dirty games
Tech companies are already using various strategies to avoid the rules set out in the DSA, which entered into force in 2024. Four such strategies stand out.
First and foremost, Big Tech is putting pressure on the US administration to weaken DSA enforcement in the EU. Zuckerberg argued that “the US government has a role in basically defending [the US tech industry] abroad,” while Apple’s Tim Cook reportedly asked the US administration to intervene against EU fines imposed on his company. And the pressure seems to be working: Trump criticised EU regulators for imposing fines on Apple, Google, and Meta, and for their ongoing investigations against them, and Vice-President JD Vance said the US could drop support for NATO if Europe tries to regulate Musk’s platforms. The EU has responded that its tech regulation does not target US companies and that it aims to ensure compliance, not to issue fines.
Second, platforms are reinforcing the false rhetoric that the DSA is a censorship law. In reality, the DSA is a rather procedural, content-neutral regulation that obliges platforms to flag and remove illegal content, to adopt transparent content-moderation policies, and to assess and mitigate systemic risks in different areas, including civic discourse and electoral processes. The censorship narrative, however, has spread to the point that even the US House judiciary chair has written to the European Commission to express concerns over “how the DSA’s censorship provisions affect free speech in the United States”.
Third, platforms are withdrawing essential services from the EU to avoid complying with the new rules. Google decided to leave the EU market for political ads, alleging “operational challenges” and “legal uncertainty” around the Regulation on the Transparency and Targeting of Political Advertising – effectively treating withdrawal from the market as a form of compliance. Meanwhile, X announced that it was pulling out of the EU’s Code of Practice on Disinformation, which, while voluntary, is an important aspect of compliance with the new rules, particularly when it comes to the DSA’s requirement for “very large online platforms” to mitigate risks.
Finally, platforms are openly disregarding existing rules and ongoing investigations. X has refused to provide access to relevant data under the DSA; in Germany, Democracy Reporting International (DRI) and the Society for Civil Rights took the company to court, which ruled that X must grant DRI unrestricted access to all publicly available data on its platform, immediately and until shortly after the German federal election. Meanwhile, Meta changed its content-moderation policies to introduce “Community Notes,” claiming it had “seen this approach work on X” – yet X is under investigation in the EU for the “effectiveness of X’s so-called ‘Community Notes’ system” as a measure to combat information manipulation. In other words, Meta is adopting policies that it knows potentially infringe on EU rules.
These companies know very well that, given the extreme market concentration, people have no real alternative to their services. If all the platforms refuse to comply, the new EU rules will be empty words on paper while social media runs wild: amplifying hate speech and disinformation, skewing public discourse, destroying independent media, and spreading propaganda that benefits its owners.
Indeed, this is already happening – as we saw when Musk supported Germany’s far-right Alternative für Deutschland (AfD) party and amplified inflammatory content about riots in the UK. Let us be clear here: Elon Musk has every right to talk to the AfD leader and to express his views on events in the UK; the problem is that he unilaterally decides to amplify his content over, for example, the content produced by independent media outlets in Germany or the UK.
Time to “think and act big”
It is time for the EU to take a clear stand, make Big Tech live up to its legal obligations, and show that democratic principles are not for sale. To do this, it must use all the tools at its disposal, including the DSA, DMA, AI Act, and Political Ads Regulation; it must stand firm on implementation; and it must refuse to tolerate threats from Big Tech, third countries, or any other actors. Investigating such powerful actors will be challenging, but handing out real sanctions for non-compliance is the only way to protect the EU’s values and interests.
The EU must “stick together and dare to think and act big,” as Ursula von der Leyen stated in her political guidelines for the new Commission. This means considering new ideas and investing in real alternatives, such as breaking up Big Tech or making its infrastructure public and recognising its platforms as public utilities. And it means reducing our dependency on a handful of corporations – whoever they may be.
Big Tech has played its Trump card. It is time for the EU to show its hand. The next move will be decisive for democracy.
Authors
Sofia Calabrese is a Digital Policy Manager at the European Partnership for Democracy (EPD). Her work involves all topics at the crossroads between digital policy and democracy, with a strong focus on political advertising, online platform regulation and Artificial Intelligence. She previously worked as a consultant in the Brussels-based public affairs consultancy Grayling, focusing mainly on platform regulation. Other experiences include a Schuman Traineeship at the Cabinet of the President of the European Parliament. She holds a double Master’s Degree in Italian and French Law from the University of Milan and the University of Toulouse.
Roy Virah-Sawmy is a Policy Coordinator at the European Partnership for Democracy (EPD), working on both EU internal and external democracy support. Prior to joining EPD, Roy worked for a consortium of funders, where he designed and implemented democracy support funding strategies on issues including civic space, digital policy and media. He also has experience in journalism and corporate law in his home country, Mauritius. Roy holds an LLM in Human Rights Law and International Law and an LLB in English and French law from the University of Kent and the University of Bordeaux.
This publication was produced with the financial support of the European Union. Its contents are the sole responsibility of the authors and do not necessarily reflect the views of the European Union.
Photo credit: ©Nothing Ahead, Pexels