Column
AI has escaped the 'sandbox' — can it still be regulated?
Recently I was introduced to the concept of "algorithmic gifts" as part of a research interview on tech lobbying in Brussels. The question was how algorithmic favours might be used to sway the direction of debates and policy.
When Twitter released segments of its code a few weeks back, we got a first, perhaps unsurprising, answer: far from the professed neutrality, posts from President Joe Biden, Elon Musk and a few dozen selected luminaries such as LeBron James, Ben Shapiro and Marc Andreessen get an additional, artificial push from Twitter's algorithm.
It only adds to existing questions about where Musk's Twitter is heading and, more fundamentally of course, about the structure and integrity of today's platform-mediated public space. I myself have long wondered, in the same vein, whether Facebook's algorithm automatically restrains the visibility of posts that are critical of Meta/Facebook.
By now, algorithms create and redistribute power across most aspects of our social, economic and political life. We live in an algorithmic society and with that come steep ethical questions.
AI and singularity
Nowhere are the acceleration and disruption more evident now than in artificial intelligence. The combination of immense data sets, massive computational force and self-learning algorithms promises to unleash enormous powers — in every sense of the word.
In medical research, to take one example, the use of machine learning and mRNA technology (the same used in the Covid-19 jabs) holds tremendous potential. Vaccines against cancer, cardiovascular and auto-immune diseases could be ready as early as the end of this decade.
Few would want to relinquish this promise. Much more contested is the emergence of so-called General Purpose Artificial Intelligence, or GPAI — self-learning algorithms capable of performing multiple, varied tasks to the point of giving the impression of thought.
Within two months of its launch, the first application out of the starting blocks, ChatGPT, reached 100 million users, a pace not seen for any other consumer tech application.
As these users now play with prompts, the machine learns. The latest version of the software already boasts spectacular analytical and creative capacities, presaging its expansion into practical human activities, from finance (and column writing!) to the arts and sciences.
With the dynamics of exponential advance, the popular fear is that these powerful, 'intelligent' technologies will radically and unpredictably transform our reality — or even develop some form of life of their own.
It's difficult to blame the naysayers and doomsters. In Silicon Valley, sorcerer's apprentices, equipped with libertarian philosophy and venture capital, have long yearned for the moment of 'technological singularity', a future in which technological growth becomes uncontrollable and irreversible.
In the mind of Ray Kurzweil, Google's director of engineering and a key figure in the movement, the process towards singularity began long ago and will culminate around 2030 (note: the same timeline as for the vaccines).
Computers will have human intelligence, and our (final) choice might be to put them inside our brains, connecting our neocortex to the cloud.
EU's AI Act on the spot
As so often, Europeans will be the first to regulate, and can, by and large, take some pride in that. An Artificial Intelligence Act has been on EU lawmakers' table for two years with the aim of setting the guardrails for safe and lawful AI.
Certain AI practices, such as social scoring, will be prohibited outright, and others, categorised as "high risk", will be subject to third-party audits and significant transparency requirements under the legislation, due to be finalised this year.
That is all good, but it's the response to the commercialisation of GPAI that now stands as the decisive test.
In truth, EU lawmakers are very much in the dark about what to do. At a recent lunch with senior Brussels lawmakers and industry representatives, civil society voices asked whether it could all be stopped; the emphatic answer was that it could only go faster.
Coincidentally, only days later, more than 1,000 AI experts wrote an open letter asking for a pause in the training of systems more powerful than GPT-4, saying that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."
On the face of it, the wildfire spread of GPAI already marks a failure of European regulatory efforts. To check the unsupervised real-world testing of AI systems, the EU AI Act had envisaged regulatory 'sandboxes' establishing a controlled environment for the development, testing and validation of AI innovation.
Regulation is, as ever, reactive and slow compared to the disruptive vitality of technology. Lawmakers must now figure out how to frame this unrestrained acceleration and how to future-proof legislation that will not take effect for more than three years.
Competition policy certainly has a role to play to prevent a few GPAI 'gatekeepers' from becoming single entry points for AI tasks. Lawmakers must also consider how strict liability regimes can force developers to think twice about out-of-sandbox releases, without stifling European domestic developments in the global race ahead.
(Big) Tech ethics
In the contest between Silicon Valley tech mavens and Chinese state-controlled innovation, the window for meaningful action from Europe can often appear vanishingly small. At the same time, the stakes are lasting and high: we are peeking through an Aventine keyhole onto the major ethical and societal dilemmas that algorithmic powers hold for our future.
For something as paradigmatic as AI, one can still hold out hope that all actors will accept the necessity of open democratic debate, control and regulation.
Yet, we should also be wary of illusions: for all the talk of 'doing good' from innovators, most ventures are in the end governed by profit-maximising imperatives, rather than wider societal interest.
Big Tech's track record, in particular, is sadly appalling. When the EU debated the Digital Services Act in 2022, front groups and other forms of hidden lobbying swarmed all over it.
In a leaked internal memo, Google had then set out a list of by-every-means-possible tactics to fight effective EU regulation, for which Alphabet's CEO Sundar Pichai later had to apologise.
On the other side of the Atlantic, behaviour has been similarly, if not more, brutal. Ahead of the 2022 US midterm elections, incumbent lawmakers faced arm-twisting threats that Big Tech funding would flow to their political adversaries if congressional bills moved ahead.
Currently, it is lawmakers in Canada who are under fire over regulatory initiatives on online news, broadcasting and online safety. It has gone so far that the Canadian parliament is undertaking what will be a world-first parliamentary study of the tech giants' use of intimidation and subversion tactics to evade regulation across the globe.
In the end, Europeans are perhaps not all alone at the hard edge of regulation. But what is needed is not just regulation, but a new and broader paradigm of what I call tech control.
Independence of research is one area where alarm bells are ringing. Meredith Whittaker, a former Google insider and AI-ethics researcher who now heads Signal, has attested to how independence from corporate actors in AI ethics can be asphyxiated. Even within the EU's AI ethics group, it was hard to come by.
The defence of fundamental interests therefore requires not just capable AI agencies and effective liability rules, but a wide ecosystem focused and acting on how technology and corporate powers direct our future. As Timnit Gebru, another AI-ethics researcher forced out of Google, noted recently: for the time being, let's avoid ascribing agency to the algorithms rather than to the organisations building them.
Author bio
Georg E. Riekeles is associate director and head of the Europe's Political Economy programme at the European Policy Centre. Before joining the EPC, he served as diplomatic adviser to the EU's chief negotiator Michel Barnier and as head of strategy, media and diplomatic relations in the European Commission's Task Force for EU-UK negotiations.
Disclaimer
The views expressed in this opinion piece are the author's, not those of EUobserver.