Friday

1st Mar 2024

Column

ChatGPT could be a 21st-century version of Goethe's Sorcerer's Apprentice


Is artificial intelligence a dangerous innovation that could destroy humanity? A lot of serious people think so. More than 3,000 of them have signed a letter calling for a six-month moratorium on "giant AI experiments". The writer Yuval Noah Harari, the tech legend Steve Wozniak, and Elon Musk signed it.

Why all the uproar now? AI has been around for a while in daily applications like driver-assistance systems, the selection of what we see on social media, or the work of internet search engines.


But we have been passive consumers of these applications. ChatGPT has changed that. It lifted the curtain on AI. Anyone with an internet connection can use the tool and other AI-powered language and image generators. Anyone can get a feeling about the power of the technology. It is impressive.

It is also frightening. Many white-collar workers thought they were safe in their jobs — they felt they were too specialised or creative to worry. They are thinking again. Language generators can produce long, complex texts, often to a high standard.

They can write poems in any style, write computer code or deliver any kind of information. Today they make you more efficient, tomorrow they make you redundant.

Hence the sense of foreboding among the 3,000+ signatories of the letter. And it goes beyond concerns about a massive upheaval in the job market. They fear that we will have the experience of Goethe's Zauberlehrling (the Sorcerer's Apprentice).

He animates a broomstick, which begins to flood the house — and then he loses control of it: "Spirits that I've cited, my commands ignore."

Goethe's prophetic warning

Like Goethe's poem, the letter has an end-of-times tone: "Should we risk loss of control of our civilisation?" Goethe: "Brood of hell, you're not a mortal! Shall the entire house go under?" And like Goethe, the letter talks about flooding: "Should we let machines flood our information channels with propaganda and untruth?"

Many have criticised the letter. The authors stand accused of doing free marketing for AI companies, by exaggerating the power of AI.

They have a point about the sudden hype around ChatGPT. It is far from perfect and gets some things systematically wrong, for example producing misinformation in areas where it has only limited training material ("closed domains").

But this is no reason to relax; on the contrary. When such language models produce misinformation, they will do so on a large scale. Why? Because the industry believes the future of internet search will be based on language models. We will get authoritative-sounding answers to our search questions, rather than links to various sources.

When it goes wrong, it will go wrong in a big way.

Some people argue that these are merely language models working with probabilities, that they have no human intelligence that attaches meaning to words and therefore there is nothing to fear. But they confuse the inner workings of the technology with its effects.

At the end of the day users do not care how these models generate language. They perceive it as human-like and they will often perceive it as authoritative.

With social media, we have allowed a few companies to completely restructure public debate, based on their choices of what we should read, see and 'engage with'. There was no risk assessment before social media were brought to market.

Mark Zuckerberg's now infamous motto was "move fast and break things". And so he rolled out social media that could be used to share family photos — or to promote calls for murderous violence against minorities.

Large language models will be far more influential than social media have been. They are approaching "general purpose" AI — able to do all kinds of things — mediated by human language. We should not sit back and watch how this plays out.

Fortunately, in the EU we have the beginnings of some regulation. The misinformation problem of models like ChatGPT needs to be addressed through the code of practice against disinformation. Possibly it can also be tackled on the basis of the Digital Services Act. The EU's AI Act is in the works and should be enacted as a matter of urgency.

More attention should be paid to technical solutions as well. There are many ideas and initiatives to address the harms of AI, such as mechanisms to authenticate content. If workable, they would remove one problem with AI-generated content: the current inability to tell whether a piece of content is authentic, or whether it was manipulated or artificially generated.

The letter writers worry in particular about the lack of risk assessments. They have a point.

OpenAI, the company behind ChatGPT (now largely controlled by Microsoft), has tried to reduce harms, and ChatGPT does indeed avoid some obvious pitfalls. It will not tell you how to make a dirty bomb (unless you find a roundabout way of asking the question).

On the ChatGPT website, OpenAI has published a technical paper on a sibling language model. It provides important insights into how the model has been fine-tuned to reduce harm. However, it is a paper by data engineers for data engineers, mainly concerned with how the model performs compared with other models.

This analysis is a far cry from a systematic risk assessment, which would have to draw on many different specialisations and systematically anticipate the many malicious use cases of a language model, and its unintended consequences, before making it available to everyone.

If policymakers ever needed to show agility, it is now, on AI. The technology urgently needs a regulatory framework to address and reduce its risks, and to create a business-friendly environment based on the rule of law, offering a level playing field for all companies.

Author bio

Michael Meyer-Resende is the executive director of Democracy Reporting International, a non-partisan NGO in Berlin that supports political participation.

Disclaimer

The views expressed in this opinion piece are the author's, not those of EUobserver.
