28th Sep 2023


ChatGPT could be a 21st-century Goethe's Sorcerer's Apprentice


Is artificial intelligence a dangerous innovation that could destroy humanity? A lot of serious people think so. More than 3,000 of them have signed a letter calling for a six-month moratorium on "giant AI experiments". The writer Yuval Noah Harari, the tech legend Steve Wozniak, and Elon Musk signed it.

Why all the uproar now? AI has been around for a while in daily applications like driver-assistance systems, the selection of what we see on social media, or the work of internet search engines.


But we have been passive consumers of these applications. ChatGPT has changed that. It lifted the curtain on AI. Anyone with an internet connection can use the tool and other AI-powered language and image generators. Anyone can get a feeling about the power of the technology. It is impressive.

It is also frightening. Many white-collar workers thought they were safe in their jobs — they felt they were too specialised or creative to worry. They are thinking again. Language generators can produce long, complex texts, often to a high standard.

They can write poems in any style, write computer code or deliver any kind of information. Today they make you more efficient, tomorrow they make you redundant.

Hence the sense of foreboding among the 3,000+ signatories of the letter. And it goes beyond concerns about a massive upheaval in the job market. They fear that we will have the experience of Goethe's Zauberlehrling (the Sorcerer's Apprentice).

He animates a broomstick, which begins to flood the house — and then he loses control of it: "Spirits that I've cited, my commands ignore."

Goethe's prophetic warning

Like Goethe's poem, the letter has an end-of-times tone: "Should we risk loss of control of our civilisation?" Goethe: "Brood of hell, you're not a mortal! Shall the entire house go under?" And like Goethe, the letter talks about flooding: "Should we let machines flood our information channels with propaganda and untruth?"

Many have criticised the letter. The authors stand accused of doing free marketing for AI companies, by exaggerating the power of AI.

They have a point about the sudden hype around ChatGPT. It is far from perfect and does some things systematically wrong, for example producing misinformation in areas where it had only limited training material ("closed domains").

But this is no reason to relax; on the contrary. When such language models produce misinformation, they will do so on a large scale. Why? Because the industry believes the future of internet search will be based on language models. We will get authoritative-sounding answers to our search questions, rather than links to various sources.

When it goes wrong, it will go wrong in a big way.

Some people argue that these are merely language models working with probabilities, that they have no human intelligence that attaches meaning to words and therefore there is nothing to fear. But they confuse the inner workings of the technology with its effects.

At the end of the day users do not care how these models generate language. They perceive it as human-like and they will often perceive it as authoritative.

With social media we have allowed a few companies to totally restructure the course of public debate, based on their choices of what we should read, see and 'engage with'. There was no risk assessment before social media were marketed.

Mark Zuckerberg's now infamous motto was "move fast and break things". And so he rolled out social media that could be used to share family photos — or to promote calls for murderous violence against minorities.

Large language models will be far more influential than social media have been. They are approaching "general purpose" AI — able to do all kinds of things — mediated by human language. We should not sit back and watch how this will play out.

Fortunately, in the EU we have the beginnings of some regulation. The misinformation problem of models like ChatGPT needs to be addressed through the code of practice against disinformation. Possibly it can also be tackled on the basis of the Digital Services Act. The EU's AI Act is in the works and should be enacted as a matter of urgency.

More attention should be paid to technical solutions as well. There are many ideas and initiatives to address the harms of AI, such as mechanisms to authenticate content. If workable, they would remove one problem with AI-generated content: the current inability to tell whether content is authentic, or whether it was manipulated or artificially generated.

The letter writers worry in particular about the lack of risk assessments. They have a point.

OpenAI, the company behind ChatGPT (now largely controlled by Microsoft), tried to reduce harms, and indeed ChatGPT avoids some obvious pitfalls. It will not tell you how to make a dirty bomb (except if you find a roundabout way of asking the question).

On the ChatGPT website, OpenAI has published a technical paper on a sibling language model. It provides important insights into how the model has been fine-tuned to reduce harm. However, it is a paper by data engineers for data engineers. It is mainly about optimisation of the model compared to other models.

This analysis is a far cry from a systematic risk assessment which would have to include many different specialisations and systematically anticipate the many malicious use cases of a language model and its unintended consequences, before making it available to everyone.

If ever policymakers need to show agility, it is now, on AI. The technology urgently needs a regulatory framework to address and reduce its risks and to create a business-friendly environment based on the rule of law, offering a level playing field for all companies.

Author bio

Michael Meyer-Resende is the executive director of Democracy Reporting International, a non-partisan NGO in Berlin that supports political participation.


The views expressed in this opinion piece are the author's, not those of EUobserver.


