
Clear rules needed to protect patients using AI in healthcare, WHO warns


As AI tools become more widespread across healthcare systems in Europe, a World Health Organization (WHO) report published on Wednesday (19 November) warns that the current legal framework on AI is not sufficient to protect European patients and healthcare workers.

When people are ill, they tend to research their symptoms online – now even more so with ChatGPT or other consumer-friendly AI bots. Doctors also use AI when detecting medical conditions, managing administrative duties and communicating with patients.

The report found that 32 countries (64 percent) currently use AI-assisted diagnostics, particularly for imaging and disease detection.

Half of all countries have deployed AI chatbots for patient interaction and support, and slightly more than half have designated priority areas for health AI implementation.

While countries cite enhancing patient care, alleviating workforce strain and boosting efficiency as key motivations for AI adoption, only 25 percent have dedicated funding to pursue these priorities.

The report states that only four countries have implemented a dedicated national AI strategy for health, with another seven working on one.

The WHO European region includes the EU member states as well as the UK, Russia, Serbia, Georgia and other countries.

Civil society organisations and experts have warned that the rapid adoption of AI comes with significant risks.

According to the UN health body, the main challenges European countries cited for AI implementation in healthcare are financial constraints and legal uncertainties, such as unclear responsibility when AI systems make mistakes.

Without clear regulations, doctors may hesitate to use AI tools, and patients may have no way to seek compensation if errors occur.

David Novillo, lead author of the report, noted that the landscape is evolving rapidly, adding that regulation is key to ensuring patient safety and privacy as well as protecting health workers.

"We know that, by tradition, when it comes to technology, regulation is always behind. I think that regulation is needed when it's about implementing digital solutions", Novillo told EUobserver.

Natasha Azzopardi-Muscat, director of health systems of WHO Europe, framed the stakes clearly: "Either AI will be used to improve people's health and wellbeing, reduce the burden on our exhausted health workers and bring down health-care costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care. The choice is ours."

