The WHO calls for “caution” when incorporating AI tools such as ChatGPT into the health field

A tab of the ChatGPT website on a computer, on April 14, 2023, in Madrid, Spain. The Spanish Data Protection Agency (AEPD) has opened preliminary investigation proceedings ex officio against the American company OpenAI, owner of the ChatGPT service, for a possible breach of regulations. The Agency asked the European Data Protection Board (EDPB) to include the ChatGPT service as a topic for its plenary meeting, considering that “global processing operations that can have a significant impact on the rights of individuals require harmonized and coordinated actions at the European level in application of the General Data Protection Regulation.” With the opening of the investigation in Spain and its participation in the European working group, the AEPD is acting in parallel within the framework of its powers and competences as a national supervisory authority, as well as in coordination with its European counterparts through the Board. Eduardo Parra / Europa Press (file photo), April 14, 2023

They ask that these AI “be used safely, effectively and ethically”

The World Health Organization (WHO) has asked that “caution be exercised” in the use of artificial intelligence (AI) tools such as ChatGPT in the health field, and that their possible risks in this area be “carefully examined.”

“While WHO is enthusiastic about the appropriate use of technologies to support healthcare professionals, patients, researchers and scientists, it is concerned that the caution that would normally be exercised with any new technology is not being exercised consistently with these AIs,” the United Nations health agency warned in a statement.

In this context, the WHO warns that the “hasty adoption” of these AI systems “could lead to errors on the part of health personnel, cause harm to patients, erode confidence in AI and, therefore, undermine (or delay) the potential benefits and long-term uses of these technologies around the world.”

The WHO points out, for example, that the data used to train AI “may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusion.” It also notes that ChatGPT responses “may be completely incorrect or contain serious errors, especially in the case of responses related to health.”

Likewise, the WHO warns that these AIs “may not protect sensitive data (including health data) that a user provides to generate a response.” Finally, it points out that “they can be misused to generate and disseminate very convincing disinformation in the form of text, audio or video content that the public can hardly differentiate from reliable health content.”

For all these reasons, the WHO insists on the need for these AIs “to be used safely, effectively and ethically,” and asks political leaders to “guarantee the safety and protection of patients.”
