Generative AI could be game-changing for the world of medicine. It could help researchers discover new drugs and better match ailing patients with correct diagnoses.
But the World Health Organization is concerned about everything that could go wrong. The global health authority is formally warning countries to monitor and evaluate large language models for medical and health-related risks.
“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” said WHO official Alain Labrique. This advice was issued as part of a larger guidance on AI in healthcare, a topic on which the WHO began advising in 2021.
Artificial intelligence systems are susceptible to bias because the data included in, or absent from, their training can seriously skew their outputs. For example, if a medical AI model is trained solely on health data from people in wealthy nations, it could miss or misdiagnose conditions in populations in poorer nations and do harm if used improperly.