GZERO AI

Slapping nutrition labels on AI for your health

Female doctor in hospital setting. Reuters
Doctors use AI to help make diagnoses, but machines can’t take the Hippocratic Oath. So how can Washington ensure AI does no harm? The US Department of Health and Human Services is on the case: It’s proposing “nutrition labels” to bring transparency for healthcare-related AI tools.

At a congressional hearing last week, Rep. Cathy McMorris Rodgers (R-WA) noted how AI can help detect deadly diseases early, improve medical imaging, and clear cumbersome paperwork from doctors’ desks. But she also expressed concern that it could exacerbate bias and discrimination in healthcare.

Patients need to know who, or what, is behind their healthcare determinations and treatment plans. This requires transparency, a key principle of the Biden administration's Blueprint for an AI Bill of Rights, released last year.

The new rule, first proposed in April by the HHS’s health information technology office, would require developers to publish information about how AI healthcare apps were trained and how they should and shouldn’t be used. The rule, which could be finalized before January, aims to improve both transparency and accountability.
