
Get AI out of my health care

A man uses a chatbot in this illustration photo.

Jaap Arriens/NurPhoto via Reuters

You fall and break an arm. Doctors set the break and send you to rehab. It’s pricey, but insurance should take care of it, so you submit your claim – only to be denied. Was it a claims examiner who rejected it? Or AI?

On Feb. 6, the US government sent a memo to certain Medicare insurers clarifying that no, they cannot use artificial intelligence to deny claims. While machine-learning algorithms can be used to assist them in making determinations, an algorithm alone cannot be the basis for denying care.


This memo, sent by the Centers for Medicare & Medicaid Services, follows lawsuits accusing health insurers of using AI to erroneously deny patients the care they deserve. UnitedHealthcare and Humana have each been sued by patients who allege the companies used the AI model nH Predict, which the plaintiffs claim has a 90% error rate, to wrongfully deny coverage. It’s a clear and present danger of the technology at a time when many regulators and critics are focused on far-off threats of AI.

CMS also said it’s concerned about the propensity for algorithms to “exacerbate discrimination and bias” and said the onus is on insurers to make sure these models comply with the Affordable Care Act’s anti-discrimination requirements. And it’s not just the federal government: A number of states including New York and California have issued warnings to insurance companies to ensure their own algorithms aren’t discriminatory.
