Artificial Intelligence (AI) is spreading into nearly every area of society and the economy, and its reach is growing by the hour. In Washington, health care sector leaders met with Congressional leaders in late November to provide insight on the use of AI in their industry and some of the pitfalls they have been seeing.

At a recent hearing on AI in health care, witnesses expressed concern that implicit bias in AI health care tools could discriminate against patients based on demographics.

"This hearing will give our members a chance to hear from experts and those in the field about how AI is currently being used, as well as what guardrails, like a national data privacy standard, are needed to protect people's privacy," E&C Chair Cathy McMorris Rodgers (R-Wash.) and Health Subcommittee Chair Brett Guthrie (R-Ky.) said in a statement.

"Generative [large language models] must be 'trained' on massive volumes of written language — the ultimate compendium of human experience," said Benjamin Nguyen, senior product manager at health care company Transcarent. "It therefore inherits the inherent biases of that experience through the data used to train the model."

Many witnesses from the industry told Congress it should consider the training procedures used in AI that could result in bias to ensure equitable use of AI in medicine.

Dr. David Newman-Toker, director of the division of neuro-visual and vestibular disorders in the neurology department at Johns Hopkins University School of Medicine, said AI systems should be trained on "gold-standard data sets" to ensure health care professionals aren't "converting human racial bias into hard and fast AI-determined rules."

Ensuring AI does no harm to patients is another concern. While AI has the potential to detect disease early, improve technology and reduce paperwork for health care professionals, bias and discrimination still persist. Last year, President Biden proposed an AI Bill of Rights that would require AI developers to publish information about how AI apps were trained and how they shouldn't be used. This rule is still in the works but has the potential to improve transparency and accountability in health sector AI applications.

Related: FDA plan for reviewing AI-enabled medical devices isn't quite computing

This also comes on the heels of the American Medical Association's Principles for Augmented Intelligence Development, which calls for regulations around liability and transparency.

It's a delicate balance that doctors are just starting to consider as AI functionality grows. "AI to improve medical diagnosis poses significant risks but also presents uniquely large opportunities for positive impact," Newman-Toker wrote in his opening statement. "[With the] absence [of] carefully crafted regulations, innovative payment incentives, and new research resources directed to overcome key barriers to successful deployment of high-quality AI systems, risks will dominate."
