California Assemblymember Rebecca Bauer-Kahan, D-San Ramon, Calif., speaks at an Assembly hearing on health privacy concerns. Credit: California State Assembly
Should governments try to keep artificial intelligence systems from collecting patient data, or should they simply regulate how companies use the information that the AI systems collect?
California Assemblymember Rebecca Bauer-Kahan, D-San Ramon, Calif., said Wednesday that California should regulate AI systems' efforts to collect patient data, not just potentially harmful uses of the data.
Health care organizations can do great things with patient data, but, "in many settings, it's used to harm people," Bauer-Kahan said.
Bauer-Kahan — an intellectual property lawyer and the chair of the California Assembly Privacy and Consumer Protection Committee — said she saw the limits of the government's ability to regulate use of patient data recently when she went to her own child's pediatrician's office.
The office was using an AI-based "scribe" system, or session recording system. The office staff asked Bauer-Kahan to waive her privacy rights both under the Health Insurance Portability and Accountability Act and the California Confidentiality of Medical Information Act so that doctors could use the scribe system.
"This was in part because I was at a smaller health system with less bargaining power," Bauer-Kahan said. "The larger health systems appear to have bargained for a contract with the AI companies that protects patient data. My smaller system had not."
Bauer-Kahan talked about the divide between health care organizations with the clout to protect patient health data and the organizations without that kind of clout at a health privacy hearing organized by her committee and the Assembly Health Committee.
What it means: California's work on health AI laws and regulations could affect the future of health AI laws throughout the world, because California has the world's fifth largest economy, in terms of gross domestic product, and because it has jurisdiction over the AI companies, health technology firms and other tech firms in Silicon Valley.
The skirmish over whether to regulate "upstream" AI data collection or "downstream" uses of the data could help frame how lawmakers and regulators approach health AI regulation going forward.
For employer health plan sponsors, the outcome of the health AI debate could limit plans' access to useful new systems, expose the sponsors to new types of liability lawsuits and affect the sponsors' control over potentially valuable plan data.
The hearing: Hearing organizers brought in witnesses from tech companies like Google and Penguin AI, health care delivery organizations like Cedars-Sinai, and Kaiser Permanente, an organization that acts both as a health coverage provider and as a health care provider.
Dr. Daniel Yang, vice president of AI and emerging technologies at Kaiser Permanente, said health care organizations have no choice but to try to use AI to cope with a critical shortage of human health care workers.
Next year, the United States could have 3.2 million fewer health care workers than it needs, Yang said.
AI is already helping Kaiser Permanente make the health care workers it has more efficient by helping them with tasks such as documenting patient interactions, Yang said.
Life insurance underwriting: Insurance companies see use of applicant data in underwriting as an important strategy for managing mortality risk and holding down the cost of insurance.
But Bauer-Kahan said that, from the perspective of a patient, the possibility that an insurance company could use the data obtained by an AI in underwriting is alarming.
If, for example, an AI-based mobile phone app can track a patient's heart performance data and send the data to a cardiologist, that could save lives. But if U.S. laws change, health insurers bring back medical underwriting, and data streams from an AI heart monitor app keep patients from getting health insurance, that could cost those patients their lives, Bauer-Kahan said.
Dr. Ziad Obermeyer, a University of California, Berkeley, public health researcher, made the case for focusing on regulation of downstream uses of patient data rather than on data collection.
"Almost any tool can be used for a variety of different purposes," Obermeyer said. "The question that I think you have to struggle with is, 'What is the right point at which to regulate the tool?'"
Simply enforcing existing bans on medical underwriting and on nefarious uses of AI health data tools may work better than trying to limit AI data collection, he said.
"It seems like regulating those downstream uses is a lot more targeted and precise than trying to regulate upstream," Obermeyer added. Regulating health AI systems' upstream data collection "is going to prevent people from doing the lifesaving parts of the tool."