
In response to the escalating risk of AI-manipulated images, videos, and audio that impersonate physicians, the American Medical Association has announced a comprehensive new policy framework to establish clear, enforceable protections for physicians against unauthorized AI-generated "deepfakes."

According to the AMA, deepfakes have been used maliciously to impersonate doctors, manipulate the public, and endorse unproven treatments. The association says these impersonation scams pose "grave threats" to individual patients and the broader health care system alike, eroding the patient-physician relationship, undermining confidence in evidence-based care, and placing the public at risk of deception and harm.

"When bad actors exploit a doctor's identity, they undermine patient trust and can steer people toward harmful, unproven care," said AMA CEO John Whyte. "We need strong action by federal and state lawmakers to protect physicians' identities, ensure transparency, and stop this fraud. Safeguarding professional integrity is essential to preserving trust and delivering high-quality care in a rapidly evolving digital landscape."

Created by the AMA Center for Digital Health and AI, the framework is designed to modernize physician identity protections and close legal gaps to uphold patient safety, professional integrity, and public trust.

It is built on seven key policy principles:

  1. Physician identity is a protected right: A physician's name, image, likeness, voice, and digital replicas are protected. Health institutions, vendors, and third-party apps must explicitly recognize that these are not transferable assets and may only be used with affirmative, informed consent.
  2. Prohibition on deceptive medical impersonation: Without clear, informed consent, any AI-generated or altered content impersonating a physician — especially if it falsely conveys endorsement or authorship likely to mislead patients — must be prohibited and treated as deception.
  3. Informed, opt-in and revocable consent: The use of physician identity in AI-created or manipulated content requires separate, explicit, opt-in consent; it is never implied or bundled in general agreements. Consent must specify the use, audience, purpose, and duration, and it must be revocable if circumstances change.
  4. Mandatory labeling and transparency: All AI-generated or altered depictions of physicians must be clearly labeled in plain language and include a digital watermark, and patients must be proactively notified before any interaction with "synthetic professionals."
  5. Shared responsibility for preventing impersonation: Platforms, hospitals, and AI vendors share responsibility for implementing safeguards such as rapid takedown mechanisms, conspicuous labeling, and prohibitions on AI use of health professional titles.
  6. Enforcement and practical remedies: Physicians must have access to robust procedures for documenting misuse, triggering takedowns, and seeking remedies. Institutions, meanwhile, must preserve audit logs and cooperate with investigations.
  7. Minimizing administrative burden: Identity protection should be the default, with no undue administrative burden placed on physicians. Consent processes must be standardized, reusable, and institutionally supported.

"The new AMA framework will guide how the organization works with government officials and industry partners to stop AI-generated deepfakes of physicians," the association said in a statement. "The AMA is ready to collaborate with lawmakers, regulators, and industry to protect patients and doctors from these risks."

© Arc, All Rights Reserved.