Op-Ed: Most Patients Want to Know When Artificial Intelligence Influences Their Care (Citing Platt et al., 2023, JAMA Network Open)
Imagine walking into
a modern hospital, where bustling corridors hum with activity and a
sense of urgency fills the air. Radiology technologists move between patients,
doctors confer about treatment plans, and a radiologist analyzes the CT scan of
an emergency department patient. What might not be immediately apparent is that the
radiologist—like many other healthcare professionals—is assisted by invisible
technology, often powered by cutting-edge artificial intelligence (AI). This
technology can rapidly sift through troves of medical images, flagging areas of
concern before the radiologist has had time to blink. It sounds like something
from the future, but AI has become integral in today's healthcare. From routine
imaging analysis to personalized treatment plans, AI tools have permeated
numerous clinical specialties, particularly radiology. Nevertheless, as the
technology becomes more sophisticated, essential questions about patient
autonomy, informed consent, and the ethics of disclosure have surfaced. A study
published in JAMA Network Open by Platt et al. (2023) examines these
exact considerations. Their research focuses on whether patients want to be
informed when AI is used in their care. Contrary to common assumptions that
complex science and technology might overwhelm patients, most respondents
affirmed their desire for transparency. More than two-thirds reported it was
“very true” that they wanted explicit notification about AI’s involvement in
their treatments, while fewer than 5% expressed disinterest. Through a narrative
exploration of these findings, this op-ed argues that transparency about AI use
in clinical settings—particularly in radiology—is crucial for preserving trust,
upholding ethical standards, and ensuring patients feel respected and empowered
in their healthcare journeys.
In many ways,
radiology stands at the front lines of the medical AI revolution. Diagnostic
imaging such as MRIs, CT scans, and mammograms often involves massive amounts
of complex visual data. By harnessing deep learning algorithms, radiologists
can quickly identify subtle markers of disease that may evade even the most
well-trained specialists. For instance, AI can detect early-stage lung nodules
or suspicious calcifications in breast tissue, accelerating the detection of
potentially life-threatening conditions and allowing earlier interventions. Yet,
while AI-driven tools can undoubtedly improve diagnostic accuracy and
efficiency, they also raise concerns about transparency and patient knowledge. Radiology
reports may list the final interpretations, providing limited insight into
whether a human clinician, a machine, or some combination of the two made the
assessment. When so many adult patients in the United States desire to be
notified about AI involvement, the typical radiology workflow—where AI might be
used behind the scenes—signals a pressing need for systemic disclosure
policies.
Conducted in the
summer of 2023, the survey by Platt et al. (2023) aimed to gauge real-world
attitudes toward AI. Respondents were not simply asked dry questions; they were
first shown an informative video outlining the expanding role of AI in
healthcare. The goal was to provide a baseline level of understanding so that
people’s responses would come from a place of informed reflection. The results
spoke volumes. Approximately 67% of participants stated unequivocally that it
was “very true” that they wanted to be informed. What may come as a surprise is
that only a fraction—less than 5%—voiced reluctance or indifference about such
disclosure. This finding counters the narrative that patients might feel
overwhelmed or apathetic about the technicalities of their care. Instead, it
reinforces the principle that autonomy and respect for individuals’ right to
know are cornerstones of ethical healthcare. A secondary layer to the findings
concerned demographic distinctions. Women were more likely than men to wish for
notification, and White participants were more likely to express interest in AI
disclosures than Black respondents. While the study did not investigate the
reasons behind these differences, they could reflect social, cultural, or
historical factors related to technology trust and experiences with the
healthcare system. Regardless of the causes, these demographic nuances
underscore the need for health systems to adopt inclusive, culturally sensitive
strategies to inform all patients about the role of AI in their care.
The concept of
informed consent transcends procedural checklists; it is about honoring patient
autonomy. When patients understand what interventions they are receiving, they
can more freely decide whether they are comfortable with those methods.
Traditionally, informed consent has involved disclosing information about
surgical or medicinal interventions, including potential risks and benefits. However,
the introduction of AI complicates matters. These algorithms can be viewed as
“black boxes,” with complex layers of computation that are not immediately
interpretable even to the clinicians who employ them. When a system is so
opaque that even experts cannot fully explain how it reached a conclusion, conveying
meaningful information to patients becomes challenging. Nevertheless, as
Platt et al. (2023) argue, notification is necessary for ethical AI
implementation. It affirms a “right to know,” ensuring that no patient is
unwittingly part of a process where algorithms influence decisions about
diagnoses or treatments without their awareness. In radiology, where AI is
likely to interpret the images before a radiologist reviews them, this
principle should theoretically be as standard as an X-ray technologist informing
a patient about the steps involved in an imaging procedure.
A key question is how
and when health systems should alert patients to AI usage. While the findings
from Platt et al. (2023) make a compelling case for disclosure, they do not
specify how that disclosure should happen. Some experts suggest that a
succinct, standardized disclosure could suffice in scenarios where AI is widely
accepted, such as automated image analysis for standard chest X-rays. This
might involve a simple statement in the patient’s electronic portal indicating
that AI analysis aids in diagnosing certain conditions. In contrast, more
elaborate explanations and consent procedures may be warranted when AI
significantly impacts the care plan—perhaps by helping a physician decide
whether to proceed with a risky surgery. Furthermore, these disclosures must be
carried out in ways that are culturally and linguistically accessible. For
instance, African American communities with historical reasons for distrusting
medical institutions might need a more comprehensive conversation about AI and
its uses. Similarly, non-English speakers or individuals with lower digital
literacy might need multimedia tools or face-to-face discussions rather than
lengthy written materials. Tailoring these messages not only addresses ethical
obligations but also helps to foster trust and engagement.

Although the United
States has yet to adopt comprehensive, unified regulations governing the use
and disclosure of AI in healthcare, frameworks from other regions might offer
guidance. European law, for example, includes articles within the General Data
Protection Regulation (GDPR) that stress transparency and the right to an
explanation regarding automated decision-making. While these regulations do not
strictly apply to all U.S. clinical settings, they underscore the global trend
toward holding organizations accountable for disclosing how advanced
technologies are used. In the American context, the Food and Drug
Administration (FDA) has taken steps to evaluate AI and machine-learning-based
medical devices. However, a legislative gap remains concerning the mandatory
disclosure of AI use to patients. Initiatives from professional bodies, such as
the American College of Radiology, could help bridge this gap by proactively
establishing best practices that hospitals and imaging centers can adopt.

Patient consent for AI use has become a hot topic in radiology. It is estimated that up
to half of imaging facilities already incorporate AI into their workflows. As the
number of FDA-approved AI algorithms approaches 1,000—around 70% of which
revolve around imaging—the impetus to address notification is becoming more
urgent. The survey by Platt et al. (2023) underscores the public's strong
readiness to be informed. Indeed, people want the conversation about AI, and
they deserve it. By prioritizing transparency, healthcare institutions can
nurture a culture of trust and collaboration. Technology may power the
analysis, but the ultimate decisions remain in the clinician-patient
relationship. Patients who feel respected and informed are more likely to
engage positively with their care, adhere to treatment recommendations, and
participate in long-term follow-up. Conversely, secrecy or opacity around AI usage
can contribute to distrust, especially among populations who might already
harbor skepticism toward the healthcare system.
The healthcare
sector stands at a crossroads as AI continues to evolve rapidly. On the one
hand, the potential for improved diagnostic accuracy, more personalized
treatment plans, and quicker clinical workflows is profoundly transformative. On
the other hand, the path to realizing this potential hinges on whether
clinicians and policymakers can maintain an unwavering commitment to
patient-centered ethics. The study by Platt et al. (2023) illustrates the
desires of a patient population that, by and large, wants to be included in
decisions affecting their health. AI might be sophisticated, but transparency
does not have to be. Ethical guidelines in this new era could be as simple and
direct as letting people know that an algorithm contributed to their diagnostic
assessment, explaining its role in plain language, and ensuring they have the
chance to ask questions or seek second opinions. If healthcare organizations
heed these findings, patient trust can be preserved and strengthened. By
normalizing notification and weaving it seamlessly into patient care,
hospitals, clinics, and imaging centers can demonstrate that AI is not a
secretive, top-down imposition but a modern, collaborative tool designed to
enhance human expertise. AI may power the future of medicine, but ensuring that
patients remain at the center of every decision will keep healthcare humane,
ethically grounded, and worthy of public trust.
References
Platt, J. E., et al. (2023). Attitudes toward mandatory AI notifications among patients in a large health system. JAMA Network Open, 6(9), e2332834. https://pubmed.ncbi.nlm.nih.gov/39661391/