To cite this article: Charles E. Binkley & Bryan C. Pilkington (2023) Informed Consent for
Clinician-AI Collaboration and Patient Data Sharing: Substantive, Illusory, or Both, The
American Journal of Bioethics, 23:10, 83-85, DOI: 10.1080/15265161.2023.2250289
CONTACT Charles E. Binkley charles.binkley@hmhn.org Hackensack Meridian Health, Edison, NJ, USA.
© 2023 Taylor & Francis Group, LLC

In the piece, “What Should ChatGPT Mean for Bioethics?” Professor Cohen proposes that the introduction of AI generally, and generative AI specifically, requires that patients be informed of, and consent to, both their clinician collaborating with AI systems to make clinical decisions, and their data being used to train these systems (Cohen 2023). While the claims are valid per se, when applied to AI systems for clinical decision making, the informational and the volitional components of informed consent deserve separate consideration. First, this piece will consider why information about clinician-AI collaborations and data sharing should be communicated to patients. Next, it will consider both the eventuality that consent for data sharing and physician-AI collaboration will become illusory, and also some arguments that might justify the omission of consent. Finally, it will propose potential means of communicating information about AI in a way that is understandable across diverse patient populations.

INFORMATIONAL CONSIDERATIONS FOR AI

Clinicians who collaborate with AI systems share with the systems some of their own decisional agency and thus introduce a third entity into the clinician-patient relationship (Binkley and Pilkington 2023). Several reasons have been proposed for why patients should be informed when their clinician introduces an AI system into their relationship. Some arguments are rooted in the idea that patients have the expectation of exclusivity in their relationships with clinicians, and if a third party is introduced, they should be informed (Cohen 2019). Other arguments suggest that if most patients would refuse to allow AI systems to participate in their care, then patients should be informed, lest patient autonomy be violated (Richardson et al. 2021).

One argument in favor of informing patients if their health data is used to train AI systems is that their medical record is their digital phenotype, the account of how their genetic code is expressed over time. As such, it can be thought of as part of their person, a complement to their own DNA. Thus, patients’ sharing of their health information is a kind of donation, and it is generally understood that donors should be informed (Beskow 2016). As with any donation, patients should be aware of the benefits as well as the risks of donating their health data, even if the data are de-identified (Mittelstadt and Floridi 2016). The ease with which personal information can be reidentified, or with which individuals can remain deidentified but nonetheless be profiled across platforms, highlights the potential for violating patient privacy and confidentiality. Not only would this be a dereliction of the clinician’s ethical duty, but it could also result in harm to patients in the form of lost insurance or employment benefits should their personal health information be readily available.

More important than the aforementioned considerations is that informing patients about their clinician collaborating with an AI system, and about the sharing of their personal health information, engenders trust. Clinician transparency and truthfulness are essential ways through which patients come to trust their clinicians (Kaldjian and Pilkington 2021). Nondisclosure about the involvement of AI or the sharing of health information may lead patients to wonder what other information was withheld from them. This is not simply a consideration for clinician practice, but for institutions as well. In some cases, clinicians may not have control over their collaborations with AI systems. For instance, health systems may choose to push out AI-generated predictions to clinicians as “Best Practice Alerts” within the electronic medical record. Clinicians must factor the information into their decision making, whether or not the information was requested, or even desired. It is unlikely that individual clinicians will play a role in deciding how patient data is used to train AI systems. Nevertheless, it is with their clinician that patients have a relationship, and it is the clinician who is vulnerable to mistrust if patients are not informed.

SEPARATING INFORMATION FROM VOLITION IN INFORMED CONSENT

What would justify informing patients about clinician-AI collaborations and the use of their personal health information to train AI systems, without allowing for consent or refusal? In the very near future, for many clinical tasks, collaborations between clinicians and AI systems will be proven to be more beneficial and less harmful than clinicians acting alone. When this happens, clinician-AI collaborations will likely be considered the standard of care for those tasks. Offering patients health care alternatives which do not include clinician-AI collaborations may not only be below the standard of care but also potentially harmful, especially if clinical skills have deteriorated as a result of AI collaborations. It may not be operationally feasible to create workflows which could provide equivalent care for patients who refuse to allow their clinicians to collaborate with AI systems. Additionally, clinicians may be concerned about professional liability if patients are able to refuse to allow them to collaborate with AI systems. Consent and refusal may thus become illusory if reasonable and equivalent alternatives to clinical AI collaborations are not available.

Although informing patients about clinician-AI collaborations and the use of health information for training AI systems can be considered separately, patient data is necessary in order to train AI systems to perform the tasks for which they are programmed. Thus, there is no AI system without patient data to train it. This creates a necessary quid pro quo. While it is possible to segregate AI systems which use patient data exclusively to make predictions from those systems that use data both for predicting and for training, it is far more difficult, if not impossible, to sort patients within a health system into those who consent to have their data used for training AI systems and those who refuse.

Even if it were technically or operationally possible to allow patients to refuse to share their personal health information in order to train AI clinical systems, a public goods argument could be made, similar to that which has been made for participation in medical research (Schaefer, Emanuel, and Wertheimer 2009). Further, allowing patients to refuse to share their health data runs the risk that, if discrete groups of patients refuse, training data would be unrepresentative of the population at large. This could lead to AI models making more accurate predictions for some groups compared to others, thus perpetuating rather than correcting the risk of AI bias.

The proposed justification for informing patients about AI-clinician collaborations and patient data sharing without consent is not intended to be without qualification. AI systems collaborating with clinicians may be demonstrably superior to clinicians acting alone before the system is deployed or early in its clinical use. However, since almost all of these systems are continuously learning and will undergo regular updates, it is essential that the increased benefit and/or decreased harm of the collaboration be reestablished in the local clinical ecosystem and whenever the system is updated. In addition, the risk of patient harm that could result from the use of their personal health data to train AI systems must be mitigated in ways other than depersonalizing the data as prescribed by current HIPAA standards. These regulations were intended for a different era of personal health information and seem woefully unsuited to the current digital use of patient information. Regulatory bodies must assume that patient data can be reidentified and that patients can be profiled across digital platforms. Other safeguards of privacy and confidentiality will be required in order to protect patients from harm.

THE WELL INFORMED INFORMER

Given that patients should be informed about both their clinician’s collaboration with an AI system and the donation of their personal health information, without the option of consenting or refusing, the issue thus arises: who, or what, should provide the information, and how should it be provided? Generally, the clinician responsible for performing an intervention is also responsible for informing the patient about the intervention, since that clinician would best know the patient and the details of the intervention being