
The American Journal of Bioethics

Journal homepage: https://www.tandfonline.com/loi/uajb20

Informed Consent for Clinician-AI Collaboration and Patient Data Sharing: Substantive, Illusory, or Both

Charles E. Binkley & Bryan C. Pilkington

To cite this article: Charles E. Binkley & Bryan C. Pilkington (2023) Informed Consent for Clinician-AI Collaboration and Patient Data Sharing: Substantive, Illusory, or Both, The American Journal of Bioethics, 23:10, 83-85, DOI: 10.1080/15265161.2023.2250289

To link to this article: https://doi.org/10.1080/15265161.2023.2250289

Published online: 09 Oct 2023.

THE AMERICAN JOURNAL OF BIOETHICS
2023, VOL. 23, NO. 10, 83–85
https://doi.org/10.1080/15265161.2023.2250289

OPEN PEER COMMENTARIES

Informed Consent for Clinician-AI Collaboration and Patient Data Sharing: Substantive, Illusory, or Both

Charles E. Binkley (a,b) and Bryan C. Pilkington (b,c)

(a) Hackensack Meridian Health; (b) Hackensack Meridian School of Medicine; (c) Seton Hall University

CONTACT Charles E. Binkley charles.binkley@hmhn.org Hackensack Meridian Health, Edison, NJ, USA.
© 2023 Taylor & Francis Group, LLC

In the piece, “What Should ChatGPT Mean for Bioethics?” Professor Cohen proposes that the introduction of AI generally, and generative AI specifically, requires that patients be informed of, and consent to, both their clinician collaborating with AI systems to make clinical decisions and their data being used to train these systems (Cohen 2023). While the claims are valid per se, when applied to AI systems for clinical decision making, the informational and the volitional components of informed consent deserve separate consideration. First, this piece will consider why information about clinician-AI collaborations and data sharing should be communicated to patients. Next, it will consider both the eventuality that consent for data sharing and physician-AI collaboration will become illusory and some arguments that might justify the omission of consent. Finally, it will propose potential means of communicating information about AI in a way that is understandable across diverse patient populations.

INFORMATIONAL CONSIDERATIONS FOR AI

Clinicians who collaborate with AI systems share some of their own decisional agency with those systems and thus introduce a third entity into the clinician-patient relationship (Binkley and Pilkington 2023). Several reasons have been proposed for why patients should be informed when their clinician introduces an AI system into their relationship. Some arguments are rooted in the idea that patients expect exclusivity in their relationships with clinicians, so if a third party is introduced, they should be informed (Cohen 2019). Other arguments suggest that if most patients would refuse to allow AI systems to participate in their care, then patients should be informed, lest patient autonomy be violated (Richardson et al. 2021).

One argument in favor of informing patients when their health data are used to train AI systems is that their medical record is their digital phenotype: the account of how their genetic code is expressed over time. As such, it can be thought of as part of their person, a complement to their own DNA. Patients’ sharing of their health information is thus a kind of donation, and it is generally understood that donors should be informed (Beskow 2016). As with any donation, patients should be aware of the benefits as well as the risks of donating their health data, even if the data are de-identified (Mittelstadt and Floridi 2016). The ease with which personal information can be reidentified, or with which individuals can remain deidentified yet be profiled across platforms, highlights the potential for violating patient privacy and confidentiality. Such a violation would not only be a dereliction of the clinician’s ethical duty; it could also harm patients through loss of insurance or employment benefits should their personal health information become readily available.
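To make the reidentification concern concrete, consider a minimal sketch of the well-known linkage attack, written in Python with entirely fabricated records (the names, ZIP codes, and diagnoses are invented for illustration and do not refer to any real dataset). Records stripped of names can be re-linked to identities by joining surviving quasi-identifiers against a public, identified registry.

```python
# Illustrative linkage attack: all records below are fabricated.
# "De-identified" clinical records with names removed but
# quasi-identifiers (ZIP code, birth date, sex) intact.
deidentified_records = [
    {"zip": "07030", "birth_date": "1961-04-12", "sex": "F", "dx": "type 2 diabetes"},
    {"zip": "07030", "birth_date": "1984-09-03", "sex": "M", "dx": "depression"},
]

# A public, identified dataset (e.g., a voter registry).
public_registry = [
    {"name": "Jane Roe", "zip": "07030", "birth_date": "1961-04-12", "sex": "F"},
    {"name": "John Doe", "zip": "07030", "birth_date": "1984-09-03", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(record, registry):
    """Return registry entries matching the record on every quasi-identifier."""
    key = tuple(record[q] for q in QUASI_IDENTIFIERS)
    return [p for p in registry
            if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]

for rec in deidentified_records:
    for person in link(rec, public_registry):
        # A unique match re-attaches a name to a "de-identified" diagnosis.
        print(f"{person['name']} -> {rec['dx']}")
```

What makes such joins feasible in practice is that a handful of quasi-identifiers is often unique within large populations, which is why stripping direct identifiers alone offers weak protection.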
More important than the aforementioned considerations is that informing patients about their clinician’s collaboration with an AI system, and about the sharing of their personal health information, engenders trust. Clinician transparency and truthfulness are essential ways through which patients come to trust their clinicians (Kaldjian and Pilkington 2021). Nondisclosure of the involvement of AI, or of the sharing of health information, may lead patients to wonder what other information has been withheld from them. This is not simply a consideration for clinician practice, but for institutions as well. In some cases, clinicians may not have control over their collaborations with AI systems. For instance, health systems may choose to push AI-generated predictions out to clinicians as “Best Practice Alerts” within the electronic medical record. Clinicians must factor that information into their decision making, whether or not it was requested, or even desired. It is likewise unlikely that individual clinicians will play a role in deciding how patient data are used to train AI systems. Nevertheless, it is with their clinician that patients have a relationship, and it is the clinician who is vulnerable to mistrust if patients are not informed.

SEPARATING INFORMATION FROM VOLITION IN INFORMED CONSENT

What would justify informing patients about clinician-AI collaborations and the use of their personal health information to train AI systems, without allowing for consent or refusal? In the very near future, for many clinical tasks, collaborations between clinicians and AI systems will be proven more beneficial and less harmful than clinicians acting alone. When this happens, clinician-AI collaborations will likely be considered the standard of care for those tasks. Offering patients health care alternatives that do not include clinician-AI collaborations may then not only fall below the standard of care but also prove harmful, especially if clinical skills have deteriorated as a result of AI collaborations. It may not be operationally feasible to create workflows that provide equivalent care for patients who refuse to allow their clinicians to collaborate with AI systems. Additionally, clinicians may be concerned about professional liability if patients are able to refuse such collaborations. Consent and refusal may thus become illusory if reasonable and equivalent alternatives to clinical AI collaborations are not available.

Although informing patients about clinician-AI collaborations and informing them about the use of health information for training AI systems can be considered separately, patient data are necessary to train AI systems to perform the tasks for which they are programmed. Thus, there is no AI system without patient data to train it. This creates a necessary quid pro quo. While it is possible to segregate AI systems that use patient data exclusively to make predictions from systems that use data both for predicting and for training, it is far more difficult, if not impossible, to sort patients within a health system into those who consent to have their data used for training AI systems and those who refuse.

Even if it were technically or operationally possible to allow patients to refuse to share their personal health information for training AI clinical systems, a public goods argument could be made, similar to the one made for participation in medical research (Schaefer, Emanuel, and Wertheimer 2009). Further, allowing patients to refuse to share their health data runs the risk that, if discrete groups of patients refuse, the training data will be unrepresentative of the population at large. This could lead to AI models making more accurate predictions for some groups than for others, thus perpetuating rather than correcting the risk of AI bias. The toy simulation below illustrates the mechanism.
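The following sketch, in Python, shows how differential opt-out skews a training set. The two groups, their sizes, and the 5% and 40% opt-out rates are invented purely for illustration; the point is only that unequal refusal rates produce unequal representation, which is the precondition for unequal model accuracy across groups.

```python
# Toy simulation of differential opt-out; all figures are assumed.
import random

random.seed(0)

# Two equal-sized groups in the underlying patient population.
population = [{"group": "A"} for _ in range(5000)] + \
             [{"group": "B"} for _ in range(5000)]

# Assumed opt-out rates: group B refuses data sharing far more often.
OPT_OUT_RATE = {"A": 0.05, "B": 0.40}

# Patients who did not opt out form the training set.
training_set = [p for p in population
                if random.random() > OPT_OUT_RATE[p["group"]]]

for g in ("A", "B"):
    share = sum(p["group"] == g for p in training_set) / len(training_set)
    print(f"group {g}: {share:.1%} of training data (50% of the population)")
```

Under these assumed rates, group B falls from half of the population to well under 40% of the training data, so a model fit to that data will tend to be better calibrated for group A.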
The proposed justification for informing patients about AI-clinician collaborations and patient data sharing without consent is not intended to be without qualification. AI systems collaborating with clinicians may be demonstrably superior to clinicians acting alone before a system is deployed or early in its clinical use. However, since almost all of these systems are continuously learning and will undergo regular updates, it is essential that the increased benefit and/or decreased harm of the collaboration be reestablished in the local clinical ecosystem and again whenever the system is updated. In addition, the risk of patient harm that could result from the use of personal health data to train AI systems must be mitigated in ways other than depersonalizing the data as prescribed by current HIPAA standards. Those regulations were intended for a different era of personal health information and seem woefully unsuited to the current digital use of patient information. Regulatory bodies must assume that patient data can be reidentified and that patients can be profiled across digital platforms. Other safeguards of privacy and confidentiality will be required to protect patients from harm.

THE WELL INFORMED INFORMER

Given that patients should be informed about both their clinician’s collaboration with an AI system and the donation of their personal health information, without the option of consenting or refusing, the question thus arises: who, or what, should provide the information, and how should it be provided? Generally, the clinician responsible for performing an intervention is also responsible for informing the patient about it, since that clinician would best know the patient and the details of the intervention being offered.
However, when it comes to AI systems and data donations, the clinician may not possess sufficient knowledge about the AI system, its function and validation, or the benefits and risks of data sharing to be the optimal informer. In fact, there is concern that clinicians informing patients about the involvement of AI systems in their care without sufficient foundational knowledge might overwhelm patients (Blumenthal-Barby 2023).

Perhaps the ideal informer is actually an AI-powered chatbot. These systems have already been proposed as ideal for patient education, with early data supporting this notion (Lee et al. 2023). Prior to deployment, such systems would need to be trained and validated across a spectrum of knowledge as well as patient demographics such as spoken language, educational level, and extent of informational preferences. Some patients will want more information, will have a greater foundation of knowledge, and will know more about the risks and benefits than others. It would be impossible to assemble a group of humans with the accessibility, availability, expertise, and affordability required to adequately inform patients about clinical AI systems. These systems are uniquely qualified to provide patients with substantive information without the illusion of consent.
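As a sketch of what such tailoring might look like, the Python fragment below composes instructions for a hypothetical, already-validated patient-education model; PatientProfile and build_disclosure_prompt are invented names, no real chatbot API is implied, and any deployed system would still require the training and validation described above.

```python
# Hypothetical sketch of demographic tailoring; not a real chatbot API.
from dataclasses import dataclass

@dataclass
class PatientProfile:
    language: str       # e.g., "es" for Spanish
    reading_level: int  # approximate school grade level
    detail: str         # "brief", "standard", or "exhaustive"

def build_disclosure_prompt(profile: PatientProfile, system_summary: str) -> str:
    """Compose instructions for a (hypothetical) validated patient-education model."""
    return (
        f"Explain, in language '{profile.language}', at roughly a "
        f"grade-{profile.reading_level} reading level and at '{profile.detail}' "
        f"length: (1) that the clinician will use the following AI system in "
        f"clinical decisions; (2) what the system does; and (3) the known benefits "
        f"and risks, including how the patient's health data may be used for "
        f"training. System summary: {system_summary}"
    )

print(build_disclosure_prompt(
    PatientProfile(language="es", reading_level=6, detail="brief"),
    "sepsis early-warning model embedded in the electronic record",
))
```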
DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

FUNDING

The author(s) reported there is no funding associated with the work featured in this article.

ORCID

Charles E. Binkley http://orcid.org/0000-0001-9290-9876
Bryan C. Pilkington http://orcid.org/0000-0001-9373-8300

REFERENCES

Beskow, L. M. 2016. Lessons from HeLa cells: The ethics and policy of biospecimens. Annual Review of Genomics and Human Genetics 17:395–417. doi:10.1146/annurev-genom-083115-022536.
Binkley, C. E., and B. Pilkington. 2023. The actionless agent: An account of human-CAI relationships. The American Journal of Bioethics 23 (5):25–7. doi:10.1080/15265161.2023.2191035.
Blumenthal-Barby, J. 2023. An AI bill of rights: Implications for health care AI and machine learning—A bioethics lens. The American Journal of Bioethics 23 (1):4–6. doi:10.1080/15265161.2022.2135875.
Cohen, I. G. 2019. Informed consent and medical artificial intelligence: What to tell the patient? Georgetown Law Journal 108:1425.
Cohen, I. G. 2023. What should ChatGPT mean for bioethics? The American Journal of Bioethics 23 (10):8–16. doi:10.1080/15265161.2023.2233357.
Kaldjian, L. C., and B. C. Pilkington. 2021. Why truthfulness is the first of the virtues. The American Journal of Bioethics 21 (5):36–8. doi:10.1080/15265161.2021.1906991.
Lee, T.-C., K. Staller, V. Botoman, M. P. Pathipati, S. Varma, and B. Kuo. 2023. ChatGPT answers common patient questions about colonoscopy. Gastroenterology 165 (2):509–11.e7. doi:10.1053/j.gastro.2023.04.033.
Mittelstadt, B. D., and L. Floridi. 2016. The ethics of big data: Current and foreseeable issues in biomedical contexts. In The ethics of biomedical big data, eds. B. D. Mittelstadt and L. Floridi, 445–80. Cham, Switzerland: Springer International Publishing.
Richardson, J. P., C. Smith, S. Curtis, S. Watson, X. Zhu, B. Barry, and R. R. Sharp. 2021. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digital Medicine 4 (1):140. doi:10.1038/s41746-021-00509-1.
Schaefer, G. O., E. J. Emanuel, and A. Wertheimer. 2009. The obligation to participate in biomedical research. JAMA 302 (1):67–72. doi:10.1001/jama.2009.931.
