United States Senate
WASHINGTON, DC 20510

December 3, 2019

The Honorable Seema Verma
Administrator
The Centers for Medicare & Medicaid Services
Department of Health & Human Services
Room 445-G, Hubert H. Humphrey Building
200 Independence Ave., S.W.
Washington, DC 20201

Dear Administrator Verma:

We write today to request information regarding any actions that the Centers for Medicare & Medicaid Services (CMS) is taking or plans to take to assess the potential for algorithms used throughout the health care system to perpetuate biases.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence (AI), to organize and optimize the complex choices they need to make on a daily basis. CMS and commercial health insurers have begun to explore ways to incorporate algorithms that automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse.

CMS already employs algorithms in some programs, and has indicated plans to expand its use of these emerging technologies. For example, on March 27, 2019, the Center for Medicare & Medicaid Innovation (CMMI) announced the Artificial Intelligence Health Outcomes Challenge. The goal of this challenge is to provide support for innovators to test how AI tools could be used to predict health care utilization and adverse events and inform innovative payment and service delivery models.¹ Additionally, on October 21, 2019, CMS published a request for information (RFI) to gather input on how the agency could use technology, such as AI, to conduct program integrity activities more efficiently.²

In using algorithms, organizations often attempt to remove human flaws and biases from the process.
Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

Health care systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in one algorithm widely used in health systems throughout the country.³ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm.

¹ "CMS Artificial Intelligence Health Outcomes Challenge," Centers for Medicare and Medicaid Services, 27 March 2019, https://www.cms.gov/newsroom/fact-sheets/cms-artificial-intelligence-health-outcomes-challenge.
² "Center for Program Integrity Request for Information on Using Advanced Technology in Program Integrity," Centers for Medicare and Medicaid Services, 21 October 2019, https://www.cms.gov/About-CMS/Components/CPI/Downloads/Center-for-Program-Integrity-Advanced-Technology-RFI.pdf.
³ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly taken in the context of other biases, disparities, and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.⁴ It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans. Black and American Indian/Alaska Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.⁵ Technology holds great promise in addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.

We are pleased to learn that the organization featured in the Science study appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and the Administration must make a concerted effort to find out the degree to which this issue is widespread, and move quickly to make lasting changes. As CMS continues to consider ways to incorporate algorithms into its systems, we are seeking to better understand what CMS is doing to prevent and address bias in algorithms.
As part of this effort, we request answers to the following questions no later than December 31, 2019:

1. Has CMS considered the potential impact of algorithmic biases in the context of federal health care programs?
   a. If so, what actions has CMS taken, or is planning to take, to understand and address the impact of such biases on health disparities?
2. Does CMS require any information from organizations, including insurers and hospitals, regarding any algorithms they use to target health care services, improve the quality of care, or reduce costs? If so:
   a. What information is collected, and does CMS attempt to determine whether there is a potential for algorithmic bias that would negatively affect certain patients relative to others?
   b. Are there requirements for auditing placed on health care systems that use algorithms to automate decision-making?
   c. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does CMS require use of any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?
3. Has CMS engaged with other federal offices or agencies, such as the Office of Equal Opportunity and Civil Rights, regarding algorithmic biases?

⁴ Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.long.
⁵ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.
4. Has CMS engaged with external stakeholders regarding the potential for algorithmic biases and how to prevent them?
   a. If so, do any of the stakeholders represent populations that may be more likely to be adversely impacted by algorithmic bias, or outside validators that can properly vet algorithms for discriminatory impact?
5. In selecting which participants advanced to Stage 1 of the Artificial Intelligence Health Outcomes Challenge, is CMMI prioritizing participants that address algorithmic bias?
   a. Has CMMI assessed whether the 25 current participants in this challenge have taken algorithmic bias into account as they develop tools for health care systems?
   b. If so, how many have done so?
6. The RFI on Using Advanced Technology in Program Integrity does not reference how AI medical review tool vendors might be able to address biases in their systems, or if health care providers and suppliers consider algorithmic bias when deciding what technological tools to use. Will CMS consider these questions when assessing the input received or determining which solutions to implement?
7. As CMS continues to develop the use of algorithms to assess health care data and automate decisions, what is the agency doing to ensure that algorithmic bias is taken into account?

As algorithms play an increasingly prevalent role in the health care system, we urge CMS to consider the risk for algorithmic bias and its potential impact on health disparities and outcomes. Thank you for your attention to this matter. Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,

Cory A. Booker
United States Senator

Ron Wyden
United States Senator