United States Senate
WASHINGTON, DC 20510

December 3, 2019

Ms. Karen S. Lynch
Executive Vice President, CVS Health and President, Aetna Business Unit
CVS Health
One CVS Drive
Woonsocket, RI 02895

Dear Ms. Lynch:

We write today to request information regarding any actions CVS Health is taking or plans to take to address bias in algorithms used by your company, Aetna.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use these automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence, to organize and optimize the complex choices they need to make on a daily basis. In using algorithms, organizations often make an attempt to remove human flaws and biases from the process. Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

In health care, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used by a prominent national company.¹ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm. According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly when taken in the larger context of other biases, disparities, and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.² It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans. Black and American Indian/Alaskan Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.³ Technology holds great promise in addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.
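[Editorial aside: the cost-as-proxy failure the letter describes can be reproduced in a toy simulation. The Python sketch below is purely illustrative and entirely hypothetical; it is not the algorithm examined in the Science study, and every name and number in it is invented. It shows how ranking patients by observed spending can under-flag a group whose costs are suppressed by barriers to care, even when both groups have identical underlying need.]

```python
# Purely illustrative simulation (hypothetical, not the audited algorithm):
# when observed spending is used as a proxy for health care need, a group
# whose access barriers suppress its costs is under-flagged for extra care
# even though its underlying need is identical.
import random

random.seed(0)

def simulate_patient(group):
    """Return (group, underlying_need, observed_cost) for one synthetic patient."""
    need = random.gauss(50, 15)                 # true health care need
    barrier = 0.7 if group == "b" else 1.0      # group "b" faces access barriers,
                                                # so equal need yields lower spending
    cost = need * barrier + random.gauss(0, 5)  # observed cost: the biased proxy
    return group, need, cost

patients = [simulate_patient(g) for g in ("a", "b") * 5000]

# "Risk score" = observed cost; flag the top 20 percent for care management.
threshold = sorted(p[2] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p[2] >= threshold]

for g in ("a", "b"):
    mean_need = sum(p[1] for p in patients if p[0] == g) / (len(patients) / 2)
    share = sum(p[0] == g for p in flagged) / len(flagged)
    print(f"group {g}: mean need {mean_need:.1f}, share of flagged patients {share:.0%}")
# Expected outcome: both groups show roughly the same mean need, yet group "b"
# is a small minority of flagged patients, because the proxy encodes the barrier.
```

Relabeling the model with a direct measure of need rather than cost largely closes this gap, which is in line with the remedy the study's authors discuss.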
We are pleased to learn that the organization featured in the Science study appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and companies like yours that play a role in the health and well-being of Americans must make a concerted effort to find out the degree to which this issue is widespread and move quickly to make lasting changes. To that end, we request answers to the following questions no later than December 31, 2019:

1. How many algorithms does Aetna utilize in an effort to automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse?
   a. What specific decisions are these algorithms making?
   b. How many people do these algorithms impact?

2. What is Aetna doing to ensure that these algorithms are free from bias?
   a. How often are audits done to detect bias?
   b. Do you compare performance before and after algorithm implementation to ensure the algorithm actually works and does not maintain or increase bias?
   c. Do any of the algorithms that Aetna is currently utilizing include a bias that would negatively affect certain patients relative to others?
   d. Will Aetna commit to immediately halting the use of these algorithms, where possible, until such bias is eliminated? In cases where the algorithms cannot easily be halted, will Aetna commit to providing adequate resources to investigate and fix the problem?

3. What technologies (such as machine learning or advanced analytics) do these algorithms use? For algorithms using past data to train, what is the vetting process to reduce embedded historical and systemic biases in the data?

4. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does Aetna utilize any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?

5. Are the teams that are developing these algorithms diverse?

6. When Aetna designs or implements these algorithms, does it consult with populations that may be more likely to be adversely impacted by bias or with outside validators that can properly vet them for discriminatory impact?

As algorithms play an increasingly prevalent role in the health care system, we urge Aetna to consider the risk for algorithmic bias and its potential impact on health disparities outcomes. Thank you for your attention to this matter. Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,
Cory A. Booker                               Ron Wyden
United States Senator                        United States Senator

¹ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
² Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.
³ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.


United States Senate
WASHINGTON, DC 20510

December 3, 2019

Mr. Scott P. Serota
President and Chief Executive Officer
Blue Cross Blue Shield
225 North Michigan Avenue
Chicago, IL 60601

Dear Mr. Serota:

We write today to request information regarding any actions Blue Cross Blue Shield is taking or plans to take to address bias in algorithms used by the company.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use these automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence, to organize and optimize the complex choices they need to make on a daily basis. In using algorithms, organizations often make an attempt to remove human flaws and biases from the process. Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

In health care, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used by a prominent national company.¹ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm. According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly when taken in the larger context of other biases, disparities, and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.² It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans. Black and American Indian/Alaskan Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.³ Technology holds great promise in addressing these issues and improving population health.
However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.

We are pleased to learn that the organization featured in the Science study appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and companies like yours that play a role in the health and well-being of Americans must make a concerted effort to find out the degree to which this issue is widespread and move quickly to make lasting changes. To that end, we request answers to the following questions no later than December 31, 2019:

1. How many algorithms does Blue Cross Blue Shield utilize in an effort to automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse?
   a. What specific decisions are these algorithms making?
   b. How many people do these algorithms impact?

2. What is Blue Cross Blue Shield doing to ensure that these algorithms are free from bias?
   a. How often are audits done to detect bias?
   b. Do you compare performance before and after algorithm implementation to ensure the algorithm actually works and does not maintain or increase bias?
   c. Do any of the algorithms that Blue Cross Blue Shield is currently utilizing include a bias that would negatively affect certain patients relative to others?
   d. Will Blue Cross Blue Shield commit to immediately halting the use of these algorithms, where possible, until such bias is eliminated? In cases where the algorithms cannot easily be halted, will Blue Cross Blue Shield commit to providing adequate resources to investigate and fix the problem?

3. What technologies (such as machine learning or advanced analytics) do these algorithms use? For algorithms using past data to train, what is the vetting process to reduce embedded historical and systemic biases in the data?

4. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does Blue Cross Blue Shield utilize any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?

5. Are the teams that are developing these algorithms diverse?

6. When Blue Cross Blue Shield designs or implements these algorithms, does it consult with populations that may be more likely to be adversely impacted by bias or with outside validators that can properly vet them for discriminatory impact?

As algorithms play an increasingly prevalent role in the health care system, we urge Blue Cross Blue Shield to consider the risk for algorithmic bias and its potential impact on health disparities outcomes. Thank you for your attention to this matter.
Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,

Cory A. Booker                               Ron Wyden
United States Senator                        United States Senator

¹ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
² Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.
³ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.


United States Senate
WASHINGTON, DC 20510

December 3, 2019

Mr. David Cordani
President & Chief Executive Officer
Cigna Corporation
900 Cottage Grove Road
Bloomfield, CT 06002

Dear Mr. Cordani:

We write today to request information regarding any actions Cigna is taking or plans to take to address bias in algorithms used by the company.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use these automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence, to organize and optimize the complex choices they need to make on a daily basis. In using algorithms, organizations often make an attempt to remove human flaws and biases from the process. Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

In health care, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used by a prominent national company.¹ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm. According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly when taken in the larger context of other biases, disparities, and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.² It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans.
Black and American Indian/Alaskan Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.³ Technology holds great promise in addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.

We are pleased to learn that the organization featured in the Science study appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and companies like yours that play a role in the health and well-being of Americans must make a concerted effort to find out the degree to which this issue is widespread and move quickly to make lasting changes. To that end, we request answers to the following questions no later than December 31, 2019:

1. How many algorithms does Cigna utilize in an effort to automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse?
   a. What specific decisions are these algorithms making?
   b. How many people do these algorithms impact?

2. What is Cigna doing to ensure that these algorithms are free from bias?
   a. How often are audits done to detect bias?
   b. Do you compare performance before and after algorithm implementation to ensure the algorithm actually works and does not maintain or increase bias?
   c. Do any of the algorithms that Cigna is currently utilizing include a bias that would negatively affect certain patients relative to others?
   d. Will Cigna commit to immediately halting the use of these algorithms, where possible, until such bias is eliminated? In cases where the algorithms cannot easily be halted, will Cigna commit to providing adequate resources to investigate and fix the problem?

3. What technologies (such as machine learning or advanced analytics) do these algorithms use? For algorithms using past data to train, what is the vetting process to reduce embedded historical and systemic biases in the data?

4. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does Cigna utilize any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?

5. Are the teams that are developing these algorithms diverse?

6. When Cigna designs or implements these algorithms, does it consult with populations that may be more likely to be adversely impacted by bias or with outside validators that can properly vet them for discriminatory impact?
As algorithms play an increasingly prevalent role in the health care system, we urge Cigna to consider the risk for algorithmic bias and its potential impact on health disparities outcomes. Thank you for your attention to this matter. Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,

Cory A. Booker                               Ron Wyden
United States Senator                        United States Senator

¹ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
² Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.
³ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.


United States Senate
WASHINGTON, DC 20510

December 3, 2019

Mr. Bruce D. Broussard
President and Chief Executive Officer
Humana
500 West Main Street
Louisville, KY 40202

Dear Mr. Broussard:

We write today to request information regarding any actions Humana is taking or plans to take to address bias in algorithms used by the company.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use these automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence, to organize and optimize the complex choices they need to make on a daily basis. In using algorithms, organizations often make an attempt to remove human flaws and biases from the process. Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

In health care, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used by a prominent national company.¹ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm. According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly when taken in the larger context of other biases, disparities, and inequities that plague our health care system.
For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.² It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans. Black and American Indian/Alaskan Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.³ Technology holds great promise in addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.

We are pleased to learn that the organization featured in the Science study appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and companies like yours that play a role in the health and well-being of Americans must make a concerted effort to find out the degree to which this issue is widespread and move quickly to make lasting changes. To that end, we request answers to the following questions no later than December 31, 2019:

1. How many algorithms does Humana utilize in an effort to automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse?
   a. What specific decisions are these algorithms making?
   b. How many people do these algorithms impact?

2. What is Humana doing to ensure that these algorithms are free from bias?
   a. How often are audits done to detect bias?
   b. Do you compare performance before and after algorithm implementation to ensure the algorithm actually works and does not maintain or increase bias?
   c. Do any of the algorithms that Humana is currently utilizing include a bias that would negatively affect certain patients relative to others?
   d. Will Humana commit to immediately halting the use of these algorithms, where possible, until such bias is eliminated? In cases where the algorithms cannot easily be halted, will Humana commit to providing adequate resources to investigate and fix the problem?

3. What technologies (such as machine learning or advanced analytics) do these algorithms use? For algorithms using past data to train, what is the vetting process to reduce embedded historical and systemic biases in the data?

4. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does Humana utilize any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?
5. Are the teams that are developing these algorithms diverse?

6. When Humana designs or implements these algorithms, does it consult with populations that may be more likely to be adversely impacted by bias or with outside validators that can properly vet them for discriminatory impact?

As algorithms play an increasingly prevalent role in the health care system, we urge Humana to consider the risk for algorithmic bias and its potential impact on health disparities outcomes. Thank you for your attention to this matter. Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,

Cory A. Booker                               Ron Wyden
United States Senator                        United States Senator

¹ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
² Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.
³ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.


United States Senate
WASHINGTON, DC 20510

December 3, 2019

Mr. David S. Wichmann
Chief Executive Officer
UnitedHealth Group
9900 Bren Road East
Minnetonka, MN 55343

Dear Mr. Wichmann:

We write today to request information regarding any actions UnitedHealth Group is taking or plans to take to address bias in algorithms used by your companies, UnitedHealthcare and Optum.

Algorithms are increasingly embedded into every aspect of modern society, including the health care system. Organizations use these automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence, to organize and optimize the complex choices they need to make on a daily basis. In using algorithms, organizations often make an attempt to remove human flaws and biases from the process. Unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

In health care, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used by a prominent national company, later reported to be Optum.¹ This particular case of algorithmic bias used health care costs as a proxy for health care needs. The creators of the algorithm did not take into account that health care needs are not the only contributing factor to an individual's level of health care costs. Other factors may include barriers to accessing care and low levels of trust in the health care system. These factors disproportionately impact black patients. As a result, black patients were less likely to receive, or be referred for, additional services than were white patients due to their lower historical costs, even though black patients were typically sicker than their white counterparts at a given risk score developed by the algorithm.
According to the authors of this study, more than 45 percent of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18 percent under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly when taken in the larger context of other biases, disparities, and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, which was related to less accurate treatment recommendations for black patients compared to white patients.² It is well documented that certain illnesses have a significantly higher incidence among marginalized populations, like hypertension, which primarily affects black Americans. Black and American Indian/Alaskan Native women are significantly more likely to die from complications related to or associated with pregnancy than white women, even when considering education level.³ Technology holds great promise in addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk for biases, it can also exacerbate them.

We are pleased to learn that Optum appears to be taking steps to eliminate racial bias from their product. While a fix to this particular algorithm is a step in the right direction, this is just one of the many algorithms currently used in the health care industry. Congress and companies like yours that play a role in the health and well-being of Americans must make a concerted effort to find out the degree to which this issue is widespread and move quickly to make lasting changes. To that end, we request answers to the following questions no later than December 31, 2019:

1. How many algorithms does UnitedHealth Group utilize in an effort to automate decisions like predicting health care needs and outcomes, targeting resources, improving quality of care, and detecting waste, fraud, and abuse?
   a. What specific decisions are these algorithms making?
   b. How many people do these algorithms impact?

2. What is UnitedHealth Group doing to ensure that these algorithms are free from bias?
   a. How often are audits done to detect bias?
   b. Do you compare performance before and after algorithm implementation to ensure the algorithm actually works and does not maintain or increase bias?
   c. Do any of the algorithms that UnitedHealth Group is currently utilizing include a bias that would negatively affect certain patients relative to others?
   d. Will UnitedHealth Group commit to immediately halting the Optum algorithm that the authors of the Science study found to be biased and other relevant algorithms until biases are eliminated? In cases where the algorithms cannot easily be halted, will UnitedHealth Group commit to providing adequate resources to investigate and fix the problem?

3. What technologies (such as machine learning or advanced analytics) do these algorithms use? For algorithms using past data to train, what is the vetting process to reduce embedded historical and systemic biases in the data?
4. The National Institute of Standards and Technology and numerous non-governmental organizations (e.g., The Alan Turing Institute) have published best practices for preventing, detecting, and eliminating bias in algorithms. Does UnitedHealth Group utilize any of these or similar resources when implementing algorithms that impact patient care? If so, which ones?

5. Are the teams that are developing these algorithms diverse?

6. When UnitedHealth Group designs or implements algorithms, does it consult with populations that may be more likely to be adversely impacted by bias or with outside validators that can properly vet them for discriminatory impact?

As algorithms play an increasingly prevalent role in the health care system, we urge UnitedHealth Group to consider the risk for algorithmic bias and its potential impact on health disparities outcomes. Thank you for your attention to this matter. Should you have any questions, please contact Rashan Colbert with Senator Booker's office and Kristen Lunde with Senator Wyden's staff at (202) 224-3224 and (202) 224-4515, respectively.

Sincerely,

Cory A. Booker                               Ron Wyden
United States Senator                        United States Senator

¹ Obermeyer, Ziad, et al., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science, 25 October 2019, https://science.sciencemag.org/content/366/6464/447.
² Hoffman, Kelly M., et al., "Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites," Proceedings of the National Academy of Sciences of the United States of America, 19 April 2016, https://www.pnas.org/content/113/16/4296.
³ Petersen, Emily E., et al., "Racial/Ethnic Disparities in Pregnancy-Related Deaths — United States, 2007-2016," Morbidity and Mortality Weekly Report, Centers for Disease Control and Prevention, 6 September 2019, https://www.cdc.gov/mmwr/volumes/68/wr/mm6835a3.htm.