
Date of Hearing: April 18, 2023

ASSEMBLY COMMITTEE ON JUDICIARY


Brian Maienschein, Chair
AB 331 (Bauer-Kahan) – As Amended April 13, 2023

As Proposed to be Amended

SUBJECT: AUTOMATED DECISION TOOLS

KEY ISSUES:

1) SHOULD THE CIVIL RIGHTS DEPARTMENT BE AUTHORIZED TO SEEK ADMINISTRATIVE PENALTIES OF UP TO $10,000 PER VIOLATION FOR THE FAILURE OF A DEPLOYER OR DEVELOPER, AS DEFINED, TO SUBMIT AN IMPACT ASSESSMENT?

2) SHOULD THE LEGISLATURE CREATE A PRIVATE RIGHT OF ACTION AGAINST A DEPLOYER WHO DEPLOYS AN AUTOMATED DECISION TOOL THAT RESULTS IN ALGORITHMIC DISCRIMINATION AND CAUSES THE PLAINTIFF ACTUAL HARM?

3) SHOULD THE LEGISLATURE CREATE A CAUSE OF ACTION AND AUTHORIZE ENFORCEMENT BY SPECIFIED PUBLIC PROSECUTORS FOR A DEPLOYER OR DEVELOPER'S VIOLATION OF ANY OF THE SPECIFIED REQUIREMENTS OF THIS BILL?
SYNOPSIS

For years, automated decision tools (ADT) (also known as automated systems or automated decision technology) have been integrated into our lives. Briefly defined, ADT are technological tools that make individualized decisions based on a coding structure. While the code and development of the automated technology are human-made, the ultimate decision is theoretically devoid of any human input. Most of us take automated technology and its algorithms for granted, often never recognizing when they are impacting our day-to-day lives. While this ensures that we can move about our lives unimpeded, it also means that we are often unaware of the prevalence of this relatively new technology. Like almost any new development, automated technology has both positive and negative ramifications, and the degree of benefit or detriment that it may bring varies greatly based on the communities targeted.

This bill proposes a new framework to regulate the development and deployment of automated
decision tools, seemingly modeled on the White House's 2022 AI Bill of Rights. Within the
framework, the bill requires deployers and developers, as defined, to develop annual impact
assessments, focused on the potential adverse and beneficial impacts of their ADT, and prohibits
either from using an ADT that contributes to algorithmic discrimination. Within the jurisdiction
of this Committee, the bill imposes various enforcement mechanisms, including an administrative
enforcement mechanism for the Civil Rights Department, and a private right of action for
enforcement of any of the bill’s requirements. In order to tailor the current enforcement
mechanisms to more appropriately address distinct violations, the author proposes various
amendments, discussed further in the comments section of the analysis.

This bill is supported by the Algorithmic Justice League, the California-Hawai'i State Conference of the NAACP, the Israeli-American Civic Action Network, Oakland Privacy, and the Santa Monica Democratic Club. It is opposed by a broad coalition of business and tech industry advocates. The California Association of Realtors has submitted a position of oppose unless amended, and the California Nurses Association has submitted a position of concern. This bill
was previously heard by the Assembly Committee on Privacy and Consumer Protection, and
passed out on a vote of 8-3.

SUMMARY: Establishes a statutory framework for the development and use of automated
decision tools (ADTs). Specifically, this bill:

1) Defines the following for purposes of this bill:

a) “Algorithmic discrimination” means the condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.

b) “Artificial intelligence” means a machine-based system that can, for a given set of
human-defined objectives, make predictions, recommendations, or decisions influencing
a real or virtual environment.

c) “Automated decision tool” means a system or service that uses artificial intelligence and
has been specifically developed and marketed to, or specifically modified to, make, or be
a controlling factor in making, consequential decisions.

d) “Consequential decision” means a decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following:

i) Employment, worker management, or self-employment, including, but not limited to, all of the following:

(1) Pay or promotion;
(2) Hiring or termination;
(3) Automated task allocation;

ii) Education and vocational training, including, but not limited to, all of the following:

(1) Assessment, including, but not limited to, detecting student cheating or
plagiarism;
(2) Accreditation;
(3) Certification;
(4) Admissions;
(5) Financial aid or scholarships;

iii) Housing or lodging, including rental or short-term housing or lodging;



iv) Essential utilities, including electricity, heat, water, internet or telecommunications access, or transportation;

v) Family planning, including adoption services or reproductive services, as well as assessments related to child protective services;

vi) Health care or health insurance, including mental health care, dental, or vision;

vii) Financial services, including a financial service provided by a mortgage company, mortgage broker, or creditor;

viii) The criminal justice system, including, but not limited to, all of the following:

(1) Risk assessments for pretrial hearings;
(2) Sentencing;
(3) Parole;

ix) Legal services, including private arbitration or mediation;

x) Voting; and

xi) Access to benefits or services or assignments of penalties.

e) “Deployer” means a person, partnership, state or local government agency, or corporation that uses an automated decision tool to make a consequential decision.

f) “Developer” means a person, partnership, state or local government agency, or corporation that designs, codes, or produces an automated decision tool, or substantially modifies an artificial intelligence system or service for the intended purpose of making, or being a controlling factor in making, consequential decisions, whether for its own use or for use by a third party.

g) “Impact assessment” means a documented risk-based evaluation of an automated decision tool that meets the criteria of specified existing provisions of the Business and Professions Code.

h) “Sex” includes pregnancy, childbirth, and related conditions, gender identity, intersex
status, and sexual orientation.

i) “Significant update” means a new version, new release, or other update to an automated
decision tool that includes changes to its use case, key functionality, or expected
outcomes.

2) Requires, by January 1, 2025, and annually thereafter, a deployer of an automated decision tool to perform an impact assessment for any automated decision tool the deployer uses that includes all of the following:

a) A statement of the purpose of the automated decision tool and its intended benefits, uses,
and deployment contexts;

b) A description of the automated decision tool’s outputs and how they are used to make, or
be a controlling factor in making, a consequential decision;

c) A summary of the type of data collected from natural persons and processed by the
automated decision tool when it is used to make, or be a controlling factor in making, a
consequential decision;

d) A statement of the extent to which the deployer’s use of the automated decision tool is
consistent with or varies from the statement required by the developer in 3).

e) An analysis of potential adverse impacts on the basis of sex, race, color, ethnicity,
religion, age, national origin, limited English proficiency, disability, veteran status, or
genetic information from the deployer’s use of the automated decision tool;

f) A description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment;

g) A description of how the automated decision tool will be used by a natural person, or
monitored when it is used, to make, or be a controlling factor in making, a consequential
decision; and

h) A description of how the automated decision tool has been or will be evaluated for
validity or relevance.

3) Requires, on or before January 1, 2025, and annually thereafter, a developer of an automated decision tool to complete and document an assessment of any automated decision tool that it designs, codes, or produces that includes all of the following:

a) A statement of the purpose of the automated decision tool and its intended benefits, uses,
and deployment contexts;

b) A description of the automated decision tool’s outputs and how they are used to make, or
be a controlling factor in making, a consequential decision;

c) A summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision;

d) An analysis of a potential adverse impact on the basis of sex, race, color, ethnicity,
religion, age, national origin, limited English proficiency, disability, veteran status, or
genetic information from the deployer’s use of the automated decision tool;

e) A description of the measures taken by the developer to mitigate the risk known to the
developer of algorithmic discrimination arising from the use of the automated decision
tool; and

f) A description of how the automated decision tool can be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision.

4) Requires a deployer or developer, in addition to the impact assessment required by 2) and 3),
to perform, as soon as feasible, an impact assessment with respect to any significant update.

5) Makes the requirements in 2) through 4) applicable only to a deployer with 25 or more employees, unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacts more than 999 people per year.

6) Requires a deployer, at or before the time an automated decision tool is used to make a
consequential decision, to notify any natural person that is the subject of the consequential
decision that an automated decision tool is being used to make, or be a controlling factor in
making, the consequential decision.

a) Requires a deployer required to provide a notice pursuant to 6) to provide all of the following:

i) A statement of the purpose of the automated decision tool;

ii) Contact information for the deployer;

iii) A plain language description of the automated decision tool that includes a
description of any human components and how any automated component is used to
inform a consequential decision.

7) Requires a deployer, if a consequential decision is made solely based on the output of an automated decision tool and if technically feasible, to accommodate a natural person’s request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation.

a) Following a request pursuant to 7), authorizes a deployer to reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide the information, the deployer is no longer obligated to provide an alternative selection process or accommodation.

8) Requires a developer to provide a deployer with a statement regarding the intended uses of
the automated decision tool and documentation regarding all of the following:

a) The known limitations of the automated decision tool, including any reasonably
foreseeable risks of algorithmic discrimination arising from its intended use;

b) A description of the type of data used to program or train the automated decision tool;

c) A description of how the automated decision tool was evaluated for validity and
explainability before sale or licensing.

9) Exempts the disclosure of trade secrets from the requirement in 8) pursuant to existing law.

10) Requires a deployer or developer to establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool.

a) Requires the safeguards required by 10) to be appropriate to all of the following:

i) The use or intended use of the automated decision tool;

ii) The deployer’s or developer’s role as a deployer or developer;

iii) The size, complexity, and resources of the deployer or developer;

iv) The nature, context, and scope of the activities of the deployer or developer in
connection with the automated decision tool; and

v) The technical feasibility and cost of available tools, assessments, and other means
used by a deployer or developer to map, measure, manage, and govern the risks
associated with an automated decision tool.

11) Requires the governance program required by 10) to be designed to do all of the following:

a) Designate at least one employee to be responsible for overseeing and maintaining the
governance program and compliance with this chapter, who shall have the authority to
assert to the employee’s employer a good faith belief that the design, production, or use
of an automated decision tool fails to comply with the requirements of this chapter;

b) Identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool;

c) If established by a deployer, provide for the performance of impact assessments as required by 2).

d) If established by a developer, provide for compliance with the requirements of 6) and 8).

e) Conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with the provisions of this bill;

f) Maintain for two years after completion the results of an impact assessment;

g) Evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer or developer.

12) Makes the requirements in 10) and 11) applicable only to a deployer with 25 or more
employees, unless, as of the end of the prior calendar year, the deployer deployed an
automated decision tool that impacted more than 999 people per year.

13) Requires a deployer or developer to make publicly available, in a readily accessible manner,
a clear policy that provides a summary of both of the following:

a) The types of automated decision tools currently in use or made available to others by the
deployer or developer; and

b) How the deployer or developer manages the reasonably foreseeable risks of algorithmic
discrimination that may arise from the use of the automated decision tools it currently
uses or makes available to others.

14) Prohibits a deployer from using an automated decision tool that results in algorithmic
discrimination.

15) Authorizes a person to bring a civil action, beginning January 1, 2026, against a deployer for
violation of 14). Requires the plaintiff to demonstrate that the deployer’s use of the
automated decision tool resulted in algorithmic discrimination that caused actual harm to the
person bringing the civil action.

16) Makes a deployer liable to a prevailing plaintiff in a claim brought pursuant to 15) for
compensatory damages, declaratory relief, and reasonable attorney’s fees and costs.

17) Requires a deployer or developer to provide the impact assessment to the Civil Rights
Department (CRD) within 60 days of completing the impact assessment.

18) Makes a deployer or developer who fails to submit an impact assessment pursuant to 2) and
3) liable for an administrative fine of not more than $10,000 per violation in an
administrative enforcement action brought by the CRD.

a) Makes each day on which an automated decision tool is used for which an impact
assessment has not been submitted a distinct violation.

19) Authorizes the CRD to share impact assessments with other state entities as appropriate.

20) Authorizes specified public attorneys to enforce the provisions of this bill through civil
claims brought against a deployer or developer for violation of any requirement established
by this bill.

a) Authorizes recovery to a prevailing public attorney in a claim brought pursuant to 20) for
any of the following:

i) Injunctive relief;

ii) Declaratory relief;

iii) Reasonable attorney’s fees and litigation costs.

21) Requires a public attorney, before commencing an action pursuant to 20) for injunctive relief,
to provide 45 days’ written notice to a deployer or developer of the alleged violations. If the
developer or deployer cures the noticed violation and provides the prosecutor with an express
written statement, made under the penalty of perjury, that the violation has been cured and
that no further violations shall occur, prohibits a claim for injunctive relief from being
maintained.

EXISTING LAW:

1) Establishes the Civil Rights Department, and sets forth its statutory functions, duties, and
powers. (Government Code Section 12930.)

2) Enacts the Fair Employment and Housing Act. (Government Code Sections 12900 et seq.)

3) Enacts the Unruh Civil Rights Act. (Civil Code Section 51.)

4) Defines “trade secret” under the Uniform Trade Secrets Act as information, including a
formula, pattern, compilation, program, device, method, technique, or process, that both:

a) Derives independent economic value, actual or potential, from not being generally known
to the public or to other persons who can obtain economic value from its disclosure or
use; and

b) Is the subject of efforts that are reasonable under the circumstances to maintain its
secrecy. (Civil Code Section 3426.1 (d).)

FISCAL EFFECT: As currently in print, this bill is keyed fiscal.

COMMENTS: For years, automated decision tools (ADT) (also known as automated systems or automated decision technology) have been integrated into our lives. Briefly defined, ADT are technological tools that make individualized decisions based on a coding structure. While the code and development of the automated technology are human-made, the ultimate decision is theoretically devoid of any human input. Most of us take automated technology and its algorithms for granted, often never recognizing when they are impacting our day-to-day lives. While this ensures that we can move about our lives unimpeded, it also means that we are often unaware of the prevalence of this relatively new technology. Like almost any new development, automated technology has both positive and negative ramifications, and the degree of benefit or detriment that it may bring varies greatly based on the communities targeted.

We are now faced with the question of how to apply existing federal and state legal protections to the world of ADT. In theory, ADT is a technology immune from human misperceptions and prejudices. Unfortunately, because it is developed by humans, ADT is often imbued with humans’ inherent biases. These biases include preconceptions based on race, gender, sex, geographic origin, and any number of other characteristics that humans themselves have developed over generations.

This bill attempts to mirror the White House’s AI Bill of Rights and ensure ADT is developed
and deployed in a responsible manner. In October 2022, the White House released a white
paper detailing policy proposals on how to ensure that artificial intelligence (AI), including ADT, works best for everyone. Titled The Blueprint for an AI Bill of Rights, the paper identifies
five main discussion points: safe and effective systems; algorithmic discrimination protections;
data privacy; notice and explanation; and human alternatives, consideration, and feedback. (The
Blueprint for an AI Bill of Rights, The White House (October 2022) available at
https://www.whitehouse.gov/ostp/ai-bill-of-rights/.) Recognizing the breadth of potential
technology that may be captured in the discussion of the AI Bill of Rights, the white paper
applies a two-part test to narrow its scope to: 1) automated systems that 2) have the potential to
meaningfully impact the American public’s rights, opportunities, or access to critical resources
or services. (Id. at p. 8.) The paper further identifies rights, opportunities, or access to mean 1)
civil rights, civil liberties, and privacy, 2) equal opportunities, and 3) access to critical resources
or services. (Ibid.) The White House’s AI Bill of Rights also includes definitions for numerous
phrases and concepts implicated by the paper, including algorithmic discrimination and
automated system. (Id. at p. 10.) The AI Bill of Rights then goes on to lay out expectations for each of the five principles, and what should be expected by the public with regard to each
principle. For example, in its discussion of automated systems, the paper posits that “[t]he public
should be consulted in the design, implementation, deployment, acquisition and maintenance
phases of automated system development,” and that

“[b]efore deployment, and in a proactive and ongoing manner, potential risks of the
automated system should be identified and mitigated. Identified risks should focus on the
potential for meaningful impact on people’s rights, opportunities, or access and include those
to impacted communities that may not be direct users of the automated system, risks
resulting from purposeful misuse of the system, and other concerns identified via the
consultation process.” (Id. at p.18.)

The paper also suggests that

“automated systems should have ongoing monitoring procedures, including recalibration procedures, in place to ensure that their performance does not fall below an acceptable level over time, based on changing real-world conditions or deployment contexts […] and should include continuous evaluation of performance metrics and harm assessments, updates of any system, and retraining of any machine learning models as necessary[.]” (Id. at p. 19.)

Turning to the AI Bill of Rights’ section on “Notice and Explanation,” the paper states: “You
should know that an automated system is being used, and understand how and why it contributes
to outcomes that impact you.” (Id. at p. 40.)

It appears that this bill is largely modeled on the White House’s AI Bill of Rights, as
evidenced by the various reporting and notice requirements included in its language. However, it
should be noted that the AI Bill of Rights appears significantly more extensive, and expands on
numerous additional policy proposals – including an in-depth discussion of how to appropriately
capture data relating to the disparate impact of ADT on complex issues of race or religion – that
do not appear to be fully incorporated into this bill.

This bill proposes a new framework to regulate the development and deployment of automated decision tools. The bill would impose numerous requirements on both developers (those who build ADT) and deployers (those who implement ADT). First, the bill requires deployers and developers to submit annual impact assessments to the Civil Rights Department (CRD), beginning January 1, 2025. Deployers and developers that have fewer than 25 employees would be exempted from this requirement, unless the ADT they have either built or used impacted more than 999 individuals in the previous calendar year. Second, in the event a consequential decision, as defined, is made based solely on the output of an automated decision tool, the bill requires deployers to notify the individual affected and, if possible, accommodate their request to not be subject to the ADT. Third, the bill requires deployers and developers to establish a governance program designed to address the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of the ADT in question. Fourth, deployers and developers are required to make a clear artificial intelligence policy, including a summary of both the types of ADT currently in use and how they manage the reasonably foreseeable risks of algorithmic discrimination arising from the ADT they currently deploy or provide to others, and to make the policy readily available to the public. Finally, the bill prohibits a deployer from using an automated decision tool in a manner that contributes to algorithmic discrimination.

The bill incorporates various enforcement mechanisms to ensure the efficacy of the new
reporting and policy development requirements, which are discussed further below.

The scope of this analysis. This bill was primarily referred to the Assembly Committee on
Privacy and Consumer Protection. The jurisdiction of the previous committee includes the policy impacts of the bill relating to the technology of ADT and the feasibility of capturing the requested information, such as which industries should be included or excluded, what form the impact assessments should take, and which agency is the appropriate one to receive the assessments.

The scope of this Committee focuses on how any rights or requirements imposed by this bill would be enforced through the courts – namely, the private rights of action and enforcement through the CRD. As such, this analysis will not focus on anything beyond the enforcement mechanisms. While the opposition has raised concerns regarding the viability of the policy proposals included in this bill, particularly as applied to their industries, the previous committee made minimal amendments to the language and this Committee sees no need to second-guess its expertise.

Clarifying the enforcement mechanisms proposed by this bill. Under the bill currently in print, there are three separate enforcement mechanisms.

First, the bill establishes that a deployer or developer of an ADT may be subject to an administrative enforcement action by the CRD for failure to submit the required impact assessment. The bill authorizes CRD to collect an administrative penalty of no more than $10,000 per violation. The bill further defines each individual day that an ADT is used for which an impact assessment has not been submitted as a distinct violation. This provision arguably exposes deployers and developers to significant administrative penalties for failure to submit an impact assessment; for example, a single ADT used every day for a year without a submitted assessment could, in theory, expose the deployer or developer to penalties of up to $3,650,000. Considering the importance of the impact assessments in understanding the effect of ADT, these high penalties are arguably justifiable. However, considering that the bulk of the information required to be submitted by deployers and developers regarding the benefits or risks surrounding ADTs will be housed with CRD, thus making it the agency with the most intimate knowledge of potential violations, the author may wish to consider measures to increase CRD’s enforcement capabilities.

Second, as currently written, this bill prohibits a deployer from using an ADT “in a manner that
contributes to algorithmic discrimination.” It is not clear, however, how a plaintiff would
establish the “manner” in which the ADT was used, or how a defendant would demonstrate that
an ADT did not “contribute” to algorithmic discrimination. While the ultimate intent may be clear to someone discussing the impacts of ADT – that is, to avoid widespread use of tools that are imbued with bigoted proclivities – this language is arguably impractical. In order to address
this concern, the author proposes to amend the language to prohibit a deployer from using an
automated decision tool that results in algorithmic discrimination. Further, the author proposes
requiring a plaintiff bringing a civil action under this language to bear the burden of
demonstrating that they suffered actual harm as a result of the ADT’s algorithmic discrimination.
The intent of this language is to ensure that an individual who utilizes the ADT simply to
demonstrate its tendency to engage in algorithmic discrimination, with no actual intent to use the
ADT for its intended purpose, is not authorized to bring a claim. In combination, these
amendments ensure that the root of the issue – the deployment of technology that results in
discriminatory outcomes – is captured. Additionally, the author proposes delaying implementation of this private right of action until January 1, 2026, to allow businesses the time necessary to identify ADTs that result in discriminatory outcomes and cease their use. Finally, the author proposes clarifying the remedies that may be recovered for a violation of this section. The amendments would be as follows:

22756.6. (a) A deployer shall not use an automated decision tool in a manner that contributes to results in algorithmic discrimination.

(b) (1) On and after January 1, 2026, a person may bring a civil action against a deployer
for violation of this section.

(2) In an action brought pursuant to paragraph (1), the plaintiff shall have the burden of
proof to demonstrate that the deployer’s use of the automated decision tool resulted in
algorithmic discrimination that caused actual harm to the person bringing the civil action.

(c) In addition to any other remedy at law, a deployer that violates this section shall be
liable to a prevailing plaintiff for any of the following:

(1) Compensatory damages.

(2) Declaratory relief.

(3) Reasonable attorney’s fees and costs.

Third, the bill as currently written creates a private right of action for any individual to bring a civil claim against a deployer or developer for a violation of any provision of this bill. This is arguably unduly expansive and impractical. While it is reasonable to grant an individual a
private right of action for use of an ADT which results in algorithmic discrimination, as that
application is likely to result in harm to the plaintiff themselves, it is less reasonable to grant an
individual the right to sue a business for their failure to submit an impact assessment to CRD, or
any of the other numerous requirements imposed by this bill. For one thing, it is not clear how an
individual would be aware that a deployer or developer failed to submit their impact assessment.
For another, it is arguably an imbalanced approach to allow for every individual to bring a claim
against an individual deployer or developer for their failure to submit an impact assessment when
there is no guarantee that any of those individuals would ever come into contact with the
deployer or developer. What appears more reasonable is to authorize public prosecutors to bring a civil claim for a deployer or developer’s violation of any of the provisions of this bill. With this amendment, the bill appears more appropriately tailored, and it authorizes the state to seek enforcement of the bill’s requirements and production of the data that is ultimately due to it. Finally, the author proposes several clarifying amendments to the paragraph requiring notice to the developer or deployer. The proposed amendments are as follows:

22756.8. (a) (1) On and after January 1, 2026, a person Any of the following public
attorneys may bring a civil action against a deployer or developer for a violation of this
chapter. chapter:

(A) The Attorney General in the name of the people of the State of California.

(B) A district attorney, county counsel, or city attorney for the jurisdiction in which the
violation occurred.

(C) A city prosecutor in any city having a full-time city prosecutor, with the consent of the
district attorney.

(2) A court may award to a prevailing plaintiff in an action brought pursuant to this
subdivision all of the following:

(A) Compensatory damages.

(A) Injunctive relief.

(B) Declaratory relief.

(C) Reasonable attorney’s fees and litigation costs.

(b) (1) (A) Subject to paragraph (2), a person, A public attorney, before commencing an
action pursuant to this section for injunctive relief, shall provide 45 days’ written notice to a
deployer or developer of the alleged violations of this chapter.

(2) (A) If the The developer or deployer demonstrates to the court within may cure, within
45 days of receiving the written notice described in paragraph (1) that it has cured a noticed (1),
the noticed violation and provides provide the person who gave the notice an express written
statement statement, made under penalty of perjury, that the violation has been cured and that
no further violations shall occur, a claim for injunctive relief shall not be maintained for the
noticed violation. occur.

(B) If the developer or deployer cures the noticed violation and provides the express written
statement pursuant to subparagraph (A), a claim for injunctive relief shall not be maintained
for the noticed violation.

Several opponents of the bill have voiced concern with the expansive nature of the private right of action as currently written. To the extent this amendment significantly narrows the ability of private individuals to bring civil actions against deployers or developers, it is possible that this concern is at least partially addressed.

The private right of action, while potentially creating overlap with existing legal protections,
nonetheless ensures full coverage of potential violations. Both the Unruh Civil Rights Act and
the Fair Employment and Housing Act (FEHA) prohibit discrimination on the basis of various
characteristics. The Unruh Civil Rights Act, codified in Section 51 of the Civil Code, declares
that:

“all persons within the jurisdiction of the state are free and equal, and no matter what their
sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic
information, marital status, sexual orientation, citizenship, primary language, or immigration
status are entitled to the full and equal accommodations, advantages, facilities, privileges, or
services in all business establishments of every kind whatsoever.”

FEHA, beginning with Section 12940 of the Government Code, prohibits an entity, in both the employment and housing contexts, from discriminating “because of the race, religious creed, color, national origin, ancestry, physical disability, mental disability, reproductive health decisionmaking, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, age, sexual orientation, or veteran or military status of any person.”
The private right of action in this bill is arguably already provided for in both of these statutes. For example: imagine a Latino applicant named Juan Soto emails his resume for a job directly to the employer soliciting applications. An hour later, a white male
applicant named John Smith submits his resume for the same job to the same email address. Both
resumes are practically identical and both applicants have completed the instructions provided
for the application. The only substantive difference between the two is the name at the top – one
which is apparently Latino and the other which is not. If the employer were to reject Juan Soto
and offer the position to John Smith on the basis of their resumes alone, Juan Soto may very well
have a cause of action under FEHA for discrimination on the basis of race, national origin, or
ancestry.

The private right of action established by this bill arguably addresses this same scenario, but in a
slightly different context. Say, in this case, both Juan Soto and John Smith, rather than emailing their resumes directly to the employer, instead submit their resumes via an online application portal that purports to complete an initial screening of all applications using ADT. If the online application portal, rather than the human employer, eliminates Juan Soto’s application in its initial screening, Juan Soto may have a cause of action, so long as he is able to demonstrate that use of the ADT 1) resulted in algorithmic discrimination and 2) resulted in harm. It is arguable that the
rejection of his resume, identical to John Smith’s in every substantive way except his name, is
evidence that use of the ADT resulted in discrimination, while the loss of a job opportunity is
likely clear demonstrable harm. Therefore, the private right of action established by this bill is an
important element to help close a loophole that would allow deployers to pin all blame on ADT,
rather than accept any accountability, for discriminatory outcomes.

Final proposed amendments. As currently written, this bill prohibits a city or county from
adopting any regulation or rule relating to ADT or an impact assessment similar to that required
to be submitted to the state under this bill. This seems unnecessarily restrictive. In fact, the
Legislature should arguably be encouraging local cities and counties to investigate the issue on a
more individualized level. It is likely that cities and counties, which have more intimate knowledge of the businesses in their jurisdictions, are well positioned to help develop a robust statewide ADT policy. Additionally, recent amendments have expanded the bill’s reporting requirements to state and local government agencies. To the extent this language was intended to ensure that state
and local agencies were captured by the bill’s requirements, the recently added language has
addressed the concern and thus this provision is arguably unnecessary. The author proposes to
strike Section 22756.9 in its entirety.

The California Association of Realtors has submitted a position of oppose unless amended. In furtherance of their position, the Realtors state: “Many small housing providers require the use
of third-party software tools to help facilitate the rental application process. AB 331 could
impose onerous requirements upon them, which could impede and increase the costs of the rental
application process and, in turn, erode the ability of such providers to house individuals and
families in a timely manner.” However, this committee’s jurisdiction concerns the enforcement mechanisms, rather than the scope of who should or should not be required to abide by the requirements imposed by this bill. Therefore, this is not an issue appropriately addressed in this committee.

ARGUMENTS IN SUPPORT: This bill is supported by The California-Hawaii Conference of the NAACP, the Israeli-American Civic Action Network, and The Santa Monica Democratic Club. In support of the measure, the California-Hawaii Conference of the NAACP writes:

ADTs have been found to exhibit biases and consequently have resulted in discriminatory
impacts and harm to marginalized communities. A study published in Science showed that a
clinical algorithm used across hospitals for determining patient care was racially biased
against Black patients. The algorithm used healthcare spending as a proxy for health needs
and falsely concluded that Black patients were healthier than equally sick white patients,
depriving Black patients from needed high-risk care. These results were biased because
Black patients face disproportionate poverty levels and spend less on health care.

The CA/HI NAACP’s principal objective is to ensure the political, educational, social, and
economic equality of minority citizens in California and eliminate race prejudice. ADTs are
prominent in almost every facet of an individual’s life, and they must be used safely to avoid
discriminatory harm. The mentioned restrictions provide a sense of accountability that makes
it difficult for those who use ADTs to use them as tools for discrimination and oppression,
whether in employment practices, housing, the criminal justice system, or any other facet of
life. For these reasons, CA/HI NAACP proudly supports AB 331 (Bauer-Kahan).

Further, the Israeli-American Civic Action Network writes:

Our organization believes that technology should be used to empower people and
communities, and we are committed to ensuring that technology is designed and used in a
way that promotes fairness, equity, and justice. As an organization that works to empower
immigrants to the United States from Israel by helping them get more engaged in advocacy
and civic life, we understand the importance of protecting marginalized communities from
the negative impacts of algorithmic bias and discrimination. We believe that AB 331 will
help to address these issues and promote a more just and equitable society for all
Californians.

Automated decision tools use statistical analyses to assess one’s eligibility for a benefit or
penalty. These systems have been traditionally used for credit decisions, however usage has
expanded to employment screening, insurance eligibility, and health care decisions. ADTs
are also used in the public sector, including government services, and in criminal justice
sentencing and probation decisions.

ADTs may also take into account factors such as neighborhood demographics. In some cases,
this could lead to Jewish immigrants being unfairly denied loans if they live in
neighborhoods that are unfairly characterized as higher-risk due to antisemitic stereotypes.
This could ultimately lead to immigrants being unfairly denied access to credit and
opportunities for financial growth. By supporting AB 331, we can work to prevent such
biased outcomes and ensure that all individuals, including Jewish people, are treated fairly
and equitably by automated decision tools.

ADTs are prone to unrepresentative datasets, faulty classifications, and flawed design, which
can lead to biased, discriminatory, or unfair outcomes. These tools can exacerbate the harms
they are intended to address and ultimately hurt the people they are supposed to help.

ARGUMENTS IN OPPOSITION: This bill is opposed by a coalition of business advocates and tech industry organizations. In support of their position, the California Apartment Association writes:

On behalf of the members of the California Apartment Association (CAA), I am writing to inform you that CAA has taken an oppose position on AB 331, your bill that would, among other things, require a rental property owner or management company (referred to as a deployer) to annually perform an “impact assessment” for any “automated decision tool” as defined, that the deployer uses and to provide that assessment to the Civil Rights Department. This bill would require that the deployer provide any natural person – in this case a tenant-applicant – who is the subject of an automated decision tool with, among other things, a statement of the purpose of the automated decision tool and allow the tenant-applicant to request that they not be subject to the automated decision tool.

Rental property owners and property management companies use automated tools, such as
credit and eviction reports, to make decisions about a tenant-applicant’s ability to pay the
rent. These “decision tools” are not used to illegally discriminate and instead are necessary
business tools. Giving a tenant-applicant the ability to opt out of such a decision-making tool
is not logical. The owner would have no objective way to make a decision about the
applicant nor the bandwidth to institute an alternative process in the event of an opt out. We
ask that the rental housing industry be excluded from the provisions of AB 331.

Further, the Chamber of Commerce writes:

Significant concerns around AB 331’s private right of action, administrative penalties, and limited right to cure.

[…]

As currently drafted, it is possible that a violation could constitute not only a failure to
complete an impact assessment altogether, but could also be interpreted as any single
deficiency within that impact assessment. Even then, it is unknown whether the number of
violations is based on that single error, or by any single error multiplied by the number of
individuals who were potentially impacted by the use of a particular type of ADT for which
an assessment was required or who received an inadequate notice.

Take for example AB 331’s requirement that an impact assessment include an analysis of
potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national
origin, limited English proficiency, disability, veteran status, or genetic information. (See
proposed section 22756.1(a)(5).) If an analysis is completed, but a potential adverse impact is
left out, but not with any intent or malice, is that a violation subject to a private right of
action? By way of another example, the bill requires a deployer to notify a natural person, at
or before the time an ADT is used to make a consequential decision, that an ADT is being
used to make or be a controlling factor in making the consequential decision. (See proposed
section 22756.2.) What happens in the case of a hospital emergency room where prior notice
simply is not possible? It is unclear to us what constitutes proper notice to avoid individual
lawsuits alleging non-compliance. Ultimately, we are concerned that the potential for a
private right of action as well as the steep fines will have a significant chilling effect on
innovation in California and on access to important technology – including technologies that
could reduce the instances of human bias and discrimination.

And while we support the opportunity to cure, conceptually, as drafted AB 331’s right to
cure is inadequate and illusory. First, the bill authorizes a court to award to any prevailing
plaintiff all of the following: compensatory damages, injunctive relief, declaratory relief, and
reasonable attorney’s fees and litigation costs. The right to cure only applies to injunctive
relief and it does not apply at all in the context of an administrative action. Even still, no
company can realistically avail themselves of this right under AB 331, as it requires them to
not only cure the noticed violation, but also provide the person giving the notice an express
written statement that the violation has been cured and that no further violations shall occur.
Due to the complexity and evolving nature of this technology, as well as the bill’s breadth
and vagueness issues, it is unrealistic for company to be asked to sign a statement that they
will never again violate this law.

Other active legislation and regulatory efforts on related topics opens the door for vast
confusion and laws and regulations in conflict.

AB 331 is not the only measure related to automated decisions, artificial intelligence (AI)
and issues around bias or discrimination when using such technology. To date, including AB
331, there are at least five bills and two regulatory bodies grappling with the issue in one
form or another.

Of note, Proposition 24 of 2020, required the California Privacy Protection Agency to issue
regulations “governing access and opt-out rights related to businesses’ use of automated
decision-making technology, including profiling and requiring businesses’ response to access
requirements to include meaningful information about the logic involved in these
decisionmaking processes, as well as a description of the likely outcome of the process with
respect to the consumer.” (Civ. Code Sec. 1798.185.) As such, just last week, the Agency
ended an informal, preliminary comment period on several topics, including automated
decision-making, though it has yet to commence formal rulemaking on the topic or share any
draft regulations with the public.

In addition to all of this, the Civil Rights Council within the Civil Rights Department
(formerly, the Department of Fair Employment and Housing) is also working on regulating
the use of AI and machine learning in connection with employment decision-making. The
Council recently published draft modifications to their proposed employment regulations
regarding “automated decision systems” in their effort to incorporate such technology into
existing rules regulating California employment and hiring practices.

With all these moving parts, it is difficult to foresee how such laws and regulations will layer
on top of one another and whether there will be conflicting public policy around the use of
such tools and technologies. While some approaches suggest further study and
understanding, others have proceeded toward regulating the technology, at times in
overlapping contexts such as employment. Understandably, our members are alarmed by the
likelihood of conflict and confusion at the conclusion of these efforts that are being run in
parallel to each other, without any coordination or consideration of the other efforts
underway.

REGISTERED SUPPORT / OPPOSITION:

Support

Algorithmic Justice League
California-Hawai’i State Conference of the NAACP
Israeli-American Civic Action Network
Oakland Privacy
The Santa Monica Democratic Club

Opposition

American Financial Services Association
California Apartment Association
California Bankers Association
California Chamber of Commerce
California Credit Union League
California Financial Services Association
California Grocers Association
California League of Food Producers
California Manufacturers & Technology Association
California Retailers Association
Card Coalition
Civil Justice Association of California
Computer & Communications Industry Association
Insights Association
National Payroll Reporting Consortium
Netchoice
Software & Information Industry Association
State Privacy and Security Coalition, INC.
Technet

Oppose Unless Amended

California Association of Realtors

Concern

California Nurses Association

Analysis Prepared by: Manuela Boucher-de la Cadena / JUD. / (916) 319-2334
