
Infusing women's empathy into AI

The Empathizing–Systemizing Theory's contribution to AI debiasing

Rita Tegon, PhD Student


Starting point

While AI's presence in our lives is expanding, its social and political implications
increasingly lie at the heart of the mainstream rhetoric around its
challenges and opportunities.
Nevertheless, we still lack the means to stem its strong propensity to
harm individual and collective freedoms, and an infusion of empathy
towards human needs is expected to lead to AI accountability.
We will go through:
1) recent documents, reports, regulations, frameworks, checklists, and impact assessment
tools claiming the need for trust in AI
2) bias undermining trust in AI
3) the definition of trust and the role of accountability in building it
4) responsibility as a dimension common to both accountability and empathy
5) the Empathizing–Systemizing Theory confirming the female propensity for empathy
6) the need to increase the female presence in AI to achieve trust via empathy and
accountability.
Top takeaways
● private investment in AI soared
in 2021
● AI became more affordable and
higher performing
● there is a rise in AI ethics
everywhere and a more global
legislation on AI than ever
● language models are more
capable than ever, but also more
biased.
(Zhang et al., 2022)
Global corporate investment in AI by investment activity, 2013–2021 (Zhang et al., 2022).
No significant impact on the decision-making of software developers
(Hagendorff, 2020)

EU legislation
The GDPR (2016) provisions are specifically designed to address these risks: it is worth paying
particular attention to Articles 4(4) and 22(1) and the related Recital.

Alongside it, many documents, reports, regulations, frameworks, checklists, and impact
assessment tools have been developed according to different moral, ethical, and political models.

The proposed regulation presented by the European Commission (2021) to harmonize the rules
on artificial intelligence, establishing a legal framework aimed at regulating the European Union AI
market, is still struggling to reach a consensus, although, in parallel with the closure of the AIDA
Committee's work in January 2022, the EU Parliament seems to have reached an agreement.
A rich, updated summary is provided by the Digital Watch observatory, which is part of the
Geneva Internet Platform, an initiative of the Swiss authorities operated by DiploFoundation.

Alongside the ethical concerns, it lists the numerous efforts currently being made at
national and international levels to govern AI and stem those concerns through new legal
and regulatory frameworks.
These documents are presented just as a sample, confirming that the problem of AI concerns
is well known and significant, and that the research community is working hard to tackle it, to pave
a safer path to AI implementation.

The AIDA Committee report (2022), “about the extensive gender gap in this area […] notes
with concern that the gender divide is persisting, […]; recommends targeted initiatives to
support women in STEM in order to close the overall skills gap in this sector; stresses that this
gap inevitably results in biased algorithms; emphasises the importance of empowering and
motivating girls towards STEM careers, and eradicating the gender gap in this area”.
Fairness biases in AI systems are a severe problem, but they are not inherently negative,
as they are linked to human neurophysiology.

Biases are numerous and rooted in cognitive processes connected with the emotional
dimension of the brain.
They are the downside of heuristics, in the sense that they serve to make a human being
blind to certain information in order to facilitate decision-making speed and frugality
(Gigerenzer and Gaissmaier, 2010).


In a recently published report titled “Towards a standard for identifying and managing
bias in artificial intelligence”, the US National Institute of Standards and Technology
(NIST) says that while human sources of AI bias are important, computational, statistical,
and systemic biases are relevant as well.
(Schwartz et al., 2022)
Bias in Machine Learning
(Suresh and Guttag, 2021)
ADMS (Automated Decision Making Systems) may incorporate algorithmic bias arising
from:

● Data sources, where data inputs are biased in their collection or selection
● Technical design of the algorithm, for example where assumptions have been made
about how a person will behave
● Emergent bias, where the application of ADM in unanticipated circumstances creates
a biased outcome.
They have the potential to increase efficiency and enable new solutions, but ethical issues
accompany these advantages: an online decision to award a loan is an example of this, as is an
aptitude test used for recruitment that relies on pre-programmed algorithms and criteria.
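To make the loan example concrete, the sketch below runs a minimal disparate-impact check on a toy set of ADMS decisions. It is an illustrative assumption on my part, not drawn from Suresh and Guttag (2021): the records, the group labels, and the 0.8 ("four-fifths") threshold are all hypothetical.

```python
# Hedged sketch: auditing toy ADMS loan decisions for a data-source bias.
# The records and the 0.8 four-fifths threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group A approved 8/10 times, group B only 4/10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
rates = approval_rates(decisions)   # {"A": 0.8, "B": 0.4}
ratio = disparate_impact(rates)     # 0.4 / 0.8 = 0.5
print(rates, ratio, ratio >= 0.8)   # ratio below 0.8: fails the four-fifths rule
```

A check like this only surfaces an unequal outcome; it cannot say whether the cause is biased data collection, the algorithm's design assumptions, or emergent bias, which is why the three sources listed above have to be audited separately.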
Potential harms from algorithmic decision-making systems, from M. Smith, former CTO of the USA, via J. Buolamwini, Gender Shades, MIT Media Lab.
Data are not neutral, insofar as they are the products of unequal social relations amplifying
the rules of power; they are the result of collective and individual ontological,
psychological, cognitive, cultural, implicit and explicit models.
The mainstream narratives around big data and data science are in fact white, male, and
techno-heroic, a result of the overwhelmingly male presence in STEM education and
professions. The female world not only suffers from this but, being a minority in
STEM, still lacks a sufficient presence to contribute to the development of a balanced AI.
(Buolamwini J., Gender Shades MIT Media Lab)
Therefore, despite the wealth of documents and auditing tools, it still seems arduous to achieve
accountability, which is considered a key facilitator of ethical AI systems, and, as a consequence,
trust.

According to Hagendorff (2020), AI ethics are failing in many cases because:

● they lack a reinforcement mechanism
● deviations from the ethical codes have no consequences
● when ethics are integrated into institutions, they are mainly used as a marketing strategy
● reading ethics guidelines has no significant impact on the decision-making of software
developers: in fact, AI ethics are often considered a surplus or an add-on to technical
issues, an optional framework imposed by institutions and extraneous to the technical
community.
According to the Guidelines for Trustworthy Artificial Intelligence, presented by the
High-Level Expert Group on AI to the European Commission (European Commission,
2019), accountability is one of the dimensions supporting trust, a key facilitator of
ethical AI systems; trustworthy AI should meet a set of seven key requirements translated
into a detailed assessment list:

1. human agency and oversight
2. technical robustness and safety
3. privacy and data governance
4. transparency
5. diversity, non-discrimination, and fairness
6. environmental and societal well-being
7. accountability.
And accountability can be reached through its six main dimensions connected to the
goals and needs of the different stakeholders:

1. responsibility
2. justification
3. audit
4. reporting
5. redress
6. traceability.
(Srinivasan & González, 2022)
There is a growing interest in developing empathic AI systems,
but a strong consensus about the definition of empathy has not yet been reached.

According to Cartabuke (2017), empathy encompasses a broad set of distinct social, affective,
somatic, cognitive, and spiritual processes related to human aspects such as
understanding, affective states, caring, socialization, intentions, and responsibility.

At this point, it can be observed that responsibility is a key concept, a common
indicator of both accountability and empathy.
It should be noted that, while empathy without accountability can be harmful,
the direct incorporation of empathy into AI pipeline processes can be as harmful as
not incorporating it, since AI systems do not consider subjective notions of
emotions. A direct infusion of emotions can actually deepen divisions between groups,
as we are more likely to empathise with those coming from similar backgrounds.

A primary cause of such a limitation may lie precisely in the lack of
understanding of the concept of empathy and its interdependence with
accountability.
Research shows that by regulating empathy it is possible to enhance
accountability, as long as empathy is appropriately incorporated into the design of the
AI process, helping to address some of the challenges associated with AI accountability.

(Blader & Rothman, 2014)


Concrete pathways to enhance AI accountability via the incorporation of empathy,
both referring to responsibility (the former in terms of an expectation of the
system's result, the latter in a more affective sense, “taking into account others'
states”), can be outlined as follows:
1. maximising long-term welfare;
2. integrating subjective needs into decision making;
3. incentives for fostering empathy and accountability;
4. design of cooperation schemes to achieve stakeholder consensus.
(Srinivasan and González, 2022)
After having investigated the close connection existing between empathy and
accountability in AI systems and their strength in generating trust in end-users, let’s focus
on a framework clarifying the role and definition of empathy.

The last decades have witnessed enormous growth in the neuroscience of empathy:
experts in the field of social neuroscience have developed two prominent theories in an
attempt to gain a better understanding of empathy, both connected with the Theory of
Mind, the ability to understand what another person is thinking and feeling based on rules for
how one should think and feel.
● The Simulation Theory (Gordon, 1992) roots empathy in a
biological component driven by the neuroscience of mirror neurons.

● The Empathising-Systemising (E-S) theory is a theory of the psychological
basis of autism and of male-female neurological differences, originally put
forward by Simon Baron-Cohen of the University of Cambridge
(2005).
The Empathising-Systemising (E-S) theory classifies individuals based on
abilities in empathic thinking (E) and systematic thinking (S).
It uses an Empathy Quotient (EQ) and a Systemising Quotient (SQ) to assess
skills and tries to explain social and communication symptoms in autism
spectrum disorders as empathy deficits and delays paired with intact or superior
systemising.
Simply expressed, E and S denote a basic inclination toward people or things,
respectively.
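The classification idea above can be sketched in a few lines: compare standardized EQ and SQ scores and label the resulting inclination. This is a hypothetical illustration only; the means, standard deviations, and the balance threshold below are my placeholder assumptions, not Baron-Cohen's published norms.

```python
# Hedged sketch of the E-S classification idea. The standardization
# constants (eq_mean, eq_sd, sq_mean, sq_sd) and the balance threshold
# are illustrative assumptions, not the published questionnaire norms.
def brain_type(eq, sq, eq_mean=44.0, eq_sd=12.0,
               sq_mean=28.0, sq_sd=12.0, balance=0.5):
    e = (eq - eq_mean) / eq_sd   # standardized empathising score
    s = (sq - sq_mean) / sq_sd   # standardized systemising score
    d = s - e                    # difference score: positive leans S, negative leans E
    if abs(d) <= balance:
        return "Type B (balanced)"
    return "Type S (systemising)" if d > 0 else "Type E (empathising)"

print(brain_type(eq=58, sq=25))  # EQ well above its mean -> "Type E (empathising)"
print(brain_type(eq=38, sq=52))  # SQ well above its mean -> "Type S (systemising)"
```

The point of the difference score is that E and S are measured on separate scales, so an individual is characterised by the gap between the two standardized scores rather than by either score alone.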
The main reasons that lead us to pay more attention to this theory than to the first
are, first, that it is grounded in sex-difference studies and, second, that
it is connected with gender preference in STEM subjects.
According to Baron-Cohen, females on average score higher on measures of empathy,
and males on average score higher on measures of systemising. This has been
found using the child and adolescent versions of the Empathy Quotient (EQ) and the
Systemising Quotient (SQ). While experience and socialization play a role in the observed
sex differences in empathy and systemising, Baron-Cohen suggests that a contribution is
also made by biology. A candidate biological factor influencing E and S is fetal testosterone
(FT levels are positively correlated with scores on the Systemising Quotient and
negatively correlated with scores on the Empathy Quotient).


Despite the criticisms received, among others from Fine (2008), who considered the
theory a form of neurosexism, i.e. a bias in the neuroscience of sex differences towards
reinforcing harmful gender stereotypes, the theory can be considered to have
strong support.
For example, in 2018 the E-S theory and the derived Extreme Male Brain theory
of autism were tested in half a million people: it was confirmed that typical
females on average are more empathic and typical males on average are more
systems-oriented; the strengths of the study are the inclusion of a replication sample
and the use of big data (Greenberg et al., 2018).
Therefore, heterogeneous teams should be built with the awareness that empathy is objectively
more associated with the female gender.

This would make it possible to respond concretely to the aforementioned need to make
AI accountable and to reduce the discrimination it widely perpetrates against many end-users,
including women.

However, we know that the AI skill penetration rate for women is still critically low at the global
level, although it is increasing.

A discussion of the role of education systems and models in orienting girls to STEM studies and
ML, and in educating all students in empathy is crucial.
Thank You so much for attending
