
Section 2

Abstract:
This report summarizes the current state of the art on algorithmic bias in AI systems, synthesized from pivotal works in the field. It examines biases that stem from data, design decisions, and socio-technical factors. Milestones include foundational analyses of language datasets showing that linguistic nuances must be treated with care, studies of how design choices facilitate bias, and investigations of the socio-technical influences that produce it. The literature classifies bias types and proposes solutions such as fairness-aware algorithms, ethical standards, and human-in-the-loop systems. A critical discussion then weighs the strengths and weaknesses of these approaches across data bias, design choices, socio-technical factors, bias types, and real-world manifestations. The chapter closes by arguing for an interdisciplinary approach, recognizing the need for partnerships in addressing algorithmic bias.
Table of Contents

2.1 State of the Art: Understanding Algorithmic Bias in AI Systems

2.2 Critical Discussion: Interrogating Approaches to Algorithmic Bias in AI Systems


2.1 State of the Art: Understanding Algorithmic Bias in AI Systems

When delving into the complex terrain of algorithmic bias in AI systems, it is essential to perform an extensive literature review unpacking the fundamental constructs that underlie bias. This chapter presents an in-depth account of the state of the art, drawing on landmark works to examine bias induced by data, bias introduced through design decisions, and, most importantly, the socio-technical factors that interact within AI systems. In most cases, algorithmic bias is rooted in data bias, which serves as the cradle for the problem as a whole. The research by Caliskan et al. (2017) can be considered a breakthrough in the analysis of biases intrinsic to language datasets. Their work demonstrated how training on partial or skewed data can produce prejudiced results and showed that linguistic nuances must be analyzed carefully. In this respect, Bolukbasi et al. (2016) suggest representative sampling as a way to curb prejudice at the data collection stage, before it compounds at the prediction and deployment stages, and introduce so-called "de-biasing techniques" that remove bias components from learned representations.
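A minimal numpy sketch of the projection idea behind such de-biasing follows. It is an illustration under stated assumptions, not code from the cited papers: the word vectors and the bias axis are random stand-ins (a real bias axis might be built from difference vectors of pairs such as "he"/"she").

```python
import numpy as np

def neutralize(vectors: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove each vector's component along an identified bias direction.

    A simplified version of the 'neutralize' step used in hard de-biasing
    of word embeddings: project every vector onto the bias axis and
    subtract that component out.
    """
    b = bias_direction / np.linalg.norm(bias_direction)  # unit bias axis
    components = vectors @ b                             # scalar projection per vector
    return vectors - np.outer(components, b)             # subtract the biased part

# Toy illustration with stand-in data.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 50))   # stand-in word vectors
bias_axis = rng.normal(size=50)         # stand-in bias direction
cleaned = neutralize(embeddings, bias_axis)
print(np.allclose(cleaned @ bias_axis, 0))  # True: no bias component remains
```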
The significance of design choices for algorithmic outcomes cannot be overemphasized. Meanwhile, Diakopoulos (2016) focuses on the complexity of algorithmic design and highlights its decisive impact on bias, discussing how an AI system might inadvertently perpetuate or even amplify it. In their analysis, Mittelstadt et al. (2016) describe how developers generate and supply the structures through which AI technology is implemented, while system architects depend on those design tools, raising ethical questions of responsibility. These accounts underline the importance of a critical approach to algorithmic architectures, one that encourages developers to examine the ethical implications of design decisions in AI systems. In addition, algorithmic biases do not simply appear out of nowhere; they are intertwined with complex socio-technical configurations. Work on algorithmic governance by Barocas and Hardt (2019) provides a nuanced sense of the socio-technical factors that shape outcomes, highlighting how social, economic, and cultural components come together to influence biases in AI systems. Selbst et al. (2019) have examined the ethical consequences of deploying AI across different social contexts, emphasizing the value of a responsible, context-aware approach. This literature helps us understand that algorithmic biases are not merely technological missteps but 'features' firmly connected with the functioning of society, and should therefore be approached with a holistic comprehension of the socio-technical landscape.

Categorizing the different kinds of bias is a necessary step towards fully understanding algorithmic biases in AI systems. The existing literature provides a rich tapestry of insights into the various types of bias that can arise, each requiring its own strategy to identify, understand, and counteract. One foundational work is presented by Dwork et al. (2012), in which the authors elaborate fairness criteria based on individuals, echoing the Aristotelian normative ideal of treating similar cases alike. The importance of this work for understanding bias lies in its articulation of the central difficulty of the approach: defining a defensible similarity metric between individuals, a definition that inevitably reflects the position of whoever specifies it. While this theoretical lens is thought-provoking, it is hard to operationalize on its own. The predominant approaches to correcting for bias are instead statistical or group-based fairness measures, discussed thoroughly by Dieterich et al. (2016) as well as Berk (2021). These aim to measure statistical discrepancies across protected attributes such as gender, race, or other group memberships (Kleinberg et al., 2018). However, as these authors show, it is impossible to satisfy all fairness metrics simultaneously without confronting underlying trade-offs. This indicates a need for conscious deliberation when evaluating the different forms of fairness, since each assumes certain normative premises.
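As a concrete illustration, the sketch below computes two widely used group-based measures on binary predictions. It is a minimal example assuming a binary protected attribute and binary labels, with synthetic data and illustrative names, not code from the cited works.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, group):
    """Compute two common group-based fairness gaps.

    Demographic parity gap: difference in positive-prediction rates.
    Equal opportunity gap: difference in true positive rates.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates, tprs = [], []
    for g in (0, 1):
        mask = group == g
        rates.append(y_pred[mask].mean())                  # P(Yhat=1 | A=g)
        tprs.append(y_pred[mask & (y_true == 1)].mean())   # P(Yhat=1 | Y=1, A=g)
    return {"demographic_parity_gap": abs(rates[0] - rates[1]),
            "equal_opportunity_gap": abs(tprs[0] - tprs[1])}

# Toy illustration with a synthetically biased classifier.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
y_true = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)
print(group_fairness_gaps(y_true, y_pred, group))
```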
Additionally, causal and counterfactual measures draw motivation from the research of Kusner et al. (2017) as well as the study by Kilbertus et al. (2017). These measures aim to nullify biases that arise when protected attributes exert causal influence over algorithmic predictions. However, they impose deeper epistemic burdens, requiring an understanding of causal links that can be challenging to discern in complex systems. A holistic approach requires applying these divergent viewpoints together: individual-based fairness, statistical measures, and causal considerations jointly give a richer understanding of the kinds of bias that can be present in AI systems. Such a refined classification facilitates the development of targeted and effective action plans for specific forms of bias, avoiding one-size-fits-all solutions.
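The following sketch shows, in deliberately naive form, the question a counterfactual measure asks: would the prediction change if only the protected attribute were different? The model and data here are hypothetical, and flipping a single column ignores the causal pathways that Kusner et al. (2017) and Kilbertus et al. (2017) treat properly.

```python
import numpy as np

def naive_flip_rate(predict, X, protected_col):
    """Fraction of individuals whose prediction changes when the binary
    protected attribute is flipped with everything else held fixed.

    Only a crude proxy for counterfactual fairness: a true counterfactual
    would propagate the change through a causal model of the features
    rather than toggling one column.
    """
    X_cf = X.copy()
    X_cf[:, protected_col] = 1 - X_cf[:, protected_col]
    return float(np.mean(predict(X) != predict(X_cf)))

# Hypothetical classifier that (unfairly) keys on protected column 0.
predict = lambda X: (0.7 * X[:, 0] + 0.3 * X[:, 1] > 0.5).astype(int)
rng = np.random.default_rng(2)
X = rng.random((500, 2))
X[:, 0] = (X[:, 0] > 0.5).astype(float)  # binarize the protected attribute
print(naive_flip_rate(predict, X, protected_col=0))  # 1.0 for this model
```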
Bias considerations also require a look at real-life situations where algorithmic biases have practical consequences. The literature is full of case studies showing how bias can infiltrate various domains, from biased hiring algorithms to prejudiced facial recognition systems. A recent study by Obermeyer et al. (2019) deconstructs racial bias in a widely used healthcare algorithm, emphasizing disparities in which patients are flagged for further treatment. This work highlights the real implications of bias, since it literally affects people's access to care and medical assistance. Research by Sweeney (2013) exposed racial discrimination in the delivery of online advertisements, while Buolamwini and Gebru (2018) documented bias in facial recognition technology. Their work shows that facial recognition systems are less accurate for people of color and for women, meaning some groups could be discriminated against by biased technologies whose use can increase social inequality. Such manifestations of bias are not merely technical errors but demonstrate the far-reaching societal implications of algorithmic decisions. The literature on real-world cases is therefore a vital resource for understanding the practical consequences algorithmic biases may have. From these instances, researchers can identify the peculiarities of bias in different settings and address them with targeted measures, keeping anti-bias efforts both theoretically sound and practically effective.
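A minimal version of this style of audit can be expressed in a few lines: bin individuals by algorithmic risk score and compare average measured need across groups within each bin. The sketch below follows the logic of Obermeyer et al. (2019) rather than their actual code; the data and variable names are synthetic stand-ins.

```python
import numpy as np

def need_by_risk_decile(risk_score, health_need, group, n_bins=10):
    """Average measured health need per risk-score decile, per group.

    If one group shows systematically higher need at the same algorithmic
    risk score, the score understates that group's need.
    """
    risk_score, health_need, group = map(np.asarray, (risk_score, health_need, group))
    edges = np.quantile(risk_score, np.linspace(0, 1, n_bins + 1))
    decile = np.clip(np.digitize(risk_score, edges[1:-1]), 0, n_bins - 1)
    return {g: [health_need[(decile == b) & (group == g)].mean() for b in range(n_bins)]
            for g in np.unique(group)}

# Synthetic illustration: group "b" has higher true need at equal scores.
rng = np.random.default_rng(3)
group = rng.choice(["a", "b"], size=2000)
score = rng.random(2000)
need = score * 10 + np.where(group == "b", 2.0, 0.0) + rng.normal(0, 1, 2000)
for g, means in need_by_risk_decile(score, need, group).items():
    print(g, np.round(means, 1))
```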

2.2 Critical Discussion: Interrogating Approaches to Algorithmic Bias in AI Systems

An examination of the terrain of algorithmic bias must include a critical debate over the major approaches to it, with their respective strengths and weaknesses. By weighing the arguments for and against each approach and contrasting several important points of view, it becomes possible to form a balanced perspective.
Addressing data bias lies at the root of discussions of algorithmic bias (Calo, 2016). Techniques such as data auditing and preprocessing, as shown in the work of Diakopoulos (2016), offer a proactive approach to detecting and rectifying bias in a dataset. If implemented properly, these strategies can contribute to more equitable results in AI. But data preprocessing is not without problems. Because data interpretation and the processes that generate bias are context-dependent and dynamic, general-purpose preprocessing approaches are difficult to construct (Barocas & Hardt, 2019). Also, as Mitchell et al. (2019) point out, the reinforcement of social prejudices through overcorrection is a significant limitation of these approaches. Striking a balance between removing bias and preserving the integrity and usefulness of the data is a constant dilemma.
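One standard preprocessing option is reweighing in the style of Kamiran and Calders, sketched below as an example of the genre; it is not necessarily the method proposed by the authors cited above.

```python
import numpy as np

def reweighing_weights(group, label):
    """Instance weights that make the protected group and the label
    statistically independent in the training data:

        weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)
    """
    group, label = np.asarray(group), np.asarray(label)
    weights = np.zeros(len(label), dtype=float)
    for g in np.unique(group):
        p_g = np.mean(group == g)
        for y in np.unique(label):
            p_y = np.mean(label == y)
            mask = (group == g) & (label == y)
            p_gy = np.mean(mask)
            if p_gy > 0:
                weights[mask] = (p_g * p_y) / p_gy  # upweight under-represented cells
    return weights

# The resulting weights can be passed as sample_weight to most
# scikit-learn-style estimators' fit() methods.
```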
Identifying design choices as sources of bias is one of this literature's great strengths (Diakopoulos, 2016). Through reviewing the design process, scholars can uncover biases in algorithms that are not otherwise apparent, a strength also demonstrated by the study of Caliskan et al. (2017). A shared problem, however, is how to settle on what fairness in design means at all, given its inherent subjectivity (Binns, 2018). Trying to codify fairness into metrics can oversimplify the complexity of ethical decisions. Furthermore, as Friedler et al. (2019) argue, the tensions between various notions of equity pose challenges that are hard to resolve. Design decisions, then, are often a matter of judgments that do not necessarily translate from one context to another.

Acknowledging the role socio-technical factors play widens the conversation beyond technical aspects (Coeckelbergh, 2020). Mittelstadt et al. (2016) recommend embracing ethical frameworks that recognize the effects of algorithmic systems at a wider societal scale. This perspective bolsters the argument for value-sensitive design, which holds that ethical ramifications must be incorporated into technology creation (Friedman and Nissenbaum, 1996). However, converting ethical principles into technical practice is fraught with difficulty. The inherent subjectivity of ethical judgments, and the potential for conflicting values injected by different societal realities, raise complex issues. Furthermore, socio-technical systems are dynamic and therefore require ongoing ethical understanding that no static model or framework can supply. The classification of bias types allows a nuanced comprehension of its different kinds. Incorporating individual-based, statistical, and causal measures together provides a general approach (Dwork et al., 2012; Kilbertus et al., 2017). This more subtle classification acknowledges the complicated nature of prejudice and does not subscribe to a one-size-fits-all philosophy. Nevertheless, the impossibility results (Chouldechova, 2017; Kleinberg et al., 2018) show that normative judgements about which measures should be given priority raise questions about the universality of fairness and the possible unintended consequences of such prioritization.
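To make the impossibility concrete, the short computation below uses the identity Chouldechova (2017) derives relating false positive rate, false negative rate, positive predictive value, and base rate; the numeric values are illustrative.

```python
# Chouldechova (2017) shows that within a group with outcome base rate p:
#
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
#
# Holding PPV (predictive parity) and FNR equal across two groups with
# different base rates therefore forces their false positive rates apart.
def implied_fpr(p: float, ppv: float, fnr: float) -> float:
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

for p in (0.3, 0.5):  # two groups, differing only in base rate
    print(f"base rate {p:.1f} -> FPR {implied_fpr(p, ppv=0.8, fnr=0.2):.3f}")
# base rate 0.3 -> FPR 0.086;  base rate 0.5 -> FPR 0.200
```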
The study of real-life cases offers grounded insight into the effects of biased results (Obermeyer et al., 2019; Buolamwini and Gebru, 2018). This approach anchors the discussion in practical outcomes, encouraging a wider view of biased algorithms' impact on society. The issue, though, is being able to generalize beyond particular cases: biases can present themselves differently in different contexts and domains (Dieterich et al., 2016). Whilst case studies can be illustrative, they may not represent all the biases present across varying applications, so it is necessary to maintain a balance between narrow depth and broad applicability. When these perspectives are amalgamated, a comprehensive concept of algorithmic bias surfaces. Each method provides valuable insights, yet none is without flaws. Understanding the connections among data, design elements, socio-technical aspects, bias types, and real-world outcomes requires an interdisciplinary approach and collaborative effort. Approaches whose strengths complement the weaknesses of others will guide researchers and practitioners through the territory of algorithmic bias with due humility, not just at deployment but throughout a system's life.

References

1. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org.

2. Binns, R. (2018). "Fairness in machine learning: Lessons from political philosophy." arXiv
preprint arXiv:1712.03586.

3. Buolamwini, J., & Gebru, T. (2018). "Gender shades: Intersectional accuracy disparities in
commercial gender classification." In Proceedings of the 1st Conference on Fairness,
Accountability and Transparency (pp. 77-91).

4. Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). "Semantics derived automatically from
language corpora contain human-like biases." Science, 356(6334), 183-186.
5. Chouldechova, A. (2017). "Fair prediction with disparate impact: A study of bias in
recidivism prediction instruments." Big Data, 5(2), 153-163.

6. Coeckelbergh, M. (2020). "AI ethics: beyond principles." AI & SOCIETY, 35(2), 399-406.

7. Crawford, K., & Calo, R. (2016). "There is a blind spot in AI research." Nature News,
538(7625), 311.

8. Diakopoulos, N. (2016). "Accountability in algorithmic decision making." Communications of the ACM, 59(2), 56-62.

9. Dieterich, W., Mendoza, C., & Brennan, T. (2016). "COMPAS risk scales: Demonstrating
accuracy equity and predictive parity." Northpointe Inc, 1(2), 1-21.

10. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). "Fairness through awareness." In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214-226).

11. Friedman, B., & Nissenbaum, H. (1996). "Bias in computer systems." ACM Transactions on
Information Systems (TOIS), 14(3), 330-347.

12. Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2019). "On the (im)possibility of fairness." arXiv preprint arXiv:1609.07236.

13. Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). "The ethics of
algorithms: Mapping the debate." Big Data & Society, 3(2), 2053951716679679.

14. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., & Hutchinson, B. (2019).
"Model cards for model reporting." In Proceedings of the conference on fairness,
accountability, and transparency (pp. 220-229).

15. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). "Dissecting racial bias in
an algorithm used to manage the health of populations." Science, 366(6464), 447-453.

16. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). "Fairness and abstraction in sociotechnical systems." In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).

17. Van Wynsberghe, A., & Robbins, S. (2019). "Critiquing the Reasons for Making Artificial Moral Agents." In The Oxford Handbook of Ethics of AI.
