
10:00 PM - 10:35 PM

Sam Charrington: Alright everyone. I am on the line with Abeba Birhane. Abeba is a PhD student at
University College Dublin. Abeba, welcome to the TWIML Podcast.

Abeba Birhane: Thank you so much for having me, Sam.

Sam Charrington: I'm really excited about this conversation; we had an opportunity to meet in person
after a long while of interacting on Twitter, at the most recent NeurIPS conference, in particular the Black
in AI workshop, where you not only presented your paper, Algorithmic Injustices: Towards a Relational
Ethics, but won best paper there. So I'm looking forward to digging into that and some other topics. But
before we do that, I would love to hear you share a little bit about your background. And I will mention,
for folks hearing the sirens in the background: while I mentioned that you are from University College
Dublin, you happen to be in New York right now at the AIES conference, held in association with AAAI,
and as folks might know, it's hard to avoid sirens and construction in New York City. So just consider
that ambient background sound.

Abeba Birhane: Thank you.

Sam Charrington: So, your background?

Abeba Birhane: Yes.

Sam Charrington: How did you get started working in AI Ethics?

Abeba Birhane: So my background is in cognitive science, particularly a part of cognitive science
called embodied cognitive science, which has its roots in cybernetics and systems thinking. The idea is
to focus on the social, the cultural, and the historical, and to view cognition in continuity with the world,
with historical backgrounds and all that, as opposed to the traditional approach to cognition, which
treats cognition as something located in the brain, something formalizable, something that can be
computed. So that's my background. Even during my master's, I leaned towards the AI side of cognitive
science, and the more I dove into it, the more I was attracted to the ethics side, to injustices, to the
social issues. So the further the PhD goes on, the more I find myself on the ethics side.

Sam Charrington: Was there a particular point at which you realized that you were really excited about
the ethics part in particular? Or did it just evolve for you?

Abeba Birhane: I think it just evolved. When I started out, at the end of my master's and the start of the
PhD, my idea was that we have this relatively new school of thinking, which is embodied cog sci, which
I like very much because it emphasizes ambiguities and messiness and contingencies as opposed to
drawing clean boundaries. I like the idea of redefining cognition as something relational, something
inherently social, something that is continually impacted and influenced by other people and the
technologies we use. So the technology aspect was my interest. Initially the idea was, yes, technology
constitutes an aspect of our cognition. You have the famous 1998 thesis by Andy Clark and David
Chalmers, "The Extended Mind," where they claimed the iPhone is an extension of your mind. So you
can think of it that way, and I was kind of advancing the same line of thought. But the more I dove into
it, the more I saw: yes, digital technology, whether it's ubiquitous computing such as face recognition
systems on the street or your phone, whatever, does impact and continually shape and reshape our
cognition and what it means to exist in the world. But what became more and more clear to me is that
not everybody is impacted equally. The more privileged you are, the more in control you are of what
can influence you and what you can avoid. So that's where I became more and more involved with the
ethics of computation and its impact on cognition.

Sam Charrington: The notion of privilege is something that flows throughout the work that you
presented at Black in AI, the Algorithmic Injustices paper, and this idea, this construct, of relational
ethics. What is relational ethics, and what are you getting at with it?

Abeba Birhane: Yeah. So relational ethics is actually not a new thing; a lot of people have theorized
about it and written about it. But the way I'm approaching it, the way I'm using it, I guess kind of springs
from this frustration that for many folks who talk about AI ethics or fairness or justice, most of it comes
down to constructing this neat formulation of fairness, or a mathematical calculation of who should be
included and who should be excluded, what kind of data we need, that sort of stuff. So for me, relational
ethics says, let's leave that aside for a little bit and zoom out and see the bigger picture. Instead of
using technology to solve the problems that emerge from technology itself, which means centering
technology, let's instead center the people, especially the people that are disproportionately impacted
by the limitations or the problems that arise with the development and implementation of technology.
There is a robust body of research, you can call it AI fairness or algorithmic injustice, and the pattern is
that the more you are at the bottom of the intersectional level, which means the further away you are
from your stereotypical white cisgendered male, the bigger the negative impacts are on you, whether
it's classification or categorization, or whether it's being scaled and scored by hiring algorithms, or
looking for housing, or anything like that. The more you move away from that stereotypical category,
the heavier the impact is on you. So the idea of relational ethics is to think from that perspective, to
take that as a starting point. So, these are the groups, or these are the...
