
CONCLUSION:

In particular, we see that many computer science research publications rely on relatively abstract problem operationalizations, with no discussion of concerns such as whether the underlying normative assumptions are justified, or whether the proposal is fair in the context of the given application. By examining over 40 recent CS papers, we inquire into: (i) What types of fairness in Content RS have been recognized and defined? (ii) What kinds of application scenarios do researchers focus on, and how do they operationalize the research problem in terms of methodology, algorithms, and metrics? Using this information, we sketch out the current state of research in several key areas and analyze its possible limitations and future prospects.

Such historical developments are often overlooked in the context of algorithmic fairness, the focus of our current work, despite the fact that the real reason certain characteristics are regarded as protected is historical discrimination or subordination, for which redress is necessary. We refer the reader to the social sciences literature for a discussion of how fairness is understood there, and to Mulligan for a definition of fairness in machine learning and ranking algorithms. Because this work focuses on analyzing how researchers in recommender systems operationalize the research problem, a thorough examination of these varying and frequently conflicting concepts of fairness is outside its purview.

Highly abstract operationalizations (e.g., in the form of fairness measures) are widely proposed by researchers; this has previously been noted as a potential fundamental challenge in the larger field of fair ML. We therefore wanted to know what broad categories of contributions to the topic of fair recommendation can be found in the computer science and information systems literature. As we have already established, the issue of biased data is a prominent topic in the field of recommender systems. Since the concept of group fairness is foundational to many of the main fairness issues discussed in the literature, this finding may have social implications for concerns such as gender equality and demographic fairness. However, the literature on group recommender systems, which has been revitalized under the term fairness, offers no definitive answer to the question of which aggregation metric should be employed in a specific application.
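To make the aggregation-metric question concrete, the following is a minimal illustrative sketch (not taken from any of the surveyed papers) of two commonly discussed strategies for scoring a candidate item for a group: utilitarian averaging versus least misery. The function names and the example ratings are hypothetical.

```python
# Two common aggregation strategies for group recommender systems.
# Input: per-user predicted ratings for one candidate item.

def average_aggregation(ratings):
    """Utilitarian: maximize the group's mean satisfaction."""
    return sum(ratings) / len(ratings)

def least_misery_aggregation(ratings):
    """Fairness-oriented: the group is only as happy as its least happy member."""
    return min(ratings)

# Hypothetical predicted ratings of three group members for one item.
ratings = [5.0, 4.5, 1.0]
print(average_aggregation(ratings))       # 3.5
print(least_misery_aggregation(ratings))  # 1.0
```

The example shows why no single metric is definitive: the averaged score suggests the item suits the group well, while the least-misery score reveals that one member would be poorly served, a tension that depends entirely on the application context.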
