
Criminal Sentencing Software

1. What is the emerging ethical dilemma all about?


• Some of the ethical dilemmas of using artificial intelligence to address
criminal justice issues are familiar to anyone who watched “Person of
Interest.” The CBS science-fiction show revolved around the efforts of a
team of human beings and “The Machine” — an artificial superintelligence —
to stop crimes before they could happen.
• In the real world of criminal justice and the legal system,
though, problems not anticipated by “Person of Interest” are
cropping up as algorithms are used to predict criminal
behavior. Where The Machine was relentlessly rational and
unfailing (unless being interfered with), real-world machines
are increasingly facing questions about whether they produce
outcomes just as biased as the humans who build them.
2. What factor or event led to this
dilemma?
• The algorithm, developed by a private company called
Northpointe, had determined that Loomis, a Wisconsin defendant, was
at "high risk" of running afoul of the law again. Car insurers base
their premiums on the same sorts of models, using a person's driving
record, gender, age and other factors to calculate their risk of
having an accident in the future.
• Loomis challenged the sentencing decision, arguing that the
algorithm's proprietary nature made it difficult or impossible to
know why it spat out the result it did, or to dispute its accuracy,
thus violating his right to due process.
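The bullets above describe a factor-based risk model. A minimal sketch of the idea is below — the factors, weights, and thresholds are entirely invented for illustration and do NOT reflect Northpointe's proprietary model; the point is that when such weights are kept secret, a defendant cannot inspect or dispute how the "high risk" label was produced.

```python
# Hypothetical factor-based risk score (NOT Northpointe's actual model).
# All weights and cutoffs are made up for illustration.

def risk_score(prior_offenses, age, employed):
    """Combine a few factors into one number (higher = riskier)."""
    score = 0
    score += prior_offenses * 10        # each prior offense adds risk
    score += max(0, 30 - age)           # younger defendants score higher
    score -= 15 if employed else 0      # employment lowers the score
    return score

def risk_band(score):
    """Map the numeric score onto the coarse bands a court might see."""
    if score >= 40:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

print(risk_band(risk_score(prior_offenses=4, age=22, employed=False)))  # high
print(risk_band(risk_score(prior_offenses=0, age=45, employed=True)))   # low
```

If only the final band ("high"/"medium"/"low") is disclosed and the weights stay secret, the computation above is a black box to the person it judges — which is exactly the due-process objection raised in the case.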
3. What are the societal implications of this dilemma?

• Algorithms silently structure our lives. They can determine whether someone is hired,
promoted, offered a loan, or provided housing, as well as which political ads and news
articles consumers see. Yet the responsibility for algorithms in these important decisions
is not clear: do developers remain responsible for their algorithms later in use, what are
those firms responsible for, and what is the normative grounding for that responsibility?
• Algorithms are value-laden rather than neutral: they create moral consequences,
reinforce or undercut ethical principles, and enable or diminish stakeholder rights and
dignity.
• Because algorithms act within ethical decisions and influence the delegation of roles and
responsibilities inside those decisions, firms should be responsible not only for the
value-laden-ness of an algorithm but also for designing who does what within the
algorithmic decision, including how large a role individuals will be permitted to take.
• Counter to current arguments, if an algorithm is designed to preclude individuals from
taking responsibility within a decision, then the designer of the algorithm should be held
accountable for the ethical implications of the algorithm in use.
4. Why is it important to question the moral
and ethical issues surrounding innovations in
Science and Technology?
• Scientists need to integrate scientific values with other ethical
and social values. Science can help identify unforeseen
consequences or causal relationships where ethical values or
principles are relevant. In addition, individuals need reliable
knowledge for making informed decisions.
5. In the face of this dilemma, why is it important to
study STS?

• An important aspect of the STS movement is that science
teaching must go beyond teaching information and help
students clarify their values, develop the skills to take action on
issues, and learn how to discuss the moral and ethical
implications of science.
THANK YOU!
Members

• Donna Mae Dela Cruz


• Monica M Ortiz
• Nori Jean Icamen
• Reca Ternida
• BSCE 1-D
