
LOP

Martha Ilagan
The policy that the government is trying to push through is impossible at best and dangerous at worst. We in the opposition believe that, even if such technology exists, Artificial Intelligence should not be used to operationalize the entire criminal justice system.
Lest the opposition be labelled as anti-technology or anti-progress, we would like to clarify two points early on in this debate. FIRST, we in the opposition see the proper integration of AI as extremely beneficial to the improvement of the criminal justice system. Clerical work, the optimization of court schedules, the determination of jurisdiction, and the quick indexing of case law, among others, are tasks that can be delegated to Artificial Intelligence. This harmonious relationship between man and machine will create a more efficient criminal justice system, relieving courts of their clogged dockets and allowing for the speedy disposition of cases, as the Rules of Court mandate.
SECOND, the opposition concedes the status quo presented by the government: our current criminal justice system is rife with corruption, inefficiency, and error. We in the opposition are not blind to the shortcomings of our system. Going back to our first point, this is why we in the opposition believe that integrating AI into various aspects of the criminal justice system will best serve the interests of our constituents, without compromising our Constitution and the freedoms it provides.
However, this is where the similarities between Government and Opposition end. We believe that AI should not operationalize the entirety of the criminal justice system, to the point that a program will collate data and then hear and decide cases based on that data, as the government proposes.
We will ground our arguments on these points: I will tackle the unconstitutionality of
Government’s proposition. Our deputy speaker, Donna, will tackle how Government’s
proposition is an invalid exercise of Police Power, and how it violates our substantive as well as
procedural laws, among others.
Before I go into my substantives, I will rebut the points given by the government on why such a setup will better our criminal justice system.
FIRST, Government asserts that AI will create a “perfect” system, so infallible in every way that human intervention need not be present. What Government failed to consider is how complex the criminal justice system is: it is not only a matter of submitting pleadings and deciding upon cases. Rather, it runs from the initiatory complaint up to the promulgation of sentence and even appeal.
This is a delusional reliance on technology we merely assume to have, as Artificial Intelligence can only do so much. It cannot determine the admissibility of evidence, it cannot know whether a person under oath is lying, and it cannot decide a case for which there is no precedent, among others. These are all things at which the human mind is best. To rely on a machine to handle such a complex process in its entirety is a disservice to those who rely on the system.
Next, the government also asserts that with Artificial Intelligence, there will be a less corrupt justice system. We answer that in the negative. A program, no matter how independent it purports to be, is still made by people, and people can be bribed, coerced, and made to cater to the interests of those who can pay their way through the system. The practice of bribing judges and fiscals will simply be replaced by bribing technicians, programmers, and IT personnel. Government's assertion does not stand.
Lastly, Government posits that judges will be safer, and that their families will be more protected, if AI were to decide cases. We will answer this point on two levels. First level: assuming but not conceding the government's proposition, the security of those involved in the creation of the AI will definitely be at risk of being compromised. As we explained before, Government is simply deflecting the gun from one group of persons and pointing it at another. There is no systemic change in their paradigm. The second level of argumentation is this: with such a grand, autonomous model as they propose, what is the role of judges anyway?
This brings us to our first substantive argument, regarding the constitutionality of the
use of AI in the criminal justice system’s entirety.
Article VIII, Section 7 of the 1987 Constitution states that “no person shall be appointed
Member of the Supreme Court or any lower collegiate court unless he is a natural-born citizen
of the Philippines. A Member of the Supreme Court must be at least forty years of age, and
must have been for fifteen years or more, a judge of a lower court or engaged in the practice of
law in the Philippines.” An AI cannot possibly fulfill these requirements, as in the first place, it is
not even a person by any stretch of our imagination. Furthermore, the Constitution requires
that “a Member of the Judiciary must be a person of proven competence, integrity, probity, and
independence.” An AI cannot possibly possess these traits as it is a mere program.
Government's cited advantages of AI, namely impartiality, disconnection from emotion, and lack of bias, cannot therefore be appreciated vis-à-vis the requirements of the Constitution.
As jurisprudence has held over and over again, a court must be two things: impartial and competent. While AI can be impartial in some aspects, in no way can AI fulfill the competencies required by the Constitution. We should not be swayed into thinking that technology is the panacea for our justice system; rather, we should see it as something that complements and enhances the human resources we already have. For all this and more, we are proud to oppose.
DLOP
Donna Atienza
In an era of rapid progress in intelligent machines and Artificial Intelligence, the way we live is shifting. AI is found in almost every aspect of our lives, from modes of communication to education, finance, and government service, and the list goes on.

The motion before us today calls for the application of Artificial Intelligence in the entire Criminal Justice System. We concede to the extent that it has overwhelming benefits in terms of advanced tools such as video/CCTV, biometric systems, and the like. But why is it not acceptable to operationalize the entire Criminal Justice System using AI?

Two main arguments from the Opposition Extension:


1. It will be subject to a higher risk of inaccuracies, and hence will do more harm.
2. Even assuming it will be efficient at some point, it is still not a feasible trade-off, as it is violative of the right to privacy.
Rebuttals/Responses…
Arguments:
1. While we recognize the advanced speed and algorithms of this technology, there are vital aspects in which it cannot defeat human judgment, including the difference between how a human being and an AI appreciate substantive evidence.

Take, for instance, the “Enforcement” pillar of the Criminal Justice System: studies have shown that some of the major facial recognition systems are inaccurate. Amazon's software misidentified 28 members of Congress, matching them with criminal mugshots. These inaccuracies tend to be far worse for people of color and for women. And the danger is not limited to one pillar of the justice system.

But even assuming that the AI will work at the said “Enforcement” stage, it will still be problematic in the other pillars of the Criminal Justice System, namely Prosecution and the Courts. Take what happened in Columbia: as the case approached sentencing, the prosecutor agreed that probation would be a fair punishment, but at the last minute the defendant was deemed a “high risk” for criminal activity. This was determined by a criminal-sentencing AI, an algorithm that uses data about a defendant to estimate his or her likelihood of committing a future crime. Consequently, they took probation off the table, insisting instead that the accused be placed in juvenile detention. It was later found that the defendant's heightened risk assessment was based on several factors that seemed racially biased, including the fact that he lived in government-subsidized housing and had expressed negative attitudes toward the police.

Algorithms also play a quiet and often devastating role in almost every element of the criminal justice system, from policing and bail to sentencing and parole. By turning to computers, many states and cities are putting people's fates in the hands of algorithms that may be nothing more than mathematical expressions of underlying bias. That is the reason why we cannot put the entire operation of the Criminal Justice System into the hands of AI.
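
To make concrete what a "mathematical expression of underlying bias" can look like, here is a minimal, purely hypothetical sketch in Python. The feature names, weights, and threshold are our own illustrative assumptions, not the actual tool referred to in the case above; the point is only that a defendant with no prior offenses can be flagged high risk on proxy attributes alone.

# Hypothetical, simplified recidivism "risk score" -- an illustrative assumption,
# not the actual algorithm referenced in the case above.
RISK_WEIGHTS = {
    "prior_offenses": 1.0,               # each prior offense adds 1.0
    "lives_in_subsidized_housing": 2.0,  # proxy for poverty, not for conduct
    "negative_attitude_to_police": 1.5,  # subjective interview answer
}
HIGH_RISK_THRESHOLD = 3.0

def risk_score(defendant):
    """Weighted sum of the defendant's recorded attributes."""
    return sum(RISK_WEIGHTS[k] * defendant.get(k, 0) for k in RISK_WEIGHTS)

def is_high_risk(defendant):
    return risk_score(defendant) >= HIGH_RISK_THRESHOLD

# A first-time offender is flagged "high risk" purely on proxy factors:
juvenile = {"prior_offenses": 0,
            "lives_in_subsidized_housing": 1,
            "negative_attitude_to_police": 1}
print(risk_score(juvenile), is_high_risk(juvenile))  # 3.5 True

Nothing in such a score measures what the person actually did; the bias enters through the choice of features and weights, which is exactly why handing the entire system to such expressions is dangerous.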
2. Even assuming the AI has close to perfect accuracy, it is still not going to be feasible, for being violative of the right to privacy. This right is enshrined not just in our Constitution, but in the RPC, the Civil Code, and various international covenants such as the UN Universal Declaration of Human Rights and the ICCPR. The reason this right is so emphasized in many provisions of the law is that it is a fundamental human right which must be given protection.

Activities that restrict the right to privacy, such as surveillance and censorship, can only be justified when they are prescribed by law, necessary to achieve a legitimate aim, and proportionate to the aim pursued. The danger of legally applying AI across the entire system is that it will be easier for the authorities to allow interception whenever an activity is labeled as being in relation to cases involving specific alleged crimes.

While we recognize the need to improve our justice system, it is our submission that this should be done in a way that is more beneficial and within the bounds of constitutionally protected rights.
Thank you.

WHIP
Kaye Ande
The reason penal laws are construed in favor of the accused, crimes are aggravated or mitigated, and the different circumstances of each case are necessarily weighed on their intrinsic value is that the state recognizes that there are human beings behind these cases. The criminal justice system is better off because of the dynamic nature of evidence, and it cannot alternatively be operationalized by AI, which further dehumanizes due process.

For my issue: Where is the criminal justice system better off?

The Government bench pushes a narrative that AI operationalizing the CJS leads to more objective decisions because, according to them, human emotions usually get in the way. However, the opposition strays away from this narrative. In appreciating evidence, much more is to be considered than simple facts: equity in peculiar cases, the different procedures for appeals, and the like. This is the risk: we cannot dehumanize the CJS, because objective decisions cannot simply flow from previous decisions and court orders, given the intrinsic circumstances of each case.

Even if they say that AI operationalizing the CJS leads to objective decisions and a speedy trial, the CJS is better off under our policy, because they have never considered that in particular cases there needs to be an appreciation of evidence by agents of the court in justifying a crime, for example. These are direct harms under their model, which they never addressed. Impartial decisions, which are the very intention of the courts, are compromised.

Secondly, the Government side's model cannot work. They claimed that AI's perfect capability to ensure a speedy trial makes it comparatively better, yet they failed to characterize how their system would operate. They only told us that the AI system simply records and applies previous jurisprudence. The opposition was already clear when we said that impartial decisions matter, and we cannot see this in the type of input and computation system they provided, because due process is required to be both procedural and substantive. They never answered this engagement.
To conclude, operationalizing the CJS rests on its dynamic, human character, which cannot be achieved by an AI that simply allows the process to rely entirely on court records. To empower the CJS and other stakeholders, due process should not be compromised simply because, as they say, judges can be bribed and cases take too long to be decided. They failed to characterize how this can be totally achieved under their model.
