
Transparency, AI, and the Future of Chinese Courts: A Discussion with Professor Benjamin Liebman


jtl.columbia.edu/bulletin-blog/transparency-ai-and-the-future-of-chinese-courts-a-discussion-with-professor-benjamin-liebman

March 13, 2021

By: Tim Wang, staff member

Benjamin Liebman is the Robert L. Lieff Professor of Law at Columbia Law School. He
is also the director of both the Hong Yen Chang Center for Chinese Legal Studies and
the Parker School of Foreign and Comparative Law. Widely regarded as a preeminent
scholar of contemporary Chinese law, Professor Liebman studies Chinese court
judgments, Chinese tort law, Chinese criminal procedure, and the evolution of China’s
courts. He has also recently written about the roles of artificial intelligence and big
data in the Chinese legal system, including in a forthcoming article in Volume 59, Issue
3 of the Columbia Journal of Transnational Law titled Automatic Fairness? Artificial
Intelligence in the Chinese Courts. I sat down with Professor Liebman to discuss the
implications of big data and artificial intelligence for the future of Chinese, as well as
other, legal systems.

***

This interview has been edited for clarity and length.

The top-down directive from the Supreme People’s Court to publicize court
documents has led to more than 100 million court documents becoming
public. Yet China’s court transparency is still relatively incomplete. In your
article, you note that this spotty compliance is in part due to the not
unfamiliar power joust between China’s central and local actors. What are
some of the disincentives that might dissuade local court officials
from publishing court documents and opinions online?

First, I think it’s pretty remarkable that they’ve put 100 million documents online. The
Supreme People’s Court in particular deserves credit for doing so because this is not
what you generally expect an authoritarian legal system to do. Even if it isn’t full
transparency, this level of transparency is pretty significant. We should remember that
there’s also a big chunk of cases, such as mediation and family law cases, that aren’t
supposed to go online. So the set of cases that are not online but are supposed to be
online is smaller than the overall number of missing cases. That said, we have found that
noncompliance is due to a couple of factors. First, as we outlined in our previous piece, Mass Digitization of Chinese Court Decisions: How to Use Text as Data in the Field of Chinese Law, some noncompliance isn’t necessarily for a political purpose or because officials want to hide the ball. Rather, there is bureaucratic sluggishness: digitizing
documents is hard, takes time, and requires resources. Publicizing court documents has
been a target for evaluating courts but hasn’t been a hard target, and so especially in the
early years this wasn’t as important a goal for local courts as it was for higher-level
courts. Now, that said, there’s also pretty good evidence that more sensitive cases, for
example, those that involve powerful or influential companies, don’t get posted online.
But I don’t know that there are concerted efforts by local courts to push back against the
requirement to post things online due to purely local interests. Rather, there’s
bureaucratic inertia, and courts may sometimes face pressure from other parties not to
put things online.

The CCP endorsed judicial transparency in part because of social and media
outrage over wrongly-decided cases. What has been the public response to
the public release of millions of court decisions? Has it garnered more
legitimacy for the CCP and the courts? And has it helped the media and
other civic institutions check the judicial process?

Public opinion is very difficult to measure in China. I don’t think we know an answer or
have a way of figuring out an answer. I think the public court documents have mainly
been used in the legal community, both by lawyers and the legal tech industry in China.
So I’d say it’s been used more in the legal community than the broader community.
There certainly was some concern at the time these policies were implemented that the
documents could be used to criticize the courts and make them look bad, but I just don’t
think we’ve seen that. There isn’t much evidence suggesting that there are downsides to
the courts so far.

If you talk to lawyers and judges in China, they will tell you that increased transparency
is useful in multiple ways. Part of it is that the judges will be more careful, mistakes will
be noticed, and some of the things we might have seen in earlier periods -- where courts
are clearly just not following the law -- become much harder to do. Transparency does
make courts pay more attention. It also can be helpful for protecting the courts because
judges facing external interference can now say: “look, this is going online, I can’t help
you even if I wanted to.” So the fact that these cases will see sunshine makes it easier for
courts to push back against impermissible forms of interference. And, of course, this
dovetails with the CCP’s dramatic crackdown on corruption in and interference with the
courts that we’ve seen in recent years.

I’ll also point out that the policy accords with the central leadership’s recent
emphasis on big data and artificial intelligence. So it’s not just about public support, but
also about the courts aligning themselves with the core policy goals of the central Party-
state.

Does the adoption of the algorithm-assisted system mean that China is moving towards a de facto precedent-based system? How might an algorithm-assisted system differ from the system of stare decisis in the United States and elsewhere?

You’re right that China is not officially a precedent-based system. But all systems rely on
some form of precedent. Even before AI, it was pretty common for courts and lawyers to
look to other similar cases informally.

AI is going to be the most useful in the cases where it’s least needed: it can help with high-throughput, routine cases by standardizing those cases a bit more. A lot of those
cases were already pretty standardized -- for example, judges already know what a lot of
the sentences in criminal cases are supposed to be. But there’s still a perception that
courts are undermined by inconsistent rulings. I think part of the goal of AI then is to
move towards increased standardization of court decisions. And maybe it also helps
move cases more quickly through the system. I would call this more standardization
than creating precedent. It’s more that courts are trying to ensure that like cases are
decided in like fashion, rather than the courts binding themselves to the decisions and
reasoning of past judicial opinions.

Some scholars argue that Chinese courts inherently serve political purposes
and therefore need the flexibility to depart from legal considerations for
political ones. Isn’t an algorithm-assisted system that binds a court in
tension with that narrative?

This is exactly right and we discuss this tension in our paper as a central challenge.
Courts in China are not just machines that apply the law; instead, they are supposed to
consider a broader range of social goals and sometimes political factors as well. Given
that, it’s hard to imagine how an algorithm can account for those factors. We’ve seen
some shift away from the fairness-based rationales for decisions, including in revisions
made in the new civil code, but courts are still supposed to consider the possibility of
social unrest or the need to support indigent or weaker parties. So it’s hard to see how
algorithms are going to take these considerations into account. The question thus
becomes: is AI also going to be used as a tool to push courts away from judging that takes
account of factors such as equity and social stability or are algorithms going to have to
give way to these types of social considerations in a wide range of cases? It’s too early
now to tell which way the central court leadership is leaning on this issue.

Laws can be amended. And in China, there are also shifting political
directives that may alter the way certain legal problems are dealt with. For
example, the strike-hard campaigns (hard crackdown on crime) in the
1990s were followed by a shift towards the “kill fewer, kill cautiously” policy
in 2007. How would an algorithm-assisted system adapt to such legal or
policy changes?

This goes to the question of how these AI algorithms are created. My understanding is
that there are two different ways of doing it. One is machine learning based on prior
decisions, which raises the question of whether the algorithm is baking in cases that
might not have been accurate. You would need some human review of the cases that are
being relied upon. You can also have people come up with decision trees — which is
happening in Shanghai — that try to map out in effect what should happen in each
situation. In either case, when you have big policy changes, you’re going to have to figure out a way to translate that into the algorithms. I think what judges would say is that right
now and in the near future, they don’t expect to be replaced by algorithms. Rather,
algorithms are used as a tool to identify outliers and make sure that decisions are
roughly in line with similar cases. But even so, if there is a policy shift then they will
absolutely have to figure out a way to reflect that in the algorithms.

You discuss how the proliferation of AI potentially creates inequality because those with more resources are the ones able to crunch big data. Do you think this is just the inevitable result of judicial systems that already favor those with more resources? To the extent AI exacerbates this inequality further, do you have any proposals for narrowing the gap?

The concerns we raise about equity are about the availability of data generally and who’s
using it, which is a common problem around the world. If data can be marshaled to
predict outcomes, then absolutely the availability of AI would exacerbate this disparity. I
don’t think that’s a reason for restricting the flow of data, but it requires us to think
harder about how to ensure equity for litigants. Part of the idea for publicizing these
documents was to make them available to ordinary people. But there is an inevitable
trend that once there’s a lot of data out there, anyone who has the ability to process the
data is going to have an advantage. There’s a lot of litigation consulting that goes on
based on data analysis, which allows folks with more resources to have better
information.

Another point about equity is that once you have the black box of AI being deployed, it
shifts power to the algorithm writer. How do we check if the algorithm is correct? This
question isn’t even being asked in China yet. This is a problem in the United States as
well, but we also have media and civil society to act as a check to some degree. If you look
at some of the work that’s being done on predictive software in the U.S. courts -- for
example, on predicting dangerousness for parole determinations -- that work is being
done by academics, civil society, and the media. You need somebody to scrutinize and
check these algorithms. But China doesn’t have those same voices serving as a check.
That makes the inequalities worse because there are fewer autonomous and robust civil
society actors that serve as a counterbalance to the shift in power to the corporations
writing these algorithms.

You also note in your article that the SPC has recently limited the number of
results that an individual can view on the website. Why have they done this?

It’s a little unclear why the SPC is doing this. They say that it is to limit companies from
scraping data. Apparently, there is so much scraping going on that it has impaired the
ability of ordinary users to use the website. But I think they also view the marketization
of this data with some concern. It’s obvious they aren’t completely comfortable with
companies marketizing and repackaging the data. Perhaps they are okay with the data
being used in individual cases but want greater oversight when the information is used in
large datasets.

Unlike China, other countries’ judicial systems do not have a supervisory
arm embedded within the courts to monitor judicial behavior. So is the
centralization of power away from the judiciary through AI a problem
unique to China? How might executives in other countries, where the
separation of powers is institutionalized or a constitutional mandate,
encroach upon judicial spheres through the use of AI?

We’ve seen in the West a dramatic increase in executive power, especially driven by the
rise of populist leaders. And that’s at the expense of other branches -- we’ve seen the
weakening of the courts and legislature in our own country. That’s one form of
centralization of power.

In China, AI is being imposed on the courts by the courts themselves. But one could
certainly envision executives in other countries rolling out AI as a way to curtail the
courts, perhaps as a way over time to restrict the judiciary’s ability to independently
adjudicate cases or just generally weaken the position of courts. We haven’t seen this,
but we think there needs to be a conversation about the broad range of potential
constraints on judicial authority in different jurisdictions. And as we continue to think
about the toolbox that authoritarian or quasi-authoritarian leaders have available to
constrain the courts, AI should be a part of the conversation along with some of the other
things that have gotten more attention to date.

To conclude, I think that this is an area where China is leading the world. First, there are
certain things that can be done with technology in China that can’t be done elsewhere,
sometimes because there isn’t that institutional history or sometimes because the
institutions have different backgrounds or are weak. We don’t think what happens in
China will happen elsewhere necessarily, but we do think that China will have a more
rapid rollout of AI in the courts compared to other countries. There may be some
important lessons to learn from China’s experience, both regarding what’s possible and
also the implications.

Tim Wang is a second-year student at Columbia Law School and a Staff member of the
Columbia Journal of Transnational Law. He graduated from Columbia College in 2017.
