
THE USE OF AI TECHNOLOGIES IN THE TAX LAW DOMAIN:

AN INTERDISCIPLINARY ANALYSIS

Abstract: The article provides an interdisciplinary overview of current AI applications in the tax law domain and reviews the most relevant legal issues. First, it provides key clarifications concerning AI applications and their history in the tax law domain. Then, it reviews some AI
applications and possible uses for the taxpayer, the tax administration, and the tax judiciary. For
each area of use, it highlights both the potential of AI and the legal matters at stake. The final aim
is to offer an insightful and comprehensive framework for future reflection on the use of AI
technologies in tax law.

SUMMARY: 1. Introduction – 2. Technological background – 2.1. Definition of Artificial Intelligence and some clarifications – 2.2. AI and tax law: from the origins to present success – 3. AI for taxpayers – 3.1. Fields of Application – 3.1.1. Knowledge of tax law – 3.1.2. Tax accounting – 3.1.3. Outcome prediction for the taxpayer – 3.2. AI between aggressive tax planning and cooperative compliance – 4. AI for the tax administration – 4.1. Fields of application – 4.1.1. Tax investigation – 4.1.2. Fraud and evasion detection – 4.1.3. Collaborative tax compliance – 4.2. Legal issues concerning the use of AI – 4.3. Taxpayers’ privacy and data protection – 4.4. Algorithmic discrimination and the impartiality of tax administration – 4.5. Duty to state reasons – 4.6. Accountability – 5. AI for the Tax Judiciary – 5.1. Possible Uses – 5.1.1. Organising legal information – 5.1.2. Judicial advising – 5.1.3. Outcome prediction for judges – 5.2. Impact on fair tax process – 5.3. Impact on the tax judiciary – 6. Conclusions.

1. - Introduction (1)
The use of Artificial Intelligence (henceforth, AI) in the context of tax
law is currently at the centre of a wide-ranging debate. The increasing
willingness to use AI technologies in the public sector after the Covid-19
pandemic, which was made explicit by European (2) and national docu-
ments (3), has brought the topic into the spotlight and has aroused interest
among tax scholars (4).

(1) Although the contribution is the result of a joint work, Sections 3 and 4 are attributed to Alessia Fidelangeli and Sections 2 and 5 to Federico Galli. The introduction, Paragraph 4.3, and the conclusions are the outcome of a shared reflection.
(2) Such as the von der Leyen Political Guidelines (“A Union that strives for more”) and the White Paper on Artificial Intelligence – A European approach to excellence and trust, Brussels, 19.2.2020, COM(2020) 65 final.
(3) Such as the Italian National Recovery and Resilience Plan (“Piano Nazionale di Ripresa e Resilienza”, PNRR).
(4) See ex multis, B. Kuzniacki, The marriage of artificial intelligence and tax law: Past, Present, and Future, in Kluwer International Tax Blog, 2019, January 25; B. Alarie, A. Niblett, A. H. Yoon, Law in the future, in University of Toronto Law Journal, 2016, 66, 4, 423-428; L. de Lima Carvalho, Spiritus Ex Machina: Addressing the Unique BEPS Issues of Autonomous Artificial Intelligence by Using ‘Personality’ and ‘Residence’, in Intertax, 2019,

Diritto e pratica tributaria internazionale 1/2022

dottrina 119

On the one hand, AI technologies hold the promise of a faster and more efficient tax system. Indeed, intelligent automated systems process the large amount of information currently available
in national tax systems and use the resulting knowledge to improve deci-
sion-making and the fairness of the tax system. On the other hand, the use
of AI entails many risks for individuals and institutions involved in the tax
system. These risks can arise from the misuse of technology, the opacity
and bias of current AI technologies, and the improper interaction between
humans and automated systems.
Against this background, this article provides an overview of AI applications in different tax domains and the legal challenges stemming from their use.
While existing tax legal scholarship has focused on the impact of AI as
a support tool and as a substitute for decision-making (5), the present
study opts for a different methodology. First, it enhances knowledge of
AI technologies to realistically consider how tax law actors can use AI
depending on their functions and roles; only then does it assess the relevant legal issues sector by sector. This approach calls for an interdisciplinary perspective for various reasons. First, looking at how AI is
applied in the tax law domain requires understanding what AI is and
how it works. We believe that such knowledge is necessary to correctly
interpret the socio-technical phenomena behind AI and their impact on
the legal tax system. Second, the potential in using AI technologies to
automate specific tasks and processes requires a careful legal analysis, in
which tax lawyers should take a leading role.
The overview is not exhaustive but rather aimed at providing some
relevant examples of uses of AI in the tax system. The examples conside-
red highlight the variety of technologies and potential applications within

47, 5, 425-43; S. Hoffer, What if Tax Law’s Future is Now?, in The Ohio State Technology
Law Journal, 2020, 16, 1, 68-72; L. Scarcella, Tax compliance and privacy rights in profiling
and automated decision making, in Internet Policy Review, 2019, 8, 4, 1-19; S. Dorigo,
Intelligenza artificiale e norme antiabuso: il ruolo dei sistemi “intelligenti” tra funzione am-
ministrativa e attività giurisdizionale, in Rass. trib., 2019, 4, 728-751; A. Vozza, Intelligenza
artificiale, giustizia predittiva e processo tributario, in Fisco, 2019, 32/33, 3154 ff.; A. Di
Pietro, Leva Tributaria e divisione sociale del lavoro, in U. Ruffolo (Edited by), XXVI Lezioni
di diritto dell’intelligenza artificiale, Giappichelli, Torino, 2020, 450 ff.; L. Quarta, Impiego
di sistemi AI da parte di Amministrazioni finanziarie ed agenzie fiscali. Interesse erariale
versus privacy, trasparenza, proporzionalità e diritto di difesa, in A.F. Uricchio, G. Riccio,
U. Ruffolo, (a cura di), Intelligenza artificiale tra etica e diritti, Cacucci, Bari, 2020; C.
Sacchetto, Processo tributario telematico e giustizia predittiva in ambito fiscale, in Rass. trib.,
2020, 1, 41-54.
(5) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit.; A. Di Pietro, Leva Tribu-
taria e divisione sociale del lavoro, cit.

the broad spectrum of “artificial intelligence”. This approach stresses the different levels of technological evolution and the actual or future potential achieved by the different tools in the tax law domain. At the same
time, as legal issues stemming from AI uses change considerably depen-
ding on the national legal system at stake, the paper mainly focuses on
some general principles shared by most European countries.
The article is structured as follows. Section 2 will first provide some
conceptual clarifications concerning AI applications and history, especially
looking at the tax law field. In the sections that follow, we will review
current AI applications and possible uses for the taxpayer (Section 3), the
tax administration (Section 4) and the judiciary (Section 5). For each area
of use, we will highlight the potential of AI to improve the efficiency and
fairness of tax systems and the legal issues at stake.

2. - Technological background
2.1. - Definition of Artificial Intelligence and some clarifications
Although AI has garnered a great deal of attention from tax legal
experts in the last few years, there has been some conceptual misunderstanding concerning the various forms of AI. In the following paragraphs, we will provide some general insights on AI, which could be helpful in better understanding its applications in the tax law domain and the related legal challenges.
The broadest definition of AI characterises it as the attempt to build
machines that “perform functions that require intelligence when perfor-
med by people” (6). A more elaborate notion has been provided by the
High-Level Expert Group on AI (AI HLEG) set up by the European
Commission:
“Artificial Intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric

(6) R. Kurzweil, The age of intelligent machines, MIT Press, Cambridge, 1990, 14. For other possible definitions and approaches to AI research, see S. J. Russell, P. Norvig, Artificial intelligence: a modern approach, Pearson Education Limited, Hoboken, 2020, 1 ff.

model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.” (7).
This definition can be accepted with the proviso that most of today’s AI
systems only perform a fraction of the activities listed in the definition:
pattern recognition (e.g., recognising images of plants or animals, human
faces or attitudes), language processing (e.g., understanding spoken lan-
guages, translating from one language into another, fighting spam, or
answering queries), practical suggestions (e.g., recommending purchases,
purveying information, performing logistic planning, or optimising indu-
strial processes). On the other hand, some systems may combine many
such capacities, as happens in smart assistants or industrial robots.
Three clarifications are relevant to our discussion. First, AI should be
kept separate from robotics, although AI constitutes its core. Robotics is the
discipline that aims to build “physical agents that perform tasks by mani-
pulating the physical world” (8). The High-Level Expert Group describes
robotics as follows:
“Robotics can be defined as ‘AI in action in the physical world’ (also
called embodied AI). A robot is a physical machine that has to cope with
the dynamics, the uncertainties and the complexity of the physical world.
Perception, reasoning, action, learning, as well as interaction capabilities
with other systems are usually integrated in the control architecture of the
robotic system. In addition to AI, other disciplines play a role in robot
design and operation, such as mechanical engineering and control theory.
Examples of robots include robotic manipulators, autonomous vehicles
(e.g., cars, drones, flying taxis), humanoid robots, robotic vacuum clea-
ners, etc.” (9).
In this article, robotics will not be separately addressed, since embo-
died and disembodied AI systems raise similar concerns when addressed
from the perspective of tax law applications.

(7) High-Level Expert Group on AI, Ethics guidelines for trustworthy AI, Brussels, 8
April 2019, 36. The High-Level Expert Group on Artificial Intelligence was an independent
expert group set up by the European Commission in June 2018 entrusted with the task of
providing ethical guidelines and policy and investment recommendations for the development of AI. See also, from the same group, A definition of AI: Main capabilities and scientific
disciplines, Brussels, 17 December 2018.
(8) High-Level Expert Group on AI, A definition of AI, cit., 1.
(9) Id., 5.

Second, AI also differs from “algorithm” and “algorithmic systems”. Indeed, the concept of algorithm is more general than the concept of AI
since it includes any sequence of unambiguously defined instructions to
execute a task, primarily through mathematical calculations (10). Algo-
rithms must be expressed through programming languages to be executed
by a computer system, thus becoming machine-executable software pro-
grams. Algorithms can be elementary, specifying, for instance, how to
arrange lists of words in alphabetical order. They can also be very complex,
such as algorithms for file encryption, speech recognition, or financial
forecasting. Not all algorithms involve AI, but every AI system includes
algorithms.
Finally, AI differs from data analytics. Data analytics is the process of
collecting and analysing large volumes of data (big data) to extract hidden
information (11). In particular, big data identifies vast data sets that can hardly be managed through standard techniques because of their unique
features, the so-called three V’s: huge Volume, high Velocity, and great
Variety. Some data analytics techniques rely on descriptive or inferential
statistics and probability distributions, while others may involve techniques developed in AI research, such as machine learning algorithms (see below). As
we will see in the next paragraphs, in the tax domain, the activity of
analysing large quantities of data through automated techniques is not
new. The real novelty lies in the large amounts of data now available, which allow more accurate statistical models to extract knowledge from them.

2.2. - AI and tax law: from the origins to present success


The recent success of AI is linked to a change in the leading paradigm
in AI research and development.
Until a few decades ago, it was generally assumed that, in order to
develop an intelligent system, humans had to provide a formal represen-
tation of the relevant knowledge usually expressed through a combination
of rules and concepts, together with algorithms making inferences from
such knowledge. This approach resulted in expert systems applications.
Expert systems include a domain-specific knowledge base coupled with an

(10) D. Harel, Y. A. Feldman, Algorithmics: the spirit of computing, Pearson Education,


2004.
(11) M.J. Zaki, W. Meira Jr., Data mining and analysis: fundamental concepts
and algorithms, Cambridge University Press, 2014. For a definition and context of Big data
technologies: A. De Mauro, M. Greco, M. Grimaldi, What is big data? A consensual definition and a review of key research topics, AIP Conference Proceedings, American Institute
of Physics, 2015, 1644, 97-104.

inferential engine that provides answers to users’ queries. Expert systems were also used in the tax law domain (12). Taxman was one of these (13): developed by a US research team to test the consequences of corporate reorganisations, the system could apply a complete set of rules and concepts to a specific situation in order to classify it for corporate tax purposes (14). Unfortunately, such systems were often unsuccessful or only partially successful:
they could only provide incomplete answers, were unable to address the
peculiarities of individual cases, and required persistent and costly efforts
to broaden and update their knowledge bases (15).
Today AI has made an impressive leap forward with the application of
machine learning to big data. Machine Learning (ML) is defined as “the
study of computer algorithms that allow computer programs to automa-
tically improve through experience” (16). In machine learning approaches,
machines are provided with learning methods rather than, or in addition
to, formalised knowledge. Using such methods, they can automatically
learn to effectively accomplish their tasks by extracting or inferring rele-
vant information from their input data (17). Thus, machine learning

(12) B. Kuzniacki, The marriage of artificial intelligence and tax law, cit.
(13) Other systems were Taxman II (1979), TaxAdvisor (1982), Expertax (1986), Investor (1987).
(14) See L.T. McCarty, Reflections on TAXMAN: An experiment in Artificial Intelligence and legal reasoning, in Harv. L. Rev., 1977, 90, 5, 837 ff.
(15) Expert system developers had to face the so-called knowledge representation bottleneck: to build a successful application, the required information – including tacit and common-sense knowledge – had to be represented in advance using formalised languages. This proved to be very difficult and, in many cases, impractical or impossible. In the field of tax law, it is often not easy to determine the type of inference and the logic used by human tax experts. Often, tax analysts and decision-makers rely on their intuition, trained on their experience with relevant examples, or rely on tacit and common-sense knowledge. The most significant barrier was that expert systems could not read and understand texts unless they were provided with a map of the terminology and all possible connections relevant to tax law. This made it very difficult – in many cases, impractical or impossible – to build an expert system capable of operating as a tax law professional.
(16) T. M. Mitchell, Machine Learning, McGraw-Hill Science, 1997, 2.
(17) “Supervised learning” is currently the most used of these methods. In supervised learning, the machine is given in advance a training dataset, which contains a set of pairs, each linking the description of a case to the correct response for that case. In one hypothetical example for the tax domain, a system designed to identify non-compliant taxpayers, the description of past taxpayers (e.g., age, profession, transactions, contacts) is linked to whether the Tax Administration has issued an assessment notice. The system uses the training set to build an algorithmic model which captures the relevant knowledge initially contained in the training set, namely the correlations between cases and responses. This model is then used to provide hopefully correct responses to new cases by mimicking the correlations in the training set. For example, once the system has extracted the model representing correlations between taxpayers’ profiles and non-compliance, it applies such

reverses the traditional paradigm of computer programming and AI: instead of writing the rules for a particular task into the computer program, the human
programmer provides large amounts of data to the system, which auto-
nomously learns how to extract the rules needed to perform the task.
Many techniques have been deployed in machine learning.
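To make the supervised learning approach concrete, the hypothetical compliance-screening example described in footnote (17) can be sketched in a few lines of Python. The feature vectors, labels, and the one-nearest-neighbour rule below are all invented for illustration and do not reflect any real tax administration’s model.

```python
# Minimal sketch of supervised learning: a one-nearest-neighbour
# classifier over invented taxpayer profiles. Each training pair links
# a feature vector (declared income, number of transactions, past
# audits) to the correct "response" label, as in footnote (17).
import math

training_set = [
    ((30_000, 120, 0), "compliant"),
    ((28_000, 110, 0), "compliant"),
    ((95_000, 15, 2), "non-compliant"),
    ((90_000, 10, 3), "non-compliant"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(features):
    """Label a new profile with the label of its closest training example."""
    _, label = min(training_set, key=lambda pair: distance(pair[0], features))
    return label

# A new taxpayer profile, not present in the training set.
print(predict((92_000, 12, 2)))  # → non-compliant
```

A real system would learn a statistical model from thousands of examples rather than memorise four; the sketch only shows how responses to new cases mimic the correlations contained in the training set.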
A development related to deep learning concerns the possibility of automatically processing large quantities of textual data. Natural
language processing (NLP) includes techniques to program computers
so that they can process and analyse natural language data, including
complex legal texts, such as contracts, treaties, legislation, and judgements.
Chatbots are the most popular applications: they are text-based conversa-
tional software applications that enable a human user to communicate
with a computer system using natural language.
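As a toy illustration of this conversational loop, the snippet below matches a user’s question against a set of hypothetical “intents” by keyword overlap. Real chatbots rely on trained NLP models; the intents and canned answers here are purely invented.

```python
# Minimal sketch of a text-based Q&A bot: naive keyword matching
# stands in for the NLP models used by production chatbots.
intents = {
    "vat rate": "The applicable VAT rate depends on the place of supply.",
    "filing deadline": "Filing deadlines are set by the national tax law.",
}

def answer(question):
    """Return the canned answer whose keywords best overlap the question."""
    words = set(question.lower().split())
    best = max(intents, key=lambda key: len(set(key.split()) & words))
    if not set(best.split()) & words:
        return "Sorry, I cannot answer that."
    return intents[best]

print(answer("What is the VAT rate for e-books?"))
# → The applicable VAT rate depends on the place of supply.
```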
Based on this technological background, the following sections explo-
re the most relevant AI applications in the tax domain and highlight some
related legal challenges. First, we shall analyse the uses and possible legal
problems in AI applications facilitating voluntary taxpayers’ compliance
(Paragraph 3). Then, we will review current uses of AI for tax admini-
strations’ assessment and control procedures (Paragraph 4). Finally, the
use of AI tools in the tax judiciary will be considered (Paragraph 5).

3. - AI for taxpayers
Taxpayers can use AI to make faster, cheaper, and more accurate
decisions. In particular, AI can provide innovative ways to process finan-
cial data, provide answers to complex questions, and perform previously
time-consuming or impossible analyses. If correctly developed, these so-
lutions can improve taxpayers’ efficiency in compliance, favour legal cer-
tainty, and support a collaborative approach with tax authorities (18).
In the following paragraphs, we will provide some examples of AI
applications depending on whether they are used for 1) enhanced know-
ledge of tax law; 2) tax accounting; 3) outcome predictions.

model to the data profile of a new taxpayer, not previously present in the training set. Then, it will determine whether the new taxpayer is presumably compliant or not. Other applications of machine learning are unsupervised learning and reinforcement learning. In the former, the machine learns by looking directly into the data without being provided with training examples in advance. As to the latter, the machine learns based on the feedback provided by a human on how well it has been doing in reaching its outcome. For an overview of machine learning for non-experts, see E. Alpaydin, Introduction to machine learning, MIT Press, 2020.
(18) L. Viola, Interpretazione della legge con modelli matematici, Milano, 2017.

3.1. - Fields of Application

3.1.1. - Knowledge of tax law


Taxpayers today face a tax law environment where the retrieval and
interpretation of rules are increasingly complex (19). The national tax sy-
stems are riddled with frequent enactments of new laws or changes of
existing ones. Moreover, they must adapt to changes in the economic
system and ensure coherence with new regulations and case-law (20). Hence,
in this field, AI could provide for promising applications to make tax law
more accessible and comprehensible to taxpayers (21). In particular, ma-
chine learning can automatically identify patterns in data and help deter-
mine the tax discipline regulating a particular transaction. Additionally, in
this field, NLP-powered applications can provide solutions to understand
and answer tax-related questions (22), and a better understanding of tax law
obligations may improve taxpayers’ compliance (23).
AI software for enhancing the knowledge of tax law has been developed both through national initiatives (24) and through accounting firms’ initiatives to provide services to their clients. For example, KPMG has
reported using IBM Watson, i.e., IBM’s deep learning-powered computer (25), to help clients secure R&D credits (26). With the application, users can upload large amounts of documents and rapidly analyse

(19) See, for example, M. Logozzo, L’ignoranza della legge tributaria, Milano, 2002; Id., La scusante dell’illecito tributario per obiettiva incertezza della legge, in Riv. trim. dir. trib., 2012, 387 ff.
(20) A. Di Pietro, Leva Tributaria e divisione sociale del lavoro, cit., 451.
(21) See G. Peruginelli, S. Faro, Frontiers in Artificial Intelligence and Applications, IOS Press, 2019, which provides a general overview of the practical implementation of legal information systems and the tools to manage this kind of information.
(22) PWC, How Tax is leveraging AI — Including machine learning — In 2019, available at https://www.pwc.com/gx/en/tax/publications/assets/how-tax-leveraging-ai-machine-learning-2019.pdf.
(23) D. Bentley, Taxpayers’ Rights: Theory, Origin and Implementation, Alphen aan den Rijn, Kluwer Law International, 2007, 269 ff.
(24) For example, in the US, companies that must navigate the increasingly complex US Tax Code can use AI tools to track tax rates and calculations for multiple tax jurisdictions. An example of such a tool is Intuit Inc., which provides an application called Tax Knowledge Engine (TKE), helping users streamline tax preparation. The system delivers answers tailored to each taxpayer by gathering and correlating more than 80,000 pages of US tax requirements and instructions based on an individual’s unique financial situation.
(25) Deep learning is a subset of AI including computer systems that learn based on complex neural networks.
(26) Additional information on IBM Watson is available at https://www.ibm.com/watson/stories/kpmg (last access 10 December 2021).

structured and unstructured data to help identify projects that are eligible for
credits, using NLP to understand the economic context.
Moreover, chatbot applications, powered by NLP and machine learning, are said to profoundly affect the accessibility of the law. Significantly, the emergence of deep learning-based Q&A systems and speech-based virtual assistants is likely to empower individuals in addressing client-specific tax questions (27). In this field, Deloitte Belgium has developed a
chatbot that can provide first-hand EU VAT advice, considering the place
of supply rules, exemptions, domestic rates, etc. (28).
In addition to being used for the knowledge of tax legislation, AI
could be applied to improve taxpayers’ understanding of the practice of
tax administrations (29). For example, in the Italian tax law system, AI
could be used to increase the knowability of interpelli (30) and risoluzio-
ni (31) that are publicly available. A possible use case could be the follo-
wing: the taxpayer would provide the AI system with the relevant features
of a specific case, and the system would search for tax administration’s
solutions offered in similar situations. Such a system would reduce errors
in compliance, prevent the taxpayer from raising issues that have already been
addressed by the tax administration, and ultimately favour the uniform
application of tax law. However, especially for interpelli, the importance
and the wide-ranging diversity of the factual elements would require a
careful evaluation of AI applications.
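The retrieval step of such a use case can be sketched with a bag-of-words similarity search, as below; the rulings and the query are invented for illustration, and a real system would need far richer text representations.

```python
# Minimal sketch of similarity-based retrieval over hypothetical
# published tax rulings: the taxpayer's case description is matched to
# the most similar ruling by cosine similarity over word counts.
import math
from collections import Counter

rulings = {
    "ruling-1": "vat deduction for mixed-use company cars",
    "ruling-2": "withholding tax on dividends paid to non-residents",
    "ruling-3": "vat exemption for educational services",
}

def vectorise(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(query):
    """Return the id of the ruling closest to the case description."""
    qv = vectorise(query)
    return max(rulings, key=lambda rid: cosine(qv, vectorise(rulings[rid])))

print(most_similar("deduction of vat on company cars"))  # → ruling-1
```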
Finally, it must be highlighted that the usefulness of these applications could be hindered by the complexity and features of each tax law system: often-cumbersome legislation, constant regulatory change, and shifting administrative and judicial interpretations can influence the correctness and timeliness of the answers provided by the software.

(27) Venture Beat, How this chatbot powered by machine learning can help with your
taxes, available at https://venturebeat.com/2017/01/27/how-this-chatbot-powered-by-machine-learning-can-help-with-your-taxes/ (last access 10 December 2021).
(28) Deloitte, VAT chatbot SAM, available at https://www2.deloitte.com/be/en/pages/
tax/solutions/VATbot-SAM-Deloitte-Belgium-Tax.html.
(29) A. Di Pietro, Leva Tributaria e divisione sociale del lavoro, cit., 457.
(30) An “interpello” is a request that a taxpayer makes to the tax administration before
engaging in tax-relevant conduct, to obtain clarification in relation to a concrete and per-
sonal case concerning the interpretation, application or disapplication of tax law rules.
(31) A “risoluzione” is an internal act of the tax administration addressed to tax officials
that provides for the correct interpretation or application of tax law, in order to solve a
practical and concrete problem usually on the basis of a request.

3.1.2. - Tax accounting


AI is particularly effective in accounting as it can perform fast and
accurate operations and minimise the risk of errors. Hence, lawyers and
consultants are investing part of their budgets to remain competitive in the
ever-changing market and to cope with increasingly strict regulations con-
cerning the mistakes made by professionals (32).
In this field, NLP can be used for tax data extraction and to recognise
patterns and features in scanned documents. Moreover, AI can be promi-
singly used in tax accounting to extract critical data from documents,
classify tax-sensitive transactions (33), and identify incorrectly booked as-
sets. AI technologies can help automate repetitive tasks performed by
professionals, such as invoice payment, accounting reconciliation (34),
and financial reporting (35).
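As a simple illustration of rule-based extraction, the snippet below pulls amounts and dates out of an invented invoice line using regular expressions; production tools combine such rules with trained NLP models.

```python
# Minimal sketch of data extraction from a document: regular expressions
# pull monetary amounts and dates out of free text. The invoice line is
# invented for illustration.
import re

invoice = "Invoice 2021/443 dated 12/03/2021, total EUR 1,250.00, VAT EUR 275.00"

# Amounts written as "EUR 1,250.00"; dates written as dd/mm/yyyy.
amounts = re.findall(r"EUR\s+([\d,]+\.\d{2})", invoice)
dates = re.findall(r"\b(\d{2}/\d{2}/\d{4})\b", invoice)

print(amounts)  # → ['1,250.00', '275.00']
print(dates)    # → ['12/03/2021']
```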
A current example comes from Deloitte’s accounting practice, which provides AI-powered solutions for various tax purposes. It developed a tax analytics application that examines a company’s tax obligations related to employees and, by analysing and aggregating the data, allows the company to manage its tax position. The tool is pre-loaded with
knowledge, including business rules, training data, and a dictionary so
that the machine understands the relevant terminology. For example,
the firm uses natural language generators in its tax practice to provide
targeted financial advice (36).
Even if AI technologies in accounting provide practical solutions, it
must be stressed that NLP techniques are based on a syntactic and lexical
analysis of language. Thus, applications are unable to understand language
semantics (37). This limitation could lead to problems in understanding
highly complex legal concepts, as is often the case in tax law, which
frequently borrows concepts from civil and commercial law but applies
them with different meanings (38).

(32) See Kuzniacki, The Artificial Intelligence Tax Treaty Assistant, cit., 9.
(33) B. Van Volkenburgh, Artificial Intelligence and taxes: 8 ways it’s being used, in
Crowd Reason Blog, September 9, 2019.
(34) For example, to compare invoice discrepancies.
(35) For example, for automatically processing tax entries from a spreadsheet.
(36) M. A. Nickerson, Ai: New risks and rewards, in Strategic Finance Blog, April 1,
2019.
(37) The inability of computers to understand language at the level of semantics was famously invoked by the philosopher John Searle to curb the great expectations of artificial intelligence.
(38) For the Italian system, in this respect, see F. Bosello, La formulazione della norma
tributaria e le categorie giuridiche civilistiche, in Dir. prat. trib., 1981, 1, 1436 ff.; A. Berliri,

3.1.3. - Outcome prediction for the taxpayer


Finally, recent studies have focused on the use of AI to predict judicial decisions
in tax matters. Such applications are included in the field of predictive
justice (39). This field involves data analytics, machine learning and NLP
techniques to analyse large amounts of judicial decisions and make pre-
dictions about the outcome of future decisions (40). Taxpayers could use
predictive justice tools for various purposes, such as predicting the odds of
a successful appeal based on the processing of previous case-law (41).
One of the seminal applications in this domain is the BlueJ project
created from a partnership between industry and researchers in Canada.
The group has developed an AI application that provides taxpayers with
answers on routine legal issues in the Canadian tax law courts (42). The
system can classify workers as self-employed or employees for income tax
purposes by looking at courts’ previous interpretations. The researchers
used 600 cases decided under Canadian law to develop a predictive system that
maps trends in case law and anticipates tax authority interpretation (43).
The system is also equipped to make predictions on other types of pro-
blems, such as whether an individual is a resident or not for tax purposes,
or whether the expenses related to workspace in the home are deductible
or not.
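To give an idea of how such predictions can be produced, the sketch below scores a worker-status case with a logistic function over weighted binary factors. The factors and weights are invented for illustration and are not those of Blue J or any real system.

```python
# Minimal sketch of outcome prediction: a logistic score over binary
# case factors. Positive weights push towards "employee"; the weights
# are invented, not learned from real case law.
import math

weights = {
    "fixed_hours": 1.5,          # the firm sets the worker's hours
    "own_equipment": -1.2,       # the worker supplies own tools
    "single_client": 0.8,        # the worker serves a single client
    "bears_business_risk": -1.6, # the worker bears the business risk
}

def employee_probability(case):
    """Probability that the court classifies the worker as an employee."""
    score = sum(weights[f] for f, present in case.items() if present)
    return 1 / (1 + math.exp(-score))

case = {"fixed_hours": True, "own_equipment": False,
        "single_client": True, "bears_business_risk": False}
print(round(employee_probability(case), 2))  # → 0.91
```

A system such as Blue J would instead estimate the weights statistically from hundreds of decided cases; the sketch only shows how a set of case factors can be turned into a quantitative score rather than a binary “decision”.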
As we will see later (Para. 5.1.3), the circulation of these tools is still
limited, especially in the EU. Moreover, the reliability of AI technologies
in activities such as outcome prediction, best litigation strategies, case law
retrieval, and argument mapping is still quite low. In any case, the possibility of predicting a court decision will depend mainly on the number and

Sulle cause della incertezza nell’individuazione e interpretazione della norma tributaria appli-
cabile ad una determinata fattispecie, in Giur. imp., 1976, 117 ff.; F. Paparella, L’autonomia
del diritto tributario ed i rapporti con gli altri settori dell’ordinamento tra ponderazione dei
valori, crisi del diritto e tendenze alla semplificazione dei saperi giuridici, in Riv. dir. trib.,
2019, 6, 587 ff.
(39) For references on “predictive justice”, see footnote n. 118.
(40) Contrary to what is generally thought, this example clearly shows that AI systems for outcome prediction do not actually “decide”, in the sense of taking the decision themselves, but merely examine previous case-law and provide a quantitative score on the possible outcome of the tax case.
(41) F. Bex, H. Prakken, De juridische voorspelindustrie: onzinnige hype of nuttige
ontwikkeling?, in Ars Aequi, 2020, 69, 256.
(42) Blue J, available at: https://www.bluej.com/ca. For a detailed description of the
BlueJ Project, see B. Alarie, A. Niblett, A. H. Yoon, Law in the future, cit.
(43) Given the different factors that can emerge in a case, the system can find the best weight for each variable and how the variables interact with each other, accomplishing a task that would be impossible for humans.
dottrina 129

the accuracy of the legal information available (44). Finally, since these
applications analyse past cases and highlight prevailing trends (similarly
to what happens in common law with the binding precedent), their usa-
bility in civil law systems must be carefully considered.

3.2. - AI between aggressive tax planning and cooperative compliance


Even though the use of AI by taxpayers is generally less problematic than its use by public powers (45), AI technologies can provide personalised strategies that may qualify as aggressive tax planning (46). Research into how AI technologies can be used to carry out such practices is limited, and information on such uses is not disclosed by consultancy firms. However, some examples could be the use of AI technologies to suggest how to re-characterise particular items of income or expenses to apply lower tax rates (hybrid mismatches), to shift income from a high-taxed person to a low-taxed person, or, more generally, to capture and combine the technicalities of a tax system so as to facilitate aggressive tax planning structures (47).

(44) B. De Finetti, Teoria della probabilità, Milano, 2005, 10.


(45) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 742; A. Vozza, Intelligenza
artificiale, giustizia predittiva e processo tributario, cit.
(46) See G. Végh, H. Gribnau, Tax Administration Good Governance, in EC Tax
Review, 2018, 1, 55 mentioning in this respect World Bank, An Integrated Assessment
Model for Tax Administration (IAMTAX) The World Bank 13, September 2011. By ag-
gressive tax planning we mean the exploitation of cross-border disparities across tax systems
with a view to achieving tax advantages that States would otherwise not have meant to give.
The concept of aggressive tax planning has been used within the Base Erosion and Profit
Shifting (BEPS) project. Moreover, aggressive tax planning has been the object of European
actions and documents, such as the EU Commission recommendation of 6 December 2012
on aggressive tax planning (2012/772/EU) and the European Commission Communication
from the Commission to the European Parliament and the Council on Tax Transparency to
Fight Tax Evasion and Avoidance, Brussels,18.03.2015, COM(2015) 136 Final. For further
information on this topic, see A.P. Dourado, Aggressive Tax Planning in EU Law and in the
Light of BEPS: The EC Recommendation on Aggressive Tax Planning and BEPS Actions 2
and 6, in Intertax, 2015, 1, 43; P. Pistone, The meaning of tax avoidance and aggressive tax
planning in European Union Tax Law: some thoughts in connection with the reaction to such
practices by the European Union, in A.P. Dourado (ed.), Tax Avoidance Revisited in the EU
BEPS Context, Amsterdam, IBFD, 73-100; P. Pistone, La pianificazione fiscale aggressiva e le
categorie concettuali del diritto tributario globale, in Riv. trim. dir. trib., 2016, 2, 395-439,
who stresses that there are three essential elements of aggressive tax planning, namely
(i) the exploitation of disparities between different systems with the aim of deriving a tax
advantage, (ii) the misalignment between the production of wealth and the power of the
state to tax, and (iii) the existence of double non-taxation that states have not intended to
allow.
(47) See K. Mikuriya, T. Cantens, If algorithms dream of Customs, do customs officials
dream of algorithms? A manifesto for data mobilisation in Customs, in World Customs
130 diritto e pratica tributaria internazionale n. 1/2022

The European Commission has recently shown a willingness to tackle aggressive tax planning and to act against tax advisers and other intermediaries facilitating it (48). At the European level, the Anti-Tax Avoidance
Directive (ATAD) already provides a minimum level of protection for
national corporate tax systems against some practices that can be labelled
as aggressive tax planning across the Union (49). However, it is limited to general provisions and leaves the implementation to the Member States (50). Moreover, some authors have questioned the effectiveness of anti-tax avoidance rules in tackling aggressive tax planning (51). Also, the OECD's
initiatives on Base Erosion and Profit Shifting (BEPS) provide for concrete
action recommendations and have inspired solutions at the national (52),
European and international level, such as the “MLI” (53). However, most

Journal, 2020, 14, 2, 8. For some examples of the ATP indicators, see F. Cachia, Aggressive
Tax Planning: An Analysis from an EU Perspective, in EC Tax Review, 2017, 5, 267 who
identifies several “ATP indicators” and stresses the importance of involving more countries
to build “effective” ATP structures.
(48) On the topic of advisors and lawyers, the European Commission (EC) launched a public consultation on 10 November 2016 to gather feedback on the way forward for EU action on advisers and intermediaries who facilitate tax evasion, tax avoidance and aggressive tax planning. Following the public consultation, the EC published a taxation paper entitled Study on Structures of Aggressive Tax Planning and Indicators, Final Report, highlighting model Aggressive Tax Planning (ATP) structures and identifying ATP indicators that facilitate or allow ATP.
(49) Council Directive (EU) 2016/1164 of 12 July 2016 laying down rules against tax
avoidance practices that directly affect the functioning of the internal market, OJ L 193,
19.7.2016.
(50) Recitals (3) of the Council Directive (EU) 2016/1164 of 12 July 2016 laying down
rules against tax avoidance practices that directly affect the functioning of the internal
market.
(51) P. Pistone, La pianificazione fiscale aggressiva e le categorie concettuali del diritto
tributario globale, cit. In the Author’s view, although tax avoidance and aggressive tax
planning have common elements, they must be distinguished. For example, the objective
of tax avoidance is to achieve the tax saving in the same State in which it occurs. In contrast,
in the case of aggressive tax planning the tax saving arises because of the different tax
treatment the States apply to the transnational case. Furthermore, in aggressive tax planning
the intentional element is not relevant. Hence, the two concepts must be kept distinct. In the author's view, only where these concepts coexist or overlap within international tax planning schemes is it feasible that rules against tax avoidance also counteract aggressive tax planning.
(52) As far as Italian law is concerned, see, among others, F. Amatucci, L’adeguamento
dell’ordinamento tributario nazionale alle linee guida dell’OCSE e UE in materia di lotta alla
pianificazione fiscale aggressiva, in Riv. trim. dir. trib., 2015, 1, 3; G. Ianni, Countering
international tax evasion and tax avoidance in the BEPS framework: the experience of the
‘Guardia di Finanza’, in Riv. dir. trib. int., 2013, 2, 251.
(53) Multilateral Convention to Implement Tax Treaty Related Measures to Prevent
Base Erosion and Profit Shifting. For more details on MLI instruments, see P. Pistone, The
BEPS Multilateral Instrument and EU Law, in A. Martin Jimenez (ed.), The External Tax

of these initiatives do not take into account the specific problems connected to the use of AI in tax planning. This gap calls for more attention to this topic at the national and European level.
At the same time, AI technologies can also be used to improve interactions between taxpayers and tax administrations. There is an ongoing trend among tax administrations to increasingly stress the importance of taxpayers' voluntary compliance (54). Already in 2008, the OECD suggested the need for an "enhanced relationship" between taxpayer and tax administration based on mutual trust and cooperative compliance (55).
Intermediaries, such as banks, lawyers, and consultants, would take on a
key role and, instead of offering aggressive tax planning products, could
propose programmes to foster tax compliance among their clients. This
way, taxpayers would avoid the more aggressive tax audit methods, which
would be used only for those who do not comply with tax compliance
programmes. By relying on automated procedures and standardisation
mechanisms, AI applications for taxpayers could be designed in such a
way to foster compliance with tax law, leaving tax administrations with the
possibility to target controls on the cases where AI systems are not used.
In the field of compliance, it is interesting to wonder if AI could also
be employed by the taxpayers or their advisors for complex obligations,
such as those deriving from the DAC system, thus reducing the burden of
compliance by offering technology-driven facilities to taxpayers. For example, in the context of DAC6 (56), AI could be used to identify the cross-border arrangements displaying certain features (so-called "hallmarks") which must be reported to the relevant tax authority by intermediaries or, in some cases, by the taxpayers themselves. Although there is currently no information on these
uses, it can be assumed that such a system could be hindered by the

Strategy of the EU in a Post-BEPS Environment, Amsterdam, IBFD, 171-184; M. Lang, P.


Pistone, A. Rust, J. Schuch, C. Staringer, The OECD Multilateral Instrument for Tax Trea-
ties Analysis and Effects, Alphen aan den Rijn Wolters Kluwer, 2018.
(54) G. Végh, H. Gribnau, Tax Administration Good Governance, cit., 55.
(55) For a deeper insight, see OECD, Co-operative Compliance: A Framework, From
Enhanced Relationship to Co-operative Compliance, Paris, 2013 (OECD Publishing).
(56) Council Directive (EU) 2018/822 of 25 May 2018 amending Directive 2011/16/
EU as regards mandatory automatic exchange of information in the field of taxation in
relation to reportable cross-border arrangements. A similar consideration could be made regarding DAC7 (Council Directive (EU) 2021/514 of 22 March 2021 amending Directive
2011/16/EU on administrative cooperation in the field of taxation), which, among other
measures, introduces an obligation for platform operators to provide information on income
derived by sellers through platforms with the aim of addressing the lack of tax compliance
and the under-declaration of income earned from commercial activities carried out through
digital platforms.

diverging implementations of DAC6 into the national laws of the Member States (57).

4. - AI for the tax administration


Besides supporting taxpayers in compliance procedures, AI applica-
tions increasingly assist tax authorities across a wide range of operational
activities.
Tax administrations have always had access to significant quantities of
information provided by the taxpayers, deriving from databases and pu-
blic registries (See para. 2). Especially in the globalised economy, the
application of tax rules implies the tax enforcers’ ability to obtain infor-
mation on taxpayers’ economic activities, which could be easily gathered
and elaborated through technological tools (58). Thanks to AI, today, these
data can more easily be processed and transformed into actionable know-
ledge (59). Moreover, AI can reduce errors, optimise time and resources,
and speed up processes. For example, AI makes it possible to profile taxpayers at a granular level, predict their future behaviour, and increase effective tax collection. The possibility of performing real-time data analysis can improve
efficiency in calculations and audits, and the automated detection of irre-
gularities can be used to combat fraud.
In the following paragraphs, we will analyse some AI applications used
by tax administrations that seem promising in pursuing their objectives.
These cover different functions: (1) tax investigation; (2) fraud and evasion detection; (3) collaborative tax compliance (60).

(57) See D. Weber, J. Steenbergen, The (Absence of) Member State Autonomy in the
Interpretation of DAC6: A Call for EU Guidance, in EC Tax Review, 2021, 5/6, 254, who
argues that the implementation of the Directive created a lot of uncertainty, as DAC6
contains new (often undefined) concepts. Member States have tried to tackle this uncer-
tainty by introducing official guidance, which may lead to diverging interpretations of the
concepts used in DAC6.
(58) L. Scarcella, Tax compliance and privacy rights in profiling and automated decision
making, cit., 2.
(59) Ibidem. In addition, a recent provision in the Italian Finance Act made it easier to
use this information and allowed the use of information from open sources.
(60) In the future, the use of artificial intelligence tools in relation to sanctions should
be addressed. In fact, to give just one example, in the Italian national system the application
of flexible sanctions could more easily lead to the imposition of maximum edictal amounts
even with difficulties in motivating them. Although we are aware of these issues, for reasons
of conciseness, we consider that this is not the place to deal with it and we propose to do so
in the future.

4.1. - Fields of application

4.1.1. - Tax investigation


A 2016 OECD survey showed that one of the most successful AI uses
by tax administrations consists in prioritising cases for investigations and
controls or other compliance interventions (61). According to the survey,
AI techniques are generally used to analyse previous taxpayers’ data to
identify hidden relationships or potentially high-risk tax non-compliance
networks. These technologies can rely on taxpayers' or third parties' data (62), classify taxpayers based on non-compliance risks, and select taxpayers for assessments. For example, the Norwegian Tax Authority uses data analysis and machine learning techniques to improve efficiency in selecting the cases to be inspected. The algorithm is trained on historical data to predict the likelihood of non-compliance in the VAT return. Each case is assigned a score, and tax officials inspect the taxpayers with the highest scores. The more declarations are audited, the more data the algorithm obtains for its model, thus improving the accuracy of its predictions.
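In essence, such a system works as a ranking procedure, which can be sketched in a few lines of illustrative Python. The actual Norwegian model, its indicators, and its weights are not public; every name and number below is invented for the example.

```python
# Illustrative risk-scoring sketch: each VAT return receives a score
# between 0 and 1, and officials inspect the highest-scoring cases first.
import math

def risk_score(vat_return, weights):
    """Weighted sum of risk indicators, squashed into [0, 1] (logistic)."""
    s = sum(w * vat_return[name] for name, w in weights.items())
    return 1 / (1 + math.exp(-s))

# Hypothetical indicators a trained model might weight: deviation from
# the sector average, number of late filings, input/output VAT ratio.
weights = {"sector_deviation": 1.5, "late_filings": 0.8, "input_output_ratio": 1.2}

returns = [
    {"id": "A", "sector_deviation": 0.1, "late_filings": 0, "input_output_ratio": 0.2},
    {"id": "B", "sector_deviation": 2.0, "late_filings": 3, "input_output_ratio": 1.1},
    {"id": "C", "sector_deviation": 0.5, "late_filings": 1, "input_output_ratio": 0.4},
]

# Rank all returns by score; the top cases are queued for inspection.
ranked = sorted(returns, key=lambda r: risk_score(r, weights), reverse=True)
to_inspect = [r["id"] for r in ranked[:2]]
```

In a real deployment the weights would be learned from audited historical returns, and each new audit outcome would feed back into the training data, which is the feedback loop described above.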
The area of customs duties is particularly suitable for AI applications
since, at least within the EU, there are common rules, customs collect data
everywhere in a massive way, and the computational culture is well established (63). Since the 1970s, customs authorities have been adopting information technologies in their activities and, since border processing is a largely standardised process, many of their procedures are automated today (64).
For example, the Dutch Customs Administration (DCA) developed a system for customs risk management which handles customs declaration documents, assesses the associated risk, decides whether to perform an inspection and, where necessary, carries out inspections of the packages that have been red-flagged.
Looking beyond specific applications, the usefulness of AI in tax investigations may depend on the specific tax sector. For example, the use of AI in international law could be more complex than in national law since it would

(61) OECD, Advanced Analytics for Better Tax Administration, OECD Publishing,
Paris, 2016.
(62) See French Law 28 December 2019, n. 1479, Article 154, which, for experimental purposes, allowed reliance on information from open sources to supplement the tax assessment.
(63) K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 4.
(64) Ibid.

involve interaction with a set of multilevel rules and general clauses (65).
On the contrary, the above example shows that the field of customs law
could be a fertile sector precisely because of the existence of harmonised
legislation and databases. Moreover, the existence of information-exchange platforms, such as the VAT Information Exchange System (VIES) (66) or the One-Stop Shop (OSS) (67) in the VAT field, could represent a good environment for the effective use of AI (68). At the
national level, the usability of AI seems facilitated in areas with standardi-
sed assessment models and precise guidelines, such as for small and me-
dium-sized enterprises.

4.1.2 - Fraud and evasion detection


Tax authorities and academics advocate for the employment of data
analytics and machine learning techniques to detect tax fraud and tax
evasion cases (69). According to the OECD study mentioned earlier, ad-
ministrations can create AI models based on unsupervised learning which
seek to identify interesting or anomalous patterns in the data (70).
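A minimal unsupervised sketch of this idea, in illustrative Python: with no labelled fraud cases available, declarations whose values lie far from the bulk of the data are flagged as anomalous. The data and the threshold below are invented for the example and do not come from the OECD study.

```python
# Unsupervised anomaly detection on declared amounts: flag values that
# lie far from the mean of the distribution.
import statistics

declared_amounts = [102, 98, 105, 99, 101, 97, 310, 103]

mean = statistics.mean(declared_amounts)
stdev = statistics.pstdev(declared_amounts)

def is_anomalous(value, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

anomalies = [v for v in declared_amounts if is_anomalous(v)]
```

Operational systems use far richer models, but the principle is the same: no example of fraud is needed in advance, since the system merely singles out patterns that deviate from the norm for human review.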
Regarding national initiatives, the German Federal Ministry of Finance
supported the enactment of automated analysis of fraud detection in a

(65) Nonetheless, use limited to certain areas (e.g., transfer pricing comparability)
could be more effective, even though in that case, it might be challenging to obtain the
relevant information.
(66) VIES is an electronic means of validating the VAT identification numbers of economic
operators registered in the European Union for cross border transactions on goods or
services.
(67) The Union One-Stop Shop (OSS) is the electronic portal businesses can use to
comply with their VAT obligations on e-commerce sales within the EU to consumers since 1
July 2021.
(68) As far as the importance of the VIES system in combating fraud is concerned, see B.
Middelburg, T. Potma, L. van Verseveld, Report of the EFS Seminar ‘50 Years of the EU
Customs Union and EU VAT System: Developments, Challenges and Alternatives’ Held on 14
February 2019, at the Erasmus University Rotterdam, in EC Tax Review, 2019, 4, 216.
(69) See M. Merkx, N. Verbaan, Technology: A Key to Solve VAT Fraud?, in EC Tax
Review, 2019, 28, 6, 300; L. Scarcella, Tax compliance and privacy rights, cit., 9; C. Pérez
Lopez, M.J. Delgado Rodriguez, S. de Lucas Santos, Tax Fraud Detection through Neural
Networks: An Application Using a Sample of Personal Income Taxpayers, in Future Internet,
2019, 11, 4, 86 ff; F.C. Venturini, R.M. Chaim, Predictive Models in the Assessment of Tax
Fraud Evidences, in Á. Rocha, H. Adeli, G. Dzemyda, F. Moreira, A.M. Ramalho Correia
(eds.), Trends and Applications in Information Systems and Technologies, 2021, Springer,
Cham.
(70) Such a use would be in line with the EU Commission's initiative to establish a platform for tax good governance, tackling aggressive tax planning and DT, aimed at stimulating debate among the tax administrations of the Member States with regard to good tax governance.

2016 amendment of the German Tax Law (71). The reform has introduced
a “fully automated procedure” for risk management, which allows the
German tax authority to detect high-risk cases and prevent tax evasion.
The fully automated procedure is based on the data provided by the
taxpayer, on the information already available to the tax authorities, and
on data transmitted by third parties to the tax authorities. It is intended to ensure appropriate risk detection and corresponding verification by automatically singling out cases involving significant risk and submitting them for comprehensive examination by public officials.
Moreover, in the UK, the HMRC developed the Connect System, a computerised data analytics and network analysis system that cross-checks the tax records of companies and individuals against other databases to detect fraudulent activities (72). The system looks for correlations between declared income and lifestyle data coming from a variety of sources, such as banks, the land registry, credit cards, vehicles, VAT registries, tax investigations, employer income, online platforms, social networks, web browsing and email records.
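The cross-checking logic can be illustrated with a hypothetical sketch (this is not HMRC's actual Connect implementation): declared income is matched against spending observed in third-party sources, and large mismatches are flagged. The taxpayer IDs, amounts, and ratio threshold are all invented.

```python
# Cross-check declared income against spending observed elsewhere.
declared_income = {"T1": 30_000, "T2": 25_000, "T3": 80_000}
observed_spending = {"T1": 28_000, "T2": 95_000, "T3": 70_000}

def flag_mismatches(declared, observed, ratio=1.5):
    """Flag taxpayers whose observed spending exceeds declared income
    by more than the given ratio."""
    return [tid for tid, income in declared.items()
            if observed.get(tid, 0) > ratio * income]

flagged = flag_mismatches(declared_income, observed_spending)
```

The real system correlates dozens of data sources rather than a single spending figure, but the core operation is this kind of record linkage followed by a mismatch test.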
Machine learning-based security and fraud detection applications have
been experimented with in customs duties, but few have been deployed
operationally, and their results are generally not shared by administra-
tions (73). One example is the web-based platform called Theseus, used
by OLAF (74). Thanks to in-house developed statistical methods on ag-
gregated and disaggregated data, the platform creates alerts related to
illicit activities such as customs fraud or money laundering.
When addressing the use of AI in tackling tax evasion, it is worth
mentioning that sometimes the line between tax evasion and tax avoidance
is subtle (75). Moreover, in the case of AI use for fraud detection, AI may

(71) Law of July 18, 2016 (BGBl I, 1679). For further information, see N. B. Binder,
Artificial Intelligence and taxation: Risk management in fully automated taxation procedures,
in T. Wischmeyer, T. Rademacher (Edited by), Regulating Artificial Intelligence, Springer,
2020, 295 ff.
(72) Croner-i Navigate, available at https://library.croneri.co.uk/acmag_194203.
(73) K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 4.
(74) Available at https://theseus.jrc.ec.europa.eu. The platform employs classical data
mining techniques, which partially overlap with machine learning techniques. Theseus fo-
cuses on analysis at the macro level: OLAF is not interested in single breaches (which
remain the responsibility of national authorities) but rather in serial and systematic breaches.
(75) F. Cachia, Aggressive Tax Planning: An Analysis from an EU Perspective, cit., 258.
Tax evasion is the direct, open violation of tax rules (such as the rules imposing the
obligation to declare the taxable event), expressly provided for and punished with administrative
and/or criminal sanctions (F. Tesauro, Istituzioni di diritto tributario, Torino, Parte generale,
2011, I, 242). Tax avoidance differs from tax evasion in that the taxpayer - instead of

struggle to detect the element of intentional wrongdoing by the taxpayer, which is necessary to qualify a certain behaviour as fraud. Hence, a proper use of AI should include secure mechanisms that allow the taxpayer to challenge the system's determination, so that these nuances can be taken into account.
As we will see in further detail in the following paragraphs, this has to do
with the broader challenge of ensuring control over AI, especially in risk
assessment activities.

4.1.3. - Collaborative tax compliance


In addition to case selection and fraud detection, AI-based risk asses-
sment can improve tax compliance and recommend compliance solutions.
These applications reveal an additional synergy between information tech-
nologies and compliance procedures, not limited to e-payments methods
such as the Electronic Invoicing System.
For example, in the UK, the HMRC implemented a risk modelling and experimentation programme (called ADEPT (76)) that detects taxpayers who are likely to fail to meet filing obligations. The model uses supervised machine learning to predict the taxpayers most likely to miss filing deadlines and targets interventions to encourage compliance. Such an application is based on profiling UK citizens to detect those who, based on their personal information and lifestyle, are most likely to incur non-compliance. It uses nudge theory to improve government policy and services (77).
Similarly, AI applications have been deployed to manage tax pay-
ments. Many tax administrations use risk-based modelling techniques that
detect the individuals or companies most likely to fail to meet payment obligations (78).
For example, Australia and Norway use real-time debt management sy-
stems that propose different payment arrangements to the taxpayer de-
pending on their predicted propensity and capacity to pay. In this case,

committing a direct breach of the tax rule - improperly uses one or more legal instruments
to achieve a certain objective, thereby achieving a reduction in the tax burden (R. Cordeiro
Guerra, P. Mastellone, Evasione [dir. Trib.], 2017, in Enciclopedia Treccani Online; see
also S. Cipollina, Abuso del diritto o elusione fiscale, in Dig. Comm., Agg. VIII, Milano, 2017,
1 ff.). Many countries make a distinction between acceptable tax avoidance and unaccepta-
ble tax avoidance. Moreover, there are tools such as the above-mentioned ATAD which
fight tax avoidance.
(76) The acronym stands for Analytics for Debt Profiling and Targeting.
(77) UK Government, Building a trusted, modern tax administration system, 2020,
available at https://www.gov.uk/government/publications/tax-administration-strategy/buil-
ding-a-trusted-modern-tax-administration-system.
(78) OECD, Advanced Analytics for Better Tax Administration, cit.

combining AI and data analytics on a large volume of taxpayers' data in real time can support efficient decision-making.
These examples show that the use of AI risk modelling techniques in tax
compliance could in the future foster the idea of a collaborative relationship
between taxpayers and the tax administration by providing tailor-made so-
lutions and facilitating compliance in a cooperative way (See para. 3.2) (79).

4.2. - Legal issues concerning the use of AI


While many tax administrations are using AI technologies for different
activities, the legality of some solutions should be evaluated with respect to
existing tax principles and rules. Such an assessment should be framed
bearing in mind the public nature of the relationship between the tax
administration and the taxpayer, which requires a balance between possi-
bly conflicting interests. On the one hand, the proper implementation of
the tax rules for public finance objectives; on the other hand, the protec-
tion of the taxpayer whose position is affected by the activities of tax
administrations (80).
Different legal issues may arise from the use of AI by the public
administration depending on the purposes for which systems are used
and on the relationship between the automated process and the assessment notice. For example, using AI systems for case prioritisation, such as the one currently in use in Germany (See para. 4.1.2), is very different from employing AI to ground the tax assessment or tax collection process, as happens in Norway (81). Likewise, while AI used for an internal investigation or to carry out an initial screening of activities and operations (See Para 4.1.1.) may endanger individuals' privacy rights (See para 4.3), it does not directly affect the obligation to state reasons or the right of defence (See para. 4.5) (82).

(79) OECD, Co-operative Compliance, cit. It is worth mentioning that it would be interesting to further investigate the use of AI in tax collection. In fact, in the executive activity (rather than in the assessment activity), AI would not be entrusted with the ascertainment of facts but rather with activities such as suggesting more effective strategies of enforcement against the taxpayers' assets. Since such activity, in certain situations, is limited to the
need to cross-reference data, the use of AI techniques could be particularly effective and
would be more profitable than the use in assessment activities. Such utility would be
exceptionally high in countries with a high level of evasion from tax collection.
(80) L. Quarta, Impiego di sistemi AI da parte di Amministrazioni finanziarie ed agenzie
fiscali, cit, 247.
(81) According to Scarcella, Tax compliance and privacy rights, cit., p. 4, advanced
analytics is the first tool to prioritise cases.
(82) According to some authors, a distinction should be made depending on whether

Based on these distinctions, a stance has emerged in legal doctrine which argues for a more "moderate" use of AI. This would mean that the use of AI should be regarded as legitimate, and even advisable, whenever it can support administrative activities and remain comprehensible to tax officials. In other situations, the use of AI should be seriously questioned (83). In the following paragraphs, we will deepen this debate and will
analyse some of the most relevant issues raised by AI’s use by the public
administration, paying particular attention to 1) taxpayer data protection,
2) the principle of equality and non-discrimination, 3) the duty of the tax
administration to state reasons for the act and the taxpayers’ right of
defence and 4) accountability of tax administration.

4.3. - Taxpayers’ privacy and data protection


Many current AI applications for tax law involve processing taxpayers’
personal data or taxpayers’ clients’ data. One on the hand, these data
contribute to the datasets used by learning systems to draw knowledge
from it. On the other hand, AI can be applied to taxpayers’ personal data
to make assessments and decisions concerning them, such as the systems
assessing the risk of non-compliance used in customs duties (See Para
4.1.1.). In both cases, the lawfulness of using AI technologies in the tax
domain also depends on the level of compliance with privacy and data
protection laws (84). At the same time, the need to respect individual
privacy and data protection should be balanced against the collective
interest in tax transparency and effective tax enforcement (85).
In Europe, the right to privacy and data protection are protected by
several sources. The ECHR protects the right to respect for private and family life (Article 8), and the ECtHR has ensured its application to tax matters (86).

AI is used for supporting the assessment and the evaluation of the factual elements or for
directly adopting the final assessment notice (S. Dorigo, Intelligenza artificiale e norme
antiabuso, cit., 743).
(83) See the proposal made by Dorigo in S. Dorigo, Intelligenza artificiale e norme
antiabuso, cit., 745.
(84) Privacy and data protection here is used not merely as synonymous with “tax
confidentiality” but also to the set of individual rights and freedom stemming from funda-
mental rights and data protection framework. For a discussion on the distinction, see E.
Politou, E. Alepis, C. Patsakis, Profiling tax and financial behaviour with big data under the
GDPR, in Computer law & security review, 2019, 35, 3, 306 ff.
(85) J. Kokott, P. Pistone, R. Miller, Public International Law and Tax Law: Taxpayers’
Rights: The International Law Association’s Project on International Tax Law-Phase 1, in
Geo. J. Int’l L., 2021, 52, 2, 381-426.
(86) Satakunnan Markkinapörssi Oy & Satamedia Oy v. Finland [GC], no. 931/13, 27
June 2017.

The European Union protects privacy and data protection both in primary
and secondary law. The Charter of Fundamental Rights provides the
fundamental right to privacy in Article 7 and the right to data protection
in Article 8. In secondary law, these rights are extensively addressed by the
General Data Protection Regulation (henceforth GDPR) (87), which is a
general framework applicable to natural and legal persons that process
personal data. The GDPR provides individuals with a series of ex-ante and
ex-post rights to control and manage the access to personal data, while
also imposing on data collectors a series of obligations to ensure that
personal data are processed in respect of citizens’ fundamental rights.
As a general rule the GDPR applies to the processing of taxpayers’
data, as long as these can identify a natural person or render him or her
identifiable. According to the GDPR, the processing of personal data can
be considered lawful if one of the six conditions laid down in Article 6 is
applicable. Letter e) is relevant for the tax law sphere. It states that data
processing is lawful when necessary for the performance of a task that is
carried out in the public interest or in the exercise of official authority
vested in the controller.
However, the protective reach of the GDPR over the processing of
taxpayers’ data is reduced by many exemptions provided by the regulation
for the tax field. These exemptions clearly respond to the fundamental
need of Member States to balance privacy with the general interest in tax
transparency and an effective exercise of tax authorities’ powers.
In particular, Article 23 explicitly allows Member states to restrict the
application of Articles 12 to 22 and Article 34 of the GDPR when
such restriction is needed and proportionate to safeguard important ob-
jectives of general public interest. Among such objectives, “budgetary and
taxation matters” are included in letter e) (88). The exemption is vast, as it
includes rights such as the right to transparent information on the
processing (Articles 13-14), the right of access (Article 15), the right to erasure
(Article 17), the right to object (Article 21), and the right not to be subject
to automated decision-making (Article 22) (89). The exemption also includes
Article 34, which lays down the obligation of the data controller to
communicate a personal data breach to the data subject. However, when
restricting these rights and obligations, the Member States must adopt a
legislative measure which respects “the essence of the fundamental rights
and freedoms” (90). This requirement is further specified in Article 23(2),
which states that the measure in question should, at least, contain specific
provisions regarding: the purpose of the processing or categories of
processing; the categories of personal data; the scope of the restrictions
introduced; the safeguards to prevent abuse or unlawful access or transfer;
the specification of the controller or categories of controllers; the storage
periods and the applicable safeguards taking into account the nature,
scope and purposes of the processing or categories of processing; the risks
to the rights and freedoms of data subjects; and the right of data subjects
to be informed about the restriction, unless that may be prejudicial to the
purpose of the restriction.
Another important exemption is included in Article 49(1)(d), which
allows the transfer of personal data to third countries or international
organisations on the ground of “important reasons of public interest”.
In this connection, Recital 112 clarifies that this derogation should include
transfers between “tax or customs administrations” and “between financial
supervisory authorities”. Many of these transfers are not only permitted,
but in fact encouraged or required by European regulations (91).
Finally, Article 4(9) explicitly excludes from the definition of
“recipients of data” the “public authorities which may receive personal data in
the framework of a particular inquiry in accordance with Union or Member
State law”. This specification entails that, if data controllers are under
an obligation to disclose personal data concerning taxpayers to public
authorities, the rules on the information and the guarantees provided to
data subjects in case of disclosure to third parties are not applicable.

(87) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27
April 2016 on the protection of natural persons regarding the processing of personal data
and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119,
4.5.2016 [henceforth, GDPR].
(88) Article 23 GDPR. See also Recitals 31, 71 and 112 GDPR.
(89) Article 22 provides for the data subject’s right not to be subject to a decision based
solely on automated processing, including profiling. The prohibition applies to all cases
where personal data are used to profile taxpayers, and automated systems are used to take
decisions that produce legal effects or similarly significantly affect the data subject. However,
the prohibition of automated decision-making, including profiling, is not absolute and
allows for fundamental exemptions. For our purposes, Recital 71 provides valuable hints. It
states that decision-making, including profiling, should be allowed where expressly
authorised by Union or Member State law to which the controller is subject, including for
fraud and tax-evasion monitoring and prevention purposes. Among the exemptions directly
provided by Article 22, para 2, the authorisation by Union or Member State law will most
likely offer safe grounds for the activities carried out by tax administrations and taxpayers
employing AI systems.
(90) Art. 23(1) GDPR.
(91) Such as, in the field of VAT, Council Regulation (EU) No 904/2010 of 7 October
2010 on administrative cooperation and combating fraud in the field of value added tax; in
the field of excise duties, Council Regulation (EC) No 2073/2004 of 16 November 2004 on
administrative cooperation and Council Directive 2004/106; in the field of direct taxation,
Council Directive 2011/16/EU of 15 February 2011 on administrative cooperation in the
field of taxation and repealing Directive 77/799/EEC.
140 diritto e pratica tributaria internazionale n. 1/2022
Based on such a framework, the Italian Budget Law for 2020 provided two
rules concerning taxpayers’ privacy (92). On the one hand, it introduced
rules aimed at effectively exchanging and cross-checking the data available
to the different bodies and agencies composing the Italian Tax
Administration, thus allowing for more effective controls. On the other hand, it
provided for the modification of the Italian Privacy Regulation (93) to include
the prevention and fight against tax evasion among the processing operations
“necessary for reasons of substantial public interest” which – as outlined above –
allow for a limitation of the GDPR guarantees (94). It has been argued that
these measures could amount to a non-proportionate limitation of data
subject rights (95).
As seen in this paragraph, the limited applicability of the GDPR in tax
matters reflects the complexity of balancing the protection of the tax-
payer’s privacy and the effective exercise of state powers in connection
with tax law enforcement. General criteria such as necessity and
proportionality are susceptible to differing interpretations in the
Member States. This could lead, on the one hand, to a different level of
protection of taxpayers’ privacy in the EU, and, on the other hand, to an
overly broad interpretation of the exceptions.

4.4. - Algorithmic discrimination and the impartiality of tax admini-


stration.
AI systems based on machine learning methods learn from the
examples contained in the data. In doing so, they tend to reproduce the merits
and flaws present in the data and may lead to wrong or biased decisions.
This scenario may occur when an AI system bases its decision on prohi-
bited features, such as race, ethnicity, or gender, or on features that are
not relevant for the decision at hand. This situation is often described as
“algorithmic discrimination” (96).

(92) Art. 1 para. 681-686 of Law 27 December 2019, n. 160.


(93) Legislative Decree 30 June 2003, n. 196 as amended by the Legislative Decree 10
August 2018, n. 101 implementing the GDPR.
(94) Artt. 15, 16, 17, 18, 19, 20, 21, 22 GDPR.
(95) In this respect, see L. Quarta, cit., 268 mentioning and commenting on the
position of the Italian Data Protection Authority.
(96) It is important to stress that “algorithmic discrimination” is a socio-technical
concept different from the traditional legal concept of “discrimination”. For an account

In the tax domain, the system’s risk of adopting discriminatory
determinations may derive from multiple causes. For example, the inclusion of
taxpayers’ sensitive information into the training set could result in tax
administration focusing only on individuals belonging to specific groups or
having certain features. A system may learn that people belonging to a
particular ethnicity are more likely to fail to comply with specific tax
obligations and suggest future tax assessments targeted at other
taxpayers belonging to the same group (97). Moreover, discriminatory
determinations may result from an overly inclusive dataset, for example, one
containing personal information unrelated to tax purposes. In this case, a
system may detect a correlation between a particular characteristic or behaviour
of the taxpayer which is not an expression of the ability to pay (e.g.,
connections and posts on social media, or a subscription to a haute couture
magazine) and the probability of failing to comply with specific tax obligations.
Finally, algorithmic discrimination may derive from the type of data that
the system developer (possibly together with the tax administration) deci-
des to include in the dataset. For example, if the tax administration uses
an AI system trained on tax assessments whose majority was directed to
small-medium enterprises, the machine will be more prone to suggest
future tax assessments against small-medium enterprises. Similarly, suppo-
se a machine that detects risks of non-compliance in tax collection were
trained only considering the timeliness of taxpayer’s compliance. In that
case, it may learn that it is more profitable to control small enterprises than
large ones, in which investigations are complex, and the outcomes are
more uncertain (98).
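By way of a purely illustrative sketch, the selection-bias mechanism described above can be rendered in a few lines of code (the segments, rates, and the naive counting “model” are all hypothetical):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical audit history: past controls were concentrated on small
# enterprises, so the data over-represents that segment even though the
# true non-compliance rate (10%) is identical for both segments.
history = [("small", random.random() < 0.10) for _ in range(900)]
history += [("large", random.random() < 0.10) for _ in range(100)]

# A naive risk "model" that ranks segments by the raw number of past
# non-compliance findings, ignoring how often each segment was audited.
findings = Counter(segment for segment, non_compliant in history if non_compliant)
print(findings.most_common())  # "small" dominates only because it was audited more
```

Although both segments have the same true non-compliance rate, the segment that was audited more often accumulates more findings; a model trained on the raw findings will therefore keep directing future controls towards it, reproducing the skew of its own training data.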
If the tax administration adopts discriminatory determinations produced
by AI systems, problems may arise regarding the principle of impartiality.
At the national level, this principle frequently has constitutional relevance, as
is the case with Article 97 of the Italian Constitution. Within the
European Union, the principle of impartiality finds a specific legal reference in

of “algorithmic discrimination”, refer to S. Barocas, A.D. Selbst. Big data’s disparate impact,
in Calif. L. Rev., 2016, 104, 671 ff.; F.J. Zuiderveen Borgesius, Strengthening legal protection
against discrimination by algorithms and artificial intelligence, in The International Journal of
Human Rights, 2020, 24, 10, 1572-1593; G. Sartor, F. Lagioia, Le decisoni algoritmiche tra
etica e diritto, in U. Ruffolo (a cura di), Intelligenza artificiale - Il diritto, i diritti, l’etica,
Giuffrè, Milano, 2020; P. Hacker, Teaching Fairness to Artificial Intelligence: Existing and
Novel Strategies Against Algorithmic Discrimination under EU Law, in Common Market Law
Review, 2018, 55, 4, 1143.
(97) S. Bastiani, T. Giebe, C. Miao, Ethnicity and tax filing behaviour, in Journal of
Urban Economics, 2020, 116, C, 1-16.
(98) See K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 13.
Article 41 of the Charter of Fundamental Rights of the European
Union (99). An impartial and fair administration is also at the core of the
ongoing debate on “good governance”, which refers to a society with
political, legal, and administrative institutions enacting and implementing
effective public policies (100).
Two questionable scenarios may arise in connection with the principle
of impartiality, as well as with the discretion of the administrative ac-
tion (101). First, although everyone can be checked, for the sake of effi-
ciency the tax administration directs tax controls to taxpayers who, based
on extra-fiscal factors, show a higher risk of non-compliance with tax law
obligations. A second risk is a systematic exclusion from investigations of
certain groups of taxpayers having particular features, thus creating
unaccountable taxpayers. Such situations would call into question the limits of the
tax administration’s discretion and impartiality. Indeed, while a partial exercise
of administrative power could occur even without AI technologies, AI
systems may increase the likelihood and the gravity of partial outcomes.
The technology could be designed in such a way as to leave taxpayers
belonging to certain groups entirely out of the scope of investigations, with
the result of creating problematic areas of impunity. Moreover, tax autho-
rities often do not have time to carefully check the outcomes suggested by
AI and are influenced by the aura of mathematical certainty that sur-
rounds it. The likely result is that the outcome of the system will be
uncritically adopted by the tax administration, hindering its discretion.
The potential for unfair automated treatment in tax matters is
increasingly acknowledged. For example, in France, the 2020 Budget Law
provided for the possibility of using social networks to obtain information to

(99) Article 41 of the EU Charter of Fundamental Rights of the European Union (Right
to good administration).
(100) B. Rothstein, The oxford Handbook of Governance, 2012, D. Levi-Faur ed., 143-
144. The concept is closely related to those of state capacity, quality of government and
government interaction with the private sector and civil society. In 2013, the EU Commis-
sion referred to this concept in the Decision for the establishment of a Platform for tax good
governance, aggressive tax planning and DT, aimed at stimulating the debate between the tax
authorities of the Member States with regard to good tax governance. See, more extensively
on this topic, G. Végh, H. Gribnau, Tax Administration Good Governance, cit. and F.
Amatucci, L’autonomia procedimentale tributaria nazionale ed il rispetto del principio europeo
del contraddittorio, in Riv. trim. dir. trib., 2016, 2, 257-276.
(101) This article does not dwell on the binding or discretionary nature of the tax
administration activities. Similarly, it does not take a position on the different declinations of
the concept depending on whether it refers to the assessment or the collection. For this
reason, we will limit ourselves to general reflections, with the idea of deepening the topic in
further studies.
initiate investigations, although on an experimental basis and for three
years. The French Constitutional Council has affirmed the legality of the
new measure, but it required the Council of State to adopt a decree
establishing the limits and conditions for using the information collec-
ted (102).
Considering what has been said, a significant challenge emerges for
the use of AI in the tax domain: gathering personal data which possibly do
not contain any bias and can provide robust and reliable models which do
not lead to outcomes potentially breaching the impartiality of tax
authorities and undermining taxpayers’ trust.

4.5. - Duty to state reasons


As seen in Section 2, current AI systems apply machine learning
techniques to a significant quantity of data. Based on such data, the AI
system makes a number of statistics-based computations that correlate specific
data features and produce an output. Sometimes, such computations are
so complex that, while input and output are observable and known, the
inner workings of the system remain obscure, even to its programmers (the
so-called “black box problem”). These features of AI tools are likely to raise issues
concerning the duty to state reasons and the taxpayer’s right of defence
when AI technologies are used for the tax assessment.
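The “black box” point may be illustrated with a deliberately toy sketch (hypothetical, normalised input features and arbitrary random weights; production models have many thousands of such parameters):

```python
import math
import random

random.seed(1)

# A toy two-layer network with arbitrary fixed weights. Input and output
# are observable, but the intermediate weighted sums carry no individually
# readable meaning - scaled up, this is the "black box" problem.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def risk_score(features):
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features))) for row in W1]
    return 1 / (1 + math.exp(-sum(w * h for w, h in zip(W2, hidden))))

# Hypothetical inputs: declared income, deductions, sector code (normalised).
print(round(risk_score([0.6, 0.2, 0.9]), 3))  # a score strictly between 0 and 1, with no stated reason
```

The score is perfectly reproducible, yet nothing in the computation maps onto a statement of reasons that a tax office could transcribe into an assessment notice.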
The duty to state reasons is typically codified by national laws (103), as
well as by Article 41 of the EU Charter of Fundamental Rights (“right to
good administration”) (104), and it is essential in all cases where the sphere

(102) Decision of the Constitutional Council of 27 December 2019, no. 2019-796 DC,
on Art. 154 of Law n˚ 2019-1479 of 28 December 2019.
(103) In Italy, we can mention Articles 97 and 113 of the Italian Constitution and
Article 7 of Law 27 July 2000, n. 212 (Statuto del contribuente). On the latter and its
relationship with Article 3 of Law 7 August 1990, n. 241 concerning the regulation on
administrative procedures see, ex multis, L. Perrone, La disciplina del processo tributario
nello statuto del contribuente, in Rass. trib., 2011, 54, 3, 563-576; P. Selicato, Attuazione del
tributo nel procedimento amministrativo, Milano, 2001; S. La Rosa, Il giusto procedimento
tributario, in Giur. imp., 2004, 3, 763 ff.; A. Comelli, Sulla non condivisibile tesi secondo cui
‘‘accertamento tributario si identifica sempre con un procedimento amministrativo (speciale), in
Dir. prat. trib., 2006, 2, 731-748.
(104) As regards the application of the right to good administration to national tax
administrations, we do not intend to go into this subject in depth. In this respect, we refer to
the reading of, among others, D.U. Galetta, Il diritto ad una buona amministrazione nei
procedimenti amministrativi oggi (anche alla luce delle discussioni sull’ambito di applicazione
dell’art. 41 della Carta dei diritti UE), in M.C. Pierro (Edited by), Il diritto a una buona
amministrazione nei procedimenti tributari, Giuffreı̀ Francis Lefebvre, Milano, 2019, 1-32;
G. Vanz, Buona amministrazione” di cui all’art. 41 della Carta dei diritti fondamentali
dell’Unione Europea e diritto tributario nazionale, in Rass. trib, 2019, 4, 709-726.
of the individual is affected by a measure based on authoritative
power (105). It ensures the proper exercise of administrative activity and
the protection of taxpayers’ rights. Taxpayers must be able to understand
what is being contested against them, on the basis of which rules, and the
logical process that led to a particular conclusion and, hence, decision. Indeed,
the obligation to state reasons closely connects to the right of defence of the
addressee of the administrative measure (106).
Therefore, if the exercise of the authoritative power is based on AI
systems’ decisions, it is essential to ensure the knowability of the system to
the tax authorities. In particular, the logic involved in the decision must be
knowable to understand which rules have been applied in the specific case.
Moreover, the tax authorities must be able to understand how the rules have
been applied to the specific case to evaluate the correctness of the
algorithmic decision and, if correct, to provide a proper statement of reasons. Otherwise,
the administrative activity would depend on the choices of the developers of
the algorithm rather than on tax authorities’ choices based on existing legi-
slation, thus undermining a proper exercise of administrative power (107).
Only if the act is properly reasoned and the logic of the AI system is
knowable can the taxpayer effectively contest the decision and exercise his or her
rights of defence (108). Therefore, the factors that led the AI model to its decision
should be presented in such a way as to be understood by humans. This way,
the tax administration could evaluate the decision before it is officially issued,
and the taxpayer could challenge it afterwards (109).
The right to challenge the decision provided by AI systems is parti-
cularly important as automated determinations are not based on causality
but on predictive correlations. An AI system used in tax assessment can
identify specific features of taxpayers that only presumably may indicate
non-compliance (110). In this regard, it should be noted that the problem

(105) L. Quarta, Impiego di sistemi AI da parte di Amministrazioni finanziarie, cit., 264.


(106) In Italy, we can mention Articles 24 and 113 of the Italian Constitution regarding
the right of defence. On the duty to state reasons in administrative acts, see A.R. Tassone,
Motivazione nel diritto amministrativo, in Digesto delle discipline pubblicistiche, 2011. For
the field of tax law and in relation to the use of AI, see S. Dorigo, Intelligenza artificiale e
norme antiabuso, cit., 743.
(107) D.U. Galetta, Intelligenza Artificiale per una Pubblica Amministrazione 4.0?, in
Federalismi.it, 6 February 2019, No. 3; L. Viola, Intelligenza artificiale nel procedimento e nel
processo amministrativo, in Federalismi.it, 7 November 2018, No. 21.
(108) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 744.
(109) Such approaches switch the emphasis from the computation rules to the control
of the decision rules.
(110) Current AI systems do not base their assessments on logical deductions, accor-

of the compatibility of using predictions to determine the amount of the
tax obligation and fines with the procedural guarantees of taxpayers is not
new. The issue is reminiscent of the questions of legitimacy that have been
raised, for example, in Italy on the use of inductive methods to calculate
the tax base (111).
The Italian Council of State recently delivered several decisions con-
cerning the use of AI in the administrative procedure, which are relevant
also in the tax domain (112). The first of these decisions, concerning an algorithm for
assigning public-school teachers, overturned a previous series of rulings by
the Regional Administrative Court of Lazio, which had affirmed that,
despite the possible high degree of accuracy, algorithms could never re-
place human decision-making (113). Following this overruling, the Italian
Council of State has developed a case law (114): the use of automated
decision-making should, in principle, be encouraged in the name of time
efficiency, errors reduction and impartiality of the administration (115).
Article 97 of the Constitution can ground the admissibility of such
instruments. However, the use of automated procedures cannot result in cir-
cumventing general principles of administrative law, as it happens in re-

ding to which certain premises (e.g., income, exchange, etc.) lead to certain conclusions
(e.g., the amount of tax to be paid, the decision for inspections, etc.). But they use a
statistical approach that applies rules that are discovered from previous cases, and from
which the algorithms provide probable answers (often a probability resembling certainty),
but never certain answers.
(111) On the legitimacy of inductive methods, see, among others, A. Fedele, Rapporti
tra i nuovi metodi di accertamento ed il principio di legalità, in Riv. dir. trib., 1995, 1, 242 ff.;
E. Fazzini, L’accertamento per presunzioni: dai coefficienti agli studi di settore, in Rass. trib.,
1996, 2, 309 ff.; G. Marongiu, Coefficienti presuntivi, parametri e studi di settore, in Dir. prat.
trib., 2002, 73, 5, 707-734; M. Basilavecchia, Verso il giusto equilibrio tra effettività della
ricchezza accertata e strumenti presuntivi di accertamento, in Riv. giur. trib., 2013, 4, 341-343;
A. Kostner, Studi di settore e tutela del contribuente tra diritto interno e principi sovranazio-
nali, in Dir. prat. trib., 2017, 88, 1, 28-50.
(112) As far as the applicability to tax law, see L. Quarta, Impiego di sistemi AI da parte
di Amministrazioni finanziarie, cit., 275.
(113) T.A.R. Lazio, sec. III bis, 13 September 2019, n. 10964; T.A.R. Lazio, sec. III bis,
9 July 2019, no. 9066; T.A.R. Lazio, sec. III bis, 28 May 2019, n. 6688; T.A.R. Lazio, sec. III
bis, 25 March 2019, n. 3981; T.A.R. Lazio, sec. III bis, 11 September 2018, n. 9228. In
particular, the Court found that the complete substitution of human activities by algorithms
breaches Articles 3, 24 and 97 of the Italian Constitution and Article 6 of the European
Convention on Human Rights.
(114) Italian Council of State, sec. VI, 8 April 2019, n. 2270; Italian Council of State,
sec. VI, 13 December 2019, n. 8474; Italian Council of State, sec. VI, 13 December 2019, n.
8473; Italian Council of State, sec. VI, 13 December 2019, n. 8472. For a comment to these
decisions, see J. Della Torre, Le decisioni algoritmiche all’esame del Consiglio di Stato, in Riv.
dir. proc., 2021, 2, 710 ff.
(115) Italian Council of State, sec. VI, 13 December 2019, n. 8474.

lation to the “black box” problem. Thus, a set of minimum conditions for
the legitimate use of AI in the administrative procedure has been provided:
a) the algorithm must be transparent and knowable; b) the algorithm must
not be the only basis for the authority’s decision; c) the algorithm must be
non-discriminatory (116). In respect of the first point, the Italian Council of
State has spoken of the right of public officials to understand the
“logic process on the basis of which the act itself [was] issued by means of
automated procedures” (117).
In establishing the above-mentioned criteria, the Italian Council of
State looked at interdisciplinary studies and at EU law initiatives, such
as the GDPR. As we saw above (see para. 4.3), at the European level, the
primary source is the GDPR, which sets out principles and rules applica-
ble to decisions based on the automated processing of personal data. In
particular, the GDPR has acknowledged the importance of transparency
and explainability when automated individual decision-making is taking
place based on personal data, and such decisions may impact the indivi-
duals. Among the information that data subject must receive prior con-
senting to data processing, Article 13 includes “significant information on
the logic involved, as well as the significance and the envisaged conse-
quences of such processing for the data subject”. This provision has been
at the centre of a vast debate in the research community, where this legal
requirement has been related to the more fundamental issue of explaining
AI systems and their outcomes (118).
Given the importance of the duty to state reasons, a challenge emerges
for the use of AI in the tax domain: using AI systems that can provide
explanations of their decisions and enhance the reasoning of the tax
administration, thus allowing a transparent decision-making process. Making

(116) Italian Council of State, sec. VI, 13 December 2019, n. 8474; Italian Council of
State, sec. VI, 13 December 2019, n. 8473.
(117) It should also be noted that, in Italy, a task force on Artificial Intelligence has
already produced an extensive White Paper on the subject (Task Force on Artificial Intel-
ligence of the Agenzia per l’Italia Digitale, White Paper on Artificial Intelligence at the
Service of the Citizen).
(118) For a discussion on the actual inclusion of a right of explanation in the GDPR,
see S. Wachter, B. Mittelstadt, L. Floridi, Why a right to explanation of automated decision-
making does not exist in the general data protection regulation, in International Data Privacy
Law, 2017, 7, 2, 76-99; G. Malgieri, G. Comandé, Why a right to legibility of automated
decision-making exists in the general data protection regulation, in International Data Privacy
Law, 2017, 7, 4, 243-265. For a discussion on how such right could be framed from a
practical point of view, see G. Sartor, The impact of the General data protection regulation
(GDPR) on artificial intelligence, Study PE 641.530, European Parliamentary Research
Service, 2020, 54.

complex AI systems transparent and explainable is the goal of an
interdisciplinary research field called “explainable AI” (119).
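Conversely, for simple and transparent model classes, the factors behind a decision can be read directly off the model itself. A minimal sketch, with invented weights and feature names:

```python
# For a linear scoring model, the per-feature contributions (weight x value)
# already form a human-readable statement of reasons: the same factors
# that produce the score can be listed, ranked, and contested.
WEIGHTS = {"undeclared_income_ratio": 2.0, "late_filings": 0.8, "sector_risk": 0.5}

def score_with_reasons(features):
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda item: -item[1])
    return total, reasons

total, reasons = score_with_reasons(
    {"undeclared_income_ratio": 0.4, "late_filings": 2, "sector_risk": 1}
)
for name, contribution in reasons:
    print(f"{name}: +{contribution:.2f}")  # e.g. late_filings: +1.60
```

Explainable-AI research pursues the harder task of producing comparably legible factor lists for opaque models, but the sketch shows the form a reviewable explanation could take.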

4.6. - Accountability
The duty to state reasons is strictly intertwined with the accountability
of the tax administration. Tax administration accountability demands
that citizens can identify the person who is in charge of and responsible for
an administrative action or decision (120). Only in this way can citizens
know from whom to seek redress in the case of a wrongdoing.
Regarding AI applications in public administration, it has been
observed that the principle of accountability would entail that there is always a
human supervisor who remains responsible for the machine’s
determinations (121). Problems may emerge when determining who is
responsible in case of wrong determinations by AI tax systems. This could
happen in the tax law field, for example, when the tax administration errs
in determining taxpayers’ obligations (122). Additionally, the difficulty of
attributing accountability could place an extra burden on judges,
who will have to assess the correct exercise of the administrative function
when AI technologies are used in administrative procedures, especially for
decision purposes.
In the field of taxation, the principle of accountability entails that the
taxpayer can request the automated decision be reviewed by a human who
must always be in control of the procedure. However, in order to use this right
effectively, the taxpayer must first know how the AI model reached the
decision. This reveals an interplay between the obligation to state reasons, the
right of defence, and accountability, resulting in a “right to human inter-
vention”, as the latter cannot effectively be used without the former (123).

(119) See for a recent overview of explainable AI methods, R. Guidotti, A. Monreale,


D. Pedreschi, F. Giannotti, Principles of Explainable Artificial Intelligence, in M. Sayed-
Mouchaweh (Edited by), Explainable AI Within the Digital Transformation and Cyber
Physical Systems, Springer, 2021, 9-31.
(120) R. Chieppa, Responsabilità della pubblica amministrazione (I agg.), in Digesto,
2008.
(121) D.U. Galetta, Intelligenza Artificiale per una Pubblica Amministrazione 4.0?, cit.
(122) As already stated, this article does not intend to dwell on the question of the tax
administration activity binding or discretion nature.
(123) The right to obtain human intervention in algorithmic decisions is one of the key
ideas behind the proposal for an AI Act. Article 14 of the Proposal requires that high-risk
AI systems “shall be designed and developed in such a way, including with appropriate
human-machine interface tools, that they can be effectively overseen by natural persons
during the period in which the AI system is in use”. It represents another crucial aspect,

At the same time, the actual feasibility of the “human-in-control”
requirements should be carefully considered. Due account must be taken of the
fact that human supervision will be challenging to realise in practice.
Intuitively, it is implausible that the human supervisor involved in tax
assessment decisions will depart from a decision made by an
AI system presented as infallible, especially when aware that he or she is
responsible for the final determination. Such a situation would raise a
dilemma: if the human supervisor thinks the decision of the machine is
correct, the machine will have de facto determined the outcome of the
decision; if the human thinks the determination of the machine is wrong,
he will not depart from it because of the risk of being held responsible,
and in this case, too, the machine will have determined the outcome of the
decision (124).

5. - AI for the Tax Judiciary


In the past few years, a great interest has emerged in using AI tech-
nologies in the judiciary. While society is rapidly becoming more digital,
the legal sector and the judiciary are progressively digitising. In this re-
spect, it is essential to distinguish AI systems from the “traditional” digital
technologies that are already commonly used by courts, such as applica-
tions for case-management, video technologies, the digital exchange of
information, filing of documents in electronic form (“e-filing”), and legal
databases. These technologies were either introduced to inform, support,
and advise people involved in the justice process (such as video techno-
logies) or replace the administrative functions previously performed by
humans (such as electronic filing). Compared with these, AI technologies
have a more significant impact, as they can affect the core of the adjudicating
function, changing how judges work and deliver their decisions (125).
In the discourse on the use of AI in the judiciary, the possibility of
“robot judges”, i.e., super-intelligent machines replacing judges in their

which highlights the importance of considering the inclusion of AI systems used by the tax
administration in the scope of the new Proposal of Artificial Intelligence Act.
(124) B. Green, The Flaws of Policies Requiring Human Oversight of Government
Algorithms, forthcoming (2022).
(125) For the possibility of using predictive justice systems to remedy certain shortco-
mings of national legal systems, see C. Sacchetto, Processo tributario telematico e giustizia
predittiva, cit. In particular, it is claimed that AI technologies could act as a barrier to the
migration of tax justice towards out-of-trial disputes or para-trial instruments, which ensure
less fairness and quality of solutions and a way to put the judicial administration back at the
heart of tax justice.
150 diritto e pratica tributaria internazionale n. 1/2022

adjudicating capacities, has raised great enthusiasm together with widespread
apprehension. To dispel any misunderstanding, the notion that computing
machines are intelligent, or that their designers have managed to transplant
the legal mind of judges into their mechanisms, is distorted. Currently,
AI applications in the judiciary pursue a much more modest objective,
namely the construction of systems that can perform, at a satisfactory level,
specific tasks connected to judicial activities which normally require
human intelligence.

5.1. - Possible Uses

5.1.1. - Organising legal information


Recognising patterns in text documents and files can be helpful, for
example, when sorting large amounts of cases or in complex cases that
contain much information. In this field, machine learning and NLP
techniques have been increasingly deployed in the past years and are a
considerable asset for intelligent search options to complement current
keyword or full-text search. These tools could link various sources (e.g.,
constitutions, conventions, treaties, laws, case-law), and data visualisa-
tion techniques could illustrate search results. They can help judges
narrow down the relevant cases and easily filter out irrelevant cases and
statutes.
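By way of illustration, the kind of similarity-based search that complements keyword matching can be sketched with a hand-rolled TF-IDF representation over a toy corpus. The case summaries and the query below are invented for illustration only and bear no relation to the tools cited in this article; real systems use far richer linguistic models.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return a sparse TF-IDF weight dict for each whitespace-tokenised document."""
    toks = [d.lower().split() for d in docs]
    n = len(toks)
    df = Counter(t for d in toks for t in set(d))      # document frequency per term
    return [{t: (c / len(d)) * math.log(n / df[t]) for t, c in Counter(d).items()}
            for d in toks]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus of invented case summaries (purely illustrative).
cases = [
    "vat deduction denied for missing invoices",
    "income tax fraud through fictitious invoices",
    "customs duty dispute over import classification",
]
vecs = tfidf(cases)

def search(query):
    """Rank the corpus by similarity to the query, best match first."""
    qv = tfidf(cases + [query])[-1]   # weight the query against the corpus (a simplification)
    order = sorted(range(len(cases)), key=lambda i: cosine(vecs[i], qv), reverse=True)
    return [cases[i] for i in order]

print(search("fraud with fake invoices")[0])
```

Note that the match succeeds on the shared lexical surface ("fraud", "invoices") even though "fake" and "fictitious" are synonyms the method cannot relate, which is precisely the syntactic-not-semantic limitation discussed below.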
Legal tech companies provide most of the existing AI applications in
the US (126). An application currently adopted in the EU is the French
software Prédictice (127), a legal research and analytics platform that
uses NLP to optimise the performance of legal professionals. Among
other things, the software provides an advanced search engine, litigation
visualisation, semantic and topic-based filters covering all French case
law, and enriched reading with case timelines and related

(126) Examples of these applications include CaseText, CaseMaker, FastCase, LexisNexis, Westlaw, IBM Ross Intelligence, Justia, and the Legal Information Institute at Cornell Law
School. An exception, being a tool tailored for the judiciary, is "eDiscovery", the automated
investigation of electronic information for discovery before starting a court proceeding.
eDiscovery uses machine learning algorithms that can extract the relevant parts from a
large amount of information. The parties agree on which search terms and coding to use,
and the judge assesses and confirms the agreement. The method is faster and more accurate
than manual file research.
(127) Prédictice, https://predictice.com (last access 27 November 2021). For a more in-
depth discussion of the topic, see J. B. Hubin, H. Jacquemin, B. Michaux, Le juge et
l’algorithme: juges augmentés ou justice diminuée?, Larcier, Brussels, 2019.

decisions (128). Other similar examples are the French Jurinet and Juri-
CA databases (129).
The organisation of legal information by AI tools in taxation would be
highly profitable. Nevertheless, as already noted, NLP techniques are
based on a syntactic and lexical analysis of language, not on its semantics,
which challenges their effectiveness (See para. 3.1.2.). Moreover, the
complexity of a tax law system, as well as frequent changes in regulation
and in the interpretations of the administration and of judges, can affect
the correctness and timeliness of the answers provided by the software
(See para. 3.1.1.) (130).

5.1.2. - Judicial advising


In addition to looking for relevant information, AI-powered tools can
also provide recommendations or answers to questions that can be useful
to the judges in making the final decisions.
An application of this kind was developed at the District Court of East
Brabant in the Netherlands, in collaboration with Tilburg University,
Eindhoven University of Technology and the Jheronimus Academy of
Data Science. The study investigates the possibilities of AI for cases
concerning road traffic regulations, with a system that recommends to
citizens how to appeal to the local court under the administrative law on
traffic violations (131). The study also aims to develop a tool to support judges in
preparing and deciding such cases. The study uses data from the District
Courts of East Brabant and Zeeland-West Brabant and the Arnhem-Leeu-
warden Court of Appeal, which deals with appeals. It is worth highlighting
again that this application is still an experiment.

5.1.3. - Outcome prediction for judges


AI applications that claim to predict court decisions have attracted
much interest in the past few years under the label of “predictive justi-
ce” (132). Predictive justice involves machine learning and NLP techniques

(128) É. Buat-Ménard, La justice dite «prédictive»: prérequis, risques et attentes – l'expérience française, in Les Cahiers de la Justice, 2019, 2, 2, 269-276.
(129) Ibid.
(130) L. Viola, Giustizia predittiva, in Enciclopedia Treccani, 2018.
(131) M. van der Put, Kan artificiële intelligentie de rechtspraak betoveren?, in Rechtstreeks, 2019, 2, 50-60.
(132) For a definition of predictive justice and its relationship with legal certainty see L.
Viola, Giustizia predittiva, cit.; L. Viola, L'Interpretazione della legge con modelli matematici,
Milano, 2017, 26.

to analyse large amounts of judicial decisions and predict future cases’


outcomes. Such algorithms are first presented with many historical cases,
together with the features and outcomes of those cases. The algorithm can
learn the complex relations between these features and the possible outcomes
from the training data and use these relations to "predict" the outcome of
unseen cases. A hypothetical example in tax law would be to use past
decisions concerning taxpayers tried for fraud as a training set. The features
would be the different taxpayers' data (e.g., personal status, origin,
gender, and case facts); the output would be whether the taxpayer is liable
for fraudulent activities. Using such a dataset, the AI system may learn
which features presumably identify a fraudulent taxpayer and look for those
features in new cases.
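The training-and-prediction loop just described can be sketched, in a deliberately simplified form, with a hand-rolled Naive Bayes classifier over categorical features. All features, values, and outcomes below are invented for illustration only; real predictive-justice systems use far richer data and models.

```python
from collections import Counter, defaultdict

# Hypothetical training set: categorical features of past cases and whether
# the taxpayer was found liable for fraud (entirely invented data).
training = [
    ({"sector": "retail", "deductions": "high", "late_filings": "yes"}, "liable"),
    ({"sector": "retail", "deductions": "high", "late_filings": "no"},  "liable"),
    ({"sector": "services", "deductions": "low", "late_filings": "no"}, "not_liable"),
    ({"sector": "services", "deductions": "low", "late_filings": "yes"}, "not_liable"),
    ({"sector": "retail", "deductions": "low", "late_filings": "no"},  "not_liable"),
]

def train(data):
    """Count class frequencies and, per class, how often each feature value occurs."""
    priors = Counter(label for _, label in data)
    counts = defaultdict(Counter)          # (label, feature) -> Counter of values
    for feats, label in data:
        for f, v in feats.items():
            counts[(label, f)][v] += 1
    return priors, counts

def predict(priors, counts, feats):
    """Naive Bayes with simple add-one smoothing; returns the most probable label."""
    scores = {}
    for label, prior in priors.items():
        p = prior / sum(priors.values())
        for f, v in feats.items():
            c = counts[(label, f)]
            p *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)
        scores[label] = p
    return max(scores, key=scores.get)

priors, counts = train(training)
print(predict(priors, counts,
              {"sector": "retail", "deductions": "high", "late_filings": "yes"}))
```

The sketch also makes the bias risk discussed later tangible: whatever regularity appears in the historical labels, however spurious, is exactly what the model reproduces on new cases.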
To date, AI tools are mainly deployed in the private sector (Para.
3.1.3). They are designed for use by business legal departments, insurers
(both for their internal needs and for their policyholders), and lawyers,
enabling them to foresee the outcome of litigation. The tools provide a
graphic representation of the probability of success for each dispute outcome
based on criteria entered by the user (specific to each type of dispute) (133). In the judiciary, AI appears to be quite popular in the United
States (134). In Europe, national courts do not seem to be making any
practical and daily use of predictive software, and the existing applications
are limited to local tests or open projects (135). Nevertheless, public deci-
sion-makers are beginning to consider the integration of these tools into
public policies (136).

(133) Example of such tools are Prédictice France, Watson/Ross (IBM), Juris Data
Analytics (LexisNexis), LexMachina.
(134) European Commission for the Efficiency of Justice (CEPEJ) of the Council of
Europe, European Ethical Charter on the use of Artificial Intelligence in judicial systems
and their environment (adopted on 3-4 December 2018), 14.
(135) As regards Italy, mention can be made of the project of the Brescia Court of
Appeal (April 2018-December 2020). This project implemented a case law database to
provide predictions of guidelines and timing in particular areas of law. In terms of technology,
the database uses expert and rule-based systems and natural language processing.
Among the areas of concern are civil justice, labour and social security law, contract and
commercial law, company law. A complete account of existing AI projects in the various EU
Member States is contained in a recent study commissioned by the European Commission.
See European Commission, Study on the use of innovative technologies in the justice field –
Final Report, 2020. Report prepared by M. Vucheva, M. Rocha, R. Renard, D. Stasinopolous.
(136) See, recently, the joint position of the ministries of justice in the Background
Paper of the Conference of Ministers of Justice of 5 October 2021 “Digital Technology and
Artificial Intelligence – New Challenges for Justice in Europe” regarding the use of AI for
the digitisation of justice sectors.

While it emerges that predictive justice is still a long way from the idea
of robot judges, there is a burgeoning academic interest in the application
of AI for the justice system, with many techniques and methodologies
being developed and tested. (137) For example, the most extensively de-
scribed research concerns the European Court of Human Rights
(ECHR) (138). Some researchers developed a tool that uses NLP to predict
whether the Court will decide that a particular provision of the European
Convention on Human Rights (ECHR), in a specific situation, has been
violated. The tool relies on information from previous judgments, and the
research reports an accuracy of 79% (139). The results indicate that
the facts of a case, as presented by the Court, have a leading role in
predicting the case’s outcome.
An example of research in the tax law domain is provided by the
Italian research project ADELE funded by the EU. The ongoing project,
in which the two authors are involved, aims to develop an AI tool that can
support legal research and judicial decision-making processes, among
other things, in the tax domain. Its objectives include functionality that
provides judges with the most likely outcome of the decision based on
previous case-law (140).

5.2. - Impact on fair tax process


The possible uses of AI in the judiciary have raised several ethical and
legal questions. It has been claimed that the use of such technologies may
affect the principles of fair trial, transparency, and impartiality (141).

(137) See D.M. Katz, M.J. Bommarito, J. Blackman, A general approach for predicting
the behaviour of the Supreme Court of the United States, in PloS ONE, 2017, No. 4; N.
Aletras, D. Tsarapatsanis, D. Preotiuc-Pietro, V. Lampos, Predicting judicial decisions of the
European Court of Human Rights: A natural language processing perspective, in PeerJ Com-
puter Science, 2016, 2, 93; M. Medvedeva, M. Vols, M. Wieling, Using machine learning to
predict decisions of the European Court of Human Rights, in Artificial Intelligence and Law,
2020, 28, 237-266.
(138) N. Aletras, D. Tsarapatsanis, D. Preotiuc-Pietro, V. Lampos, Predicting judicial
decisions of the European Court of Human Rights, cit.
(139) Accuracy measures how well a machine learning system has learned the relationship between inputs and outputs in a training dataset and, given new inputs, correctly
identifies the outputs in a test set.
(140) ADELE Project funded by the European Union’s Justice Programme under
Grant Agreement no. 101007420. For a description of the project, its objective and current
research output, consult the project website available at https://site.unibo.it/adele/en.
(141) For an ethical and legal account of international literature on AI technologies in
the judiciary, see, among others, H. Surden, Machine learning and law, in Wash. L. Rev.,
2014, 89, 87 ff.; H. Surden, Artificial intelligence and law: An overview, in Ga. St. UL Rev.,
2018, 35, 1305 ff.; T. Sourdin, Judge v Robot?: Artificial intelligence and judicial decision-

Significant problems concerning parties’ procedural rights, such as the


adversarial principle and the equality of arms, which are enshrined in most
national constitutions, could arise. The adversarial principle requires that
any element capable of influencing the solution of the case be known to the
parties and open to discussion. Therefore, when AI is involved in judicial
decision-making, it seems vital to make a certain amount of information
(not only quantitative but also qualitative) accessible to the parties, so that
they can understand how the scales have been constructed, measure their
possible limits, and challenge them before the judge (142).
The principle of equality of arms requires a “fair balance” between the
parties, without any substantial disadvantage of a party vis-à-vis the other,
be they the public authority and the citizen, or two private litigants. The
use of AI in the judiciary may disrupt this balance (143). Digital tools could
facilitate proceedings for certain parties, such as the State and its institu-
tions who have access to many data or companies with many technological
tools and experts, whereas it could entail excessive expenses for those who
are less familiar with AI. This imbalance would add to the asymmetry that
already exists in the tax process between the tax administration, which
often has better knowledge of the facts and easier access to previous case-
law, and the taxpayer, who may not have these resources. In tax law, this
overall asymmetry could also result in an unfair distribution of the tax

making, in University of New South Wales Law Journal, 2018, 41, 4, 1114-1133; T. Sourdin,
Judges, Technology and Artificial Intelligence: The Artificial Judge, Edward Elgar Publishing,
2021; F. Pasquale, G. Cashwell, Prediction, persuasion, and the jurisprudence of behaviourism, in University of Toronto Law Journal, 2018, 68, Supplement 1, 63-81; R.W.
Campbell, Artificial intelligence in the courtroom: The delivery of justice in the age of machine
learning, in Colo. Tech LJ, 2020, 18, 323 ff.; J. Ulenaers, The Impact of Artificial Intelligence
on the Right to a Fair Trial: Towards a Robot Judge?, in Asian Journal of Law and Economics,
2020, 11, 2; M. Zalnieriute, F. Bell, Technology and the judicial role, in Gabrielle Appleby
and Andrew Lynch (Edited by), The Judge, the Judiciary and the Court: Individual, Collegial
and Institutional Judicial Dynamics in Australia, Cambridge University Press, 2021, 116-142.
For Italian literature, see, among others, C. Castelli, D. Piana, Giustizia predittiva, la qualità
della giustizia in due tempi, in Questione Giustizia, 4, 2018; E. Scoditti, Giurisdizione per
principi e certezza del diritto, in Questione Giustizia, 2018, 4, 153 ff.; E. Rulli, Giustizia
predittiva, intelligenza artificiale e modelli probabilistici. Chi ha paura degli algoritmi?, in
Analisi Giuridica dell’Economia, 2018, 17, 2, 533-546.
(142) D. Kehl, P. Guo, S. Kessler, Algorithms in the Criminal Justice System: Assessing
the Use of Risk Assessments in Sentencing, 2017, Responsive Communities Initiative, Berk-
man Klein Center for Internet & Society, Harvard Law School.
(143) J. Ulenaers, The Impact of Artificial Intelligence on the Right to a Fair Trial, cit.,
25. On the use of AI in producing evidence in the judicial proceeding and for its impact on
the principle of equality of arms, see S. Quattrocolo, Equality of Arms and Automatedly
Generated Evidence, Artificial Intelligence, Computational Modelling and Criminal Procee-
dings, Springer, 2020, 73-98.

burden, since it would create a greater divide between, e.g., large multinationals, which have the economic resources to invest in technological
aid, and taxpayers who do not.
Moreover, courts must motivate their decisions (144). Motivation is a
crucial requirement to make parties and society respect judicial decisions
as it is strictly related to the principles of independence and impartiality of
the judiciary, on the one hand, and to the right of the defence, on the
other hand. As seen in the previous section, AI systems are, to date, unable
to explain or justify their determinations, and purely statistical-mathematical
correlations, however accurate, are insufficient to meet the standards of a
reasoned decision. By contrast, AI systems can theoretically be effective in
suggesting possible arguments to judges, especially in the search for previous
judgments, although this would be difficult when applying general clauses
and principles (145). Nevertheless, this second scenario, too, is prone to
abuse. Over time, outcome prediction systems can reverse the conventional
relationship between motivation and decision. The risk is that judges no
longer reach a decision by applying specific rules to the facts of the case on
the basis of reasoning, but instead look for the most compelling arguments
to justify the most probable outcome determined by the machine.
The use of AI by the judiciary could also affect fundamental princi-
ples, such as the impartiality and independence of the judiciary (146).
Without discussing the specific meaning of the principle of impar-
tiality and how it is interpreted in different legal systems, it is worth
mentioning that it can be undermined by biased AI-driven determinations.
An AI system may identify a characteristic of one of the parties that is
predictive of the outcome of the judgment. For example, the system may
detect that people from a particular social background are more likely to
be liable for tax evasion. If adopted uncritically by the judge, such a
determination could undermine the principle of impartiality: the judge
would be led to assume a higher likelihood of conviction in every case
involving a taxpayer with a particular social background, race, gender,
political affiliation, or other irrelevant trait. This outcome would create
negative social repercussions for the tax judiciary, as it

(144) In the Italian case, see Article 111, para. 6, of the Constitution and, for the tax field,
Article 36 of Legislative Decree No. 546 of 1992.
(145) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 749.
(146) J. Ulenaers, The Impact of Artificial Intelligence on the Right to a Fair Trial, cit.

could disproportionately harm or benefit some groups at the expense of


others (147).
Regarding the principle of judicial independence, it is a long-standing
tradition that judges are bound only to the law, and the judiciary must not
depend on any other power, nor should external pressure influence their
decisions. Judicial independence could be compromised if an AI tool
relies on proprietary software developed by a private company operating for
profit, as such tools are often shielded behind trade secrets and are not
subject to the same oversight mechanisms as judges. Independence is also
strictly linked to the accountability of judges: judges are identified as
points of accountability, so that when a decision occurs, we know which
official actors made it. However, with the increased use of AI tools in tax
adjudication, there may be a subtle shift in accountability, not unlike
the one highlighted when we discussed the use of AI by the tax administration. In principle, the output of an AI system is a mere suggestion or
forecast that judges may consider along with all the other evidence in
making their judgments. However, judges using such systems may begin
habitually to defer to the automated determinations. Indeed, if an AI
system produces an automated assessment carrying an aura of mathematical
precision and objectivity, there are reasons to suspect that judges might be
pressured to adopt the automated suggestion or score by default,
even if they formally retain the discretion to come to a different conclusion.
Against this background, a critical discussion has emerged at the
European level to ensure fair and ethical uses of AI technologies in judicial
systems. One of the most notable achievements is represented by the
Council of Europe’s Ethical Charter on the Use of Artificial Intelligence
in Judicial Systems (148). The Charter states that the use of AI should be
carried out "responsibly, with due regard for the fundamental rights of
individuals as set forth in the European Convention on Human Rights and the

(147) M. Zalnieriute, F. Bell, Technology and the judicial role, in Gabrielle Appleby and
Andrew Lynch (Edited by), The Judge, the Judiciary and the Court: Individual, Collegial and
Institutional Judicial Dynamics in Australia, Cambridge University Press, 2021, 139. On the
problematisation of extraneous factors in the judicial decision-making and its impact on the
principle of impartiality, see S. Danziger, J. Levav, L. Avnaim-Pesso, Extraneous Factors in
Judicial Decisions, in Proceedings of the National Academy of Sciences, 2011, 108, 17, 6889-6892.
(148) European Commission for the Efficiency of Justice (CEPEJ) of the Council of
Europe, European Ethical Charter on the use of Artificial Intelligence in judicial systems and
their environment (adopted on 3-4 December 2018).

Convention on the Protection of Personal Data” (149). It spells out five


principles that should be fulfilled to achieve this goal. The principle of
respect for fundamental rights requires that the design and implementation
of AI be compatible with fundamental rights such as privacy, equal treat-
ment, and fair trial. The principle of non-discrimination must ensure that
using AI tools in the judiciary does not lead to unfair treatment, especially
if based on sensitive data. Data quality and security principles mean that
judicial decisions and data must not be altered, and models must ensure
technological robustness. The principle of transparency, impartiality, and
fairness highlights that the algorithmic tools and the way judicial data are
processed must be accessible and understandable by the people involved
in the process, and external audits must be authorised. The principle of
user control aims to ensure that end-users are informed and that algorith-
mic determinations are not used as prescriptions. Even though these prin-
ciples are not binding, they provide a meaningful reference for the ethical
use of AI technologies in the judiciary, including in the tax domain.

5.3. - Impact on the tax judiciary


If technically sound and employed in compliance with legal principles
of the process, the use of AI can substantially benefit the judiciary.
Among these benefits, AI may improve the knowledge of case-law.
For example, in Italy, the judiciary is at the forefront of time-to-trial, but
not in terms of quality of decisions (150). Although some initiatives have
been taken that make it possible for taxpayers to be aware of all legislative
and administrative provisions in the tax field (such as the creation of a
“massimario”) (151), case law is still not easily accessible (152).
Moreover, the use of AI tools may favour convergence in legal inter-
pretations and facilitate judicial decision-making in standardised cases,
enhancing the coherence and consistency in the judicial system. Legal
certainty understood as the reasonable foreseeability of the legal conse-

(149) Id., 5.
(150) For further information concerning the timeliness and quality of tax law deci-
sions, see M. Basilavecchia, Funzione impositiva e forme di tutela, Giappichelli, Torino,
2018, 24; A. Giovannini, Giurisdizione ordinaria o mantenimento della giurisdizione tribu-
taria, in Dir. prat. trib., 2016, No. 5, p. 1903, C. Sacchetto, Processo tributario telematico,
cit., 42.
(151) A. Collini, Il massimario delle commissioni tributarie: una struttura da potenziare,
in Fisco (Il), 2003, No. 6, p. 843 commenting D.Lgs. 31-12-1992, n. 545, art. 40.
(152) See D. Borgni, Il regime di pubblicità delle sentenze delle commissioni tributarie, in
Giur. It., 2011, 3, 702, who comments Cass., Sec. un., 3 March 1961, n. 456.

quences of a person’s actions, is a crucial value for the tax law sector. On
the one hand, it is indispensable for properly managing public expenditure
and keeping orderly accounts. On the other hand, the presence of general
and abstract rules whose application can be foreseen is indispensable also
from a sociological and constitutional point of view (153). From a sociological
point of view, the predictability of the fiscal impact of one’s conduct is
essential for the programming and carrying out many aspects of economic
and social life (154). From a constitutional perspective, taxation in demo-
cratic States must depend on the community’s consensus (155). The use of
AI tools in tax justice would help to foster the acceptance of judgments
consistent with the system of precedents (156).
If explainable, AI tools can also be essential for identifying judges’
biases (157). Indeed, humans, like machines, are affected by biases. By
providing explainable models of how judges usually decide, AI systems may
pinpoint such biases and incentivise the correction of misconduct or
irrational tendencies. At the same time, they may enhance the transparency
of previous decision-making and encourage judges to better motivate
decisions that depart from mainstream approaches.
Moreover, these systems may facilitate judicial decision-making in
standardised cases, substantially reduce the courts’ workload, and improve
citizens’ access to justice (158). As we saw, the predictability of justice
would enable parties to calculate the odds of bringing a case (acting also
as a natural filter on litigation), hopefully improving the overall effectiveness
of tax justice (See para. 3.1.3.).
Hence, more significant efforts should be made on open data policies
in the judiciary. Indeed, the availability of judicial data (such as judicial
decisions, acts of the parties) is an essential condition for the development
of AI, enabling it to perform specific tasks previously carried out by
humans in a non-automated manner. The more data are available, the more
AI can refine its models, improving their predictive ability. Therefore, an

(153) F. Farri, Le (in)certezze nel diritto tributario, in Dir. prat. trib., 2021, 2, 720.
(154) Ibidem.
(155) Ibidem.
(156) Advocating for the use of AI in the Italian tax judiciary, see C. Sacchetto, Processo
tributario telematico e giustizia predittiva, cit., 46 who explicitly points out the usefulness of
AI tools to enhance the knowledge and application of judicial precedent and interestingly
elaborates on this topic.
(157) C.R. Sunstein, Algorithms, correcting biases, in Social Research: An International
Quarterly, 2019, 86, 2, 499-511.
(158) For more details on quantitative data on the case law of Italian tax commissions,
see the Department of Justice of the Ministry of Economic and Finance annual report.

open data approach to judicial decisions is a prerequisite for the research


and development of AI tools for the judiciary and improving the accessi-
bility of tax law decisions to citizens (159).
These several benefits are counterbalanced by fundamental questions
on the impact of AI on the evolution of case law and the broader role of
judges in society. We are not referring to the (currently unrealistic) scenario
in which justice-making is wholly delegated to intelligent machines.
Unfortunately, this idea still permeates many analyses of predictive
justice, which attribute to these devices immediate or future capabilities for
the near replication of human intelligence (160). This context, fuelled daily
by AI advances, leads to approaching predictive tools with a dose of
mysticism, on the assumption that what is not entirely possible today will
inevitably be possible tomorrow. Instead of these approaches, it is more
helpful to focus on some actual problems concerning the relationship
between the judge's competencies and the technological apparatus and,
more generally, the role of judges in the tax system.
One line of argument holds that these tools may create a new form of
normativity for judges, which could supplement the law by regulating the
sovereign discretion of the judge and potentially leading, in the long term,
to a standardisation of judicial decisions (161). Deference to previous
case-law would no longer be based on case-by-case reasoning by the court
but on a purely statistical calculation linked to the average adjudication
previously awarded by other courts. Such a shift reflects the machine's
difficulty with analogical reasoning, which consists in applying the ratio
decidendi of previous cases to new ones.
Such standardisation would also affect the evolution of law and the
adaptability of the law to new cases. This effect is easily explained by the
limited ability of machine learning and NLP to adapt to new circumstances
and "innovate" on their basis. Current AI systems only consider patterns
extracted from previous cases, disregarding new laws and – more
noticeably – new interpretations or clarifications of the existing

(159) In this vein, the Italian Commission for the Revision of Tax Justice called for
creating a database of case-law of Italian Tax Commissions. Such creation represents a
necessary step towards using AI in the Italian tax judiciary.
(160) See C. Sacchetto, Introduzione, in C. Sacchetto, F. Montalcini (Edited by), Diritto
tributario telematico, Giappichelli, Torino, 2017, pp. XXV-XXXI, and M. Taruffo, Precedente e giurisprudenza, in Riv. trim. dir. proc. civ., 2007, 61, 3, 709 ff., who speak about the
risk of a decrease in the critical analysis of cases and norms.
(161) F. Pasquale, G. Cashwell, Prediction, persuasion, and the jurisprudence of beha-
viourism, cit., 79.

law. Such situations constantly occur in the tax law domain, as is evident
when one considers the crucial role of CJEU decisions in the VAT field,
where existing rules must constantly be adapted to European case-law, or
the centrality of the Constitutional Court's decisions in national tax law
systems such as the Italian one (162).
At the same time, many judgments within the legal system involve an
element of discretion (163), such as the interpretation of general clauses or
principles (164). While machines may facilitate detection of improper use
of such discretion, they cannot be trained to exercise this function (165).
Discretionary decisions may need to consider community values, the sub-
jective features of parties, and any other surrounding circumstances that
may be relevant. In this sense, some argue that, behind the rhetoric of
making judicial systems more efficient, the pressure to use AI systems is
aimed at capping the judge's discretion in order to give judicial decisions
the seal of mathematical certainty (166).
Just like the beneficial effects, the distorting consequences of the use
of AI on the activity of courts must be taken seriously to encourage a
useful and conscious use of technology.

6. - Conclusions
This article has provided an overview of how AI technologies apply or
potentially apply in the tax law field. The aim was to classify the several
existing applications from the perspective of the different actors involved
in the tax law domain, i.e., taxpayers, tax administration and tax judiciary.

(162) M. Basilavecchia, Funzione impositiva e forme di tutela, Giappichelli, Torino,


2018, 25.
(163) The term “discretion” is used in a non-technical sense referring to the traditional
responsibility of the judge to interpret the rules applicable to the specific case.
(164) See F. Farri, Le (in)certezze nel diritto tributario, cit., 720.
(165) In this respect, see S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 749 on
anti-abuse clauses.
(166) Among the many positions, see A. Roth, Trial by machine, in Georgetown Law
Journal, 2015, 104, 1245. See also T. Sourdin, Judge v Robot?, cit., 1126. The leading
Australian researcher in judicial applications of AI reports on how Australian federal Judge
Melissa Perry sees predictive justice as a threat. According to the Judge, legislators and
administrators aspire to introduce devices that allow for discretionary control and a faster,
more predictable process. If they simplify the law, such amendments may lead to unjust or
arbitrary decisions because of the lack of individualised justice and discretion and the lack
of nuance in the law. See the personal statement in M. Perry, iDecide: Administrative
Decision-Making in the Digital Word, in Australian Law Journal, 2017, 29, 30 ff.
dottrina 161

Based on this classification, the article has addressed some of the potential
issues arising from such uses.
First, the analysis of the existing applications showed that AI applications
vary widely in their purposes and technological development. For example,
applications used for tax compliance and accounting are relatively more
common than those used in the tax judiciary. Furthermore, while prompting
much academic discussion, some applications, such as those in "predictive
justice", are still experimental. By contrast, AI technologies for control
prioritisation and tax compliance are more developed and robust, and these
fields seem to offer the most promising applications.
Regarding the legal analysis, several issues have emerged that should
be carefully considered in the future debate on AI and tax law and,
eventually, in future legislative action. Since AI systems need large
amounts of data, striking a fair balance between the collection of taxpayers'
data and their privacy rights is a significant issue. The primary legal source
in this field is the GDPR. However, the Regulation allows for many
exceptions in the tax field, which Member States can use to derogate from
data protection rules. In this respect, due consideration should be given at
the national level to the extent to which data protection principles are
respected when AI applications are used in the tax system, especially in
light of the proportionality principle.
Several issues emerge from the technical functioning of AI systems.
First, the problem of algorithmic discrimination may affect the principle of
impartiality of both the tax administration and the judiciary. Although
these principles assume different meanings depending on the context,
the use of AI increases the risk of biased decisions. Second, the "black-box
problem" may result in an obscure decision-making process on the part of
tax actors, thus undermining essential principles such as the duty to state
reasons and, more generally, the acceptability of authoritative acts by
taxpayers. In this respect, closer collaboration between tax lawyers and
computer scientists is recommended in the future to reflect on the fair
use of AI. For example, an effort should be made towards explainability,
making the data used and the selection criteria transparent, and making
the logical process leading to the final decision accessible.
Beyond these technical problems, the use of AI technologies could
lead to accountability problems. Tax actors could be led to follow the
machine's suggestions uncritically and apply them to the case at hand
without considering contextual factors. Therefore, another vital

course of action will be to ensure adequate human supervision and precise
accountability rules.
These issues are at the core of the European Commission's proposal
for an AI Regulation (167). The proposal spans a wide range of topics
related to AI and its applications, adopting a risk-based approach. It
provides a legal definition of AI and a methodology for identifying
high-risk AI systems, and details a regulatory framework to ensure that AI
systems are developed in accordance with human rights and European
values. The proposed rules concern the design and management
requirements for high-risk AI systems and include provisions on data
governance (Article 10), transparency (Article 13), and human oversight
(Article 14). The proposal also lays down specific obligations for AI
providers and other actors involved, as well as a complex governance
framework, which includes certification bodies and market surveillance
authorities.
Among the types of AI considered high-risk, the Proposal includes
systems used in law enforcement (168). This category includes, for
example, systems intended to be used by law enforcement authorities for
making individual risk assessments of natural persons in order to assess the
risk of a natural person offending or reoffending or the risk for potential
victims of criminal offences. The focus on law enforcement is explained in
Recital 38, which states that: "Actions by law enforcement authorities
involving certain uses of AI systems are characterised by a significant
degree of power imbalance and may lead to surveillance, arrest or
deprivation of a natural person's liberty as well as other adverse impacts on
fundamental rights guaranteed in the Charter" (169). In the view of the EU
legislator, this particular situation exacerbates the risks associated with the
use of AI, especially if the system is not trained on quality data, does not
meet transparency requirements, or is not otherwise properly monitored: it
may lead to the identification of suspects in a discriminatory manner and it
may hinder important rights, such as the right to an effective remedy, the
right to a fair trial, and the right to defence.
However, Recital 38 also states that “AI systems specifically intended
to be used for administrative proceedings by tax and customs authorities

(167) European Commission, Proposal for a Regulation of the European Parliament and
of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence
Act) and amending certain Union legislative acts, Brussels, 21.4.2021, COM/2021/206 final
[AI Regulation proposal].
(168) Annex III of the AI Regulation proposal.
(169) Recital 38 of the AI Regulation proposal.

should not be considered high-risk AI systems used by law enforcement
authorities for the purposes of prevention, detection, investigation and
prosecution of criminal offences". This clarification suggests that the
provisions relating to high-risk AI systems would not apply to systems used
by tax authorities in tax law enforcement activities. In our view, this choice
may need careful consideration in the process of approving the Regulation,
in light of the dangers we have identified in connection with the use of AI
in tax administration. In any case, these systems could be covered by the
(voluntary and non-binding) codes of conduct provided for in Article 69
for non-high-risk AI systems.
By contrast, the Proposal includes in the category of high-risk AI
systems those "intended to assist a judicial authority in researching and
interpreting facts and the law and in applying the law to a concrete set
of facts". The use of AI systems in this field is increasingly seen as
problematic with respect to fundamental principles such as the impartiality
of the judiciary and the right to a fair trial, but also with respect to the role
of judges in legal systems. However, more specific guidance is needed on
how to implement the management and design requirements provided in
the proposal for high-risk systems. In addition, it should be considered to
what extent these requirements are sufficient to prevent or minimise the
risks to individuals and groups outlined in this contribution in relation to
the tax judiciary (170).

ALESSIA FIDELANGELI
University of Bologna

FEDERICO GALLI
University of Bologna

(170) For a further analysis of the application of the AI Regulation proposal to the
systems used in the judicial practice, see S.F. Schwemer, L. Tomada, T. Pasini, Legal AI
Systems in the EU’s proposed Artificial Intelligence Act, in Proceedings of the Second
International Workshop on AI and Intelligent Assistance for Legal Professionals in the
Digital Workplace (LegalAIIA 2021), held in conjunction with ICAIL 2021, June 21,
2021, Sao Paulo, Brazil.
