AN INTERDISCIPLINARY ANALYSIS
1. - Introduction (1)
The use of Artificial Intelligence (henceforth, AI) in the context of tax
law is currently at the centre of a wide-ranging debate. The increasing
willingness to use AI technologies in the public sector after the Covid-19
pandemic, which was made explicit by European (2) and national docu-
ments (3), has brought the topic into the spotlight and has aroused interest
among tax scholars (4). On the one hand, AI technologies hold the promise
(1) Although the contribution is the result of a joint work, Sections 3 and 4 are
attributed to Alessia Fidelangeli and Sections 2 and 5 to Federico Galli. The introduction,
Paragraph 4.3, and the conclusions are the outcome of a shared reflection.
(2) Such as the von der Leyen Political Guidelines (“A Union that strives for more”) and
the White Paper on Artificial Intelligence - A European approach to excellence and trust,
Brussels, 19.2.2020, COM(2020) 65 final.
(3) Such as the Italian National Recovery and Resilience Plan (“Piano Nazionale di
Ripresa e Resilienza”, PNRR).
(4) See ex multis, B. Kuzniacki, The marriage of artificial intelligence and tax law: Past,
Present, and Future, in Kluwer International Tax Blog, 2019, January 25; B. Alarie, A.
Niblett, A. H. Yoon, Law in the future, in University of Toronto Law Journal, 2016, 66,
4, 423-428; L. de Lima Carvalho, Spiritus Ex Machina: Addressing the Unique BEPS Issues of
Autonomous Artificial Intelligence by Using ‘Personality’ and ‘Residence’, in Intertax, 2019,
of a faster and more efficient tax system. Indeed, intelligent automated
systems process the large amount of information currently available
in national tax systems and use the resulting knowledge to improve
decision-making and the fairness of the tax system. On the other hand, the use
of AI entails many risks for individuals and institutions involved in the tax
system. These risks can arise from the misuse of technology, the opacity
and bias of current AI technologies, and the improper interaction between
humans and automated systems.
Against this framework, this article provides an overview of AI appli-
cations in different tax domains and the legal challenges stemming from
their use.
While existing tax legal scholarship has focused on the impact of AI as
a support tool and as a substitute for decision-making (5), the present
study opts for a different methodology. First, it enhances knowledge of
AI technologies to realistically consider how tax law actors can use AI
depending on their functions and roles, and only then does it assess the
relevant legal issues sector by sector. This approach calls for an
interdisciplinary perspective for various reasons. First, looking at how AI is
applied in the tax law domain requires understanding what AI is and
how it works. We believe that such knowledge is necessary to correctly
interpret the socio-technical phenomena behind AI and their impact on
the legal tax system. Second, the potential of using AI technologies to
automate specific tasks and processes requires careful legal analysis, in
which tax lawyers should take a leading role.
The overview is not exhaustive but rather aims at providing some
relevant examples of uses of AI in the tax system. The examples
considered highlight the variety of technologies and potential applications within
47, 5, 425-43; S. Hoffer, What if Tax Law’s Future is Now?, in The Ohio State Technology
Law Journal, 2020, 16, 1, 68-72; L. Scarcella, Tax compliance and privacy rights in profiling
and automated decision making, in Internet Policy Review, 2019, 8, 4, 1-19; S. Dorigo,
Intelligenza artificiale e norme antiabuso: il ruolo dei sistemi “intelligenti” tra funzione am-
ministrativa e attività giurisdizionale, in Rass. trib., 2019, 4, 728-751; A. Vozza, Intelligenza
artificiale, giustizia predittiva e processo tributario, in Fisco, 2019, 32/33, 3154 ff.; A. Di
Pietro, Leva Tributaria e divisione sociale del lavoro, in U. Ruffolo (Edited by), XXVI Lezioni
di diritto dell’intelligenza artificiale, Giappichelli, Torino, 2020, 450 ff.; L. Quarta, Impiego
di sistemi AI da parte di Amministrazioni finanziarie ed agenzie fiscali. Interesse erariale
versus privacy, trasparenza, proporzionalità e diritto di difesa, in A.F. Uricchio, G. Riccio,
U. Ruffolo, (a cura di), Intelligenza artificiale tra etica e diritti, Cacucci, Bari, 2020; C.
Sacchetto, Processo tributario telematico e giustizia predittiva in ambito fiscale, in Rass. trib.,
2020, 1, 41-54.
(5) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit.; A. Di Pietro, Leva Tribu-
taria e divisione sociale del lavoro, cit.
120 diritto e pratica tributaria internazionale n. 1/2022
2. - Technological background
2.1. - Definition of Artificial Intelligence and some clarifications
Although AI has garnered a great deal of attention from tax legal
experts in the last few years, there has been some conceptual
misunderstanding concerning the various forms of AI. In the following
paragraphs, we provide some general insights on AI, which may be
helpful in better understanding its applications in the tax law domain
and the related legal challenges.
The broadest definition of AI characterises it as the attempt to build
machines that “perform functions that require intelligence when
performed by people” (6). A more elaborate notion has been provided by the
High-Level Expert Group on AI (AI HLEG) set up by the European
Commission:
“Artificial Intelligence (AI) systems are software (and possibly also
hardware) systems designed by humans that, given a complex goal, act
in the physical or digital dimension by perceiving their environment
through data acquisition, interpreting the collected structured or
unstructured data, reasoning on the knowledge, or processing the information,
derived from this data and deciding the best action(s) to take to achieve the
given goal. AI systems can either use symbolic rules or learn a numeric
(6) R. Kurzweil, The age of intelligent machines, MIT Press, Cambridge, 1990, 14. For
other possible definitions and approaches to AI research, see S. J. Russell, P. Norvig,
Artificial intelligence: a modern approach, Pearson Education Limited, Hoboken, 2020, 1 ff.
model, and they can also adapt their behaviour by analysing how the
environment is affected by their previous actions.” (7).
This definition can be accepted with the proviso that most of today’s AI
systems only perform a fraction of the activities listed in the definition:
pattern recognition (e.g., recognising images of plants or animals, human
faces or attitudes), language processing (e.g., understanding spoken
languages, translating from one language into another, fighting spam, or
answering queries), practical suggestions (e.g., recommending purchases,
purveying information, performing logistic planning, or optimising
industrial processes). On the other hand, some systems may combine several of
these capacities, as happens in smart assistants or industrial robots.
Three clarifications are relevant to our discussion. First, AI should be
kept separate from robotics, although AI constitutes its core. Robotics is the
discipline that aims to build “physical agents that perform tasks by
manipulating the physical world” (8). The High-Level Expert Group describes
robotics as follows:
“Robotics can be defined as ‘AI in action in the physical world’ (also
called embodied AI). A robot is a physical machine that has to cope with
the dynamics, the uncertainties and the complexity of the physical world.
Perception, reasoning, action, learning, as well as interaction capabilities
with other systems are usually integrated in the control architecture of the
robotic system. In addition to AI, other disciplines play a role in robot
design and operation, such as mechanical engineering and control theory.
Examples of robots include robotic manipulators, autonomous vehicles
(e.g., cars, drones, flying taxis), humanoid robots, robotic vacuum
cleaners, etc.” (9).
In this article, robotics will not be separately addressed, since
embodied and disembodied AI systems raise similar concerns when addressed
from the perspective of tax law applications.
(7) High-Level Expert Group on AI, Ethics guidelines for trustworthy AI, Brussels, 8
April 2019, 36. The High-Level Expert Group on Artificial Intelligence was an independent
expert group set up by the European Commission in June 2018 entrusted with the task of
providing ethical guidelines and policy and investment recommendations for the develop-
ment of AI. See also from the same group, A definition of AI: Main capabilities and scientific
disciplines, Brussels, 17 December 2018.
(8) High-Level Expert Group on AI, A definition of AI, cit., 1.
(9) Id., 5.
(12) B. Kuzniacki, The marriage of artificial intelligence and tax law, cit.
(13) Other systems were Taxman II (1979), TaxAdvisor (1982), Expertax (1986),
Investor (1987).
(14) See L.T. McCarty, Reflections on TAXMAN: An experiment in Artificial
Intelligence and legal reasoning, in Harv. L. Rev., 1977, 90, 5, 837 ff.
(15) Expert system developers had to face the so-called knowledge representation
bottleneck: to build a successful application, the required information – including tacit
and common-sense knowledge – had to be represented in advance using formalised
languages. This proved to be very difficult and, in many cases, impractical or impossible. In the
field of tax law, it is often not easy to determine the type of inference and the logic behind
human tax experts. Often, tax analysts and decision-makers rely on their intuition, trained
on their experience with relevant examples, or rely on tacit and common-sense knowledge.
The most significant barrier was that expert systems could not read and understand texts
unless they were provided with a map of the terminology and all possible connections
relevant to tax law. This evidence made it very difficult – in many cases, impractical or
impossible – to build an expert system capable of operating as a tax law professional.
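The rule-based approach of early tax expert systems, and the hand-encoding burden that created the bottleneck described in this note, can be sketched as follows. The rule and case facts are invented for illustration and are not drawn from TAXMAN or any real system.

```python
# Sketch of the rule-based approach taken by early tax expert systems such as
# TAXMAN: legal knowledge must be hand-encoded as explicit if-then rules in
# advance, which is exactly the "knowledge representation bottleneck" noted
# above. The rules and case facts are invented for illustration.

rules = [
    # (condition over case facts, conclusion)
    (lambda case: case["income"] > 10_000 and case["resident"], "must file a return"),
    (lambda case: not case["resident"], "taxed only on domestic-source income"),
]

def infer(case):
    """Apply every hand-coded rule whose condition matches the case facts."""
    return [conclusion for condition, conclusion in rules if condition(case)]

print(infer({"income": 25_000, "resident": True}))  # ['must file a return']
```

Every concept the system is expected to reason about must appear explicitly in such rules, which is why tacit or common-sense knowledge proved so hard to capture.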
(16) T. M. Mitchell, Machine Learning, McGraw-Hill Science, 1997, 2.
(17) “Supervised learning” is currently the most used of these methods. In supervised
learning, the machine is given in advance a training dataset, which contains a set of pairs,
each linking the description of a case to the correct response for that case. In one
hypothetical example for the tax domain, a system designed to identify non-compliant taxpayers,
the description of past taxpayers (e.g., age, profession, transactions, contacts) is linked to
whether the Tax Administration has issued an assessment notice. The system uses the
training set to build an algorithmic model which captures the relevant knowledge initially
contained in the training set, namely the correlations between cases and responses. This
model is then used to provide hopefully correct responses to new cases by mimicking the
correlations in the training set. For example, once the system has extracted the model
representing correlations between taxpayers’ profiles and non-compliance, it applies such
3. - AI for taxpayers
Taxpayers can use AI to make faster, cheaper, and more accurate
decisions. In particular, AI can provide innovative ways to process
financial data, provide answers to complex questions, and perform previously
time-consuming or impossible analyses. If correctly developed, these
solutions can improve taxpayers’ efficiency in compliance, favour legal
certainty, and support a collaborative approach with tax authorities (18).
In the following paragraphs, we will provide some examples of AI
applications depending on whether they are used for 1) enhanced
knowledge of tax law; 2) tax accounting; 3) outcome predictions.
model to the data profile of a new taxpayer, not previously present in the training set. Then,
it will determine whether the new taxpayer is presumably compliant or not. Other
applications of machine learning are unsupervised learning and reinforcement learning. In the
former, the machine learns by looking directly into the data without being provided with
training examples in advance. As to the latter, the machine learns based on the feedback
provided by a human on how well it has been doing in reaching its outcome. For an overview
of machine learning for non-experts, see E. Alpaydin, Introduction to machine learning, MIT
Press, 2020.
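The supervised-learning workflow described in note (17) can be sketched in a few lines. The features, figures, and the simple nearest-neighbour model below are invented for illustration; real tax-administration systems use far richer data and models.

```python
# Illustrative sketch of the supervised-learning workflow described in note (17):
# past taxpayer descriptions are paired with a known label (assessment notice
# issued or not), a model is fitted on those pairs, and the model is then applied
# to a taxpayer not present in the training set. Features and data are invented.
import math

# Training set: (features, label). Features: (declared_income, deductions,
# cash_transactions); label: True if an assessment notice was issued.
training_set = [
    ((50_000, 2_000, 1_000), False),
    ((52_000, 2_500, 1_500), False),
    ((48_000, 30_000, 40_000), True),
    ((45_000, 28_000, 35_000), True),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(features):
    """1-nearest-neighbour: return the label of the most similar past case."""
    _, label = min(training_set, key=lambda pair: distance(pair[0], features))
    return label

# A new taxpayer, not in the training set, is scored by analogy with past cases.
new_taxpayer = (47_000, 29_000, 38_000)
print(predict(new_taxpayer))  # True: resembles past non-compliant profiles
```

The model here is trivially simple, but it exhibits the property the note emphasises: the response for a new case is derived entirely from correlations in the training set, not from any encoded legal rule.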
(18) L. Viola, Interpretazione della legge con modelli matematici, Milano, 2017.
(19) See, for example, M. Logozzo, L’ignoranza della legge tributaria, Milano, 2002; Id.,
La scusante dell’illecito tributario per obiettiva incertezza della legge, in Riv. trim. dir. trib.,
2012, 387 ff.
(20) A. Di Pietro, Leva Tributaria e divisione sociale del lavoro, cit., 451.
(21) See G. Peruginelli, S. Faro, Frontiers in Artificial Intelligence and Applications, IOS
Press, 2019, which provides a general overview of the practical implementation of legal
information systems and the tools to manage this kind of information.
(22) PWC, How Tax is leveraging AI — Including machine learning — In 2019,
available at https://www.pwc.com/gx/en/tax/publications/assets/how-tax-leveraging-ai-machine-learning-2019.pdf.
(23) D. Bentley, Taxpayers’ Rights: Theory, Origin and Implementation, 2007, Alphen
aan den Rijn, Kluwer Law International, pp. 269 ff.
(24) For example, in the US, companies that must navigate the increasingly complex
US Tax Code can use AI tools to track tax rates and calculations for multiple tax
jurisdictions. An example of such a tool is Intuit Inc.’s application called Tax Knowledge
Engine (TKE), which helps users streamline tax preparation. The system delivers
answers tailored to each taxpayer by gathering and correlating more than 80,000 pages of US tax
requirements and instructions based on an individual’s unique financial situation.
(25) Deep learning is a subset of machine learning in which computer systems learn
through complex neural networks.
(26) Additional information on IBM Watson is available at https://www.ibm.com/
watson/stories/kpmg (last access 10 December 2021).
structured and unstructured data to help identify projects that are eligible for
credits, using NLP to understand the economic context.
Moreover, chatbot applications, powered by NLP and machine
learning, are said to profoundly affect the accessibility of the law. Significantly,
the emergence of deep learning-based Q&A systems and speech-based
virtual assistants is likely to empower individuals in addressing client-
specific tax questions (27). In this field, Deloitte Belgium has developed a
chatbot that can provide first-hand EU VAT advice, considering the place
of supply rules, exemptions, domestic rates, etc. (28).
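To make the idea concrete, a chatbot of this kind can be caricatured as a keyword-matching question-answering loop. The rules and answers below are invented for illustration; systems such as Deloitte’s SAM rely on NLP and machine learning, not on a hand-written lookup table.

```python
# Toy sketch of a rule-based VAT Q&A chatbot of the kind described above.
# The keyword rules and canned answers below are invented for illustration.

faq_rules = [
    ({"place", "supply"}, "For B2B services, the place of supply is generally where the customer is established."),
    ({"exemption"}, "Certain supplies, e.g. financial services, may be VAT-exempt."),
    ({"rate"}, "Domestic VAT rates vary by Member State and category of goods."),
]

def answer(question):
    """Return the canned answer whose keyword set best matches the question."""
    words = set(question.lower().replace("?", "").split())
    best = max(faq_rules, key=lambda rule: len(rule[0] & words))
    keywords, text = best
    return text if keywords & words else "Please rephrase your question."

print(answer("What is the place of supply for consulting services?"))
```

The gap between this sketch and a deployed system is precisely where deep learning enters: matching the meaning of a question rather than its literal keywords.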
In addition to being used for the knowledge of tax legislation, AI
could be applied to improve taxpayers’ understanding of the practice of
tax administrations (29). For example, in the Italian tax law system, AI
could be used to increase the knowability of interpelli (30) and
risoluzioni (31) that are publicly available. A possible use case could be the
following: the taxpayer would provide the AI system with the relevant features
of a specific case, and the system would search for the tax administration’s
solutions offered in similar situations. Such a system would reduce errors
in compliance, prevent the taxpayer from raising issues that have already been
addressed by the tax administration, and ultimately favour the uniform
application of tax law. However, especially for interpelli, the importance
and the wide-ranging diversity of the factual elements would require a
careful evaluation of AI applications.
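The retrieval step in this use case can be sketched as a text-similarity search over published rulings. The ruling texts and the bag-of-words scoring below are invented for illustration; a production system would rely on proper NLP models.

```python
# Minimal sketch of the retrieval step in the use case above: the taxpayer's
# case description is compared against published rulings, and the most similar
# ones are returned. Ruling texts are invented; real systems would rely on
# NLP models rather than this bag-of-words cosine similarity.
from collections import Counter
import math

rulings = {
    "risoluzione 1": "vat deduction on company cars used for mixed purposes",
    "risoluzione 2": "withholding tax on dividends paid to non-resident companies",
    "risoluzione 3": "vat treatment of cross-border supply of digital services",
}

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(query, top_n=1):
    """Return the top_n rulings ranked by similarity to the query."""
    ranked = sorted(rulings, key=lambda r: cosine_similarity(query, rulings[r]), reverse=True)
    return ranked[:top_n]

print(most_similar("vat on digital services supplied cross-border"))
```

The caveat in the text applies directly here: for interpelli, whose factual elements are rich and varied, literal word overlap is a poor proxy for legal similarity, which is why careful evaluation is needed.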
Finally, it should be highlighted that the usefulness of these
applications could be hindered by the complexity and features of each tax law
system: often-cumbersome legislation and constant changes in regulation,
administrative practice and judicial interpretation can influence the correctness
and timeliness of the answers provided by the software.
(27) Venture Beat, How this chatbot powered by machine learning can help with your
taxes, available at https://venturebeat.com/2017/01/27/how-this-chatbot-powered-by-machine-learning-can-help-with-your-taxes/ (last access 10 December 2021).
(28) Deloitte, VAT chatbot SAM, available at https://www2.deloitte.com/be/en/pages/
tax/solutions/VATbot-SAM-Deloitte-Belgium-Tax.html.
(29) A. Di Pietro, Leva Tributaria e divisione sociale del lavoro, cit., 457.
(30) An “interpello” is a request that a taxpayer makes to the tax administration before
engaging in tax-relevant conduct, to obtain clarification in relation to a concrete and
personal case concerning the interpretation, application or disapplication of tax law rules.
(31) A “risoluzione” is an internal act of the tax administration addressed to tax officials
that provides for the correct interpretation or application of tax law, in order to solve a
practical and concrete problem usually on the basis of a request.
(32) See Kuzniacki, The Artificial Intelligence Tax Treaty Assistant, cit., 9.
(33) B. Van Volkenburgh, Artificial Intelligence and taxes: 8 ways it’s being used, in
Crowd Reason Blog, September 9, 2019.
(34) For example, to compare invoice discrepancies.
(35) For example, for automatically processing tax entries from a spreadsheet.
(36) M. A. Nickerson, AI: New risks and rewards, in Strategic Finance Blog, April 1,
2019.
(37) The inability of computers to understand language at the level of semantics is used
by logician and philosopher John Searle to curb the great expectations of artificial intel-
ligence.
(38) For the Italian system, in this respect, see F. Bosello, La formulazione della norma
tributaria e le categorie giuridiche civilistiche, in Dir. prat. trib., 1981, 1, 1436 ff.; A. Berliri,
Sulle cause della incertezza nell’individuazione e interpretazione della norma tributaria appli-
cabile ad una determinata fattispecie, in Giur. imp., 1976, 117 ff.; F. Paparella, L’autonomia
del diritto tributario ed i rapporti con gli altri settori dell’ordinamento tra ponderazione dei
valori, crisi del diritto e tendenze alla semplificazione dei saperi giuridici, in Riv. dir. trib.,
2019, 6, 587 ff.
(39) For references on “predictive justice”, see footnote n. 118.
(40) Contrary to what is generally thought, this example clearly shows that AI systems
for outcome prediction do not actually “decide”, in the sense of taking the decision
themselves, but merely examine previous case-law and provide a quantitative score on the
possible outcome of the tax dispute.
(41) F. Bex, H. Prakken, De juridische voorspelindustrie: onzinnige hype of nuttige
ontwikkeling?, in Ars Aequi, 2020, 69, 256.
(42) Blue J, available at: https://www.bluej.com/ca. For a detailed description of the
BlueJ Project, see B. Alarie, A. Niblett, A. H. Yoon, Law in the future, cit.
(43) Given the different factors that can emerge in a case, the system can find the best
weight for each variable and how the variables interact with each other,
accomplishing a task that would be impossible for humans.
the accuracy of the legal information available (44). Finally, since these
applications analyse past cases and highlight prevailing trends (similarly
to what happens in common law with binding precedent), their
usability in civil law systems must be carefully considered.
Journal, 2020, 14, 2, 8. For some examples of the ATP indicators, see F. Cachia, Aggressive
Tax Planning: An Analysis from an EU Perspective, in EC Tax Review, 2017, 5, 267 who
identifies several “ATP indicators” and stresses the importance of involving more countries
to build “effective” ATP structures.
(48) On the topic of advisors and lawyers, the European Commission (EC) launched
a public consultation on 10 November 2016 to gather feedback on the way
forward for EU action on advisers and intermediaries who facilitate tax evasion, tax
avoidance and aggressive tax planning. Following the public consultation, the EC
published a tax paper entitled Study on Structures of Aggressive Tax Planning and Indicators,
Final Report, highlighting the model Aggressive Tax Planning (ATP) structures and
identifying ATP indicators that facilitate or allow ATP.
(49) Council Directive (EU) 2016/1164 of 12 July 2016 laying down rules against tax
avoidance practices that directly affect the functioning of the internal market, OJ L 193,
19.7.2016.
(50) Recitals (3) of the Council Directive (EU) 2016/1164 of 12 July 2016 laying down
rules against tax avoidance practices that directly affect the functioning of the internal
market.
(51) P. Pistone, La pianificazione fiscale aggressiva e le categorie concettuali del diritto
tributario globale, cit. In the Author’s view, although tax avoidance and aggressive tax
planning have common elements, they must be distinguished. For example, the objective
of tax avoidance is to achieve the tax saving in the same State in which it occurs. In contrast,
in the case of aggressive tax planning the tax saving arises because of the different tax
treatment the States apply to the transnational case. Furthermore, in aggressive tax planning
the intentional element is not relevant. Hence, the two concepts must be kept distinct. In
the Author’s view, only where they coexist within international tax planning schemes,
or overlap, is it feasible that rules against tax avoidance also counteract aggressive tax
planning.
(52) As far as Italian law is concerned, see, among others, F. Amatucci, L’adeguamento
dell’ordinamento tributario nazionale alle linee guida dell’OCSE e UE in materia di lotta alla
pianificazione fiscale aggressiva, in Riv. trim. dir. trib., 2015, 1, 3; G. Ianni, Countering
international tax evasion and tax avoidance in the BEPS framework: the experience of the
‘Guardia di Finanza’, in Riv. dir. trib. int., 2013, 2, 251.
(53) Multilateral Convention to Implement Tax Treaty Related Measures to Prevent
Base Erosion and Profit Shifting. For more details on MLI instruments, see P. Pistone, The
BEPS Multilateral Instrument and EU Law, in A. Martin Jimenez (ed.), The External Tax
of these initiatives do not take into account the specific problems
connected to the use of AI in tax planning. This gap calls for more attention to
this topic at the national and European level.
At the same time, AI technologies can also be used to improve
interactions between taxpayers and tax administrations. There is an ongoing
trend among tax administrations to stress the importance of
taxpayers’ voluntary compliance (54). Already in 2008, the OECD
suggested the need for an “enhanced relationship” between taxpayer and tax
administration based on mutual trust and cooperative compliance (55).
Intermediaries, such as banks, lawyers, and consultants, would take on a
key role and, instead of offering aggressive tax planning products, could
propose programmes to foster tax compliance among their clients. This
way, taxpayers would avoid the more aggressive tax audit methods, which
would be used only for those who do not comply with tax compliance
programmes. By relying on automated procedures and standardisation
mechanisms, AI applications for taxpayers could be designed in such a
way as to foster compliance with tax law, leaving tax administrations the
possibility to target controls on the cases where AI systems are not used.
In the field of compliance, it is interesting to ask whether AI could also
be employed by taxpayers or their advisors for complex obligations,
such as those deriving from the DAC system, thus reducing the burden of
compliance by offering technology-driven facilities to taxpayers. For
example, in the context of DAC6 (56), AI could be used to identify the cross-
border arrangements (so-called “hallmarks”), which must be reported to
the relevant tax authority by intermediaries, or in some cases by the
taxpayers themselves. Although there is currently no information on these
uses, it can be assumed that such a system could be hindered by the
(57) See D. Weber, J. Steenbergen, The (Absence of) Member State Autonomy in the
Interpretation of DAC6: A Call for EU Guidance, in EC Tax Review, 2021, 5/6, 254, who
argue that the implementation of the Directive created a lot of uncertainty, as DAC6
contains new (often undefined) concepts. Member States have tried to tackle this
uncertainty by introducing official guidance, which may lead to diverging interpretations of the
concepts used in DAC6.
(58) L. Scarcella, Tax compliance and privacy rights in profiling and automated decision
making, cit., 2.
(59) Ibidem. In addition, a recent provision in the Italian Finance Act made it easier to
use this information and allowed the use of information from open sources.
(60) In the future, the use of artificial intelligence tools in relation to sanctions should
be addressed. To give just one example, in the Italian national system the application
of flexible sanctions could more easily lead to the imposition of the statutory maximum
amounts, even where motivating them is difficult. Although we are aware of these issues,
for reasons of conciseness, we consider that this is not the place to deal with them, and we
propose to do so in the future.
(61) OECD, Advanced Analytics for Better Tax Administration, OECD Publishing,
Paris, 2016.
(62) See French Law 28 December 2019, n. 1479, Article 154, which, for experimental
purposes, suggested relying on information from open sources to integrate the tax
assessment.
(63) K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 4.
(64) Ibid.
involve interaction with a set of multilevel rules and general clauses (65).
On the contrary, the above example shows that the field of customs law
could be a fertile sector precisely because of the existence of harmonised
legislation and databases. Moreover, the existence of databases and
information-exchange platforms, such as the VAT Information Exchange
System (VIES) (66) or the One-Stop Shop (OSS) (67) in the VAT field, could
represent a good environment for the effective use of AI (68). At the
national level, the usability of AI seems facilitated in areas with
standardised assessment models and precise guidelines, such as for small and
medium-sized enterprises.
(65) Nonetheless, use limited to certain areas (e.g., transfer pricing comparability)
could be more effective, even though in that case, it might be challenging to obtain the
relevant information.
(66) VIES is an electronic means for validating the VAT identification numbers of
economic operators registered in the European Union for cross-border transactions in
goods or services.
(67) The Union One-Stop Shop (OSS) is the electronic portal businesses can use to
comply with their VAT obligations on e-commerce sales within the EU to consumers since 1
July 2021.
(68) As far as the importance of VIES system in combatting fraud is concerned, see B.
Middelburg, T. Potma, L. van Verseveld, Report of the EFS Seminar ‘50 Years of the EU
Customs Union and EU VAT System: Developments, Challenges and Alternatives’ Held on 14
February 2019, at the Erasmus University Rotterdam, in EC Tax Review, 2019, 4, 216.
(69) See M. Merkx, N. Verbaan, Technology: A Key to Solve VAT Fraud?, in EC Tax
Review, 2019, 28, 6, 300; L. Scarcella, Tax compliance and privacy rights, cit., 9; C. Pérez
Lopez, M.J. Delgado Rodriguez, S. de Lucas Santos, Tax Fraud Detection through Neural
Networks: An Application Using a Sample of Personal Income Taxpayers, in Future Internet,
2019, 11, 4, 86 ff; F.C. Venturini, R.M. Chaim, Predictive Models in the Assessment of Tax
Fraud Evidences, in Á. Rocha, H. Adeli, G. Dzemyda, F. Moreira, A.M. Ramalho Correia
(eds.), Trends and Applications in Information Systems and Technologies, 2021, Springer,
Cham.
(70) Such a use would be in line with EU Commission initiative to establish a platform
for tax good governance, tackling aggressive tax planning and DT, aimed at stimulating
debate among the tax administrations of the Member States with regard to good tax
governance.
2016 amendment of the German Tax Law (71). The reform has introduced
a “fully automated procedure” for risk management, which allows the
German tax authority to detect high-risk cases and prevent tax evasion.
The fully automated procedure is based on the data provided by the
taxpayer, on the information already available to the tax authorities, and
on data transmitted by third parties to the tax authorities. It is intended to
ensure an appropriate risk detection and corresponding verification by
automatically filtering out cases involving significant risk and submitting
them for comprehensive examination by public officials.
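The triage logic of such a fully automated procedure can be illustrated with a minimal sketch. The risk indicators, thresholds, and case fields below are invented for illustration; the actual risk-management rules used by tax authorities are not public.

```python
# Illustrative sketch of an automated risk-management filter of the kind
# described above: declarations are scored against simple risk indicators, and
# only high-risk cases are routed to a human official for comprehensive
# examination. All indicators and thresholds are invented for illustration.

def risk_score(declaration):
    """Combine simple risk indicators into a single score."""
    score = 0
    # Indicator: declared income dropped sharply compared with the prior year.
    if declaration["income"] < 0.5 * declaration["previous_income"]:
        score += 2
    # Indicator: third-party reports (e.g., employers, banks) exceed declared income.
    if declaration["third_party_reported"] > declaration["income"]:
        score += 3
    return score

def route(declaration, threshold=3):
    """Route high-risk cases to a human official; process the rest automatically."""
    return "human examination" if risk_score(declaration) >= threshold else "automated processing"

low_risk = {"income": 40_000, "previous_income": 42_000, "third_party_reported": 39_000}
high_risk = {"income": 15_000, "previous_income": 45_000, "third_party_reported": 60_000}
print(route(low_risk))   # automated processing
print(route(high_risk))  # human examination
```

The design choice the German reform makes explicit is visible in the `route` function: automation handles the bulk of cases, while significant-risk cases are filtered out for human review rather than decided by the machine.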
Moreover, in the UK, HMRC developed the Connect system, a
computerised data analytics system of network analysis that cross-checks
the tax records of companies and individuals against other databases to
establish fraudulent activities (72). The system looks for correlations
between the declared income and the lifestyle data coming from a variety of
sources, such as banks, land registry, credit cards, vehicles, VAT registries,
tax investigations, employer income, online platforms, social networks,
web browsing and email records.
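The cross-checking idea behind systems of this kind reduces, at its simplest, to joining records about the same taxpayer across data sources and flagging implausible gaps between declared income and observed spending. The sources, figures, and the 1.5x plausibility ratio below are invented for illustration and have no connection to Connect’s actual rules.

```python
# Minimal sketch of the cross-checking idea behind systems such as Connect:
# records about the same taxpayer are joined across external data sources and a
# case is flagged when observed spending is implausibly high relative to
# declared income. Sources, figures, and the 1.5x ratio are invented.

declared_income = {"TP-001": 30_000, "TP-002": 80_000}

# Spending observed in external sources, keyed by taxpayer identifier.
external_records = [
    ("TP-001", "credit_cards", 25_000),
    ("TP-001", "vehicle_purchase", 40_000),
    ("TP-002", "credit_cards", 20_000),
]

def flag_discrepancies(ratio=1.5):
    """Flag taxpayers whose total observed spending exceeds ratio * declared income."""
    spending = {}
    for taxpayer, _source, amount in external_records:
        spending[taxpayer] = spending.get(taxpayer, 0) + amount
    return [tp for tp, total in spending.items()
            if total > ratio * declared_income[tp]]

print(flag_discrepancies())  # ['TP-001']: 65,000 spent against 30,000 declared
```

Even this toy version makes the data-protection stakes of the next sections tangible: the flag is produced entirely from third-party data about the taxpayer’s lifestyle.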
Machine learning-based security and fraud detection applications have
been experimented with in customs duties, but few have been deployed
operationally, and their results are generally not shared by
administrations (73). One example is the web-based platform called Theseus, used
by OLAF (74). Thanks to in-house developed statistical methods on
aggregated and disaggregated data, the platform creates alerts related to
illicit activities such as customs fraud or money laundering.
When addressing the use of AI in tackling tax evasion, it is worth
mentioning that sometimes the line between tax evasion and tax avoidance
is subtle (75). Moreover, in the case of AI use for fraud detection, AI may
(71) Law of July 18, 2016 (BGBl I, 1679). For further information, see N. B. Binder,
Artificial Intelligence and taxation: Risk management in fully automated taxation procedures,
in T. Wischmeyer, T. Rademacher (Edited by), Regulating Artificial Intelligence, Springer,
2020, 295 ff.
(72) Croner-i Navigate, available at https://library.croneri.co.uk/acmag_194203.
(73) K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 4.
(74) Available at https://theseus.jrc.ec.europa.eu. The platform employs classical data
mining techniques, which partially overlap with machine learning techniques. Theseus
focuses on analysis at the macro level: OLAF is not interested in single breaches (which
remain the responsibility of national authorities) but rather in serial and systematic breaches.
(75) F. Cachia, Aggressive Tax Planning: An Analysis from an EU Perspective, cit., 258.
Tax evasion is the direct, open violation of tax rules (such as the rules imposing the
obligation to declare the taxable event), expressly provided for and punished with
administrative and/or criminal sanctions (F. Tesauro, Istituzioni di diritto tributario, Torino,
Parte generale, 2011, I, 242). Tax avoidance differs from tax evasion in that the taxpayer - instead of
committing a direct breach of the tax rule - improperly uses one or more legal instruments
to achieve a certain objective, thereby achieving a reduction in the tax burden (R. Cordeiro
Guerra, P. Mastellone, Evasione [dir. trib.], 2017, in Enciclopedia Treccani Online; see
also S. Cipollina, Abuso del diritto o elusione fiscale, in Dig. Comm., Agg. VIII, Milano, 2017,
1 ff.). Many countries make a distinction between acceptable tax avoidance and
unacceptable tax avoidance. Moreover, there are tools such as the above-mentioned ATAD
which fight tax avoidance.
(76) The acronym stands for Analytics for Debt Profiling and Targeting.
(77) UK Government, Building a trusted, modern tax administration system, 2020,
available at https://www.gov.uk/government/publications/tax-administration-strategy/buil-
ding-a-trusted-modern-tax-administration-system.
(78) OECD, Advanced Analytics for Better Tax Administration, cit.
dottrina 137
AI is used either to support the assessment and evaluation of the factual elements or to directly adopt the final assessment notice (S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 743).
(83) See the proposal made by Dorigo in S. Dorigo, Intelligenza artificiale e norme
antiabuso, cit., 745.
(84) Privacy and data protection are here used not merely as synonyms for "tax confidentiality" but also as referring to the set of individual rights and freedoms stemming from the fundamental rights and data protection framework. For a discussion of the distinction, see E. Politou, E. Alepis, C. Patsakis, Profiling tax and financial behaviour with big data under the GDPR, in Computer Law & Security Review, 2019, 35, 3, 306 ff.
(85) J. Kokott, P. Pistone, R. Miller, Public International Law and Tax Law: Taxpayers’
Rights: The International Law Association’s Project on International Tax Law-Phase 1, in
Geo. J. Int’l L., 2021, 52, 2, 381-426.
(86) Satakunnan Markkinapörssi Oy & Satamedia Oy v. Finland [GC], no. 931/13, 27
June 2017.
The European Union protects privacy and data protection both in primary
and secondary law. The Charter of Fundamental Rights provides the
fundamental right to privacy in Article 7 and the right to data protection
in Article 8. In secondary law, these rights are extensively addressed by the
General Data Protection Regulation (henceforth GDPR) (87), which is a
general framework applicable to natural and legal persons that process
personal data. The GDPR provides individuals with a series of ex-ante and ex-post rights to control and manage access to their personal data, while also imposing on data controllers a series of obligations to ensure that personal data are processed with respect for citizens' fundamental rights.
As a general rule, the GDPR applies to the processing of taxpayers' data, as long as these data identify a natural person or render him or her
identifiable. According to the GDPR, the processing of personal data can
be considered lawful if one of the six conditions laid down in Article 6 is
applicable. Letter e) is relevant for the tax law sphere. It states that data
processing is lawful when necessary for the performance of a task that is
carried out in the public interest or in the exercise of official authority
vested in the controller.
However, the protective reach of the GDPR over the processing of taxpayers' data is reduced by the many exemptions the regulation provides for the tax field. These exemptions respond to the fundamental need of Member States to balance privacy with the general interest in tax transparency and in the effective exercise of tax authorities' powers.
In particular, Article 23 explicitly allows Member States to restrict the application of Articles 12 to 22 and Article 34 of the GDPR when such restriction is necessary and proportionate to safeguard important objectives of general public interest. Among such objectives, "budgetary and taxation matters" are included in letter e) (88). The exemption is broad, as it covers rights such as the right to transparent information on the processing (Articles 13 and 14), the right of access (Article 15), the right to erasure (Article 17), the right to object (Article 21), and the right not to be subject
to automated decision-making (Article 22) (89). The exemption also
(87) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119, 4.5.2016 [henceforth, GDPR].
(88) Article 23 GDPR. See also Recitals 31, 71 and 112 GDPR.
(89) Article 22 provides for the data subject's right not to be subject to a decision based solely on automated processing, including profiling. The prohibition applies to all cases where personal data are used to profile taxpayers, and automated systems are used to take
includes Article 34, which lays down the obligation of the data controller to
communicate a personal data breach to the data subject. However, when
restricting these rights and obligations, the Member States must adopt a
legislative measure which respects “the essence of the fundamental rights
and freedoms” (90). This requirement is further specified in Article 23(2),
which states that the measure in question should, at least, contain specific
provisions regarding: the purpose of the processing or categories of pro-
cessing; the categories of personal data; the scope of the restrictions in-
troduced; the safeguards to prevent abuse or unlawful access or transfer;
the specification of the controller or categories of controllers; the storage
periods and the applicable safeguards taking into account the nature,
scope and purposes of the processing or categories of processing; the risks
to the rights and freedoms of data subjects; and the right of data subjects
to be informed about the restriction, unless that may be prejudicial to the
purpose of the restriction.
Another important exemption is included in Article 49(1)(d), which
allows the transfer of personal data to third countries or international
organisations on the ground of “important reasons of public interest”.
In this connection, Recital 112 clarifies that this derogation should include transfers between "tax or customs administrations" and "between financial supervisory authorities". Many of these transfers are not only permitted but in fact encouraged or required by European regulations (91).
Finally, Article 4(9) explicitly excludes from the definition of "recipients of data" the "public authorities which may receive personal data in the framework of a particular inquiry in accordance with Union or Member State law". This specification entails that, if data controllers are under
decisions that produce legal effects or similarly significantly affect the data subject. Howe-
ver, the prohibition of automated decision-making, including profiling, is not absolute and
allows for fundamental exemptions. For our purposes, Recital 71 provides valuable hints. It
states that decision-making, including profiling, should be allowed where expressly authorised by Union or Member State law to which the controller is subject, including for fraud and tax-evasion monitoring and prevention purposes. Among the exemptions directly provided by
Article 22, para 2, the authorisation by Union or Member State law will most likely offer
safe grounds for the activities carried out by tax administrations and taxpayers employing
AI systems.
(90) Art. 23(1) GDPR.
(91) Such as, in the field of VAT, Council Regulation (EU) No 904/2010 of 7 October
2010 on administrative cooperation and combating fraud in the field of value added tax; in
the field of excise duties, Council Regulation (EC) No 2073/2004 of 16 November 2004 on
administrative cooperation and Council Directive 2004/106; in the field of direct taxation,
Council Directive 2011/16/EU of 15 February 2011 on administrative cooperation in the
field of taxation and repealing Directive 77/799/EEC.
of “algorithmic discrimination”, refer to S. Barocas, A.D. Selbst. Big data’s disparate impact,
in Calif. L. Rev., 2016, 104, 671 ff.; F.J. Zuiderveen Borgesius, Strengthening legal protection
against discrimination by algorithms and artificial intelligence, in The International Journal of
Human Rights, 2020, 24, 10, 1572-1593; G. Sartor, F. Lagioia, Le decisioni algoritmiche tra etica e diritto, in U. Ruffolo (Edited by), Intelligenza artificiale - Il diritto, i diritti, l'etica,
Giuffrè, Milano, 2020; P. Hacker, Teaching Fairness to Artificial Intelligence: Existing and
Novel Strategies Against Algorithmic Discrimination under EU Law, in Common Market Law
Review, 2018, 55, 4, 1143.
(97) S. Bastani, T. Giebe, C. Miao, Ethnicity and tax filing behaviour, in Journal of
Urban Economics, 2020, 116, C, 1-16.
(98) See K. Mikuriya, T. Cantens, If algorithms dream of Customs, cit., 13.
(99) Article 41 of the EU Charter of Fundamental Rights of the European Union (Right
to good administration).
(100) B. Rothstein, in D. Levi-Faur (Edited by), The Oxford Handbook of Governance, 2012, 143-
144. The concept is closely related to those of state capacity, quality of government and
government interaction with the private sector and civil society. In 2013, the EU Commission referred to this concept in the Decision establishing the Platform for Tax Good Governance, Aggressive Tax Planning and Double Taxation, aimed at stimulating the debate between the tax
authorities of the Member States with regard to good tax governance. See, more extensively
on this topic, G. Végh, H. Gribnau, Tax Administration Good Governance, cit. and F.
Amatucci, L’autonomia procedimentale tributaria nazionale ed il rispetto del principio europeo
del contraddittorio, in Riv. trim. dir. trib., 2016, 2, 257-276.
(101) This article does not dwell on the binding or discretionary nature of the tax
administration activities. Similarly, it does not take a position on the different articulations of the concept depending on whether it refers to the assessment or the collection. For this
reason, we will limit ourselves to general reflections, with the idea of deepening the topic in
further studies.
ding to which certain premises (e.g., income, exchange, etc.) lead to certain conclusions (e.g., the amount of tax to be paid, the decision for inspections, etc.). Instead, they use a statistical approach that applies rules discovered from previous cases, from which the algorithms derive probable answers (often with a probability resembling certainty), but never certain ones.
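The contrast drawn in this footnote, between deductive rules and statistical inference, can be made concrete with a minimal sketch (all figures - the flat rate, the income band, and the past-case data - are invented for illustration and do not reflect any real tax system):

```python
# Illustrative only: rate, band, and past-case data are invented.
# A deterministic rule maps a premise (income) to a certain conclusion
# (tax due), while a statistical model built from past cases can only
# yield a probability, never a certain answer.

def tax_due(income, rate=0.23):
    """Deterministic rule: the same premise always yields the same conclusion."""
    return income * rate

def audit_probability(income, past_cases, band=10_000):
    """Statistical estimate: share of similar past cases that led to an audit."""
    similar = [audited for inc, audited in past_cases if abs(inc - income) < band]
    return sum(similar) / len(similar) if similar else 0.0

past = [(50_000, True), (52_000, False), (48_000, True), (120_000, False)]
print(tax_due(50_000))                  # the same income always yields the same tax due
print(audit_probability(50_000, past))  # probable (2 of 3 similar cases), never certain
```

The point of the sketch is only that the second function returns a frequency learned from past cases, so its answer changes as the case base changes, whereas the first is fixed by the rule itself.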
(111) On the legitimacy of inductive methods, see, among others, A. Fedele, Rapporti
tra i nuovi metodi di accertamento ed il principio di legalità, in Riv. dir. trib., 1995, 1, 242 ff.;
E. Fazzini, L’accertamento per presunzioni: dai coefficienti agli studi di settore, in Rass. trib.,
1996, 2, 309 ff.; G. Marongiu, Coefficienti presuntivi, parametri e studi di settore, in Dir. prat.
trib., 2002, 73, 5, 707-734; M. Basilavecchia, Verso il giusto equilibrio tra effettività della
ricchezza accertata e strumenti presuntivi di accertamento, in Riv. giur. trib., 2013, 4, 341-343;
A. Kostner, Studi di settore e tutela del contribuente tra diritto interno e principi sovranazionali, in Dir. prat. trib., 2017, 88, 1, 28-50.
(112) As far as the applicability to tax law, see L. Quarta, Impiego di sistemi AI da parte
di Amministrazioni finanziarie, cit., 275.
(113) T.A.R. Lazio, sec. III bis, 13 September 2019, n. 10964; T.A.R. Lazio, sec. III bis,
9 July 2019, n. 9066; T.A.R. Lazio, sec. III bis, 28 May 2019, n. 6688; T.A.R. Lazio, sec. III
bis, 25 March 2019, n. 3981; T.A.R. Lazio, sec. III bis, 11 September 2018, n. 9228. In
particular, the Court found that the complete substitution of human activities by algorithms
breaches Articles 3, 24 and 97 of the Italian Constitution and Article 6 of the European
Convention on Human Rights.
(114) Italian Council of State, sec. VI, 8 April 2019, n. 2270; Italian Council of State,
sec. VI, 13 December 2019, n. 8474; Italian Council of State, sec. VI, 13 December 2019, n.
8473; Italian Council of State, sec. VI, 13 December 2019, n. 8472. For a comment on these decisions, see J. Della Torre, Le decisioni algoritmiche all'esame del Consiglio di Stato, in Riv.
dir. proc., 2021, 2, 710 ff.
(115) Italian Council of State, sec. VI, 13 December 2019, n. 8474.
lation to the "black box" problem. Thus, a set of minimum conditions for the legitimate use of AI in the administrative procedure has been provided: a) the algorithm must be transparent and knowable; b) the algorithm must not be the only basis for the authority's decision; c) the algorithm must be non-discriminatory (116). In respect of the first point, the Italian Council of State has spoken of the right of public officials to understand the "logic process on the basis of which the act itself [was] issued by means of automated procedures" (117).
In establishing the above-mentioned criteria, the Italian Council of
State looked at interdisciplinary studies and at EU law initiatives, such
as the GDPR. As we saw above (See para. 4.3), at the European level, the
primary source is the GDPR, which sets out principles and rules applica-
ble to decisions based on the automated processing of personal data. In
particular, the GDPR has acknowledged the importance of transparency
and explainability when automated individual decision-making is taking
place based on personal data, and such decisions may impact the indivi-
duals. Among the information that the data subject must receive prior to consenting to data processing, Article 13 includes "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject". This provision has been
at the centre of a vast debate in the research community, where this legal
requirement has been related to the more fundamental issue of explaining
AI systems and their outcomes (118).
Given the importance of the duty to state reasons, a challenge emerges
for the use of AI in the tax domain: using AI systems that can provide
explanations of their decisions and enhance the reasoning of the tax administration, thus allowing a transparent decision-making process. Making
(116) Italian Council of State, sec. VI, 13 December 2019, n. 8474; Italian Council of
State, sec. VI, 13 December 2019, n. 8473.
(117) It should also be noted that, in Italy, a task force on Artificial Intelligence has
already produced an extensive White Paper on the subject (Task Force on Artificial Intel-
ligence of the Agenzia per l’Italia Digitale, White Paper on Artificial Intelligence at the
Service of the Citizen).
(118) For a discussion on the actual inclusion of a right of explanation in the GDPR,
see S. Wachter, B. Mittelstadt, L. Floridi, Why a right to explanation of automated decision-
making does not exist in the general data protection regulation, in International Data Privacy
Law, 2017, 7, 2, 76-99; G. Malgieri, G. Comandé, Why a right to legibility of automated
decision-making exists in the general data protection regulation, in International Data Privacy
Law, 2017, 7, 4, 243-265. For a discussion on how such right could be framed from a
practical point of view, see G. Sartor, The impact of the General data protection regulation
(GDPR) on artificial intelligence, Study PE 641.530, European Parliamentary Research
Service, 2020, 54.
4.6. - Accountability
The duty to state reasons is strictly intertwined with the accountability of the tax administration. Tax administration accountability demands that citizens can identify the person in charge of and responsible for an administrative action or decision (120). Only in this way can citizens know from whom to seek redress in the case of wrongdoing.
Regarding AI applications in public administration, it has been observed that the principle of accountability entails that there is always a human supervisor who remains responsible for the machine's determinations (121). Problems may emerge when determining who is
responsible in case of wrong determinations by AI tax systems. This could
happen in the tax law field, for example, when the tax administration errs
in determining taxpayers' obligations (122). Additionally, the difficulty of determining accountability could place an additional burden on judges, who will have to assess the correct exercise of the administrative function when AI technologies are used in administrative procedures, especially for decision-making purposes.
In the field of taxation, the principle of accountability entails that the
taxpayer can request that the automated decision be reviewed by a human, who must always be in control of the procedure. However, to use this right effectively, the taxpayer must first know how the AI model reached the decision. This reveals an interplay between the obligation to state reasons, the right of defence, and accountability, resulting in a "right to human intervention", as the latter cannot effectively be exercised without the former (123).
which highlights the importance of considering the inclusion of AI systems used by the tax
administration in the scope of the new Proposal of Artificial Intelligence Act.
(124) B. Green, The Flaws of Policies Requiring Human Oversight of Government
Algorithms, forthcoming (2022).
(125) For the possibility of using predictive justice systems to remedy certain shortco-
mings of national legal systems, see C. Sacchetto, Processo tributario telematico e giustizia
predittiva, cit. In particular, it is claimed that AI technologies could act as a barrier to the
migration of tax justice towards out-of-trial disputes or para-trial instruments, which ensure
less fairness and quality of solutions and a way to put the judicial administration back at the
heart of tax justice.
decisions (128). Other similar examples are the French Jurinet and JuriCA databases (129).
The organisation of legal information by AI tools in taxation would be highly profitable. Nevertheless, as we already said, NLP techniques are based on a syntactic and lexical analysis of language, not on its semantics, which challenges their effectiveness (See para. 3.1.2.). Moreover, the complexity of a tax law system and the frequent changes in regulation and in administrative and judicial interpretations can influence the correctness and timeliness of the answers provided by the software (See para. 3.1.1.) (130).
(133) Examples of such tools are Prédictice (France), Watson/Ross (IBM), Juris Data Analytics (LexisNexis), Lex Machina.
(134) European Commission for the Efficiency of Justice (CEPEJ) of the Council of
Europe, European Ethical Charter on the use of Artificial Intelligence in judicial systems
and their environment (adopted on 3-4 December 2018), 14.
(135) As regards Italy, mention can be made of the project of the Brescia Court of
Appeal (April 2018-December 2020). This project implemented a case law database to provide predictions of guidelines and timing in particular areas of law. In terms of technology, the database uses expert and rule-based systems and natural language processing. Among the areas of concern are civil justice, labour and social security law, contract and commercial law, and company law. A complete account of existing AI projects in the various EU
Member States is contained in a recent study commissioned by the European Commission.
See European Commission, Study on the use of innovative technologies in the justice field –
Final Report, 2020. Report prepared by M. Vucheva, M. Rocha, R. Renard, D. Stasinopolous.
(136) See, recently, the joint position of the ministries of justice in the Background
Paper of the Conference of Ministers of Justice of 5 October 2021 “Digital Technology and
Artificial Intelligence – New Challenges for Justice in Europe” regarding the use of AI for
the digitisation of justice sectors.
While it emerges that predictive justice is still a long way from the idea
of robot judges, there is a burgeoning academic interest in the application
of AI for the justice system, with many techniques and methodologies being developed and tested (137). For example, the most extensively described research concerns the European Court of Human Rights (ECtHR) (138). Some researchers developed a tool that uses NLP to predict whether the Court will decide that a particular provision of the European Convention on Human Rights (ECHR), in a specific situation, has been violated. The tool relies on information from previous judgments. The research reports 79% accuracy (139). The results indicate that the facts of a case, as presented by the Court, have a leading role in predicting the case's outcome.
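As a purely illustrative sketch - not the system used in the studies cited above, which relied on far richer NLP features and classifiers such as support vector machines - the idea of predicting an outcome from the facts of past cases, and of measuring accuracy on a held-out test set (see footnote 139), can be rendered with an invented toy corpus and a simple bag-of-words scorer:

```python
# Illustrative only: the corpus, labels, and scoring method are invented.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(cases):
    """Count word frequencies per outcome label over the training set."""
    freq = {"violation": Counter(), "no_violation": Counter()}
    for facts, label in cases:
        freq[label].update(tokenize(facts))
    return freq

def predict(freq, facts):
    """Pick the label whose training vocabulary best overlaps the facts."""
    tokens = tokenize(facts)
    scores = {label: sum(c[t] for t in tokens) for label, c in freq.items()}
    return max(scores, key=scores.get)

def accuracy(freq, test_cases):
    """Share of test cases whose predicted label matches the true label."""
    correct = sum(predict(freq, facts) == label for facts, label in test_cases)
    return correct / len(test_cases)

train_set = [
    ("applicant detained without judicial review", "violation"),
    ("prolonged detention no effective remedy", "violation"),
    ("tax assessment upheld after fair hearing", "no_violation"),
    ("proceedings concluded within reasonable time", "no_violation"),
]
test_set = [
    ("detention without remedy", "violation"),
    ("fair hearing within reasonable time", "no_violation"),
]

model = train(train_set)
print(accuracy(model, test_set))  # → 1.0 on this toy split
```

Even this crude sketch shows why such tools yield only probable answers: the prediction depends entirely on how well the facts of a new case resemble the facts of previously decided ones.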
An example of research in the tax law domain is provided by the
Italian research project ADELE funded by the EU. The ongoing project,
in which the two authors are involved, aims to develop an AI tool that can
support legal research and judicial decision-making processes, among
other things, in the tax domain. Its objectives include a functionality that provides judges with the most likely outcome of a case based on previous case-law (140).
(137) See D.M. Katz, M.J. Bommarito, J. Blackman, A general approach for predicting
the behaviour of the Supreme Court of the United States, in PloS ONE, 2017, No. 4; N.
Aletras, D. Tsarapatsanis, D. Preotiuc-Pietro, V. Lampos, Predicting judicial decisions of the
European Court of Human Rights: A natural language processing perspective, in PeerJ Com-
puter Science, 2016, 2, 93; M. Medvedeva, M. Vols, M. Wieling, Using machine learning to
predict decisions of the European Court of Human Rights, in Artificial Intelligence and Law,
2020, 28, 237-266.
(138) N. Aletras, D. Tsarapatsanis, D. Preotiuc-Pietro, V. Lampos, Predicting judicial
decisions of the European Court of Human Rights, cit.
(139) Accuracy evaluates how well the machine learning system has learned the relationships between input and output in a training dataset and, given a new input, correctly identifies the output in the test set.
(140) ADELE Project funded by the European Union’s Justice Programme under
Grant Agreement no. 101007420. For a description of the project, its objective and current
research output, consult the project website available at https://site.unibo.it/adele/en.
(141) For an ethical and legal account of international literature on AI technologies in
the judiciary, see, among others, H. Surden, Machine learning and law, in Wash. L. Rev.,
2014, 89, 87 ff.; H. Surden, Artificial intelligence and law: An overview, in Ga. St. UL Rev.,
2018, 35, 1305 ff.; T. Sourdin, Judge v Robot?: Artificial intelligence and judicial decision-making, in University of New South Wales Law Journal, 2018, 41, 4, 1114-1133; T. Sourdin,
Judges, Technology and Artificial Intelligence: The Artificial Judge, Edward Elgar Publishing,
2021; F. Pasquale, G. Cashwell, Prediction, persuasion, and the jurisprudence of beha-
viourism, in University of Toronto Law Journal, 2018, 68, Supplement 1, 63-81; R.W.
Campbell, Artificial intelligence in the courtroom: The delivery of justice in the age of machine
learning, in Colo. Tech LJ, 2020, 18, 323 ff.; J. Ulenaers, The Impact of Artificial Intelligence
on the Right to a Fair Trial: Towards a Robot Judge?, in Asian Journal of Law and Economics,
2020, 11, 2; M. Zalnieriute, F. Bell, Technology and the judicial role, in Gabrielle Appleby
and Andrew Lynch (Edited by), The Judge, the Judiciary and the Court: Individual, Collegial
and Institutional Judicial Dynamics in Australia, Cambridge University Press, 2021, 116-142.
For Italian literature, see, among others, C. Castelli, D. Piana, Giustizia predittiva, la qualità
della giustizia in due tempi, in Questione Giustizia, 4, 2018; E. Scoditti, Giurisdizione per
principi e certezza del diritto, in Questione Giustizia, 2018, 4, 153 ff.; E. Rulli, Giustizia
predittiva, intelligenza artificiale e modelli probabilistici. Chi ha paura degli algoritmi?, in
Analisi Giuridica dell’Economia, 2018, 17, 2, 533-546.
(142) D. Kehl, P. Guo, S. Kessler, Algorithms in the Criminal Justice System: Assessing
the Use of Risk Assessments in Sentencing, 2017, Responsive Communities Initiative, Berk-
man Klein Center for Internet & Society, Harvard Law School.
(143) J. Ulenaers, The Impact of Artificial Intelligence on the Right to a Fair Trial, cit.,
25. On the use of AI in producing evidence in the judicial proceeding and for its impact on
the principle of equality of arms, see S. Quattrocolo, Equality of Arms and Automatedly
Generated Evidence, Artificial Intelligence, Computational Modelling and Criminal Procee-
dings, Springer, 2020, 73-98.
burden since it would create a greater disparity between, e.g., large multinationals, which have the economic resources to invest in technological aid, and taxpayers who do not.
Moreover, courts must motivate their decisions (144). Motivation is a crucial requirement for making parties and society respect judicial decisions, as it is strictly related to the principles of independence and impartiality of the judiciary, on the one hand, and to the right of defence, on the other. As seen in the previous section, AI systems are, to date, unable to explain or justify their determinations, and, even when very accurate, purely statistical-mathematical correlations are insufficient to meet the standards of a reasoned decision. By contrast, AI systems can theoretically be effective in suggesting possible arguments to judges, especially in the search for previous judgments, even though this would be difficult when applying general clauses and principles (145). Nevertheless, even this second scenario can be prone to abuse. Over time, outcome prediction systems can reverse the conventional relationship between motivation and decision. The risk is that judges no longer reach a decision by applying specific rules to the facts of the case through reasoning, but instead look for the most compelling arguments to justify the most probable outcome determined by the machine.
The use of AI by the judiciary could also affect fundamental princi-
ples, such as the impartiality and independence of the judiciary (146).
Without discussing the specific meaning of the principle of impartiality and how it is interpreted in different legal systems, it is worth mentioning that it can be undermined by AI-driven biased determinations. Such systems can detect a characteristic of one of the parties that is predictive of the outcome of the judgment. For example, the system may detect that people from a particular social background are more likely to be held liable for tax evasion. If adopted uncritically by the judge, such a determination could undermine the principle of impartiality. The judge would be led to assume a higher likelihood of a conviction in every case concerning a taxpayer with a particular social background, race, gender, political affiliation, or specific irrelevant behaviour. This outcome would create negative social repercussions for the tax judiciary, as it
(144) In the Italian case, see Article 111 para. 6 Constitution and, for the tax field,
Article 36 del d.lgs. 546 del 1992.
(145) S. Dorigo, Intelligenza artificiale e norme antiabuso, cit., 749.
(146) J. Ulenaers, The Impact of Artificial Intelligence on the Right to a Fair Trial, cit.
(147) M. Zalnieriute, F. Bell, Technology and the judicial role, in Gabrielle Appleby and
Andrew Lynch (Edited by), The Judge, the Judiciary and the Court: Individual, Collegial and
Institutional Judicial Dynamics in Australia, Cambridge University Press, 2021, 139. On the
problematisation of extraneous factors in the judicial decision-making and its impact on the
principle of impartiality, see S. Danziger, J. Levav, L. Avnaim-Pesso, Extraneous Factors in Judicial Decisions, in Proceedings of the National Academy of Sciences, 2011, 108, 17, 6889-6892.
(148) European Commission for the Efficiency of Justice (CEPEJ) of the Council of
Europe, European Ethical Charter on the use of Artificial Intelligence in judicial systems and
their environment (adopted on 3-4 December 2018).
(149) Id., 5.
(150) For further information concerning the timeliness and quality of tax law deci-
sions, see M. Basilavecchia, Funzione impositiva e forme di tutela, Giappichelli, Torino,
2018, 24; A. Giovannini, Giurisdizione ordinaria o mantenimento della giurisdizione tribu-
taria, in Dir. prat. trib., 2016, No. 5, p. 1903, C. Sacchetto, Processo tributario telematico,
cit., 42.
(151) A. Collini, Il massimario delle commissioni tributarie: una struttura da potenziare,
in Fisco (Il), 2003, No. 6, p. 843 commenting D.Lgs. 31-12-1992, n. 545, art. 40.
(152) See D. Borgni, Il regime di pubblicità delle sentenze delle commissioni tributarie, in
Giur. It., 2011, 3, 702, who comments Cass., Sec. un., 3 March 1961, n. 456.
quences of a person's actions, is a crucial value for the tax law sector. On the one hand, it is indispensable for properly managing public expenditure and keeping orderly accounts. On the other hand, the presence of general and abstract rules whose application can be foreseen is indispensable also from a sociological and constitutional point of view (153). From a sociological point of view, the predictability of the fiscal impact of one's conduct is essential for planning and carrying out many aspects of economic and social life (154). From a constitutional perspective, taxation in democratic States must depend on the community's consensus (155). The use of AI tools in tax justice would help foster the acceptance of judgments consistent with the system of precedents (156).
If explainable, AI tools can also be essential for identifying judges’
biases (157). Indeed, like machines, humans are also affected by biases. By
providing explained models of how judges usually decide, AI systems may therefore pinpoint such biases and incentivise the correction of misconduct or irrational tendencies. At the same time, they may enhance the transparency of previous decision-making and encourage judges to better motivate decisions that depart from mainstream approaches.
Moreover, these systems may facilitate judicial decision-making in
standardised cases, substantially reduce the courts’ workload, and improve
citizens’ access to justice (158). As we saw, the predictability of justice
would encourage calculating the odds of making a case (also as a natural
filter to litigation), hopefully with the overall improved effectiveness of tax
justice (See para. 3.1.3.).
Hence, more significant efforts should be made on open data policies
in the judiciary. Indeed, the availability of judicial data (such as judicial
decisions, acts of the parties) is an essential condition for the development
of AI, enabling it to perform specific tasks previously carried out by
humans in a non-automated manner. The more data available, the more AI can refine its models, improving their predictive ability. Therefore, an
(153) F. Farri, Le (in)certezze nel diritto tributario, in Dir. prat. trib., 2021, 2, 720.
(154) Ibidem.
(155) Ibidem.
(156) Advocating for the use of AI in the Italian tax judiciary, see C. Sacchetto, Processo
tributario telematico e giustizia predittiva, cit., 46 who explicitly points out the usefulness of
AI tools to enhance the knowledge and application of judicial precedent and interestingly
elaborates on this topic.
(157) C.R. Sunstein, Algorithms, correcting biases, in Social Research: An International
Quarterly, 2019, 86, 2, 499-511.
(158) For more details on quantitative data on the case law of Italian tax commissions,
see the Department of Justice of the Ministry of Economic and Finance annual report.
dottrina 159
(159) In this vein, the Italian Commission for the Revision of Tax Justice called for
creating a database of case-law of Italian Tax Commissions. Such creation represents a
necessary step towards using AI in the Italian tax judiciary.
(160) See C. Sacchetto, Introduzione, in C. Sacchetto, F. Montalcini (Edited by), Diritto
tributario telematico, Giappichelli, Torino, 2017, pp. XXV-XXXI and M. Taruffo, Precedente
e giurisprudenza, in Riv. trim. dir. proc. civ., 2007, 61, 3, 709 ff., who speak about the
risk of a decrease in the critical analysis of cases and norms.
(161) F. Pasquale, G. Cashwell, Prediction, persuasion, and the jurisprudence of beha-
viourism, cit., 79.
160 diritto e pratica tributaria internazionale n. 1/2022
law. Such situations constantly occur in the tax law domain, as is
evident from the crucial role of CJEU decisions in the VAT field, where
existing rules must constantly be adapted to European case law, or from
the centrality of the Constitutional Court’s decisions in national tax law
systems such as the Italian one (162).
At the same time, many judgments within the legal system involve an
element of discretion (163), such as the interpretation of general clauses or
principles (164). While machines may facilitate the detection of improper
uses of such discretion, they cannot be trained to exercise this function (165).
Discretionary decisions may need to consider community values, the
subjective features of the parties, and any other surrounding circumstances
that may be relevant. In this sense, some argue that, behind the rhetoric of
making judicial systems more efficient, the pressure to use AI systems is
aimed at capping judges’ discretion in order to give judicial decisions the
chrism of mathematical certainty (166).
Like its beneficial effects, the distorting consequences of the use
of AI on the activity of courts must be taken seriously in order to
encourage a useful and conscious use of the technology.
6. - Conclusions
This article has provided an overview of how AI technologies apply, or
could potentially apply, in the tax law field. The aim was to classify the
various existing applications from the perspective of the different actors
involved in the tax law domain, i.e., taxpayers, the tax administration and
the tax judiciary. Based on this classification, the article has addressed
some of the potential issues arising from such uses.
First, the analysis of existing applications showed that they vary widely
in their purposes and degree of technological development. For example,
applications used for tax compliance and accounting are relatively more
common than those used in the tax judiciary. Furthermore, while they
have prompted much academic discussion, some applications, such as
those in “predictive justice”, are still experimental. By contrast, AI
technologies for control prioritisation and tax compliance are more
developed and robust, and these fields seem to offer the most promising
applications.
Regarding the legal analysis, several issues have emerged that should
be carefully considered in the future debate on AI and tax law and
eventually taken into account in future legislative action. Since AI systems
need large amounts of data, striking a fair balance between the collection
of taxpayers’ data and their privacy rights is a significant issue. The
primary legal source in this field is the GDPR. However, the Regulation
allows for many exceptions in the tax field, which Member States can use
to derogate from data protection rules. In this respect, due consideration
should be given at the national level to the extent to which data protection
principles are respected when AI applications are used in the tax system,
especially in light of the proportionality principle.
Several issues emerge from the technical functioning of AI systems.
First, the problem of algorithmic discrimination may affect the principle of
impartiality of both the tax administration and the judiciary. Although
this principle assumes different meanings depending on the context,
the use of AI increases the risk of biased decisions. Second, the “black-
box problem” may render the decision-making process of tax actors
obscure, thus undermining essential principles such as the duty to state
reasons and, more generally, the acceptability of authoritative acts by
taxpayers. In this respect, closer collaboration between tax lawyers
and computer scientists is recommended in the future to reflect on the fair
use of AI. For example, an effort should be made towards explainability,
making the data used and the selection criteria transparent, and the
logical process leading to the final decision accessible.
Beyond these technical problems, the use of AI technologies
could also lead to accountability problems. Tax actors could be led
to follow the machine’s suggestions uncritically and apply them to the case
at hand without considering contextual factors. Therefore, another vital
(167) European Commission, Proposal for a Regulation of the European Parliament and
of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence
Act) and amending certain Union legislative acts, Brussels, 21.4.2021, COM/2021/206 final
[AI Regulation proposal].
(168) Annex III of the AI Regulation proposal.
(169) Recital 38 of the AI Regulation proposal.
ALESSIA FIDELANGELI
University of Bologna
FEDERICO GALLI
University of Bologna
(170) For a further analysis of the application of the AI Regulation proposal to the
systems used in the judicial practice, see S.F. Schwemer, L. Tomada, T. Pasini, Legal AI
Systems in the EU’s proposed Artificial Intelligence Act, in Proceedings of the Second
International Workshop on AI and Intelligent Assistance for Legal Professionals in the
Digital Workplace (LegalAIIA 2021), held in conjunction with ICAIL 2021, June 21,
2021, Sao Paulo, Brazil.