
Liability for AI in financial services


OUT-LAW ANALYSIS | 11 May 2020 | 11:05 am | 8 min. read

Financial services providers can turn to transparency, good governance and testing to reduce the risk of things going wrong with artificial intelligence (AI) systems, but should be prepared to take responsibility for issues even if they are not at fault.

Liability can arise in a number of areas, such as intellectual property rights (IPR) ownership and
infringement, data protection claims, discrimination and bias issues, and financial loss to customers.

The challenge of identifying responsibility for fault

In the UK, there is no overarching legal framework in relation to AI, but some issue-specific laws on
automated processes have been implemented, such as the Automated and Electric Vehicles Act 2018,
which attributes liability for damage caused by insured automated vehicles to the insurer, and specific
provisions on automated decision making in the Data Protection Act 2018.
As AI is increasingly being used across a number of sectors and for a variety of purposes, the
development of sector-specific laws regulating AI development and use may be more appropriate than a
general AI legal regime. A sector-focused approach will enable the implementation of laws which not
only address the issues associated with AI but are also relevant and applicable at a practical level
for the sectors seeking to rely on them. This would also allow the relevant regulator,
such as the FCA in the financial services sector, to have oversight and give more specific guidance and
direction on AI adoption and use.

As AI becomes increasingly autonomous and removed from human decision making, it can be difficult to
establish who will be at fault if AI causes damage, particularly given the number of parties typically
involved in the development and operation of AI tools – from developers to data providers,
manufacturers, and users. Further difficulties arise where loss results from machine learning systems
which learn to make decisions independently and without human intervention.

Risks of liability for AI in financial services

It is important for financial services businesses to consider how AI is to be integrated into business
processes and to assess the potential liabilities that may arise. Businesses should look to future-proof
contracts with technology providers and customers as much as possible. Contracts should be thoroughly
considered to ensure appropriate apportionment of liability.


When future-proofing contracts, businesses should consider the various types of loss and claims which may arise through AI adoption. The use of AI can give rise to losses in a number of areas: where it is embedded in drones or machines, for example, it can result in personal injury and product liability claims. However, these scenarios are less likely to be of significance to many financial services businesses, which use AI technology as part of digital solutions.

In a digital financial services context, the following may be of particular concern:

Economic loss

From chatbots and insurance quotes to assessing investment portfolios and analysing market trends,
businesses are increasingly relying on AI technology in back office operations, to provide services to
their customers and to assist with making financial decisions. Algorithmic errors, insufficient or inaccurate
data, and a lack of training of systems and AI users could result in bad decisions leading to financial loss
for both the financial services businesses themselves and their customers.

Data protection claims


An AI solution may rely on data, whether that is personal, non-personal or a combination of both.
Businesses need to ensure that they lawfully collect and use data, and in particular, personal data, in a
way which is compliant with data protection laws – including the General Data Protection Regulation
(GDPR) and the Data Protection Act 2018 – to minimise the risk of data protection claims from customers
and of regulatory fines.
 
Where data collection and use is seen to be unlawful, financial services businesses using AI to collect
data from customers may face enforcement action from the Information Commissioner's Office (ICO).
Infringement could result in fines under the GDPR of up to €20 million or 4% of a firm's worldwide
annual revenue, whichever is higher.
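To illustrate the scale of that cap, a minimal sketch of the 'whichever is higher' calculation (the turnover figure used is hypothetical):

```python
# Illustrative sketch of the GDPR upper-tier fine cap described above:
# the higher of EUR 20 million and 4% of worldwide annual turnover.
# The turnover figure used below is hypothetical.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the greater of EUR 20m and 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A firm with EUR 1bn worldwide turnover faces a cap of EUR 40m, not EUR 20m.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```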
 

Security breaches and data loss

A failure to implement adequate security measures to protect data can lead to corruption, leaks and
losses of significant volumes of customer data, which in turn may lead to customer complaints and a
right to compensation. The ICO in its draft AI auditing framework highlighted two security concerns
which may be heightened in the AI context.

The first is the extent to which AI is dependent on the use of third-party frameworks and code libraries,
and the supply chain security issues this creates. Machine learning technologies often require access to
large third-party code repositories, with the ICO's study finding that one popular machine learning
development framework included "887,000 lines of code" and relied on "137 external dependencies".
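A practical first step in assessing this supply chain exposure is simply to enumerate what an AI system's environment depends on. The sketch below, using only Python's standard library, lists installed packages and their declared dependencies; it is a starting point for review under those assumptions, not a complete supply chain audit:

```python
# Minimal sketch: enumerate installed third-party packages and their declared
# dependencies as a starting point for a supply chain security review.
from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    requires = dist.requires or []  # declared dependencies, if any
    print(f"{dist.metadata['Name']} {dist.version}: {len(requires)} declared dependencies")
```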

The second is around the use of open source code. While open source is a necessary and valid option,
consideration must be given to the liability implications should security vulnerabilities be found in the
open source components used.

IPR ownership and infringement 

Disputes over ownership of IP generated by AI technologies may arise along with claims in relation to
infringement by AI of third-party IPR.

Rights of ownership may not be clear in respect of the standard legal categories of protection for
confidential information, know-how and copyright where one party provides the data and the other the
algorithm, and in relation to patent rights where machine learning is used to achieve a novel or
inventive step.

In respect of IP infringement, a starting position may be an expectation that liability would rest with the
legal entity that controls or directs the AI system. However, the position may not be so clear cut where
multiple businesses are involved in the process and where AI systems develop to make decisions
independently and without human supervision. Where AI technology evolves or improves its processes
beyond the original purpose for which a business created it, it may risk infringing another business's
copyright by using that business's data as one of its inputs. Attributing liability may be less challenging
where a business is able to trace back how decisions were made by AI, but this is complicated by the
existence of 'black box AI', where AI decision making cannot be explained.
 

Bias and discrimination

Decisions which are based on insufficient or low-quality data may produce biased outcomes. These
outcomes can lead to complaints from consumers and requests for decisions to be retaken, particularly
where the decisions put individuals at a disadvantage. This might include, for instance, where customers
face paying higher insurance premiums due to their location or gender. Where this is the case, financial
services businesses should consider how to minimise the risk of discriminatory outputs. Considerations
such as whether the biased outputs could have been prevented through testing of algorithms and quality
checks on data may be relevant when attributing the level of liability.
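Such testing can start simply, for example by comparing decision rates across customer groups. The sketch below checks demographic parity on a set of decisions; the column names, sample data and the 0.8 threshold are illustrative assumptions rather than a regulatory standard:

```python
# Minimal sketch of a demographic parity check on AI decisions.
# Column names, sample data and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def approval_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest; 1.0 means parity."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m"],
    "approved": [1, 0, 1, 1, 1, 1],
})
ratio = approval_rate_ratio(decisions, "gender", "approved")
if ratio < 0.8:  # flag large disparities for human review
    print(f"Potential disparate impact: approval rate ratio = {ratio:.2f}")
```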


Reducing AI risk


While an AI technology provider may be willing to stand behind the performance of its algorithm, it may
not be willing to give its customers comfort that the AI system as a whole, after learning from the
customer's data, will not result in outputs that cause loss. As regulatory accountability will ultimately lie
with the financial services business, risk control measures other than the transfer of risk to the AI
technology provider may need to be put in place.

Examples include:

Explainable AI

The use of explainable AI can assist with tracing back how a decision was made and where the process
went wrong. Businesses should weigh the benefits of using AI systems which produce unexplainable
decisions against the risks which arise from doing so, including ultimately being liable for losses even
where it is unclear how the loss was caused.

Keeping clear, accurate and up to date records – in accordance with legal requirements – of the data used,
how AI systems are trained and tested, and, where possible, the decision making processes themselves,
might enable businesses not only to trace back how a decision was made by assessing decision making
patterns and an AI's behaviour, but also to step in and resolve issues before they escalate and to
determine at which point something went wrong. Traceability facilitates accountability and auditability.
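As a concrete illustration of the kind of record keeping described above, a decision log needs only a few fields to make tracing possible. This is a minimal sketch; the field names and JSON-lines format are assumptions, not a prescribed standard:

```python
# Minimal sketch of an append-only decision log supporting traceability.
# Field names and the JSON-lines storage format are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(path: str, model_version: str, inputs: dict, output: dict) -> None:
    """Append one AI decision, with its inputs and model version, to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs so later tampering with the record is detectable; consider
        # storing only the hash where the inputs contain personal data.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "credit-model-1.3.0",
             {"income": 42000, "postcode_area": "M1"}, {"approved": True})
```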

Data protection

Businesses are required to carry out data protection impact assessments (DPIAs) prior to using AI to
ensure that the processing of personal data is lawful and complies with data protection law. DPIAs may
also assist with pinpointing whether systems are 'high risk' and therefore can help businesses prepare for
the level of data protection risk and liability that they could be taking on. Businesses should also ensure
that all necessary consents are obtained and fair processing notices are issued prior to processing
personal data.

Security

Businesses should ensure that adequate technical and organisational security measures are implemented
to protect customer data, and address the risk of data loss and AI algorithms being hacked and tampered
with. Security should be built into the technology and the overall process at the outset rather than
considered as a secondary measure. A review of existing organisational security processes should be
conducted to test whether measures already in place are sufficient for AI use.
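One narrowly technical example of building security in at the outset is verifying the integrity of a deployed model artefact before it is loaded, so that tampering is detected rather than silently executed. A minimal sketch; the file name and the source of the expected digest are assumptions:

```python
# Minimal sketch: check a model file's SHA-256 digest against a known-good
# value recorded at release time, so a tampered artefact is never loaded.
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Raise if the file at 'path' does not match the expected SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}; refusing to load")

# Hypothetical usage, with the digest distributed out of band:
# verify_artifact("credit_model.bin", "9f2c...")
```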

Human intervention

Ensuring that there is always human input in the AI lifecycle, and that it is at the appropriate level,
is important. Exercising control over the AI system can help mitigate issues which may
give rise to liability, such as poor or inaccurate decisions, and discrimination. A European Commission
expert group has recommended asking:

did you consider the task allocation between the AI system and humans for meaningful interactions?;

are there safeguards to prevent overconfidence in or overreliance on the AI system for work
processes?; and

which detection and response mechanisms did you establish to assess whether something could go
wrong?

Audit

Financial services businesses should ensure that they have in place a robust process for auditing AI
technologies. AI audits may focus on quality checking AI outputs, assessing the risks with using data, and
continuous monitoring of AI technologies and those operating the technology. Developing an AI audit
framework may entail:

re-assessing existing audit processes to check that they remain fit for purpose and adequately address
the specific risks AI poses, as using AI can create new risks or make existing risks more difficult to
identify;
reviewing and amending (where necessary) access and audit provisions within contracts to ensure the
real risks that AI poses are covered; and

engaging with external auditors to allow for independent assessments of the AI technology.

Insurance

Customers will want to know that the businesses that they are dealing with will have sufficient resources
to make good any claims they have against them. Questions arise as to whether insurers have developed
products to cover risks that the use of AI can give rise to.


Just as businesses obtain traditional insurance products, there is potential for insurance products to be
developed to cover the consequences of AI failures, taken out either by the developer of the AI on behalf
of its clients, or directly by those using AI within their businesses. This may
be particularly important where the failure could have a significant impact on customers.

Bespoke AI insurance policies are not currently commercially available, and AI losses are not explicitly
covered by traditional forms of insurance. This means firms using AI should explore and carefully analyse
the extent to which existing types of insurance, such as business interruption, cyber, and professional
indemnity insurance, cover specific AI-related risks.

Complaint-handling

There are varying types of liability which can arise from AI use. Measures can be implemented to mitigate
the risks associated with adopting AI, from keeping records to ensuring systems are subject to regular
and thorough testing, and future-proofing contracts.

Unexplainable 'black box' AI creates various challenges, and so financial services businesses should think
about whether these systems should be used in customer-facing environments, where there are various
duties to customers – including the principle of treating customers fairly and the right to be informed
under data protection legislation – or indeed at all.

Ultimately, however, financial services businesses, as the first point of contact for the customer and the
holder of the customer relationship, must ensure that adequate processes are in place to deal with
customer complaints and claims, irrespective of who is responsible for any loss or damage.
Luke Scanlon and Priya Jhakra are experts in fintech law at Pinsent Masons, the law firm behind Out-Law.

WRITTEN BY

Luke Scanlon
Head of Fintech Propositions

+44 (0) 20 7490 6597
luke.scanlon@pinsentmasons.com
