
Finwin Capital Limited: The Roadmap for Implementation of Artificial Intelligence

In the last IT steering committee meeting, held in September 2022, Mr. Verma presented the
efficacy of the solutions implemented at Finwin Capital Limited for credit underwriting. Improving
credit underwriting through data, technology innovation and increased computational power has
been a recurring theme at industry forums such as the Indian Banks' Association (IBA) and various
Fintech meets over the last few years. Mr. Chandrakant Verma, Chief Information Officer at Finwin
Capital Limited, was entrusted with finding the optimal approach to meet stakeholders'
expectations of growing business volumes without taking on undue additional credit risk. Mr.
Verma was well aware that technology enables lenders to decide on credit applications much faster
than a human reviewer. Some new-age lenders have also shown that it can be more accurate,
allowing them to capture more of the lucrative lending market and build stronger businesses.

During the meeting, the committee members had queries regarding the use of alternate data and
the data points currently disregarded for underwriting purposes. The members asked the
technology team whether AI/ML models should be used to underwrite certain lending products.
Committee members also wanted to understand the return on these technology investments over
the next 2-3 years. Mr. Verma called some of his colleagues in the industry and also spoke to his
team at Finwin Capital Limited to collect their opinions and points of view, but he remains
undecided. Since AI-based projects need significant time and cost, and the probability of failure
or of not achieving a positive ROI is high, Mr. Verma wants to be convinced about the approach
before presenting at the next steering committee meeting in April 2023.

Finwin Capital Limited

Finwin Capital Limited, a Non-Banking Financial Company (NBFC) licensed by the RBI, was
started in 2018 with the goal of providing lending services across various areas. The company's
stated mission has been to provide technology-enabled, innovative and customized financial
solutions for an enhanced customer experience. The company has a suite of financing products
that caters to a well-diversified spectrum of clients. This enables the company to leverage upbeat
sectoral growth trends while staying insulated from slumps in certain segments. The company
started with corporate and real estate lending and in the last year has forayed into retail lending as
well. During FY 2021-22, the company expanded its footprint to over 10 locations from merely
three locations in FY 2020-21 (Exhibit 1). The retail portfolio is largely SME-focused, and the
company has been increasing its Assets under Management (AUM) by more than 10% every month
for the last six months. For the next two years, the plan is to double the AUM each year. The
company has also started disbursing personal loans and aspires to become the lender of choice for
salaried individuals.

The company has CRISIL ratings of AA(-) for long-term borrowings and A1(+) for short-term
borrowings. Finwin Capital offers a range of financing solutions to meet the business requirements
of large corporations, with flexible repayment structures mapped to client cash flows. It also offers
customized financing solutions that enable property and urban infrastructure developers to scale
up their businesses. In the retail lending space, Finwin Capital facilitates clients with personal loans
for various end-uses through seamless and paper-free digital lending. These loans are collateral-
free and have fixed interest rates and tenures. Its MSME financing solutions, such as Loans Against
Property, Supply Chain Finance and Business Loans, are designed to fast-track business growth.
To meet the needs of its clients, the company offers innovative products such as the LAP cum
Supply Chain Finance, which was introduced in October 2021.

AI in Lending

While AI can provide a competitive edge and improve operational efficiency, there are worries
about data privacy and ethical practices. Various banks and NBFCs have embarked on AI-led
transformation journeys, using AI-based predictive modeling for fraud detection and credit
decisioning, automating manual tasks, and deploying robotic assistance/chatbots and intelligent
OCR (Optical Character Recognition). An RBI circular states that digital lenders should adopt
ethical AI which focuses on protecting customer interest and promotes transparency, inclusion,
impartiality, responsibility, reliability, security and privacy. This has led a number of Fintechs and
lending companies to rethink their business models.

The ethics of using AI and ML algorithms in every sector is coming under scrutiny, with three
main areas of concern: accountability, transparency and data bias. These issues are particularly
thorny in financial services, a highly regulated industry with a significant impact on people's lives.
While the legal framework is still in development, organizations today nevertheless have ethical
obligations around data quality, transparent decision-making and accountability. Artificial
intelligence (AI) and machine learning (ML) are not new technologies for the financial services
industry, but their growing adoption has prompted complex questions around data oversight and
accountability. In a highly regulated industry, transparency, responsibility and liability in
decision-making and data processing are paramount, but the functioning of AI and ML algorithms
can be opaque, leaving the question of ultimate responsibility open. An investigation by The
Markup in the USA found that lenders using AI technology in loan applications are more likely to
deny home and short-term loans to people of color. The Markup's investigation revealed that Black
applicants were 80% more likely, Latino applicants 40% more likely and Native American
applicants 70% more likely to be denied a loan compared to similar applicants. Such results are
relevant considering that 45% of the United States' largest mortgage lenders offer online or
app-based loans.

AI readiness at Finwin Capital Limited

Technology is an enabler for business growth and customer facilitation, and Finwin Capital has
made it a constant endeavor to enhance its technology-enabled, innovative and customized
financial solutions, to deliver differentiated customer journeys and enriched stakeholder
experiences. The technology team has been focusing on delivering seamless digital experiences,
agility in solution delivery and the transition to a data-driven enterprise. All applications at Finwin
are highly scalable to ensure business goals can be supported seamlessly at optimum cost. Finwin
has also gone live with an analytics platform that provides a single version of truth across the
enterprise. A standard set of APIs, exposed through its API platform, has enabled smooth
integration with Fintechs and NBFCs for sourcing customers. Being low-code/no-code, its
platforms are agile, easy to use and help it achieve a competitive market edge. Since all of its
applications are hosted on the cloud, Finwin has been able to seamlessly scale infrastructure costs
up or down based on business volumes.

The most important business drivers for AI use cases in the BFSI industry are enhancing customer
experience, improving productivity and increasing revenue.

Operational efficiency: Finwin Capital has automated the back-office processing of education
loans, which required goal-seek calculations, using RPA bots. Automating the 13-step process with
robotics saves on average 10 minutes per application. This has also resulted in error-free and
quicker application processing.
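As a rough illustration of the productivity gain, the saving can be annualized once an application
volume is assumed; the case gives the 10-minute figure but not the volume, so the sketch below
uses a hypothetical monthly count.

```python
# Back-of-the-envelope RPA savings estimate. The 10 minutes saved per
# application comes from the case; the monthly volume is an assumed figure.
MINUTES_SAVED_PER_APPLICATION = 10
ASSUMED_APPLICATIONS_PER_MONTH = 2_000  # hypothetical volume

monthly_hours_saved = MINUTES_SAVED_PER_APPLICATION * ASSUMED_APPLICATIONS_PER_MONTH / 60
annual_hours_saved = monthly_hours_saved * 12

print(f"Monthly hours saved: {monthly_hours_saved:,.0f}")
print(f"Annual hours saved:  {annual_hours_saved:,.0f}")
```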

Intelligent OCR and image processing: Finwin's video KYC solution uses computer vision and
liveness checks to enable efficient servicing and onboarding of customers. In the absence of video
KYC, someone from the company would have to visit customers to complete the KYC (Know
Your Customer) checks before loan disbursement. Finwin also uses intelligent OCR (Optical
Character Recognition) for bank statement and financial statement analysis. For credit
underwriting, a customer's financial statements have to be analyzed, which would take a long time
if done manually. Since these statements are now read using intelligent OCR and rules are run
automatically to flag exceptions, credit decisions are faster and error-free. This significantly
improves operational efficiency and credit underwriting.
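To illustrate the kind of automated exception check that runs once statements are digitized, the
following minimal sketch applies illustrative rules to an OCR-extracted statement summary; the
field names and thresholds are assumptions, not Finwin's actual rule set.

```python
# Illustrative exception rules over OCR-extracted bank-statement data.
# Field names and thresholds are hypothetical, not Finwin's actual rules.
from dataclasses import dataclass

@dataclass
class StatementSummary:
    avg_monthly_balance: float
    bounced_cheques: int
    months_covered: int

def flag_exceptions(s: StatementSummary) -> list[str]:
    """Return human-readable exception flags for a credit reviewer."""
    flags = []
    if s.avg_monthly_balance < 25_000:   # assumed floor, in INR
        flags.append("Low average monthly balance")
    if s.bounced_cheques > 0:
        flags.append(f"{s.bounced_cheques} bounced cheque(s)")
    if s.months_covered < 6:             # assumed minimum history
        flags.append("Insufficient statement history")
    return flags

print(flag_exceptions(StatementSummary(18_000, 1, 4)))
```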

Fraud detection and credit decisioning: Finwin Capital has a state-of-the-art Loan Origination
System with a very efficient business rule engine to underwrite all retail loan products. However,
these rules are formulated by the credit and risk team, which introduces individual biases, and some
of the rules are based on anecdotal evidence rather than data. Individual biases can relate to gender,
ethnicity, geography, industry, age group or other demographic characteristics. To build any
model, the team needs a substantial amount of data, which Finwin does not have as of now.
Finwin's current customer base is around 7,500, which is statistically too small to develop and test
any model. Moreover, the vintage of these customers is mostly less than a year, so empirical
creditworthiness checks are not possible. Finwin Capital is ensuring that all data points collected
during the customer lifecycle are stored in its systems.

The company could buy an AI-based credit decisioning system from the market and run a parallel
credit engine for every customer it onboards. Since it does not have enough data as of now, it
cannot back-test the model. The advantage of having a parallel model would be twofold: the credit
team would gain insights from the output generated by the AI-based system, and the AI model
would improve through the feedback loop over time.
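One way to operationalize such a parallel engine is a "shadow mode" deployment: the existing rule
engine remains the system of record, while the purchased AI model scores the same application
and its output is only logged for later comparison. The sketch below assumes generic interfaces;
an actual vendor API would differ.

```python
# Sketch of running a purchased AI scorer in "shadow mode" beside the
# existing rule engine. All interfaces are assumed; a vendor API would differ.
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow-credit")

def rule_engine_decision(application: dict) -> str:
    # Placeholder for Finwin's existing business-rule engine.
    return "APPROVE" if application.get("credit_score", 0) >= 700 else "DECLINE"

def ai_model_score(application: dict) -> float:
    # Placeholder for the vendor model; returns a probability of default.
    return 0.08  # dummy value for illustration

def decide(application: dict) -> str:
    decision = rule_engine_decision(application)   # system of record
    shadow_pd = ai_model_score(application)        # logged, never acted on
    log.info(json.dumps({"app_id": application["id"],
                         "rule_decision": decision,
                         "shadow_pd": shadow_pd}))
    return decision

decide({"id": "APP-001", "credit_score": 712})
```

The logged pairs of rule decisions and shadow scores become the raw material for both benefits
the case describes: the credit team can review disagreements, and the accumulated outcomes feed
the model's retraining loop.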
Who is ultimately liable when an AI algorithm goes wrong? This question is perplexing regulators,
firms and technology providers alike. Many people expect that in collisions caused by driverless
cars, the driver will not be at fault; instead, the buck will stop with the manufacturer of the vehicle.
This assumption may not hold for a lending organization, so Finwin has to analyze any potential
issues with the quality of its data and take steps to ensure that those risks are mitigated.

The difficulty in interrogating the inner workings of an AI algorithm is important in terms of
transparency for the customer. Where an algorithm is making decisions on whether to grant a loan,
or which assets to buy for a portfolio, customers first want to know whether a machine is making
the decision, and secondly, how it arrived at that choice. There is a lot of thought around
traceability of decisions: being able to trace a decision back to where it came from and whether all
the right steps were taken along the way. That is the evidence the financial institution should have
in place, in terms of what it has to disclose.

Since the model would be implemented without any significant testing on Finwin's own data, the
efficacy of its output would be questionable. Relying on a completely untested system, even one
not yet in production, can be counterproductive. There is a school of thought which advocates
starting work on AI systems only when sufficient data is available.
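A common heuristic for judging whether a portfolio can support model development is events per
variable (EPV): the number of observed defaults divided by the number of candidate predictors,
with a value of at least 10 often cited as a minimum. The sketch below applies this check to the
customer count from the case; the default count and predictor count are assumed figures.

```python
# Events-per-variable (EPV) rule-of-thumb check for model readiness.
# Customer count is from the case; defaults and predictors are assumed.
customers = 7_500
assumed_defaults = 75          # hypothetical: ~1% observed default rate
candidate_predictors = 20      # hypothetical feature count

epv = assumed_defaults / candidate_predictors
print(f"Events per variable: {epv:.1f}")
if epv < 10:
    print("Below the ~10 EPV rule of thumb: too little data to model reliably.")
```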

For Finwin to embark on the journey of AI-based fraud detection and credit underwriting, it has to
address concerns related to the availability of data, consumer privacy and data security, the
integration, operationalization and maintenance of AI infrastructure, and the very distinctive skills
required for success in AI. While the company has built BI dashboards and plans to experiment
with simple algorithms, the AI roadmap is not defined. The dataset is important, but there are a
number of ways in which biases can be built into an AI system. If the directive of the system is
framed incorrectly, this too can drive biased behavior. For example, an AI system that assesses the
creditworthiness of customers, but whose objective is to grow profits, may become predatory and
unethical in its selections. The machine itself neither understands nor cares whether its outcomes
are considered unethical or prejudiced unless these considerations are framed in its programming.
Questions:

Q.1. Is AI a reality or a passing fad for lending organizations? Should Finwin Capital Ltd. wait for
the technology to mature and for regulations to become clearer before embarking on this journey?

Q.2. Should Finwin Capital Ltd. wait for enough data before embarking on an AI journey, or start
with whatever it has and find ways to work around the gaps?

Q.3. Should Finwin Capital Ltd. invest in its technology team to build an AI model from scratch,
or should it buy a system from a technology provider?

Q.4. Should outputs from AI systems for lending organizations be explainable (commonly known
as Explainable AI, or XAI), or should the AI model be allowed to mature with a continuous
feedback loop while its output remains a black box?

Exhibit 1: Key Financials of Finwin Capital Limited

Appendix: AI implementation success and failures

A UK high street bank wanted to see the impact of using machine learning versus traditional
credit-score methods to predict whether a customer would go into credit default. Its platform was
used to ingest the data, automatically transform it, create features and build a large number of
machine learning (ML) models. The result was an ML model that predicted credit default better,
to the point that it caught 83% of the bad debt not caught by credit scores while refusing loans to
the same number of customers.
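To make the evaluation logic behind such a comparison concrete, a minimal sketch follows: hold
the decline rate fixed, then compare how much bad debt the ML model's ranking catches versus a
traditional score cutoff. The data and model here are synthetic stand-ins, not the bank's actual
setup.

```python
# Sketch: compare an ML model against a score cutoff at an equal decline rate.
# Data is synthetic; the point is the evaluation logic, not the numbers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 5))
# Synthetic default flag loosely driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1.5).astype(int)
score = -X[:, 0] + rng.normal(scale=1.0, size=n)  # stand-in "credit score"

X_tr, X_te, y_tr, y_te, _, s_te = train_test_split(X, y, score, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
pd_hat = model.predict_proba(X_te)[:, 1]          # predicted default risk

decline_rate = 0.20                               # hold decline rate fixed
cut_ml = np.quantile(pd_hat, 1 - decline_rate)    # decline riskiest 20% by ML
cut_sc = np.quantile(s_te, decline_rate)          # decline lowest 20% by score

bad_caught_ml = y_te[pd_hat >= cut_ml].sum()
bad_caught_sc = y_te[s_te <= cut_sc].sum()
print(f"Bad accounts declined -- ML: {bad_caught_ml}, score cutoff: {bad_caught_sc}")
```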

JP Morgan implemented a program called COiN (Contract Intelligence platform) that uses
unsupervised machine learning for image recognition, allowing the software to compare and
distinguish between different commercial loan agreements. The firm used to employ lawyers and
loan officers who spent about 360,000 hours each year on mundane tasks, including interpreting
commercial-loan agreements. Using machine learning, it has successfully cut the time spent on
this work down to a matter of seconds.

At the same time, there are examples of companies that have failed miserably because of issues
with data quality or availability, regulatory compliance and ethical practices. There are examples
of promising AI-powered startups coming up short and shutting down because they failed to build
meaningful AI solutions for the problems they hoped to solve. When deploying AI applications at
a bank, complexity increases by an order of magnitude over any other project. Issues like
auditability, financial viability and fairness have to be clearly understood and managed for all AI
projects in lending. When artificial intelligence uses a machine learning algorithm to incorporate
big datasets, it can find empirical relationships between new factors and consumer behavior. Thus
AI, coupled with ML and big data, allows far more types of data to be factored into a credit
calculation. Examples range from social media profiles, to what type of computer customers are
using, to what they wear and where they buy clothes. If there is data out there, there is probably a
way to integrate it into a credit model. But just because there is a statistical relationship does not
mean that it is predictive, or even that it is legally allowable in a credit decision. Incorporating new
data raises a host of ethical questions. Should a bank be able to lend at a lower interest rate to a
Mac user if, in general, Mac users are better credit risks than PC users, even controlling for other
factors like income and age? When a customer is denied credit, the credit team should know the
reason for the denial. In the last few months, the RBI has taken action and is now in the process of
tightening regulations on the collection and use of data points for credit decisions. Lending
companies have to be regulatorily compliant to avert penal action or negative publicity resulting
in monetary or reputational loss.

In the early days of tech development, there was an idea that AI could be inherently fair, since
machines could not hold opinions about people. But time and again, AI algorithms have taken on
the biases inherent in the data they learn from. In 2019, tech entrepreneur David Heinemeier
Hansson went public on Twitter with the results when both he and his wife applied for Apple
Cards. He alleged that he received a credit limit 20 times higher than his wife's from the company's
algorithm, despite the fact that she had a higher credit score and they filed joint tax returns. The
New York State Department of Financial Services subsequently opened an investigation into the
credit card practices of Goldman Sachs, which partnered with Apple in the card venture. Data bias
can result in AI algorithms simply coming to the wrong conclusions, or it can be more detrimental,
as when the AI upholds gender, racial or other discrimination. To ensure that data is usable,
financial institutions need to test the quality of their datasets and ensure they are a good
representative sample rather than a limited group of samples.
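A first-pass representativeness check is to compare the group mix of the modelling dataset against
a reference population with a chi-square goodness-of-fit test; real checks would cover many
attributes and outcome-rate parity across groups as well. The counts below are illustrative, not real
data.

```python
# Chi-square goodness-of-fit check: does the modelling sample's group mix
# match a reference population? Counts are illustrative, not real data.
from scipy.stats import chisquare

sample_counts = [620, 240, 140]        # hypothetical group counts in dataset
population_share = [0.50, 0.30, 0.20]  # hypothetical reference shares

total = sum(sample_counts)
expected = [p * total for p in population_share]

stat, p_value = chisquare(sample_counts, f_exp=expected)
print(f"chi2={stat:.1f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Sample mix differs significantly from the reference population.")
```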
