Banks look at ‘explainable’ AI systems to boost consumer trust

Finance industry turns to emerging technology aimed at making decisions more transparent as it tries to root out biased algorithms

The finance industry is turning to explainable AI to ferret out biased algorithms and other problems with software-based decision-making. (Getty Images/iStock photo)

Banks and other financial firms are investing in “explainable” artificial intelligence that lets auditors and analysts trace how decisions about loans and other services are made by financial technologies, experts say.

The increasing use of software with AI capabilities such as machine learning and data mining has automated banking operations, increasing efficiency and expanding services. But privacy and civil liberties groups contend that has come at a cost: bias in the AI systems’ algorithms leading to discrimination, such as loans or other services denied on the basis of sex or ethnicity.

This perception of algorithmic bias is a big problem for banks, which are investing in technical solutions to address it, Moutusi Sau, an analyst at research and advisory company Gartner Inc., told CQ Roll Call. Underlying the concern is what is known as the black box problem: AI software’s decision-making processes are often opaque to humans, making it difficult or impossible to determine how a decision was made.

To come to grips with this issue, the finance industry is turning to explainable AI, an emerging set of techniques that aims to make decisions more transparent. The goal is to ferret out biased algorithms and other problems with software-based decision-making. The explanation usually takes the form of data, often presented visually, such as a chart showing which inputs drove the software to its conclusion.
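For illustration only, here is a minimal sketch of how such an explanation chart might be produced, using permutation importance, one common model-agnostic technique. The loan-approval model, feature names and data below are synthetic assumptions, not drawn from any bank’s actual system.

# A minimal sketch: train an opaque loan-approval model on synthetic
# data, then surface which inputs drove its decisions. Feature names
# and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years", "loan_amount"]
X = rng.normal(size=(1000, len(features)))
# Synthetic "approved" label, loosely tied to income and debt ratio.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. Features whose shuffling hurts most
# are the ones the model leaned on, which is what an auditor would chart.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:22s} {score:.3f}")

An auditor reviewing such output could flag, for example, a model that leans heavily on an input correlated with a protected characteristic.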

The Defense Department’s Defense Advanced Research Projects Agency launched a multiyear project in 2018 to grapple with the Pentagon’s concerns about opaque AI decision-making in weapons systems, one of the first large projects to explore the technology.

Now it’s helping banks, which had been slow to adopt AI-based technologies, Sau said. According to a Gartner CIO survey of banking and investment services firms, 24.9 percent of banking respondents have deployed AI software.

“Explainable AI essentially helps you trust the algorithm at the end of the day, and it helps you explain to regulators what you’re doing is well-documented in the system, so no one can say, ‘I don’t know what happened,’” she said.

Not everyone is convinced, however, that the technology will completely solve bias in machine learning algorithms.

Bonnie Buchanan, head of the Department of Finance and Accounting and professor of finance at the University of Surrey in the U.K., said that if data is poorly structured, or isn’t fully private or anonymized, there will be problems, especially for black box AI systems.

“You’ve got garbage in and you’re going to have garbage coming out if you don’t understand the process, what’s going on in the black box,” Buchanan told CQ Roll Call.

Market problems

Unaccountable decision-making by black box AI systems, and the high-frequency traders who employ such systems, took some of the blame for the so-called flash crash in 2010, in which major U.S. stock indexes plunged and then partially recovered within an hour. The exact causes of this and other crashes are still debated, Sau noted.

Regulators have attributed at least part of the 2007-09 financial crisis to the spread of what have since been labeled toxic collateralized debt obligations and collateralized loan obligations. While these complex products didn’t involve AI, they lacked the kind of audit trail that an explainable AI system endeavors to create. The CDO and CLO market was largely unregulated at the time, and the nature of the software in use meant the origins of these trades couldn’t be traced, Buchanan said.

Because of this problematic history, banks avoided automated trading systems and much AI-based technology until recently, Sau said. Now they are using AI systems for back-office functions as well as large front-office operations. Sau noted that major banks such as Bank of America and J.P. Morgan are investing in the same technologies used by smaller fintech firms.

OnDeck Capital is one company using AI tools to evaluate loans to small and medium-sized businesses, Sau added.

Banks’ back-office use of explainable AI is mostly driven by vendors, with bank approval. Cloud products such as Amazon Web Services and Microsoft Azure may not appear to be AI-driven, but they incorporate machine learning and other such capabilities. It is these capabilities, with explainability increasingly built in, that are making their way into the finance industry.

Hunting fraud

Fraud detection is another area in which AI tools are extensively used by banks. This has been the case for many years, but the tools are becoming smarter and more focused on areas like anti-money laundering, due diligence and portfolio credit risk optimization. Automation and AI are also increasingly being used in cybersecurity and compliance, Sau said.

While AI and machine learning tools have been used for a while to hunt for fraud or credit card abuse, they could still use some fine-tuning, Buchanan said. One trouble spot is when a customer’s credit card is accidentally declined because the system misidentifies it as overcharged, fraudulent or otherwise suspect.

In fraud detection terms, those are known as false positives, and they can cost the banking and retail industries billions of dollars a year in lost business, Buchanan said. “There’s also a reputational impact as well in that approximately 39 percent of cardholders just stopped using their card altogether,” she said.

On the flip side, accepting and processing sales made with fraudulent credit cards, known as false negatives, also costs banks.

These issues are ultimately about good governance for AI, which Buchanan defines as constantly fine-tuning credit card AI systems to avoid fraud and costly mistakes.

“If you’re clogging up [your system with] a lot of false positives and false negatives, to me that’s a signal that you, the financial services provider, are not doing that continuous fine-tuning,” Buchanan said.
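As a simplified illustration of what that fine-tuning can look like in practice, the sketch below sweeps the decision threshold of a hypothetical fraud-scoring model to balance the costs of false positives against false negatives. The fraud rate, score distributions and dollar costs are all synthetic assumptions, not figures from the article.

# A simplified illustration of one kind of fine-tuning: choosing the
# score threshold of a fraud model to balance false positives (good
# transactions declined) against false negatives (fraud let through).
# All rates, scores and costs below are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
is_fraud = rng.random(n) < 0.02                     # assumed ~2% fraud rate
# Hypothetical model scores: fraudulent transactions tend to score higher.
scores = np.where(is_fraud, rng.beta(5, 2, n), rng.beta(2, 5, n))

COST_FALSE_POSITIVE = 5.0    # assumed lost business per wrongly declined card
COST_FALSE_NEGATIVE = 120.0  # assumed fraud loss per missed case

best = None
for threshold in np.linspace(0.1, 0.9, 81):
    flagged = scores >= threshold
    fp = np.sum(flagged & ~is_fraud)   # legitimate cards declined
    fn = np.sum(~flagged & is_fraud)   # fraud that slips through
    cost = fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE
    if best is None or cost < best[1]:
        best = (threshold, cost, fp, fn)

print(f"threshold={best[0]:.2f} cost=${best[1]:,.0f} "
      f"false positives={best[2]} false negatives={best[3]}")

Raising the threshold declines fewer legitimate cards but lets more fraud through; continuous fine-tuning of this kind is one concrete form the governance Buchanan describes can take.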
