
AI: Banking’s New Superpower
The AI-powered future of banking is already here in areas like fraud detection, risk assessment, and customer service, says digital finance expert Xavier Lavayssière. Read on to learn more.
7 min read
You've just watched an inspiring video about an amazing teacher building a school, halfway around the world from you. Moved by their efforts, you decide to make a donation to support their cause. You open your banking app, enter the non-profit's details, and hit send. But have you ever wondered how your bank ensures that your generous act is indeed supporting children's education, and not inadvertently funding a scam or an armed group in the region?

Banks and fintech companies play a critical role in safeguarding our financial transactions. They've developed technologies to identify you and the recipient, detect transaction patterns, and assess risks. Artificial intelligence (AI) is revolutionizing these processes, but before I explain what's new, we first need to understand the methods employed so far.

The first step is to evaluate the risks. For example, is there more fraud in credit card payments or in wire transfers? Here, statistics has long been a companion of banking: already in the early Renaissance, Florentine banker and statesman Giovanni Villani used statistical methods to evaluate population, education, and trade trends.

In payments, statistical models help evaluate the probability that a transaction is fraudulent. If a transaction meets certain criteria, it may be flagged for deeper analysis or even blocked; analysts then decide how to proceed. However, identifying statistical risk criteria, applying rule-based methods, and controlling them manually can be slow and costly to implement and run.

Once risks are identified, rule-based matching has long been a staple of banking operations. For instance, when processing loan repayments, banks often struggle to match incoming payments with the correct loans. During my first internship, at an international development bank, I created a program that compared incoming SWIFT payments against the list of expected payments to match them.
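To give a flavor of what such a program looks like, here is a minimal sketch in Python. The field names, tolerances, and payment records are hypothetical simplifications, not the actual system:

```python
from datetime import date

def match_payment(incoming, expected_payments,
                  amount_tolerance=0.03, max_days_late=60):
    """Match one incoming SWIFT-style payment to an expected payment.

    Heuristics: amounts may differ by a few percent (currency
    conversion), payments tend to be late rather than early, and the
    contract number may appear in any free-text field of the message.
    """
    for exp in expected_payments:
        # Amount within a small tolerance of the expected amount.
        if abs(incoming["amount"] - exp["amount"]) > amount_tolerance * exp["amount"]:
            continue
        # Late (up to a limit) is plausible; this sketch treats
        # early payments as non-matching.
        delay = (incoming["value_date"] - exp["due_date"]).days
        if delay < 0 or delay > max_days_late:
            continue
        # The contract number may show up in any text field.
        fields = (incoming["reference"], incoming["details"], incoming["sender_info"])
        if any(exp["contract_no"] in f for f in fields):
            return exp
    return None  # no match: leave for manual review

incoming = {"amount": 10150.0, "value_date": date(2024, 3, 12),
            "reference": "REPAYMENT LOAN-0042", "details": "", "sender_info": ""}
expected = [{"amount": 10000.0, "due_date": date(2024, 3, 1),
             "contract_no": "LOAN-0042"}]
print(match_payment(incoming, expected)["contract_no"])  # prints LOAN-0042
```

Each rule is written out by hand, which is exactly the limitation the next section addresses.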
I studied the most common mistakes and misspellings in the database, then created rules to automate the matching: payments were more likely to be late than early, amounts could differ from the expected payments by a few percentage points due to currency conversions, and contract numbers could appear in any field of the payment.

The beauty of AI lies in its ability to identify patterns without explicit programming. Using machine learning, AI can analyze vast amounts of transaction data, previously labeled as legitimate or fraudulent, to identify subtle indicators of potential fraud. For example, AI might notice that fraudulent transactions often occur in a specific sequence or share particular characteristics. Underneath, the logic isn't so different from statistical analysis and rule-based approaches. Unlike traditional methods, however, AI can adapt to new fraud schemes in real time, process large volumes of data quickly and efficiently, and continually learn and improve its accuracy over time.

Large language models (LLMs) and deep learning go one step further. Loosely mimicking the functioning of a human brain, they provide interfaces that let anyone interact with them in natural language, and they can make use of more unstructured data. The rise of generative AI has sparked the interest of the public at large and triggered a new wave of AI applications in banking.

In fraud monitoring and prevention, AI-powered computer vision can verify the authenticity of identity documents, and even of their owners, streamlining Know Your Customer (KYC) processes. For due diligence on counterparties, AI can analyze data to assess the reliability and risk associated with potential business partners or customers. Loan applications can also be facilitated by AI: the technology can quickly process and extract relevant information from various loan-related documents, significantly reducing processing time.
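To make the pattern-learning idea above concrete: instead of hand-writing rules, a model can estimate which feature values are fraud indicators from labeled examples. Below is a toy, naive-Bayes-style sketch with made-up data, not a production fraud model:

```python
from collections import defaultdict
from math import log

def train(labeled_transactions):
    """Estimate a log-likelihood-ratio weight per (feature, value) pair."""
    counts = {True: defaultdict(int), False: defaultdict(int)}
    totals = {True: 0, False: 0}
    for features, is_fraud in labeled_transactions:
        totals[is_fraud] += 1
        for item in features.items():
            counts[is_fraud][item] += 1
    weights = {}
    for item in set(counts[True]) | set(counts[False]):
        # Laplace smoothing keeps unseen combinations finite.
        p_fraud = (counts[True][item] + 1) / (totals[True] + 2)
        p_legit = (counts[False][item] + 1) / (totals[False] + 2)
        weights[item] = log(p_fraud / p_legit)
    return weights

def fraud_score(features, weights):
    """Higher score = more fraud-like; no rule was written by hand."""
    return sum(weights.get(item, 0.0) for item in features.items())

labeled = [
    ({"channel": "card", "country": "FR"}, False),
    ({"channel": "card", "country": "FR"}, False),
    ({"channel": "wire", "country": "XX"}, True),
    ({"channel": "wire", "country": "XX"}, True),
]
w = train(labeled)
print(fraud_score({"channel": "wire", "country": "XX"}, w) >
      fraud_score({"channel": "card", "country": "FR"}, w))  # True
```

The indicators fall out of the data: retrain on new labeled transactions and the weights shift with the fraud schemes, which is what the adaptability claim amounts to in practice.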
AI-driven risk-scoring models can incorporate non-traditional data sources to assess creditworthiness, potentially expanding access to credit for underserved populations. These models can also help banks tailor loan terms to individual customer profiles and reduce default risk.

Chatbots and virtual assistants, powered by natural language processing, can handle a wide range of customer queries and transactions. These AI-driven interfaces provide 24/7 service, improving customer experience and reducing the workload on human staff. AI-powered systems can also provide personalized financial advice, analyzing a customer's spending habits and financial goals to offer tailored recommendations and savings strategies.

One of the primary concerns with using AI in banking is its potential for errors and unpredictable behavior. Unlike traditional rule-based systems, AI models, particularly large language models, can sometimes produce logically inconsistent or incorrect results. Moreover, when an LLM lacks information, it might "hallucinate," generating plausible-sounding but entirely fictitious information. In a financial context, such errors could have serious consequences.

The complexity of AI systems also makes their decision-making processes opaque, creating a "black box" effect. This lack of transparency can be problematic for banks, which need predictable systems for compliance purposes. The inability to fully explain how an AI arrived at a particular decision could pose challenges in regulatory audits or when justifying decisions to customers.

Another significant risk is the potential for AI to perpetuate, or even amplify, existing biases. Currently, credit scores in the U.S. are regulated to ensure fair access to credit: only some types of data can be used, and individuals can correct their information.
These rules ensure fairness, but also that loans serve their role in individual and collective wealth creation by funding worthy projects.

If an AI model is trained on large amounts of historical data that reflect societal biases, it may reproduce those biases in its decisions. For instance, an AI-powered loan approval system might unfairly penalize individuals based on factors like their neighborhood or background, without considering their actual financial behavior or potential. Moreover, an AI might not recognize that an applicant's unconventional financial history is due to extenuating circumstances rather than poor financial management.

For example, would an AI have given a loan to an 18-year-old Kylian Mbappé, living in Trappes, with an uncertain career in football ahead of him? An AI assessing loans wouldn't look at his potential to win the Ballon d'Or, only at the risk of not being paid back.