
Will AI Make the Financial Industry Smarter? An Introduction


When you hear the words “artificial intelligence”, what comes to mind? A machine that thinks, communicates, and behaves like a human? A self-aware computer system able to learn from its interpretation of its experiences?

AI is already widely used in the financial industry and is about to become much more important. The Second Payment Services Directive (PSD2), which came into force on 13 January 2018, is designed to help create an open banking environment and to promote the growth of digital payments. As the volume of payments grows, the only way to effectively vet for fraud, conduct know-your-customer (KYC) audits, calculate risk scores and carry out other necessary functions will be by using AI.

This has serious implications for the industry and for customers. Properly implemented, AI will make the industry more intelligent, as well as help it to reduce risk, offer more tailored services, and prevent fraud. A poorly designed AI, however, could incorrectly categorise customers as high risk, denying them access to financial services.

In this article, the first in a series about AI in the FinTech and payments industries, we look at what AI is and what it is not — and what that means for us as an industry and as citizens of an increasingly digitised world.

What do we mean by AI?

Let’s circle back to our original question: what do you think of when you hear the phrase “artificial intelligence”? If you’re a sci-fi fan, you probably imagine a thinking machine, like Ava in the movie Ex Machina or Agent Smith in The Matrix. This kind of AI, which is still purely theoretical and exists only in fiction, is called artificial general intelligence (AGI).

A real AGI would have to be able to reason, to learn based on new knowledge and experiences, and to express its thoughts in ways others could understand. Crucially, it would have to be able to do this across the same range of tasks that a human could, rather than being limited to one very specific sphere.

Currently, we are nowhere near developing AGI. And even if we were, it probably wouldn’t look like it does in the movies. To take just one example, there’s no reason to believe that an AI lacking both an instinct for self-preservation and an endocrine system would perceive humans as a threat or behave aggressively.

The AI we have today is known as “weak AI” (or “narrow AI”). Rather than being a human-like intelligence (AGI), weak AI is generally focused on a single task, which it repeats over and over, becoming more efficient and effective as it goes. No weak AI can truly replace a skilled, multi-tasking human worker.

Different types of AI

In the past, AI was programmed, i.e. sophisticated software was created which could analyse a situation and use a computer’s superior speed to solve a particular problem faster than a human could. Chess computers are a notable example.

Nowadays, instead of hand-coding the rules, we let AI learn them for itself. Known as “machine learning”, this approach has led to tremendous progress in recent years, and it’s the reason AI is now the buzzword on everyone’s lips. The machine is trained on example data (inputs) paired with the desired outputs; a machine-learning algorithm lets it find the patterns that enable it to assign the correct output to future, unseen inputs. If, for instance, you wanted to train an AI to recognise the difference between oranges and pears, you would give it an algorithm that can differentiate between two types of object based on features such as colour and shape. You would then let it practise on thousands or hundreds of thousands of examples, analysing the results each time, so that it keeps learning.
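The oranges-and-pears training loop can be sketched as a tiny nearest-centroid classifier. Everything here is an illustrative assumption: the two features (hue and elongation), the example values, and the nearest-centroid rule itself; real systems learn from far richer data.

```python
# Toy supervised-learning sketch: separating oranges from pears using two
# hand-picked features. All feature values are invented for illustration.

# Training examples: (hue 0=green..1=orange, elongation 0=round..1=long), label
training_data = [
    ((0.9, 0.1), "orange"), ((0.85, 0.2), "orange"), ((0.95, 0.15), "orange"),
    ((0.3, 0.8), "pear"),   ((0.25, 0.7), "pear"),   ((0.35, 0.75), "pear"),
]

def centroid(points):
    """Average position of a list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Learning" here is just summarising each class by its average features.
centroids = {
    label: centroid([x for x, y in training_data if y == label])
    for label in {"orange", "pear"}
}

def classify(features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

print(classify((0.88, 0.12)))  # a round, orange-coloured fruit -> "orange"
print(classify((0.28, 0.72)))  # a greenish, elongated fruit   -> "pear"
```

The “practice” the article describes corresponds to feeding many more labelled examples into `training_data`: the more representative the examples, the better the learned centroids separate the two classes.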

Deep learning is a subset of machine learning based on something called a neural network. As their name suggests, neural networks are designed to approximate, if not exactly simulate, the layers of neurons in a human brain. The network consists of layers of nodes, each node standing in for a neuron. A node’s function is to score its input according to how likely that input is to correspond to one of the desired outputs, and each node has an activation threshold: it only passes a signal on when its score is high enough.

If, for example, you were designing a neural network to check pictures for faces, you might have an input neuron that looked for a nose-shaped pattern of pixels roughly at the centre of a larger group of pixels. If it found what it thought was a nose, that neuron would assign a “nose-like” value to that group of pixels. As the programmer, you might set “55% nose-like” as your threshold value.

When your input neuron for noses is at least 55% certain that a given object is a nose, it activates, passing a value to the next layer to say, “We have a nose here”. Based on what the other input neurons (the ones looking for mouths, eyes, ears and so on) say, the system then decides whether or not it’s looking at a face. The next layer of neurons can then start to examine the input for different criteria.

Once it has been decided that the object is a face, the second layer of neurons in the network may compare values with a database of known individuals to see if it can work out whose face it is. This process is repeated through all the layers of neurons until the values are passed to the output layer, which communicates the final result to the outside world (“Yes, it is a face, and I’m 90% certain that it’s Bob’s.”).
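The thresholded “feature neurons” in the face example can be sketched in a few lines. The scores, thresholds, and majority-vote rule below are illustrative assumptions carried over from the article’s example, not how production face detectors actually work.

```python
# Minimal sketch of thresholded feature neurons feeding a "face" decision.
# The 55% nose threshold comes from the example above; the other numbers
# are invented for illustration.

NOSE_THRESHOLD = 0.55  # the "55% nose-like" activation threshold

def feature_neuron(score, threshold):
    """Fire (return True) only when confidence reaches the threshold."""
    return score >= threshold

def face_layer(nose_score, eye_score, mouth_score):
    """Second layer: decide 'face' when enough feature neurons fire."""
    fired = [
        feature_neuron(nose_score, NOSE_THRESHOLD),
        feature_neuron(eye_score, 0.5),
        feature_neuron(mouth_score, 0.5),
    ]
    return sum(fired) >= 2  # simple majority vote of feature detectors

print(face_layer(0.60, 0.70, 0.40))  # nose and eyes fire -> True
print(face_layer(0.50, 0.30, 0.20))  # nothing fires      -> False
```

A real network would learn its weights and thresholds from data rather than having them set by hand, but the basic pattern is the same: each neuron fires or stays silent, and the next layer combines those signals.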

That this is no longer purely the stuff of science fiction can be demonstrated very easily with your smartphone. These devices now pack a remarkable amount of AI. Ask Siri (or another voice assistant of your choice) for pictures of trees, and you’ll be amazed at how many of your own photos containing trees are displayed, alongside images from the Internet.

Training this type of AI often reveals interesting biases in the data. If faces — and thus, noses — are most often found in the top left corners of the images used in the training data, for instance, the AI may build this fact into its recognition model, leading it to miss noses that appear elsewhere in other images.

The same sort of thing can happen in finance. If, for example, a certain category of people — members of a particular social group or inhabitants of a specific town — are less likely to repay a loan, an AI may factor this into its risk model. Not only could this be a legal and social problem, but it may also be completely inaccurate. The AI may be missing a vital piece of information, causing it to treat a correlation as a cause. It’s extremely important that as many such biases as possible are weeded out.
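One simple sanity check for this kind of bias is to compare a model’s decisions across groups before deployment. The sketch below uses invented group names and decisions; a large gap between groups doesn’t prove discrimination, but it flags where the model may have learned a proxy for group membership rather than genuine risk.

```python
# Hedged sketch of a basic fairness check: approval rates per group.
# Groups and outcomes are invented illustrative data.

from collections import defaultdict

decisions = [
    ("town_a", True), ("town_a", True), ("town_a", True), ("town_a", False),
    ("town_b", True), ("town_b", False), ("town_b", False), ("town_b", False),
]

def approval_rates(records):
    """Fraction of approved applications per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
print(rates)  # {'town_a': 0.75, 'town_b': 0.25}

# A large gap is a signal to investigate the model's features, not a verdict.
gap = max(rates.values()) - min(rates.values())
print(f"approval gap: {gap:.2f}")  # approval gap: 0.50
```

Checks like this are only a starting point; serious bias auditing also examines the training data and the individual features the model relies on.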

There are other models for AI. In the neuroevolution model, a number of weak AIs are designed to solve a specific problem. The one whose output comes closest to the desired result (even if it isn’t very close) is used as the basis for the next generation of AIs, all of which are very subtly different from the parent AI and from each other. Again, the one that produces the best result is chosen as the basis for the next generation. The process continues until a workable AI has been developed. It’s Darwinism on fast-forward!
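The generate-score-select cycle just described can be sketched in a few lines. Real neuroevolution mutates network weights and topologies; this toy version evolves a single number toward a target, purely to show the loop itself.

```python
# Toy neuroevolution-style loop: mutate, score, select, repeat.
# The target, mutation size, and population size are arbitrary assumptions.

import random

random.seed(42)
TARGET = 0.75  # the "problem" each candidate tries to solve

def fitness(candidate):
    """Higher is better: negative distance from the target."""
    return -abs(candidate - TARGET)

parent = random.uniform(0.0, 1.0)  # generation zero
for generation in range(50):
    # Each child is a subtly mutated copy of the parent.
    children = [parent + random.gauss(0.0, 0.05) for _ in range(20)]
    # The fittest candidate (parent included) seeds the next generation.
    parent = max(children + [parent], key=fitness)

print(parent)  # should land very close to 0.75
```

Keeping the parent in the selection pool (“elitism”) guarantees the best solution found so far is never lost, which is why the loop steadily improves rather than wandering.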

Generally, however, when today’s companies claim to have an AI-powered product, they’re talking about either machine learning or deep learning based on neural networks.

What can AI do for us?

Firstly, if you’re a worker whose job involves judgement, skill, and a wide range of complex and interrelated tasks, the AIs we have today are not coming for your job. What they can do, however, is automate the time-consuming and repetitive parts of it.

UBS recently automated its post-trade allocation requests. An AI can now perform in less than two minutes tasks that used to take a trader 45 minutes[1]. Similarly, Google has developed a medical AI which is able to accurately scan patient imaging data for cancer, a job that used to take doctors five or six hours. Both developments free up skilled professionals to concentrate on more complex tasks, but they could also allow organisations to employ fewer comprehensively trained workers.

If, on the other hand, your job involves primarily predictable and repetitive tasks, even some that are considered skilled, it might well be time to find out how secure your job is now and how secure it’s likely to be in five years’ time. When combined with advances in robotics, AI is now able to replace many skilled artisans, and even a number of academic jobs are no longer secure.

In Bangladesh, the number of new garment sector jobs has fallen from 300,000 a year in 2008 to just 60,000 today[2], despite clothing accounting for 81% of Bangladesh’s exports. At least some of that shortfall is caused by the introduction of robotic, AI-powered sewing machines that can work faster than humans, without breaks and without mistakes, something no-one thought possible just a few years ago.

There’s almost certainly no way to avoid this development. But if we do our best to understand and plan for what is coming, we can mitigate the problems while maximising the benefits. In the payment industry, the situation is similar: AI promises greater efficiency while also coping with the vast increase in payment volumes expected in the near future. But we need to set this benefit against the risk that poorly designed or weighted algorithms may have unintended consequences for both businesses and consumers.

In the next instalment of this series, we will look at how the payment industry currently uses AI and what that means for businesses, individuals in the industry, and consumers.