As financial services firms increasingly turn to artificial intelligence to create efficiencies and gain an edge over the competition, adopting cutting-edge technology brings a number of risks and fears.
A report from the Financial Stability Board at the end of last year raised concerns regarding the use of artificial intelligence in banking, specifically of the technology triggering financial stability risks.
It also raised concerns that: "The lack of interpretability or auditability of AI and machine learning methods could become a macro-level risk. Similarly, a widespread use of opaque models may result in unintended consequences."
The international body also warned that replacing people with AI "has the potential to amplify financial shocks".
But the report did also highlight the potential upsides: "The more efficient processing of information, for example in credit decisions, financial markets, insurance contracts and customer interactions, may contribute to a more efficient financial system.
"The applications of AI and machine learning by regulators and supervisors can help improve regulatory compliance and increase supervisory effectiveness."
Computerworld UK sat down with Ajwad Hashim, vice president for innovation and emerging tech at Barclays, last week at the Deep Learning in Finance Summit to speak about his specific fears when it comes to the adoption of artificial intelligence within the 300-year-old bank.
Barclays is currently investigating the use of AI across the bank, from machine learning and predictive analytics, to natural language processing (NLP) and chatbots, according to Hashim.
These techniques are being applied across the bank, from the front office (customer retention metrics and product recommendations, for example) to back-office functions, such as using NLP to extract information from written application forms.
All of this comes with a serious upside in terms of creating operational efficiencies for the bank, but it also comes with risks to the business.
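To make the back-office example concrete, here is a toy sketch of pulling fields out of application-form text. It uses simple pattern matching as a stand-in for the trained NLP models a bank would actually deploy; the form layout and field names are invented for illustration.

```python
import re

# Toy application-form text; in practice this would come from OCR'd
# or free-text submissions (invented data, not a real bank's format).
form_text = """
Applicant name: Jane Smith
Date of birth: 12/03/1985
Requested amount: GBP 25,000
"""

# Simple regex patterns stand in for a full NLP pipeline here;
# production systems would use trained named-entity models instead.
patterns = {
    "name": r"Applicant name:\s*(.+)",
    "dob": r"Date of birth:\s*([\d/]+)",
    "amount": r"Requested amount:\s*GBP\s*([\d,]+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, form_text)
    if match:
        extracted[field] = match.group(1).strip()

print(extracted)
```

Even this crude version shows the shape of the task: turning unstructured text into structured records the bank's downstream systems can consume.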
Hashim identified four key risks to the bank when it comes to the use of AI.
The first is data quality and security. "I think the biggest risk I see is with the data quality," Hashim explained. "Most banks will have a lot of data stored in siloed systems so ensuring the data we are using is clean data, ensuring we are storing and using data in the right way to protect against data leakage and making sure the models we are creating are on the foundation of good data.
"For AI models, and we're looking here at predictive analytics but even with NLP, the strength is with the data you are using to train those algorithms," Hashim added.
"In financial services a lot of the data we hold is confidential, it is either personal information or confidential to a business. So I think there are quite a few risks around how we manipulate that data, how we use that data. Firstly I would say around ensuring that data is in a secure environment and that there is no leakage. Particularly where we are starting to work with external parties like fintechs."
It's worth noting here that establishing a secure API standard to share data with fintechs forms a core part of the open banking regulations which are coming into play in the UK right now.
The key to managing this risk for Hashim is transparency. "The way in which we use that data and the way customers have trusted us with that data means we have to be careful and being clear with customers regarding what we are doing with their data is quite key," he said.
"Also there is an element of using obfuscated data, that's quite a quick way for us to experiment and test and that's how most banks tend to test concepts using masked data and creating those models in safe environments to validate the outcomes."
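One common way to produce the masked data Hashim describes is to pseudonymise direct identifiers before experimentation. The sketch below is a minimal illustration of that idea, not Barclays' actual process; the records, field choices and salt are all invented, and in production the salt would be a managed secret rather than a constant in code.

```python
import hashlib

# Hypothetical customer records; names and fields are illustrative only.
customers = [
    {"customer_id": "C1001", "name": "Jane Smith", "balance": 2500.0},
    {"customer_id": "C1002", "name": "John Doe", "balance": 910.5},
]

def mask_record(record, salt="demo-salt"):
    """Replace direct identifiers with salted hashes so models can be
    tested on realistically shaped data without exposing real customers.
    A salted SHA-256 pseudonym hides identity while staying deterministic,
    so records can still be joined across datasets."""
    masked = dict(record)
    for field in ("customer_id", "name"):
        token = hashlib.sha256((salt + str(record[field])).encode()).hexdigest()[:12]
        masked[field] = token
    return masked

masked_customers = [mask_record(c) for c in customers]
print(masked_customers)
```

Numeric fields such as the balance pass through untouched, so models trained in the safe environment behave as they would on live data.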
Second, there is the issue of explaining to regulators how these AI systems work. Hashim said: "There is a bigger piece on understanding how these models are working and why they give the decisions they give.
"Particularly with deep learning and neural networks a lot of these models are trained with historic data and it's very difficult to understand why they are giving the result they give and trying to explain that to the regulator is quite a tough challenge that most banks face.
"Even though these models might perform better than people and traditional linear regression models, the difficulty around understanding why they work better is always going to be challenging."
On this note, Jesse McWaters, financial innovation lead at the World Economic Forum, said during the Innovate Finance Global Summit last week: "It's really clear in this space uncertainty is a concern for financial institutions which has the potential for a chilling effect."
McWaters said he thinks the issue of interpretability of models is somewhat overblown, but that we "have to ask questions when it's important that models aren't easily understandable."
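One practical way teams probe why an opaque model gives the decisions it gives is permutation importance: shuffle a single input across applicants and count how often the decision flips. Below is a minimal sketch of that idea, with an invented hand-written scorer standing in for a real trained network; the features and numbers are illustrative assumptions.

```python
import random

# Toy "opaque" model: a hand-written scorer standing in for a trained
# neural network (purely illustrative, not a real credit model).
def opaque_model(income, debt, age):
    return 1 if (0.5 * income - 0.8 * debt + 0.01 * age) > 20 else 0

# Small synthetic applicant pool (invented numbers).
random.seed(0)
applicants = [(random.uniform(20, 100), random.uniform(0, 60), random.uniform(18, 70))
              for _ in range(200)]

def permutation_importance(feature_index):
    """How often does the decision flip when one feature is shuffled
    across applicants? A bigger flip rate means a more influential feature."""
    baseline = [opaque_model(*a) for a in applicants]
    shuffled_values = [a[feature_index] for a in applicants]
    random.shuffle(shuffled_values)
    flips = 0
    for applicant, base, value in zip(applicants, baseline, shuffled_values):
        perturbed = list(applicant)
        perturbed[feature_index] = value
        if opaque_model(*perturbed) != base:
            flips += 1
    return flips / len(applicants)

for name, idx in [("income", 0), ("debt", 1), ("age", 2)]:
    print(name, round(permutation_importance(idx), 2))
```

Techniques like this do not fully explain a deep model, but they give banks something concrete to show a regulator about which inputs actually drive decisions.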
Then there is the issue of inherent bias. Hashim asked: "There are certain problems that might occur around using biased information and how do we control that the information going into those models isn't skewed towards a particular outcome?
"Look at loan applications for example. We have to be careful when using credit scoring that we aren't segmenting the client base in such a way that it's disadvantageous to certain clients because of certain data points."
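A basic version of the skew check Hashim describes is to compare approval rates across segments of the client base. The sketch below computes a demographic-parity gap over a handful of invented decisions; the groups and numbers are illustrative assumptions, not real lending data.

```python
# Hypothetical loan decisions tagged with a sensitive attribute
# (invented data; real checks would run over full application histories).
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Demographic-parity gap: a large gap is a prompt to investigate which
# data points are driving the skew, not proof of discrimination by itself.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"approval rates: A={approval_rate('A'):.2f} B={approval_rate('B'):.2f} gap={gap:.2f}")
```

Here group A is approved 75 percent of the time against 25 percent for group B, the kind of gap that would trigger a closer look at which data points are disadvantaging certain clients.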
Finally, there is a wider concern of public outcry. The recent Cambridge Analytica and Facebook scandal only highlights the risk that bad data practices can bring to a trusted company.
"As a bank there is a risk reputationally that if we don't go into these projects with our eyes wide open and with complete transparency then there is a reputation risk there that we could be seen to be using the data in the wrong way and the sort of public outcry regarding the models we have created," he said.