Artificial Intelligence Applications In Financial Services

Artificial Intelligence (AI) is a powerful tool that is already widely deployed in financial services. It has great potential for positive impact if companies deploy it with sufficient diligence, prudence, and care. This paper is a collaborative effort between Bryan Cave Leighton Paisner LLP (BCLP), Hermes, Marsh, and Oliver Wyman on the pros and cons of AI applications in three areas of financial services: asset management, banking, and insurance. It aims to facilitate board-level discussion on AI. In each section, we suggest questions that board directors can discuss with their management team.

We highlight a number of specific applications: risk management, alpha generation, and stewardship in asset management; and chatbots and virtual assistants, underwriting, relationship-manager augmentation, fraud detection, and algorithmic trading in banking. In insurance, we look at core support practices and customer-facing activities. We also address the use of AI in hiring.

There are many benefits of using AI in financial services. It can enhance efficiency and productivity through automation; reduce human biases and errors caused by psychological or emotional factors; and improve the quality and conciseness of management information by spotting anomalies or longer-term trends that current reporting methods cannot easily detect. These applications are particularly helpful when new regulations, such as the European Union Markets in Financial Instruments Directive II (MiFID II), increase senior management’s responsibility to review and consider higher-quality data from within the firm.

However, if organisations do not exercise enough prudence and care in AI applications, they face potential pitfalls. These include bias in input data, processes, and outcomes when profiling customers and scoring credit, as well as due diligence risk in the supply chain. Users of AI analytics must have a thorough understanding of the data that has been used to train, test, retrain, upgrade, and use their AI systems. This is critical when analytics are provided by third parties or when proprietary analytics are built on third-party data and platforms. There are also concerns over the appropriateness of using big data in customer profiling and credit scoring. In November 2016, for instance, a British insurer abandoned a plan to assess first-time car owners’ propensity to drive safely – and use the results to set the level of their insurance premiums – by using social media posts to analyse their personality traits. The social media service company in question said that the initiative breached its privacy policy, according to which data should not be used to “make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan.”

These concerns often have legal and financial implications, in addition to carrying reputational risks. For example, the General Data Protection Regulation (GDPR) gives EU citizens the right to information and access, the right to rectification, the right to data portability, the right to be forgotten, the right to restrict the processing of their data, and the right to restriction of profiling. However, it is unclear how easily individuals can opt out of the sharing of their data for customer profiling. It is also unclear whether opting out will affect individuals’ credit scores, which in turn could affect the pricing of insurance products and individuals’ eligibility for credit-based products such as loans.

Calls for the ethical and responsible use of AI have also grown louder, creating global momentum for the development of governance principles, as noted in a 2019 paper by Hermes and BCLP. However, the real challenge is to shift from principles to practice.