How Banks Should Prepare For Robots Going Rogue

By Elizabeth St-Onge and Ege Gürdeniz.
This article first appeared on World Economic Forum Agenda on November 5, 2018.

Banks are rolling out machine-learning applications to handle all manner of tasks once reserved for humans, from customer service to automated investment picking. But are they ready to clean up the mess created if the robots go rogue?

While financial services firms have sturdy structures in place to police human misconduct – and have expanded them in recent years to cover social media and other new technologies – machine misconduct is another matter. Standing at the crossroads of compliance, risk management, human resources, and technology, the management of machine conduct has no natural home in most banks’ organizational structures.

This needs to change if banks hope to tap the incredible potential of machine learning. Used correctly, these technologies can deliver significant benefits to both banks and their customers. Machine-learning applications can provide better customer insights and solutions, and greater efficiency across the entire firm, from the customer interface to back-office functions.

86% of bank executives agree that the widespread use of AI offers a competitive advantage beyond cost

But banks also need to scrutinize the ethical ramifications of machine learning applications just as aggressively as they vet the backgrounds, ethics, and compatibility of job applicants. One bad bot can harm a bank’s reputation and potentially dent revenue. Since machine misconduct is purely a digital phenomenon, problems often spread instantly – causing chain reactions that can affect organizations and customers on a massive scale.

Even technology giants have stumbled in the machine-learning arena. Apple’s Siri voice assistant recently defined the word “mother” in an inappropriate way, while Google’s photo app made a racist blunder.

So how can banks beef up their machine risk management while still fostering innovation and tapping into machine learning’s immense promise?

First, they need to create robust machine-development and data-governance standards for their machine-learning efforts. That starts with an inventory of all such applications running throughout the company. At many banks, individual teams roll out new applications in isolation. They need a firm-wide view.

Next, banks must dive headlong into the data. They already have a deep understanding of the market and other data that flow in and out every day, but machine-learning applications are introducing vast quantities of new types of social media and customer-interface data that need to be catalogued and monitored. These new data forms require the same level of governance as trading and other financial data. Individuals or teams must be relentless in screening out anything that could bias a machine-learning application’s results.

Before a new application is introduced, it should go through a review and approval process that balances the need for proper risk management across the firm with the need to promote innovation. Each application has the potential to introduce new data and decisions into the ecosystem that could corrupt other functions.

Banks must also establish accountability for machine mishaps. They already have long-standing procedures for employees: the human resources function governs behavior and other ethical considerations; compliance makes sure company and regulatory rules are followed; conduct teams govern interactions with customers; while risk teams make sure products being sold don’t put the firm in peril.

Similarly, banks need a taxonomy for machine-learning applications that spells out the roles, responsibilities, and procedures for governing and managing the risk associated with each type of machine. Without one, a problem can quickly spiral. If, say, a new machine were to start making inappropriate investment recommendations to customers, fingers would be pointed at the technology team, which might in turn deflect blame to the sales team or the model risk management team. The compliance group and others might not get involved at all.

To ensure banks can respond appropriately, they must boost the level of technological expertise inside each governance function, from risk management to compliance and human resources. Banks need to add data scientists and other technologists in these areas so that the right questions are being asked and the oversight is informed.

Finally, there are ethical considerations to the use of machine learning for decision making, and senior management needs to be actively involved in developing a framework to address them.

Machine-learning applications can enable banks to create value for their customers, employees, shareholders, and society in new ways. But banks must be aware of the risks of machine learning and address them quickly and systematically. Without proper governance, it won’t be long until a machine-learning disaster with major ethical, legal, and financial consequences unfolds.