Insights

Managing The Risk Of Machine Learning

In the last year, machine learning has taken the world by storm. While the mathematical concepts behind this form of artificial intelligence have been understood for decades, the emergence of cheap, massive computing power via the public cloud and the availability of large, comprehensive data sets have made it possible for nearly anyone to train and deploy machine learning models. While the opportunity this presents is vast, it has also introduced new business and societal risks that will need to be managed.

The risk of using models with bias built into them is very real, and there are already numerous examples of it happening to the detriment of the people involved.

From prison sentencing models that make inaccurate and racially biased predictions of recidivism, to chatbots that begin communicating with inappropriate language and concepts, the ways in which machine learning can produce negative outcomes are numerous.

Avoiding these types of outcomes requires a two-pronged approach. First, companies leveraging this technology need to adopt comprehensive internal governance and a three-lines-of-defense model for managing the risk; just because it is easy to train and deploy a model does not mean the control framework around it should be any less robust. Second, the government regulators who oversee these companies need to incorporate an understanding of machine learning risk into their supervisory approach.

The skills necessary to manage this new technology will combine the core mathematics and engineering talent required to understand how it works with a social sciences perspective on how results can diverge from expectations to the detriment of customers, employees, and society at large. The promise of machine learning is vast, but it will be equally important to manage the technology's potential downsides.