How To Build Trust In The World Of Automation

By: Chris DeBrusk, Ege Gürdeniz, Shri Santhanam, Til Schuermann

This article first appeared on BRINK News on October 16, 2018.

As the adoption of applications that leverage complex machine learning grows, so do concerns about our ability to understand and explain decisions made and actions taken by machines. This concern has been particularly pronounced in areas where the lack of understanding can have a tangible negative impact on customers. Prominent examples include the unfair treatment of loan applicants in financial services and the misdiagnosis of patients in health care.

Various terms such as artificial intelligence explainability, transparency and interpretability have been used by different groups and organizations to articulate this challenge. However, the fundamental issue boils down to our ability to trust the output produced by machines: to make a significant decision that affects others based on that output, we must trust it sufficiently, which means knowing that it is accurate and understanding how and why it was produced.

The Challenge

Machine learning algorithms can take many shapes and forms and vary in complexity. As a result, our ability to understand and trust the output produced by a machine depends on the specifics of the learning algorithm.

For example, a simple regression is much easier to understand and explain than a multilayer neural network. However, while simpler models are generally easier to explain, they also tend to perform worse (e.g., lower accuracy of predictions).

Therefore, as companies attempt to solve increasingly complex problems with increasing accuracy, they will need to use more complex approaches, such as deep neural networks with many hidden layers and thousands or millions of parameters interacting nonlinearly, which humans cannot intuitively or immediately understand. With this added complexity, trusting the machines will become more difficult.
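To make this trade-off concrete, consider the following sketch, which contrasts a logistic regression, whose per-input coefficients can be read off directly, with a small neural network whose thousands of weights interact nonlinearly. The synthetic data, feature count and model settings are placeholders chosen purely for illustration, not a reference to any particular production system.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic, illustrative data: 5 numeric inputs and a binary outcome
# (e.g., approve/decline). These values are placeholders only.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple model: one coefficient per input, so the drivers of a decision
# can be read off directly.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression accuracy:", round(simple.score(X_test, y_test), 3))
print("Per-input coefficients:", simple.coef_.round(2))

# More complex model: two hidden layers of nonlinearly interacting weights,
# with no comparably simple account of why it produced a given output.
complex_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                              random_state=0).fit(X_train, y_train)
print("Neural network accuracy:", round(complex_model.score(X_test, y_test), 3))
print("Number of learned weights:", sum(w.size for w in complex_model.coefs_))
```

Even when the more complex model scores higher, there is no comparably simple account of why it approved or declined a given case, which is precisely where the trust gap opens up.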

The Machine Learning Center of Trust

The “machine-human ecosystem” comprises various groups of people with different levels and types of interactions with a machine, and each group may have a different level of need and ability to understand the output produced by a machine.

Given the large number of machines and affected parties involved, as well as the need to follow a consistent methodology, the challenge of building trust and understanding in the machine-human ecosystem is best addressed centrally. Thus, our recommendation is to designate the existing machine development functions (e.g., head of data science, head of AI or head of analytics) as the “machine learning center of trust,” which would be responsible for executing the appropriate tasks and developing the necessary artifacts to help impacted groups understand and trust the machine.

A department like the machine learning center of trust would have the following key responsibilities with respect to explaining the machine:

  • Testing: Running a host of quantitative tests to assess which inputs are significant and how they drive the output (see the sketch after this list).
  • Data review: Walking through the data sourcing methodology and tracing back the data used to train the model in order to identify and remediate any potential areas of bias.
  • Documentation: Creating user-friendly documentation that synthesizes the results of quantitative tests and any other qualitative assessments that were made (e.g., contrastive explanations of the output) and providing a nontechnical and intuitive explanation of the drivers behind the model output.
  • Procedures: Defining and implementing standards and procedures to make sure all machines are developed in a transparent and consistent way and to ensure that outputs are replicable by independent third parties.
  • Monitoring and reporting: Monitoring model inputs and outputs on an ongoing or regular basis at the appropriate frequency (depending on the tier of the machine) and reporting the results to the relevant parties (e.g., management).
  • Training: Designing and executing targeted training programs, workshops and communications—either internal or external. For example, the rollout of a high-importance machine could be accompanied by an appropriate workshop to educate all relevant parties on the new machine.
  • Customer support: Providing customer and employee support for machine-related inquiries. For example, a customer may ask why a machine rejected their credit application, or a salesperson may ask why the machine recommends selling a particular product to a client.
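As one illustration of the testing responsibility above, the sketch below uses permutation importance, a common (though by no means the only) way to assess input significance. The synthetic data and gradient-boosting model are placeholders; in practice the center of trust would run such checks against the production machine and its held-out evaluation data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model, standing in for a real machine under review.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each input in turn and measure how much held-out accuracy drops;
# a large drop suggests that input is a significant driver of the output,
# which is the kind of evidence the center of trust would document.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"input {idx}: mean importance = {result.importances_mean[idx]:.3f}")
```

The resulting ranking is the sort of artifact that feeds directly into the documentation, monitoring and customer-support responsibilities listed above.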

Conclusion

The potential benefits of successfully using machine learning at scale are numerous and well-covered by industry publications, academic papers and mainstream media alike. New use cases, applications and experiments appear daily, further adding to the excitement and optimism around what machine learning can deliver for companies and consumers.

However, the absence of trust in the machine-human ecosystem will likely inhibit the large-scale adoption of machine learning, as the risk of unintended negative outcomes will be too great, and organizations may not have the appetite to face the potential regulatory, legal, ethical or financial consequences. To avoid this roadblock to adoption, institutions should start designating their own version of the machine learning center of trust and begin rolling out the associated guidelines and requirements now.