
Trusting The Mind Of A Machine

As the adoption of applications that leverage complex machine learning grows, so do concerns about humans’ ability to sufficiently understand and explain the decisions made and actions taken by machines. This concern is particularly pronounced in areas where a lack of understanding of modeled output can have a real, negative impact on customers, such as unfair treatment of loan applicants in financial services or misdiagnosis of patients in health care.

Various terms such as Artificial Intelligence (AI) “explainability,” “transparency” and “interpretability” have been used by different groups and organizations to articulate this challenge. However, the fundamental issue boils down to our ability to trust the output produced by the machine: to make a significant decision that impacts others based on a piece of output, we must sufficiently trust the output, and to sufficiently trust the output, we must:

  1.  Know that the output is accurate
  2.  Sufficiently understand how and why the output was produced

“If you can’t explain it simply, you don’t understand it well enough.”
Albert Einstein

Most institutions have independent review frameworks and qualified testers that ensure the output produced by a machine is accurate and appropriate (for example, model risk management functions at financial institutions). Knowing that a qualified third party has reviewed and certified a machine for use does establish some level of trust in the system. However, independent review does not necessarily help others understand the machine: a high-performing machine that has been independently reviewed and certified by a highly qualified team of computer scientists can still be a complete mystery to the parties it affects, such as users and customers.

Indeed, not every machine needs to be understood and explained (and trusted) to the same degree. Depending on the use and purpose of the machine, who is impacted and how, and other factors such as regulatory requirements, each machine requires a different level of trust. An effective solution to the trust issue must therefore account for these factors. In this article, we present a framework that institutions can use to establish trust in the “Machine-Human Ecosystem” and enable the responsible, large-scale adoption of machine learning applications.

With the EU’s rollout of the General Data Protection Regulation (GDPR) and heightened supervisory and public expectations around data privacy and traceability, being able to explain a machine’s output to customers and clients has become a legal obligation, at least for firms doing business in the EU. We recommend that companies take steps now to establish a framework that will allow them to meet these growing requirements and expectations in a timely manner.
