AI Risk: The Newest Non-Financial Risk Every CRO Should Be Preparing For

Financial institutions are ramping up their efforts to leverage artificial intelligence (AI) and machine learning to create competitive advantage, better serve customers, and operate more efficiently. Tailored product recommendations, seamless customer onboarding and support, near-instant underwriting and pricing decisions, secure identity verification, and real-time fraud detection are just a few of the many areas where financial institutions are experimenting with AI.

The potential benefits of AI are clear, but the rapid adoption of this technology creates a challenge from a second-line risk management and governance perspective. The risk created by deploying AI applications ("AI risk") does not wholly fit into any existing risk bucket. Instead, AI risk is a composite risk that cuts across multiple aspects of non-financial risk.

An AI application can create technology risk, cyber risk, information security risk, model risk, compliance and legal risk, third-party vendor risk, and many other types of risk (e.g., fraud risk) depending on the specific use case and application. Because of its complex and composite nature, AI risk currently does not have a clearly designated second-line owner at most financial institutions. Roles and responsibilities across the different second-line functions are also typically not articulated in a way that governs the risk holistically. This creates gaps in governance and oversight.

AI risk is not a future risk. It is here today.

To ensure AI risk is appropriately managed and AI applications are responsibly rolled out with emphasis on customer and shareholder protection, we recommend CROs take six key steps today:

1.    Designate and treat AI risk as a new, distinct category of non-financial risk, with a clear definition, risk appetite, operating model and governance structure.

2.    Create a comprehensive inventory of current and planned implementations of AI across the organization.

3.    Establish a second-line owner for AI risk. We see technology risk as the natural home for AI risk, but acknowledge the most appropriate owner may depend on the specific institution.

4.    In addition to having a clear second-line owner, ensure roles and responsibilities across second-line functions are clearly articulated and codified. For example, it should be clear which aspects of AI risk will be covered by technology risk management, which aspects will be covered by model risk management, and so on.

5.    Take a collaborative and transparent approach across the second-line functions when determining roles and responsibilities, so that the best operating model and governance structure for managing AI risk emerges.

6.    As with many other risk types, take a tiered, risk-based approach to managing AI risk. Not all AI applications will create the same level of risk, and the level of governance should be commensurate with the inherent risk that is identified and the organization's risk appetite.
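To make steps 2 and 6 concrete, the inventory and tiering exercise can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the record fields, risk signals, tier names, and thresholds are illustrative assumptions, not a prescribed standard, and a real framework would reflect the institution's own risk taxonomy and appetite.

```python
from dataclasses import dataclass

# Hypothetical inventory record for one AI application (step 2).
# Field names and risk dimensions are illustrative assumptions.
@dataclass
class AIApplication:
    name: str
    business_unit: str
    use_case: str                  # e.g., "fraud detection", "underwriting"
    customer_facing: bool          # directly affects customer outcomes?
    makes_automated_decisions: bool
    uses_third_party_vendor: bool

def risk_tier(app: AIApplication) -> str:
    """Assign a governance tier (step 6) from simple inherent-risk
    signals. Real tiering would reflect the institution's risk appetite."""
    score = sum([
        app.customer_facing,
        app.makes_automated_decisions,
        app.uses_third_party_vendor,
    ])
    if score >= 2:
        return "Tier 1 - full second-line review"
    if score == 1:
        return "Tier 2 - standard review"
    return "Tier 3 - lightweight monitoring"

# Example: a customer-facing, automated underwriting model lands in Tier 1.
pricing = AIApplication(
    name="InstantQuote",
    business_unit="Consumer Lending",
    use_case="underwriting and pricing",
    customer_facing=True,
    makes_automated_decisions=True,
    uses_third_party_vendor=False,
)
print(risk_tier(pricing))  # Tier 1 - full second-line review
```

Even this toy version shows the point of the tiered approach: a lightweight internal analytics tool would score low and receive proportionate oversight, while a customer-facing automated decision engine is routed to full second-line review.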

Above all, don't wait: establish your AI risk framework now. AI risk is not a future risk. It is here today, as institutions have already adopted the technology across many areas, in some cases without realizing it. The rate of adoption and the corresponding level of risk will only continue to increase as the technology matures and becomes even easier to deploy.