
We Need To Approach AI Risks Like We Do Natural Disasters

Companies, insurers, and policymakers all play a role

By Prashanth Gangu
This article first appeared in Harvard Business Review on February 7, 2018.

The risks posed by intelligent devices will soon surpass the magnitude of those associated with natural disasters.

Tens of billions of connected sensors are being embedded in everything from industrial robots and safety systems to self-driving cars and refrigerators. At the same time, the capabilities of artificial intelligence (AI) algorithms are evolving rapidly. Our growing reliance on so many intelligent, connected devices is opening up the possibility of global-scale shutdowns.

The good news is that natural disasters themselves, which Munich Re says caused $300 billion in economic losses globally in 2017, provide a template for how to mitigate the growing and potentially catastrophic risks posed by AI. As they have for extreme weather and natural disasters, companies can not only establish international protocols and standards to govern AI within their own walls, but also put in place processes to work with other companies, insurers, and policymakers.

[Exhibit. Source: Oliver Wyman analysis]

INTELLIGENT DEVICE RECOVERY PLANS

Today, many companies are exposed to intelligent device risks that could harm both their own operations and their customers. Yet few have formally quantified the size of their revenue at risk and their potential liability. Nor have they set up safety and security protocols for potential black swan AI events.

They should. As with natural disasters, companies cannot fully protect against smart-device risks by buying insurance; they must have worst-case-scenario recovery plans. Managers have to identify their higher- and lower-risk intelligent device vulnerabilities, build in redundant systems, and potentially set up the AI equivalent of tsunami early-warning systems. They also need the ability to switch to manually controlled environments if artificially intelligent systems have to be shut down, and to recall faulty smart products.

Contingency plans must go beyond a natural disaster playbook. Given the many potential points of connectivity, it will be much more difficult to predict, identify, and correct the cause of large-scale smart-device failures. Debugging and reprogramming a faulty intelligent device is even more complicated than creating a patch to counter a malicious cyberattack, because it can be unclear what rules the machines are following.

As a result, no company will be able to recover on its own. To rebound from the potential impact of a cascading set of global AI-related shocks, managers will have to consider the vulnerabilities that exist everywhere from their suppliers to their customers. Addressing those vulnerabilities will require coordination across a large number of technology service providers and other companies that could catch or spread an AI infection to others, regardless of who is at fault.

AI INSURANCE PRODUCTS AND SERVICES

Insurers should quantify their exposure to a global intelligent device meltdown, offer new products, and advise companies and governments. Even with about $700 billion in capital available in the United States and hundreds of billions of dollars more around the globe, property and casualty insurers’ balance sheets are too small to cover all the potential losses from a global intelligent device disaster. But insurers can use data collected on losses across industries to advise companies and governments on how best to quantify their potential exposure to a worst-case scenario.

As they have for natural catastrophes, insurers can also encourage public sector safeguards. Since insurers cannot completely mitigate the outsized risks posed by extreme weather events, the governments of many developed countries and international organizations provide natural catastrophe relief through agencies like the Federal Emergency Management Agency and public flood insurance programs. Insurers should help mobilize similar public sector resources for the potential victims of an AI-enabled smart-device disaster.

In addition, insurers can start advising clients on how to enhance their safety and security protocols to head off the dangerous repercussions of an intelligent device meltdown. Today, some leading insurers are suggesting security procedures that companies could follow to address information breaches and interruptions in the event of a global failure of interconnected systems. But they should also begin to explore how to respond when smart devices become even more sophisticated and potentially set and follow their own objectives.


AI INTERNATIONAL PROTOCOLS

Finally, policymakers should establish international trust and ethics guidelines to govern the development and implementation of ever more advanced AI products and systems. To reduce the future impact of natural disasters, governments and international organizations like the Red Cross and the World Bank collect and share data on the damage caused and the support required to help victims. Similar intelligence will be critical to curb the impact of potential smart-device shocks as artificial intelligence evolves and the number of connected IoT (Internet of Things) devices, sensors, and actuators exceeds 46 billion by 2021, according to Juniper Research.

About a dozen governments, technology companies, and international organizations such as the Institute of Electrical and Electronics Engineers and the World Economic Forum are starting to explore global AI trust and ethics protocols for retaining control of interconnected AI-driven systems and products. These forums are beginning to deepen understanding of the potential harm that intelligent devices could cause and the need for best practices. But much more has to be done.

Establishing the resources required to reduce the risks that will come with the world's transition to more intelligent and interconnected networks will be difficult and costly. But we can't afford not to do it, and our experience responding to some of the world's worst "100-year storms" offers a valuable starting point for figuring out how to get ahead of potentially even more severe disasters. We just need companies, insurers, and policymakers to recognize that such efforts are an essential investment in our future.

This article is posted with permission of Harvard Business Publishing. Any further copying, distribution, or use is prohibited without written consent from HBP - permissions@harvardbusiness.org
