Insights

Commentary: Atlanta’s Cyber Attack Shows the New Security Risks the U.S. Needs to Address—and Fast

By Peter Beshar

This article first appeared in Fortune.

Last week’s ransomware attack on the city of Atlanta’s computer networks offers a chilling reminder that the public sector is directly in the line of fire in the war against cyber terror. With cities and states across the country increasingly relying on artificial intelligence and machine learning to deliver vital services, the risks for residents and businesses are growing exponentially.

Public officials are trying to balance the need to secure infrastructure assets with the need for open government practices. Last August, for instance, in the name of transparency and accountability, New York City Councilman James Vacca proposed that the city publicly disclose the source code of all algorithms relied upon in delivering municipal services. These “algos” range from how teachers are evaluated, to when garbage gets collected, to which precincts get the most police officers. The proposal was the first of its kind in any U.S. city, and some privacy advocates assert that it should serve as a model for the rest of the country.

The debate over the management and disclosure of this source code is critical, because governments are increasingly relying on artificial intelligence and machine learning to analyze data and make key decisions. And while these advances offer the promise of better service at a reduced cost to taxpayers, this growing reliance on AI and ML comes with two distinct and potentially conflicting risks.

The first risk is that governments overly reliant on AI introduce the potential for bias, particularly racial bias in the criminal justice system. In 2016, a ProPublica investigation found significant racial disparities in criminal justice “risk assessments” produced by algorithms that seek to predict future criminal behavior.

In one notable example, the software wrongly considered a black woman who took a bike from a neighbor’s yard (given a risk score of 8) to be more likely to commit a future crime than a white man arrested for shoplifting who had a lengthy criminal record (he scored a 3). The ProPublica analysis of 7,000 individuals arrested in Broward County, Florida revealed that this risk assessment tool flagged African-American defendants as potential recidivists at disproportionately high rates. The software made the inverse mistake of underestimating recidivism rates for whites.

More than 45 states now rely on algorithmic tools to set bond amounts, make parole decisions, or even influence jail sentences. These kinds of automated risk formulas—which have implications for civil liberties and racial inequality—require broad transparency and close scrutiny.

Councilman Vacca’s legislation was aimed squarely at this troubling potential for bias. The challenge, though, is that erring too far on the side of transparency increases the second risk, which is the threat of widespread physical cyberattacks.

When we think about cybersecurity risk, we typically envision attacks on email, networks, websites, and other digital assets. Increasingly, however, we can expect these attacks to target physical assets, and the rise of artificial intelligence and machine learning may provide new and potent vectors for widespread attacks.

Automated systems are rapidly evolving from offering assessments and evaluations to actually executing decisions. That’s the difference between Waze suggesting the best route to individual drivers and a centralized computer system giving a fleet of autonomous vehicles, or drones, direct instructions.

As cities automate water supply, electricity, mass transit, and hospital services, the cyber threat to these physical assets will rise. We’re already seeing evidence of this. Just before the Atlanta cyberattack, the U.S. Department of Homeland Security and the FBI issued a joint bulletin indicating that Russian hackers had successfully penetrated control systems at energy, nuclear, water, aviation, and manufacturing sites.

Herein lies the dilemma with the Vacca bill and similar efforts that have a well-intentioned goal of maximizing transparency to minimize the threat of bias: The more source code governments disclose, the more tools cyber criminals will have at their disposal. Last month, experts from 14 organizations, including OpenAI, Oxford University, and the Center for a New American Security, catalogued the digital, physical, and political risks of AI in a sweeping report. Its core thesis was the “dual-use nature” of AI: the same technology that promises accelerated scientific discovery and enhanced productivity can also be turned to cyberattacks and political disruption.

When weighing the benefits and risks of the Vacca bill, the New York City Council sensibly decided to devote more time to understanding what the city should disclose and how. This is far preferable to diving headfirst into legislating without fully understanding the risks involved. This due diligence—in New York and around the country—must happen quickly.

With his landmark legislation, Councilman Vacca sparked a crucial debate about balancing transparency and security in the new world of artificial intelligence. Citizens deserve to know how their government allocates resources and makes decisions. Yet governments have an obligation to do all that they can to keep us safe, particularly at a time when hackers too often appear to be one step ahead of the rest of us.

Peter J. Beshar is executive vice president and general counsel of Marsh & McLennan Companies.