Numerous press reports this week spotlight Spectre and Meltdown, two newly discovered cybersecurity flaws. What makes these flaws different from other security “holes” is that they are hardware flaws, not software flaws: they manifest in the microprocessors that run most of the world’s computers and phones. Software security flaws can usually be fixed with a patch; hardware flaws often require physical-part replacement, like an automaker’s airbag recall.
In general, both the Spectre and Meltdown flaws allow an attacker to read areas of computer memory that should be inaccessible. Attackers gain access by exploiting microprocessor design techniques used to improve performance, including speculative execution, out-of-order instruction execution, and memory read-ahead. If a program can reach memory that should be walled off, it can potentially harvest sensitive information, such as passwords or other credentials that could open the door to a much larger data breach.
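To make the mechanism concrete, here is a toy Python simulation of the idea behind a Spectre-style bounds-check bypass. This is emphatically not a real exploit: the “cache” is a Python set, “speculation” is modeled by performing the out-of-bounds read regardless of the bounds check, and the names (`victim`, `attacker_probe`, `SECRET`) are all illustrative assumptions, not part of the published attack code. It only shows why a secret-dependent memory access during speculation lets an attacker recover data the bounds check was supposed to protect.

```python
# Toy model of a Spectre-style bounds-check bypass. All names are illustrative.
SECRET = b"password"
array1 = bytes([1, 2, 3, 4])       # the only data the victim intends to expose
memory = array1 + SECRET           # the secret sits just past array1 in "memory"

cache = set()                      # model of which cache lines got touched

def victim(x: int) -> None:
    """Victim code: a bounds check guarding an array access."""
    if x < len(array1):
        pass                       # architecturally, out-of-bounds x does nothing
    # ...but under speculation the CPU may run the guarded body anyway,
    # leaving a secret-dependent footprint in the cache:
    value = memory[x]              # out-of-bounds read, "speculatively" executed
    cache.add(value)               # footprint depends on the secret byte

def attacker_probe() -> bytes:
    """Recover bytes by observing which 'cache line' the victim touched."""
    leaked = bytearray()
    for i in range(len(SECRET)):
        cache.clear()              # flush the cache
        victim(len(array1) + i)    # call with an out-of-bounds index
        leaked.append(cache.pop()) # reload: see which line is now "hot"
    return bytes(leaked)

print(attacker_probe())  # recovers b'password' despite the bounds check
```

In real hardware the “footprint” is a timing difference (a cached line loads faster than an uncached one), and the speculative results are discarded architecturally, which is why the leak evaded detection for so long.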
Since the flaws were identified, patches addressing Meltdown have been issued for the major operating systems, although potentially at a material cost to performance. There is currently no patch for Spectre, and the speculation is that it cannot be fully remediated without physically replacing the processor in every affected computer and server.
There are two primary ways in which an attacker could take advantage of these flaws to get access to confidential or sensitive data. The first would be to run an attack program on a public cloud that attempted to steal information from other workloads running on the same physical servers, given that the public cloud is a shared, virtual environment. While possible in theory, this sort of attack would be highly speculative, not unlike fishing in the middle of the ocean with no idea of what’s below. Moreover, the big cloud providers have already patched their infrastructure or added protections to prevent this sort of information leakage.
The second is much more likely: by tricking someone into running malware on a specific machine, likely via a phishing attack, an attacker could compromise other data on that same machine. That said, there have been no documented attacks of this type, and the operating-system publishers have been rolling out patches and protections to reduce the likelihood of one succeeding.
How important is this distinction from the perspective of cyber risk and digital innovation? We think it is very important, and likely signals the beginning of a new era in tech design.
Hardware isn’t safe anymore (and really wasn’t ever safe)
People generally assume that software is bug-ridden and hackable but that physical hardware is safe. Spectre and Meltdown have exposed the fallacy in this assumption. In practical terms, the hardware stacks in captive data centers, and on laptops, phones, and consumer devices, need to be treated as potentially compromised (and often unpatchable). Security postures must be adjusted accordingly.
This realization reinforces the notion that the castle-and-moat approach to cybersecurity is fundamentally flawed, and that companies need to take a layered approach to security that increases control (and, likely, user friction) as assets become more sensitive. At the core of this philosophy is the assumption that you will be hacked, so you must act in advance to limit the damage.
Devices are the next threat vector
Over the last five years there has been a continuous march to network nearly everything in our daily lives. From smart thermostats to garage-door openers, lightbulbs, kids’ toys, and even fish tanks, everything is being connected to the local Wi-Fi access point so it can be controlled remotely and upload data into the cloud. On the surface, this is a good thing: smarter devices are easier to use, save us energy, and make sure our fish stay alive.
Unfortunately, all these networked devices also afford hackers millions of new points of attack that are often not effectively hardened. Even worse, device manufacturers rarely put in place the necessary upgrade-and-patch programs to identify and close security holes as they are discovered. Plus, these devices are full of microprocessors and other hardware that can create additional risk.
As the spread of networking and the Internet of Things is likely to continue accelerating, it is absolutely critical that the buyers of devices (both consumers and corporations) demand protection of their data. After all, your fish tank shouldn’t let hackers steal all your data.
Security needs to be the first design constraint, not the last
Given that hacking is already pervasive and will likely get worse, security must be a focal point, not an afterthought, in device design, starting at the whiteboard stage. The current practice of doing a cursory security review just before releasing v1.0, and then quickly patching security issues as they are discovered (often only after the first hacks), is simply unacceptable in today’s cyber environment.
Likewise, the base assumption that adding user friction to improve security is unacceptable also needs to be challenged directly and continually. Users need to be trained to accept some additional complexity in exchange for being protected—and user-experience designers are going to need to get creative in how they natively build security into the user experience.
Spectre and Meltdown are likely just the beginning when it comes to hardware-based security holes. Both flaws resulted from engineers trading security for performance, a tradeoff that likely made sense 20 years ago. In today’s fully networked, always-on environment, such tradeoffs will only create avenues for hackers to exploit.