Insights

The Enterprise Systems That Companies Need To Create

Featured in MIT Sloan Management Review

By: David Waller and Paul Beswick
This article first appeared in MIT Sloan Management Review on October 12, 2020.

The swiftness of technology’s progress in the past decade has convinced legions of companies that their survival depends on jettisoning their legacy systems as soon as budgets permit such an overhaul. Computing power has surged, storage costs have plummeted, and networking speeds have approached theoretical limits. All the while, companies and consumers are generating ever-growing floods of data packed with clues on how individuals behave and how products perform. Many companies thus risk upgrading technology purely for its own sake. In doing so, they overlook what may be the greatest opportunity presented by the modern technology stack: the chance to mobilize new tools in a way that empowers managers and technologists alike to make fundamentally better business decisions.

To illustrate, consider the curiously old-fashioned approach companies typically take to upgrading their legacy systems. It starts when something old stops working. Perhaps an aging mainframe fails often, resulting in seemingly never-ending maintenance costs, or an outage destroys transaction data. A decade ago, the natural response was to check for an updated version of yesterday’s software that could stamp out the bugs — particularly if paired with newer hardware. Now, companies look to the cloud for the latest collection of computing services, storage technologies, and performance guarantees.

Both scenarios share the same mentality and prevailing aim: to replicate yesterday’s functionality at today’s prices. “I want what I have today, only faster, or cheaper, or simpler.” The habit of colossal, periodic technology projects persists, justified by often-strained business cases that hinge on cost improvements or risk reductions spread across a wide swath of deeply entrenched systems.

There is a better way. Tech upgrades can be revenue generators, not just cost sinks, and they need not saddle you with soon-to-be legacy burdens. Our experience suggests that three strategies can position companies to carry out technology transformations that can create value and enable continuous innovation.

Redefine success. Companies that reap the greatest rewards from technical improvements recognize that it’s not only technology that changes: It’s also their leaders’ minds, priorities, and circumstances. Legacy systems aren’t bad because they’re outmoded — they’re bad because they’re almost invariably hard to deprecate.

To skillfully keep pace with technology, companies therefore need to develop what we call second derivative thinking: They must work to increase the rate of change of change. To build systems that improve the velocity of change in practice, companies need to identify the structural impediments that act as brakes on their ability to deliver technically, and then insist that each change project aim to remove at least one of those obstacles. In addition to achieving the project’s immediate goals, the effort also clears roadblocks that would otherwise bog down future efforts. With fewer impediments, subsequent projects automatically accelerate. And because those individual solutions are delivered in concert with existing work, teams don’t need to contrive laborious business cases to address them in isolation.

Take the example of a bank that wanted to grow by expanding its geographic footprint. Many aspects of banking vary from one country to the next — regulations and requirements, consumer habits and preferences. Idiosyncrasies aside, however, much remains the same about the core proposition and the span of products and services; in every nation, people save, spend, and borrow. Rather than inventing new infrastructure to conform to the nuances of a given location or region, the bank instead sought to leverage a common systems core where possible and then engineered custom services that could be “swapped in” to meet the particular needs of any given area.

Because the bank set the goal of being able to rapidly deploy operations in a new geography, it had no choice but to engineer the ability to rapidly adapt its systems. If a new regulation in, say, Singapore alters local banks’ identity verification requirements, the bank can update that single, isolated service rather than try to retool core banking applications in their entirety.
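The swappable-service idea can be sketched as a pluggable interface: the core banking code depends only on a contract, and each jurisdiction registers its own implementation. This is a minimal illustration, not the bank’s actual architecture; the verifier classes, field names, and country codes below are all invented.

```python
from typing import Protocol

class IdentityVerifier(Protocol):
    """Contract every jurisdiction-specific verifier must satisfy."""
    def verify(self, customer: dict) -> bool: ...

class SingaporeVerifier:
    """Hypothetical Singapore rules: suppose a regulation requires a national ID."""
    def verify(self, customer: dict) -> bool:
        return "nric" in customer

class GenericVerifier:
    """Default rules used wherever no local variant has been registered."""
    def verify(self, customer: dict) -> bool:
        return "passport" in customer

# The core depends only on the interface; when Singapore's rules change,
# one registry entry is swapped, and the core banking code is untouched.
VERIFIERS = {
    "SG": SingaporeVerifier(),
    "default": GenericVerifier(),
}

def verify_identity(country: str, customer: dict) -> bool:
    verifier = VERIFIERS.get(country, VERIFIERS["default"])
    return verifier.verify(customer)
```

The design choice is the point: regulatory churn lands in a single, isolated class rather than rippling through the core applications.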

Of course, not every company has the time or wherewithal to build cloud-native applications from scratch. Many sit in a technical limbo, unable to reach for state-of-the-art tech but unwilling to hobble along with turn-of-the-millennium tools. To resolve that dilemma, companies often try to build data lakes as a stepping stone to more comprehensive system upgrades. They reason that if they can systematically hoover up the information stored in those source systems and grant more widespread access to it, they can create more modern applications while letting older systems lumber along in the background. This tactic can work, but it comes with a cost.

To pipe data out of legacy systems, you have to build and maintain reliable pipelines. With a handful of systems and sources, this isn’t hard. But multiply that across the vast array of systems inside most large organizations and you get what have been called pipeline jungles — thickets of expensive data-integration jobs that no one really owns. Anyone who’s ever written or read a service-level agreement knows that a collection of individually strong components can produce a brittle system.
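The brittleness claim follows from simple arithmetic: when pipelines are chained, every link must be up at once, so their availabilities multiply. A quick sketch with illustrative SLA numbers:

```python
def composite_availability(availabilities):
    """Availability of a serial chain: the product of each link's availability."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Ten chained pipelines, each meeting an individually strong 99.9% SLA,
# deliver only about 99.0% end to end -- roughly 87 hours of combined
# downtime per year, versus under 9 hours for any single link.
chain = composite_availability([0.999] * 10)
```

The individual components can each be excellent; the system they compose is an order of magnitude less reliable.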

A novel solution to this problem is emerging in the form of data virtualization: a logical data layer that unifies data siloed in disparate systems without actually physically integrating it. At first, the idea sounds a bit silly. Don’t actually pull all of your data together into a communal, capacious tank. Instead, use technology that lets you pretend that you did. Rather than yanking data out of systems, you reach into them and fetch information when you need to use it.

Accessing data in situ, rather than creating infrastructure to shuffle it around, offers a few benefits. You’ll reduce unneeded copying and the wasteful expense of storing duplicate data. You’ll use tools that give you a single route to reach upstream data, with no pipeline jungles to wade through. And by combining these tools with gateways and intelligently constructed interfaces, you can implement detailed permissions and security protocols far more easily.
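A toy version of this idea makes the contrast with a data lake concrete: the virtual layer maps logical names to fetch functions and pulls rows only when a query runs, so nothing is copied or staged in advance. The source names, fields, and fetchers below are invented for illustration.

```python
class VirtualLayer:
    """Minimal sketch of a logical data layer: data stays in its source
    systems; queries fetch it in situ, on demand."""

    def __init__(self):
        self._sources = {}

    def register(self, name, fetcher):
        """fetcher: a callable that reads rows from the live source system."""
        self._sources[name] = fetcher

    def query(self, name, predicate=lambda row: True):
        # Rows are fetched at query time, never duplicated into a lake.
        return [row for row in self._sources[name]() if predicate(row)]

layer = VirtualLayer()
layer.register("crm_customers", lambda: [{"id": 1, "tier": "gold"},
                                         {"id": 2, "tier": "basic"}])
gold = layer.query("crm_customers", lambda r: r["tier"] == "gold")
```

A real data virtualization product adds query pushdown, caching, and access control, but the core move is the same: a single logical route to upstream data, with no per-source pipeline to own.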

Orient technology around decision-making. To create value from investing in technology, companies need to be clear on where the value lies in the first place. Companies can’t function without any technology, but surely the goal of better systems should be to function more effectively. In business, this imperative boils down to the goal of making better decisions. Generally, businesses don’t make money by chance but by choice. Behind every value-creating action, there is a long string of decisions about how, when, and by whom related tasks need to get done. For instance, long before a bank collects payments on a loan, it first needs to identify target customers, devise ways to solicit them, estimate their creditworthiness, create the product, set a rate, originate the loan, and ensure that the systems are ready to service it.

In general, companies should strive to make high-stakes decisions more effective and low-stakes ones more efficient. One national retailer noticed that when its senior-most executives gathered for quarterly planning meetings, they spent nearly all of their time scrutinizing historical sales reports and almost none of it making strategic choices. Why? Culture played a part — dwelling on anomalies and nitpicking was a well-known habit. (“Retail is detail,” as many say.) But there was a deeper, technological failing too.

Over time, the company had steadily improved its reporting capabilities to let users see fresher, more granular information. In contrast, it hadn’t invested in technologies to make predictions or to model scenarios. Users could pinpoint the store that sold the most blueberry yogurt yesterday. But ask them which flavors customers would buy if the blueberry were removed, and they’d have no idea. Ask them whether they should invest to build more stores or to reduce prices, and you’d hear a similar silence.

The executives hatched a bold plan to reorient themselves. They resolved to create what they called “living, 18-month plans” that codified their strategic choices — for instance, levels of discounting or footprint expansion — as well as robust forecasts of expected performance. During quarterly meetings, leaders updated the plan. What’s more, they insisted on the ability to quantify the business impacts of varying those choices as interactively as they could. The company needed to redirect investments away from reporting and toward modeling — fewer dashboards and more decision-support tools. But tailoring their technology to better suit the company’s key decisions meant that the leadership team could focus its collective attention on the choices that mattered most.
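What “quantifying the business impacts of varying those choices” might look like, in miniature: a scenario model that projects revenue under a candidate discount level. This is not the retailer’s actual tool; the constant-elasticity demand assumption and every number here are placeholders for illustration.

```python
def project_revenue(base_revenue, discount, elasticity=-1.5):
    """Project revenue under a discount, assuming constant-elasticity demand:
    lowering price lifts volume by (1 - discount) ** elasticity."""
    volume_lift = (1 - discount) ** elasticity
    return base_revenue * (1 - discount) * volume_lift

# Interactively compare strategic choices instead of reading history:
baseline = project_revenue(100_000_000, 0.00)   # no discounting
scenario = project_revenue(100_000_000, 0.05)   # what if we discount 5%?
```

Even a crude model of this kind changes the meeting: leaders argue about assumptions and choices rather than about last quarter’s anomalies.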

Enhancing low-stakes decisions can generate sizable profits too, if done frequently or on a large scale. A grocery store chain spent years agonizing over an elaborate, item-level pricing system. To work, it needed lots of data. Some inputs should have been simple to obtain, like statistics on past prices and sales, but analysts were stymied by having to collate data from different legacy systems. Other inputs, like competitive price data, were costly to buy and error-prone. To function, the system needed to build, store, and update tens of thousands of narrow statistical models, each one of which could fail from an input glitch or return nonsensical results because of an outlier. It was, in other words, a mess.

Dismayed by what he saw, the CEO asked a provocative question: Was “the juice worth the squeeze”? In other words, what if, instead of updating the complicated system, the retailer simply stopped setting item prices itself?

Instead of deploying large teams to chase down tiny details, the retailer could ask small teams of buyers and category managers to apply the same margin to all items in a category, effectively shifting the burden of setting product pricing back to the vendors. If a brand wished to undercut its rivals, it could do so by lowering its cost to the retailer — but that dynamic would require zero effort from the retailer. The retailer, in turn, could adjust its overall level of price competitiveness by altering margin targets across categories.
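The mechanics of the category-margin approach fit in a few lines: set one target margin per category and derive every shelf price from vendor cost. The categories, costs, and margins below are invented for illustration.

```python
# Hypothetical target margins, one per category, set by buyers.
CATEGORY_MARGIN = {"canned_goods": 0.30, "dairy": 0.25}

def shelf_price(category, vendor_cost):
    """Price such that (price - cost) / price equals the category margin."""
    margin = CATEGORY_MARGIN[category]
    return round(vendor_cost / (1 - margin), 2)

# A vendor that cuts its cost to the retailer undercuts rivals automatically,
# with zero pricing effort from the retailer:
price_a = shelf_price("canned_goods", 0.70)  # cost 0.70 -> shelf price 1.00
price_b = shelf_price("canned_goods", 0.63)  # cheaper cost -> lower shelf price
```

Note how the tens of thousands of item-level models disappear: the retailer’s only remaining lever is the margin target per category.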

Though the idea struck some as heresy, it gained currency as its assumptions and implications became clear. It might be possible to measure differences in price sensitivities between two similar cans of beans, as pricing folklore suggests, but the evidence is weak. Hence, this kind of decision is likely to be a low-stakes one where the best way to win is to be more efficient. Automating such decisions often makes them more efficient; offloading them entirely always does so.

Reduce your delivery “chunk size.” One final strategy that companies can use to good effect when revamping enterprise systems is to deliver smaller, self-contained, complete units of work. As a thought experiment, think of how much you’re willing to spend on your next tech transformation and how long you expect it to take. Divide both numbers by 50. You should aim to organize your work so that if you spend that diminished sum over that brief time interval, you get an entirely functional systems component, like a Lego brick that’s yours to build with. Maybe it’s a service encapsulated in a container, or a set of interfaces for accessing data programmatically — but whatever you build, you want it to be feature-complete and immediately reusable.

Operating in this way embeds the kind of second derivative thinking described above. Your thinking may evolve substantially as you upgrade your systems, but working in small chunks makes pivoting far easier. You have room to change your mind.

Furthermore, teams can use and derive value from individual components as they come online, rather than waiting for the entire edifice to be built. And giving teams new technical possibilities is a great way to unleash their creativity and empower them to solve problems you may not have considered.

The experience of a large commercial insurer provides an example of how to put these ideas into practice. Like many of its peers, this company felt both the drag of legacy systems and the fear of taking them off life support. Rather than rebuilding them in one go, the company chose to isolate its internal systems by putting legacy applications in a distinct service tier. Then it built a separate services layer on top of those applications. Those services provided users with access to the data and functions of the underlying legacy systems while abstracting away the need to interface with them directly. Legacy systems were in effect shrink-wrapped for a longer shelf life, and more legacy systems could be moved into the first, internal tier one by one.

To move a system behind this services layer, teams needed to think carefully about which of its data elements and functions were critical to the company’s operations. Not only did the exercise allow them to incrementally disentangle their tangled legacy setup; it also gave them a road map for progressively building more modern applications.
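In code terms, the insurer’s services layer is a facade: modern callers hit a clean interface while the legacy application keeps running untouched behind it. The legacy client and its quirks below are invented to illustrate the pattern, not drawn from the insurer’s systems.

```python
class LegacyPolicyMainframe:
    """Stand-in for an aging system with an awkward, fixed-width interface."""
    def FETCHREC(self, key):
        # Returns a pipe-delimited, fixed-width record with cryptic fields.
        return f"{key:>10}|ACTIVE  |000001250"

class PolicyService:
    """The services tier: exposes only the data and functions callers need,
    abstracting away the need to talk to the legacy system directly."""
    def __init__(self, backend):
        self._backend = backend

    def get_policy(self, policy_id: str) -> dict:
        raw = self._backend.FETCHREC(policy_id)
        key, status, premium = raw.split("|")
        return {
            "id": key.strip(),
            "status": status.strip(),
            "annual_premium_cents": int(premium),
        }

svc = PolicyService(LegacyPolicyMainframe())
policy = svc.get_policy("P-123")
```

The shrink-wrapping works in both directions: callers never see the fixed-width records, and when the mainframe is finally retired, only the backend passed to `PolicyService` has to change.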

Leave an Agile Legacy
You don’t have to be a technical expert to be astonished by the increasing potential for technologies to transform organizations. Fifty-one years ago, the United States piloted a spaceship to the surface of the moon using a 70-pound computer that could perform 14,245 calculations per second. In September of this year, Nvidia introduced a graphics processor that is more than 2.5 billion times as fast, weighing little more than a book.

Currently, high-capacity hard drives can store data at a cost of just over $15 per terabyte. When we first walked on the moon, storing that terabyte would have cost $1.7 billion in today’s dollars. This August, researchers pushed data through a single fiber-optic cable at a rate of 178 terabits per second — enough to transfer about 1,500 4K movies in the time it takes to say “one, Mississippi.” When the Apollo 11 astronauts splashed back down to Earth, the first computer-to-computer link hadn’t yet been invented. (It would come three months later, with Arpanet.)

Given the giant leaps forward that technology continually makes, it’s now possible for companies to replace their legacy systems and aging computing platforms with systems that enable breakthroughs not only in efficiency but also in their product and service offerings. The key is to recognize how your legacy systems must adapt in a world where accelerating change is the only constant.