The application and data landscape in large companies today is a web of interconnected platforms, databases and applications that is extremely difficult to understand and even more difficult to safely change. A large global bank, for example, may run 5,000 individual applications. Manual processes supported by end-user computing tools such as Excel and Access often number in the thousands or tens of thousands. Large numbers of databases and reports, often duplicates of one another, are also common. A plausible estimate is that a quarter of a trillion lines of code are currently running in production.
All this complexity makes it extremely difficult for an organization to change, improve and innovate.
Complexity Makes Innovation Difficult
The transition to a digital model has been under way across industries for years and has only accelerated as the COVID-19 pandemic has driven millions of people online for everything from ordering groceries to doing their jobs. As organizations attempt to adapt, the complexity of their technology, process and data environments means that safely and effectively implementing change has become very difficult, and driving innovation at scale is even more challenging.
Complexity Is Expensive
Managing down operating costs within a complex technology environment is extremely difficult. Retiring applications that are tightly coupled within the technical and data architecture results in nearly insurmountable dependency management issues and change risk. Seemingly small projects quickly become big, expensive efforts with extended time frames. Retiring one application inevitably requires upstream and downstream changes, and this added cost erodes the business case and often leads to a decision to leave things as they are.
Maintaining all the legacy code in production also grows more expensive as the software engineers who wrote it retire from the workforce. Reports of companies having to triple salaries to retain engineers with unique knowledge of legacy code bases and infrastructure are common.
Complex Environments Are Difficult to Make Resilient
There is an inverse relationship between complexity and resilience: the more complex a company’s technology and processing environment, the more difficult it is to make resilient. Beyond a certain point, it is impossible to ensure that a catastrophic failure can be fully prevented or that a malicious actor cannot cause significant damage via a cyberattack.
Start by Measuring Complexity
The first step in managing complexity is to figure out how to measure the current state. Very few organizations have a formalized approach to measuring the complexity of their application and data environment. Even fewer have defined a process for evaluating whether a specific change to the technical architecture will increase or decrease complexity. Companies don’t know where they are or where they are headed.
There is a range of approaches to quantifying complexity. Simpler methods include measuring data hops and the number of copies of data, identifying applications that perform similar functions, tracking the number of programming languages in use, counting open-source libraries, or tallying the number of reports produced. More sophisticated approaches use algorithms that measure systemic complexity via automated code review and database scanning, mapping data flows throughout the enterprise to understand levels of duplication and dependency. Tools also exist for automated process mining and analytics that measure process complexity.
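The simpler counting methods above can be combined into a single score. The following is a minimal sketch, assuming a toy inventory of applications and data feeds; the field names and weights are illustrative assumptions, not a standard formula — the point, as the article argues, is to compute the same metric consistently over time.

```python
from collections import Counter

def complexity_score(apps, feeds):
    """Naive complexity score built from counts the article suggests:
    data hops, applications with duplicated functions, and language sprawl."""
    hop_count = len(feeds)  # each feed between applications is one data hop
    languages = {lang for a in apps for lang in a["languages"]}
    # applications performing the same business function count as duplicates
    functions = Counter(a["function"] for a in apps)
    duplicates = sum(n - 1 for n in functions.values())
    # weights are arbitrary assumptions; consistency matters more than precision
    return hop_count + 2 * duplicates + len(languages)

# Hypothetical inventory for illustration only
apps = [
    {"name": "risk-a", "function": "risk-reporting", "languages": ["java"]},
    {"name": "risk-b", "function": "risk-reporting", "languages": ["cobol"]},
    {"name": "crm",    "function": "sales",          "languages": ["python"]},
]
feeds = [("risk-a", "crm"), ("risk-b", "crm")]

print(complexity_score(apps, feeds))  # 2 hops + 2*1 duplicate + 3 languages = 7
```

Scoring the environment before and after a proposed change then gives a fact-based answer to whether the change increases or decreases complexity.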
What matters is less how sophisticated an organization’s approach to measuring complexity is than that it measures consistently. Once an organization has a complexity metric describing its current state, it can predict the impact of a specific change on that metric and track whether changes designed to reduce complexity actually do so. Not surprisingly, many changes intended to decrease the complexity of an environment actually increase it.
Many companies have architecture teams whose mandate is to approve the different technologies and vendor products they use. However, they rarely have authority to decide on a new piece of technology based on its impact on technical complexity.
Empowering architects to consider complexity and allowing them to steer business decision-makers along paths that reduce complexity while still meeting business needs is critical. To do so, business and technology leadership need to define and agree to follow top-of-the-house technology standards.
For example, many corporations purchase multiple reporting and analytics products, each with modest differences in capability. The decision to use one over another is simply a function of the preferences of individual leaders. Rationalizing back to a single, standard platform reduces complexity and typically brings cost advantages.
It is also important to push strategic partners and technology vendors to design modular platforms, support flexible deployment models, and comprehensively leverage application programming interfaces (APIs).
Create Complexity Firebreaks in the Architecture
When a firefighting team is attempting to control a forest fire, they first create firebreaks, or areas that lack combustible materials. They turn a large unmanageable fire into a set of smaller fires that are fought individually. The same technique can be leveraged to segment a highly integrated, tightly coupled enterprise architecture that is difficult to change into a set of smaller blocks that are easier to understand and manage.
In the case of enterprise technology, firebreaks can be introduced via the use of standard APIs and data contracts between groups and functions, the definition of authoritative data sources, standardized workflow technologies to manage the flow of information between businesses and teams, and other techniques to break the overall architecture into manageable blocks.
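A data contract of the kind described above can be as simple as a small, versioned record type validated at the segment boundary. The following is a minimal sketch, assuming Python dataclasses; the record and field names are hypothetical, chosen only to show how a contract insulates downstream consumers from a producing system's internal schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TradeRecord:
    """The agreed contract: downstream teams depend only on these
    fields, never on the producing system's internal schema."""
    trade_id: str
    notional: float
    currency: str

def publish(raw: dict) -> TradeRecord:
    """Validate and convert at the firebreak boundary, so internal
    schema changes in the producing system cannot leak across it."""
    return TradeRecord(
        trade_id=str(raw["id"]),
        notional=float(raw["amount"]),
        currency=raw["ccy"].upper(),
    )

record = publish({"id": 42, "amount": "1000000", "ccy": "usd"})
print(record.currency)  # USD
```

Because only the contract is shared, the team owning the producing system can rename internal fields or replace the system entirely without coordinating changes with every downstream consumer.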
Once these firebreaks are introduced and the application segments created, the interfaces between them can be stabilized. Teams wholly responsible for a segment are freer to innovate and modernize within their areas of accountability. Done correctly, this reduces overall enterprise complexity and accelerates innovation.
Engage Business Leadership in Terms They Understand
Too often, technology teams do not take the time to educate business leaders on the impacts their decisions have on the enterprise’s technology architecture and associated cost and risk. A decision to implement a vendor product may seem clear when considered using purely business criteria, but when considering complexity and other non-functional criteria, it may seem less advantageous.
Agreeing on a metrics framework, as suggested above, enables a fact-based discussion of the impact of a specific business decision on enterprise architectural complexity. Further, with the formal adoption of a defined risk appetite for technology complexity, discussions with business leadership or other technologists become anchored in risk rather than in debates over one platform versus another.
Solving the problem of excessive complexity is expensive, and a strong partnership between business and technology leadership is critical.
It Is Important to Take Small Steps
The challenge of legacy technology and the complexity it brings can overwhelm most organizations, especially those that allocate most of their annual technology budget simply to keeping everything running and have little left over for innovation. Complexity cannot be reduced overnight; it is a multi-year process. The key is to set standards, define metrics, start measuring progress and stop doing further harm, even if gains are modest at the beginning.
Chris DeBrusk Partner, Digital & Financial Services, Oliver Wyman