If there is one term that never loses its relevance in the IT landscape, it is "Legacy Transformation." A couple of decades back, technologies such as Visual Basic, Java, and C++ were the target end-state of this transformation, replacing then-legacy technologies that ran primarily on mainframes. With its widespread adoption, Java in particular became the end-state vision for most of these programs. The irony is that what was legacy a couple of decades back continues to be legacy in large enterprises to this day, particularly in large Public Sector organizations. And what used to be a modern application package a decade back has now been classified as legacy as well: difficult to change and difficult to maintain.
Mission-critical applications continue to run on these legacy technologies, and moving them to modern technologies has proven a herculean task. Many of these initiatives have either failed or yielded partial to mixed results. Yet modern technologies offer significantly higher degrees of agility, flexibility, and a seamless consumer experience, so the need to move away from legacy tech is greater now than ever.
There is not a single future-oriented organization that does not have Digital Transformation on its agenda, and in many cases a robust roadmap for moving away from legacy tech is at the heart of it. Approaches to moving away from legacy can be broadly classified into two: Bottom-Up and Top-Down.
Bottom-Up modernization is a term commonly used by Systems Integrators and product vendors, many of whom claim to provide capabilities to convert the business logic embedded in legacy systems into modern applications. There are multiple methods to this, covering one or more techniques such as analyzing the legacy code directly, tracing how it executes, and reverse engineering logic from logs.
As the name suggests, the Top-Down Approach is the inverse of the Bottom-Up Approach. It is typically used when the business logic and business rules are well documented with minimal to no flaws. Here, the documentation is the sole source of truth and is used like any business requirements specification, even though some of it may need to be re-documented in line with new-age product features.
Both of the above are happy-path scenarios, and we know that a happy path in digital programs is a myth.
The Bottom-Up Approach works reasonably well when the logic implemented in code is relatively straightforward, i.e., there are not too many detours or deviations. It gets complicated when the code has been customized for specific scenarios during its life cycle. This is of particular significance when the code is a few decades old and has accumulated multiple exception scenarios along the way.
The Top-Down Approach works well when every single part of the code is documented and the documentation is kept integrated with the master document; i.e., when new logic is introduced for exception scenarios, the master document is updated instead of separate documents being created.
Consider a government service that has been around for, say, 30 years, or a banking system that has been around for 10. One can only imagine the number of iterations the business logic would have gone through, resulting from changes in legislation, bug fixes, or even handling of specific cases. Here, neither the Top-Down Approach nor the Bottom-Up Approach on its own will ensure 100% accuracy. And if we are dealing with a payment system, there is no room for error: it must be 100% accurate.
So, what is the Hybrid Model? In a usual scenario, the Hybrid Model begins with the Top-Down Approach, on the assumption that the documentation is more than 70% accurate. You build the base logic from this documentation. As soon as it is designed and developed, you integrate comparison logic that checks the output of the newly built system against that of the existing system for the scenarios developed, typically during Functional Testing or an early Functional Testing phase. If the comparison is 100% accurate, proceed to further rounds of testing; however, that is a rare outcome, and chances are you will fall short of 100% similarity in several cases.
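To make the comparison step concrete, here is a minimal sketch of such a parallel-run harness in Python. It assumes each system exposes some callable that computes a result for a given scenario; the names (`legacy_calculate`, `new_calculate`) and the scenario shape are illustrative assumptions, not part of any specific product.

```python
# Hypothetical parallel-run comparison harness: feeds the same test
# scenarios to the legacy system and the rebuilt system, and records
# mismatches. All names here are illustrative placeholders.

from dataclasses import dataclass, field


@dataclass
class ComparisonReport:
    matched: list = field(default_factory=list)
    mismatched: list = field(default_factory=list)

    @property
    def match_rate(self) -> float:
        total = len(self.matched) + len(self.mismatched)
        return len(self.matched) / total if total else 0.0


def compare_outputs(scenarios, legacy_calculate, new_calculate) -> ComparisonReport:
    """Run every scenario through both systems and collect deviations."""
    report = ComparisonReport()
    for scenario in scenarios:
        legacy_result = legacy_calculate(scenario)
        new_result = new_calculate(scenario)
        if legacy_result == new_result:
            report.matched.append(scenario["id"])
        else:
            # Keep both outputs so the mismatch can be traced later.
            report.mismatched.append(
                {"id": scenario["id"], "legacy": legacy_result, "new": new_result}
            )
    return report
```

The match rate from such a report gives a concrete measure of how far the documented logic has drifted from the code, and the mismatched cases become the input for the Bottom-Up step described next.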
The next step is to weed out the specific cases where there is no 100% match. Run these cases in the legacy system and apply a Bottom-Up Approach to them: enable tracing to see how the code traverses, or capture logs and reverse engineer from them. These techniques give insight into deviations from the stated standard business logic. Incorporate the findings into the documentation. The key is that this is an iterative process, repeated until Functional Testing can be executed for all test cases and scenarios. What started as a 70% predictable outcome gradually improves and inches toward the 100% mark in successive iterations.
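Building on the comparison sketch above, the iterative loop could look something like the following. The helpers `extract_rule_from_trace` and `new_system.apply_rule` are hypothetical stand-ins for whatever trace-analysis and rule-update mechanism a team actually uses.

```python
# Illustrative compare -> trace -> document -> rebuild loop for the
# Hybrid Model. Reuses compare_outputs from the earlier sketch.

def refine_until_converged(scenarios, legacy_calculate, new_system,
                           extract_rule_from_trace, max_iterations=10):
    """Repeat the parity check until full agreement or the iteration cap."""
    report = None
    for iteration in range(1, max_iterations + 1):
        report = compare_outputs(scenarios, legacy_calculate, new_system.calculate)
        print(f"Iteration {iteration}: {report.match_rate:.0%} match")
        if not report.mismatched:
            return report  # 100% parity reached for the known scenarios
        for case in report.mismatched:
            # Re-run the failing case in the legacy system with tracing on
            # and derive the undocumented rule from the execution path.
            rule = extract_rule_from_trace(case["id"])
            # Fold the recovered rule into the new system and, crucially,
            # back into the master documentation as well.
            new_system.apply_rule(rule)
    return report
```

Convergence here means parity on the known scenarios only; each iteration typically surfaces a few more undocumented rules, which is why the loop matters more than any single run.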
Once you are convinced that Functional Testing has been reasonably successful, repeat similar steps during Integration Testing and User Acceptance Testing. Integration Testing is even more important if you are using only a rules engine, as there is a good possibility that your rules engine is headless: the scenarios that trigger rules, and their variations, are driven by the system invoking those rules, sometimes an Assessment Engine or a Customer Interaction Engine such as a CRM.
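As an illustration of what such an integration-level parity check might look like for a headless rules engine, here is a sketch that replays captured invocations against the engine's API. The endpoint URL and payload shape are assumptions for illustration; in practice the requests would mirror whatever the CRM or Assessment Engine actually sends.

```python
# Sketch of replaying captured upstream traffic against a headless
# rules engine and flagging any rule outcome that deviates from the
# outcome recorded in the legacy environment.

import json
import urllib.request

RULES_ENDPOINT = "https://rules.example.internal/evaluate"  # hypothetical URL


def evaluate_rules(payload: dict) -> dict:
    """POST a captured request to the headless rules engine."""
    request = urllib.request.Request(
        RULES_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def replay_captured_traffic(captured_requests, expected_outcomes):
    """Replay real invocation patterns and collect diverging outcomes."""
    deviations = []
    for payload, expected in zip(captured_requests, expected_outcomes):
        actual = evaluate_rules(payload)
        if actual != expected:
            deviations.append(
                {"request": payload, "expected": expected, "actual": actual}
            )
    return deviations
```

Capturing real invocation traffic from the upstream system is what makes this check meaningful, since the rules engine alone cannot tell you which rule variations are actually exercised in production.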
Once you have covered all phases of testing, there is a very high probability that you will have covered 100% of the rules in the system.
So, the next time someone reaches out to you promising "guaranteed legacy transformation," think again!
Based in Canberra, Australia, Madhu Pulasseri is the Delivery Partner and Digital Process Automation Practice Manager for Infosys.