It remains ironic that IT organizations – run by the very people responsible for building excellent business software to manage assets and inventory accurately – so regularly disregard the value of applying this same inventory management discipline to their own software assets.
However, with modernization projects looming on the horizon, and the risk of brain drain due to retirement and attrition, an increasing number of IBM i development sites are now looking to forensic-based data to understand their code base and better manage their available development resources.
Typically, organizations turn to code analysis to answer two basic questions: what code do we actually have, and what does it do?
Unfortunately, there are still many IT organizations with ongoing dependency on IBM i who don’t know precisely how much of their code base is relevant, how truly complex any part of it is, what business rules it implements, what the relational model is, or any other metric consistent with a structured management process.
It is not uncommon for IT directors in multi-billion-dollar organizations to have no clear idea of how much code they are responsible for, or how low-tech their development management processes are.
In a recent Power Systems user group encounter, a seasoned IT manager confessed to having responsibility for “thousands” of programs, but had no precise data on the size of that code base or how much of it was redundant. Nor did he have any explicit design information about the system beyond what was written down 20 years ago, when the system was first built.
It is equally revealing (and satisfying), however, to see the expression of wonder and amazement on the faces of people who see something they have worked on for 30 years quantified, measured and visualized for the first time. Without exception, this ‘bird’s eye view’ provides a new and more effective way of working and managing their IT systems and development processes.
Typically, one of the key deliverables of a code visualization effort is a detailed list of software problems. This list can include software and object mismatches, missing source, unused programs, files, access paths, code and a number of legacy constructs such as GOTOs, internally described files and multi-format files.
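As a rough illustration of how such a deliverable might be produced, the sketch below scans RPG source lines for two of the legacy constructs mentioned above (GOTO and its TAG targets). This is a hypothetical, simplified example: real analysis tools such as X-Analysis parse the language properly rather than pattern-matching, and the patterns and sample source here are invented for illustration only.

```python
import re

# Illustrative patterns for two legacy RPG constructs; a real tool
# would use a full parser, not regular expressions.
LEGACY_PATTERNS = {
    "GOTO": re.compile(r"\bGOTO\b", re.IGNORECASE),
    "TAG": re.compile(r"\bTAG\b", re.IGNORECASE),
}

def flag_legacy_constructs(source_lines):
    """Return {construct_name: [line numbers]} for constructs found."""
    findings = {name: [] for name in LEGACY_PATTERNS}
    for lineno, line in enumerate(source_lines, start=1):
        for name, pattern in LEGACY_PATTERNS.items():
            if pattern.search(line):
                findings[name].append(lineno)
    # Keep only constructs that actually occur in the source.
    return {name: lines for name, lines in findings.items() if lines}

# Invented fixed-format RPG fragment, for demonstration only.
sample = [
    "C                   IF        QTY <= 0",
    "C                   GOTO      ERREXIT",
    "C                   ENDIF",
    "C     ERREXIT       TAG",
]
print(flag_legacy_constructs(sample))  # {'GOTO': [2], 'TAG': [4]}
```

Run over an entire source library, a report like this becomes the raw material for the problem list described above: each flagged line is a candidate for cleanup or removal.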
The initial wonder of seeing a system visualized quickly gives way to surprise and unease as the scope of these problems is exposed. This dismay, however, quickly translates into a proactive action plan to fix or remove problems from mission-critical parts of the existing application portfolio.
The ability to understand the scope of general or specific maintenance tasks and modernization projects, and to plan and cost them accurately, can make or break an IT manager’s career. The platform itself often ends up taking the blame for seat-of-the-pants planning: “Oh, it’s that legacy platform again! That’s why we never get any new enhancements done on time!” Anyone familiar with developing in RPG or SYNON on IBM i will argue back that it is probably the most productive kit around for business application development.
Some IBM i shops use basic cross-referencing and some form of Source Change Management (SCM) to help manage development, and there is a widespread misconception that SCM can also control the inventory of software assets. It cannot. SCM controls specific assets, for a specific purpose, at a given time; it provides no meaningful management data on the entire system either before or after a change or series of changes.
At the opposite end of the spectrum there is IBM’s Rational Team Concert (RTC), which offers an excellent portfolio of features and modules that help in managing large, complex development projects with many developers and stakeholders. RTC is a terrific solution when an IT shop is engaged in developing a new code base, with multiple teams across multiple locations.
However, IBM i development and modernization efforts don’t fit that model. In these scenarios, the focus is on small teams responsible for maintaining or modernizing very large existing code bases. The use of RTC in this situation may actually discourage IBM i developers. In addition, the support for deep code analysis of RPG/SYNON IBM i applications within RTC is very limited.
IT managers are increasingly turning to code analysis to improve visibility. Tools such as X-Analysis are instrumental in quantifying and managing the size and quality of a code base, assessing the risk of maintaining and modernizing legacy code, and improving development quality. By abstracting the code base one level above its syntax and presenting it through interactive diagrams and pseudocode in place of RPG, these tools expose non-IBM i developers and stakeholders to the proven designs and value of IBM i legacy applications. Most analysis tools integrate with SCM tools so that key decisions resulting from the analysis are implemented in a structured and methodical manner.
The shrinking pool of available RPG resources, rising maintenance costs, ever-growing code bases, and widening diversity in technology and skills are creating a tipping point for a more scientific and measurable approach to IBM i development and modernization. Whether the organization plans to modernize or simply maintain the IBM i systems running the business today, improved understanding of the code base through structured code analysis will result in improved business understanding and a clearer roadmap for the future. For more information about solutions for documenting, recovering the design of, reengineering and rebuilding RPG, CA 2E and COBOL legacy applications on the IBM i, view our solutions, or e-mail us: email@example.com