
October 1999

Reuse 101

How to implement (and get management buy-in for) a “simple” practice that’s often surprisingly slippery
by James Bean

Two of the biggest challenges of the IT world are improving time to market and reducing costs. No earth-shattering news there. But the rapid evolution of e-commerce has dramatically increased these challenges—users want to see improvements immediately, and management demands an ever-increasing return on investment.

While there have been tremendous advances in technologies and tools, no single, simple answer to the demand for faster and cheaper application development has emerged. There is, however, one practice with the potential to address both problems: reuse.

The hype isn’t quite as pronounced as it was several years ago, but reuse still has the same potential. The decline in attention is most likely due to three basic problems:

  • Reuse is an easily misunderstood concept.
  • Identifying what can be reused is a confusing process.
  • Implementing reuse is seldom simple or easy to understand.

The lack of a solid definition for reuse has prevented it from being widely accepted and implemented. Many technology executives, managers and practitioners insist that reuse is simply using a technology they have developed (or acquired) more than once.

But reuse is a bit more complex than that. It’s actually defined by two different activities, and most people forget the first part: reuse engineering, the act of transforming technology assets to prepare them for reuse.

A more common perspective is that all technology assets (including legacy artifacts, meaning non-architected assets) are immediately reusable in their native state. But with the exception of utilities, traditional development is driven from a line-of-business requirement, a specific application need, or a singular implementation solution. So while these assets might be reusable, it’s more likely that they cannot be reused outside the parochial scope or domain in which they were initially developed.

Even with the best development methodologies and architectural techniques, a traditional approach to development can reduce the potential for reusing technology assets, or limit the types of reuse possible, unless additional re-engineering is done later. And with today’s emphasis on financial controls, cost containment, and return on investment, acquiring additional budget to re-engineer a technology asset will probably be a tough sell.

Scout out the field before diving in
Reuse engineering is a process where a technology asset is designed and developed following architectural principles, and with the intent of being reused in the future. Within the standards and constraints of the environment, the developer should engineer the asset to be flexible and extensible.

Developers should coordinate with business analysts to generalize the intended functionality and the way it satisfies the requirements set for it. The business analyst should consider the potential for reuse elsewhere within the business domain, as well as across multiple domains, since the developer will often not be aware of future requirements for an asset.

This is where basic object principles come to developers’ aid—even in the case of mainframe Cobol or other legacy development. The asset should be developed to enable and address encapsulation, inheritance, and polymorphism. The developer should insulate functionality from external modification and leverage the concepts of public interfaces, messages, APIs, and parameters.
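
To make this concrete, here is a minimal sketch in Python; the calculator names and rate table are hypothetical, invented purely to illustrate a public interface, encapsulated rules, inheritance, and polymorphism:

```python
from abc import ABC, abstractmethod


class RateCalculator(ABC):
    """Hypothetical reusable asset: a small, stable public interface."""

    @abstractmethod
    def calculate(self, amount: float, region: str) -> float:
        """Callers exchange only parameters; internals stay hidden."""


class StandardRateCalculator(RateCalculator):
    """One implementation; the rule table is encapsulated, not exposed."""

    _RATES = {"US": 0.07, "EU": 0.20}  # private business-rule data

    def calculate(self, amount: float, region: str) -> float:
        return amount * self._RATES.get(region, 0.0)


class PromotionalRateCalculator(StandardRateCalculator):
    """Inheritance and polymorphism: new behavior behind the same interface."""

    def calculate(self, amount: float, region: str) -> float:
        return super().calculate(amount, region) * 0.5  # promotional discount


def apply_charge(calculator: RateCalculator, amount: float, region: str) -> float:
    # Client code depends only on the public interface, so any
    # implementation, present or future, can be reused here unchanged.
    return amount + calculator.calculate(amount, region)
```

The same discipline applies to a mainframe routine: a fixed parameter list or copybook acts as the public interface, and the internal logic stays insulated from external modification.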

Next comes reuse deployment. Reuse deployment is what most people believe software reuse is—leveraging an existing technology asset or artifact more than once. But this, as we’ve already noted, is only half of the equation.

In some cases, a legacy artifact might be just the answer to a new business requirement. When this happens, great! But this kind of circumstantial or opportunistic reuse is a rare event.

All is not lost, however. A few techniques can help you leverage less perfectly formed legacy assets and artifacts.

Implementing reuse
The first step is to inventory your collection of assets and artifacts. If you have implemented a rigorous repository process, this should require a tolerable amount of effort. If you are not currently leveraging a repository, you should either look into acquiring such technology, or consider an internally developed asset database as an alternative. As a reuse infrastructure component, a repository is only part of the picture, and you will also need to develop processes around its maintenance and use.

Assuming a reuse repository is in place, my recommendation based on experience is to define a set of classifications that segment and describe a line of business, a set of functions and rules, or a collection of data objects. This process is not unlike traditional domain analysis. Define the boundaries of your domains and begin the collection, categorization, and assignment of assets.
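
As a rough sketch of what an internally developed asset database might capture, assuming illustrative field names rather than any standard repository schema:

```python
from dataclasses import dataclass, field


@dataclass
class AssetRecord:
    """Illustrative catalog entry for one technology asset."""
    asset_id: str
    name: str
    description: str
    domains: list = field(default_factory=list)       # lines of business served
    functions: list = field(default_factory=list)     # functions and rules implemented
    data_objects: list = field(default_factory=list)  # data objects touched
    asset_type: str = "legacy artifact"               # or "architected asset", "utility"


repository: dict = {}


def register(record: AssetRecord) -> None:
    """Collect, categorize, and assign an asset to its domains."""
    repository[record.asset_id] = record


register(AssetRecord(
    asset_id="AR-0001",
    name="CUSTADDR",
    description="Customer address validation routine",
    domains=["customer management"],
    functions=["address validation"],
    data_objects=["customer", "address"],
))
```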

Next, assess the reuse value for each of the categorized assets. You will need to consider different criteria depending on each reuse environment, but here are a few simple suggestions stated as questions:

  • Does the asset resolve a set of well-defined and scoped functions?
  • Does the asset utilize a simple interface as the method of exchange?
  • Does the asset include robust functionality driven by parameters or other public variables?
  • Does the asset enforce well-defined and accepted business rules and constraints?
  • Does the functionality contained within the asset apply exclusively to a single defined domain or problem set, or to several?
  • Does the asset experience frequent modification?

Evaluate each reuse candidate by these questions using a range of scores to rate how well the asset meets the criteria—for example 1=Poor, 2=Average, 3=Good. Each of the assets that score high by your criteria should be researched in more detail.
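
A simple scoring pass over these criteria might look like the sketch below; the criterion keys and the cutoff value are assumptions to be tuned for your own environment:

```python
# Criterion keys are shorthand for the questions above; 1=Poor, 2=Average, 3=Good.
CRITERIA = [
    "well_defined_functions",
    "simple_interface",
    "parameter_driven",
    "enforces_business_rules",
    "multi_domain_applicability",
    "change_stability",  # an asset that is modified frequently scores lower here
]


def reuse_score(ratings: dict) -> float:
    """Average the 1-3 ratings across all criteria (missing answers count as Poor)."""
    return sum(ratings.get(c, 1) for c in CRITERIA) / len(CRITERIA)


def shortlist(candidates: dict, cutoff: float = 2.5) -> list:
    """Return the asset IDs that score well enough to warrant detailed research."""
    return [asset_id for asset_id, ratings in candidates.items()
            if reuse_score(ratings) >= cutoff]


candidates = {
    "AR-0001": {"well_defined_functions": 3, "simple_interface": 3,
                "parameter_driven": 2, "enforces_business_rules": 3,
                "multi_domain_applicability": 2, "change_stability": 3},
}
print(shortlist(candidates))  # ['AR-0001'] scores about 2.7
```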

For those assets determined to be acceptable reuse candidates, identify the functional characteristics supported by the asset, and the parameters used for exchange of requests and responses. In addition, you’ll need to document any environment characteristics such as whether the asset is available only in batch, can be accessed from a heterogeneous network, and so on. Also, determine what domains the asset supports, and any business rules or sensitivity issues related to its use.

Publish this information to the repository, and make it available as search criteria. You should then be able to perform affinity analysis of application requirements with available assets, and search for and select matching candidate assets to exploit in order to meet the needs of the development project.
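
The affinity analysis can start out as a straightforward match of project requirements against the published characteristics. The following sketch assumes a dictionary-backed repository and illustrative characteristic names:

```python
# Characteristics published for each asset; field names are illustrative only.
repository = {
    "AR-0001": {
        "functions": ["address validation"],
        "domains": ["customer management"],
        "batch_only": False,
        "network_accessible": True,
        "business_rules": ["postal address format"],
    },
}


def find_candidates(required: dict) -> list:
    """Return the assets whose published characteristics satisfy every requirement."""
    matches = []
    for asset_id, published in repository.items():
        satisfied = True
        for key, value in required.items():
            have = published.get(key)
            if isinstance(have, list):
                satisfied = satisfied and value in have
            else:
                satisfied = satisfied and have == value
        if satisfied:
            matches.append(asset_id)
    return matches


print(find_candidates({"domains": "customer management", "batch_only": False}))
# -> ['AR-0001']
```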

Of course, many remaining artifacts, even those that represent a significant investment in their original development, are not readily reusable by these criteria. They still have some value, but how do you leverage it?

 

Figure 1. “Wrappering” Legacy Code. When wrappering code artifacts for reuse, the wrapper should provide a public interface for the artifact while hiding its proprietary functionality and any architectural flaws.

Wrap that rascal
The answer is “wrapping,” a common reuse technique for legacy artifacts. Wrapping is the process of encapsulating the artifact in a code shell that supports a public interface, yet hides its proprietary functionality, and perhaps any architectural flaws (see Figure 1). In the world of big iron, there are a number of methods for wrapping—and each yields varying degrees of success.

A common example of legacy artifact wrapping is to develop a fairly generic application that serves as an API to the legacy application. The developer of the API should structure it to resolve service requests from a variety of sources, and perhaps in multiple formats. Assuming the artifact has some measure of application-to-application communication, this shouldn’t be impossible.

The interface would manage the exchange of requests and responses, and also perform very limited (and localized) state management specific to the artifact at the object level (to address local environment persistence, lock escalation, recovery, and so on). A messaging infrastructure to manage the heterogeneous environment and partitioned resources could then be leveraged as the communication mechanism. This solution is not simple, nor is it inexpensive, but in some cases it gets the job done.

If you cringe when I mention state management, don’t read too much into it. The state management segment of wrapper development is localized, and suggested purely to mitigate the risk of impact from poorly behaved applications and requests. As an example, if the underlying artifact accesses a relational DBMS, requests with locks that produce extremely large result sets can cause lock escalation. This will in turn affect performance for local application access of those same data structures.
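
A minimal sketch of the wrapper idea follows. The legacy routine, the JSON message format, and the field names are all hypothetical stand-ins, and the localized state management discussed above is omitted for brevity:

```python
import json


def legacy_custaddr(native_request: str) -> str:
    """Stand-in for the legacy artifact's proprietary, fixed-format interface."""
    # In practice this would be an existing routine reached over whatever
    # application-to-application channel the artifact already supports.
    return "VALID" if native_request.strip() else "INVALID"


class LegacyAddressWrapper:
    """Public interface that hides the artifact's native call format and quirks."""

    def validate(self, request_message: str) -> dict:
        """Accept a request in a public format (JSON here) and return a response."""
        payload = json.loads(request_message)
        address = payload.get("address", "")
        native_request = f"{address:<80}"  # translate to the native fixed-width format
        native_response = legacy_custaddr(native_request)
        return {"asset": "CUSTADDR", "status": native_response}


wrapper = LegacyAddressWrapper()
print(wrapper.validate('{"address": "100 Main St, Anytown"}'))
# -> {'asset': 'CUSTADDR', 'status': 'VALID'}
```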

To be honest, I hesitate to recommend the wrapper approach if the developer hasn’t taken time to really research the artifact, and if management is uneducated in the value of reuse. If the artifact cannot meet the minimum criteria for reuse in its native state, you’ll probably encounter an added cost for wrapper development, risk of application and operational impact, and far less value from future reuse. For the most part, you would be delaying the inevitable—re-engineering the asset to be truly reusable.

Regardless of whether you intend to reuse legacy artifacts or you are reusing formally architected assets, consider the costs and benefits. In a best-case scenario (such as reusing an architected asset), there are costs of search, identification, validation, testing and implementation. In the more realistic scenario of reusing a legacy artifact, you will encounter greater research and analysis time, wrapper development costs, infrastructure costs, and perhaps significantly more testing.

Evaluate your ROI
If the picture looks bleak, you are probably asking why you should even bother. With a rigorous reuse methodology, reuse infrastructure, and the appropriate staffing in place, you can recover the costs associated with reuse and avoid more traditional development costs. The real trick is metrics and patience.

Treat reuse like any other financial investment. The desired goal is a measurable return on your investment and preferably a return that exceeds the theoretical return from a different investment. In the case of reuse, the ROI (return on investment) will most likely fall into three basic categories:
  • Development cost savings, experienced as cost avoidance from traditional development.
  • Reduced delivery time, resulting from less new development and the use of assembly techniques.
  • Improved quality. The reusable asset is rigorously unit tested when it is developed. Successive reuse instances should require reduced unit testing, but you’ll still need to perform integration, product, and system testing.

You should also recognize that there is a risk associated with most investments (as your stockbroker will no doubt inform you). This same principle of potential risk applies to reuse. Similar to a savings account, you can invest a small amount and receive small returns on your investment over time. You can also invest greater amounts and see greater returns—perhaps in a shorter period of time, thanks to compound interest. As you invest in higher-return financial instruments, the potential risk will also sometimes increase.

Although your reuse methodology should be highly structured, it should also support the ability to condense certain tasks, activities, and deliverables. This may produce less of a desired result and higher potential risk, but the effort required is also reduced. As the results of the reuse effort become more obvious and accepted, you can then expand to include the more rigorous tasks and activities. The effort and cost will be greater, but the resulting benefit over time should also be greater.

Time plays a key part in the ROI equation. Each instance of reuse carries with it associated costs and risks. As the number of reuse opportunities for an asset increases, these costs and associated risks decrease. However, it is not a “straight line” equation. There is a minimum and a maximum threshold for cost and recovery. These thresholds will vary depending upon a number of factors, so do your homework before quantifying the return from your reuse proposition.

Use metrics for buy-in
Proving the value of reuse can be quite a challenge. As you can see, the concept of reuse engineering introduces initial development costs that might not be recoverable until the asset has been reused multiple times. I have successfully used a simple, common-sense metric to identify and plan for reuse benefits. It isn’t so much a mathematically accurate algorithm as an effective generalization that describes a complex concept.

 

Figure 2. Long-term savings of Reuse. The cost incurred properly constructing components for reuse doesn’t yield savings for overall development in comparison to traditional development methods until the results have been reused at least three times. After that, every reuse yields increasing incremental savings.

The initial investment in reuse can exceed the costs associated with the traditional development paradigm (see Figure 2). As reuse development efforts progress, the costs associated with traditional development tend to decrease, while reuse and assembly related costs increase. Still, the more you reuse, the more you’ll see an overall reduction in development costs.

Initial investment in reuse will generally include additional development costs to identify reuse opportunities, search reusable assets, engineer reusable solutions, and deploy within a framework that supports reuse. If you faithfully adhere to a reuse methodology, development solutions that have been truly engineered for reuse will recover the associated costs by the third reuse instance of those same objects. Beyond the third reuse, you’ll see your savings expressed as cost avoidance (rather than straight development savings).
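
The break-even arithmetic can be sketched with invented figures. The costs below are purely illustrative assumptions, but they show how the totals cross at the third use:

```python
# Illustrative figures only: engineering for reuse costs more up front,
# but each subsequent reuse avoids most of a traditional build.
traditional_build = 100_000   # cost to build the function conventionally, every time
reuse_engineering = 180_000   # first build, engineered and cataloged for reuse
reuse_deployment = 25_000     # search, validation, and testing for each later reuse

for uses in range(1, 6):
    traditional_total = traditional_build * uses
    reuse_total = reuse_engineering + reuse_deployment * (uses - 1)
    print(f"{uses} use(s): traditional {traditional_total:>8,}  "
          f"reuse {reuse_total:>8,}  avoidance {traditional_total - reuse_total:>8,}")

# With these assumptions reuse pulls ahead at the third use, and every
# use after that shows up as growing cost avoidance rather than new spending.
```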

However, with the continuous drive by the business community for immediate return on technology investments, this is often the toughest sell. Also, rather than becoming a fundamental development practice, reuse is tagged as an “infrastructure enhancement.” Many business and technology executives either can’t envision the benefits associated with rigorous reuse practices, or do not have the luxury of marketing these concepts to business management. The most you can do is educate on the topic of reuse, and not fold at the first hint of resistance from upper management.

I have seen several instances where the development manager caved in and agreed that a short-cut approach to reuse would result in tremendous savings—and promised that those savings would be gained immediately.

Let common sense guide you in these situations. If you continue to take short cuts and avoid basic architectural principles, you will most likely meet the short term expectations for time-to-market. However, the deliverables from the effort could incur continued enhancement and re-engineering costs, will not be readily extensible or flexible, and may even be of suspect quality. Recent advances in spiral methodologies and some application generation tools have reduced this risk, but I have not seen it disappear.

So when you have to pitch a reuse methodology to upper management, don’t simply tell the suits what you think they want to hear. Assemble some baseline metrics: traditional development costs, time to market, quality, frequency of errors, and so on. Validate your baseline metrics with technology representatives and with your financial group. Next, get creative and think outside the box. Don’t rely on a single set of metrics to prove or disprove the value of reuse.

The metric of cost savings is really a tricky one. The best approach is to measure to your baseline and consider several types of metrics. After you have invested in reuse, at some point you will need to prove that it works. It is during these moments that rigorous baseline metrics can be a career saver.

James Bean is CEO of the Relational Logistics Group. He is the author of The Sybase Client/Server Explorer (The Relational Logistics Group; ISBN 1576100456), has written numerous magazine articles, and is a frequently requested speaker for technology conferences. Mr. Bean can be reached at rdbms@aol.com.


© 1999 FAWCETTE TECHNICAL PUBLICATIONS, all rights reserved.



