Many of you asked for the complete byline article that appeared in SOA World Magazine, so I am presenting it again in my blog.
The saying “you can’t manage what you don’t measure” has never been so applicable to software engineering as it currently is to SOA development. As organizations are struggling with the day-to-day development implications of SOA, business leadership is beginning to realize that success does not come without effective organizational measures. Cross-organizational measures are needed to bring transparency for the operational stakeholders affected by SOA’s promise of agility and cost reduction. But what should you measure, and are all measures equally important?
The answer to these questions is rooted in understanding the nature of the first part of the opening quote – you can’t manage. We manage because we want to control and we need to control because we want to reduce risk in order to have outcome certainty. As such, every organization that implements SOA and every SOA implementation will need to identify and address the specific risks associated with the project. For example, if agility is an important driver for your SOA implementation, what does that mean to your organization and what specific metrics can be used to measure it?
There are several characteristics of measures and metrics, in general, that should be considered when identifying those specific to an SOA implementation. Are the measures predictive or reflective – that is, do they tell us about the future or the past? Are they theoretical or practical – theory tends to shed light on emerging areas, while practical measures tend to be more useful in narrowly defined situations? Are the measures internal or external – do they look internally into the services or externally toward their application? Are they objective or subjective – objective measures tend to be less dependent on personal perception. Are they quantitative or qualitative – quantitative measures lend themselves to classification, while qualitative measures tend to be more descriptive. Who is the audience for the measure – is it meant for corporate, management, project, or developmental use? As you can see, there are lots of options, and it is unlikely that any two organizations will have the exact same measurement system.
With that stated, there are a few measurement areas that should be looked into by any group and could be used as a starting point. I’ve broken those areas down into four core categories: Corporate Metrics, Management Metrics, Project Metrics and Service Development Metrics. Across those categories are 10 measures that seem to get the most attention and directly relate to successful SOA implementations:
Corporate Metrics:
1. Revenue Per Service
One of the most important measures for any company is revenue per employee. This number captures both the value an organization delivers, and the productivity achieved based on the value created by the employee base. Unfortunately, today there is no such universally accepted measure for services. Why? Mostly because SOA, up until now, has been dealt with as an engineering activity by technical people, which is normally tracked as cost in R&D. But this needs to change if SOA is going to be able to cross the business chasm.
For the same reasons that measuring revenue per employee is important, having a historical perspective on revenue per service will enable organizations to quantitatively assess both the value of the service and the productivity of the service-oriented architecture through which it is delivered.
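To make the idea concrete, here is a minimal sketch of how revenue per service could be tracked year over year; the function name and the figures are hypothetical illustrations, not from any standard:

```python
# Hypothetical sketch: revenue per service, tracked per year so the
# historical trend is visible. Figures below are illustrative only.

def revenue_per_service(total_service_revenue, service_count):
    """Average revenue attributed to each service in the portfolio."""
    if service_count == 0:
        raise ValueError("portfolio has no services")
    return total_service_revenue / service_count

# Illustrative two-year history: year -> (total service revenue, service count)
history = {2007: (40_000_000, 25), 2008: (55_000_000, 30)}
trend = {year: revenue_per_service(rev, n) for year, (rev, n) in history.items()}
```

A rising trend suggests the service portfolio is becoming more productive; a falling one is an early warning of uncontrolled service proliferation.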
2. Service Vitality Index
The Service Vitality Index is the amount of revenue from new services over the last 12 months as a proportion of total service revenue. You can’t simply throw more money at service-orientation and expect a proportional return on the investment. But the problem is there are no standardized metrics for measuring the investment value of SOA. And the three most commonly used means for measuring development cost and impact – 1) percent of sales, 2) R&D headcount, and 3) number of patents acquired – are flawed. None of these measures is owned by R&D. The percent of sales and headcount are cost-driven, and the number of patents does not give you an indication of future value.
A better way to track service impact on the business is to track the revenue and return on investment directly related to innovation. For example, to do this, Symphony Services combines a Service Vitality Index with Service-oriented ROI measurements.
The Service-oriented Vitality Index, or SoVI, is the ratio of revenue generated from a service (or services) over the last 12 months as compared with all other existing SOA revenue. This is a revenue view of service-orientation (as is the case with revenue per service) versus a spend view. For example, assume a company has four SOA product lines that have a combined return of $100 million in total revenue. Three of these four products have been earning revenue for more than a year. And the fourth SOA product was released just under a year ago and contributed $5 million to annual revenues. This company’s SoVI would be 5/100 or 5 percent. Healthy organizations should strive for a SoVI of 10-20 percent, which, compounded over time, will result in a 100 percent turnover in revenue from new products every five years.
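The worked example above reduces to a one-line calculation; this is a sketch, and the function name is my own:

```python
def sovi(new_service_revenue, total_service_revenue):
    """Service-oriented Vitality Index: the share of total service revenue
    generated by services released within the last 12 months."""
    return new_service_revenue / total_service_revenue

# The example above: $5M from the newest product line out of $100M total
ratio = sovi(5_000_000, 100_000_000)  # 0.05, i.e., 5 percent
```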
The second and supporting metric is called the Service-oriented ROI. SoROI accounts for the money invested in service-oriented development. The SoROI is the cumulative before tax profits over “N” number of years from SOA-driven products divided by the cumulative product expenditures for that same period. This can be further enhanced by discounting both revenue and cost as a function of prevailing and forecasted interest rates. The SoROI allows you to compare overall SOA values independent of their size.
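As a sketch of the arithmetic, with a hypothetical function name and a simple flat annual rate standing in for the “prevailing and forecasted interest rates” mentioned above:

```python
def soroi(profits_by_year, expenditures_by_year, discount_rate=0.0):
    """Service-oriented ROI: cumulative before-tax profits over N years
    divided by cumulative expenditures for the same period, with both
    streams optionally discounted at a flat annual rate."""
    def present_value(cashflows):
        return sum(c / (1 + discount_rate) ** t for t, c in enumerate(cashflows))
    return present_value(profits_by_year) / present_value(expenditures_by_year)
```

Because it is a ratio, SoROI lets you compare SOA programs of different sizes on an equal footing, as noted above.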
Combining both the Service-oriented Vitality Index and the SoROI provides a much clearer picture of a company’s SOA health. Using these direct measures of service-orientation, companies can foresee the revenue implications of unfocused R&D years earlier than with traditional measures. To foresee is to be forewarned, a metaphor realized through the Service-oriented Vitality Index.
Management Metrics:
3. Number of New Services Generated and Used as a Percentage of Total Services
Organizations with non-existent or poor SOA governance often see out-of-control service proliferation (a high ratio of new services as a percentage of total services). Uncontrolled development teams often look to create new service after new service, without thinking about re-choreographing existing implementations in order to achieve the desired business value. Not only does this drive the total cost of service development up, but it also reduces the average revenue per service, indicating poor service development productivity.
4. Mean Time To Service Development (MTTSD)
If one of the benefits of SOA is business agility, then how do you measure that? MTTSD provides a statistical measure, along with a range of certainty, of the average time to stand up a service. Organizations new to SOA development (those with a low SOA maturity level) can see development times 10 times longer than those with managed or optimized SOA organizations (high SOA maturity level). Reducing MTTSD is a key benefit of SOA governance; that is, those activities related to the control of services in a SOA environment.
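One simple way to compute MTTSD with a range of certainty is a sample mean with a normal-approximation confidence interval – a sketch that assumes development times vary roughly normally across projects:

```python
import statistics

def mttsd(dev_times_days, z=1.96):
    """Mean Time To Service Development, with an approximate 95% confidence
    range (z = 1.96), assuming roughly normal variation across projects."""
    mean = statistics.mean(dev_times_days)
    margin = 0.0
    if len(dev_times_days) > 1:
        margin = z * statistics.stdev(dev_times_days) / len(dev_times_days) ** 0.5
    return mean, (mean - margin, mean + margin)
```

The same calculation applies unchanged to MTTSC below, fed with change times instead of development times.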
5. Mean Time To Service Change (MTTSC)
Just as with MTTSD, it is equally important to understand how long it takes to change a service. Business agility is measured both by creation and change. Services that are created quickly often lack the commercial rigor that stands the test of time. MTTSC can point out those services that were poorly created and are costing the organization in terms of effort and lost opportunity.
6. Service Availability
Service availability is the percentage of time a service is usable by the end user. It is a measure of the total delivery system, from having defect-free services to operable data centers. Low availability (less than 99.9 percent) needs to be dealt with immediately since it impacts customer satisfaction. Triaging service orchestration and choreography, service discovery through registries, and service load balancing and failover are three activities performed by organizations when service availability is unacceptable.
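The availability figure itself is a straightforward calculation; this sketch uses hypothetical function names, with the 99.9 percent threshold mirroring the figure above:

```python
def availability(uptime_hours, total_hours):
    """Percentage of time a service is usable by the end user."""
    return 100.0 * uptime_hours / total_hours

def meets_sla(uptime_hours, total_hours, threshold=99.9):
    """True when measured availability meets the organization's threshold."""
    return availability(uptime_hours, total_hours) >= threshold
```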
Project Metrics:
7. Service Reuse
Development organizations have a tendency to rebuild what has already been built, a continuance of the “not invented here” syndrome. If business agility is based on the ability to standup services quickly, then creating services quickly is based on reusing what you have. As part of an overall SOA governance process, measuring the degree to which you reuse services is critical for keeping development costs low and business agility high.
8. Cost of Not Using or Stopping a Service
One of the least understood business costs in a SOA environment is the cost of not using or stopping an existing service. Not only are there obvious lost opportunity costs that can be measured in terms of revenue, but development costs as well. The end value delivered to users is often composed of many choreographed services, each delivering unique value. A well designed SOA implementation has low shutdown or switching costs.
Service Development Metrics:
9. Service Complexity, As Measured Through Cyclomatic Complexity
The cyclomatic complexity of a service is the single measure that will determine if your service is testable and maintainable. Studies have shown that services with cyclomatic complexity greater than 50 are not testable and often result in 10-20 percent more maintenance effort than those services whose cyclomatic complexity is less than 10.
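For reference, McCabe’s cyclomatic complexity is defined over the service implementation’s control-flow graph; in practice a static-analysis tool extracts the graph from source, but the formula itself is simple:

```python
def cyclomatic_complexity(edges, nodes, connected_components=1):
    """McCabe's cyclomatic complexity, M = E - N + 2P, for a control-flow
    graph with E edges, N nodes, and P connected components."""
    return edges - nodes + 2 * connected_components

# Example: a control-flow graph with 9 edges and 8 nodes in one component
m = cyclomatic_complexity(9, 8)  # 3
```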
10. Service Quality Assurance
Service Quality Assurance is based on systems-level tests that examine the behavior of service-oriented use cases across possible choreographies [derived through service code coverage]. With code coverage, we determine how much of the code was executed during testing (which is still important for developers); here, we address how much of the service use case was executed, developing systems-level tests that exercise all service use cases across all possible choreographies. With code coverage, the cyclomatic complexity number tells us how many test cases are needed (e.g., a CC of 10 implies a minimum of 10 test cases). This is not necessarily the case with service coverage, because of the emergent behavior implications. Here we need to apply specific design-of-experiments processes and test against statistical outcomes (e.g., we are 95 percent confident that the system behaves within the specified requirements parameters).
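One way to put arithmetic behind a “95 percent confident” statement is a lower confidence bound on the pass rate observed across sampled choreography runs. This sketch uses the Wilson score interval, and the framing of each choreography run as a pass/fail trial is my simplification of the design-of-experiments process described above:

```python
import math

def pass_rate_lower_bound(passed, trials, z=1.96):
    """Lower bound of the Wilson score interval: with ~95% confidence
    (z = 1.96), the true pass rate is at least this value."""
    if trials == 0:
        return 0.0
    p = passed / trials
    denom = 1 + z ** 2 / trials
    centre = p + z ** 2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2))
    return (centre - margin) / denom
```

For example, if 95 of 100 sampled choreographies behave within the requirements, the true pass rate is at least roughly 88 percent with 95 percent confidence – useful when exhaustively executing every possible choreography is infeasible.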
Conclusion:
Several of the industry’s most well-known SOA pundits, like David Linthicum and Joe McKendrick, have debated SOA metrics – which ones work, which ones don’t, and why SOA needs to be measured in the first place. That debate will continue as SOA adoption matures and executive management grapples for tangible evidence that SOA is “working.” While the 10 measures outlined here are perhaps not the traditional measures companies think about when discussing SOA implementations, they do provide some level of transparency into the operational issues that impact SOA agility, and therefore serve as effective starting points for translating SOA into a successful – and measurable – endeavor.