
Archive for the ‘SOA’ Category


Many of you asked for the complete byline article that appeared in SOA World Magazine, so I am presenting it again in my blog.

The saying “you can’t manage what you don’t measure” has never been more applicable to software engineering than it currently is to SOA development. As organizations struggle with the day-to-day development implications of SOA, business leadership is beginning to realize that success does not come without effective organizational measures. Cross-organizational measures are needed to bring transparency to those operational beneficiaries impacted by SOA’s promise of agility and cost reduction. But what should you measure, and are all measures equally important?

The answer to these questions is rooted in understanding the nature of the first part of the opening quote – you can’t manage. We manage because we want to control and we need to control because we want to reduce risk in order to have outcome certainty. As such, every organization that implements SOA and every SOA implementation will need to identify and address the specific risks associated with the project. For example, if agility is an important driver for your SOA implementation, what does that mean to your organization and what specific metrics can be used to measure it?

There are several characteristics of measures and metrics, in general, that should be considered when identifying those specific to an SOA implementation. Are the measures predictive or reflective – that is, do they tell us about the future or the past? Are they theoretical or practical – theory tends to shed light on emerging areas, while practical measures tend to be more useful in narrowly defined situations? Are the measures internal or external – do they look internally into the services or externally toward their application? Are they objective or subjective – objective measures tend to be less dependent on personal perception. Are they quantitative or qualitative – quantitative measures lend themselves to classification, whereas qualitative measures tend to be more descriptive. Who is the audience for the measure – is it meant for corporate, management, project, or developmental use? As you can see, there are many options, and it is unlikely that any two organizations will have the exact same measurement system.

With that said, there are a few measurement areas that any group should look into and that can serve as a starting point. I’ve broken those areas down into four core categories: Corporate Metrics, Management Metrics, Project Metrics, and Service Development Metrics. Across those categories are 10 measures that seem to get the most attention and directly relate to successful SOA implementations:

Corporate Metrics:


1. Revenue Per Service

One of the most important measures for any company is revenue per employee. This number captures both the value an organization delivers, and the productivity achieved based on the value created by the employee base. Unfortunately, today there is no such universally accepted measure for services. Why? Mostly because SOA, up until now, has been dealt with as an engineering activity by technical people, which is normally tracked as cost in R&D. But this needs to change if SOA is going to be able to cross the business chasm.

For the same reasons that measuring revenue per employee is important, having a historical perspective on revenue per service will enable organizations to quantitatively assess both the value of the service and the productivity of the service-oriented architecture through which it is delivered.

2. Service Vitality Index

The Service Vitality Index is the amount of revenue from new services over the last 12 months as a proportion of total service revenue.  You can’t simply throw more money at service-orientation and expect a proportional return on the investment. But the problem is there are no standardized metrics for measuring the investment value of SOA. And the three most commonly used means for measuring development cost and impact: 1) percent of sales, 2) R&D headcount, and 3) number of patents acquired, are flawed. None of these measures are owned by R&D. The percent of sales and headcount are cost-driven, and the number of patents does not give you an indication of future value.

A better way to track service impact on the business is to track the revenue and return on investment directly related to innovation. For example, to do this, Symphony Services combines a Service Vitality Index with Service-oriented ROI measurements.

The Service-oriented Vitality Index, or SoVI, is the ratio of revenue generated from a service (or services) released over the last 12 months to total SOA revenue. This is a revenue view of service-orientation (as is the case with revenue per service) versus a spend view. For example, assume a company has four SOA product lines with a combined $100 million in total revenue. Three of these four products have been earning revenue for more than a year, while the fourth was released just under a year ago and contributed $5 million to annual revenues. This company’s SoVI would be 5/100, or 5 percent. Healthy organizations should strive for a SoVI of 10-20 percent, which, compounded over time, results in a 100 percent turnover in revenue from new products every five years.
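As a sketch of the arithmetic (the function and variable names here are my own, not an established API), the SoVI is a simple ratio:

```python
def sovi(new_service_revenue, total_service_revenue):
    """Service-oriented Vitality Index: the share of total SOA revenue
    contributed by services released in the last 12 months."""
    return new_service_revenue / total_service_revenue

# The example from the text: $5M from the product line released under
# a year ago, out of $100M in total SOA revenue.
print(f"SoVI = {sovi(5_000_000, 100_000_000):.0%}")  # prints "SoVI = 5%"
```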

The second and supporting metric is called the Service-oriented ROI. SoROI accounts for the money invested in service-oriented development. The SoROI is the cumulative before-tax profits over N years from SOA-driven products divided by the cumulative product expenditures for that same period. This can be further enhanced by discounting both revenue and cost as a function of prevailing and forecasted interest rates. The SoROI allows you to compare overall SOA values independent of their size.
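A minimal sketch of the SoROI calculation, assuming end-of-year cash flows and a single fixed discount rate (the function name and signature are illustrative, not from any standard):

```python
def soroi(annual_profits, annual_costs, rate=0.0):
    """Service-oriented ROI: cumulative before-tax profits from SOA-driven
    products over N years divided by cumulative product expenditures for
    the same period, optionally discounted at a fixed annual rate."""
    profits = sum(p / (1 + rate) ** t for t, p in enumerate(annual_profits, 1))
    costs = sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs, 1))
    return profits / costs

# Two years of $10M profit against $5M of spend per year, undiscounted:
print(soroi([10, 10], [5, 5]))  # prints 2.0
```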

Combining both the Service-oriented Vitality Index and the SoROI provides a much clearer picture of a company’s SOA health. Using these direct measures of service-orientation, companies can foresee the revenue implications of unfocused R&D years earlier than with traditional measures. To foresee is to be forewarned, a metaphor realized through the Service-oriented Vitality Index.

Management Metrics:


3. Number of New Services Generated and Used as a Percentage of Total Services

Organizations with non-existent or poor SOA governance often see out-of-control service proliferation (a high ratio of new services as a percentage of total services). Uncontrolled development teams often look to create new service after new service, without thinking about re-choreographing existing implementations to achieve the desired business value. Not only does this drive the total cost of service development up, but it also reduces the average revenue per service, indicating poor service development productivity.

4. Mean Time To Service Development (MTTSD)

If one of the benefits of SOA is business agility, then how do you measure it? MTTSD provides a statistical measure, along with a range of certainty, of the average time to stand up a service. Organizations new to SOA development (those with a low SOA maturity level) can see development times 10 times longer than those with managed or optimized SOA organizations (a high SOA maturity level). Reducing MTTSD is a key benefit of SOA governance; that is, those activities related to the control of services in a SOA environment.
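One way to compute MTTSD with a range of certainty is a normal-approximation confidence interval around the mean – a sketch that assumes you have enough completed services for the approximation to hold:

```python
import statistics

def mttsd(days_to_stand_up, z=1.96):
    """Mean Time To Service Development, with an approximate 95%
    confidence range (z=1.96) around the mean."""
    mean = statistics.mean(days_to_stand_up)
    sem = statistics.stdev(days_to_stand_up) / len(days_to_stand_up) ** 0.5
    return mean, (mean - z * sem, mean + z * sem)

# Elapsed days to stand up the last five services delivered:
mean, (low, high) = mttsd([12, 18, 25, 9, 16])
print(f"MTTSD = {mean} days ({low:.1f} to {high:.1f})")
```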

5. Mean Time To Service Change (MTTSC)

Just as with MTTSD, it is equally important to understand how long it takes to change a service. Business agility is measured by both creation and change. Services that are created quickly often lack the commercial rigor that stands the test of time. MTTSC can point out those services that have been poorly created and are costing the organization in terms of effort and lost opportunity.

6. Service Availability

Service availability is the percentage of time a service is usable by the end user. It is a measure of the total delivery system, from having defect-free services to operable data centers. Low service availability (less than 99.9 percent) needs to be dealt with immediately, since it impacts customer satisfaction. Triaging service orchestration and choreography, service discovery through registries, and service load balancing and failover are three activities performed by organizations when service availability is unacceptable.
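The availability percentage itself is a straightforward ratio; a sketch (names are mine) that also shows what the 99.9 percent threshold allows in monthly downtime:

```python
def availability_pct(usable_minutes, total_minutes):
    """Service availability: percentage of time the service is usable."""
    return 100.0 * usable_minutes / total_minutes

# "Three nines" over a 30-day month leaves about 43 minutes of downtime:
month_minutes = 30 * 24 * 60
allowed_downtime = month_minutes * (1 - 0.999)
print(round(allowed_downtime, 1))  # prints 43.2
```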

Project Metrics:


7. Service Reuse

Development organizations have a tendency to rebuild what has already been built, a continuance of the “not invented here” syndrome. If business agility is based on the ability to stand up services quickly, then creating services quickly is based on reusing what you have. As part of an overall SOA governance process, measuring the degree to which you reuse services is critical for keeping development costs low and business agility high.

8. Cost of Not Using or Stopping a Service

One of the least understood business costs in a SOA environment is the cost of not using or stopping an existing service. Not only are there obvious lost opportunity costs that can be measured in terms of revenue, but development costs as well. The end value delivered to users is often composed of many choreographed services, each delivering unique value. A well-designed SOA implementation has low shutdown or switching costs.

Service Development Metrics:


9. Service Complexity, As Measured Through Cyclomatic Complexity

The cyclomatic complexity of a service is the single measure that will determine if your service is testable and maintainable. Studies have shown that services with cyclomatic complexity greater than 50 are not testable and often result in 10-20 percent more maintenance effort than those services whose cyclomatic complexity is less than 10.
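Cyclomatic complexity is essentially one plus the number of independent branch points in the implementation. A rough sketch for Python source follows – real tools count more node types, so treat this as illustrative only:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of branching nodes."""
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))

snippet = "if x > 0:\n    y = 1\nfor i in items:\n    y += i\n"
print(cyclomatic_complexity(snippet))  # prints 3
```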

10. Service Quality Assurance

Service Quality Assurance is based on systems-level tests that examine the behavior of service-oriented use cases across possible choreographies [derived through service code coverage]. As in the case of code coverage, we look to determine how much of the code was executed during the course of testing (which is still important for developers), but here we look to address how much of the service use case was executed. In this case, we look to develop systems-level tests that execute all service use cases across all possible choreographies. With code coverage, we know that the cyclomatic complexity number tells us how many test cases are needed (e.g., a CC of 10 implies a minimum of 10 test cases). However, this is not necessarily the case with service coverage, because of the emergent behavior implications. Here we need to apply specific design-of-experiments processes and test to statistical outcomes (e.g., we are 95 percent confident that the system behaves within the specified requirements parameters).

Conclusion:

Several of the industry’s most well-known SOA pundits, like David Linthicum and Joe McKendrick, have debated SOA metrics – which ones work, which ones don’t, and why SOA needs to be measured in the first place. That debate will continue as SOA adoption matures and executive management grasps for tangible evidence that SOA is “working.” While the 10 measures outlined here are perhaps not the traditional measures companies think about when discussing SOA implementations, they do provide some level of transparency into the operational issues that impact SOA agility, and therefore serve as effective starting points to translate SOA into a successful – and measurable – endeavor.

 


eBiz has asked another great question, “Is Event-Driven Architecture and Complex Event Processing an Affordable Option for Most Businesses?”

Looking at it as a cost is an incomplete way of looking at event and complex event processing (CEP). Event-oriented Architecture (EOA), of which CEP is a subset, is the missing third architecture in most enterprise solutions, the first two being Service-oriented Architecture (SOA) and Data-oriented Architecture (DOA). The statistics show that over 98% of enterprise-class solutions use some form of DOA in their data supply chain activities and between 30-40% use some variant of SOA. However, less than 5-10% are currently tapping into the benefits of what EOA can deliver.

EOA (events and CEP) offers answers and insights into knowledge, something SOA and DOA cannot. Think of the Wisdom Model: Data, Information, Knowledge, and Wisdom. At the lowest layer we have data. The universe is filled with it. For example, consider a university where you have data like students, books, buildings, colors, plants, etc. This is the realm of DOA. If you provide relevance to data, we call it information. Every Monday morning a university Provost is looking at a list of failing students and trying to figure out just what to do. This is the realm of SOA. When you study information, as in the case of the Provost, we call that knowledge. The Provost is asking questions like, “Why are they failing?” This is the realm of EOA (CEP). Lastly, in-depth reflection on knowledge provides us with wisdom. What is really important for the Provost is internalizing what it will take to turn failing students into successful students. This is the realm of EOA.

Once you map revenue potential for each of these layers in the wisdom model (Data, Information, Knowledge, and Wisdom) against each orientation (DOA, SOA, EOA), you soon come to the realization that EOA offers the highest growth rate for new revenue and margin. More on this later if you like.

So, a more business-oriented perspective would be to ask the question, “At what margin can we grow new revenue through EOA (event and complex-event) services?” This is a conversation that most CEOs and CFOs would gladly engage you in.

 


There are a lot of great questions coming out of eBiz, and the latest I’d like to address is, “Is Dirty Data Becoming a Showstopper for SOA?”

Dirty data is one of the many reasons why service-oriented architectures (SOA) are so powerful. Gartner studies over the last decade have demonstrated that dirty data “leads to significant costs, such as higher customer turnover, excessive expenses from customer contact processes like mail-outs and missed sales opportunities.” In this day and age, there can be no doubt that the ones and zeros sitting in your databases are corrupted. But what do you do about it?

Many have suggested that this is an IT issue – that the fact that data assets are inconsistent, incomplete, and inaccurate is somehow the responsibility of those responsible for administering the technology systems that power our enterprises. Their solution seems to further suggest that the only real way to solve the problem is with a “reset” of the data supply chain: retool the data supply chain, reconfigure the databases, do a one-time scrub of ALL data assets, and set up new rules that somehow prohibit corrupting activities. At best, this has been shown to be a multi-million dollar, multi-year activity for Fortune 2000-class companies. At worst, it is a mere pipe dream with no chance of ever occurring.

A more practical solution can be found in SOA, specifically Dirty Data Modernization Services (DDMS). These are highly tailored temporal services designed around the specific Digital Signatures of the dirty data in question. For example, Dirty Data Identification Services use artificial intelligence to identify and target corrupt data sources. Dirty Data Transformation Services use ontological web-based algorithms to transform bad data into better data (not correct data). Other services like Accuracy and Relevance Services can be used on an ongoing basis to aid in mitigating the inclusion of bad or dirty data.

Human beings, by our nature, do not like change. We often look to rationalize away doing the hard things in life, rather than justifying the discomfort that comes with meaningful change. Dirty data is just one of those reasons one can use if you truly don’t want to get on with a different, often better, solution paradigm. So, rather than treat dirty data as a showstopper, look to it as a catalyst for real, meaningful enterprise change.

 


eBiz’s most recent question, “Is It Even Possible for a Company to Walk Away From Service Orientation?” raises an interesting point.

Change is inevitable, although progress may not be. But to walk away from anything, you must walk towards something. It is part of the human condition that we want to turn away from pain and move toward pleasure. Think about your reaction to putting your hand on a hot stove or hearing pleasant music in the distance. So it is the case when we build systems.

Service-oriented architecture (SOA) was a more pleasurable solution to the previous pains of tightly coupling systems together through data or functional abstractions. The practitioners of that time lived through the pains of trying to extend those monolithic monsters. It was – no, is – an awful experience. We all ran as fast as we could away from it toward the first viable, less painful alternative – SOA.

So migrating (moving) from one place to another is always possible, and our success in doing so is exponentially proportional to the pleasure-pain difference. However, that isn’t the real issue, is it? The better question to address is, “What orientation would one move to, if you desired to move off of SOA?”


In my blog posting, I talk a lot about several measures for successfully implementing SOA. Since that time I have had the chance to write and speak extensively on the topic. Here is one such speech that I would like to share:


The Open Group has made its SOA Source Book available online. The repository is a collection of source material for use by enterprise architects working with Service-Oriented Architectures and was developed through the Open Group’s SOA Working Group. The range of content varies – definitions, analyses, recommendations, reference models, and standards. One of the more interesting sections is on the SOA Maturity Model.

This is a must-read site if you are an SOA architect and a should-read if you are part of a larger SOA initiative.


Joe McKendrick’s recent article, “SOA is as Good as You Measure It,” discusses some of my more important measures and metrics. The key metrics captured are:

>> Service Vitality Index
>> Return on Investment per Service
>> Number of New Services Generated as a Percentage of Total Services

In addition, McKendrick talks with Mark Little, CTO of Red Hat, about similar measures. Little added the following to the mix:

>> Service Inter-dependencies.


Recently appearing in eBiz:

At the beginning of the year, Anne Thomas Manes claimed “SOA is dead” and it caused a firestorm in the industry, particularly from those who never read much further than the headline. But of course, Manes never intended to say that SOA was invalid or useless. As an architecture, SOA is no less valid and needed than it was before Manes penned her article. But it is true that many SOA projects have died an ugly death for a few key reasons (not mutually exclusive):

>> Mistakenly adopted as the only approach to an integration strategy
>> Senior executives bought into the hype created by the technology platform vendors that SOA was a panacea for all their software ills, without evaluating whether it solved any of the problems they actually had

>> Companies underestimated the importance of governance and their organization’s adoption readiness
>> Lead programmers were put in the role of “architects” without understanding the underlying software engineering principles, ensuring failure of the SOA strategy (that’s a whole other article)
>> Architecture, especially SOA, was treated as a developmental afterthought or second-class citizen in the software development process

So what does this all mean for SOA in 2009? Will SOA projects go forward, or will they die on the vine? Well, in today’s economic environment, the answer is simple: it depends on whether you can justify the cost of the project in terms of contributions to increasing or preserving revenues, margins, and/or cost reduction. Savvy CIOs and CTOs realize that investment in certain software projects is important to improving business models and providing the infrastructure for future growth. Like any other initiative, you will need to justify the project to the Line-of-Business leader, CFO, and/or CEO – only under much more scrutiny than a year ago at this time.

In order to build your justification for moving forward with your SOA project, you need to develop a set of metrics that are aligned with the business objectives of the company, not traditional software development metrics (those are still important, but not for convincing your CFO that you should get the funding for the resources you need). And remember, transparency and accountability are the watchwords of today’s political and economic reality, so be prepared to continuously measure and report progress against these metrics over time. Below are a number of business-oriented metrics that have been used successfully to make a business case and measure the progress of your SOA strategy.


I recently reviewed an article “Test SOA for the unexpected” by Rich Seeley that got me thinking about SOA testing frameworks.

The design of experiments (DoE) is probably one of the most significant testing issues one faces in SOA, and it is more complicated than in most other archetypes. Take, for example, three simple services that can be choreographed in any order, each capable of interacting with the others. How many tests do you need to ensure 100% functional coverage? You’re right if you said 6 (3x2x1). This is a classic factorial problem, in this case with 3 services. Now, let’s say your system has 100 services; how many tests would you need? Again, you’re right if you said 100!, or 9.33×10^157, and it would take you over 3×10^150 years to test completely if you continuously ran one test every second of every day.

Traditional testing in these kinds of systems is impractical given the number of total tests needed to understand and certify the functional and non-functional behavior of the system. So what do you do in these circumstances? Well, that is where a good, well-thought-out SOA design of experiments (DoE) comes into play. DoEs are all about reducing the number of experiments without unnecessarily diminishing the value of the test. Such DoEs are called Partial Factorial DoEs. A good SOA DoE for the 100 services, for example, could reduce the effort down to 10!, or 3,628,800 tests – a significant reduction in effort and cost.
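The arithmetic behind the full- versus partial-factorial numbers above can be checked directly:

```python
import math

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

full = math.factorial(100)          # every ordering of 100 services
years = full // SECONDS_PER_YEAR    # at one test per second, nonstop

print(f"{float(full):.2e} tests")   # prints "9.33e+157 tests"
print(f"~1e{len(str(years)) - 1} years to run them all")
print(f"partial factorial (10!): {math.factorial(10):,} tests")
```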

So, the next time you are thinking about testing your complete SOA environment, ask whether you have the right type of testing framework (DoE) and whether you really need to run all those tests to show that your system is actually working correctly.


While there are no standards that define the necessary and sufficient characteristics of SOA or how to evaluate it, SEI has published a technical report that provides the necessary elements for such a framework. I use this report quite a bit, both practically to help customers with SOA development and as a reference during presentations. Please check it out if you are in the SOA field.

Software Engineering Institute:
Evaluating a Service-Oriented Architecture
TECHNICAL REPORT
CMU/SEI-2007-TR-015
ESC-TR-2007-015

ABSTRACT:
“The emergence of service-oriented architecture (SOA) as an approach for integrating applications that expose services presents many new challenges to organizations resulting in significant risks to their business. Particularly important among those risks are failures to effectively address quality attribute requirements such as performance, availability, security, and modifiability. Because the risk and impact of SOA are distributed and pervasive across applications, it is critical to perform an architecture evaluation early in the software life cycle. This report contains technical information about SOA design considerations and tradeoffs that can help the architecture evaluator to identify and mitigate risks in a timely and effective manner. The report provides an overview of SOA, outlines key architecture approaches and their effect on quality attributes, establishes an organized collection of design-related questions that an architecture evaluator may use to analyze the ability of the architecture to meet quality requirements, and provides a brief sample evaluation.”

