
Archive for the ‘Modernization’ Category

Data migration is dead, long live enterprise architecture! The days of planning and executing a large-scale data migration from one enterprise system to another, while maintaining an operational business, should be gone. There are too many failure modes in a modern enterprise extraction, transformation, and load (eETL) operation. Risk management initiatives are ill-equipped to provide the aversion models necessary to ensure the availability of critical business processes in the event of the inevitable migration failure. But even amid such information technology despair there is hope, though at what cost?

Before we get into the discussion of why enterprise data migrations fail, let’s first spend a bit of time on the basics. Data migration is often defined as the process of transferring data between storage types, formats, and/or computer systems. In essence, it is the one-off selection, preparation, and movement of relevant data, of the right quality, to the right place, at the right time. It is usually performed manually and/or programmatically when organizations change (extend, upgrade, adopt) information systems and the new data storage format is not the same as the old. But not all movements of data are data migrations.

Data migration is the irregular movement of data (an important concept we will come back to later). Regular movements of data, such as data warehouse refreshes that are fully automated, highly repeatable, and take advantage of pre-mapped data schemas, are not part of this data migration space. With data migration, a single-pass movement of data is often accompanied by low or inconsistent initial data quality across disparate databases and/or applications. It is this large-scale, multi-factor character (semantics, people, process, technology, workflows, data quality, and product types) of enterprise data migration that makes it so complex, lending itself to numerous failure modes.
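To make the single-pass character of a migration concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the table layouts, the name-splitting rule, and the in-memory databases stand in for real source and target systems, and a real enterprise migration would add staging, auditing, reconciliation, and rollback on top of this skeleton.

    import sqlite3

    # Hypothetical legacy source: customer names stored in a single column.
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, full_name TEXT)")
    src.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Ada Lovelace"), (2, "Grace Hopper")])

    # Hypothetical target: the new system expects the name split apart.
    dst = sqlite3.connect(":memory:")
    dst.execute("""CREATE TABLE customer (
                     id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)""")

    # Extract: one-off selection of the relevant data.
    rows = src.execute("SELECT id, full_name FROM customers").fetchall()

    # Transform: apply the (usually far more complex) mapping and quality rules.
    def transform(row):
        cust_id, full_name = row
        first, _, last = (full_name or "").strip().partition(" ")
        return cust_id, first, last

    # Load: a single pass into the new storage format.
    dst.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                    (transform(r) for r in rows))
    dst.commit()
    print(dst.execute("SELECT * FROM customer").fetchall())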

IDC has estimated that over 80% of data migration projects fail or have significant cost overruns, which is itself a type of financial failure. Within this cohort, 50% exceed their projected schedule by 75%, budgets are exceeded by 66%, and 33% fail entirely. While these failure statistics are based on a wide range of projects, over half used some form of formal migration methodology. These are not good results if you are looking to migrate an enterprise system while maintaining critical business functions and financial predictability. Moreover, given the complexity of today’s enterprise systems, you can only reduce the risk of migration failure; you cannot eliminate it completely.


While there are no industry standards for migrating data within the enterprise that guarantee success, there are many migration methodologies designed around risk reduction. The most common (e.g., Practical Data Migration) are based on variants of proofs of technology/concept, landscape analyses, data quality/cleansing, engaging the business early and often, testing, and automation through ETL tools. While these are good practices, or in some cases even necessary, for any project, they have limited demonstrated ability to ensure that the data is correct and error-free after passing through complex data quality and business transformation rules.
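One concrete practice behind the testing item above is post-load reconciliation: independently fingerprinting the source and target data sets before cutover so that silent truncation or dropped rows are caught early. The sketch below is one way such a check might look; the sample rows are hypothetical, and a real check would also sample individual records and verify the business transformation rules themselves.

    import hashlib

    def table_fingerprint(rows):
        # Order-independent fingerprint: row count plus an XOR of per-row hashes.
        count, digest = 0, 0
        for row in rows:
            canonical = "|".join("" if v is None else str(v) for v in row)
            digest ^= int(hashlib.sha256(canonical.encode()).hexdigest(), 16)
            count += 1
        return count, digest

    def reconcile(source_rows, target_rows):
        # Flag any mismatch before cutover rather than after go-live.
        src, tgt = table_fingerprint(source_rows), table_fingerprint(target_rows)
        return {"source": src, "target": tgt, "match": src == tgt}

    # Hypothetical example: the second target row was silently truncated.
    source = [(1, "Ada Lovelace"), (2, "Grace Hopper")]
    target = [(1, "Ada Lovelace"), (2, "Grace Hoppe")]
    print(reconcile(source, target)["match"])   # -> False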

So, with its high complexity, enterprise data migration should be dead; not because it is bad or evil, but because the risk of failure is just too high. But if not the enterprise migration, then what? Well, the answer lies in moving away from data migration as an unplanned, irregular movement of data, which is exactly what today’s enterprise architecture (EA) is all about. But what is EA in the context of migration?

Enterprise architecture is the continuous practice of organizing logic for business processes and IT infrastructure that reflects the integration and standardization requirements of the company’s business operating model (TOGAF 2009). EA is more than just structure; it is a dynamic means to realize architectural vision, business architecture, information systems architecture, technology architecture, governance, and even migration architecture. EA treats the business as an organic, growing entity that evolves through a continuum of changes. This should not be a surprise to most, given that people (organic by nature) constitute most of a business’s enterprise to begin with.

The key difference between traditional data migrations and migrations developed through enterprise architecture is that EA-driven migrations tend to be less complex and therefore more successful. EA-driven migrations succeed because, by their nature, they are designed to succeed. Migrations are not an afterthought, a single one-time event of irregular data movement. Enterprise data migrations are designed in the context of, not in lieu of, an evolving business and its infrastructure, and they are a means to the business’s ends, not the ends themselves.

Traditional data migration died a long time ago; we just never noticed because we were too busy cleaning up all the failure debris. It is time for us to start thinking differently so we don’t repeat these same migration mistakes. Enterprise architecture-driven migration presents the best hope of success for those looking to continuously evolve their business model.

For more articles like this, please see LiquidHub.



Field notes 1

The risk of human failure is present in any endeavor, notwithstanding the work I am doing to help a client plan for the migration of an enterprise claims management system. As part of developing an operational readiness plan, which spans two and a half years, we are developing a wide variety of governance characteristics, ranging from migration requirements to staging infrastructure to migration approach to business and risk assessments. It is a fairly comprehensive model for how, when, and where their business will be migrated.


As part of the process, we are now looking at some of the elements surrounding the human condition; specifically, the impact of the migration on the business’s productivity. Several studies have clearly demonstrated that there is a significant chance (30-50%) of a decline in performance during periods of transition, when new ways of working are being adopted. The question being addressed is how to deal with this decline in a way that does not impact their clients. One can better prepare the employees, which takes time, or add temporary capacity (people), which takes money.
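A back-of-envelope way to frame that trade-off is to ask how much temporary capacity it would take to keep client-facing throughput flat during the dip. Every number below is an illustrative assumption, not a figure from this engagement.

    # Illustrative assumptions only; none of these are client data.
    headcount = 200            # employees doing the affected work
    productivity_dip = 0.40    # mid-range of the cited 30-50% decline
    transition_weeks = 12      # assumed length of the adoption period

    # Effective capacity lost during the transition, in person-weeks.
    lost_person_weeks = headcount * productivity_dip * transition_weeks

    # Temporary staff needed to cover the gap, assuming temps reach only
    # 70% of a seasoned employee's output.
    temp_effectiveness = 0.70
    temps_needed = lost_person_weeks / (temp_effectiveness * transition_weeks)

    print(f"Capacity lost: {lost_person_weeks:.0f} person-weeks")
    print(f"Temporary staff to stay flat: {temps_needed:.0f}")
    # -> roughly 114 temps, which is why shrinking the dip through targeted
    #    readiness training is usually the cheaper lever.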


Our current thinking on how to address the productivity impact is to create a Personal Operational Readiness program tailored to meet the individual capability maturity needs of each employee. We are looking not only at different kinds of training (beyond the typical 3-5 day training programs), but also at training on operational data. When employees see their own work reflected in the new ontology, their productivity increases significantly, which is key to making the transition without adding large numbers of temporary employees.

More on this to follow…



The eBiz question of the day is, “Do you agree with what Brian Stevens, CTO of Red Hat, said recently: that the adoption of cloud isn’t going to happen in 2010, and that it will be several decades before we see the kind of evolution and maturation necessary to sway big business to the cloud?”

Evolution, or more specifically sustaining evolution [Clayton Christensen, The Innovator’s Dilemma], is the natural progress seen in all technologies. Most users can always find things they don’t like about the technology they are using, which leads to a wealth of incremental improvements, some more beneficial than others.

A more interesting insight, however, comes from addressing the question, “What is the next disruptive technology after the cloud?” Think about it for a second. What is the technology, one that does not exist today, that will replace cloud computing tomorrow? Answer this riddle and you might be the next Amazon.


There are a lot of great questions coming out of eBiz, and the latest I’d like to address is, “Is Dirty Data Becoming a Showstopper for SOA?”

Dirty data is one of the many reasons why service-oriented architectures (SOA) are so powerful. Gartner studies over the last decade have demonstrated that dirty data “leads to significant costs, such as higher customer turnover, excessive expenses from customer contact processes like mail-outs and missed sales opportunities.” In this day and age, there can be no doubt that the ones and zeros sitting in your databases are corrupted. But what do you do about it?

Many have suggested that this is an IT issue. The fact that data assets are inconsistent, incomplete, and inaccurate is somehow the responsibility of those responsible for administering the technology systems that power our enterprises. Their solution seems to suggest that the only real way to solve the problem is with a “reset” of the data supply chain: retool the data supply chain, reconfigure the databases, do a one-time scrub of ALL data assets, and set up new rules that somehow prohibit corrupting activities. At best, this has been shown to be a multi-million dollar, multi-year activity for Fortune 2000-class companies. At worst, it is a pipe dream that will never happen.

A more practical solution can be found in SOA, specifically Dirty Data Modernization Services (DDMS). These are highly tailored, temporal services designed around the specific digital signatures of the dirty data in question. For example, Dirty Data Identification Services use artificial intelligence to identify and target corrupt data sources. Dirty Data Transformation Services use ontological, web-based algorithms to transform bad data into better data (not necessarily correct data). Other services, like Accuracy and Relevance Services, can be used on an ongoing basis to help mitigate the inclusion of bad or dirty data.
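As a rough, rule-based illustration of what a Dirty Data Identification Service looks for, the sketch below profiles records against per-field signatures. The field names and rules are hypothetical, and the AI-driven services described above would learn such signatures from the data rather than hard-code them.

    import re
    from datetime import datetime

    def _is_past_date(value):
        try:
            return datetime.strptime(value, "%Y-%m-%d") < datetime.now()
        except (TypeError, ValueError):
            return False

    # Hypothetical per-field "digital signatures": predicates a clean record satisfies.
    SIGNATURES = {
        "email": lambda v: bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", v or "")),
        "postal_code": lambda v: bool(re.match(r"^\d{5}(-\d{4})?$", v or "")),
        "birth_date": _is_past_date,
    }

    def identify_dirty(records):
        # Return each record id with the fields that violate their signature.
        findings = []
        for rec in records:
            bad = [f for f, ok in SIGNATURES.items() if not ok(rec.get(f))]
            if bad:
                findings.append((rec.get("id"), bad))
        return findings

    sample = [
        {"id": 1, "email": "a@example.com", "postal_code": "19103", "birth_date": "1980-06-01"},
        {"id": 2, "email": "not-an-email", "postal_code": "ABCDE", "birth_date": "2199-01-01"},
    ]
    print(identify_dirty(sample))   # -> [(2, ['email', 'postal_code', 'birth_date'])]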

Human beings, by nature, do not like change. We often look to rationalize away doing the hard things in life rather than justifying the discomfort that comes with meaningful change. Dirty data is just one of those excuses one can use if you truly don’t want to get on with a different, often better, solution paradigm. So, rather than treat dirty data as a showstopper, look to it as a catalyst for real, meaningful enterprise change.



I just reviewed an excellent WSJ article by Roger Cheng that was posted on 21 October 2009. Cheng notes that “Amid the worst-ever decline in technology spending, corporations still invested in cloud computing and virtualization…” This is very good news, but I do take an issue or two with some of his comments:

>> “Cloud Computing Is Disruptive.” They hope it is disruptive; that is, disruptive to their declining margins and lackluster growth. Getting infrastructure on demand (IaaS), development on demand (PaaS), or even software on demand (SaaS) is no more disruptive today than any other type of outsourcing. Twenty years ago, yes; today, I don’t think so.

>> Cloud Computing: the real implications. Cloud computing is very important to consider as part of the evolution of any healthy company, but the real implication is very different from what you hear in the press. When a CFO says “cloud computing,” what they are implying is: I want lower costs. When a CIO says “cloud computing,” what they are implying is: I want somebody else to do that stuff. When you hear a CEO say “cloud computing,” what they are implying is: Please, let there be a $ilver lining in that cloud.

>> Virtualization ≠ Cloud Computing. For those less mathematically inclined, virtualization is not the same as, or equal to, cloud computing. Even more precisely, virtualization does not need cloud computing, and cloud computing doesn’t even need virtualization. Cloud computing is a where and a who; virtualization is a how.

It would be reckless for any IT datacenter not to consider virtualization as part of its overall operational strategy. Leveraging underutilized computing resources is an important part of attaining cost efficiency. At the same time, for those circumstances that fit cloud computing, turning capex-oriented IT into opex-oriented services can help the financial model (a rough capex-versus-opex sketch follows these points).

>> The Future is “Virtualized Desktops.” Yeah!? Remind me again what problem we are trying to solve. Again, nothing new here: desktop virtualization has been around for over two decades, and I personally created and worked on virtual desktops. So why hasn’t it been adopted? Hmm, is it because the technology was immature? No; companies like Citrix and NeoIT don’t think so. Is it the lack of a strong financial driver? Maybe, but given the cost of IT per person, one can often make a pretty compelling case. So what is it then? Us!

Cognitive and social psychology tells us that humans like to maintain control (think about it). Moving from an independent to a dependent model requires a lot of organizational and cultural change and is not something that naturally happens on its own. So, if one is interested in virtualizing the desktop, one needs to start by virtualizing the culture first.

>> “Capitalism Still Strong in the IT Industry.” This should have been the headline of the column. Regardless of the economy, companies still need to perform. They need to grow profitable revenue, keep their customers happy, and prevent competitors from taking market share. Translated: they are behaving as Capitalists (with a capital C). Cloud computing and virtualization just happen to be among the few remaining tools on the corporate belt that still work, and work well.
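Coming back to the capex-versus-opex sketch promised above: a toy comparison of owning a server for three years against renting equivalent capacity on demand. Every figure is a made-up placeholder, not vendor pricing or a benchmark, but it shows why utilization sits at the center of both arguments.

    # All figures are illustrative assumptions, not vendor pricing.
    server_capex = 9000          # purchase price of one server
    ops_per_year = 1500          # power, space, and admin per server per year
    utilization = 0.25           # typical underutilization before virtualization
    years = 3

    cloud_rate_per_hour = 0.40   # assumed on-demand rate for equivalent capacity
    hours_needed = 24 * 365 * years * utilization   # pay only for what is used

    owned_cost = server_capex + ops_per_year * years
    cloud_cost = cloud_rate_per_hour * hours_needed

    print(f"Owned (capex + ops): ${owned_cost:,.0f}")
    print(f"On-demand (opex):    ${cloud_cost:,.0f}")
    # The comparison tightens as utilization rises, which is exactly why
    # virtualization (raising utilization) and cloud (paying per use) are
    # different answers to the same cost question.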

What did you think of the article?


This blog posting is a bit off the usual track, so please indulge me. In the product engineering outsourcing business, risk is everything, yet it is too often treated as if it were nothing. In order to achieve certainty (outcome certainty) around outsourced engineering activities, it is vital to evaluate all risk elements associated with the engineering lifecycle. Most organizations, however, have a myopic focus on just the product development lifecycle, centers of excellence, key people, etc. While all of these are very important, they are only a part of the overall risk that has to be managed. So what other risks need to be addressed in this new world of global product engineering?

Let’s briefly look at a few risks in the headlines today. Pandemics like the Swine Flu are a threat to the world economy and could have a bigger impact on product delivery schedules than any other factor faced over the next several months. Program managers plan for a lot of activities that directly affect schedules: normal work hours, vacations, subject matter expert (SME) availability, pregnancy, etc. However, very few, less than 5%, have direct contingency plans for significant resource losses as a result of H1N1. With an intermittent loss of just 50% of a workforce over a four-week period, engineering production could be delayed by two months or more. Can any company afford such a loss in productivity?
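To see how an intermittent loss like that cascades into a schedule slip, here is a rough calculation. The team size and rework factor are placeholders, not data from any specific program.

    # Illustrative assumptions, not data from a specific program.
    team_size = 40               # engineers on the product
    absence_rate = 0.50          # intermittent loss of half the workforce
    absence_weeks = 4
    rework_factor = 1.5          # each lost week adds ramp-up and rework time

    lost_effort = team_size * absence_rate * absence_weeks   # person-weeks
    effective_loss = lost_effort * rework_factor

    # Weeks of slip once the full team is back and absorbing the backlog.
    schedule_slip_weeks = effective_loss / team_size
    print(f"Lost effort: {lost_effort:.0f} person-weeks")
    print(f"Approximate schedule slip: {schedule_slip_weeks:.0f} weeks")
    # -> about 3 weeks of direct slip; add dependency and integration delays
    #    and a two-month impact on delivery is not far-fetched.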

Terrorism comes in many forms and is another obvious risk that needs to be assessed as part of a comprehensive outsourcing strategy. Whether it is international terrorism trying to disrupt a nation’s economic base or local disgruntled employees taking action against IT assets, each can have a devastating impact on a production schedule. While there is no consistent data on lost opportunity costs, anecdotal indications are that software engineering projects were delayed two or more weeks as a result of 9/11. A single disgruntled, knowledgeable employee could take out significant IT support structure for engineering production (a type of localized domestic terrorism), resulting in days if not weeks of lost development time. While these kinds of events seem far-fetched, they occur to some degree almost every day. Could you afford such a loss?

The answer is probably no, if you are like most organizations. Because these kinds of risks carry high consequences, albeit a lower probability of occurrence, they require proactive risk management. Take the Swine Flu example: your organization could provide education (webinars, blogs, etc.) on prevention (e.g., CDC speakers, web links, tracking, travel advice), reporting (departmental, corporate, partners, vendors), and mitigation (restricting travel, proactive flu shots, etc.). Reducing the risk of product schedule delays due to terrorism, while a bit more complex, can be achieved with a bit of work.

Pandemics, terrorism, financial instability, and geopolitics are all areas that impact outcome certainty, so taking a leadership position on them can have positive consequences for your organization. The message here is that in this world of global engineering outsourcing, where outcome certainty drives production, a more comprehensive view of risk that looks outside as well as inside is critical. Something to think about. Thoughts?


This week, the top 30 US and international cyber security organizations (e.g., Red Hat, EMC, Apple, MSFT, NSA, etc.) jointly released a consensus list of the 25 most dangerous programming errors that lead to security bugs and enable cyber espionage and cyber crime. What this joint team found was that most of these errors were NOT well understood by programmers across all types of organizations (big/small, insourced/outsourced, etc.). Equally important, these errors are all relevant to both SOA and SaaS development activities.

I think it is important for us to share these findings with our development teams, assess our current development activities, and educate our teams on the principles found in this study. We also each have an opportunity to show some thought leadership and participate in a technical area that is not currently owned by many organizations. Below is the link to SANS (the information security authority), which not only contains the top 25 programming errors but also an excellent set of resources (from MITRE) to help identify and address these issues.
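To give a flavor of the list, one entry on it is improper neutralization of SQL input (SQL injection). The snippet below is my own illustration in Python, not taken from the SANS/MITRE materials, and uses an in-memory database purely for demonstration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name):
        # Vulnerable pattern: attacker-controlled input concatenated into SQL.
        query = f"SELECT role FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # Parameterized query: the driver treats input as data, never as SQL.
        return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

    # A classic injection payload returns every row from the unsafe version
    # and nothing from the safe one.
    payload = "' OR '1'='1"
    print(find_user_unsafe(payload))   # leaks all roles
    print(find_user_safe(payload))     # []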

Findings Ref: http://www.sans.org/top25errors/

