Over the last few months, several colleagues have asked me to help them understand why their performance metric/measurement processes failed. In reviewing their unique situations, I found that these 20 common mistakes, originally identified by Raj Jain, are still valid:

1. No Goals: Any significant endeavor without goals is bound to fail.
2. Biased Goals: Implicit/explicit bias in stating the goals. For example, trying to show that our system is better than theirs often results in biased measurements.
3. Unsystematic Approach: Inaccurate conclusions result from unsystematically selecting parameters, factors, metrics, workload, etc.
4. Analysis without Understanding the Problem: Just Google “A problem well stated is a problem half solved” to better understand why this should be an axiom of life.
5. Incorrect Performance Metrics: A common mistake in selecting metrics is that analysts often choose those that are easily computed or measured rather than the ones that are relevant (the first sketch after this list shows how this plays out).
6. Unrepresentative Workload: A very common problem – the choice of workload has a significant impact on the results of a performance study.
7. Wrong Evaluation Techniques: There are three evaluation techniques – measurement, simulation, and analytical modeling. Analysts often prefer measurement because it is easier than simulation or building analytical models (e.g., Mathematica, MATLAB, Excel). The second sketch after this list shows how small an analytical model can be.
8. Overlooking Important Parameters: Make a complete list of system and workload characteristics that affect the performance of the system, and then choose from these parameters.
9. Ignoring Significant Factors: Factors (parameters that can be changed) that are under the control of the end user should be given preference over those that cannot be changed. Do not waste time comparing alternatives that the end user cannot adopt because they involve actions unacceptable to decision makers.
10. Inappropriate Design of Experiment (DoE): Experimental design determines the number of measurements or simulation runs required and the parameter values used in each experiment. Analysts are often naive in that they vary only one factor at a time. Better alternatives include full or fractional factorial designs, as in the third sketch after this list.
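
To see why mistake 5 bites, here is a minimal Python sketch, with the latency sample invented purely for illustration: the easily computed mean looks healthy, while a tail percentile, arguably the metric users actually feel, does not.

    import statistics

    # Hypothetical latency sample (ms): 99 fast requests and one very slow one.
    latencies = [10.0] * 99 + [1000.0]

    mean = statistics.mean(latencies)                 # easy to compute
    p99 = statistics.quantiles(latencies, n=100)[98]  # closer to what users feel

    print(f"mean latency: {mean:.1f} ms")  # about 19.9 ms: looks healthy
    print(f"p99 latency: {p99:.1f} ms")    # about 990 ms: the tail the mean hides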
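
On mistake 7: an analytical model does not have to be heavyweight. As a hedged illustration (the textbook M/M/1 queueing formulas, not anything specific to Jain's list), a basic model fits in a few lines:

    def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
        """Textbook M/M/1 results: utilization, mean number in system,
        and mean response time. Valid only while arrival_rate < service_rate."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrivals must be slower than service")
        rho = arrival_rate / service_rate      # utilization
        n = rho / (1 - rho)                    # mean jobs in system
        r = 1 / (service_rate - arrival_rate)  # mean response time
        return {"utilization": rho, "mean_jobs": n, "response_time": r}

    # e.g. 8 requests/s against a server that completes 10/s:
    print(mm1_metrics(8, 10))  # utilization 0.8, 4 jobs in system, 0.5 s response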
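
And for mistake 10, a minimal sketch of a full factorial design; the factors and levels here are hypothetical stand-ins for whatever your study actually varies:

    from itertools import product

    # Hypothetical factors and levels; substitute the ones from your own study.
    factors = {
        "cache_size_mb": [64, 256],
        "threads": [1, 8],
        "workload": ["read_heavy", "write_heavy"],
    }

    # Full factorial: every combination of every level (2 x 2 x 2 = 8 runs),
    # rather than varying one factor at a time from a fixed baseline.
    for run_id, combo in enumerate(product(*factors.values()), start=1):
        config = dict(zip(factors.keys(), combo))
        print(run_id, config)  # hand each config to your benchmark harness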

11-20: No, I am not going to rattle them off here (too many, too little space). However, drop me an email and I will send you the full list.
