Holistic Methodologies: Odd Bedfellows (Six Sigma and PMLC/SDLC), but Harmonious Relatives
Why Oh Y Does X Mark the Spot?
Are inquisitive children wise beyond their years due to their innate ability to ask the most difficult questions at some of the most awkward times? Do we lose our inquisitiveness as adults because we think we know the answers or are too embarrassed to ask?
Children may not understand the effect of asking a difficult or embarrassing question, but as adults do we have the courage to admit we do not know the answer? The only exceptions to this gross generalization are people in science who spend their entire careers in a quest to ask ever more insightful questions.
When something does not go according to plan, there can be a tendency to apportion blame or find a scapegoat, which can often get in the way of seeking the truth. If the measurements are done correctly, the Voice of the Process usually reveals the truth. In a process or product, the root cause is likely tied to a product or process defect, the development environment, a mistake by an individual, or, in rare cases, force majeure.
In this article we will go back to Kindergarten -- or perhaps into the laboratory -- to try to reawaken that inquisitive nature we so often subdue. We will explore how "Whys" = x in the quest for the Big Y through some tools many of you will be familiar with.
Okay, you may ask, so how does this relate to the PMLC/SDLC? Actually, there are quite a few places in the development lifecycle where this concept can be used. In a prior article on Design of Experiment we explained how we change one characteristic to see its effect on the outcome. In this article we will explore ways to identify those inputs by asking the right questions.
The "Y" of this inquisition can be any number of things. The example we will use for this article is, "Why do 75% of the projects in the portfolio have estimates with a variance greater than 20% after baseline?" (Note: Be sure that you are asking SMART questions -- Specific, Measurable, Actionable, Realistic or Relevant, and Time Bound.) It may seem like the question has an obvious answer, but is it the right question? Will the question get you to the answer you need to solve the problem? The challenge can be that instead of treating the disease, you end up treating only the symptoms. In a worst case scenario, you could cause further damage by using assumptions rather than data. Using this approach and being armed with data can provide the courage to ask questions that may be more insightful but seem potentially career limiting:
- Why can't we get the right requirements to scope the project?
- Why are we selecting projects that people do not want to support?
- Why can't we get the resources we need to finish the project on time?
- Why does the opinion of one person trump common sense?
- Why can't we retain the skillsets we need throughout the project to complete on time and within budget?
- Is our environment really mature enough to realistically deliver projects within a +/- 20% variance?
- Do we really have the skillset to do this project?
- Do we only find out a project is in trouble when it finishes?
This is where we can get into the difficulty of choosing between asking the politically correct question or seeking an answer that can be acted upon to effect positive change. Understanding the master question (Big "Y") amongst the various subordinate questions (Little "Ys") is the topic we will explore later.
So what is the right question to ask when trying to get to the root cause of why a variance of greater than 20% exists across 75% of our projects? I do not have the right answer for a specific circumstance, but hopefully this article can provide you with some ideas to ask the right questions in a more systematic fashion.
Tools of the Trade
Fishbone Diagram and 5 Whys
For those of you who have ever conducted Root Cause Analysis, the Fishbone or Ishikawa diagram will be all too familiar. For those who have not, this diagram documents cause and effect, where the x's are the "whys" that lead to an effect, which is characterized by a Y.
The steps to using a Fishbone diagram are quite simple.
- Select the Effect ("Y") you want to analyze. Let's use the example above: "Why do 75% of the projects in the portfolio have estimates with a variance greater than 20% after baseline?"
- Identify the major affinity categories in the boxes at the end of the fish-bones. In the diagram above, I have chosen People/Skills, Processing, Method/How/Tools, Measurement -- Defects/Performance, Machine/Technology, and Material/Product, but you can choose your own affinity categories that make the most sense.
- Identify some primary causes under the affinity categories. I have found it best to do this on a whiteboard with sticky notes, as it is easy to move them around. Quite often even the affinity categories can change during the brainstorming process.
- Once the team has exhausted the unique causes relating to that effect, select a few to drill down into. These should be causes the team has control over to effect change.
- To drill down, keep asking why the cause occurred until the team can drill down no further. This is the 5 Whys.
Asking why 5 times is a proven approach for using this tool. It does not have to be exactly 5 times, but typically 5 questions can be effective for drilling down to the root cause of the cause (x) the team is examining. Sometimes you can get lucky and hit it the first or second time, but fundamentally, don't stop asking questions until the cause has been explored from every angle. The Fishbone diagram is a great brainstorming tool to place causes into affinities relating to the effect.
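A fishbone and its 5 Whys drill-down can be captured as simple data structures. The sketch below is purely illustrative: the effect statement comes from the article, but every affinity category entry and every "why" answer is a hypothetical assumption, not data from any real portfolio.

```python
# A minimal sketch of a Fishbone plus a 5 Whys drill-down.
# All causes and answers below are illustrative assumptions.

effect = "75% of projects have estimate variance > 20% after baseline"

# Affinity categories (the bones) mapped to brainstormed causes (the x's).
fishbone = {
    "People/Skills": ["key skillsets leave mid-project", "split allocations"],
    "Method/How/Tools": ["estimates locked before requirements stabilize"],
    "Measurement": ["variance only reviewed at project close"],
}

# 5 Whys: keep asking "why?" about one cause until no deeper answer remains.
five_whys = [
    "Why is variance > 20%?  -> scope grows after baseline",
    "Why does scope grow?    -> requirements incomplete at baseline",
    "Why incomplete?         -> stakeholders engaged too late",
    "Why engaged late?       -> no stakeholder step in the intake process",
    "Why no such step?       -> intake process never updated after reorg",
]

# The last answer in the chain is the candidate root cause.
root_cause = five_whys[-1].split("-> ")[1]
print(f"Effect: {effect}")
print(f"Candidate root cause: {root_cause}")
```

A whiteboard and sticky notes remain the better brainstorming medium; a structure like this is only useful afterwards, for recording the session.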
Big Y and Little Y
When conducting a comprehensive root cause analysis, there may be many smaller effects related to the overall Big Y effect. Therefore it may be necessary to conduct multiple effect analyses, hence the little Ys. How do you determine, out of all the little Ys, which one is the Big Y?
In our example above where "75% of a portfolio's estimates were off greater than 20%..." some of the little Y's could be related to the following effects:
- Issues with the budgeting process where project sizes are artificially kept below a given threshold to avoid additional scrutiny
- Market conditions changing the project scope
- Unknown risks occurring requiring funds outside of forecasted contingencies
- Whims of management to add to scope or changing priorities
- Resources being pulled off projects to work on other priorities -- stop/starts and rework
- Some of the sensitive questions asked above
- Others that you discover along the way (serendipity)
All of these, and many more, could be factors in the overall effect of "75% of a portfolio's estimates were off greater than 20%." Okay, so how can you determine which of all these little Ys is the Big Y? This is where we turn to measuring the impact of each little Y's effect to determine what the Big Y is.
Using the example above, a Pareto diagram can be used to look at the number of instances or impacts that fall under each of the effect categories above. The size of the project estimate variance, its effect on the overall portfolio, and the number of occurrence instances can determine which of the little Y's are indeed the Big Y. For a more detailed description of Pareto see my article on Design of Experiment, where it is used in a similar fashion to determine the effects of each of the x's on the experiment outcome Y.
In this case a Pareto can be used for the little Ys to prioritize them in search of the big Y. When conducting comprehensive root cause analysis there will be multiple effects, and some of those effects may be subtle, as illustrated by some of the questions above. Each fishbone can give a different set of optics based on how the question is asked.
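The Pareto ranking described above can be sketched in a few lines. The occurrence counts per effect category below are made-up numbers for illustration only; in practice they would come from your portfolio's measurement data.

```python
# A hedged sketch of a Pareto ranking of the little Ys.
# The occurrence tallies are hypothetical, not real portfolio data.

occurrences = {
    "Market conditions changed scope": 18,
    "Management whims added scope": 14,
    "Resources pulled to other priorities": 9,
    "Unknown risks exceeded contingency": 5,
    "Budget threshold gaming": 4,
}

total = sum(occurrences.values())

# Sort categories by occurrence count, largest first (the Pareto order).
ranked = sorted(occurrences.items(), key=lambda kv: kv[1], reverse=True)

# Walk the ranked list, accumulating the cumulative percentage.
cumulative = 0
for cause, count in ranked:
    cumulative += count
    print(f"{cause:40s} {count:3d}  {100 * cumulative / total:5.1f}% cumulative")
```

The category at the head of the list, or the few categories that together cover most of the cumulative percentage, are your Big Y candidates.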
When conducting comprehensive root cause analysis, it is likely that several of the x's in the affinity groups will overlap. This overlap lends credence to the evidence supporting the Big "Y" and suggests you are on the right path. From the example above, if both changing market conditions and management whims altering scope are having the greatest impact, then there may be a strong correlation showing that your industry is very dynamic. Perhaps the planning cycles for portfolio management are too long, or there could be a lack of strategic planning to remain aligned with changing market conditions.
Design of Experiment (DOE) revisited
At this stage, Design of Experiment could be used to take the Little Y's and turn each of them into the x variables within the experiment. For example, create a strategic plan, align all the projects in the portfolio to that plan, and then track the project variances of aligned projects to see if the use of a strategic plan has any impact on budgets. Change the delivery methodology to create more proofs of concept to shorten the development time before committing to a more significant product release, which will ensure that the product of the project still makes sense.
Consider another example: "resources are being pulled off projects during execution." After the root cause analysis and Pareto, it was discovered that people were being constantly interrupted by operational work and were spending 60% of their time supporting operations. Changing one variable at a time could mean splitting the team so that some people are dedicated to project work and others to operations. Expecting only a 40% commitment to project work could be another DOE variable (x).
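Measuring the effect of one such DOE variable can be as simple as comparing estimate variance before and after the change. The project variance figures below are hypothetical, invented only to show the shape of the comparison.

```python
# An illustrative before/after comparison for a single DOE variable (x):
# shared 40% allocation versus a dedicated project team.
# All percentages are hypothetical, not real project data.

import statistics

# Percent variance from baseline per project.
shared_allocation = [32, 28, 41, 25, 36]   # before the change
dedicated_team = [12, 18, 9, 22, 15]       # after the change

before_mean = statistics.mean(shared_allocation)
after_mean = statistics.mean(dedicated_team)

print(f"Mean variance before: {before_mean:.1f}%")
print(f"Mean variance after:  {after_mean:.1f}%")
print(f"Improvement:          {before_mean - after_mean:.1f} points")
```

With enough projects in each group, a formal significance test would strengthen the conclusion; with only a handful, treat the comparison as directional.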
So now you have identified a change you want to make, how do you know the impact it has had on the environment? Is it responding to the original question of "Why do 75% of the projects in the portfolio have estimates with a variance greater than 20% after baseline?" When effecting changes from your Big Y, it is also important to know if the medicine actually cures the patient over the long term. A tool that can be used to validate overall health is a Control Chart. In DMAIC (Define, Measure, Analyze, Improve and Control), the function of Control is to validate that the improvement has been sustained and that the process remains within operational tolerances. In the example above, your desired goal may be a +/- 5 or 10% estimate variance after baseline for 90% of the projects in the portfolio. These would then become the upper and lower control limits of the process.
Note: There are about 20 types of Control Charts for various applications. For the scope of this article we will use a simple control chart with upper and lower control limits characterized by the control limits of the acceptable variance in the project after baseline (i.e. either +/- 5 or 10%). For the purists, a typical Control Chart uses Upper and Lower Control limits calculated as 3 standard deviations from the mean.
In our example we will also show the percent variance control limits we are striving for in our Effect statement at the head of the fish.
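Both flavors of control limit are easy to compute. In the sketch below, the per-project variance sample is hypothetical; it contrasts the goal-based limits from the Effect statement with the "purist" three-standard-deviation limits mentioned in the note above.

```python
# A minimal sketch of computing Control Chart limits for post-baseline
# estimate variance. The sample percentages are hypothetical.

import statistics

variances = [4, -3, 6, 2, -5, 8, 1, -2, 7, 3]  # % variance per project

mean = statistics.mean(variances)
sigma = statistics.stdev(variances)

# Goal-based limits taken from the Effect statement (e.g. +/- 10%).
goal_ucl, goal_lcl = 10, -10

# "Purist" limits: three standard deviations from the mean.
ucl = mean + 3 * sigma
lcl = mean - 3 * sigma

print(f"mean={mean:.1f}  sigma={sigma:.2f}")
print(f"goal limits:      [{goal_lcl}, {goal_ucl}]")
print(f"3-sigma limits:   [{lcl:.1f}, {ucl:.1f}]")
```

If the 3-sigma limits sit well inside the goal limits, the process is already more consistent than the goal demands; if they sit outside, the goal is not yet realistic for the current process.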
Control Chart interpretation
- So should I be concerned about wild swings above and below the mean?
- What does a good control chart look like?
- Does the pattern of how the data points are connected tell me anything?
All are good questions; let's explore them in some more detail.
1 - Yes, you should be concerned about wild swings in the chart, but some instability is to be expected once a change has been made, since it may take several data points to see the results of the change on the process.
2 - A good control chart should look reasonably flat, with the points remaining within the upper and lower control limits. However, one word of caution: if you always remain within control limits, you should question whether you are using the correct measurement method.
3 – The pattern of connections between the data points can be very informative. In the chart above, the circled data points show the "Rule of 7": When there are 7 points in a row that are above or below the mean, it can point to a special cause that should be investigated further.
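The Rule of 7 is mechanical enough to check in code. The sketch below is an assumed implementation with hypothetical data points: it scans for a run of seven consecutive points strictly on one side of the mean.

```python
# A sketch of detecting the "Rule of 7": seven consecutive points on the
# same side of the mean, hinting at a special cause worth investigating.

import statistics

def rule_of_seven(points, run_length=7):
    """Return True if run_length consecutive points sit strictly on one
    side of the mean of the series."""
    mean = statistics.mean(points)
    run = 0
    last_side = 0
    for p in points:
        # side: +1 above the mean, -1 below, 0 exactly on it
        side = 1 if p > mean else -1 if p < mean else 0
        run = run + 1 if side != 0 and side == last_side else (1 if side else 0)
        last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical series; both have a mean of zero.
stable = [1, -1, 2, -2, 1, -1, 2, -2]   # alternates around the mean
drifting = [1, 1, 1, 1, 1, 1, 1, -7]    # seven points above the mean

print(rule_of_seven(stable))    # no sustained run
print(rule_of_seven(drifting))  # run of seven above the mean
```

A run like the second series does not prove a special cause; it only flags where to look next.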
Special vs Common Cause
Examples of special cause can be a circumstance that occurred on a specific project that may not be the norm across other projects, such as a major scope change or a significant unplanned risk becoming an issue. Common cause, in contrast, shows the norm of your population, which provides examples of the effect your change is having on the estimation variance. This is what you are looking for in your investigations. Typically, common cause variation will yield a better sample for further analysis of whether the x in your DOE is having the desired effect, provided the sample size is big enough.
We have explored how to ask "Why" in a more systematic fashion. We have traveled a road with many turns and tied various tools together in our quest for the elusive Big Y.
- Used Fishbone diagrams and the 5 Whys to identify the root cause of one or more x's,
- Stratified fishbone diagrams to obtain the little Ys,
- Used Pareto to show the impact of each little Y in our quest to find the Big Y,
- Used Design of Experiment to apply the chosen improvements, and
- Introduced a new tool (the Control Chart) to measure the effect your recommended improvement has had on the initial question, to see if you really have found the Big Y.
Next time someone asks you why, I hope this article will prompt you to answer them in a more scientific or systematic way, or to ask: is that a little "Y" or a big "Y" question?