
Be Careful What You Measure, You Just Might Get It

by Kent McDonald

There are two sayings commonly used when discussing measurement and management: "You can't manage what you don't measure," and "You get what you measure." Many organizations pay a great deal of attention to the first piece of advice without heeding the warning that comes with the second, especially when it comes to leading project practitioners.

I work a great deal in the business analysis space and have seen this happen quite frequently there, but based on anecdotal information it occurs just as often for quality assurance/testers and developers. Basically, the leaders of these knowledge workers want some way to gauge how well people are performing their jobs. These measurements are then used to identify development opportunities (hopefully) and to drive performance evaluations (more likely). This seems most common in organizations that like to refer to people as "resources" and group them into teams of similarly skilled people. In other words, all the business analysts are in one group, all of the testers are in another group, and all of the developers are in a third group.

On the surface, this approach makes sense. Everyone with the same skill set is grouped together for purposes of career development and training. The more experienced BAs, for example, can share their experiences with the less experienced BAs, and they can all share good practices. Since their department heads usually came up through the same job, these groups are supported by leaders who understand the ins and outs of their particular skill set. The thinking then goes that grouping people by skill set allows for easier measurement and evaluation, since they can all be measured against the same criteria. That's the idea, at least.

But measuring knowledge work can be tricky. Do you measure productivity? (How many use cases can you complete in a week? How many lines of code can you write? How many bugs can you find?) Do you measure quality? (How many "errors" did you not put into those use cases, lines of code, or test scripts?) Do you measure performance against goals? (I said I was going to write four use cases this week, and by golly, I did.)

All of these approaches certainly provide useful information, but settling on one approach—or even combining metrics from different approaches—puts the focus in the wrong place. If a business analyst is measured on how many use cases they produce, use cases become the end product for that business analyst. If developers are measured on how many lines of code they produce, the code will become bloated, and probably not particularly efficient. Testers measured on how many bugs they find will suddenly detect an infestation that may or may not actually exist.

One organization I recently heard about introduced a new measurement regimen designed to make their business analysts more effective. Their business analysts set weekly goals, and then compare their actual accomplishments with those weekly plans. So far, not too bad; this approach recognizes the need to frequently revisit and revise plans based on the latest information, and encourages the business analysts to do a little forward planning once a week in order to get organized.

A trickier aspect of this measurement program is that the goals are set based on how much time is spent on particular activities, and the work is evaluated as value-added or non-value-added. Whether a particular task adds value is determined by whether it is actual business analysis work. So, writing a use case? Value-added. Talking to a stakeholder about content for a use case? Value-added. Helping another business analyst on another project work through a particularly gnarly analysis problem? Non-value-added. Helping a tester on their own project team write test scripts? Non-value-added. Attending the team's daily standup? Non-value-added.

What this and many similar measurement programs seem to miss is that knowledge work is a team sport whose ultimate goal is delighting the organization's customers, not producing requirements, writing lots of code, or finding lots of bugs. Individual measurement programs encourage suboptimization. Depending on how they are designed, these systems may even actively discourage teamwork. People working under these types of systems quickly see that in order to get ahead, they can't afford to help anyone else. They have to keep hitting their own individual numbers, because it is not to their personal advantage to collaborate with the other members of the team.

Leading projects in this sort of environment is difficult enough, because as a project lead you typically have to lead through influence. That means bringing together people who report to three or more different leaders, each with a different understanding of priority, and whose attention is split across multiple projects. Throw a reward structure on top of that which by its very design encourages suboptimal behavior, and now you can't meet your own personal goals, which are usually focused on the legs of the iron triangle: scope, schedule, and cost. You start making suboptimal decisions yourself, trying to work a plan that conflicts with the individual goals of each of the team members. The result is a mind-numbing set of discussions in which the phrases "that's not my job" or "that's not your job" are heard far more frequently than "how can I help?"

I'm not suggesting you don't measure at all. That would be counterproductive in many other ways. I am suggesting that you change the focus of the measurement from output to outcome. If your organization pulls people together to work as a team, create measurements based on the outcomes of the team's efforts. Did they drive progress toward the business objectives they were seeking to impact? Did they work in an efficient and effective manner? Did they reduce waste in the overall process?

Some people will initially feel uncomfortable with this measurement approach. They may be concerned that they no longer control their own fate, that they will be too reliant on team members, and that they will have to carry others' weight. Perhaps, but that dependence also gives team members an incentive to help out those who need it, for the good of the entire team and the organization. Placing the focus on the outcome of the entire team will also encourage people to step outside their comfort zone and pitch in on tasks that may not be their direct responsibility, but that they have sufficient skills to perform on a short-term basis. They will become more well-rounded, and both the people and the organization benefit as a result.

Plus, if you decide to measure based on team outcomes, you might just get them.




Comments

I love the team measurement approach. It would definitely reduce non-productive one-upmanship and establish the esprit de corps necessary to actually accomplish some goals. What is your suggestion for getting the lackluster performer to step up their efforts without reverting back to (mostly) measuring individual contributions?


Nice article. I would take it as one of the arguments for why we use, and should use, backlogs in Agile. Measuring how many stories a team completed in each sprint is a team measurement, and it measures the overall 'outcome' of a sprint cycle. But the article explains it in a much more generic way, guiding even those project and HR managers who have not adopted, or cannot adopt, Agile in their business flows.


Hi Kent!
Been a while since we've talked.

I fully agree. A method I've used as an internal manager and an advisor to clients is to ask the sponsor, "How satisfied are you with the business value the team is delivering for you?" Data collection can be as simple as verbally asking for a number at a Sprint Review, or more formal, with an email or, heaven forbid, a simple survey. Subjective, but trends are telling. Three to six months after a release, you can also assess changes in actual business outcomes versus the intended outcomes. Of course, you can't attribute all the success (or failure) of a business outcome to a single variable (like the team's deliverables), but again, it is part of the picture, and trends are more telling than snapshots.

Then at the close of an assignment, I send an email to the PM/SM and their teammates asking, 1) "How satisfied are you with the person's contribution to the business goals?" and 2) "How satisfied are you with the person's behavior as a teammate?"

This info blends well with other performance metrics on velocity, efficiency, number of defects found during quality reviews, or what have you.

