
Testing Balance

By Kent McDonald

I think Sting best characterized the situation I find myself in on a current project: "caught between the Scylla and Charybdis." We are having to choose how much testing to do on a development software upgrade, weighing the time and cost of hunting issues that may or may not exist against the value of catching issues before they can impact Pre-Production processes.

It seemed like a straightforward effort (they always do, don't they?). The purpose of the project is to upgrade the version of the software we use to develop our data warehouse processes. Everything we had read about the upgrade led us to believe that coding changes would be minimal and that we would just need to convert data sets to operate in the new version. Because of the structure of our development environment, it is impractical to run a complete cycle there, which would be the ideal way to determine the upgrade's impact on our hundreds of scripts. We had reviewed the release notes prior to the upgrade and tested several key functions, and we were relatively comfortable that upgrading the development environment and then letting regular development occur for a few weeks would surface any key issues before we upgraded Pre-Production (where a full cycle runs on a regular basis) and Production. This approach was recommended by the project's Tech Lead, our resident expert on the development environment, who is trusted for his thoroughness and expertise.

We performed the upgrade in the development environment and began running into issues. None of them was insurmountable, but based on recent experience with another software upgrade, many of the developers and their leaders grew nervous that more testing was not being done. I asked the Tech Lead what further testing should be done in development. He indicated that he had no good way to narrow the testing down to make effective use of everyone's time, and suggested that the best way to confirm what issues we might run into was to resolve the issues we knew about and then upgrade the Pre-Production environment so that a full cycle could run. Although the Pre-Production environment is not used by end users, there was concern that upgrading it without more testing could impact projects trying to promote their changes through the environments, and it could certainly make the lives of our operations people a nightmare.

So I am caught between the rock of being mindful of time and budget and the hard place of performing enough testing to put the most risk-averse development team at ease. I can see both points of view, so I am trying to help the team find a sensible middle course, which may still carry a small but acceptable amount of risk. The trick, of course, is that the definition of "acceptable risk" differs depending on the viewpoints of those involved. The Tech Lead, who is very familiar with the software and with what he has already tested, is willing to take the risk of not finding issues in development, while various developers and their team lead justifiably do not want their projects to be overly impacted by issues that don't appear until the move to Pre-Production.

My approach is to share with the development team lead what testing has occurred to date, do a quick analysis to determine what additional functionality we can expect the upgrade to impact, and ask the developers to identify the scripts they own that use that functionality and test those in development. It will not provide full coverage, but it will at least provide some measure of due diligence. Of course, we'll also put a fairly substantial support plan in place so that we can react quickly to any issues that come up in the Pre-Production environment.
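The targeted-testing step above can be sketched in code. This is a minimal illustration, not our actual tooling: it assumes the scripts are plain-text files in a single directory, and the impacted function names shown are hypothetical stand-ins for whatever the release notes flag as changed.

```python
import os

# Hypothetical list of functions flagged as changed in the new version;
# the real list would come from the release notes and our quick analysis.
IMPACTED_FUNCTIONS = ["convert_dataset", "load_warehouse"]

def find_impacted_scripts(script_dir):
    """Map each script filename to the impacted functions it references."""
    impacted = {}
    for name in sorted(os.listdir(script_dir)):
        path = os.path.join(script_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        hits = [fn for fn in IMPACTED_FUNCTIONS if fn in text]
        if hits:
            impacted[name] = hits
    return impacted
```

Each developer could then be handed just the entries for the scripts they own, which keeps the ask small while still covering the functionality most likely to have changed.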

Although I'm still in the middle of this particular project, it doesn't hurt to pause for a quick lessons-learned review. Had I the ability to travel back in time and change how I approached this situation, here are some things I would have done differently:

  • In cases where it is not practical to test everything that may be impacted by a change such as a software upgrade, communicate the team's testing plans to the developers who are impacted and ask for their thoughts on other steps that may be needed.

  • Use information available on the web to identify not only issues that resulted from software upgrades, but also expected behavior changes, and then identify in general where cycle processes may be impacted.

  • Put on my analysis hat and pop the “What about” stack to make sure we have at least thought about potential scenarios.

  • Even though the project is considered “small” and straightforward, take the time upfront to perform a legitimate risk analysis and use the results to guide test planning.
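Even a lightweight risk analysis like the one in that last bullet can be made concrete with a simple exposure calculation. The risk items and scores below are hypothetical, purely to illustrate ranking risks by likelihood times impact so the highest-exposure areas get test attention first.

```python
# Hypothetical risk register for an upgrade like this one; the items
# and 1-5 scores are illustrative, not from the actual project.
risks = [
    {"risk": "Scripts using converted data sets fail", "likelihood": 3, "impact": 4},
    {"risk": "Pre-Production cycle blocked for other projects", "likelihood": 2, "impact": 5},
    {"risk": "Operations paged overnight on job failures", "likelihood": 2, "impact": 3},
]

def prioritize(risks):
    """Rank risks by exposure (likelihood * impact), highest first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

for r in prioritize(risks):
    print(r["likelihood"] * r["impact"], "-", r["risk"])
```

The numbers matter less than the conversation: scoring forces the Tech Lead and the developers to put their differing definitions of "acceptable risk" side by side.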

Perhaps the biggest lesson learned is that sometimes the small projects are the ones that reach up and bite you the most, because your guard is down just a little. I am a huge proponent of adjusting your approach to fit the needs of the project, but that right-sizing of method should always be guided by consideration of risk and the triple constraints. The opinions and positions of stakeholders inevitably fit into that consideration as well, as does the location of fierce monsters from Greek mythology.

©Copyright 2000-2017 Emprend, Inc. All Rights Reserved.