
When Agile Developers Test Well

by Alan S. Koch, PMP

Let's face it -- we developers have gotten sloppy with our testing over the years.

Before we had modern debuggers and Integrated Development Environments (IDEs), we knew that testing was our best way of finding the problems in our code. Today our debugger leads us around by the nose. We test what it wants us to test, which is not necessarily what needs to be tested.

And before we had testers coming right behind us, the responsibility for testing fell squarely on our shoulders. If we didn't do it, our pager went off in the middle of the night. Today we know that whatever we miss, the testers will find. (And shame on them if they miss it!)

And way back in the day when running a compile took an hour or more (yes, I'm dating myself), we knew that our productivity depended on removing as many defects as possible each time we ran a compile. Today we think of productivity as writing lots of code. Testing is a different activity that involves someone else's productivity.

Of course, some of us do a better job of testing than others. But we as a group can do much better. Have you ever received training on how to test? Has your team defined what testing well means for developers? Do your testers think you test well? (I dare you to ask them!)

BTW, if you answered "Yes" to any one of those questions, you are ahead of your peers. If you answered "No" to any of them, you have work to do!

Why It Matters

When we were riding our software development barrels over waterfalls, how well we tested was not a terribly big issue. Insufficient developer testing contributed to the ills we experienced, but it was only one of many contributors -- and certainly not the biggest one.

As we become more Agile, we strip away those things that contribute to the pain of the waterfall: We replace big bang development with iterative development. We replace up-front predictive planning with adaptive incremental planning. We replace massive signed-off Requirements Specifications and painful change control with a progressive refinement of our understanding of customer value. And we draw our distant customer with U-shaped involvement (high at the beginning and end of the project but low in the middle) into a continuous dialog.


As the whole development process becomes more efficient and Agile, any lack of discipline with which we developers approach testing becomes more and more obvious -- and painful. The Agile Principle of "Continuous Attention to Technical Excellence" stretches us in many directions; and doing better testing is one of them.

We can't satisfy our customer with crappy code. And we can't move forward when we're constantly going backward to fix defects. Achieving the full promise of Agility requires that we developers adopt really good testing practices and use them consistently in every Sprint.

As Kent Beck (the author of Extreme Programming Explained) wrote in his foreword to my book, "Agility in software requires iron discipline -- [including] rigid and high quality goals."

What It Looks Like

When Kent Beck wrote his book before the turn of the millennium, two of the 12 Practices he described (Test-Driven Development and Continuous Integration) pointed us to new ways of working that can really move the needle on quality and productivity. (Do you use these best practices that we've known about for more than 20 years?)

But those practices are only the beginning of the story. Merely doing that amount of testing doesn't ensure quality and productivity; doing that testing well is what we must strive for. We developers must be doing good Unit Testing, good Integration Testing, good Functional Testing and good Regression Testing. (Yes, we are responsible for all of them!) So let's take a look at what doing good testing looks like.

Unit Testing

The phrase "Unit Testing" is the most badly abused term in all of software development (in my humble opinion). In almost every organization I have interacted with for decades, it simply means whatever-testing-the-developer-chooses-to-do-and-for-most-developers-that-ain't-much. Unit testing should be a very specific type of testing that has very specific purposes. Yes, it is done by the developer who wrote the code, but no, most developers don't do true Unit testing.

True Unit testing means testing an individual code unit (as opposed to testing the system with a particular unit included in it). When we do good object oriented (OO) design (with encapsulation, information hiding and separation of concerns), true unit testing becomes relatively easy. (BTW, using an OO language does not mean you are doing good OO design. Design reviews and code reviews are a must in order to make sure that happens! And you can do good OO design on code that is written in COBOL, assembler or any other non-OO language.)

In well-designed code, each method (or function) is a "unit" that can and must be fully tested. If you have JUnit or NUnit or some similar facility in your IDE, then use it to write automated tests for everything each method/function does. If your IDE doesn't have that sort of capability, then write test methods/functions that automate the testing. If the tests are not executing every line of code in each unit, your testing is not yet complete. (OK, testing some error traps requires using the debugger to stop execution so you can poke in the error condition it is looking for. If you can't do that programmatically in your IDE, then do it manually. But DO IT!)
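
For a concrete picture, here is a minimal sketch in JUnit 5 of what testing one unit can look like. The DiscountCalculator class and its rules are hypothetical stand-ins for one of your own methods; the point is that the tests exercise the unit by itself -- happy path, boundaries and error traps alike.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // The unit under test: a hypothetical stand-in for one of your own methods.
    class DiscountCalculator {
        double applyDiscount(double price, double rate) {
            if (rate < 0.0 || rate > 1.0) {
                throw new IllegalArgumentException("rate must be between 0 and 1");
            }
            return price * (1.0 - rate);
        }
    }

    class DiscountCalculatorTest {
        // Happy path: a valid rate is applied to the price.
        @Test
        void appliesDiscountToPrice() {
            assertEquals(90.0, new DiscountCalculator().applyDiscount(100.0, 0.10), 0.001);
        }

        // Boundary: a zero rate leaves the price unchanged.
        @Test
        void zeroRateLeavesPriceUnchanged() {
            assertEquals(100.0, new DiscountCalculator().applyDiscount(100.0, 0.0), 0.001);
        }

        // Error trap: an out-of-range rate must be rejected, not silently applied.
        @Test
        void rejectsRateAboveOneHundredPercent() {
            assertThrows(IllegalArgumentException.class,
                    () -> new DiscountCalculator().applyDiscount(100.0, 1.5));
        }
    }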

Integration Testing

Integration testing is not just slamming everything together to see if it works. It is explicitly testing every integration point to be sure the interface works as intended and all interface errors are being handled properly. If you are exposing an API, complete testing will include making sure that all the ways people will abuse it have been accounted for and handled properly.

If you are using Service Oriented Architecture (SOA), then Integration testing means testing every place that each service is used. A micro-service might consist of only one unit, but if a Service consists of many code units, you must first integration test inside the service to be sure all of those units come together as needed, and then test the service where it is used.

In complex systems, Integration should be seen as a progression of integrating units to form the basic components, then integrating those components to form bigger ones until the complete system has been integrated. In that case, Integration testing will not be a single activity, but a structured series of steps.

And yes, Integration tests need to be automated because the complexity of almost all interfaces makes it nearly impossible to fully test them manually.
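
To make that concrete, here is a sketch of an automated test for a single integration point. The OrderService and PaymentGateway classes are hypothetical; what matters is that the test exercises the interface between them, including how abuse of that interface is handled, not just the happy path.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical collaborators: a gateway, and a service that calls it.
    class PaymentGateway {
        boolean charge(String sku, int quantity) {
            return sku != null && !sku.isEmpty() && quantity > 0;
        }
    }

    class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }

        // Returns null on success, an error code on a handled failure.
        String placeOrder(String sku, int quantity) {
            if (sku == null || sku.isEmpty() || quantity <= 0) {
                return "INVALID_ORDER"; // abuse caught and handled at the boundary
            }
            return gateway.charge(sku, quantity) ? null : "PAYMENT_DECLINED";
        }
    }

    class OrderServicePaymentIntegrationTest {
        // Happy path across the interface: the gateway accepts a valid order.
        @Test
        void validOrderIsChargedThroughGateway() {
            assertNull(new OrderService(new PaymentGateway()).placeOrder("SKU-123", 2));
        }

        // Interface abuse: a malformed request must come back as a handled
        // failure, not an exception leaking across the boundary.
        @Test
        void malformedOrderIsRejectedCleanly() {
            assertEquals("INVALID_ORDER",
                    new OrderService(new PaymentGateway()).placeOrder("", -1));
        }
    }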

Functional Testing

After effective Unit and Integration testing have been done, we need to test the functionality of the system. And no, having testers on the project does not remove that from our plate! How can we even consider claiming to be done with something if we haven't checked to be sure it does what it is supposed to do?
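
A Functional test, by contrast, checks the behavior the customer asked for against the assembled system rather than any single unit. A minimal sketch, with a hypothetical in-memory Store standing in for the system under a test configuration:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // A hypothetical stand-in for the assembled system in a test configuration.
    class Store {
        private final List<String> cart = new ArrayList<>();
        void addToCart(String sku) { cart.add(sku); }

        // Checkout succeeds only for a non-empty cart and yields an order number.
        String checkout() {
            if (cart.isEmpty()) throw new IllegalStateException("empty cart");
            return "ORD-" + cart.size();
        }
    }

    class CheckoutFunctionalTest {
        // The Story's acceptance check: after checkout, the customer has an
        // order confirmation. We are testing system behavior, not one unit.
        @Test
        void customerGetsConfirmationAfterCheckout() {
            Store store = new Store();
            store.addToCart("SKU-123");
            assertNotNull(store.checkout());
        }
    }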

Regression Testing

On Agile projects, we know better than to do big-bang development. We evolve the system in very small steps. (By the way, a Sprint is not a single step, it is many steps because it contains many programming increments -- one or more per User Story.)

That means that we are constantly changing the system. Every step in the system's evolution is a change, and we all know that any change can cause regressions. These unintended negative side effects are nearly unavoidable because software quickly becomes so complex that it cannot be seen in one eye-full or thought about in one brain-full. This makes continuous Regression Testing a necessity on Agile projects.

I have found that the best approach any time I make any change to anything (even a one-liner) is to start with the hypothesis that I have caused a regression somewhere, then Regression Test until I either disprove the hypothesis or find the regression. (And fixing the regression is a change, so I must start Regression Testing all over again!)

This is why automation is absolutely necessary for Unit and Integration testing, and why it is valuable for Functional testing as well. Operating this way, my development routine looks like this:

  1. Write/change the code and the automated tests for whatever User Story I am implementing or whatever defect I am fixing or whatever refactor I am performing. (Yes, "and automated tests." I am not done with coding until the tests are up to date!)
  2. Run 100% of the Unit Tests for every unit that I made any change to.
  3. Run 100% of the Integration Tests for interfaces involving any of the units that I made any change to.
  4. Run 100% of the Functional Tests that involve any of the units that I made any change to.
  5. Check my changes into the Continuous Integration system and let it run its tests. (Another topic for another day.)

At each step above, if I find a problem that requires me to change even one line of code, I start over with step 1. Only after I have achieved success in all five of those steps do I move on to the next coding task. Regression Testing is part of my job. Therefore, maintaining and running all of those automated tests is also part of my job.
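
One way to make steps 2 through 4 workable is to label every automated test by level, so that each subset can be run on demand. Here is a sketch using JUnit 5's @Tag annotation; the tag names are just a convention I am assuming, not a standard.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class TaggedTestExamples {
        @Tag("unit")          // step 2 runs everything tagged "unit"
        @Test
        void aUnitLevelCheck() { assertTrue(true); }

        @Tag("integration")   // step 3 runs everything tagged "integration"
        @Test
        void anIntegrationLevelCheck() { assertTrue(true); }

        @Tag("functional")    // step 4 runs everything tagged "functional"
        @Test
        void aFunctionalLevelCheck() { assertTrue(true); }
    }

Build tools that run the JUnit Platform (Maven Surefire and Gradle, for example) can filter on these tags, so each step of the routine becomes a single command.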

But What About Productivity?


Remember what we said about productivity at the outset? If productivity means number of lines of code written, then of course your productivity would take a hit (even if you count the lines of code in the automated tests). But how much of our coding time is spent on new work versus diagnosing and fixing defects found by other developers, the testers, or (worst of all) end users in production? Can we really call it "good" productivity if a large percentage of our coding effort is wasted on rework?

And if you think defects aren't a big productivity-waster, I dare you to keep track. For a Sprint or two, log how much time you spend designing and writing code to implement new User Stories vs. how much time you spend diagnosing and fixing defects found by other people. I dare you! The results will shock and embarrass you.

True productivity is measured in terms of value deployed to our customers. And the mode of work described here results in the best productivity by that measure. We developers can make our Agile teams run like clockwork if we learn to test well. And if we consistently test well.

Webinar

I presented a webinar on this topic in May 2016. You can watch the recording here.



