As I have mentioned before, I recently read The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas. One of the tips is “Use Saboteurs to Test Your Testing”. This reminded me of my first encounter with error seeding.
While working on my master's, I took a software testing course. I had been working on test teams for several years, so I felt confident coming into the course. We were asked to take a program we had written in a prior course as our case study and to create a suite of tests for it. I did so and discovered several bugs I had not identified when I turned in the program. Overall, I felt very confident in the test suite I had created. Throughout the semester, we would apply the various concepts we were learning against the program we had selected.
When the concept of error seeding was introduced, the teaching assistant (TA) for the course took our programs and introduced some errors into them. She went through quite a bit of work. Some errors she introduced were simple to create – for example, changing <= to <. Others were more subtle, such as introducing a side effect into a method that previously had none. We were given new binaries and told to run our tests and submit the results. (We were not told how many errors had been introduced into the program.) I ran my suite of tests, all automated by now given that this was an ongoing project. Sure enough, my tests discovered 3 changes in behavior. I felt good about the results. Then we were told that each program had 10 errors in it. I was humbled, and I learned a lot by examining what my suite had not found.
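To make the simple seeded error concrete, here is a minimal sketch in Python of the `<=` to `<` mutation described above. The function names and values are my own illustration, not from the course; the point is that only a test suite exercising the exact boundary value will notice the seeded change:

```python
def is_within_limit(value, limit):
    # Original implementation: the boundary value itself is allowed.
    return value <= limit

def is_within_limit_seeded(value, limit):
    # Seeded error: <= changed to <, so the boundary case now fails.
    return value < limit

def boundary_test(fn):
    # A boundary test checking value == limit. This distinguishes the
    # original from the seeded version; a test using only value = 5
    # or value = 15 would pass against both and miss the seeded error.
    return fn(10, 10)

print(boundary_test(is_within_limit))         # original behavior
print(boundary_test(is_within_limit_seeded))  # seeded error exposed
```

A suite without that boundary case would report the seeded binary as fully passing, which is exactly the gap error seeding is designed to reveal.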