The real reason we need to invent a time machine

Photo by Shawn Lee / Unsplash

I was reading a very interesting article in The Guardian: 'All people could do was hope the nerds would fix it': the global panic over the millennium bug, 25 years on. At the turn of the millennium, I was a mere 14 years old and did not yet know that, some ten years later, I would become one of those nerds by profession. The whole panic around the Y2K bug also largely flew past me; as a 14-year-old, I was of course only obsessed with myself and my own insecurities. Puberty was so much fun /s... Anyway!

The Guardian article inspired this blog post, in which I will once again lament this Catch-22 situation: you cannot truly prove that testing has worked. We can't A/B test the process of software testing because we have not yet invented time travel (and I hope we never will).

Let's take the Y2K conundrum as our hypothetical example. Nothing extremely devastating happened (some bad things did happen, but they're out of scope for this article), so the reaction was split into two camps.
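(Quick refresher for anyone who wasn't debugging anything in 1999: the root of the problem was that many systems stored years as two digits to save space, so the rollover from 99 to 00 made 2000 indistinguishable from 1900. A minimal, illustrative sketch in Python, not taken from any real affected system:)

```python
# Toy illustration of the two-digit-year problem (made up, not real legacy code):
# with years stored as two digits, 2000 ("00") subtracts as if it were 1900.

def years_between(start_yy: int, end_yy: int) -> int:
    # Two-digit storage: 1999 is stored as 99, 2000 as 0.
    return end_yy - start_yy

print(years_between(99, 0))  # -99 years, instead of the expected 1
```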

Camp One: man, this millennium bug was sure overblown, we've been scammed by the nerds into thinking this was going to be a problem!

Camp Two: good thing we invested time and money in being prepared; we surely avoided horrible bugs.

As a tester, surely you recognise the Camp One sentiment. When problems don't arise, some people in tech wrongly assume that testing doesn't add value and will gladly cut costs there. On the flip side, when stuff does go wrong, it's testing's fault for not having caught the issues. Convenient, huh? You won't get credit when it's due, but you do get the blame, even when it wasn't your fault!

Some people wrongly single out testing from the rest of software development. This is not realistic, as you cannot separate testing from software development as a whole. I get that it's nice to have a scapegoat to blame, but believe it or not, testing's main concern isn't to catch bugs!

In my view, testing's main concern is to craft a sensible strategy, identify the biggest risks, and decide where to spend our most precious resource: time. We should test most extensively where we perceive the biggest risks; whether that testing is automated or done by a human is another matter. We should also aim to find known unknowns and unknown unknowns, but we will never get a complete picture. Our end result should mainly be actionable information and (fast) feedback loops that support and inform software development decisions.
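To make the "decide where to spend time" point a bit more concrete, here is a toy sketch. The areas and scores are made up, and real risk analysis is of course more nuanced than likelihood times impact, but the idea is simply: rank, then spend your time from the top down.

```python
from dataclasses import dataclass

# Toy illustration (not a real tool): rank areas to test by perceived risk,
# so the limited time we have goes to the riskiest areas first.
# The areas, likelihood and impact scores below are invented for the example.

@dataclass
class Area:
    name: str
    likelihood: int  # how likely we think a failure is, 1-5
    impact: int      # how bad a failure would be, 1-5

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

areas = [
    Area("date handling around year boundaries", likelihood=4, impact=5),
    Area("report layout cosmetics", likelihood=3, impact=1),
    Area("payment batch processing", likelihood=2, impact=5),
]

# Spend the most testing time where perceived risk is highest.
for area in sorted(areas, key=lambda a: a.risk, reverse=True):
    print(f"{area.risk:>2}  {area.name}")
```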

It would be so much fun if we could time travel and truly test out the effects of testing. For the Y2K bug, we could test out the Camp One scenario, which would mean ignoring all the perceived risks and doing nothing. Save the money, let the systems rot, see what happens! I would have loved to see the carnage. Obviously, I am in Camp Two: I think it was wise that we invested in modernising the systems and tried to avoid serious issues. However, I cannot prove this position!

In this sense, software testing is close to a scientific process. We keep acquiring knowledge (and experience) and adding it to our "knowledge base", but truly proving causality is out of reach.

This is my suggested approach for "proving" that testing adds value: we should be more explicit about what our testing did and didn't do.

Many of us are too tacit (or simply too lazy) in our reporting about testing.

If a tree falls in the forest and no one is around to witness it, does it make a sound?

This is what happens a lot in our community: you're probably doing a lot of testing, but if you aren't talking about it, reporting the results and your assessment of them, did the testing really happen? Did it need to happen? We should do ALL our testing for a clear reason, and we should be able to articulate clearly what those reasons are. Yes, this holds true for every test you're doing. What risk does it cover? Why are you doing it? What information are you hoping to uncover, and why does it matter for the process of software development?
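One way to make those reasons visible is to attach them to the tests themselves. Here is a minimal sketch, assuming pytest; the `risk` marker is a hypothetical custom marker you would have to register yourself, and the billing function is a made-up stand-in for whatever you actually test.

```python
from datetime import date

import pytest

# Hypothetical convention: every test carries the risk it covers.
# "risk" is a custom marker; register it in pytest.ini / pyproject.toml
# so pytest doesn't warn about an unknown mark.

def billing_days(start: date, end: date) -> int:
    """Made-up function under test: number of days in a billing period."""
    return (end - start).days

@pytest.mark.risk("Century rollover: date arithmetic across 1999-12-31 -> 2000-01-01")
def test_billing_days_across_century_boundary():
    # Information sought: does the day count survive the year-2000 boundary?
    assert billing_days(date(1999, 12, 31), date(2000, 1, 1)) == 1
```

Markers like this can then be collected into a report, which makes the "what did we test, and why" question answerable without archaeology.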

The more invisible your testing efforts and test results are, the more you open yourself up to those vague criticisms from people who love to be in Camp One.

The closest we can come to proving that testing works is to be exceptionally clear about its process, its results, and its limitations.

I know this might be scary to do, but you should also name what you didn't test and why. Like I said, time is our most precious resource, and only idiots believe you can test everything.

Anyway, this is a topic I will explore in more detail this year: reporting about testing, being honest about testing.

See you soon!