My software testing approach

In my last post about testing, I told you to stop thinking that testing can find all the issues. Today, I want to focus on what I do, instead of saying what you shouldn't do. Walk the talk, and such.

Disclaimer, because some people will attack me for presenting my experiences as facts: the following story is my experience, not the truth™. When I use words like "this is", I mean "this is my experience", not "this is a universally true fact".

Truths in testing barely exist anyway. If you encounter people who say that they have solved testing and that they can guarantee the outcome or quality, be wary. They might have a service to sell that benefits from that narrative.

This leads me to one of the truths I do believe in, so let's get that out of the way: best practices don't exist, only good practices in context.

This immediately exposes my true colours: yes, I am a context-driven tester. Hi, we still exist! We say "it depends" so often, it's like we have a freaking subscription to those words.

Anyway, what is my main job as a test consultant?

Basically, these days, it feels like my most important task is to unmask the false truths™ people believe about testing. My job is to move people from magical thinking around testing into the realm of harsh reality.

This means that I have to teach people new vocabulary about testing, and correct vocabulary that lives in the realm of magical thinking. A few examples:

Testing doesn't (just) find bugs, it informs us about the state of our software.

I want people to be more aware of the powers of testing when we aren't interacting directly with the software. Asking questions can inform us of many important things, and there doesn't have to be a computer involved. Testing can be done at any time in the software development lifecycle, and everyone involved can help with these efforts. When I unlock this option in people's brains, I know we're going to better places. The power of a group of people who put their minds to a problem should not be underestimated: it's a great source of diversity in thinking, from different backgrounds and with different perspectives.

Even though testing informs, it can never provide a complete picture.

You'll have to make peace with that, or else you're once again stuck in the realm of magical thinking. With testing, you can find so much useful information (risks, issues, feedback loops, questions, answers) that informs decision-making, but sadly, you'll never get a complete picture. Testing can show you part of reality, but it can never show the absence of the problems you don't want. Making software is a people sport. We make mistakes, we miss things, and this whole endeavour is also quite complex. Risks are always involved and cannot be completely wiped off the table, ever.

Testing doesn't exist in a vacuum.

This is an important one. Some people think that testing on its own "does something", but it doesn't. Like I said earlier, it uncovers information. You have to do something based on that information: decide whether you're going to act on a risk, fix an issue, release a new version, etc. That's why it baffles me that some companies still slap testing on as an end phase after requirements and development, and expect it to go fast and smoothly. Applying testing like this, as a phase, is a recipe for disaster.

Who should know the basics about testing?

Someone commented on my last post: "you cannot expect people who aren't testers to understand testing", and this seems like a good place to respond to that.

As testers, we are expected to understand many things in software development that aren't directly about testing in order to do a good job: coding (or at least reading code), business analysis, requirements gathering, and project management, to name a few (you could expand this list with many more!).

Why shouldn't people in other roles related to software development be expected to know the basics about testing? It's exactly the magical thinking about testing, for example the idea that testing can give you a complete picture of your reality, that creates a ton of misery for everyone involved (whether they subscribe to this magical thinking or not). It could easily be rooted out with some guidance!

As a tester, I am therefore very careful with my vocabulary around testing.

I correct other people's vocabulary about testing, and I try not to be obnoxious about it (not 100% possible). No, testing cannot prove that a release is going to be without problems; risk is part of the deal. No, we cannot guarantee quality either; quality is in the eye of the beholder. We can do a risk analysis, look at quality attributes and decide which ones we think are important, and focus on those in our test strategy, but that is not the same as guaranteeing quality.

I guess this is where my opinion might diverge from that of many others in the software testing community: I'm not that interested in quality as the focal point of testing, not any more. It feels like a dead horse to me. I do care about quality, but in my consulting I've seen the word thrown around so much, and meaning so many different things to so many people, that it loses its usefulness as something to work with. So I mainly use risks and quality attributes as practical tools to drive testing efforts.
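To make that concrete, here's a minimal, hypothetical sketch of risk-based prioritization: list the risks the team can think of, tag each with the quality attribute it threatens, score likelihood and impact, and let the scores decide where testing attention goes first. Every name and number below is made up.

```python
# Hypothetical risk register: every description and score here is invented.
# Each entry: (description, quality attribute, likelihood 1-5, impact 1-5)
risks = [
    ("Payment provider times out under load", "reliability", 4, 5),
    ("Discount code can be applied twice", "functional correctness", 2, 4),
    ("Screen reader skips the order summary", "accessibility", 3, 3),
]

# Sort by likelihood x impact so the biggest risks get attention first.
for description, attribute, likelihood, impact in sorted(
    risks, key=lambda r: r[2] * r[3], reverse=True
):
    print(f"{likelihood * impact:>2}  [{attribute}] {description}")
```

The scoring itself isn't the point; the conversation the team has while filling in that list is where most of the value sits.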

Another thing I always try to do (with varying rates of success) is to steer people away from using test cases as a crutch. My god, the obsession with test cases is as strong as ever. What is it about them that tickles people's brains so much? The structure? The false sense of security? I mean, sure, have some test cases (use test design, please) and automate the shit out of them, and then move on beyond this realm.
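To show what I mean by "use test design": pick a technique and let it generate the cases, instead of writing them ad hoc. A minimal sketch, assuming Python and pytest, applying boundary-value analysis to a hypothetical order form that accepts 1 to 100 items:

```python
import pytest

def accepts_quantity(qty: int) -> bool:
    """Hypothetical function under test: an order form allowing 1-100 items."""
    return 1 <= qty <= 100

@pytest.mark.parametrize("qty,expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # on the lower boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(qty, expected):
    assert accepts_quantity(qty) == expected
```

Four cases, chosen by a technique rather than by gut feeling, and cheap to automate and extend.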

I personally feel constrained by test cases. Instead, I want to use my best human qualities when I'm testing: curiosity, creativity, learning. That's why my love for exploratory testing can never be dimmed. The most impactful issues I've found always came when I was exploring the product and followed a hint that something seemed off. So yeah, I'd rather teach people how to structure their exploratory testing and pair with them for cool results.
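By the way, "structure" doesn't mean scripts. One common shape, borrowed from session-based test management, is a timeboxed session with a charter; this example is made up:

```
Charter:  Explore the checkout flow with invalid payment data
          to discover how errors are surfaced to the user.
Timebox:  60 minutes
Notes:    what was covered, what seemed off, which questions came up
Debrief:  share findings with the team, decide on follow-up sessions
```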

Confining testing to the realm of test cases is one of the few cardinal sins that exist in testing. There, I said it.

Alright, time to wrap up this post. It was a meandering kind of post; let me know if you liked it or if it was too all over the place. This is a glimpse of how my brain works: I just open up my Ghost editor and start writing. I wrote this post in 60 minutes.

Is there another testing topic that you'd like me to write about? Do let me know!