Testing with guardrails
"We want you to test our software well!" - says the manager type. "We care about quality, you can see that reflected in our company values".
"Okay", I say as I try to surpress a cough after that last bit about company values, and I begin to thoroughly test the software in a largely exploratory manner, uncovering all kinds of information, which will inevitably have a big impact on the progress of the project.
"No....not like that!" - says the manager type then. "Testing is delaying our project, and I can't have that." (They never say this literally, but this is literally always the reason. Their little deadline KPI is threatened, and they can't risk their bonus).
Does this sound familiar: whenever you do actual deep testing and find worthwhile information, you get reined in the moment you report it?
You might be testing with guardrails!
I'm not against automation in testing at all, but sometimes I feel that testing is mainly automated because simple answers are what we want. And what managers want. Test automation can be a guardrail that's hindering good testing, as management in many IT companies has largely reduced the testing role to a "produce (automated) test cases" job.
Automation and test cases are nice, but they have a big drawback: they're deterministic, repetitive, and will not uncover new information. You are firmly working in the known-known domain. I know there are exceptions to what I'm saying here (mutation testing, etc.), but the statement largely holds true.
Exploratory testing, on the other hand, purposefully finds new information by going into the known-unknown domain in a structured manner. But in some places, it is discouraged. Not with words, but in how people in power react to the information that you report based on exploratory test sessions. It can often be shocking information that automation pipelines have completely missed (because they usually test parts of applications, never the thing as a whole. But users don't care how pretty your automation pipelines are, they just want to get some shit done with your software).
It's the unit test versus integration test meme on a larger scale, you know the one:
When you look at the concept of a unit from a zoomed-out POV, namely the whole chain of systems required to deliver a certain user experience, this meme still holds true. You can test codebases (units) on their own, the pipeline can be green, and you can still have a non-functioning user experience (integration) somewhere.
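A toy sketch of what I mean, with hypothetical pricing and checkout functions standing in for two separately-tested systems in the chain. Both unit tests are green, yet the integrated chain produces nonsense:

```python
# Hypothetical example: two "units" owned by different systems in the chain.

def get_discount_percent(user_tier: str) -> int:
    """Pricing system: returns the discount as a whole percentage (20 = 20%)."""
    return {"gold": 20, "silver": 10}.get(user_tier, 0)

def apply_discount(price: float, discount: float) -> float:
    """Checkout system: expects the discount as a fraction (0.2 = 20%)."""
    return price * (1 - discount)

# Unit tests: each one passes against its own local assumption.
def test_get_discount_percent():
    assert get_discount_percent("gold") == 20      # green

def test_apply_discount():
    assert apply_discount(100.0, 0.2) == 80.0      # green

# Integration: wire the units together and the user experience breaks.
# A gold customer "pays" -1900.0, because 20 (a percentage) is fed into
# a function that expects 0.2 (a fraction). Both pipelines stayed green.
def test_checkout_chain():
    assert apply_discount(100.0, get_discount_percent("gold")) == 80.0  # fails
```

Every unit does exactly what its own tests say it should. The user experience is still broken.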
Automating integration testing on a chain of systems is discouraged for good reason: that shit is brittle as fuck. But then you are left with a big risk: how do you know that the entire chain of systems is functioning well?
Modern IT organisations have this largely covered by having good observability, tracing and logging across the entire chain. This is great! It makes testing a lot easier as well. I cannot overstate my love for observability. It has so much more value for me as a holistic tester than CI/CD pipelines do.
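If you haven't seen this in practice: here's a minimal sketch of what tracing across a chain can look like, using the OpenTelemetry Python SDK. The service and attribute names are made up, and I'm printing spans to the console instead of shipping them to a real backend:

```python
# Minimal tracing sketch (requires: pip install opentelemetry-sdk).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# In a real setup you'd export spans to a collector/backend; for this
# sketch, printing them to the console is enough.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("webshop")  # hypothetical service name

# One user action, traced across the "chain": nested spans show where
# time goes and where things fail, instead of leaving you guessing.
with tracer.start_as_current_span("checkout") as checkout:
    checkout.set_attribute("order.id", "hypothetical-123")
    with tracer.start_as_current_span("pricing.get_discount"):
        pass  # call into the pricing system here
    with tracer.start_as_current_span("payment.charge"):
        pass  # call into the payment system here
```

With something like this in place, you can follow a single user action through every system it touches, which is exactly the whole-chain visibility that green pipelines don't give you.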
However, I am still doing contracts at companies where observability is NOT in place, and I can tell. Not having observability has implications beyond just the technical (making it harder to create software). It also provides "blindness as a service" for manager types.
Suddenly, testing can uncover surprises again. Big surprises.
And you can really tell if the leadership layer at a company actually wants to improve their software, or not.
If they don't, you'll often find that:
- Test KPIs are rewarded (number of automated tests, number of bugs raised and closed).
- Testing gets blamed for project delays, even though testing is only the messenger. Testing on its own doesn't change anything; we just hold up a mirror to the state of the system, basically!
- Testing is reduced to its deterministic state (known-known domain); the discovery part (known-unknown + unknown-unknown) is not wanted because of how much of a wrench it can throw into project deadlines.
- The tester role is reduced to deterministic tasks only, preferably automated. I'm sorry, but being an SDET is just not appealing to me.
On the other hand, with willing leadership, testing can demonstrate:
- shortcomings in the entire software development process, which translates to:
- opportunities to improve the software development process
- valuable information on what risks remain
- how valuable it is to deliberately uncover information that you know you don't know, and to occasionally stumble onto things you didn't know you didn't know (yeah, good luck with this sentence). Exploratory testing can go into the known-unknown domain on purpose; no one can willingly go into the unknown-unknown domain. Black Swan events are sadly part of our reality, and no amount of testing can save you from them.
Anyway, whenever your testing efforts uncover problems that you think require solving*, do you have a hard time convincing management? Are your problems being downplayed? You now know what it is:
Testing with Guardrails. Crippled testing. Straitjacket testing. Deadline-driven development.
*I hereby assume that you know how to report about the testing you are doing, a skill that, frankly, many testers completely suck at. A topic for another post, I guess.
More unit versus integration memes
Bye!