Errors catch the eye, but familiarity creates assumptions

I did something deliberately bad in a previous post, in which I solicited feedback for a software testing meetup in my home county of Kent. I used a featured image that hadn’t been cropped to size, so it would look off-kilter and awkward when it was shared across social media.

[Image: the mis-cropped featured image, captioned “Wonky”]

This was to test a theory that I’ve had in the back of my mind for a while. I feel that well-constructed, painstakingly created content floats past the eye of anyone idly browsing through a field of unfamiliar content. It’s just another right thing in a sea of right things.

We’ve been conditioned to expect online content to be neat and tidy, with properly aligned text, beautiful imagery and so on, so something that isn’t right sticks out. It niggles at you. Your brain yells at you that this thing is not right, and it causes a moment of hesitation before you move on. And that moment of hesitation is more attention than the polished content gets.

When looking through unfamiliar content, we’re paying attention. We’re that much more engaged with what we’re looking at because it’s stuff we haven’t seen before, and our brains LOVE that. But something very different happens when we’re looking at something familiar.

When we’re looking at something we’ve looked at a hundred times before, it’s very easy to overlook issues, as our brain can sometimes fill in information for us using assumed knowledge and familiar memories of previous interactions. This can cause us to gloss over problems while our conscious mind occupies itself with other things.

So the more familiar we are with something, the easier it is for problems within it to hide in plain sight. This is partly why our end users, who aren’t as familiar with what we’re working on as we are, sometimes find problems that should have been glaringly obvious during testing.

While expert-level domain knowledge is of paramount importance to the effectiveness of a tester, don’t allow familiarity to creep in, or allow assumed knowledge to paper over problems as you go.

Exploratory Testing ≠ Random. Exploratory Testing = Chaos.

If you’re describing exploratory testing as anything like ‘Randomly poking about in the corners of the system’, you’re using the wrong language.

I first published this article on my LinkedIn profile about a year ago, but I’m so proud of it, I thought it was worth republishing on my own site.
~Dave


Many times, I’ve heard exploratory testing described using terms alluding to randomness. I’ve almost certainly been guilty of it myself. But on reflection, this is a stance that I wholeheartedly disagree with.

I’ve mentioned on a few occasions that I feel exploratory testing is a highly specialised skill. It should be the largest and most important part of a manual testing role, and it dovetails with standardised automated checks of the system under test to provide a true assurance of quality. So if you’re describing exploratory testing as anything akin to ‘Randomly poking about in the corners of the system’, you’re using the wrong language. You’re giving the impression that you’re only going to find issues by dumb luck, and you’re doing both yourself and your profession a great disservice.

And really, any randomness in a test renders that test unreliable. There’s nothing worse than finding a big, nasty bug that seriously compromises the quality of your system and not knowing exactly how you got there. If you can’t recreate what you did, then you can’t fully prove the issue exists, you can’t systematically check the area around the bug to see whether it occurs only in the way you’ve discovered or whether there are other ways to trigger it, and worst of all, you can’t get anything done about it. Having a bug returned with a ‘Cannot Reproduce’ status is highly frustrating.
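If some randomised input does creep into your testing, the least you can do is make it repeatable. Here’s a minimal sketch in Python (the generate_search_terms helper and the seed value are my own illustration, not anything from the post): pin the seed, log it with every run, and any failure can be regenerated exactly.

```python
import random

def generate_search_terms(seed: int, count: int = 20) -> list[str]:
    """Generate a batch of quasi-random search terms, reproducibly.

    A dedicated Random instance with a fixed seed always yields the
    same sequence, so any failure found with this batch can be
    recreated exactly by re-running with the same seed.
    """
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return [
        "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 12)))
        for _ in range(count)
    ]

# Log the seed alongside any failure; re-running with the same seed
# reproduces the exact inputs that triggered the bug.
seed = 42
for term in generate_search_terms(seed):
    print(f"seed={seed} term={term!r}")
```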

Software development, and especially exploratory testing, includes many elements of chaos in the mathematical sense. But it is very important not to use ‘chaos’ as a synonym for ‘random’. Edward Lorenz described chaos as: “When the present determines the future, but the approximate present does not approximately determine the future.”
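You can see Lorenz’s definition at work in a few lines of code. Here’s a minimal sketch using the logistic map, a textbook chaotic system (my example, not the author’s): two starting values differing by one part in ten billion, fed through an entirely deterministic rule.

```python
# Sensitive dependence on initial conditions, shown with the logistic map:
# x_next = r * x * (1 - x), which is chaotic at r = 4.0.
r = 4.0
a = 0.2            # "the present"
b = 0.2 + 1e-10    # "the approximate present"

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        # The gap starts out invisible and grows to total divergence.
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

Run it twice with identical inputs and you get identical outputs every time; the rule is fully deterministic. But know the starting state only approximately and, within a few dozen iterations, you know nothing useful about the end state at all. That’s chaos, not randomness.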

In our industry’s terms, this means that the user journey through a single path of the system may be a complicated one, with a large number of variables along the way. But it is still a deterministic practice, where the pattern for each variable can only evolve within a limited scope, making it possible to predict the end state of the system; it is not a stochastic practice, where outcomes are truly random. Therefore, with a good understanding of the system under test, well-written acceptance criteria, and proper insight into the path the developers will take to meet those criteria, a tester should be able to predict how the system will be affected by a change, precisely determine the automated checks they’ll need to run, and identify the areas they will need to focus on during exploratory testing.

One of the most often-used examples of chaos theory is that of a butterfly flapping its wings in Brazil causing tornadoes in Texas. This is very poetic, and almost entirely incorrect.

Chaos theory actually states that a butterfly flapping its wings can change the course of, accelerate, decelerate, or otherwise greatly affect the outcome of a tornado already in motion. That man Lorenz again: “A very small change in initial conditions had created a significantly different outcome.” The butterfly does not create or power the tornado, but it causes a tiny change to the initial state of the system being examined, which cascades into incrementally bigger changes in subsequent events. It’s the theoretical flap that makes the difference between a breeze blowing itself out over the Gulf of Mexico and an F5 tornado levelling Dallas.

It is therefore vitally important to understand that we work with the actual definition of chaos rather than the wider perception. The butterfly flap of a change to the search mechanics of a retail site won’t directly cause tornadoes in the payment handling system (unless the two systems are somehow intrinsically linked – but I bet you a penny that they’re not), so there’d be no point in exploring the payment system during testing of the change to the search mechanics.

But, for example, if that change is not explored properly and issues are not pinpointed, a quirk in the requests and responses made as a result of the search could go unnoticed. That quirk could lead to the service returning a search results page with incomplete or incorrect information, which will in turn lead to a downturn in traffic going to the payment handling system and a loss of revenue. A tornado, which could have been predicted and prevented, has hit.

So with chaos theory stating that even a small change to an initial state can yield catastrophic results down the line, it makes sense to explore first the area where any issues will originate, and move on from there. Examine the movement of the butterfly’s wings to determine the actual path and end state of the event, rather than finding yourself picking through the rubble that used to be Cowboys Stadium, cursing all Lepidoptera.