I care about bugs!
This is the first in a series of Bug Advocacy posts which I'll be writing in the coming weeks. They're a follow-up to the workshop that I gave at TestBash, "Supercharging Your Bug Reports".
During my TestBash workshop, and in the days following it, I had some interesting ongoing discussions with Richard Bradshaw (aka Friendly Tester) about the life of a bug, and what happens to problems after they're reported. I wrote the post below so that I could debate further with Richard at MEWT, as I think we share more common ground than our Twitter discussions suggest!
"Pushing for fixes"
This was a quote (and follow-up thought from Richard) from my TestBash workshop:
I think that my intended meaning was lost here (and I did clarify in a later slide, as you'll see in the "Learn when to let go" section). I was discussing why it mattered whether testers are skilled in the art of creating effective bug reports. My point was that although there's a common misconception that testers have a negative effect on product quality ("it was fine until the testers touched it!"), improving the persuasiveness of your bug reports (using compelling scenarios and language to push people to want to fix it) gives you the opportunity to have a more noticeable positive impact on quality.
Certainly it's rare that a tester should ever demand that a given issue be fixed. You are rarely the end-user of the product; your team are not building a product solely for you! By all means, if you have a suspicion that an issue could impact users, seek data to support your hypothesis (metrics from existing site users, evidence from well-thought-out user personas, usability studies with real users). Even if you have this evidence though, your role is to present the evidence, not to use the evidence as a weapon to "force" a fix.
Making people (that matter) aware of issues
Soon after TestBash, the issue reared its head again on Twitter, and (with apologies to Richard) it was his voice that I heard again! Here's the full Twitter conversation.
To an extent, I agree with Richard here, though with some important caveats which weren't supported by the 140-character format.
Making people aware (through discussions or bug reports) is important, but it's also vital that you understand to what extent they're aware. I tend to have my own internal barometer for each issue that I log, and if the business decision differs significantly from my own feelings (e.g. if they choose to defer/reject an issue which seemed important to me) then I immediately seek to clarify why they disagreed.
- Maybe you didn't communicate the issue well enough. Maybe they don't understand the most important part of the issue. I often see this with testers who reference one issue in the title of their bug report, and a different one in the main body. For instance, I once saw a bug report which referenced a user preference getting reset to its default value, and this issue was initially deferred because it wasn't deemed significant. However when I looked more closely at the bug report, the setting was getting reset after an application crash, and it was a previously-unreported crash!
- Maybe they didn't receive your communication well enough. There could be a myriad of reasons why the "perfect bug report" for a business-critical issue still gets overlooked. One of these is - and I hate to break it to you, testers - managers are generally less invested in your bug reports than you are. Just because you wrote a thing of beauty doesn't mean it's going to be a winner. At TestBash, I gave the example of a high-impact usability issue which could be mitigated with a simple fix, but (although stakeholders were well aware of the issue) everybody stopped reading the ticket before they got to the point where I suggested the fix!
- Maybe they're right to ignore it, but you don't understand why. If you're focused on testing a single component or feature of a product, sometimes it's hard to appreciate the bigger picture. Everything that you're testing can seem to be the most important thing in the here and now, but your stakeholders are likely working with more information. Understanding their decision can often give you better insight into what you're testing. For example: "We're removing/replacing that feature in the next version", "That code is too flaky to touch when we're this close to release", "There's only one customer who uses that feature".
Communication (through bug reports or through discussion) is key to understanding. I think you have to be very wary when you talk about "not caring" whether something is fixed. "Not caring" could make you reluctant to invest time and effort in understanding the scope/impact of an issue (if you don't care, you might not perform a RIMGEA analysis - replicate, isolate, maximize, generalize, externalize, and say it dispassionately - on the issue). Consequently, if you missed the chance to find an important nugget of information, the issue might not get the attention that it perhaps deserves, and your "not caring" will become a self-fulfilling prophecy.
Richard also asked rhetorically during that conversation: "Who am I to say what's right/critical?" - well, for starters, a tester is likely one of the most informed people within a project team. Although it's wrong to talk purely in terms of "expected results" and "actual results", a tester does have the information to feed those: visibility of requirements / user stories, as well as hands-on experience with what the product actually does. Again, it's about information: you're not tasked with setting priorities, but the information that you give can greatly affect those priorities. Don't sell yourself short.
Learn when to let go
To bring the discussion back around to my workshop, here is the concluding point that I made on the day:
A tester's role rarely includes having the final say on whether or not a problem gets resolved. We provide information, to help stakeholders to make the decision. If I am satisfied that I have delivered accurate/pertinent information, and that stakeholders have understood this information, then (as long as I see them as rational, competent thinkers) I can sleep soundly regardless of the outcome.
So, I rarely "push for fixes" in the classic sense. If anything, I'm more likely to be found warning against a fix, for example if a late "quick fix" introduces more risk or re-testing overhead than the development team realises.
This is difficult for some testers to accept. Often, these are the people who also believe that testers should be the sole gatekeepers of the product, or who buy in to the "testers break the product" mantra. If you've been conditioned to believe that the quality buck stops with you, who can blame you for wanting more direct control over the development process?
There's nothing wrong with wanting to see a better product, and it's totally understandable that you should want to see your own bugs getting addressed. It's particularly tough for usability issues, where UX is rarely treated the same as functional problems, even though a build-up of bad UX can hurt the bottom line just as badly as broken functionality. One way to address this, pioneered by the games company Blizzard (makers of World Of Warcraft), is to have a regular day focused on addressing the issues which would otherwise slip through the cracks: they call this their "Cheese Day".
Next time, we'll look at what you can do if your stakeholders don't seem to care about bugs...