August 18, 2014 · Weekend Testing

WTEU-48: Test Reporting

This month's Weekend Testing session, WTEU-48: Test Reporting, was the second session since Amy & I relaunched the European chapter. With one event already under our belts, and a familiarity with the setup/structure, it seemed to go much more smoothly this time around. We had an eager group of attendees, with everybody (without exception) joining in the discussion. Their interest and ideas were the lifeblood of the session.

We began with a broad discussion about reporting: Who asks us for reports? What are we reporting about? How often? In what format?

Reporting forms an integral part of our activity as testers. We surface information to the business, and that information tends to be in the form of a report. The report may be written, verbal (even just a chat in the corridor) or it may not even need any input on your part: making metrics available on a dashboard means that the information is there for somebody, whenever they want it.

I'd like to dedicate an entire session to dashboarding in the future, as it can relieve a lot of our everyday mundane activities whilst also improving the speed and quality of our test reporting.

We discussed the importance of remembering your audience: Consider a simple question such as "How is testing going?". You'd give very different responses to that question, depending on who you were talking to:

  • Your development/test manager might be interested in specifics, including technical detail, IDs of key bug reports, and might be looking for ways that they can help solve your problems (such as introducing more testability, or pairing-up to resolve a troublesome scenario)
  • Your CEO cares little about detail, and is more concerned with whether target release dates are likely to be met, and whether his highest-paying customers are going to be ringing with complaints. (See Keith Klain's excellent TestBash talk for more insight into the mind of a C-level executive.)
  • Somebody in HR might be looking to find out about particular skills needs or shortages, to help shape the company's recruitment process.

We moved on to discuss the mission for this session: Our test manager had given us the Blind Text Generator tool to evaluate, to decide whether it would be beneficial to the test team.

![Screenshot of Blind Text Generator](/content/images/2014/Aug/Blind1.png)
*A screenshot of the Blind Text Generator website*

Having received the request, I began by looking for the hidden meaning behind it. (Interpreting the meaning of communications is a key part of the Satir Interaction Model, which is highly worth a read if you've not heard of it before.) The questions which immediately sprang to mind were:

  • Why have we been asked to look at the tool?
  • Does the test manager fear there is a gap in our coverage somewhere?
  • Has this task been passed-down to them from higher in the business?

Before starting the task, I also considered whether the test manager wanted a "$5 answer" or a "$5000 answer": Is a quick-but-reasoned response satisfactory, or is a detailed technical analysis required? The last thing you want to do is spend a day writing an essay when your manager really only wanted a thumbs-up or a thumbs-down.

I was pleased that our group spent longer than expected at this phase of the task, seeking to clarify the purpose/intention of the question, understand the structure of our team, and re-state their perceived understanding of the mission. Failing to do this can be another major pitfall - a report can be beautifully written, but useless if it was written about the wrong thing!

Given the relatively short timescale, and Amy telling us that she would remain available for questions throughout the session, I decided to begin my exploration of Blind Text Generator.

The site was relatively familiar to me, as (having studied journalism at university) these "lorem ipsum" generators are fairly widespread. (In fact, here are 101 of them!) Therefore I focused on identifying its similarities/differences with comparable products that I have used, focusing on its unique selling points, and whether they were in keeping with what our test team might need.

My immediate thought was that this would not be appropriate for our test team, for two main reasons. Firstly, the tool's USPs were geared towards design/layout and style, rather than generating text content. Secondly, the site required entirely manual interaction - there was no API or URL parameterisation which could speed up the task.

Knowing that there were many other lorem ipsum generating websites, I thought it worthwhile to run a simple Google search for "lorem ipsum API". Two sites floated straight to the top: Bacon Ipsum (a novelty site which nevertheless had a good API) and another generator. The latter had a well-documented API, with clickable examples which allowed me to quickly evaluate its suitability.
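As an aside, this kind of programmatic access is exactly what makes an API-backed generator attractive for scripted use. Here's a minimal sketch in Python, using Bacon Ipsum's public API as the example; the parameter names (`type`, `paras`, `format`) reflect its documentation at the time of writing, so treat this as illustrative rather than definitive:

```python
# Sketch: fetching placeholder text programmatically instead of loading a
# web page and copy/pasting by hand. Uses Bacon Ipsum's public API as an
# example; check its current documentation before relying on these parameters.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

def bacon_ipsum_url(paragraphs=3, filler=True):
    """Build a parameterised request URL for the Bacon Ipsum API."""
    params = {
        "type": "meat-and-filler" if filler else "all-meat",
        "paras": paragraphs,
        "format": "json",
    }
    return "https://baconipsum.com/api/?" + urlencode(params)

def fetch_paragraphs(paragraphs=3):
    """Fetch placeholder paragraphs as a list of strings (network required)."""
    with urlopen(bacon_ipsum_url(paragraphs)) as response:
        return json.loads(response.read().decode("utf-8"))

print(bacon_ipsum_url(paragraphs=10))
```

Because the request is just a URL, it can be dropped into any scripted or automated process - precisely what Blind Text Generator's manual-only interaction prevents.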

![Screenshot of the second site](/content/images/2014/Aug/Blind2.png)
*The second site gives clear examples of how we could access it programmatically; it even "outs" Blind Text Generator as one of the sites that it's looking to improve upon!*

I'd spent a few minutes seemingly "off-task", but actually I was looking to deliver the value proposition which was requested by our test manager. Some other groups said "We don't think this is a suitable tool"; I looked to respond with "I don't think this is a suitable tool, but here's one which might be more suitable."

I had jotted some thoughts in plain-text format as I went, not even sure whether I was going to share any of those notes with the test manager. I used the text file as a lightweight focusing tool; its contents were temporary and, as I hadn't invested heavily in it, I wasn't fearful of deleting or omitting information to suit my needs. This is a way that I tend to work even when I'm not anticipating anybody else ever viewing my working; it adds structure and aids rational thinking. (A mind-map can often be even better for this, though I felt that it was too heavyweight for today's task, a thought echoed by Simon who started with a mind-map before abandoning it.)

To conclude, here's how I presented my report to the group. As I explained to them, my reporting was heavily influenced by the available time (not much) and how significant I perceived the task to be (not very). I minimised information that nobody would care about, as a brief, focused report is more likely to be fully read and received.

Test Report - Neil Studd

I would probably deliver this in a brief email, although the major talking points could be given verbally. I'd ask the test manager how they would prefer to receive this.

I spent 45mins investigating Blind Text Generator.

Because of the short amount of time given for the task, I focused on discovering valuable information as quickly as possible:

  • I explored the product to evaluate its functionality.
  • I looked at what it does and doesn't do.
  • I evaluated whether any of those things were important.
  • I looked for alternatives which might better serve those important things.

The things which seemed important:

  • It doesn't have an API or similar entry point, so we'd have to extract data manually (load the site; manually reconfigure words/paragraphs; copy data to clipboard). This will make it hard to include in any scripted/automated processes.
  • I googled for "lorem ipsum API", and several good examples appeared near the top of the search results. The one which looked best at first glance can be loaded programmatically, and can even be given some specifics in the URL (for example, requesting 10 short paragraphs in all-caps).
  • It's capped at 9999 words / 99 paragraphs, so (without hacking/modifying the site) it's not suitable for generating larger volumes of data. (An API-controlled example, such as the above, could solve that)
  • It only generates plain-text, but not all web content is plain-text, so the generated data might not be sufficient for our needs. Other services allow more types of text generation, including lists, such as the API example above.
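As a postscript to the cap mentioned above: when a generator exposes an API, a per-request limit stops being a blocker, because batching requests is trivial to script. A hypothetical sketch; `fetch` here is a stand-in for whatever request function a real service would provide:

```python
# Hypothetical sketch: scripting around a per-request cap (e.g. 99 paragraphs)
# by batching calls. `fetch` stands in for a real service's request function;
# the offline stub below simply fabricates placeholder text.
def batched_paragraphs(total, cap=99, fetch=None):
    """Collect `total` paragraphs, requesting at most `cap` per call."""
    if fetch is None:
        fetch = lambda n: ["Lorem ipsum..."] * n  # offline stand-in
    paragraphs = []
    while len(paragraphs) < total:
        batch_size = min(cap, total - len(paragraphs))
        paragraphs.extend(fetch(batch_size))
    return paragraphs

# 250 paragraphs arrive in three batches (99 + 99 + 52) - something the
# manual copy/paste workflow couldn't achieve without repeated hand-editing.
print(len(batched_paragraphs(250)))  # → 250
```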