August 9, 2016 · Events · CAST 2016

CAST 2016: Monday Lean Coffee & Tutorials

CAST 2016 began in earnest today with a full programme of tutorial sessions at the Simon Fraser University campus. Saturday's TestRetreat was great, but today was when the conference proper got under way, as we finally received our registration and delegate packs (no more sticking post-its to our chests!).

Lean Coffee

We began the morning with Lean Coffee at a bewilderingly early 7.30am! It was incredible to see the sheer number of people who turned out, with around 30 testers suggesting topics, voting and discussing the subjects that were raised. There were plenty of thumbs-up votes, indicating that we wanted to keep talking about topics well beyond their original five minutes; among the points for discussion were:

  • How to teach testing curiosity, and is curiosity even a skill that can be taught? Can you measure the degree to which a person is curious, and why might you want to do that? One common thread was the suggestion that we should inspire curiosity by demonstrating how to effectively ask questions of a product. Investigative techniques can be taught, and will help to develop curiosity within those who are capable of it.
  • How to debrief in session-based test management: There was acknowledgement among the group that, although the concept of SBTM was widely understood (and many claimed to be performing it), reporting is an essential element of SBTM which few have mastered. We brainstormed some key questions that any report should look to answer: What bugs did you find? What bugs did you expect to find? How did this session compare to other similar sessions? Are you finding particular patterns of problems?
  • Does a lack of skills lead to exclusion?: Continuing a discussion which had been prominent at TestRetreat, we talked about how we can help testers to close skills gaps which threaten to undermine their position within a team. We identified three distinct types of tester personality (subject matter expert, black box expert, programmer), and it was suggested that prioritising any one of these personalities above the others creates risk. Diversity within the test team can be beneficial, and a tester who feels they're missing out might do well to develop their interactional expertise to help them reach better agreement with their colleagues.
  • Surviving office politics: A discussion about whether testers (or other development colleagues) should become emotionally invested in the various shenanigans that they see happening within their business. To what extent should you keep your head down; when do you fight back against poor management decisions; how do you balance your credibility between wanting to show loyalty and wanting to do the right thing? There was some excellent advice on negotiation techniques to help enact change within your organisation, and reassurance that pushing back can be worthwhile ("swimming upstream" is likely to meet resistance, and you may not be rewarded for it, but it can be the right thing to do).

As with most Lean Coffee sessions, the real value is in the conversations in the here-and-now, but here are a few choice extracts from our group!

Tutorial - Michael Bolton: Testopsies

Testopsy (n.) - an examination of testing work, performed by watching a testing session in action and evaluating it with the goal of sharpening observation and analysis of testing work (James Bach)

This full-day tutorial was the perfect follow-on from our Lean Coffee discussions, giving us the opportunity to develop the skills necessary to tell a compelling testing story and allow us to identify underused techniques within our everyday work.

For the first exercise, we split into groups of 3-4 and produced a map of what testing meant to us. Our diagram is below; each group's diagram was unique but equally valuable, demonstrating the benefit of having a broad selection of voices during a planning/analysis activity.

Before lunch, we focused on trying to describe all of the different activities which occur during the performance of a test. We tried to keep these at a broad level, and by sheer coincidence (with a little last-minute finessing) we were able to turn them into an acronym. From nowhere, the CHRISTMAS heuristic was born!

In the afternoon, we turned the morning's learning into a practical pairing exercise, where one of us would perform a ten-minute recon of a chosen site (eBay's Tyres shop) while the other person recorded notes on the techniques that they observed being used.

I was up first, and the video of my session can be found below. The time absolutely flies by when you're also trying to annotate everything that you're saying! I spent the first five minutes just trying to locate the test URL and orient myself around the page before I started to demonstrate anything I would classify as a traditional test technique (planning, hypothesising, executing, recording). Even so, within the session I was able to identify one major point for further investigation (a mismatch in the listings count) and had gained enough understanding of the page to appreciate how many more facets were yet to be tested:

After the session, Amit performed a debrief with me, asking further questions about particular approaches I had (and hadn't) taken, and evaluating my usage of the various elements of the CHRISTMAS acronym. We agreed that the only one I hadn't used was Tooling, but that this initial recon had revealed areas which might benefit from tooling in the future (e.g. a simple harness to repeatedly fire requests at the page and record the listings count).
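As a purely illustrative aside, here's a minimal sketch of the kind of harness we had in mind. Everything in it is my assumption rather than anything we built on the day: the URL is a placeholder, the "results" text pattern is a guess at how the page reports its count, and the Python requests library is simply a convenient way to fetch the page.

```python
# Hypothetical sketch only: repeatedly request a listings page and log the
# count it reports, so that fluctuations (like the mismatch spotted during
# the recon) become visible over time.
import re
import time
from typing import Optional

import requests

PAGE_URL = "https://www.ebay.com/b/Tyres/example"  # placeholder, not the real page
COUNT_PATTERN = re.compile(r"([\d,]+)\s+results", re.IGNORECASE)  # assumed page wording


def fetch_listings_count(url: str) -> Optional[str]:
    """Fetch the page and pull out the displayed listings count, if present."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    match = COUNT_PATTERN.search(response.text)
    return match.group(1) if match else None


if __name__ == "__main__":
    # Poll a handful of times, printing a timestamped record of each count so
    # that any discrepancy between successive requests is easy to spot.
    for _ in range(5):
        count = fetch_listings_count(PAGE_URL)
        print(f"{time.strftime('%H:%M:%S')}  listings count: {count}")
        time.sleep(30)
```

Even something this small would turn the "mismatch in the listings count" hunch into a timestamped record we could bring back to a debrief.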

We then swapped places in our pairs, and I took notes while Amit performed a follow-up session. I chose to codify this in a minute-by-minute breakdown, as can be seen in the tweet below; it also allowed me to record notes to aid my debrief with Amit once he had completed.

Again, the exercise (and the act of having a second participant observing) proved invaluable; in Amit's case, we even managed to catch a bug which we had to play back three times on the recording to confirm its existence - it might otherwise have proved very hard to pin down (threatening to become the dreaded "cannot reproduce").

Coincidentally, we then finished the day by looking at a series of similar charts that James and Michael had collated over the years, diagramming both real and fictional representations of a typical day (or week) in a tester's work, and how much (or how little) time is spent on the actual activities of testing. This can help to illustrate to managers why we aren't actually completing eight hours of testing in a working day, and it can help to highlight where "stealth testing" is occurring (if the tests are getting done, are people working through lunches or pulling late/weekend shifts?).

We eventually wrapped the session at about 5.45pm, almost an hour after the scheduled finishing time, but nobody left early and there was certainly no protest - it was an engaging day of discussion and debate, one which has set the scene nicely for the two days of conferencing to follow.
