
Charting through Exploratory Waters – Session-Based Test Management

February 18, 2021

On Thursday we dipped our toes into the waters of exploratory testing.

To start, we briefly looked at Scripted Testing to highlight the differences:

ISTQB on Scripted Testing - “A series of steps each with defined binary outcomes that are used to test”.

VS.

Exploratory Testing:

Exploratory testing takes a different approach: the tests are designed and evolve through interacting with the software, rather than being set out beforehand, with the intention of discovering information about risks in the software. Bug reports are written within the exploration time.

What value does exploratory testing bring?

  • Replicates how a user would or could use the system, i.e. not necessarily as they should, whereas each script only handles one fixed route or input
  • Gives a tester more freedom to exercise their skill set and discover without constraint
  • Provides a user’s perspective and experience of the system and the journey through it
  • Finds the quirky bugs that happen!
  • Introduces the tester to the software they are working on, which can then inform writing scripted tests for it

Mini-Challenges

First up, we had ten minutes to explore the mock booking form at https://automationintesting.online/#/ and then recorded our findings on the Jamboard. There were no other criteria whatsoever. Definitely fun but directionless: it was tricky to assess whether the form had been thoroughly tested, and as you can see we found a whole lot of potential issues - but which are the priorities?

Group Jamboard results from the first exploratory tests

Next up was a travel booking site, PHPTravels. This time we needed to focus on either error messages or field inputs.

This next layer of focus was definitely useful, but still didn’t go quite far enough. Several people concentrated solely on the login functionality, while some (myself included) went straight into the various input forms once logged in. There was definitely more to explore than could fit into ten minutes, and working alone you could not cover each area comprehensively.

Our mini test sessions helped us see first-hand the benefits of structuring exploratory testing to focus on specific risks. Our discussions at the end highlighted the need to clearly document, as well as verbally communicate, our findings. So, how can we do this?

Group Jamboard results from exploring PHPTravels with a narrower focus

Session-Based Test Management

This method, proposed by James Bach and Jonathan Bach, focuses and structures exploratory testing through four key elements:

Mission:

  • What are we testing / looking for?
  • Will probably be set by the team lead or project manager beforehand, as part of the test strategy document. (A test strategy document accompanies a piece of software and sets out what you are going to use to test it: the tools and approaches, e.g. personas, API testing, security testing.)
  • Can vary in wiggle room, e.g. ‘explore the site as a customer’ vs. ‘explore the use of the API for updating a customer’s details’ or ‘ensure you can make payments’
  • Tip: if given a written mission which doesn’t seem clear, go back to the source for a discussion to clarify.

Charter:

The focus and goal of the session: which bit is this particular session focusing in on?

  • Based on an identified risk within the mission area
  • Testers often use a template to structure a charter, such as: Explore <target> with <resources> to discover <information> (see the worked example below)
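
For example, a charter for the booking form we explored earlier might read (this is my own illustration rather than one from the session):

Explore the booking form’s date fields with boundary and invalid dates to discover how impossible or conflicting date ranges are handled.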

Report:

How are you going to deliver your findings to others: face-to-face, email, post-it note?

Debrief:

  • What happened and was achieved in the testing session.
  • What are the obstacles remaining and what next to tackle them?
  • Any notable feelings / points that came up but were outside the charter?
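
By way of illustration (again, a hypothetical example of my own rather than one from the session), a debrief for a booking-form session might cover: what was achieved (all date and contact fields exercised, with a handful of potential issues logged), what obstacles remain (no test data for existing bookings, so that area needs a follow-up), and anything notable outside the charter (perhaps the layout breaking on narrow screens, which could become a charter of its own).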

Three key benefits:

  1. Structure in knowing what to target
  2. Focus of the charter means a tester can recognise if their exploratory testing is veering into unrelated areas outside of the goal of that session.
  3. Planned feedback format for communicating findings

The Bug Report

Covering both the report and debrief, we considered what we can use to help support the exploratory testing process. Further sessions are going to look at writing bug reports in more depth, but broadly we can expect the following to form part of a more comprehensive bug report:

  1. Areas relating to the set-up of the testing environment – the who, what, where, why and when
  2. The tests run
  3. Findings / write-up
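
As a rough sketch (a purely hypothetical example of my own, not one from the session), a minimal bug report along those lines might look like:

  • Set-up: Chrome on Windows 10, exploring the mock booking form at https://automationintesting.online/#/
  • Test run: submit the booking form with a check-out date earlier than the check-in date
  • Finding: (hypothetical) the form accepts the impossible date range without showing an error message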

We rounded off with a final mini test session, using one of the techniques below on one more testing website, BlingRus - http://blingrus.azurewebsites.net.

We used one of the following to structure our intentions…

  • Personas
  • Journeys and Tours
  • Test by Feature
  • Heuristics and Mnemonics
  • Mind Maps

…and for structuring feedback we looked at QLIP (Questions, Likes, Ideas, Puzzles) as a method for guiding us towards providing constructive feedback. Again, communicating feedback well is an upcoming topic where we will explore it in more depth.
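
To give a flavour (my own hypothetical example rather than one recorded on the day), QLIP feedback on a shopping site might include a Question (“what should happen if the basket is emptied mid-checkout?”), a Like (“the search responded quickly”), an Idea (“add inline validation to the address fields”), and a Puzzle (“why does the order total sometimes display before the items load?”).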

Jamboard of some of our responses using techniques to focus and structure our approach

Homework today is to keep exploring and to try out different templates, to see which methods we enjoy using. This was another enjoyable and incredibly informative session from the Coders Guild, but the website resources we used are also really notable. That these are made, released and maintained for free, for the benefit of others, really speaks to the passion and collaboration of the testing community - which is also evident in the Ministry of Testing and across social media - and I’m really excited to become a part of it!


Words by Martha Eccles.
Junior Developer