Monday, 27 November 2017

Language is Still Hindering Testing and The Hiring of Testers

It's been a month now since I attended Test Bash Manchester. I heard two very powerful talks at that conference which have been swishing around in my brain for a while now. Both talks came from speakers who shared a desire to advance the craft of testing.

The first talk was by Martin Hynie (@vds4), currently Director of Test Engineering at Medidata. The second talk was by Michael Bolton (@michaelbolton) a tester, collaborator, coach, consultant, author and Twitter super star.

Martin's talk "The Lost Art of the Journeyman" and Michael's talk "Where Do You Want To Go Today? No More Exploratory Testing" both invoked the same feeling in me. Change is still very much needed when we are talking about testing. Martin said that only by identifying entrenched beliefs can we find opportunities for change. He explained that one of these entrenched beliefs is what "testing" means. So to invoke change, we need to approach it from the same side as someone who doesn't understand testing.

Both speakers talked about testing being a craft. Martin went a step further and said that testing is not a commodity.

I still frequently see testing treated as a commodity by people who do not work as testers. I get embarrassed when people believe smart self-directed testing is of equal value to scripted testing. It's also very hard being the person trying to explain that someone's beliefs around testing are hindering a project and causing it damage. The belief that all testing is equal is one of those entrenched beliefs Martin told us to be mindful of.

Michael Bolton sums this up on his blog, where he says that scripted testing is expensive and time-consuming, and leads to inattentional blindness. Separating the design of a test script from its execution in turn lengthens and weakens the feedback loop.

Michael told us that scripted testing makes testers incompetent as they are not empowered to think.

The Word 'Empowered' Matters.

As someone doing self-directed testing without a test script it can be very easy to criticise testers that write and work from test scripts and test cases. I have worked with financial institutions which rely heavily on scripts. I have met and spoken face to face with testers that work in this scripted way. Seeing things from their point of view I discovered some of the constraints they have to work within. They are not empowered to throw the scripts away. Management want them to work in this way as it is easy (yet foolish) to measure testing with numbers and stats.

When I worked in the UK games industry, I was lucky that I was able to do testing without scripts, but I was still not empowered. I was stuck behind a wall with many devs throwing any code they wanted over the wall at a small group of testers. If bugs got missed, that was the testers' fault - not the fault of a dysfunctional way of working.

Michael spoke about how the definition of testing had been stolen from testers. Now testing meant something completely different to people outside of testing. He said that the testing community needs to steal the definition of testing back.

What is Your Definition of Testing?

I have recently started asking some of my developer friends the following question: 'What is your definition of testing?' Some of the answers have shocked me!

The first dev I asked said 'testing is ensuring quality'. I had to try to explain that this wasn't entirely true. Testing is an activity that evaluates something (which could be anything) to find problems that matter. The discovery of those problems could have very little to do with ensuring quality if no action is taken once they are discovered!

My challenge to other testers would be to start asking the people you work with for their definition of testing. Start getting a feeling for how closely your ideas of testing are aligned. Just because you are using the same language does not mean that you are talking about the same things. Do not make the mistake of assuming everyone's idea of testing is the same.

Michael wanted us to return to using the word testing (not exploratory testing - which he said was like calling a cauliflower a vegetarian cauliflower). Martin wanted us to change the language we use for describing testing and testers.

At an open space event on Saturday 28th October 2017, a diverse group of testers sat around a table and openly discussed the testing role, specifically the language used to describe that role. One thing became very clear very quickly: the language and definition of testing are certainly not shared between testers and non-testers. Even some testers present had slightly conflicting ideas. We certainly have a lot more work to do in this area.

Patrick Prill (@TestPappy) said that he knows people with the job title of tester whose work does not match the job ads. Recruiters have a very hard time when it comes to describing job roles. Instead of hiring testers, maybe we should be hiring people with critical thinking skills. Maybe the best testers aren't actually testers yet?

At the open space gathering it became clear that recruiters can be blind to what testers do. Both Neil Younger (@norry_twitting) and Martin Hynie shared their experiences of pairing with a recruiter, essentially working together to identify good and bad candidates and the reasons why. Both had positive outcomes from the experience of a recruiter and a tester pairing up and working together.

From my own experiences, observations and conversations I am aware that some skilled testers are still not getting hired. 'Manual tester' has become a dirty word used to devalue testers. I have heard some pretty crazy things this year. I was asked recently by a recruiter if I knew anyone suitable for an 'Automation Tester' position. I also met a manager who told me 'most of our testers are manual, but they have been with us a long time, so rather than replacing them we are going to train them to be automated testers.'

The first thought that went through my head was: what is an 'Automation Tester'? Automation is a development task. There is no such thing as automatic testing. Automation is dumb; it cannot direct itself, and it cannot explore or think. Further to that, automation in testing should be the responsibility of the whole team, not a single specialist. By putting the responsibility for an automation project on the shoulders of just one person you are heading for disaster (see the term 'bus factor').

A Keyword CV Search is Simply Not Enough.

When hiring testers, a keyword search on a CV is simply not enough. This comes back to a need to realign the language we use to talk about testing in the context of 'that thing testers do'.

As well as starting conversations about the definition of testing with the people we work with, I believe testers also need to start sharing information with recruiters. This was one of the reasons I was very keen to write and share an article with a recruitment blog. By sharing understanding and knowledge of testing skills and testing work with the very people trying to hire us, we make things easier for those doing the hiring, and better for people (like us testers) trying to get hired.

If my job suddenly switched from software tester to recruiter, these are some of the things from my experience of testing and testers that I would take with me when recruiting specifically for testers.

Stop filtering out testing candidates based on certifications.

ISEB/ISTQB really is not a good filter for testing candidates. When I surveyed 187 testers in 2016, only 48% had completed the ISEB/ISTQB foundation certificate. I do not hold this qualification, and some of the brightest, smartest testers I know do not hold it either. There is a big difference between being able to learn the answers to some multiple choice questions and being able to test software. By demanding this qualification you will also probably alienate the kind of people you want to attract. Smart testers know these qualifications exist to make money.

Everyone puts agile on a CV these days; this does not mean they are agile.

Better things to ask a candidate rather than looking for the 'agile' keyword:

  • Ask them about a time they changed their mind or changed course.
  • Ask about some experiments they have done within a team.
  • Ask about a time they collaborated or paired with another team member.
  • Ask how they eliminate wasteful testing documentation.

Acknowledge that there is no automated testing.

There are no automated testers. Automation is its own development project and should be owned by the whole team. It is possible for someone that can automate (e.g. write code that checks things) to not understand what should be automated. Writing code is a different skill to being able to decide what that code should do.

Acknowledge that there is no manual testing.

There are no manual testers, there is only testing. Trying to divide testers into two groups of manual and automated is a big mistake. Please stop calling testers manual, we don't like it and it damages our craft. If instead of labels we focused on hiring candidates with the ability to think critically and solve problems everyone would be in a much better place.

This post was also published on the blog of Ronald James Digital & Tech Recruitment Agency

Monday, 6 November 2017

Test Bash Manchester 2017 Tweet by Tweet

I was very fortunate that I was able to attend my second ever Test Bash in Manchester. This year was better than last year as two of my co-workers (Hannah & Jack) came along for the ride. I got so excited seeing them get excited!

I spent most of the conference day scribbling notes again. However, unlike last year, when I mostly wrote text in a pad, this year I had plain paper and used coloured pens. At the open space the following day it was really nice to have my notes from the conference day to hand. In the days following the conference these visual reminders really helped important ideas stick in my head.

I sent all my visual notes up into the Twitter-verse as they were completed. List of tweets below.

  • Anne-Marie Charrett @charrett: Quality != Testing
  • Goran Kero @ghkero: What I, A Tester, Have Learnt From Studying Psychology
  • Gem Hill @Gem_Hill: AUT: Anxiety Under Test
  • Bas Dijkstra @_basdijkstra: Who Will Guard the Guards Themselves? How to Trust Your Automation and Avoid Deceit
  • James Sheasby Thomas @RightSaidJames: Accessibility Testing Crash Course
  • Vera Gehlen-Baum @VeraGeBa: Turning Good Testers Into Great Ones
  • Simon Dobson: Lessons Learnt Moving to Microservices
  • Martin Hynie @vds4: The Lost Art of the Journeyman
  • Claire Reckless @clairereckless: The Fraud Squad - Learning to Manage Impostor Syndrome as a Tester
  • Michael Bolton @michaelbolton: Where Do You Want To Go Today? No More Exploratory Testing

Twitter Mining

Last year, I did some Twitter mining and sentiment analysis after the event. I wanted to re-use those scripts to tell this year's story. After I got home (and had a bath and a good rest) I sat down with my laptop and mined 2700 tweets out of Twitter on the hashtag #testbash. I worked through my code from last year, starting to piece together the story of this year's event. If you're interested in the code that this article is based upon, it can be found (along with the raw data) here on GitHub

Positive and negative word clouds

The word clouds above can be clicked for a larger image. The first thing I noticed after generating some positive and negative word clouds was that the positive cloud was bigger than the negative cloud. 173 unique positive words and 125 unique negative words were identified in the conference day tweets. The conference was a resoundingly positive event!

It didn't surprise me that the word 'great' was at the centre of the positive word cloud. Having done this kind of text crunching a few times now, I've learned that 'great' and 'talk' are generally two of the most common words tweeted at conference events. What did surprise me was the negative word cloud: right at the centre was the most frequently used negative word, 'syndrome', closely followed by 'anxiety'. Claire Reckless and Gem Hill spoke about impostor syndrome and anxiety. Both these talks had a huge impact on the Twitter discussions taking place on the day. Getting the testing community talking about impostor syndrome and anxiety, even though the words used carry negative sentiments, is a very positive outcome.

The top 5 most favourited tweets of the day were embedded here (#1 to #5).

Tweets by Time and Positivity

A positivity index was calculated for each tweet: for every word in the tweet present in a dictionary of positive words, the tweet scored +1, and for every word present in a dictionary of negative words, it scored -1. The positive and negative word lists used to score tweets were created by Minqing Hu and Bing Liu at the University of Illinois and can be found here
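As a sketch in R (the function and the tiny word lists here are my own illustration, not the original script), the scoring looks like this:

```r
# Score a tweet: +1 for each positive word it contains, -1 for each
# negative word. In the real script the word lists come from the
# Hu & Liu opinion lexicon.
score_tweet <- function(tweet, positive_words, negative_words) {
  words <- unlist(strsplit(tolower(tweet), "[^a-z']+"))
  sum(words %in% positive_words) - sum(words %in% negative_words)
}

score_tweet("What a great talk, loved it!",
            positive_words = c("great", "loved"),
            negative_words = c("bad"))
# returns 2
```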

The tweet with the most positive sentiment on the day was this one from Richard Bradshaw

The tweet with the most negative sentiment on the day was this one from Dan Billing.

I plotted all the tweets by time and positivity then fitted a loess curve through the points on the scatter plot.

The first thing that really stood out was that one tester was up, awake and tweeting a picture of the venue at 4:17am?!?

Once the event got started, there was a dip in positivity just after 10:00am, so I checked some of the tweets around that time.

The reason for the dip is related to tweets about bias.

There was another dip in positivity just after 16:00 so I checked those tweets too.

Again, nothing negative was happening, the dip in positivity was caused by the discussion of a subject which has a negative sentiment.

Really positive tweets came at the end of the day once the event had been absorbed, with the last part of the day carrying the most positive sentiment.

Tweets by Frequency and Platform

I plotted a frequency polygon broken down by platform to see during which parts of the day people engaged most with Twitter. Again, the image below can be clicked for a larger version.

It was very interesting to see how frequently people were tweeting throughout the day. The spikes in activity align very closely with the start of each talk. It was also nice to see people taking a break from using Twitter on mobile phones over lunch (hopefully because real face-to-face conversations were happening over a meal). The biggest spike of activity happened immediately after lunch, during Vera Gehlen-Baum's talk "Turning Good Testers Into Great Ones".

It was a pleasure connecting with so many wonderful people at this event. The mix of new faces and familiar faces was fantastic. Test community is the best community ♥ Hopefully see you in Brighton next year!

Wednesday, 19 April 2017

Help Your Testers Succeed in 8 Minutes

2017 has been a stressful year for me so far. I bought a really ugly flat in February, then found myself with two months to make it habitable and move into it. While frantically arranging appointments with trades people and deliveries of essential things (like carpet and furniture), a call for speakers came up for the Agile North East Lightning Talk competition.

I was already so stressed out from trying to move house, the stress of giving a talk felt insignificant by comparison. So I decided to throw my hat into the ring and enter the competition.

I knew the audience would be a diverse group of people, with only one or two software testers in the room, so I wanted to come up with a talk that would be interesting to everyone. I came up with a working title of "Things you can do to help your software testers succeed" and wrote the following abstract; it was quite short and a little bit vague in places...

"Testing software is hard. Hiring good testers is hard. Some testing jobs are set up in such a way that testers can never succeed! If you have good testers in your organisation the last thing you want to do is drive them away. I'm going to tell you how you can help your testers succeed and enjoy the many benefits that happy testers can bring to a team."

I found out a few weeks later that my proposal had been accepted and I was in the competition!

It Will Be Alright On The Night

I now had an 8 minute slot in front of a captive audience of people who shared an interest in Agile development. I knew straight away that I was going to have to make each minute count. I wanted to use the opportunity to try to raise awareness of the problems software testers face.

I wrote my slides and practised a little with a timer to see how much information and advice I could actually jam into 8 minutes. It turns out an 8 minute talk is quite a tricky duration to handle: you don't have enough time to get into really detailed explanations, but it's long enough that you do have to start explaining concepts.

The day of the talk arrived and I got to the venue about an hour before the event was due to start. The building chosen for the event was a historic listed building, the Northern Institute of Mining and Mechanical Engineers. I was able to scope out the 1895 lecture theatre where the talk would be taking place, see where I would be standing, where the audience would be sitting etc. This really helped reduce some of the stress and nervousness I was feeling on the night.

I was very thankful that some of my friends and co-workers were able to come along to the event. Having a few people there that I knew genuinely wanted me to succeed made the task of speaking mentally easier for me to cope with. I checked with the event organiser that I would be able to make an audio recording with my smart phone and was told this would be fine. I have been trying to record myself every time I speak so I can listen to myself afterwards and find ways to improve.

My lightning talk, "How to Help Testers Succeed" is now up on YouTube.

I was voted 3rd place by the audience and I was absolutely shocked that the 1st and 2nd place winners didn't choose the Lego prize. This let me choose the Lego Millennium Falcon. I haven't built it yet, I need to find someone to help :)

This post was also published on my company's blog Scott Logic Blog

Monday, 16 January 2017

Foreign Currency Trading Heuristic Testing Cheat Sheet

Happy New Year everyone!

For the last 18 months I have been testing software designed to trade foreign currency, known as FX or Forex trading software.

I consider myself lucky as I joined the project on day one which enabled me to learn a lot about testing trading systems.

Challenges

Financial software, including trading applications, can be some of the most difficult and complex applications to test because they present many challenges, such as:

  • Many concurrent users
  • High rates of transactions per second
  • Large numbers of systems, services and applications that all integrate with each other
  • A need to process transactions in real time
  • Time sensitive data e.g. the price to buy a Euro can change multiple times every second
  • Catastrophic consequences for a system failure, bugs can cause financial loss
  • Extremely high complexity level

At the start of my current project, I found very few resources available for testers covering complex financial systems. The few resources that I was able to find were quite dated and generally advised writing detailed plans and documenting all tests before executing them. I simply couldn't find any information about approaching the testing of financial systems in a modern, agile, context driven way.

I was very fortunate on my project that I was able to implement testing with agility and focus on risk. Long checks historically done manually by human testers were replaced with good automated integration test coverage. The team also chose to release to production as frequently as possible, usually once a week. Not having to constantly repeat manual checks of existing functionality gave me time to do a LOT of exploratory testing. Almost all the really bad bugs, the ones with financial consequences, were found during exploratory testing sessions.

Heuristic Testing Cheat Sheet

Given the high level of exploratory testing I was able to do on my project, I generated a lot of ideas and identified some high risk areas and common mistakes. I have decided to put together a heuristic testing cheat sheet for anyone carrying out exploratory testing of trading software.

The full size version of my FX trading heuristic testing cheat sheet can be found here. I wanted to combine my knowledge of trading with some of the ideas I generated. On the sheet my ideas are written around the knowledge in magenta coloured boxes. I hope this may be useful to anyone working on trading software.

This post was also published on my company's blog Scott Logic Blog

Wednesday, 9 November 2016

Deconstructing #TestBash with R - Twitter Mining and Sentiment Analysis

Recently I attended a software testing conference held in Manchester. While I was at the conference I had a conversation with Andrew Morton (@TestingChef) about Twitter. Andrew told me he had a theory that at conferences people tweet more in the morning than in the afternoon. As an active tweeter and passionate R user, I thought it would be interesting to try to collect some real data, take a look, and see what was happening.

Once the conference was over and I had finished my write up of the event, I made a new GitHub repository and started playing around with R. R, sometimes also called Rstats, is an open source programming language used for statistical analysis and the generation of graphics. I wanted to gather up all the tweets about Test Bash Manchester so I could start looking at them. I found that there was an R package called twitteR specifically designed to mine tweets out of Twitter.

Mining Twitter For Data

I went to http://dev.twitter.com and created a new application in order to get hold of a key and secrets so I could start accessing the Twitter API.

To get around storing my secrets in plain text in my script (I didn't want anyone to be able to read them straight out of github), I used environment variables to keep them safe.

The process of mining tweets from Twitter was quite straight forward. Install the twitteR package, include the twitteR library, give it all the keys and secrets, call a function to authenticate then call another function to search. There was even a nice helper function to convert the big long list of tweet data returned into a dataframe so it could be manipulated easily.

Here is a basic example I wrote that will collect the 100 most recent tweets containing the hashtag #cat
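A minimal sketch of that example using the twitteR package (the non-secret key and token values shown here are placeholders):

```r
library(twitteR)

# Authenticate with the Twitter API. The two secrets are read from
# environment variables rather than stored in the script.
setup_twitter_oauth(
  consumer_key    = "myapikey",
  consumer_secret = Sys.getenv("TWITAPISECRET"),
  access_token    = "myaccesstoken",
  access_secret   = Sys.getenv("TWITTOKENSECRET")
)

# Collect the 100 most recent tweets containing the hashtag #cat
cat_tweets <- searchTwitter("#cat", n = 100)

# Helper function to convert the list of tweet objects into a dataframe
cat_df <- twListToDF(cat_tweets)
```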

The code snippet above assumes the API secret is stored in an environment variable called TWITAPISECRET and the access token secret is stored in an environment variable called TWITTOKENSECRET

It's worth mentioning that the Twitter API does not hold on to all tweets forever. I found that tweets are generally available for about 10 days before they are gone for good. However, because R is awesome, it is possible to save a batch of tweets that can be loaded and investigated at a later date.

On 29-10-16 I mined and saved 2840 tweets tagged #testbash, spanning the previous 10 days and covering the day of the conference. I did this by converting the tweets into a dataframe and using the saveRDS() and readRDS() functions to save and load my dataframe as a .Rda object.
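In outline, the save and load steps look like this (the dataframe and file names are my own):

```r
# Save the dataframe of mined tweets as a .Rda object so it can be
# reloaded after the Twitter API stops returning the tweets
saveRDS(testbash_df, file = "testbash_tweets.Rda")

# Load the saved tweets at a later date
testbash_df <- readRDS("testbash_tweets.Rda")
```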

The tweets I mined required a little bit of clean up. I had mined on the #testbash hashtag, which also included tweets about Test Bash conferences in Brighton, Philadelphia and the Netherlands, so I discarded tweets which were not specifically about the Manchester event. I also focused only on tweets created on 21st October 2016, the day of the conference. It is also worth mentioning that all the tweet data was converted to UTF-8 to resolve problems caused by tweets containing emojis.
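A sketch of that clean up, assuming a dataframe produced by twListToDF() (the filter terms and column names follow my reading of the text above, not the original script):

```r
# Discard tweets about the other Test Bash events and keep only those
# created on the day of the Manchester conference
manchester_df <- subset(
  tweet_df,
  !grepl("brighton|philadelphia|netherlands", tolower(text)) &
    as.Date(created) == as.Date("2016-10-21")
)

# Re-encode the tweet text as UTF-8 to avoid problems caused by emojis
manchester_df$text <- iconv(manchester_df$text, to = "UTF-8", sub = "byte")
```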

Top 5 Most Favourited Tweets

Immediately after mining the tweets it was very easy to see the top 5 most favourited from the day of the conference. They were as follows:

1st Place - 50 hearts

2nd Place - 37 hearts

3rd Place - 35 hearts

4th Place - 32 hearts

5th Place - 31 hearts

Examining Frequency Patterns

A few months ago I started learning how to draw advanced graphics in R using a package called ggplot2. I was able to use this package to create a frequency polygon of the conference day tweets and identify some of the different platforms the tweets had originated from. Please click the image below to see the full size image and get a better look

I used a black line to represent the total tweet frequency and different coloured lines to show the quantity of tweets originating from different platforms. I added annotations to the plot to indicate who was speaking at the time.
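A minimal ggplot2 sketch of this kind of plot, assuming a platform column derived from each tweet's source field (the names here are illustrative, not the original code):

```r
library(ggplot2)

# Frequency polygon of tweets over the conference day, binned into
# 15-minute intervals: a black line for the total and one coloured
# line per tweeting platform
ggplot(tweet_df, aes(x = created)) +
  geom_freqpoly(binwidth = 15 * 60, colour = "black") +
  geom_freqpoly(aes(colour = platform), binwidth = 15 * 60) +
  labs(x = "Time of tweet", y = "Number of tweets", colour = "Platform")
```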

Straight away it became very clear that there was a spike in Twitter activity during Kim Knup's talk on positivity. This was one of my favourite talks of the day and I'm not surprised it got people talking on Twitter.

Tweeting activity can be seen to drop during the breaks and is especially low at lunch time, possibly because during lunch everyone is focused on eating, not tweeting.

The level of twitter activity in the afternoon does not appear to be lower than the level of activity for the first two talks of the day.

It is also interesting to see how the number of tweets from Android and iPhone devices starts to fall by 18:00. I know the battery in my Android phone was at about 3% charge by 17:30, which stopped my tweeting efforts. It's also noticeable that there aren't many tweets between 20:00 and 22:00. This coincides with the timing of the 2016 Dyn cyber attack that brought Twitter to its knees, making it too slow to use between 20:00 BST and 22:10 BST.

Looking at the times and quantity of tweets is one thing, but it does not tell us very much about the content of those tweets. I wanted to perform sentiment analysis to dig deeper and try to discover more.

Lexicon Based Sentiment Analysis

A good place to start with sentiment analysis is to compare the tweets to a lexicon of positive and negative words, scoring each tweet +1 for every positive word it contains and -1 for every negative word.

I used a lexicon created by Minqing Hu and Bing Liu at the University of Illinois. This lexicon can be downloaded from:

http://www.cs.uic.edu/~liub/FBS/opinion-lexicon-English.rar

It is very important however to tailor any lexicon you may use for this purpose to the subject matter it is evaluating. Some of the changes I made to the lexicon included:

  • Adding words specific to the domain of software development, e.g. 'wagile', a negative term used to describe agile development which has reverted back to waterfall.
  • Correcting some classifications based on context, e.g. I reclassified the word 'buzzing' from negative to positive.
  • Adding UK spellings alongside their US counterparts, e.g. 'honour', as only the US spelling 'honor' was present.

I also removed from the word lists all the positive and negative words present in the title of each speaker's talk. I did this to try to mitigate bias, as words in talk titles are mentioned more frequently but are used to identify talks and do not carry a sentiment.
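Put together, the lexicon tailoring might look something like this in R (the file names match the Hu & Liu distribution; the talk-title words shown are only illustrative):

```r
# Load the Hu & Liu opinion lexicon
positive_words <- readLines("positive-words.txt")
negative_words <- readLines("negative-words.txt")

# Domain-specific addition: 'wagile' is a negative term in software
negative_words <- c(negative_words, "wagile")

# Context correction: reclassify 'buzzing' from negative to positive
negative_words <- setdiff(negative_words, "buzzing")
positive_words <- c(positive_words, "buzzing")

# Add UK spellings alongside their US counterparts
positive_words <- c(positive_words, "honour")

# Remove words appearing in talk titles, as they are mentioned often
# but carry no sentiment in this context
talk_title_words <- c("great", "impostor", "anxiety")
positive_words <- setdiff(positive_words, talk_title_words)
negative_words <- setdiff(negative_words, talk_title_words)
```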

Once I had managed to identify positive and negative words in the conference day tweets, I was able to use this data to draw some word clouds. Please click on the image below to view at full size.

I drew two clouds, one positive and one negative. The larger, darker words in the centre appear more frequently than the smaller, lighter words towards the edge of the cloud. Be aware, however, that people on Twitter do swear, and as such any data mined from Twitter may contain profanity. I chose to censor the profanity in my plots with some strategically placed asterisks.
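Clouds like these can be drawn with the wordcloud package; a minimal sketch, assuming a character vector of the positive words found in the tweets (the variable name is my own):

```r
library(wordcloud)
library(RColorBrewer)

# Count how often each positive word was found in the tweets
word_freq <- sort(table(found_positive_words), decreasing = TRUE)

# Draw the cloud: more frequent words appear larger and nearer the
# centre; random.order = FALSE places words by frequency
wordcloud(
  words        = names(word_freq),
  freq         = as.numeric(word_freq),
  colors       = brewer.pal(8, "Dark2"),
  random.order = FALSE
)
```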

Once all the tweets had been scored for sentiment, it was possible to identify the most positive tweet on conference day:

And also the most negative:

I wanted to plot all the conference day tweets by their sentiment score to see which parts (if any) were especially positive or negative. I was able to do this using a scatter plot. Again, please click the image below to view the plot at full size.

This plot uses 'jitter', which adds a small amount of uniformly distributed noise to each point. So rather than having all the tweets with the same sentiment score sitting in a perfect horizontal line, it shakes them up a bit, moving each one a tiny distance in a random direction. I also reduced the alpha transparency of each point on the scatter plot to make it easier to see areas where the tweets were more densely packed. I added a yellow line to the plot, a smoothed conditional mean using a loess model. This line shows roughly how the positivity level of tweets changed throughout the day.
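In ggplot2 terms, the plot combines roughly these layers (the column names are my own):

```r
library(ggplot2)

# Scatter plot of tweet sentiment over the day: jitter separates tweets
# with identical scores, reduced alpha reveals densely packed areas,
# and a loess-smoothed conditional mean traces the overall mood
ggplot(tweet_df, aes(x = created, y = score)) +
  geom_jitter(height = 0.2, alpha = 0.3) +
  geom_smooth(method = "loess", colour = "yellow", se = FALSE) +
  labs(x = "Time of tweet", y = "Sentiment score")
```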

Positivity builds in the run up to the start of registration at 8:00am and remains between 0 and 0.5 until around 11:30, when it suddenly drops during Stephen Mounsey's talk. I was curious as to what was being tweeted around this time, so I took a look.

It seems there were quite a few tweets about not listening, which may explain the negativity during this section.

Positivity levels also dipped during Mark Winteringham's talk at around 14:15, so I checked the tweets again to see what was going on.

Tweets about ranting and what not to do with acceptance scenarios were responsible for lowering positivity levels during this section of the conference.

It's also worth noting that after all the talks were done, positivity rose again, peaking at around 22:00. I like to believe this was due to the drinking and socialising afterwards, but 22:00 was also around the time Twitter came back online after the DDoS attack :)

I have made the script I wrote to generate all these plots (along with the Twitter data I analysed) available on GitHub for anyone interested in looking at the tweets themselves or building upon my analysis.

And now a shameless plug: If you are local to Newcastle and interested in finding out more about Twitter mining and sentiment analysis, I am giving a talk at Campus North on 12th December 2016 as part of the R North East bi-monthly Meetups and it would be great to see you there!

This post was also published on my company's blog Scott Logic Blog

Friday, 4 November 2016

I did it! I gave my talk!

This is a follow up on my earlier post about learning how to give a technical talk.

I did it! I gave my talk! The feeling of euphoria afterwards was overwhelming and I think I might still be buzzing from the experience.

I wanted to write a mini blog post to say a massive THANK YOU to everyone that came along to the Newcastle Testing meet up on 1st November. It was good to see a mix of both familiar and new faces. Also thank you to Russell & David for organising the evening, and thank you to sponsor Sage for providing a steady supply of beer and pizza.

There are a couple of links I would like to share.

Firstly, for anyone interested in attending the Newcastle testing meet up, full details of future meetings can be found at:
http://www.meetup.com/Newcastle-Upon-Tyne-Agile-Testing-Meetup/

Secondly, for anyone that was unable to make the event, I have managed to get my talk & slides uploaded to Youtube here:
https://www.youtube.com/watch?v=Jms67_-tHqY

Retrospective Thoughts

It felt like the talk went better this time than the previous time I gave it. I know the free beer definitely helped suppress any feelings of anxiety and fear.

I feel exhausted from the journey public speaking has taken me on, but it's been worth every moment. I need to take a rest but don't want to lose momentum, so I have decided to set myself the following speaking goals for 2017:

  • Give a talk to a larger audience
  • Give a talk that isn't about testing
  • Give a lightning talk at a conference (probably a 99 second talk at a Test Bash event)

Look forward to hopefully seeing you at the next meet up!

Tuesday, 25 October 2016

Test Bash Manchester 2016

Test Bash events in the UK are usually held in Brighton, making them a bit inaccessible to people living in the North. However, this changed on Friday 21st October when I was lucky enough to attend Test Bash Manchester, a software testing conference held at the Lowry in Salford Quays. Organised by Richard Bradshaw @friendlytester and Rosie Sherry @rosiesherry, this was the first time a Test Bash event had been held in the North West.

I woke up bright and early at 5:30am on Friday morning and made my way to the station to catch a train to Manchester. Travelling on the day of the conference unfortunately meant that I missed the first speaker, James Bach @jamesmarcusbach. I was, however, able to follow the live tweets during his talk about critical distance and social distance. There were some interesting slides. Sadly I had not heard the term 'critical distance' before, and Google only revealed a reference to an acoustics calculation. Lacking that context and a definition, I found the slides cryptically undecipherable. I heard the talk was very good, but I am really not in a position to comment on it.

I arrived at the venue just in time to sit down and listen to the second talk.



"Psychology of Asking Questions" by Iain Bright

The main thing I took from this talk was that, when encountering resistance in the workplace, we should keep asking "Why not?" in a loop until no more objections exist. I have heard of this tactic before in sales environments to handle customer objections. I did feel that the message within this talk could have been stronger. Credit to Iain, however: it takes guts to get up in front of a large room of your peers and deliver a talk. I really liked his slide with Darth Vader on it that described the dark side of asking questions.



"On Positivity – Turning That Frown Upside Down." by Kim Knup @punkmik

Kim's talk really connected with me. She said that as humans we are all hard-wired to look for negativity because recognising it was a key survival mechanism. She spoke about the testing wall and throwing things over it. The word "Wagile" came up, along with how working that way usually resulted in lots of overtime for testers. Kim explained how her testing job had made her start hating people, and this negativity had manifested itself in the act of logging as many bugs as possible. Essentially, in Kim's job software development had turned into a war zone. Her description stirred a lot of memories of the games testing I did in the early days of my career. Kim mentioned that during these dark days her record was 102 translation bugs logged in one day. This was very impressive, higher than my personal best of 63.

Kim told us not to start the day by complaining, and explained that happiness gives us an advantage because dopamine invigorates the brain and turns on its learning centres. She went on to explain that being grateful for three things a month can help re-program our brains so that they are more likely to scan for positive things. Happy brains have a wider perspective, increased stamina and creativity. A thoroughly enjoyable talk that left me feeling very positive about testing.



"Listening: An Essential Skill For Software Testers" by Stephen Mounsey @stephenmounsey

Stephen set about trying to prove that we don't listen hard enough. He asked us to listen and then to really listen. We eventually heard the sound of a buzzing fridge in the background, something which we had previously been blocking out. He told us that the amount of stuff we block out and ignore in everyday life is amazing.

Stephen went on to explain that listening was not skills-based and that we have different listening modes, such as critical or empathetic. He said that men and women tend to listen in different ways and that we should evaluate our listening position. He reminded us that listening is not about us, it's about the speaker, so we shouldn't interrupt them. It was an interesting talk that gave me a lot to think about.



"Testers! Be More Salmon!" by Duncan Nisbet @DuncNisbet

Duncan told us that shared documentation was not the same thing as shared understanding. He said that testing is asking questions to squash assumptions. He went on to explain that even though test first development tries to understand the need, it could be the wrong software that is being created.

As testers, Duncan wanted us to ask questions of designs before code gets written and to talk about testability. He wanted us to question not only the idea, but also the need and the why.


"The Four Hour Tester Experiment" by Helena Jeret-MΓ€e @HelenaJ_M and Joep Schuurkes @j19sch

The Four Hour Tester Experiment was inspired by The 4-Hour Chef, a book which attempts to teach someone to cook in just four hours. Helena and Joep wanted to know if it would be possible to teach someone to test software in four hours. As for what to test, they knew it needed to be a familiar concept, something not too hard to learn yet sufficiently complex, so they decided to use Google Calendar. If you know someone who would be interested in trying to learn testing in four hours, the Four Hour Tester activities can be found online at http://www.fourhourtester.net/

The talk concluded that while it is possible to illuminate testing to some degree, it is not possible to learn software testing in just four hours. The question and answer session afterwards echoed that this should not be viewed as a failure: it demonstrates how complex testing is, which in turn proves that it is skilled work.


"The Deadly Sins Of Acceptance Scenarios" by Mark Winteringham @2bittester

Very early on, Mark informed us this would be a rant about BDD scenarios. A show of hands revealed that a significant number of people were using Cucumber-style "given, when, then" scenarios. Mark explained that we write scenarios, then try to implement them, then realise that we missed bits and need more scenarios. He reminded us not to go too far. He wrote each of the deadly acceptance scenario sins in given, when, then format.

Mark told us that you can't specify love, but you can ask a loved one for declarative examples, e.g. 'What could I do to make you feel loved?'. He continued that a loved one might say "Well, you could email me sometimes or send me flowers". Mark explained that if we were to set up a cron task to automate emailing and ordering flowers online for a loved one, the love would be lost. He warned us that we shouldn't let scenarios become test cases and reminded us to put the human back in the centre of our automation efforts.


"A Road to Awesomeness" by Huib Schoots @huibschoots

Huib was exceptionally confident and chose not to stand behind the microphone stand but instead stood further forwards and addressed us directly. "I am awesome" he proclaimed, "Take note and be awesome too!" A video then played of The Script featuring Will.I.Am singing "Hall Of Fame".

Huib told us about his background. He had started in automation, then became a tester, then an ISTQB instructor. He went on to say that you wouldn't teach someone to drive a car by telling them how without letting them do any actual driving, yet this is exactly what ISTQB does with testing. He said the most value comes when you put someone in front of software and let them test it.

Huib said there was a difference between the kind of testing a business person might do and professional testing. He confirmed that professional testers are professional learners and explained that if we do the same work for 10 years, we might not have 10 years' experience, we might just have 10 repeats of the same one-year experience. During his talk, Huib said he really liked feedback, so I tweeted him with a tip for the data in his pie chart. Huib wanted us to ask ourselves the following questions: Who am I? What are my skills? What do I want? What do I need? How do I get there?

Huib's passion was so strong there were times I wasn't sure if I was listening to a tester or a motivational speaker. His talk was delivered with huge amounts of energy. It reminded me that there is always something new to learn and that receiving feedback is very important.

For part of the conference, I sat just behind Huib with some testers from the Netherlands and Belgium. During this time I learned that his name is pronounced like 'Hobe'.


"Is Test Causing Your Live Problems?" by Gwen Diagram @gwendiagram

Gwen asked us whether we can do load and performance testing in our test environments and reminded us that there is lots of room for error when humans carry out manual deployments. She dropped the f-bomb, repeatedly. Gwen spoke passionately about monolithic test environments that do more harm than good. She talked about deployments and the inevitable OMG moments which followed them. Gwen reminded us that monitoring is a form of testing. She also said to keep in mind that even when a company does monitoring and logging well, it can still be liquidated if its products don't sell.

Gwen's desire to make things better and do a good job was infectious. So much so that the first question asked after her talk concluded was "Would you like to come work for us?". My mind was blown.


"Getting The Message Across" by Beren Van Daele @EnquireTST

Beren spoke from experience about one particular test role he had held. The company had called in the cavalry and enlisted the help of a consultancy, but it soon turned into an 'us and them' situation. It was September and they had to finish their project by December. He was a junior tester at the time, and they had hired a QA manager with a strict, inflexible way of working. None of the bugs were getting fixed, so the testers decided to print out all the bugs, add pictures to them and cut them out. They then created a 'Wall of Bugs' in the most visible place in the office: the entrance way. This was an extreme measure, but management saw the problem and gave the developers more bug-fixing time.

Beren's story continued and went to some pretty dark places, like how the QA manager mysteriously disappeared and how the testers tried their best to cope with increasing levels of negativity in their workplace. Beren told us that he eventually left that job, but he stayed in touch with some of the people who worked there. He said that exploratory testing is still not accepted as valuable there and the testers have to hide the exploratory work that they do. Beren said that he felt like he had failed, and then he did something incredibly brave: a slide titled "My Mistakes" appeared and he told us where he thought he had gone wrong. Even though Beren is a new speaker, I was enthralled by his story. I really hope he continues sharing his experiences, as stories like his deserve to be told.


Test Bash Manchester was a resounding success.

It felt really good to finally meet so many of the brilliant people in the testing community that I have only ever spoken to online. The event left me recharged, re-energised and brimming with positivity. Test Bash left me feeling like I was part of a giant, global testing family. I have so much love and respect for the software testing community right now. I'm really looking forward to Test Bash in 2017.

This post was also published on my company's blog, the Scott Logic Blog.