Categories
Conferences Mobile Testing Presenting Technology

DevWorld 2019

I attended /dev/world in Melbourne this week. It is Australia’s only iOS developer conference. Here is my summary of the conference.

Photos from the conference are on Twitter under the hashtag #DevWorld.

Themes

There were a few themes that I gleaned from the talks:

  • Swift is the most talked about language in this space
  • People are still using Objective-C
  • Cross-platform remains a hot topic (iOS to Mac, React Native and Flutter)
  • Augmented Reality is still a cool toy
  • How to build engaging experiences in your app
  • Game design stories
  • Testing

Sketchnotes

Here are my highlights:

You can access all of my sketchnotes on Google Photos here.

My Talks

I presented on SwiftUI & wearables, using my sample poo tracking app as the basis. You can watch a similar talk from my Twitch channel, and you can access my slide decks here. I also gave a lightning talk on sketchnoting; you can read this blog post if you are interested in learning about it. I also have this blog post on Right to Left bugs, inspired by my talk, if you are interested in reading more.

Recordings

Eventually the talks will be up on YouTube under the AUC_ANZ channel. You can access previous years’ talks there too.

Categories
Design Mobile Testing Software Testing Technology

Right To Left design considerations for mobile apps

We truly live in a global and interconnected society. But have you tested your app with a Right to Left (RTL) language such as Arabic? This blog post is a reflection on some of the design considerations to keep in mind when accommodating this.

Why does this matter?

Arabic is one of the top 5 spoken languages in the world, with around 300 million speakers, and it is the third most spoken language in Australia. Even if you only release apps for the Australian market, someone out there will have Arabic set as their default device language. It’s OK if you haven’t translated your app, but you should check that these people can still use it.

How do I test this?

Android

Enable developer options and select “Force RTL layout direction”. On my Samsung S10, this is what my screen and options look like after enabling this option:
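If you prefer the command line, I believe the same flag can be toggled over adb (this mirrors the developer option on most Android versions, though the setting key may vary by release):

```shell
# Force RTL layout on a connected device/emulator (developer options must be on)
adb shell settings put global debug.force_rtl 1

# Restore the default layout direction
adb shell settings put global debug.force_rtl 0
```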

iOS

In Xcode you can change the build target language to a Pseudo RTL language to see how your app renders in this way without having to change the language on your device.
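If you’d rather not edit the scheme, my understanding is the same effect can be had with launch arguments on a simulator (the bundle identifier below is a placeholder; the two arguments are the user-default overrides Xcode uses under the hood):

```shell
# Launch an app on a booted simulator with pseudo-RTL behaviour.
# com.example.MyApp is a placeholder bundle identifier.
xcrun simctl launch booted com.example.MyApp \
  -AppleTextDirection YES \
  -NSForceRightToLeftWritingDirection YES
```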

Number pads

You don’t actually need to render your number pads Right to Left. In fact, it’s more jarring to render numbers in an RTL arrangement, because ATMs and phone pads are left to right in Arabic; most Arabs are used to globalised number pads. Samsung has an in-depth article on when RTL should be applied.

When I have RTL rendering set on my Android phone, the login PIN screen and phone call functionality are in LTR. However, some of my banking apps render their PIN pads in RTL.

Common RTL Issues

I was pleasantly surprised to find out how many of my apps weren’t broken when I switched to RTL rendering. Facebook, Twitter and email still look very good. Some apps (like my calculator) do not make sense to render RTL and they remain LTR:

Bug One: Overlapping labels

You will have to watch out for labels overlapping, like in the Domain app here:

Bug Two: Visuals don’t match the written language

And watch out for when your text is rendered RTL but the visual cue is still LTR, like the shade bar representing the countries of visitors to my blog in this WordPress statistics view:

Bug Three: Menus that animate from the side

In the app I’m helping build, the side menu renders pretty funkily in RTL mode. I can’t show you a screenshot of this behaviour, but it’s probably the quirkiest RTL bug I’ve seen. If you find an app with bad side menu behaviour in RTL, please share your screenshots with me.

But here are some screenshots of the CommSec app on Android (LTR on the left and RTL on the right for comparison):

Bug Four: Icons aren’t flipped

Often icons have a direction associated with them, like the walking person in Google Maps directions. Sometimes it can look a little odd when they aren’t flipped correctly (as if they are walking backwards).

Have you seen these bugs before?

Please let me know your thoughts or experiences in supporting RTL languages. I’d love to hear your stories.

Categories
Mobile Testing Software Testing

A Mobile App Test Strategy

Test strategy, what a funny concept. This strategy isn’t going to help you win any battles (that is where the word strategy comes from, after all), but for lack of a better understood term, this blog post is a reflection on what I imagine will work for my team*.

*Disclaimer: what might work for my team might not work for yours. People are amazingly diverse, and your team and company context is fundamentally different. Also, this is a wish list of what I think will work. It’s subject to change as we learn and evolve.

Context

First, let’s set the scene. Our scrum team includes 1 Android developer, 1 iOS developer, 2 back end developers, 2 business analysts (1 is our scrum master), 2 testers and a team/tech lead. We are changing our team structure and I’ve come on board as a software engineer in test. Our team closely collaborates with the design team; they are included in our group email threads but don’t come to our retros. We have a 10-day sprint cycle that looks a little like this:

We have a daily standup, a few kick-off meetings at the start of the sprint to lock in what we are working on for the next 2 weeks, some mid-sprint review/next-sprint refinement sessions and a few meetings at the end that help tie up what we’ve completed. Consider this a crash course in Scrum Agile, if you will. Not everyone is required to attend all of these meetings, and I won’t be covering them in detail in this blog post.

Get to the Test Strategy

Yes, I know, that was a rambling tangent, but the context is important. Before I get into the good bits, I’ll ask you a question:

Why do we even bother with testing?

Some people say “to ensure the product works as expected”, or “to find bugs”, or “it’s my job to test things”, and these are all OK answers, but they miss the point a little. Here’s my answer:

We test to get feedback on the state of the product. To help us answer the question, “are there any known risks with shipping this product to production?”

Paraphrased from conversations with Michael Bolton, the tester not the singer

Every part of the following strategy is tied into facilitating feedback. The more timely and accurate the feedback, the better.

Testing and quality are a team responsibility; it’s not just up to one person to be the quality gatekeeper. My role is to help facilitate feedback.

Layer One: The product design feedback loop

This is all a little out of scope of my team’s day-to-day activities, but this is how our design team tests whether we are building the right thing that users need.

User/Product Research

This might involve researching our market for current trends. How many of our customers care about their superannuation? What is their financial literacy? What types of problems are they facing? What are our competitors doing, and how does their experience deliver value?

Wireframe/Design prototyping

Eventually someone will need to start sketching out some design ideas. What’s the user flow through a particular feature?

User Testing

This won’t happen for every new design; for example, log in hasn’t gone through this process. Our big new features will go through this type of testing. It helps get feedback on the design and layout. Does it all make sense?

Design and user story creation

Out of all of this work, the design team and the business analyst will eventually work together to create acceptance criteria, refine the UI and get the rest of the team up to speed with the context of a feature. Our user stories and designs are usually shared on a Confluence page and linked to Jira tasks. We use a GIVEN WHEN THEN structure for our user stories.
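As a sketch of that structure, a hypothetical login story might look like this (the feature and values are made up for illustration, not from our real backlog):

```gherkin
Feature: Log in with a PIN

  Scenario: Correct PIN unlocks the app
    Given a registered user with a 4-digit PIN
    When they enter their PIN correctly
    Then they land on the account dashboard

  Scenario: Three wrong attempts locks the account
    Given a registered user with a 4-digit PIN
    When they enter the wrong PIN three times in a row
    Then the account is locked and a recovery flow is offered
```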

Layer Two: The code feedback loop

Exploratory Testing

All testing is exploratory in nature; it’s front and centre, across everything we do. Chaos engineering is a type of it, as is building the code locally. We use our skills, plans and judgement to determine when and how much testing is needed at any point.

Experience Testing

When we do the code review, we will do exploratory testing based on the risk of the feature, time-boxed to a session or two depending on what has been built. We will look at the user stories, brainstorm any more edge cases and consider if they are worth testing, checking whether the experience of the feature makes sense and whether there are any ways people can get into some sticky unexpected situations.

Unit Tests

As a developer builds a feature, they will create unit tests based on the user acceptance criteria. Developers will use the tools they are most comfortable with to write these tests. If you’d like to read more, Martin Fowler has this blog post on unit testing.
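As a toy illustration (not our real code), a unit test derived from an acceptance criterion like “a PIN must be exactly 4 digits” might look like this, sketched in Python’s unittest:

```python
import unittest

def is_valid_pin(pin):
    """Hypothetical acceptance criterion: a PIN is exactly 4 digits."""
    return len(pin) == 4 and pin.isdigit()

class PinAcceptanceTests(unittest.TestCase):
    def test_four_digits_is_valid(self):
        self.assertTrue(is_valid_pin("0412"))

    def test_letters_are_rejected(self):
        self.assertFalse(is_valid_pin("12a4"))

    def test_wrong_length_is_rejected(self):
        self.assertFalse(is_valid_pin("123"))
        self.assertFalse(is_valid_pin("12345"))
```

Run it with `python -m unittest`. The point is the mapping: each test name reads back as a line of the acceptance criteria.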

UI Automation

Here’s a visual risk board for UI automation; each feature is mapped against customer impact vs frequency of use.

I have my visual risk board next to our team which we use to prioritise how much testing we build at this layer. We use Espresso for Android and XCUITest for the iOS app.

“Why not Appium?”, I hear you ask

Simple: when test code lives in a repository outside of your production code, you decrease collaboration with the whole team. Also, you can’t easily run your Appium tests during pre-commit testing or locally as a developer. You can follow the interactive visual risk for UI automation exercise here to understand more.

Code Review

When a new API is being developed, I’ll often pair with the developer to do a code review. We will talk about the architecture, brainstorm testing ideas, do a bit of testing (usually through Postman if we are testing an API) and chat about test coverage. Is it adequate? Is there anything missing? Can we see the tests fail under the expected conditions?

If it’s a front end feature, I’ll check out the code locally and use a different emulator/simulator than what the developer uses. I’ll give the feature a good shake out and check the test coverage. I’ll also test for accessibility if it’s a new front end feature.

Mock Testing

For our mobile app, we are able to do most of our code review testing without ever talking to a backend. The engineers have built mock servers into our apps: when the app would call an API, our mock server returns a canned response. This helps us test that the UI and the flow hang together even when test environments aren’t available. If you’d like to read more, check out this article on mock testing for Android or this one for iOS.
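The idea is language-agnostic. As a rough sketch in Python (the endpoints and payloads here are invented, not our real API), a mock “server” is just a lookup from request to canned response, so the UI flow can be exercised offline:

```python
import json

# Canned responses keyed by (method, path) — stand-ins for real API calls.
CANNED = {
    ("GET", "/account/balance"): {"balance": 125.50, "currency": "AUD"},
    ("POST", "/transfer"): {"status": "accepted", "reference": "MOCK-001"},
}

class MockServer:
    """Returns a canned payload instead of hitting a real backend."""

    def request(self, method, path, body=None):
        try:
            return json.dumps(CANNED[(method, path)])
        except KeyError:
            # Unknown endpoint: surface it loudly so a missing stub is obvious.
            return json.dumps({"error": "no canned response", "path": path})

server = MockServer()
```

In the real apps this lookup sits behind the same interface the networking layer uses, so the UI can’t tell the difference.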

Build Pipelines

We have different pipelines for different applications, using TeamCity as our Continuous Integration tool. Generally, all of our unit tests and UI tests will be run, and maybe our contract tests. I have a few other ideas to increase the value from our build pipelines that I’ll talk about under Chaos Testing. If our main builds start failing, we won’t release the software.

Device Coverage

We don’t necessarily focus on doing device testing for each feature that comes through. I try to pick a different emulator/simulator than the developers do, and I always make sure features get tested on a Samsung. For some features, if they are 3-star features from our risk analysis, we will spend more time testing on a wide variety of devices. We currently have an on-premises mobile device cloud server delivered by Mobile Labs. If you don’t have a device cloud, you could set up your own device farm.

Why Samsung?

Samsung has wide market saturation and they always do funky stuff to the Android UI. The Android emulators are awesome at vanilla Android. However, most people out there aren’t using vanilla Android :(.

Contract Testing

We are moving towards having contract testing in place that lets us know if an API starts to break: if someone changes the JSON payload in an API, our contract tests will break and someone will know they have more stuff to clean up. We don’t have contract testing for our mobile app yet, but some of our downstream microservices are starting to build these. If you’d like to find out more, read this article by Martin Fowler.
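We haven’t built ours yet, but the core idea can be sketched in a few lines: the consumer writes down the payload shape it depends on, and a check fails when the provider’s payload drifts (the field names below are invented for illustration; real tools like Pact do this with recorded interactions on both sides):

```python
# A consumer-side contract: field name -> expected type.
ACCOUNT_CONTRACT = {"id": str, "balance": float, "currency": str}

def satisfies_contract(payload, contract):
    """Return a list of violations; an empty list means the contract holds."""
    problems = []
    for field, expected_type in contract.items():
        if field not in payload:
            problems.append("missing field: " + field)
        elif not isinstance(payload[field], expected_type):
            problems.append("wrong type for " + field)
    return problems
```

A check like this runs in the provider’s pipeline, so a payload change that would break a consumer fails the build before it ships.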

Test Environment

We have an integration test environment that our code is constantly deployed into. Sometimes that can mean an API is down because it’s being deployed. We do a lot of our API testing in this environment.

Chaos/Crash Testing

On Android there’s a command line tool called Monkey (not to be confused with Netflix’s Chaos Monkey). This tool is a UI exerciser; it throws random user input at your UI to try and find where it crashes. I’m hoping to include this in a build pipeline for an overnight build: run it for a few hours on an Android device and see if it crashes, then the next night do the same thing on a different device/OS combination. This will give us reasonable device coverage over a sprint. I don’t know of a similar tool for iOS. You can read more about chaos engineering on Wikipedia.
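For reference, a monkey run looks something like this (the package name is a placeholder; the flags are standard monkey options):

```shell
# Throw 50,000 pseudo-random events at the app, pausing 100ms between events.
# -s fixes the random seed so a crashing run can be replayed exactly.
adb shell monkey -p com.example.app -s 42 --throttle 100 -v 50000
```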

Layer Three: Shipping the product feedback loop

Bug bash

A few days before the end of the sprint, our team and invited guests will sit down and do some exploratory testing on the features that have just been built. If anyone wants to explore a new API that’s been built, they can; if they’ve had their head in unit tests lately, they have the chance to explore some of the new UI. You can read how to run a bug bash to find out more. If major bugs are found here, we won’t release the software. We might do a mob programming session when we don’t have enough features for a bug bash.

Demo

On the last day of the sprint we will demo our features to a broader audience. Feedback is gathered and turned into Jira items/research for the design team.

Internal release

Then we release to internal staff members, which many other companies call “eating your own dog food”. This gives people the chance to raise more feedback before we put the product in front of customers. You can read more on Wikipedia here.

Beta Release

We can release our app to our high value or digitally savvy customers who want to be ahead of the curve. This is a customer engagement strategy as well as a test strategy.

Percentage roll outs

The Google Play store allows you to do percentage rollouts. Say you roll out the new version to 5%, then monitor production for any new crashes or customer complaints. If it’s all smooth for a few days, you can continue the rollout to 50% and then 100%. The Google Play store also allows you to halt the rollout if major bugs do occur. Apple’s App Store has a similar feature (phased releases).

Monitoring in production

What metrics should be communicated back to the team? How can we respond to issues in production? I like this quote from a 5-minute Google talk back in 2007:

Sufficiently Advanced Monitoring is Indistinguishable from Testing

Ed Keyes

Layer Four: Supporting the product feedback loop

Supporting devices

We should support all of the devices that 80% of our market uses. We will support devices from Android 6 (Marshmallow) and from iOS 11. There will probably be some obscure Android devices out there that don’t play nice with our app. Android is a beast like that.

Facilitating customer feedback

There will be an easy way for customers to provide feedback in the app. I have some ideas on how to make that experience better, but there are privacy concerns to consider. We will also be monitoring our Google Play and App Store reviews for bugs.

Triaging feedback

Someone should be monitoring all of this feedback, attempting to reproduce bugs customers are facing and raising them in the team’s backlog for prioritisation next sprint.

Soap Opera Testing

Maybe in the future we could try some soap opera testing with the business? Soap opera testing is a condensed and over-dramatised approach to testing. What are the wackiest scenarios our customers have actually tried? How does our system break? You can read more about this exercise here.

Why the layers?

Consider each of these layers like a net. It won’t catch everything; bugs in production will still happen. But when we have all of these feedback loops layered on top of each other, we get a pretty tight net where hopefully no major issues get into production.

What about auditing or compliance?

Our source of truth is the code, Jira and Confluence. When we have all of it integrated, we can prove we tested a feature thoroughly without too much extra overhead. An auditor’s mindset and a tester’s mindset are very similar: testers are concerned with product risks, auditors are concerned with business risks.

Their main questions are, “Did you do what you say you do? Did you follow your process?” and, “Is the existing process adequate?”

Where are your test cases?

Michael Bolton has a 7-part series on breaking the test case addiction; you can read part one here. You don’t need test cases to prove you did adequate testing. They create unnecessary overhead that detracts from adding business value.

What else is missing?

Security testing is not included in this test strategy, and neither is performance testing. Getting these included can be challenging. I’m open to your suggestions on how I can incorporate this type of feedback in a timely manner.

What else would you add to your test strategy?

Categories
Marketing Software Testing

70 days into #100DaysOfLinkedIn

Wow, I’m two-thirds of the way through my #100DaysOfLinkedIn marketing campaign. Here is an update on how I’ve adapted and grown over that time. You can read up on the launch of the campaign and a halfway-through update too.

2200 connections

Before starting this campaign, I had 1400 connections. I’ve now seen this grow to over 2200 at the time of writing this blog. I’ve written to every single one of those 800 new connections. EVERY. SINGLE. ONE. I’m now up to the third version of my template message. Here it is:

Hi ,

How are you? Thanks for connecting. What are some of the challenges facing you these days?

You might enjoy reading my blog on Soap Opera Testing:
https://bughuntersam.com/soap-opera-testing/

Is there anything I can help you with?

Regards,

Sam

It’s short and sweet. Most people don’t respond, but I’ve been able to organise a few key meetings with this approach, organise a few testers Meetup events and score a job with a startup.

Reconnecting with Sydney testers

I have around 200–500 QA/testing professionals based in Sydney in my network. I’ve been reconnecting with them to see what events they are interested in. Here is the template message I use to reach out to these people:

Hi ,

Have you been to a Sydney Testers event recently? We’ve got a few events lined up that might interest you:

API Testing at ING on the 4th of April https://www.meetup.com/Sydney-Testers/events/259748843/

Performance Testing at the Rockend in St Leonards on the 9th of May

https://www.meetup.com/Sydney-Testers/events/259722603/

I’d greatly appreciate it if you could share those events with any colleagues who would be interested.

If you can’t make it on the day, I will be streaming these meetups through www.twitch.tv/BugHunterSam

What other events would you like to see?

Regards,

Sam

Avoiding Automation

I don’t automate these messages, because that is against LinkedIn’s terms and conditions. Instead, I have a Google Doc full of these template messages that I copy and paste into LinkedIn, adding the person’s name at the start.

5000 connections?

Can I get to 5000 connections? Maybe not by the time I finish this campaign, but I will continue to use the techniques I’ve learned after it ends. If I get to over 5000 connections, I will become the most connected software tester that I know.

Categories
Software Testing

Test Automation Uni – Day 1

Today I hosted a lovely bunch of keen learners who wanted to put aside some time and do an online course with me. You can check out the event on Meetup here. Today’s topic was Setting a Foundation for Successful Test Automation by Angie Jones. I also streamed the activity and conversations via twitch.tv/BugHunterSam. Here is what my lounge room looked like:

Chapter 1 – Designing a Test Automation Strategy

  • What is your goal for starting a test automation initiative and what is it that you want to accomplish?
  • Who do you envision participating in your test automation initiative and in what capacity?
  • How do you plan for the execution of this strategy?

Chapter 2 – Creating a Culture for Test Automation Success

  • How to get people on board
  • How to help them understand their place in the automation strategy
  • How to enable them to do what’s needed

Chapter 3 – Developing for Test Automatability

The test automation pyramid is a mental model for thinking about which level you should build your automation tests at. It’s a little contested because it doesn’t acknowledge exploratory testing, but it’s a widely used model. James Bach has a blog on the Round Earth model that could also be used.

Chapter 4 – Tooling for Test Automation

We discuss how to choose the right tools for your initiative. Before choosing your tools, it’s important to consider who will be using them.

Chapter 5 – Future-proofing Your Test Automation Efforts

Without a clear strategy in mind, many teams make the mistake of automating their tests only for their current situation. Perhaps you’re just starting out and you have a dozen or so tests that run locally, so you don’t yet see the issues that poorly written test code can surface.

Design patterns

Become familiar with design patterns that are especially beneficial for test automation projects such as:

  • Page Object Model
  • Screenplay
  • Fluent
  • Builder
  • Singleton
  • Factory
  • Facade

Here is a getting-started guide to Page Object Model (POM) architecture with C# and Selenium:
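The guide itself uses C# and Selenium; as a language-agnostic sketch of the same shape (in Python, with the driver left abstract and all names invented), the point is that locators and actions live in one page class, so tests read as intent rather than raw selectors:

```python
class LoginPage:
    """Page object: the login screen's locators and actions live here."""

    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#submit"

    def __init__(self, driver):
        # Any driver with type/click-style methods would do (e.g. WebDriver).
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

class FakeDriver:
    """Stand-in for a real WebDriver; records actions for the example."""

    def __init__(self):
        self.actions = []

    def type(self, selector, text):
        self.actions.append(("type", selector, text))

    def click(self, selector):
        self.actions.append(("click", selector))
```

If the login screen’s markup changes, only `LoginPage` changes; every test that calls `log_in` stays untouched.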

Chapter 6 – Scaling Your Test Automation

Writing automated tests that run perfectly against one environment is challenging in and of itself. But what about when you’re ready to scale your one suite of tests to run in multiple environments, browsers, or devices?

LUNCH

On the menu today was Turkish: we had a lentil tabbouleh salad, Turkish bread, felafel, hummus, tzatziki and marinated chicken. You should come along to the next event just for the food. I always put on a good spread.

Chapter 7 – Measuring the Value of Your Test Automation

Many automation projects fail due to unrealistic expectations. To avoid this, it’s best to identify expectations early and communicate these expectations to the entire team. What’s your expected Return on Investment?

Bonus Material: Writing readable code

Having readable code is a prerequisite to scaling your product and the organisation behind it. Hard-to-read code not only intimidates your co-workers (and your future self) but also conceals bugs and hurts your team’s velocity, since every modification takes twice as long as it should. This talk shares the principles of writing clear, idiomatic JavaScript code, illustrated with real-world examples.

Hands on exercise

We finished the day with a live code example of getting started with Page Object Model architecture using C# and Selenium. I would share the video with you, but I was experiencing issues with OBS and was unable to record one :(.

Categories
Agile Conferences Software Testing Technology

Tails of Fail

Today I gave a talk at TiCCA (Testing in Context Conference Australia). The talk topic was Tails of Fail: how I failed at a quality coach role. It’s a story of how I tried out this quality coaching thing but didn’t pass probation. You can access the slides here. I used Slido to manage the questions at the end of the session.

At the end of the day, it can be hard to demonstrate the value a quality coach adds.

Will you answer all these questions offline?

Yes, this blog post is an attempt to answer all of the unanswered questions that were raised. Thanks, Richard. First of all, a bit of context that was missed in my intro: I’m currently a Test Analyst at a superannuation company. I don’t technically have coach in my title, but I’m also growing my side business where I provide training and workshops for teams in testing skills. This might have caused some confusion with the questions.

What does a quality coach’s typical day look like?

When I was at Campaign Monitor, my day would start with a stand-up and seeing what items needed focus for the day. The team might have a work item that needed a bit of testing, and I’d be available to pair test with that developer if needed. Some days we would run workshops (training for quality champions: developers who wanted to improve their testing skills) or bug bashes (these were generally once a fortnight).

What are the differences between a quality coach and an agile coach and a test coach?

An agile coach is a facilitator, often Scrum certified (but not always). They are usually more focused on helping the team collaborate effectively than on improving the team’s quality/testing practices. I don’t see much difference between a test coach and a quality coach; use the words that make sense in your context.

Are there any drawbacks to using a quality coach practice?

Yes. When you are encouraging people who prefer to build things to work out their tester’s mindset, you aren’t going to get attention as focused as from someone who has spent their career practising their testing craft.

Also, you might have some really technical testers who aren’t interested in coaching/leadership skills. You shouldn’t expect everyone to want to become a coach, and that’s fine too.

What are the benefits to the organisation of the assistance/coach/advocate model?

If your company believes that quality is a team responsibility, a coach can help lift the testing capabilities of a team. If you need a bit of focus on quality (maybe you have lots of customers complaining about bugs and it’s costing you big $) but you don’t know how to get your engineering teams to prioritise bug finding as well as building new features, a coach could help. There isn’t a great deal of training out there on how to be a good tester; it’s not as easy as sending your devs off to a 3-day course and bam, they are master bug hunters.

If everyone is responsible for quality, is anyone really responsible for quality?

You could always say the CEO or CTO is fundamentally responsible for quality. Maybe have a Chief Quality Officer (CQO)? Maybe they’d just become a scapegoat for all of the problems you face in production? The testing teams themselves aren’t responsible for quality if they can’t easily build quality in, either.

What is a good team to quality coach ratio?

It depends on the team and company. When I was at Campaign Monitor, we had 2 testers to roughly 50-ish engineers, hence why we were using the quality champion model to help get more quality reporting from teams: we physically couldn’t sit with all 6 teams at the same time to understand their pain points. I’d prefer 1 coach to 1–2 cross-functional teams. Being embedded in and focused on one team of roughly 8 people would work for me.

What are the challenges you faced while quality coaching?

Clearly articulating how I add business value in a way that aligns with my own intrinsic motivations and interests. I don’t think I’ve struggled with convincing developers they need to do more of their own testing.

Categories
Agile Critical Thinking Mobile Testing Software Testing Technology

Visual Risk & UI Automation framework

Have you wanted to start with automation testing and not known where to begin? Or maybe you have hundreds or thousands of test cases in your current automation pipeline and you want to reduce the build times. Here I will walk you through one way you could consider slicing up this problem, using examples from Tyro’s banking app (I used to work on their mobile iOS team).

Break into flows

Analyse your app/site/tool and brainstorm the main flows that people will take through it. I picked 6 flows using Tyro as an example app, then numbered them.

1. Registration

Registration is a pretty common feature. You might also set up two-factor authentication, a PIN and a password for the account (especially if it’s a bank account).

2. Transfer Funds

If you have a bank account, it’s highly likely you will want to access the money in it at some point.

3. View Transaction

You might want to check if that bill was paid correctly or if the last transfer was processed.

4. Contact Us

Something not quite right? Send us a request and we will give you a phone call at a convenient time.

5. Change Pin

When was the last time you changed the PIN for your mobile banking app?

6. Log in

I’d say this is a pretty common feature.

Mapping those flows to a risk board

Draw a graph and put frequency of use on the x axis along the bottom; things that are used more will be on the right-hand side. On the vertical y axis, put impact if broken. This is from a person’s point of view: how much would they care if that feature was broken? From a business point of view you may have a different understanding of risk, and that’s fine too. We will go into how to reflect that later.

Add your flows

We have our 6 flows to the right-hand side of our graph, and we’ve also broken the graph into 3 areas.

Move the flows to your graph

It helps to pair on this exercise to build up a shared understanding. Do your designers and engineers have the same understanding of risk as you do? It’s OK if your answer is different to mine; we all have a different context and understanding.

Reflect other elements of risk

You might want to reflect other elements of risk, such as security, financial, regulatory and anything else you can think of. At the end of the day, this is only a 2D representation of risk, and risk is a little more complex than the dimensions we put here.

Neat, what’s next?

If you are thinking, “well, that’s cool and all, but what does that have to do with automation testing?”, then please continue reading. You could use this board to decide which tests you should focus on building or refactoring next (hint: the stuff with 3 stars is pretty important). You could also use it to prioritise your performance testing efforts. I took this board to our planning sessions to talk about new features, and it helped with deciding how much automation/testing effort we might need. At the end of the day, your software will be more complex than this example.
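One way to make the board actionable: give each flow rough 1–3 scores for impact and frequency, and sort by the product. The scores below are my illustrative guesses for the example flows, not Tyro’s real ratings:

```python
# (flow, impact-if-broken 1-3, frequency-of-use 1-3) — illustrative scores only.
FLOWS = [
    ("Registration", 3, 1),
    ("Transfer Funds", 3, 3),
    ("View Transaction", 2, 3),
    ("Contact Us", 1, 1),
    ("Change Pin", 2, 1),
    ("Log in", 3, 3),
]

def by_priority(flows):
    """Highest impact x frequency first: build/maintain those UI tests first."""
    return sorted(flows, key=lambda f: f[1] * f[2], reverse=True)

for name, impact, freq in by_priority(FLOWS):
    print("{:>2}  {}".format(impact * freq, name))
```

The sticky-note board does the same job with less maths; a score list is just handy when you want to argue about the ordering in a planning session.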

Here is the actual board I used at Tyro with a bit more detail:

I then broke down each flow into test cases and grouped similar test cases into a bare-bones automation test suite. You can also use this approach to generate exploratory testing ideas for each screen in your flow.

You can watch this talk in full here:

I also run this as a 30–45 minute lunchtime workshop exercise. Book me in for a lunchtime brownbag if you are based in Sydney (I can do remote too).

Categories
Conferences Critical Thinking Presenting Software Testing

Soap Opera Testing

What is it?

Soap Opera Testing is a dramatised method used for testing your business processes. You might want to try it for a super-condensed and thorough way of highlighting bugs. And because it’s fun. Embrace the drama.

Origins

Cem Kaner has been writing about scenario testing for a long time. He published this article, ‘An Introduction to Scenario Testing’, and Hans Buwalda presented on ‘soap opera testing’ nearly 20 years ago 😱. They’re both serious tester dudes and this stuff is legit.

How Does it Work?

You might start with a brainstorming session with your sales or customer support team. Ask them for stories about things your users have done: not just the ordinary things, but also some off-the-wall and crazy things. What you’re looking for is drama.

It might help to sketch out the story briefly. Write down the steps that are essential, or those you might make a mistake on. Cem Kaner gives some practical tips here, although you definitely don’t need to read all 500 pages.

Kaner’s Introduction to Scenario Testing is a bit more bite sized and describes the five main points your scenario needs. Namely that it’s a story, it’s credible, it will test the program in a complex way, the results are easy to evaluate and stakeholders will see the point of fixing the bugs identified.

A scenario is a hypothetical story, used to help a person think through a complex problem or system.

Cem Kaner

You then run a test exercise using the characters and scenarios from a soap opera, and analyse the results. You can do this as many times as you want, with as many different scenarios.

Use whatever soap opera you like. We make no judgements. Although fair to say that if you use A Country Practice, you’re showing your age and nobody will know what you’re talking about.

Let’s Soap Up

Here’s an example of Soap Opera Testing using The Simpsons. The program being tested is a mortgage loan application.

Let’s say Homer Simpson wins the lottery, and decides to apply for a second mortgage, for an investment property. Just as the paperwork is about to go through, Grampa Simpson burns his apartment down.

Investment property

Homer decides to help him out with the cost of a rental, meaning he needs to change the deposit he’ll pay on his investment property. Homer signs the amended paperwork but he signs it incorrectly.

Then his application is declined because even winning the lottery doesn’t give you a good credit rating overnight. The Simpsons’ next ‘diddly-door’ neighbour Ned Flanders offers to help Homer out. He’ll put in the 10% deposit.

Ned lends a helping hand

His own house is 90% paid off so it’s no big deal to him, and it will help Homer get around his bad credit rating. The Simpsons’ house is 50% paid off, and they’re putting down a 90% deposit, using Homer’s lottery winnings, and leaving some bowling money left over.

Side investment

They’re about to go to the bank and lodge the paperwork, when Homer’s half-brother Herbert Powell hears about the lottery win. Boy has he got the mother of all investment options for Homer – nuclear powered cars!

Adjust down payment

Homer can get in on the action if he puts some cash into building a prototype. So Homer has to syphon off yet more funds from the deposit he’ll make on his investment property, and change the paperwork again.

Whew, put all that through the system and see where you get to. If you think of more variables as you go, you can add them to the scenario and run the test again.

What We’ve Tested

A whole heap of stuff.

We tested rejections, with Homer’s first application, and signature recognition when he goofed up his name.

We tested multiple applications made by the same person, with an adjustment in the deposit amount made after the application had gone through.

We tested how to register multiple assets with different mortgage amounts, and a different percentage of ownership. What’s more, the owners of the properties and mortgage were not residents at the same address.

The applicants had different credit ratings, which affected the different algorithms in their application process. And they weren’t related, and didn’t intend living together at the property, which was for investment only.

Here’s a snappy list:

  • Rejections
  • Editing documents
  • Multiple applications from the same person
  • Adjusting deposits
  • Multiple assets with different mortgages
  • Different percentage ownership
  • Different credit ratings
  • Unrelated co-owners
  • Investment property applications

It only takes a little imagination to try to find many more bugs using a soap opera scenario, versus the standard “works as expected” response we’d have gotten from the test-case walk through.
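If you wanted to drive a scenario like this through an automated suite, one rough shape is a scripted sequence of steps with assertions after each dramatic twist. The `MortgageSystem` class and its rules below are hypothetical stand-ins, not any real lending system:

```python
# A rough sketch of the Homer scenario as a scripted test.
# MortgageSystem, its methods and approval rules are all
# hypothetical stand-ins, not a real API.
class MortgageSystem:
    def __init__(self):
        self.applications = {}
        self._next_id = 1

    def apply(self, applicant, deposit_pct, credit_ok):
        app_id = self._next_id
        self._next_id += 1
        status = "approved" if credit_ok and deposit_pct >= 10 else "declined"
        self.applications[app_id] = {"applicant": applicant,
                                     "deposit_pct": deposit_pct,
                                     "status": status}
        return app_id

    def amend_deposit(self, app_id, new_pct):
        app = self.applications[app_id]
        app["deposit_pct"] = new_pct
        app["status"] = "approved" if new_pct >= 10 else "declined"


bank = MortgageSystem()

# Twist 1: Homer applies alone -- declined on his credit rating.
first = bank.apply("Homer", deposit_pct=90, credit_ok=False)
assert bank.applications[first]["status"] == "declined"

# Twist 2: reapply with Ned backing the 10% deposit.
second = bank.apply("Homer + Ned", deposit_pct=10, credit_ok=True)
assert bank.applications[second]["status"] == "approved"

# Twist 3: Herb's nuclear-car prototype eats into the deposit.
bank.amend_deposit(second, 5)
assert bank.applications[second]["status"] == "declined"
```

Each twist in the soap opera becomes a state change plus an assertion, which is exactly where the interesting bugs tend to hide.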

Here’s a three minute recap in a lightning presentation I gave at the Selenium Conference in India.

You can access the slide deck here

Have fun with Soap Opera Testing and tell me about your scenarios – add a comment below.

This article came into existence with the help of Fiona Stocker, a freelance writer and editor from the beautiful Tamar Valley in Tasmania

Visit Fiona’s website here
Categories
Agile Software Testing Technology

Becoming a Quality coach – course overview

I had the pleasure of doing Anne-Marie’s Becoming a Quality Coach course today, which was organised by Test-Ed. If you are looking to transition to a quality coach role, it’s worth keeping this course on your radar. Anne-Marie is a renowned expert in software testing and quality engineering, and I had the pleasure of working for her at Tyro.

What is Quality Coaching?

First page of sketchnotes for the course – what is coaching?

How is quality coaching different from being a test lead? It depends on what your team wants out of a quality coach role, but here is an example job description from Deputy’s principal quality coach role:

What You Will Do:

  • You will provide the guidance, inspiration and motivation for our amazing engineers to be better testers.
  • Help create a high-quality testing culture
  • Push the merits and benefits of TDD
  • Visualize testing and quality
  • Communicate with product and technical stakeholders
  • Be a customer advocate

How You Will Do It:

  • You have a combination of in-depth knowledge of Quality Assurance and Software Engineering principles and practices
  • You command the skill to communicate clearly and effectively.
  • You work directly with Engineers, Quality Coaches, Product Managers, and Discipline Heads to ensure the high quality of our software and practices.

What You Will Need:

  • 7+ years software engineering / testing experience
  • Strong understanding of QA processes and concepts.
  • Proven coaching experience in a development team with examples of how you’ve made a significant impact to their testing capabilities
  • Excellent written and verbal communication skills

Some questions you might ask?

Some people think that coaching is all about knowing when to ask the right questions. The Coaching Habit by Michael Bungay Stanier would have you believe that all you need to coach someone is seven questions:

  • What’s on your mind?
  • And what else? (repeated a few times)
  • What’s the real challenge here for you?
  • What do you want?
  • How can I help? or What do you want from me?
  • If you say yes to this, what must you say no to?
  • What was most useful or most valuable here for you?

I think this only applies to one-on-one coaching; it doesn’t scale well to coaching a small team of developers, and it definitely doesn’t scale to giving a lecture to hundreds of people or online. I think a good teacher is a good coach, and also knows when someone needs a bit of mentoring instead.

Models for Coaching

We discussed two different models you can use for coaching: Goal and ADKAR. We also discussed what quality means to us and expanded on a few definitions.

What does ADKAR stand for?

  1. Awareness: Leading people to see the need for change.
  2. Desire: Instilling the desire for change.
  3. Knowledge: Providing employees with the information or skills they need to achieve change.
  4. Ability: Applying knowledge and skills to bring about change.
  5. Reinforcement: Making sure that people continue to use the new methods.

We also briefly discussed Kent Beck’s talk on 3X (Explore, Expand & Extract).

Sketchnotes from Kent Beck’s 3X talk

Coaching Software Testing

Test leads will need a bunch of skill sets to do well in coaching. We also used role play to practice our newly developed coaching skills.

Running Software Testing Workshops

When running a coaching session there could be a bunch of behaviours you come across in your testers or developers that are mental barriers to trying something new. Your developers might say:

  • Testing isn’t my responsibility
  • I don’t have time for testing
  • Testing is boring
  • What if I miss a bug?
  • All testing should be automated

Your testers might respond with mindsets like:

  • If I help developers do their testing, how will I prove my value?
  • I’m not technical, I can’t help with code reviews
  • I might lose my job if I raise bugs earlier
  • 100% coverage is achievable

Summary

It was a good day of engaged learning. I’m not really working in a context where I can put a lot of these coaching methods into practice though. How would you come up with antidotes to these mindset problems in your team?

Categories
Marketing Meetup Software Testing

Orders of Communication

Have you ever wanted to ask a large group of people their thoughts on a particular topic? Maybe you want to know what the 2000+ members of the Sydney Testers meetup group want to get out of the group? Was your first thought to create a poll or send out a survey? I bet you that channel didn’t work out so well for you because I tried it.

Here is my list of orders of communication to help you get the results you need when trying to get data to help influence your decisions.

Face to Face

Nothing beats face to face communication. The only drawback with this method is that it’s hard to scale to reach a mass market. The best way to get lots of feedback in a face to face style is to collect opinions at a physical event. The low-tech solution here would be to hand out a paper survey at a meetup event and collect the results as people leave. BOOM, 30-120 responses based on attendance, and it’s all useful data that’s not from some random scrub off the internet. There’s a little bit of manual data entry at the end, but the results justify the return on investment.

You see this method play out at conferences for lead generation. A company will have a stall displaying their products and services. They might tempt people in with a competition or survey. “Give us your email for your chance to win” type of deal. This is leveraging face to face communication.

Do you want to settle a deal or convince your boss to give you a raise? Have a face to face meeting in a cosy cafe. It helps build up that environment of trust.

Video or Phone call

If you can’t meet someone in person, arranging a phone call or video chat should be your next port of call. As a millennial, I’ve had a mild social phobia of talking on the phone that I’ve had to overcome. I highly recommend getting comfortable with getting on the phone. It will really help with collaborating on any task. As our workforce gets more global, this will become more of the way we get work done.

Direct Message

I created this survey for gathering feedback from Sydney Testers members last year. However, I’ve only received 41 responses so far. I’ve done some pretty thorough marketing for this survey and the results just don’t justify the effort. I did the following:

  • Sent an email through the MeetUp app asking for feedback
  • Created a discussion on meetup asking for feedback
  • Asked every tester in Sydney who’s a level 1 connection on my LinkedIn to provide feedback via personal direct messages
  • Asked the committee to share the survey
  • Ask every new Tester who I connect with on LinkedIn to provide feedback

The feedback has been really useful, but it’s been a lot of work. Work that I’m not getting paid to do either, so I don’t see it as a good use of my time.

Social Media

Social Media is the ultimate spray and pray method. You put stuff on the internet hoping for people to stumble upon it and react to it. Posting just once is hardly effective; you need to be consistent and post constantly. This is also a lot of work.

For example we’ve had this poll on our Sydney Testers meetup page since 2014, yet we only have 20 responses.

Most of the results so far are around job opportunities, networking, learning tech skills and remaining relevant in my career. If you run a tech meetup, your members probably want very similar things.

Conclusion

I prefer face to face communication above all others. I think meetup sucks as a platform for trying to engage people outside of the “turn up to this event” type of engagement. Do you disagree with what I’ve put here? How so?