Categories
Agile Design Finances Mobile Testing Software Testing Technology

Metrics and Quality

The superannuation and investment mobile app I’ve been working on over the last year has finally been released. It’s been on the app store for just over a month now*, and this post is about how we are using metrics to help keep tabs on the quality of our app.

*You can download the app via Google Play or the Apple App Store, and you can also create an account here.

Average app store rating

The average app store rating is one useful metric to keep track of. We are aiming to keep it above 4 stars, and we are also monitoring the feedback raised for future feature enhancement ideas. I did an analysis of the average app store ratings of other superannuation apps here to get a baseline for the industry average. If we are better than the industry average, that’s one signal we have a good app.
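As a back-of-the-envelope sketch of how that average comes about (the review counts below are entirely hypothetical, not our real numbers), the store rating is just a weighted mean of the star counts:

```shell
# Hypothetical breakdown: number of 1-star through 5-star reviews.
counts=(3 2 5 20 70)
total=0
weighted=0
star=1
for c in "${counts[@]}"; do
  total=$((total + c))          # total number of reviews
  weighted=$((weighted + star * c))  # sum of (stars * count)
  star=$((star + 1))
done
# Work in hundredths to avoid floating point in shell arithmetic.
echo "average rating (x100): $((100 * weighted / total))"  # prints: average rating (x100): 452, i.e. 4.52 stars
```

With that hypothetical breakdown the app sits comfortably above the 4-star target; a flood of new 1-star reviews drags the mean down quickly, which is why watching the trend matters as much as the number.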

Analytics in mobile apps

We are using Adobe Analytics to track page views and interactions for our web and mobile apps. On previous mobile app teams I’ve used mParticle and Mixpanel. The framework here doesn’t matter; I’ve found Adobe Analysis Workspace to be a great tool for insights, once you know how to use it. Adobe also has plenty of online tutorials for building out your own dashboards.

App versions over time

Here’s our app usage over time broken down by app version:

We have version 1.1 on the app store and released 1.0 nearly 2 months ago. We did an internal beta release with version 0.5.0. Anyone on an older version who tries to log in will see a forced-update view.

Crash Rates

Crashes are a fact of life for any mobile app team; so many different variables go into an app crash. However, keeping track of crashes and aiming for low rates is still worth measuring.

With version 1.1 we improved our crash rate on Android from 2.77% to 0.11%. You can also use monkey, a UI exerciser, from the command line against your Android emulator to try to find more crashes. The following command sends 1,000 random UI events to the emulator:

adb shell monkey -p {mobile_app_package_name} -v 1000

Crashes in App Center

We can dive a bit deeper into crashes in App Center (a Microsoft platform that integrates with our TeamCity continuous integration pipeline for managing all of our test builds).

When exploring stack traces, look for lines that reference your app instead of getting lost in all of the framework code: that is, lines that start with your app’s package name.

App Center also gives reports broken down by device and operating system:

With analytics set up, you can even dig into an individual report and get the page views that happened before that crash occurred.

What’s a good crash rate?

That depends on the context of your app. Ideally zero, but perfect software is a myth we keep chasing. As long as the rate is trending downwards, you are making progress towards improving it. Here’s a good follow-up blog post if you are interested in reading more.

Error Tracking

I can also keep an eye on how many error messages are seen. The spike in the Android app’s error messages was me throwing the chaos monkey at our production build for a bit. However, when there is a spike in both Android and iOS, I know I can ask, “was there something wrong with our backend that day?”

Test Vs Prod – page views

If every page has one event being tracked, we can compare our upcoming release candidate against production. Say 75 distinct page-view events were triggered on the test build, compared to the 100 we can see in production: we can then say we’ve covered 75% of the app and haven’t seen any issues so far.
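That comparison boils down to a ratio of distinct page-view events. Here’s a minimal sketch, assuming one exported event name per line (the file names and event names are made up; in practice the counts come from your analytics tool):

```shell
# Hypothetical exports: one page-view event name per line, per environment.
printf 'home\nlogin\nbalance\n' > test_events.txt            # seen on the release candidate
printf 'home\nlogin\nbalance\nsettings\n' > prod_events.txt  # seen in production

# Count distinct page-view events in each environment.
test_views=$(sort -u test_events.txt | wc -l)
prod_views=$(sort -u prod_events.txt | wc -l)

echo "coverage: $((100 * test_views / prod_views))%"  # prints: coverage: 75%
```

The `sort -u` step matters: you want distinct pages visited, not raw hit counts, otherwise hammering one screen repeatedly would inflate the coverage number.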

This is great for measuring the effectiveness of bug bashes and exploratory testing sessions, and it gives you an answer to “how much testing did you/the team do?”

Hang on, why 75%?

There’s no need to aim for 100% coverage. Our unit tests do cover every screen, but because they run on the internal CI network those events are never sent to Adobe. We have over 500 unit/UI tests on both Android and iOS (not that the number of tests is a good metric; it’s an awful one, by the way).

But if you’ve tested the main flows through your app and that’s gotten you 50% or 75% coverage, you are now approaching diminishing returns. What are the chances of finding a new bug? Or a new bug that someone cares about?

You could spend that extra hour or two getting to 90-95%, but you could also be doing more useful things with your time. You should read my risk-based framework if you are interested in finding out more.

Measuring Usability

If you are working on a new feature or flow in your app, you can measure how many people actually complete the task. For example, with first-time log in: how many people actually log in successfully? How many people lock their accounts? If you are trying to improve this process, you can track whether the rates improve or decline.
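As a minimal sketch of those rates (the funnel numbers below are invented, not real data), both reduce to simple ratios over the number of people who attempted the task:

```shell
# Hypothetical first-time-login funnel counts pulled from analytics events.
attempts=400    # users who started the first-time log in flow
successes=320   # users who logged in successfully
lockouts=12     # users who locked their account

echo "completion rate: $((100 * successes / attempts))%"  # prints: completion rate: 80%
echo "lockout rate: $((100 * lockouts / attempts))%"      # prints: lockout rate: 3%
```

Recomputed per release, these two numbers tell you whether a change to the flow actually helped: completion should trend up, lockouts down.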

You could also measure satisfaction after a task is completed and ask for feedback: a quick out-of-5 score along the lines of, “did this help you? Was it easy to achieve?” You can put a feedback section somewhere in your app.

The tip of the iceberg

The metrics and insights I’ve shared with you are just a small subset of everything we are tracking, and only a small part of our overall test strategy. Adobe has been useful for digging down into breakdowns by mobile device and operating system too. There are many ways you can cut the data to provide useful information.

What metrics have you found useful for your team and getting a gauge on quality? What metrics didn’t work as well as you had hoped?

This is not financial advice and the views expressed in this blog are my own. They are not reflective of my employer’s views.

Categories
Agile Conferences Software Testing Technology

Tails of Fail

Today I gave a talk at TiCCA (Testing in Context Conference). The talk topic was Tails of Fail: how I failed at a quality coach role. It’s a story of how I tried out this quality coaching thing but didn’t pass probation. You can access the slides here. I used Slido to manage the questions at the end of the session.

At the end of the day, it can be hard to demonstrate the value that quality coaching adds.

Will you answer all these questions offline?

Yes, this blog post is an attempt to answer all of the unanswered questions that were raised. Thanks Richard. First of all, a bit of context that was missing from my intro: I’m currently a Test Analyst at a superannuation company, so I don’t technically have “coach” in my title, but I’m also growing my side business where I provide training and workshops for teams in testing skills. This might have caused some confusion with the questions.

What does a quality coach’s typical day look like?

When I was at Campaign Monitor, my day would start with a stand-up and seeing what items needed focus for the day. The team might have a work item that needed a bit of testing, and I’d be available to pair test with that developer if needed. Some days we would run workshops (training for quality champions; developers who wanted to improve their testing skills) or bug bashes (these were generally once a fortnight).

What are the differences between a quality coach and an agile coach and a test coach?

An agile coach is a facilitator, often Scrum-certified (but not always). They are usually more focused on helping the team collaborate more effectively than on improving the team’s quality/testing practices. I don’t see much difference between a test coach and a quality coach; use the words that make sense in your context.

Are there any drawbacks to using a quality coach practice?

Yes. When you are encouraging people who prefer building things to work out their tester’s mindset, you aren’t going to get attention as focused as from someone who has spent their career practising the testing craft.

Also, you might have some really technical testers who aren’t interested in coaching/leadership skills. You shouldn’t expect everyone to want to become a coach and that’s fine too.

What are the benefits to the organisation of the assistance/coach/advocate model?

If your company believes that quality is a team responsibility, a coach can help lift the testing capabilities of a team. If you need some focus on quality (maybe you have lots of customers complaining about bugs and it’s costing you big $) but you don’t know how to get your engineering teams to prioritise bug finding as well as building new features, a coach could help here. There isn’t a great deal of training out there on how to be a good tester; it’s not as easy as sending your devs off to a 3-day course and, bam, they are master bug hunters.

If everyone is responsible for quality, is anyone really responsible for quality?

You could always say the CEO or CTO are fundamentally responsible for quality. Maybe have a Chief Quality Officer (CQO)? Maybe they’d just become a scapegoat for all of the problems you face in production? The testing teams themselves aren’t responsible for quality if they can’t easily build quality in either.

What is a good team to quality coach ratio?

It depends on the team/company. When I was at Campaign Monitor we had 2 testers to roughly 50 engineers, hence the quality champion model to help get more quality reporting from teams. We physically couldn’t sit with all 6 teams at the same time to understand their pain points. I’d prefer 1 coach to 1-2 cross-functional teams; being embedded and focused on one team of roughly 8 people would work for me.

What are the challenges you faced while quality coaching?

Clearly articulating how I add business value that aligns with my own intrinsic motivations and interests. I don’t think I’ve struggled with convincing developers they need to do more of their own testing.

Categories
Agile Critical Thinking Mobile Testing Software Testing Technology

Visual Risk & UI Automation framework

Have you wanted to start with automation testing and not known where to begin? Or maybe you have hundreds or thousands of test cases in your current automation pipeline and you want to reduce the build times. Here I will walk you through one way you could consider slicing up this problem, using examples from Tyro’s banking app (I used to work on their mobile iOS team).

Break into flows

Analyse your app/site/tool and brainstorm the main flows that people will take through it. I picked 6 flows using Tyro as an example app. Next I numbered them.

1. Registration

Registration is a pretty common feature; you might also set up two-factor authentication, a PIN and a password for the account (especially if it’s a bank account)

2. Transfer Funds

If you have a bank account, it’s highly likely you want to access the money in it at some point

3. View Transaction

You might want to check if that bill was paid correctly or if the last transfer was processed

4. Contact Us

Something not quite right? Send us a request and we will give you a phone call at a convenient time

5. Change Pin

When was the last time you changed the pin for your mobile banking app?

6. Log in

I’d say this is a pretty common feature

Mapping those flows to a risk board

Draw a graph. Put frequency of use on the horizontal x axis; things that are used more will be on the right-hand side. On the vertical y axis, put impact if broken. This is from the user’s point of view: how much would they care if that feature was broken? From a business point of view you may have a different understanding of risk, and that’s fine too. We will go into how to reflect that later.

Add your flows

We have our 6 flows to the right-hand side of our graph, and we’ve also broken the graph into 3 areas.

Move the flows to your graph

It helps to pair on this exercise to build up a shared understanding. Do your designers and engineers have the same understanding of risk as you do? It’s ok if your answer is different to mine; we all have a different context and understanding.

Reflect other elements of risk

You might want to reflect other elements of risk, such as security, financial, regulatory and anything else you can think of. At the end of the day this is only a 2-dimensional representation of risk, and risk is a little more complex than the dimensions we put here.

Neat, what’s next?

If you are thinking, “well, that’s cool and all, but what does that have to do with automation testing?”, then please continue reading. You could use this board to decide which tests you should focus on building/refactoring next (hint: the stuff with 3 stars is pretty important). You could also use it to prioritise your performance testing efforts. I took this board to our planning sessions to talk about new features, and it helped with deciding how much automation/testing effort we may need. At the end of the day, your software will be more complex than this example.

Here is the actual board I used at Tyro with a bit more detail:

I then broke down each flow into a test case, and grouped similar test cases into a barebones automation test suite. You can also use this approach to generate exploratory testing ideas for each screen in your flow.

You can watch this talk in full here:

I also run this as a lunchtime 30-45 minute workshop exercise. Book me in for a lunchtime brownbag if you are based in Sydney (I can do remote too).

Categories
Agile Software Testing Technology

Becoming a Quality coach – course overview

I had the pleasure of doing Anne-Marie’s Becoming a Quality Coach course today, which was organised by Test-Ed. If you are looking to transition to a quality coach role, it’s worth keeping this course on your radar. Anne-Marie is a renowned expert in software testing and quality engineering, and I had the pleasure of working for her at Tyro.

What is Quality Coaching?

First page of sketchnotes for the course – what is coaching?

How is quality coaching different from being a test lead? It depends on what your team wants out of a quality coach role, but here is an example job description from Deputy’s principal quality coach role:

What You Will Do:

  • You will provide the guidance, inspiration and motivation for our amazing engineers to be better testers.
  • Help create a high-quality testing culture
  • Push the merits and benefits of TDD
  • Visualize testing and quality
  • Communicate with product and technical stakeholders
  • Be a customer advocate

How You Will Do It:

  • You have a combination of in-depth knowledge of Quality Assurance and Software Engineering principles and practices
  • You command the skill to communicate clearly and effectively.
  • You work directly with Engineers, Quality Coaches, Product Managers, and Discipline Heads to ensure the high quality of our software and practices.

What You Will Need:

  • 7+ years software engineering / testing experience
  • Strong understanding of QA processes and concepts.
  • Proven coaching experience in a development team with examples of how you’ve made a significant impact to their testing capabilities
  • Excellent written and verbal communication skills

Some questions you might ask?

Some people think that coaching is all about knowing when to ask the right questions. The Coaching Habit by Michael Bungay Stanier would have you believe that all you need to coach someone is 7 questions:

  • What’s on your mind?
  • And what else? (repeated a few times)
  • What’s the real challenge here for you?
  • What do you want?
  • How can I help? or What do you want from me?
  • If you say yes to this, what must you say no to?
  • What was most useful or most valuable here for you?

I think this only applies to one-on-one coaching; it doesn’t scale well to coaching a small team of developers, and it definitely doesn’t scale to giving a lecture to hundreds of people or online. I think a good teacher is a good coach, and also knows when someone needs a bit of mentoring instead.

Models for Coaching

We discussed 2 different models you can use for coaching. Goal and ADKAR. We also discussed what does quality mean to us and expanded on a few definitions.

What does ADKAR stand for?

  1. Awareness: Leading people to see the need for change.
  2. Desire: Instilling the desire for change.
  3. Knowledge: Providing employees with the information or skills they need to achieve change.
  4. Ability: Applying knowledge and skills to bring about change.
  5. Reinforcement: Making sure that people continue to use the new methods.

We also briefly discussed Kent Beck’s talk on 3X (Explore, Expand & Extract).

Sketchnotes from Kent Beck’s 3X talk

Coaching Software Testing

Test leads will need a bunch of skillsets to do well in coaching. We also used role play to practice our newly developed coaching skills.

Running Software Testing Workshops

When running a coaching session, there are a bunch of behaviours you may come across in your testers or developers that are mental barriers to trying something new. Your developers might say:

  • Testing isn’t my responsibility
  • I don’t have time for testing
  • Testing is boring
  • What if I miss a bug?
  • All testing should be automated

Your testers might respond with mindsets like:

  • If I help developers do their testing, how will I prove my value?
  • I’m not technical, I can’t help with code reviews
  • I might lose my job if I raise bugs earlier
  • 100% coverage is achievable

Summary

It was a good day of engaged learning, though I’m not currently working in a context where I can put a lot of these coaching methods into practice. How would you come up with antidotes to these mindset problems in your team?

Categories
Agile Software Testing Technology

What’s with the coach titles?

Agile, Quality, Life, Career … everyone wants to be some sort of coach these days.

coach sign

I don’t like the term coach. I feel like it downplays the hard work that goes into teaching. I learn and I teach what I learn. I love teaching so much I often do it for free. During uni I volunteered with Robogals teaching kids programming through Lego Robotics workshops. I had my own community radio show where I interviewed Engineers to promote the field. I give presentations at conferences to teach other people my learnings. I volunteer with Sydney Testers to teach my community how to be better at testing.

Is Coaching just asking questions?

As I teach, I’ll ask coaching-style questions to help you explore ideas further. I often do this when I play the dice game with other testers.

Roleplay dice

If you have an idea while playing the dice game with me, I’ll probe you with questions to get you to explain your thinking. The dice game is a fun way to explore black box testing and critical thinking, and I’ll be running these sessions on a fortnightly basis. Keep an eye on the Sydney Testers meetup page for more events like this.

The Coaching habit book

The Coaching Habit would have us believe that coaching is all about asking questions: just 7 questions in total. Whereas I believe the main function of a sports coach is to provide fast feedback in the moment to help an athlete improve and perform at their best.

Coaching is having the knowledge to know when to ask probing questions, what questions to ask, and how to follow up on particular questions to gain more insights. I use these techniques to learn, not to coach.

Mentoring vs Coaching vs Teaching

There might be different contexts where one might be applied over another, but I use these terms almost interchangeably. On Twitter recently I said I mentored a colleague in giving technical presentations:

And I thoroughly enjoyed the experience. I love helping other people improve; I love this stuff so much, I do it for free. The mentoring wasn’t ongoing though: it was just a 1.5-hour session with the clear goal of improving one thing. So maybe I shouldn’t call it mentoring? Well, I view it as a mentoring session, but I also taught content and asked coaching questions. All three of mentoring, coaching and teaching were involved in this situation.

Here’s the review of that session:

Review from Tania Dastres for a technical presentations workshops

Teaching is Learning

By putting together a blog post, workshop or lecture, I consolidate my own learning into key takeaways. It helps me practice my communication, and I learn to exchange ideas more effectively. The best teachers are constant learners.

Maybe I’m jaded because a job with the title “Quality Coach” didn’t quite work out for me, but I don’t think there is much point in distinguishing between teaching, coaching and mentoring. I’ll use all of them to help you improve.

Categories
Agile Conferences Technology

Agile Australia 2018

On Tuesday the 19th of June I spoke at Agile Australia (access the recording here) on how to get more people involved with testing. You can access my slides: The Bug Hunt Is On. I proposed 5 activities to help get more people involved with quality:

  1. Bug Bashes
  2. Bug Bounties
  3. Dog Fooding
  4. Knowledge sharing practices
  5. Soap Opera testing (you can also watch the first three minutes of this video)

Sketch notes

Here are all of my sketch notes on the presentations I attended:

My personal highlights were Nigel’s talk on “Agile is the last thing you need”:

And Martin Fowler’s talk on “the state of Agile in 2018”:

Steven gave an interesting talk on visual strategy maps:

Categories
Agile Software Testing Technology

Running a Bug Bash

For any solo tester out there I recommend leading a regular bug bash/mob testing activity. It’s an activity you can run at the end of a sprint/feature dev cycle. You invite the team, get some snacks/beverages together and get everyone testing for around an hour.

Setup

You might want to make sure you have good device/browser coverage by ensuring everyone is set up with data/devices beforehand. You might also want to prepare some help docs to get people started. I like to have a mindmap of coverage ideas prepared beforehand so people have a visual indicator of what they are testing. I might also create a bunch of test accounts in advance and distribute them to the team.

I use this whiteboard for running bug bashes at Insight Timer: regression on the left, what’s changed recently in the middle and “other” things to consider on the right

Light Bug reporting

You should encourage lightweight bug reporting:

  • maybe people write bugs on sticky notes
  • add bugs to a shared spreadsheet
  • or raise Jira bugs directly from Slack using a /jirio create bug shortcut

The focus should be on finding bugs, not getting caught up in how to report them. You can always clarify issues further after the bug bash if you need more information.

What to test in a bug bash

I have a 3-step heuristic for deciding what to test (think: guide or rule of thumb):

  1. What’s changed recently? Change introduces new risk. What features have been built? What code has been refactored? What libraries have been updated? Focus the team on testing these areas first
  2. Regression testing. Have a checklist of at most a dozen scenarios of core functionality (maybe around registration, payments and the main things people do with your product). Ask people to consider these cases while doing 1), and maybe ask people to put their name next to a case when they test it so you can get an idea of coverage
  3. What other testing should/could we do? Sometimes it might make sense to do a pen-test bug bash session, a performance testing bash or a cross-browser test bash, but it doesn’t have to happen every time

Test in Production

If you can; test in production. There’s no environment quite like it.

Put on your ruby red slippers and repeat after me, "there's no place like prod"
Toto, I’ve a feeling we’re not in Kansas anymore – Wizard of Oz

But if you can’t test in production, make sure your test environment is stable, your data is set up and you’ve given your team the “Do Not Touch except for bug bash testing” request.

Post Bug Bash

Make sure to thank people for their time, count up the bugs and give kudos to your best bug hunter. Testing/Quality is a team responsibility after all. Send out a thank you email with results and award trophies. I’ve handed out this trophy of a rhino bug in resin on a trophy base to our number one bug hunter before:

rhino bug in resin on a trophy base
Bug Bash Trophy
Categories
Agile Software Testing

Agile and Stress

I was having a chat with an old colleague on LinkedIn today (Brian Osman) and we were talking about Agile. The question was, “how does Agile at Tyro differ from <Client>*?” My conclusion was that by focusing on certain “Agile” rituals an artificially high stress environment can be easily created.

Here is this conversation:

Bosman:

Samster! How’s it going at EPAM? All good I hope 🙂 . Hey just a question – do you guys *do* agile and if so how does it compare to Tyro?

 

Samster:

There’s lots of people in EPAM who do agile but in the <Client> office in my little corner there is no mention of agile, no Kanban boards, no daily standups (in my team). Every team is encouraged to develop their own process. E.g. my manager at <Client> often doesn’t come into the office until 10:30-12pm so a 9am standup just wouldn’t work. He has kids and often works from home. I know the EPAM cloud support guys we have here in Sydney have standups and mostly deal with support tickets so their work is fairly regular.

So with my manager we have a sync doc that I fill out every day on what I’m working on with a breakdown of roughly how much time I’m spending on tasks. Roughly half my time should be spent on bug triaging and trying to mainstream that process across some of the Sydney teams (about 70 developers/designers/PM’s and researchers) and the other half of my time spent on helping those teams with feature testing. This sync doc is in place of a daily standup and we have a sync up once a week where we go through it and look for improvements. There’s no sprints, no regular retrospectives. The <Client> guys will tend to work based on quarters (3 month cycles). Higher up management will set certain goals for the quarter and every team will set their own goals that hopefully line up with those.

So at least in my little bubble at <Client> there is very little “formal” Agile, but product excellence is a heavy part of the culture.

The funny thing with Tyro’s agile is it didn’t really support flexibility. A bit of an oxymoron in a way.

The context of <Client> and Tyro is vastly different too. Often the guys in the Sydney <Client> office have to collaborate with people all over the world, so they will often have conference calls with guys from Mountain View in the morning, often taking these calls at home and maybe conference calls with India in the afternoon. I have a weekly sync with one guy in India who helps lead the team that is responsible for communicating with public transport providers. The ops team for <Client> project that I’m working on is based in Seattle.

On a side note, I have heard stories of <Client> bringing in agile experts for coaching/consulting to learn about it. They usually bring in the guys that write the books on the stuff because <Client> has that kind of pull.

 

Bosman:

That’s awesome! It’s something like I just read from a <Client staff member> and something I like…lower case agile. Figure out the problem yourself and make your own rules (but don’t break the law….and be nice!). Thanks Sammy! I’m writing a report for a team I’m coaching at <Somewhere> and they want to compare this with places like <Client>. I think they’ll be surprised 😊 .

 

Samster:

Yeah. Otherwise things are going pretty well. No one has kicked me out yet. There have been a few days where I felt I wasn’t good enough to be at <Client> but it wasn’t really as stressful as starting at Tyro was. I think a lot of companies aspire to have a culture like <Client> but that’s actually pretty expensive, but nicely liberating. I think companies also often miss the point of <Client>’s culture, I guess they see the free food and games rooms and think that’s all you need to emulate <Client>’s culture.

I think the whole point of agile, is the being able to adapt to a rapidly changing environment. In the context of <Client>, sure there are some areas of the business that are constantly evolving but focusing more on product excellence tends to put a slower spin on things. There’s a focus on facilitating fast feedback but it’s to help make work easier, not to adapt to the market. There’s no mantra of ship it quickly, but there is a huge amount of effort put into supporting engineers get new code into production. Even <Client> uses a giant spreadsheet of manual test cases for Android/iOS <Client> maps apps on top of all of their other checks and balances.

There is a lot of process involved with getting a new feature into <Client> maps.

Maybe the over-emphasis on “Agile” and adapting quickly creates an artificially high stress environment. There’s always this push to beat competitors to market, to get this work out the door this sprint, to be constantly “Go Go Go”. People don’t function well under stressful situations, and that constant stress can’t be healthy.

 

Bosman:

I agree with your point …. I don’t know if they’ll meet their growth targets and I don’t know if staff will feel happy about being ALWAYS under the pump so to speak…

 

*<Client> = I work client side as a contractor. I can’t officially say who this client is (because contract) but I get to help test an android app you might use for mapping and navigation. The Client is pretty well renowned in the tech space.