Metrics and Quality

The superannuation and investment mobile app I’ve been working on over the last year has finally been released. It’s been in the app stores for just over a month now*, and this post is about how we are using metrics to keep tabs on the quality of our app.

*You can download the app via Google Play or the Apple App Store, and you can also create an account here.

Average app store rating

The average app store rating is one useful metric to keep track of. We are aiming to keep it above 4 stars, and we are also monitoring the feedback raised for future feature enhancement ideas. I did an analysis of the average app store ratings of other superannuation apps here to get a baseline for the industry average. If we are sitting above that average, it’s a decent signal that we have a good app.

Analytics in mobile apps

We are using Adobe Analytics for tracking page views and interactions across our web and mobile apps. On previous mobile app teams I’ve used mParticle and Mixpanel. The framework here doesn’t matter; I’ve found Adobe’s Analysis Workspace to be a great tool for insights, once you know how to use it. Adobe also has plenty of online tutorials for building out your own dashboards.
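As a rough illustration of what that tracking looks like from the Android side, here’s a minimal Kotlin sketch assuming the Adobe Experience Platform Mobile SDK; the screen name, action name and context data keys are made up for the example, and your own setup may use a different SDK or a wrapper around it.

import com.adobe.marketing.mobile.MobileCore

// Hypothetical example: screen/action names and context data keys are invented for illustration.
fun trackDashboardViewed() {
    // Sends a "page view" style hit for the dashboard screen.
    MobileCore.trackState("dashboard", mapOf("app.section" to "home"))
}

fun trackContributionTapped() {
    // Sends an interaction hit when the user taps "make a contribution".
    MobileCore.trackAction("contribution_tapped", mapOf("app.section" to "home"))
}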

App versions over time

Here’s our app usage over time broken down by app version:

We have version 1.1 on the app stores and released 1.0 nearly two months ago. We did an internal beta release with version 0.5.0. If anyone on an older version tries to log in, they’ll see a forced update view.
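The exact mechanics of a forced update will vary, but as a rough sketch of the idea: at login the app compares its own version against a minimum supported version supplied by the backend. Everything in this snippet (the version type and where the minimum comes from) is an assumption for illustration, not our actual implementation.

// Hypothetical sketch of a forced-update check; the minimum supported
// version would come from a backend call or remote config in practice.
data class AppVersion(val major: Int, val minor: Int) : Comparable<AppVersion> {
    override fun compareTo(other: AppVersion) =
        compareValuesBy(this, other, { it.major }, { it.minor })
}

fun shouldForceUpdate(current: AppVersion, minimumSupported: AppVersion): Boolean =
    current < minimumSupported

// e.g. shouldForceUpdate(AppVersion(0, 5), AppVersion(1, 0)) == true,
// so a 0.5.0 beta user is sent to the forced update view at login.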

Crash Rates

Crashes are a fact of life for any mobile app team; there are so many different variables that go into app crashes. However, keeping track of them and aiming for low rates is still worth doing.

With version 1.1 we improved our crash rate on Android from 2.77% to 0.11%. You can also use the UI exerciser called Monkey from the command line against your Android emulator to try to find more crashes. With the following command I can send 1,000 random UI events to the emulator:

adb shell monkey -p {mobile_app_package_name} -v 1000
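If you want a run to be repeatable, Monkey also accepts a seed (-s) and a throttle in milliseconds between events (--throttle); the package name below is a placeholder, as above:

adb shell monkey -p {mobile_app_package_name} -s 12345 --throttle 200 -v 1000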

Crashes in App Center

We can dive a bit deeper into crashes in App Center (a Microsoft platform that integrates with our TeamCity continuous integration pipeline for managing all of our test builds).

When exploring stack traces, you want to find the lines that reference your app instead of getting lost in all of the framework code: look for lines that start with your app’s package name.
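For example, in a made-up Android stack trace like the one below (the package, classes and line numbers are all invented), the com.example.superapp frames are the ones worth reading first; the android.* frames underneath are framework code.

java.lang.NullPointerException: Attempt to invoke a method on a null object reference
    at com.example.superapp.login.LoginViewModel.onSubmit(LoginViewModel.kt:42)
    at com.example.superapp.login.LoginFragment.onClick(LoginFragment.kt:87)
    at android.view.View.performClick(View.java:7448)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Looper.loop(Looper.java:223)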

App Center gives reports broken down by device and operating system:

With analytics set up, you can even dig into an individual report and get the page views that happened before that crash occurred.

What’s a good crash rate?

That depends on the context of your app. Ideally zero is best, but perfect software is a myth we keep chasing. As long as the rate is trending downwards, you are making progress. Here’s a good follow-up blog post if you are interested in reading more.

Error Tracking

I can also keep an eye on how many error messages are seen. The spike in the Android app’s error messages was me throwing the chaos monkey at our production build for a bit. However, when there is a spike in both Android and iOS, I know I can ask, “was there something wrong with our backend that day?”
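To make those counts queryable, each error view can fire its own analytics event. A small hedged sketch, with a made-up action name and context key, assuming the same Adobe SDK as earlier:

// Hypothetical: fires whenever an error screen or error toast is shown,
// so error volumes can be charted and broken down by type and platform.
fun trackErrorShown(errorType: String) {
    MobileCore.trackAction("error_shown", mapOf("error.type" to errorType))
}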

Test Vs Prod – page views

If every page has one event being tracked, we can compare our upcoming release candidate against production. Say we see 75 unique page views triggered on the test build, compared to the 100 unique pages we see in production: we can then say we’ve covered 75% of the app and haven’t seen any issues so far.
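A rough sketch of that comparison, treating each build’s page views as a set of screen names (the screen names here are invented; in practice both sets come straight out of the analytics reports):

fun main() {
    // Hypothetical screen names pulled from the prod and test build reports.
    val prodScreens = setOf("login", "dashboard", "balance", "contributions", "settings")
    val testScreens = setOf("login", "dashboard", "balance", "contributions")

    val coverage = 100.0 * testScreens.intersect(prodScreens).size / prodScreens.size
    val notYetTested = prodScreens - testScreens

    println("Coverage: $coverage%")          // Coverage: 80.0%
    println("Not yet tested: $notYetTested") // Not yet tested: [settings]
}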

This is great for measuring the effectiveness of bug bashes and exploratory testing sessions, and it gives you an answer to “how much testing did you/the team do?”.

Hang on, why 75%?

There’s no need to aim for 100% coverage. Our unit tests do cover every screen, but because they run on the internal CI network those events are never sent to Adobe. We have over 500 unit/UI tests on both Android and iOS (not that the number of tests is a good metric, it’s an awful one by the way).

But if you’ve tested the main flows through your app and that’s gotten you 50% or 75% coverage, you are now approaching diminishing returns. What are the chances of finding a new bug? Or a new bug that someone cares about?

You could spend that extra hour or two getting to 90–95%, but you could also be doing more useful things with your time. You should read my risk-based framework if you are interested in finding out more.

Measuring Usability

If you are working on a new feature or flow in your app, you can measure how many people actually complete the task. For example, with first-time log in: how many people actually log in successfully? How many people lock their accounts? If you are trying to improve this process, you can track whether those rates improve or decline.
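One way to get those rates is to fire an event at each step of the flow and then divide the counts. A hedged sketch, again with made-up event names and the same SDK assumption as above:

// Hypothetical funnel events for first-time login.
fun trackFirstLoginStarted() = MobileCore.trackAction("first_login_started", emptyMap())
fun trackFirstLoginSucceeded() = MobileCore.trackAction("first_login_succeeded", emptyMap())
fun trackAccountLocked() = MobileCore.trackAction("account_locked", emptyMap())

// In the analytics workspace the rates are then simple ratios, e.g.:
//   completion rate = first_login_succeeded / first_login_started
//   lockout rate    = account_locked / first_login_started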

You could also measure satisfaction after a task is completed and ask for feedback: a quick out-of-5 score along the lines of “did this help you? was it easy to achieve?”. You can also put a feedback section somewhere in your app.

The tip of the iceberg

The metrics and insights I’ve shared with you are just a small subset of everything we are tracking, and they’re only a small part of our overall test strategy. Adobe has been useful for digging into mobile device and operating system breakdowns too. There are many ways you can cut the data to provide useful information.

What metrics have you found useful for your team to get a gauge on quality? What metrics didn’t work as well as you had hoped?

This is not financial advice, and the views expressed in this blog are my own. They are not reflective of my employer’s views.
