Here is a table breaking down the average funds per person by country and population (roughly sorted by total funds under management). The columns are Assets US$ (in billions), Population (in millions), and Average funds per person.
Wow, look at Norway
Norway has the most saved per person, but its fund is publicly owned: it was formed when Norway made a lot of money from oil, and it now invests in ethically run companies. Individuals don't contribute to it, and the government can only access 3% of the fund each year. You can watch this YouTube video to find out why Norway is so rich.
A few other countries of note
The US has the most funds under management, but due to wealth distribution the average per person is lower. Pensions in the UK seem to be a confusing affair: it's up to the employer to set up a pension plan and to make contributions on your behalf.
Singapore once had employer contributions set as high as 25% before its recession in the 1980s.
The main reason we have so much saved is compulsory employer contributions: currently an extra 9.5% on top of your salary goes towards retirement savings. This was established in the 90s by the Keating government. Here is a YouTube video of Keating ranting about super.
The average working Australian
I've told my brother, who's 22 and works in a supermarket as a fruit and veg manager, that he'll likely have 400k in super when he retires, even if he does nothing with it. You can use MoneySmart's super calculator to play around with some numbers.
The average supermarket employee makes around 50k a year (plus or minus around 5k). 400k in savings feels like an insane amount of potential wealth for someone in my family (a previously low socio-economic but still very bogan family; we are now upper-middle bogans 😉).
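The back-of-the-envelope maths behind a figure like that can be sketched with a simple compounding loop. The 50k salary and 9.5% contribution rate come from above; the 3% real return (after inflation and fees) and 45 working years are my own assumptions, so treat the output as illustrative and use the super calculator for a proper estimate:

```shell
# Compound a 9.5% employer contribution on a $50k salary for 45 years.
# The 3% annual return is an assumed real (after-inflation, after-fee)
# rate — my assumption, not a figure from the calculator.
awk 'BEGIN {
  salary = 50000; contrib_rate = 0.095; ret = 0.03; balance = 0
  for (year = 1; year <= 45; year++)
    balance = balance * (1 + ret) + salary * contrib_rate
  printf "Balance after 45 years: %.0f\n", balance
}'
```

With these assumptions the loop lands in the low 400s of thousands, which is roughly the same ballpark as the 400k figure; nudge the return or fees and the number moves a lot.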
The superannuation and investment mobile app I've been working on over the last year has finally been released. It's been on the app store for just over a month now, and this blog is about how we are using metrics to help keep tabs on the quality of our app.
The average app store rating is one useful metric to keep track of. We are aiming to keep it above 4 stars, and we are also monitoring the feedback raised for future feature enhancement ideas. I did an analysis of the average app store reviews of other superannuation apps here to get a baseline for the industry average. If we are better than the industry average, we have a good app.
Analytics in mobile apps
We are using Adobe Analytics to track page views and interactions for our web and mobile apps. On previous mobile app teams I've used mParticle and Mixpanel. The framework here doesn't matter; I've found Adobe Workspace to be a great tool for insights, once you know how to use it. Adobe also has tons of online tutorials for building out your own dashboards.
App versions over time
Here’s our app usage over time broken down by app version:
We have version 1.1 on the app store and released 1.0 nearly 2 months ago. We did an internal beta release with version 0.5.0. If anyone on the old versions tries to log in they’ll see a forced update view.
Crashes are a fact of life for any mobile app team; so many different variables go into app crashes. However, keeping track of them and aiming for low rates is a good thing to measure.
With version 1.1 we improved our crash rate on Android from 2.77% to 0.11%. You can also use a UI exerciser called monkey from the command line against your Android emulator to try and find more crashes. With the following command I can send 1000 random UI events to the emulator:
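A typical adb monkey invocation looks like the sketch below; the package name is a placeholder, not our app's real id:

```shell
# monkey is Android's built-in UI/Application Exerciser: it fires
# pseudo-random UI events (taps, swipes, system keys) at the given package.
# com.example.superapp is a placeholder package name.
# -s fixes the random seed so a crashing run can be replayed,
# -v makes the output verbose, and 1000 is the event count.
adb shell monkey -p com.example.superapp -s 42 -v 1000
```

It needs a connected device or running emulator, and adding `--throttle <ms>` slows the events down if the app can't keep up.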
I can also keep an eye on how many error messages are seen. The spike in Android app error messages was me throwing the chaos monkey at our production build for a bit. However, when there is a spike on both Android and iOS, I know I can ask, "was there something wrong with our backend that day?"
Test Vs Prod – page views
If every page has one event being tracked, we can compare our upcoming release candidate against production: say 75 page views were triggered on the test build, compared to the 100 page views we can see in production. We can then say we've tested 75% of the app and haven't seen any issues so far.
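As a rough sketch of that comparison: if you export the distinct page-view event names from both environments (the page names below are placeholder data, not our real pages), you can diff them and compute the coverage percentage. This uses bash process substitution:

```shell
# Placeholder page-view event names exported from production and from
# the test build, one per line.
printf 'home\nlogin\nbalance\nsettings\n' > prod_pages.txt
printf 'home\nlogin\nbalance\n' > test_pages.txt

# Pages seen in production but not yet exercised on the test build:
comm -23 <(sort -u prod_pages.txt) <(sort -u test_pages.txt)

# Coverage = share of production pages also seen on the test build.
covered=$(comm -12 <(sort -u prod_pages.txt) <(sort -u test_pages.txt) | wc -l)
total=$(sort -u prod_pages.txt | wc -l)
echo "Coverage: $(( 100 * covered / total ))%"
```

The nice side effect is that the `comm -23` list tells you exactly which screens the release candidate hasn't touched yet.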
There's no need to aim for 100% coverage; our unit tests do cover every screen, but because they run on the internal CI network those events are never sent to Adobe. We have over 500 unit/UI tests on both Android and iOS (not that the number of tests is a good metric; it's an awful one, by the way).
But if you've tested the main flows through your app and that's gotten you 50% or 75% coverage, you are now approaching diminishing returns. What are the chances of finding a new bug? Or a new bug that someone cares about?
You could spend that extra hour or two getting to 90-95%, but you could also be doing more useful things with your time. You should read my risk-based framework if you are interested in finding out more.
If you are working on a new feature or flow in your app, you can measure how many people actually complete the task. E.g. for first-time log in: how many people actually log in successfully? How many people lock their accounts? If you are trying to improve the process, you can track whether the rates improve or decline.
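As a sketch with made-up numbers (none of these counts come from our app), the completion and lockout rates for a first-time login funnel are just simple ratios over the attempt count:

```shell
# Hypothetical daily counts for a first-time login funnel.
attempts=1200
successes=1044
lockouts=36
echo "Completion rate: $(( 100 * successes / attempts ))%"
echo "Lockout rate: $(( 100 * lockouts / attempts ))%"
```

Tracked per release, those two percentages are enough to tell you whether a change to the login flow helped or hurt.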
You could also measure satisfaction after a task is completed and ask for feedback: a quick out-of-5 score along the lines of "did this help you? Was it easy to achieve?". You can also put a feedback section somewhere in your app.
The tip of the iceberg
These metrics and insights I've shared with you are just a small subset of everything we are tracking, and a small part of our overall test strategy. Adobe has also been useful for digging into breakdowns by mobile device and operating system. There are many ways you can cut the data to provide useful information.
What metrics have you found useful for your team and getting a gauge on quality? What metrics didn’t work as well as you had hoped?
This is not financial advice, and the views expressed in this blog are my own. They are not reflective of my employer's views.
Bugasura is an Android app and a Chrome extension. It helps with keeping track of exploratory testing sessions and comes with screenshot annotation and Jira integration.
Here are a couple of screenshots of the android app in action, being used for an exploratory session on our test app.
First I selected the testing session:
While I’m testing I see this Bugasura overlay which I can tap to take a screenshot and write up a bug report on the spot:
Here's their report-a-bug flow:
And here’s a testing report after I finished my exploratory testing where I can push straight to Jira if I want:
Here's the sample report link (caveat: the screenshots attached to the bug are now public information on the internet, so there's a privacy concern right there). But OMG, the exploratory session recorded the whole flow too, so a developer could see exactly what I did to find that bug.
Here’s that bug report in chrome paused at screen 13 out of 18:
Some caveats I've found so far: the test report is public (not private by default), so you wouldn't want to include screenshots of private or confidential information.
Bugasura only works on Android/Chrome. There isn't an iOS version, but I guess with some remote device access running through Chrome it could work? We use Gigafox's Mobile Device Cloud at work to access a central server of mobile devices, and I imagine Bugasura could work with it.
Also I think they may have misspelt Elisabeth’s name in her quote.
This blog post reflects my opinions only and does not reflect the views held by my employer.