On Sunday I’ll be interviewing Manoj Kumar, a Principal Consultant at ThoughtWorks and a Selenium Conf organiser. I was chatting with Manoj about my marketing strategy as a tester. I’ve been fascinated by marketing (more than by learning about test automation tools), and this blog post is a reflection on that strategy.
Marketing is a numbers game
In marketing, click-through rates, conversion rates and eyeballs are king. You can either increase your eyeballs/views or increase the conversion rate.
E.g. say you are promoting a free event: if you send an email to 100 people and 5 people sign up, your conversion rate is 5% for that campaign. It’s a lot easier to increase the number of emails sent than to improve the conversion/click-through rate.
You could send that same email to 100 more people, or send follow-up emails to the people who opened the email but didn’t register, to increase conversions. Or maybe the subject line didn’t get people’s attention?
Anyway, there’s tons of testing and iterating that can be applied to a marketing campaign.
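The arithmetic behind the example above is worth making concrete. Here’s a minimal sketch (the numbers are the ones from the free-event example; the function name is mine):

```python
def conversion_rate(signups: int, reached: int) -> float:
    """Conversion rate as a percentage of people reached."""
    return 100 * signups / reached

# 5 sign-ups from 100 emails -> a 5% conversion rate
print(conversion_rate(5, 100))   # 5.0

# Doubling reach at the same rate doubles sign-ups;
# improving the rate on the same list is much harder.
print(conversion_rate(10, 200))  # still 5.0
```

Tracking this per campaign is what lets you compare a new subject line or a follow-up email against the baseline.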
What is your goal?
Have a think about what you’d like to achieve with your marketing campaign before starting out. I’ve structured my LinkedIn to get more views of my blog. Nearly everything I do on LinkedIn is to increase the web traffic to my blog.
My goal is to get more views/traffic on my blog
You might want to increase ticket sales for a conference, increase sign-up rates for an event, etc. My secondary goal is to grow my number of followers on YouTube/Twitch.
How will you measure your goal?
I use analytics on WordPress to measure sources of web traffic. First let me tell you about my template messages, and then I’ll explain how I use analytics to measure them.
Here are my LinkedIn template messages
Thanks for connecting. What are some of the challenges facing you these days?
If that’s too early for you, the interview will be up on YouTube 24 hours after the live event.
I’ve had over 30 people respond directly to that last message overnight. That’s a 30% response rate among people who’ve at least clicked on the auto-generated response message, “Thanks”.
Using WordPress’s paid Jetpack services, I can track views, where they come from and what people look at:
Yesterday I had 286 Views from 141 Visitors which is my second best performing day over the last month for web traffic.
Out of those 286 Views, 11 were on my Metrics and Quality blog, so they were probably from my New_Connections template message.
173 views were from India (normally most of my traffic comes from Australia) and 16 of those views came from LinkedIn. This would indicate that my blog was shared outside of my marketing efforts.
Here’s a more regular day
As a comparison, here’s a more regular web traffic day for my blog:
Newly published blogs are the most viewed (especially if I’ve shared them). Most of my traffic comes from LinkedIn, and most of it is from Australia, because that’s my biggest area of influence.
I can also keep an eye on my engagement on LinkedIn too; this helps me to understand whether the content I’m sharing is resonating with people:
Is this useful?
Would you try/test anything from this blog post? What are your views on marketing?
One of the hardest things in software engineering is naming variables. Names need to be easy to understand, short(ish) yet descriptive. And working in international teams, everyone has a different understanding of language. It’s a nigh-impossible task if you ask me.
Say there’s a kill switch feature that a business or developer can enable to block a client from hitting a backend via an API, in case there’s peak demand or something in the system is struggling. Or maybe you are concerned there’s a widespread denial-of-service (DoS) attack hitting your system and you want to keep your customer data safe.
What should this kill switch be named? And what state corresponds to the switching off of web traffic?
The enabled state should correspond with the no-traffic state. When this kill switch is enabled, the business has gone in and switched it on, effectively switching off traffic. By default the kill switch is disabled and web traffic is normal. You might want to name the kill switch after the API it switches off, e.g. WebLoginKillSwitch if it prevents people from logging in to your website.
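As a sketch of that convention (the flag name and handler are illustrative, not from any real codebase): the flag defaults to disabled, and enabling it is what blocks traffic.

```python
# Hypothetical kill-switch check: enabled == traffic blocked.
# Disabled by default, so normal traffic flows.
flags = {"webLoginKillSwitch": False}

def handle_web_login(request: dict) -> dict:
    if flags["webLoginKillSwitch"]:
        # The business flipped the switch on -> traffic is switched off.
        return {"status": 503, "body": "Login temporarily unavailable"}
    return {"status": 200, "body": f"Welcome, {request['user']}"}

print(handle_web_login({"user": "alice"})["status"])  # 200
flags["webLoginKillSwitch"] = True                    # incident: block logins
print(handle_web_login({"user": "alice"})["status"])  # 503
```

The naming works because “enable the kill switch” and “switch off traffic” now mean the same thing, with no double negatives to reason about.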
A feature flag is a software development technique used to enable or disable functionality remotely without deploying code.
Say you are working on a new feature but you are operating in a continuous integration and deployment environment. Once your code is merged in it could be deployed to customers within minutes. But your feature isn’t ready for customers just yet. You can wrap your feature behind a feature flag and enable it for your team so you can test in production even before your customers see it.
By default this feature is disabled for most of your users. But you could also set up a % rollout for the feature too. Maybe 5% of your users see the new feature before doing a general release.
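One common way a percentage rollout is implemented (a sketch, not any particular vendor’s API) is to bucket each user deterministically by hashing their ID, so the same user always gets the same answer:

```python
import hashlib

def is_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout %."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Roughly 5% of users see the new feature before a general release.
enabled = sum(is_enabled("webLoginWithAppleID", f"user-{i}", 5)
              for i in range(10_000))
print(f"{enabled / 100:.1f}% of users enabled")  # close to 5%
```

Because the bucketing is deterministic, a user doesn’t flip in and out of the feature between sessions, and ramping from 5% to 10% only adds users rather than reshuffling everyone.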
When naming a feature flag you don’t need to include the word enabled or disabled in the variable name. If you are experimenting with a new way of logging in via Apple ID, you might call the feature flag webLoginWithAppleID and have it disabled by default.
Read The Art of Readable Code
If you are interested in learning more about readable code, I recommend reading The Art of Readable Code:
Do you have any code smells related to naming of variables? How about test code smells?
The superannuation and investment mobile app I’ve been working on over the last year has finally been released. It’s been on the app store for just over a month now* and this blog is about how we are using metrics to help keep tabs on the quality of our app.
The average app store rating is one useful metric to keep track of. We are aiming to keep it above 4 stars and we are also monitoring the feedback raised for future feature enhancement ideas. I did an analysis of the average app store reviews of other superannuation apps here to get a baseline of what the industry average is. If we are better than the industry average, we have a good app.
Analytics in mobile apps
We are using Adobe Analytics to track page views and interactions for our web and mobile apps. On previous mobile app teams I’ve used mParticle and Mixpanel. The specific framework doesn’t matter; I’ve found Adobe’s Analysis Workspace to be a great tool for insights, once you know how to use it. Adobe also has tons of online tutorials for building out your own dashboards.
App versions over time
Here’s our app usage over time broken down by app version:
We have version 1.1 on the app store, and we released 1.0 nearly two months ago. We did an internal beta release with version 0.5.0. If anyone on an old version tries to log in, they’ll see a forced-update view.
Crashes are a fact of life for any mobile app team; there are so many different variables that go into app crashes. However, keeping track of them and aiming for low rates is a good thing to measure.
With version 1.1 we improved our crash rate on Android from 2.77% to 0.11%. You can also use the UI exerciser called monkey from the command line against your Android emulator to try and find more crashes. With adb shell monkey -p your.package.name -v 1000 (substituting your app’s package name) I can send 1000 random UI events to the emulator.
I can also keep an eye on how many error messages are seen. The spike in Android app error messages was me throwing the chaos monkey at our production build for a bit. However, when there is a spike in both Android and iOS, I know I can ask, “Was there something wrong with our backend that day?”
Test vs Prod – page views
If every page has one event being tracked, we can compare our upcoming release candidate against production. Say 75 page views were triggered on the test build, compared to the 100 page views we can see in production. We can then say we’ve tested 75% of the app and haven’t seen any issues so far.
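A sketch of that comparison (the page names and counts here are made up): if every screen fires one page-view event, coverage is the test build’s distinct pages over production’s, and the difference tells you what hasn’t been exercised yet.

```python
# Hypothetical page-view events pulled from two analytics report suites.
prod_pages = {"home", "login", "balance", "transactions",
              "profile", "settings", "help", "statements"}   # seen in prod
test_pages = {"home", "login", "balance", "transactions",
              "profile", "settings"}                         # seen on test build

coverage = 100 * len(test_pages & prod_pages) / len(prod_pages)
untested = prod_pages - test_pages

print(f"Tested {coverage:.0f}% of pages seen in production")  # 75%
print(f"Not yet exercised: {sorted(untested)}")
```

The set difference is the actionable part: it hands you a short list of screens to hit in the next exploratory session.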
There’s no need to aim for 100% coverage. Our unit tests do cover every screen, but because they run on the internal CI network those events are never sent to Adobe. We have over 500 unit/UI tests on both Android and iOS (not that the number of tests is a good metric; it’s an awful one, by the way).
But if you’ve tested the main flows through your app and that’s gotten you 50% or 75% coverage, you are now approaching diminishing returns. What are the chances of finding a new bug? Or a new bug that someone cares about?
You could spend that extra hour or two getting to 90-95%, but you could also be doing more useful stuff with your time. You should read my risk-based framework if you are interested in finding out more.
If you are working on a new feature or flow in your app, you can measure how many people actually complete the task. E.g. for first-time log in, how many people actually log in successfully? How many people lock their accounts? If you are trying to improve this process, you can track whether the rates improve or decline.
You could also measure satisfaction after a task is completed and ask for feedback: a quick out-of-5 score along the lines of, “Did this help you? Was it easy to achieve?” You can put a feedback section somewhere in your app.
The tip of the iceberg
These metrics and insights I’ve shared with you are just a small subset of everything we are tracking, and they’re a small part of our overall test strategy. Adobe has also been useful for digging into breakdowns by mobile device and operating system. There are many ways you can cut the data to provide useful information.
What metrics have you found useful for your team and getting a gauge on quality? What metrics didn’t work as well as you had hoped?
This is not financial advice and the views expressed in this blog are my own. They are not reflective of my employer’s views.
Bugasura is an Android app and a Chrome extension. It helps with keeping track of exploratory testing sessions and comes with screenshot annotation and Jira integration.
Here are a couple of screenshots of the Android app in action, being used for an exploratory session on our test app.
First I selected the testing session:
While I’m testing I see this Bugasura overlay which I can tap to take a screenshot and write up a bug report on the spot:
Here’s their reporting a bug flow:
And here’s a testing report after I finished my exploratory testing where I can push straight to Jira if I want:
Here’s the sample report link (caveat: the screenshots attached to the bug are now public information on the internet, so there’s a privacy concern right there). But OMG, the exploratory session recorded the whole flow too, so a developer could see exactly what I did to find that bug.
Here’s that bug report in Chrome, paused at screen 13 out of 18:
Some caveats I’ve found so far: the test report is public (not private by default), so you wouldn’t want to include screenshots of private or confidential information.
Bugasura only works on Android/Chrome. There isn’t an iOS version, but I guess with some remote device access running through Chrome it could work? We use Gigafox’s Mobile Device Cloud at work to access a central server of mobile devices, and I imagine Bugasura could work with it.
Also I think they may have misspelt Elisabeth’s name in her quote.
This blog post reflects my opinions only and does not reflect the views held by my employer.
My team is going through a beta release for our mobile app to get early feedback. We’ve noticed that our Android app is struggling compared to iOS. It seems that the extra hurdle of signing up for the Android beta program impacts installations. Naturally we’d expect Android engagement to lag a little behind iOS based on the Aussie mobile usage market analysis, but there is still a significant drop.
We have 428 iOS installs and 99 Android installs. That’s a 19% Android installation rate. We have roughly 75-80% successful registrations and return log-ins once people actually figure out how to install the app.
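The arithmetic behind that 19% figure, for anyone checking (install counts as above):

```python
ios_installs = 428
android_installs = 99

# Android's share of all beta installs across both platforms
android_share = 100 * android_installs / (ios_installs + android_installs)
print(f"Android share of installs: {android_share:.0f}%")  # 19%
```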
Google Groups vs TestFlight
We are using Google Groups to manage the distribution of the Android beta app, and because it’s harder to use than TestFlight for iOS, we’ve gotten fewer installations. It’s fascinating how an extra hurdle in the sign-up process can impact installations.
Our Android numbers appear to be higher than usual here, but I think it’s to do with the timeframe I’m collecting these numbers over. A few people installed the Android app before we officially started the beta release, and I think they’ve been counted in these statistics.
Do you remember how the world freaked out over the potential Y2K bug when the year was changing from 1999 to 2000? A large part of the mitigation was a huge outsourcing effort to India, which helped establish India as the global IT giant it is today. So when not many bugs eventuated, it was a bit anticlimactic.
Globally, around $308 billion was spent on compliance and testing, and it helped build more robust systems that survived the system crashes caused by the 9/11 terrorist attacks.
Well, the Y2K bug would only impact systems that used two digits to represent the year, i.e. using the DDMMYY format to save on memory.
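A toy illustration of why two-digit years break (the parser here is a deliberately naive pre-Y2K sketch that assumes every year means 19YY):

```python
from datetime import datetime

def parse_ddmmyy(stamp: str) -> datetime:
    """Naive pre-Y2K parser: assumes a two-digit year always means 19YY."""
    day, month, yy = int(stamp[:2]), int(stamp[2:4]), int(stamp[4:6])
    return datetime(1900 + yy, month, day)

new_years_eve = parse_ddmmyy("311299")  # 31 Dec 1999 -- fine
rollover = parse_ddmmyy("010100")       # meant to be 1 Jan 2000...
print(rollover.year)                    # 1900 -- off by a century
```

Anything computing ages, interest or expiry dates from that result would suddenly be out by a hundred years.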
And the world updated its systems everywhere. Businesses did a pretty good job of patching that bug before it became an issue. Bugs still came up, but the world didn’t end.
Sometimes the fix was to push it out
One fix people implemented was to make the two-digit years 00 to 20 stand for 2000 to 2020. That only pushed out the problem, and we’ve had some cases of this bug coming into effect this year. You can read more about this 2020 Y2K bug here.
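That windowing fix looks something like this sketch: two-digit years up to the pivot map into the 2000s, everything above stays in the 1900s, so the bug is merely deferred to the year after the pivot.

```python
def expand_year(yy: int, pivot: int = 20) -> int:
    """Windowing fix: 00..pivot -> 2000s, everything above stays in the 1900s."""
    return 2000 + yy if yy <= pivot else 1900 + yy

print(expand_year(0))   # 2000
print(expand_year(20))  # 2020
print(expand_year(21))  # 1921 -- the deferred bug bites in 2021
```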
There’s another bug for 2038
However, did you know there is another Y2K-style bug scheduled for the year 2038? Basically, 32-bit computer systems count time as the number of seconds since 1970. In the year 2038 we get an integer overflow, and that counter wraps around to a large negative number (which reads as a date back in 1901). This is more likely to impact cheap embedded systems with limited memory or old legacy systems.
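A quick way to see the 2038 limit (Python integers don’t overflow, so the 32-bit wraparound is simulated explicitly here):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

last_moment = EPOCH + timedelta(seconds=INT32_MAX)
print(last_moment)  # 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit counter wraps around to -2**31:
wrapped = EPOCH + timedelta(seconds=-2**31)
print(wrapped)      # 1901-12-13 20:45:52+00:00
```

So an affected system doesn’t reset to 1970; it jumps back to December 1901, which is arguably worse for anything doing date arithmetic.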
If we switch to a 64-bit counter, our sun will die before we hit the same issue. It’s like going from IPv4 (we are already running out of IP addresses) to IPv6 for the internet.
I like to use mind maps to help me test. Mind maps are a visual way of brainstorming different ideas. On a previous mobile team I printed and laminated this mind map to bring along to every planning session to help remind me to ask questions like, “What about accessibility? Automation? Security? or Performance?”:
As I go through exploratory testing (or pair testing), I’ll tick things off as I go and take session notes. Often this will involve having conversations with people, and sometimes bugs are raised. Here is a quick mind map I’ll use for log-in testing:
Heuristics for testing
This mind map approach can be combined with a heuristic test strategy or a mnemonic test approach. A heuristic is a rule of thumb that helps you solve problems; heuristics often have gaps because no mental model is perfect.
SFDPOT is a common mnemonic developed by James Bach, who also created Rapid Software Testing (RST), a context-driven methodology. James developed his RST course with Michael Bolton.