
A day in the life of a mobile app tester

A few days ago I posted on LinkedIn that I have hardly written any test automation code in the last year.

A few people asked how I test during a code review, so I thought I would dive deeper into what an average day looks like for me.

My Role: Software Engineer in Test

Officially, my role is Senior Software Engineer in Test on a mobile app team. It’s a little like this role description, but with a mobile app focus instead of a C#/web focus. So why am I not writing test automation code? It’s right there in the job description.

Checking Jira

I’ll start my day by checking our scrum board for the current sprint and figuring out my focus for the day: what tasks have been assigned to me and what needs some focused testing.

We have user stories and tasks in our sprint backlog, organised into three columns: To Do, Doing and Done. We create dev and test subtasks for each user story.

Here’s a rough cadence of our sprint meetings; you can read more about what these meetings involve in my mobile app test strategy blog post.

Checking Out the Code

Next, I’ll usually update my local code to include recent changes; if the dev work was merged in recently, it can be tested. This is the git command I’ll usually run to get my local branch up to date:

git checkout develop && git pull project develop && git push

I use git to help me with large code reviews. The developers will create a pull request with their new feature. I’ll check that code out on a local branch and use Xcode/Android Studio to test it on emulators/simulators. I’ll often use this git command to do so:

git fetch project pull/{pr_number}/head:{pr_number} && git checkout {pr_number}

I’ll explore the new feature and pass the code review, chatting with the developer if I observe any issues. I might even run the tests they’ve created as part of that pull request.

I don’t do this for every code review. If it’s a small change, I’ll approve it, and once it’s merged into the main dev branch I’ll update my local copy to test those changes. Then I’ll test on emulators/simulators, or I’ll wait for our continuous integration (CI) testing pipeline to produce an app file and test on devices.

Android Emulators

I have an emulator for every supported Android version. For really important features (i.e. a 3-star feature based on my risk analysis), I’ll test on every emulator.

I have a similar setup for Xcode simulators; we are in the process of dropping support for iOS 11 because we’d also like to update Xcode.

Backend Testing

I also sometimes check out backend code, but my work machine isn’t set up to build our backend locally. I’ll usually use a test environment and test the APIs via Postman or cURL.

Helping with unit testing

I’m more likely to help the developers I work with come up with a unit testing strategy. For example, we were recently working on a feature to add up transactions over the last financial year, and we have over 100 different transaction types.

We came up with an approach: if we give every transaction its type number as its dollar value, we can more easily check the sums involved. For example, if one category to calculate was:

transaction type2 + transaction type3 - transaction type1 

we could say we’d expect the unit test result to be $4 (2+3-1). We created the mock JSON that went into the unit tests and it really helped the developer figure out how to test this more easily.
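
As a rough sketch of what that kind of unit test might look like (the transaction model and function names here are hypothetical, not our actual code), in Kotlin:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical model: in the mock data, each transaction's dollar value
// matches its type number, so expected sums are easy to reason about.
data class Transaction(val type: Int, val amount: Double)

// The category under test: transaction type2 + type3 - type1
fun categoryTotal(transactions: List<Transaction>): Double =
    transactions.sumOf { txn ->
        when (txn.type) {
            2, 3 -> txn.amount
            1 -> -txn.amount
            else -> 0.0
        }
    }

class CategoryTotalTest {
    @Test
    fun `type2 plus type3 minus type1 comes to 4 dollars`() {
        val mockTransactions = listOf(
            Transaction(type = 1, amount = 1.0),
            Transaction(type = 2, amount = 2.0),
            Transaction(type = 3, amount = 3.0),
        )
        assertEquals(4.0, categoryTotal(mockTransactions), 0.001)
    }
}
```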

Supporting the App

I helped my team build a mobile analytics dashboard where we can monitor user engagement and any potential issues that need more investigation. Recently I’ve been checking it nearly every day and helping my team add more sections because we’ve got an SSL certificate issue at the moment.

We release our app monthly and we’ve got 7 versions that our customers can be on. We have a force update API that the mobile app will hit before it starts the log in process. If a user is on an unsupported version they can’t log in and are directed to the play store to update.

BUT this API uses HTTPS, and our production SSL certificate expired recently. We haven’t built rolling certificate pinning yet, so instead of seeing a forced update, users on old versions see a generic error message because the SSL certificate has expired. Oops, our bad.

We were able to get 98% of our regular app users onto the latest build with the latest SSL certificate but people who log in infrequently would not have seen the forced update. We are monitoring for an increase in generic error messages and hoping that no one gives us a poor review on the app stores.

I have a crash course in SSL certificates if you’d like to read more.

Building our mocking framework

Most of my code changes involve adding mock JSON data to our mobile app mocking framework. We can test our mobile app independently of any test environment.

We have a bunch of profiles mocked into the test build of our app. For example, if you want to see the difference between a pensioner and an investor, you can change the profile in the test build.

Our whole business has access to this test build to help the company support the app (not every staff member has an account to use the app with). Sometimes these mocks need updating. As a team we’ve discussed a programmatic way to keep these profiles up to date, but it’s a little hard and hasn’t been prioritised.

If we are working on a new feature and I’ve recently been testing the API, I’ll often add the API mock test data into our mobile repositories ahead of the mobile dev work to make that process easier.
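
To give a feel for the general shape of such a framework (a simplified sketch only, with a made-up profile format rather than our actual implementation), a mocking layer can be as small as an OkHttp interceptor that answers API calls with canned JSON for the selected profile:

```kotlin
import okhttp3.Interceptor
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Protocol
import okhttp3.Response
import okhttp3.ResponseBody.Companion.toResponseBody

// Hypothetical example: canned account responses keyed by profile name.
class MockProfileInterceptor(private val profile: String) : Interceptor {

    private val cannedResponses = mapOf(
        "pensioner" to """{"accountType":"pension","balance":250000.00}""",
        "investor" to """{"accountType":"investment","balance":80000.00}""",
    )

    override fun intercept(chain: Interceptor.Chain): Response {
        val json = cannedResponses[profile] ?: """{"error":"unknown profile"}"""

        // Never call chain.proceed(); the test build always answers locally.
        return Response.Builder()
            .request(chain.request())
            .protocol(Protocol.HTTP_1_1)
            .code(200)
            .message("OK")
            .body(json.toResponseBody("application/json".toMediaType()))
            .build()
    }
}
```

The test build wires an interceptor like this into its networking client, while the production build leaves it out.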

User Research

Recently I’ve been branching out and helping the design team with competitor analysis and research for new features we are working on. For example, how do other mobile apps handle two-factor authentication?

Running a bug bash

Every few sprints I’ll book the team in for a bug bash. It’s dev tools down and bug hunting goggles on. I’ll organise some snacks and a Confluence page of test data and information about what has changed recently to help people focus on what to test. We award our best bug hunter for their efforts and add the bugs found to the backlog. You can read more about running a bug bash here. We try to keep it fun and social.

Summary

So yes, I haven’t written much test automation code and this is ok. The developers I work with are great at writing their own unit and integration tests. Does this make my work any less valuable? No.

I enjoy engaging with customers and analytics more than building out test automation code. If you are interested in learning the technical skills that I use on a day-to-day basis, I have this technical tester’s guide here.

What are your thoughts on this way of working?


Testing for accessibility

Testing for accessibility in your apps/websites doesn’t have to be a chore. Making accessible and inclusive content will increase your target audience and reduce your chances of being sued (123).

It’s not like I’m encouraging lawsuit driven development, but it is important to build an inclusive experience.

Other benefits of building with accessibility in mind are known as the Curb-Cut effect. When curb cuts (i.e. sloped ramps) were first introduced at intersections for wheelchair users, it turned out that many other people benefited too: people pushing strollers/shopping trolleys, people with deliveries and people with walkers all used these new curb cuts to help navigate their urban environments.

A curb cut, image from WikiCommons (5). Here is a podcast on the history of these ramps (6).

Over 4 million people in Australia live with some form of disability. That’s 1 in 5 people (1). Over my lifetime I’ve experienced a number of health issues that have impacted my abilities temporarily. I might not be living with an officially diagnosed disability today but I frequently use features such as the closed captions on YouTube and Audio Books when I feel like changing up how I consume information.

It’s easy to get overwhelmed when thinking about accessibility testing; there are many different levels of accommodation to consider (7). Getting started with the following is a good step in the right direction:

  • Vision Impairment (Can people use screen readers to interact with your content?)
  • Hearing Impairment (does any linked video content have closed captions?)
  • Mental health/Intellectual/Autism spectrum (Does your content overload people’s processing ability? Is it easy to read and follow?)

It’s no wonder people often put this type of testing in the too-hard basket. However, if you start by testing with vision impairments in mind, you should get a good return on investment towards a more inclusive experience.

Screen Reader technology

Most vision-impaired users will use some sort of screen reader technology to browse the web. Even I use this technology when I feel like listening to a news article online instead of reading it. 2.3% of the US population have some sort of vision impairment (2), a statistic that is likely to grow with an aging population. The growth of voice interfaces will also expand the use cases for this technology. I prefer using iOS’s VoiceOver, but there are many other tools out there: Windows comes with Narrator, Android has TalkBack, and there’s also the paid software JAWS.

Using VoiceOver with mobile apps

On my iPhone, I can go into Settings > Accessibility > VoiceOver to enable the screen reader:

iOS’s accessibility options view

I enable accessibility shortcuts on my mobile device to make it easier to use. When I press the side button 3 times, I can easily switch accessibility features on and off.

My iOS accessibility shortcuts, I have VoiceOver and colour filters enabled

When I test using VoiceOver:

  • First, I navigate to the app/site I’d like to test
  • Then I enable VoiceOver using my shortcut
  • Finally, I flick/scroll two fingers up the screen

This enables the screen reader to read everything on the screen from top to bottom and I can test if the flow makes sense. There are plenty of other guides out there for using VoiceOver (12). I suggest switching on a screen reader on your mobile device and getting familiar with the technology. As a test, can you figure out how to take a photo on your phone using screen reader technology?

Large font accommodations

Jacking up the font size or using a dyslexia font plugin in Chrome is a great way to find places where your UI may break. Watch out for views that don’t scroll and text that doesn’t wrap.

Alternative text for images

Screen readers will read out the alternative text for images for vision-impaired users who can’t see them. Alt text is an optional attribute for HTML image tags. For example, using Campaign Monitor’s Email Builder I have included alt text with my image to convey its message for screen readers. Here is the HTML tag for the image:

<img src="darth_vader.jpg" alt="Meme; Come to the dark side ... we have cookies">

This is what the email looks like with images:

Email view with images

I used Firefox’s web developer tools to replace images with their alt text to see this next view of the same email:

Email view with alt text replacing the images

It’s considered a web best practice to include alt text (15). You can also read more about the Web Accessibility Initiative at the W3C (16).

Tips for Alt text

  • Check you have alternative text for your images and that it makes sense
  • Check that decorative images have a blank (empty) alt text string so that screen readers don’t try to read out the image or its file name
  • Don’t include words like “image of” in your alt text, because screen readers will announce that it’s an image and then announce the alt text. You will get a double whammy of “image. image of blah” from your screen reader, which is super annoying
  • Use tools that display the alt text or replace images with their alt text to help test this
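
The same thinking carries over to native mobile apps, where screen readers like TalkBack and VoiceOver read a content description rather than alt text. A minimal Android sketch (the views here are hypothetical):

```kotlin
import android.view.View
import android.widget.ImageView

fun describeImages(memeImage: ImageView, dividerImage: ImageView) {
    // Meaningful images get a description that conveys the message,
    // without prefixing it with "image of".
    memeImage.contentDescription = "Meme; Come to the dark side ... we have cookies"

    // Purely decorative images are hidden from screen readers entirely,
    // the mobile equivalent of an empty alt attribute.
    dividerImage.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO
}
```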

Test your images for colour blindness

About 4.5% of the British population experience some form of colour blindness (3). To help test this I have a greyscale shortcut button set up on my iPhone. On a side note, setting your phone to greyscale helps make it less addictive too (17).

This is what my phone looks like with greyscale enabled; you can also see what the accessibility shortcuts look like

Hopefully you now realise that testing for accessibility in your apps/websites isn’t that hard. If a screen reader works with your content, you have alternative text fallbacks for images, and your images are still readable in greyscale, you will be most of the way towards providing an awesome, inclusive experience with your online content.

References

  1. https://www.and.org.au/pages/disability-statistics.html
  2. https://nfb.org/blindness-statistics
  3. http://www.colourblindawareness.org/colour-blindness/
  4. https://ssir.org/articles/entry/the_curb_cut_effect
  5. https://commons.wikimedia.org/wiki/File:Pram_Ramp.jpg
  6. https://99percentinvisible.org/episode/curb-cuts/
  7. https://services.anu.edu.au/human-resources/respect-inclusion/different-types-of-disabilities
  8. https://www.apple.com/au/accessibility/iphone/vision/
  9. https://support.microsoft.com/en-au/help/22798/windows-10-narrator-get-started
  10. https://support.google.com/accessibility/android/answer/6283677?hl=en
  11. https://www.freedomscientific.com/products/blindness/jawsdocumentation
  12. https://www.imore.com/how-use-voiceover-iphone-and-ipad
  13. https://www.campaignmonitor.com/resources/guides/alt-text-in-email/
  14. https://addons.mozilla.org/en-US/firefox/addon/web-developer/
  15. https://support.siteimprove.com/hc/en-gb/articles/115000013031-Accessibility-Image-Alt-text-best-practices
  16. https://www.w3.org/WAI/
  17. https://www.youtube.com/watch?v=NUMa0QkPzns
  18. https://www.abc.net.au/news/2019-01-10/commonwealth-bank-settles-discrimination-claim/10702194

Originally posted on Medium in 2018.


That elusive Test Strategy

I was recently asked for recommendations on learning about test strategies. Here are my sample strategies:

A strategy doesn’t have to be a giant document. It starts as an idea in your head, and you have to get other people on board with that idea, so you need to share it in some format. This blog post is about how I’d go about developing a new test strategy in a new team.

History of the term

First, let’s take some time to understand the term strategy. Historically, the word is associated with war and battle:

“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.” - Sun Tzu
https://www.pinterest.com.au/pin/287878601154737781/

A strategy is there to help you win or achieve some goal. Many people talk about their tactics when they are thinking of their strategy. Tactics are your how; they aren’t your whole strategy.

A tactic is a conceptual action or short series of actions with the aim of achieving a short-term goal. This action can be implemented as one or more specific tasks.

https://en.wikipedia.org/wiki/Tactic_(method)

Book: I Have a Strategy (No, You Don’t)

This book helped me understand the term “strategy” in a visual and fun way.

https://www.amazon.com.au/Have-Strategy-You-Dont-Illustrated/dp/1118484207

According to this book, a strategy has four parts:

  • A purpose
  • A distinct, measurable goal
  • A plan
  • A sequence of actions or tactics

Start with a purpose

If I were dumped into a new team tomorrow and asked to develop a test strategy, I’d start by interviewing/surveying a few people. Depending on the size of the team and who I was working with, it could be an online survey or a casual chat over a coffee. I’d ask something along the following lines:

  • What does quality mean to you?
  • What are common problems in the testing process here?
  • If you could fix just one thing about our quality, what would it be?

Now, different people are going to answer these questions differently. Developers might say test code coverage, easily maintainable code and easy deployments make a high-quality product. Your project manager might say happy customers. Testers might say fewer bugs found in the test phase.

Develop a goal

Once I’ve surveyed enough people (five is a good enough number for most user research interviews), I’ll work on constructing a goal. It might be:

  • improve our continuous integration build times
  • increase our test coverage
  • reduce the amount of negative customer feedback

Make sure it is measurable. You could use SMART or OKR goal formats.

https://www.toolshero.com/personal-development/smart-goals/
https://blog.weekdone.com/introduction-okr-objectives-key-results/

Develop a plan

Now, what are some things I or the team could do to achieve our goal? We could create tasks during our sprint to help us work towards it. Once you’ve achieved something, survey those original interviewees to see if the perceived quality has actually improved.

Measure your progress

Measure the improvements in the quality of your product. For my team, we are tracking average app store ratings, crash rates and engagement with in-app features to see if they are actually useful. https://bughuntersam.com/metrics-and-quality/

Risks and Gaps

A test strategy could also have a section about risks or gaps in the approach. For example, things like performance testing and security testing might not be included. A brief explanation of why these aren’t part of your strategy can be useful for setting the context and scope.

UI Automation Visual Risk Framework

If you are working on improving your UI test automation coverage, you can use this visual risk-based framework to help focus on where to start and what to automate first, and measure progress against it as part of your strategy.

https://bughuntersam.com/visual-risk-ui-automation-framework/

Conclusion

I’m more comfortable with the term marketing strategy over test strategy because it’s easier to measure your impact and easier to come up with concrete goals. Software testing isn’t as tangible as many other parts of the business process and can be hard to measure.

Can your strategy be summarised by this comic:

test all the things
automate all/some things

What resources have helped you understand test strategies? I’d love to check them out.


Metrics and Quality

The superannuation and investment mobile app I’ve been working on over the last year has finally been released. It’s been on the app store for just over a month now* and this blog is about how we are using metrics to help keep tabs on the quality of our app.

*You can download the app via Google Play or the Apple App Store; you can also create an account here.

Average app store rating

The average app store rating is one useful metric to keep track of. We are aiming to keep it above 4 stars and we are also monitoring the feedback raised for future feature enhancement ideas. I did an analysis of the average app store reviews of other superannuation apps here to get a baseline of what the industry average is. If we are better than the industry average, we have a good app.

Analytics in mobile apps

We are using Adobe Analytics for tracking page views and interactions for our web and mobile apps. On previous mobile app teams I’ve used mParticle and Mixpanel. The framework here doesn’t matter; I’ve found Adobe’s Analysis Workspace to be a great tool for insights, once you know how to use it. Adobe also has tons of online tutorials for building out your own dashboards.

App versions over time

Here’s our app usage over time broken down by app version:

We have version 1.1 on the app store and released 1.0 nearly 2 months ago. We did an internal beta release with version 0.5.0. If anyone on the old versions tries to log in they’ll see a forced update view.

Crash Rates

Crashes are a fact of life for any mobile app team; there are so many different variables that go into app crashes. However, keeping track of them and aiming for low rates is a good thing to measure.

With version 1.1 we improved our crash rate on Android from 2.77% to 0.11%. You can also use a UI exerciser called monkey from the command line against your Android emulator to try to find more crashes. With the following command I can send 1000 random UI events to the emulator:

adb shell monkey -p {mobile_app_package_name} -v 1000

Crashes in App Centre

We can dive a bit deeper into crashes in App Centre (a Microsoft platform that integrates with our TeamCity continuous integration pipeline for managing all of our test builds).

When exploring stack traces, you want to look for lines that reference your app instead of getting lost in all of the framework code; look for lines that start with your app’s package name.

App Centre gives reports broken down by device and operating system:

With analytics set up, you can even dig into an individual report and get the page views that happened before that crash occurred.

What’s a good crash rate?

That depends on the context of your app. Ideally zero is best, but perfect software is a myth we keep chasing. As long as your crash rate is trending downwards, you are making progress towards improving it. Here’s a good follow-up blog post if you are interested in reading more.

Error Tracking

I can also keep an eye on how many error messages are seen. The spike in the Android app error messages was me throwing the monkey exerciser at our production build for a bit. However, when there is a spike in both Android and iOS, I know I can ask, “was there something wrong with our backend that day?”

Test Vs Prod – page views

If every page has one event being tracked, we can compare our upcoming release candidate against production. Say we see that 75 distinct page views were triggered on the test build, compared to the 100 distinct page views we can see in production. We can then say we’ve tested 75% of the app and haven’t seen any issues so far.

This is great for measuring the effectiveness of bug bashes and exploratory testing sessions, if you want an answer to, “how much testing did you/the team do?”.

Hang on, why 75%?

There’s no need to aim for 100% coverage. Our unit tests do cover every screen, but because they run on the internal CI network those events are never sent to Adobe. We have over 500 unit/UI tests on both Android and iOS (not that the number of tests is a good metric; it’s an awful one, by the way).

But if you’ve tested the main flows through your app and that’s gotten you 50% or 75% coverage, you are approaching diminishing returns. What are the chances of finding a new bug? Or a new bug that someone cares about?

You could spend that extra hour or two getting to 90-95%, but you could also be doing more useful stuff with your time. You should read my risk-based framework if you are interested in finding out more.

Measuring Usability

If you are working on a new feature or flow in your app, you can measure how many people actually complete the task. For example, with first-time log in: how many people actually log in successfully? How many people lock their accounts? If you are trying to improve this process, you can track whether the rates improve or decline.

You could also measure satisfaction after a task is completed and ask for feedback: a quick out-of-5 score along the lines of, “did this help you? Was it easy to achieve?”. You can put a feedback section somewhere in your app.
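
As a sketch of how an event like that can be captured (assuming the Adobe Experience Platform Mobile SDK on Android, with made-up action names), a first-time log in result might be tracked like this:

```kotlin
import com.adobe.marketing.mobile.MobileCore

// Hypothetical example: record whether a first-time log in succeeded so the
// funnel can be compared from one release to the next.
fun trackFirstLogin(succeeded: Boolean, attempts: Int) {
    val action = if (succeeded) "first_login_success" else "first_login_failure"
    MobileCore.trackAction(action, mapOf("attempts" to attempts.toString()))
}
```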

The tip of the iceberg

The metrics and insights I’ve shared with you are just a small subset of everything we are tracking, and a small part of our overall test strategy. Adobe has been useful for digging into mobile device and operating system breakdowns too. There are many ways you can cut the data to provide useful information.

What metrics have you found useful for your team and getting a gauge on quality? What metrics didn’t work as well as you had hoped?

This is not financial advice and the views expressed in this blog are my own. They are not reflective of my employer’s views.


Bugasura and Exploratory Mobile Testing

A few weeks back, Pradeep Soundararajan (founder of Moolya Testing) and I were having a conversation on Twitter about test strategies for mobile apps. He suggested trying Bugasura for running a bug bash.

Bugasura is an Android app and a Chrome extension. It helps with keeping track of exploratory testing sessions and comes with screenshot annotation and Jira integration.

Here are a couple of screenshots of the Android app in action, being used for an exploratory session on our test app.

Bugasura Flow

First I selected the testing session:

While I’m testing I see this Bugasura overlay which I can tap to take a screenshot and write up a bug report on the spot:

Here’s their bug reporting flow:

And here’s a testing report after I finished my exploratory testing where I can push straight to Jira if I want:

Here’s the sample report link (caveat, the screenshots attached to the bug are now public information on the internet, so there’s a privacy concern right there), but OMG, the exploratory session recorded the whole flow too (so a developer could see exactly what I did to find that bug).

Flow Capture

Here’s that bug report in Chrome, paused at screen 13 out of 18:

On a side note, I love having these Jerry Weinberg (who wrote Perfect Software and Other Illusions About Testing) and Elisabeth Hendrickson (who wrote Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing) quotes sprinkled throughout the app.

Some caveats I’ve found so far: the test report is public (not private by default), so you wouldn’t want to include screenshots of private or confidential information.

Bugasura only works on Android/Chrome. There isn’t an iOS version, but I guess with some remote device access running through Chrome it could work? We use GigaFox’s Mobile Device Cloud at work to access a central server of mobile devices, and I imagine Bugasura could work with it.

Also I think they may have misspelt Elisabeth’s name in her quote.

This blog post reflects my opinions only and does not reflect the views held by my employer.


Beta releases for mobile apps

My team is going through a beta release of our mobile app to get early feedback. We’ve noticed that our Android app is struggling compared to iOS. It seems that having an extra hurdle when signing up for the Android beta program impacts installations. Naturally, we’d expect the Android engagement to lag a little behind iOS based on the Aussie mobile usage market analysis, but there is still a significant drop.

We have 428 iOS installs and 99 Android installs; Android makes up only about 19% of installations. We have roughly 75-80% successful registrations and return logins once people actually figure out how to install the app.

Google Groups vs TestFlight

We are using Google Groups to manage the distribution of the Android beta app, and because it’s harder to use than TestFlight for iOS, we’ve gotten fewer installations. It’s fascinating how an extra hurdle in the sign-up process can impact installations.

Return Logins

Our Android numbers appear to be higher than usual here, but I think it’s to do with the timeframe I’m collecting these numbers over. We had a few people install the Android app before we officially started the beta release, and I think they’ve been counted in these statistics.

Tools

We are using Adobe Analytics, TestFlight and a closed test track via Google Groups to help us manage the beta app releases and gauge engagement metrics. You can read more about our general mobile app test strategy here if you like.

What were your experiences going through a beta release process? What worked well? How did you improve the process?


Mindmaps and Heuristics for testing

I like to use mind maps to help me test. Mind maps are a visual way of brainstorming different ideas. On a previous mobile team I printed and laminated this mind map to bring along to every planning session to help remind me to ask questions like, “What about accessibility? Automation? Security? or Performance?”:

As I go through exploratory testing (or pair testing), I’ll tick things off as I go and take session notes. Often this will involve having conversations with people, and sometimes bugs are raised.
Here is a quick mind map I’ll use for log in testing:

Heuristics for testing

This mind map approach can be combined with a heuristic test strategy approach or a mnemonic test approach. A heuristic is a rule of thumb that helps you solve problems; heuristics often have gaps because no mental model is perfect.

SFDPOT is a common mnemonic developed by James Bach, who also developed Rapid Software Testing (RST), a context-driven methodology. James developed his RST course with Michael Bolton.

In SFDPOT, each letter stands for something that is meant to help generate testing ideas: Structure, Functions, Data, Platform, Operations and Time. Here’s someone’s blog post on applying SFDPOT to a mobile app testing approach.
I tend to use all of these different approaches as part of my exploratory testing practices.

More resources

If you are interested in reading about exploratory testing, Elisabeth Hendrickson has written a book called Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing. Elisabeth also has this nightmare headlines game, which is a fun tool for brainstorming potential error cases.

Your team could also go through the process of creating your own mnemonic/mind map to help you have consistency across different testing styles; maybe something that makes sense for your context?

You could also use mindmaps to feed into the different feedback loops you’d like to have in your team. You can read more in this mobile test strategy blog.


DevWorld 2019

I attended /Dev/World in Melbourne this week. It’s Australia’s only iOS developer conference. Here is my summary of the conference.

You can find photos from the conference on Twitter under the hashtag #DevWorld.

Themes

There were a few themes I gleaned from the talks. These were:

  • Swift is the most talked about language in this space
  • People are still using Objective-C
  • Cross platform remains a hot topic (from iOS to mac, React Native and Flutter)
  • Augmented Reality is still a cool toy
  • How to build engaging experiences in your app
  • Game design stories
  • Testing

Sketchnotes

Here are my highlights:

You can access all of my sketchnotes on Google Photos here.

My Talks

I presented on SwiftUI and wearables, using my sample poo tracking app as the basis for it. You can watch a similar talk on my Twitch channel, and you can also access my slide decks here. I also gave a lightning talk on how to do sketchnoting; you can read this blog post if you are interested in learning about it. I also have this blog post on Right-to-Left bugs, which was inspired by my talk, if you are interested in reading more.

Recordings

Eventually the talks will be up on YouTube under the AUC_ANZ channel. You can access previous years’ talks there too.


Right To Left design considerations for mobile apps

We truly live in a global and interconnected society. But have you tested your app using a Right-to-Left (RTL) language such as Arabic? This blog post is a reflection on some of the design considerations to keep in mind when accommodating this.

Why does this matter?

Arabic is one of the top five spoken languages in the world with around 300 million speakers, and it is the third most spoken language in Australia. Even if you only release apps for the Australian market, someone out there will have Arabic set as their default device language. It’s OK if you haven’t translated your app, but you should check that these people can still use it.

How do I test this?

Android

Enable developer options and select “Force RTL layout direction”. On my Samsung S10, this is what my screen and options look like after enabling this option:

iOS

In Xcode you can change the scheme’s application language to a pseudo-RTL language to see how your app renders this way without having to change the language on your device.

Number pads

You don’t actually need to render your keypads Right-to-Left; in fact, it’s more jarring to render numbers in an RTL arrangement because ATMs and phone pads are left-to-right in Arabic. Most Arabic speakers are used to globalised number pads. Samsung has an in-depth article on when RTL should be applied.

When I have RTL rendering set on my Android phone, the log-in PIN screen and phone call functionality are in LTR. However, some of my banking apps render their PIN pads in RTL.

Common RTL Issues

I was pleasantly surprised to find out how many of my apps weren’t broken when I switched to RTL rendering. Facebook, Twitter and email still look very good. Some apps (like my calculator) do not make sense to render RTL, and they remain LTR:

Bug One: Overlapping labels

You will have to watch out for overlapping labels, like in the Domain app here:

Bug Two: Visuals don’t match the written language

And watch for when your text is rendered RTL but the visual cue is still LTR, like in the shade bar representing the countries of visitors to my blog in this WordPress statistics view:

Bug Three: Menus that animate from the side

In the app I’m helping build, the side menu renders pretty funkily in RTL mode. I can’t show you a screenshot of this behaviour, but it’s probably the quirkiest RTL bug I’ve seen. If you find an app with bad side menu behaviour in RTL, please share your screenshots with me. I’ve also seen a PIN login screen where the icons were flipped but the button presses weren’t.

Bug Four: Icons aren’t flipped

Often icons have a direction associated with them, like the walking person when you get Google Maps directions. Sometimes it can look a little odd when they aren’t flipped correctly (as if they are walking backwards).

Have you seen these bugs before?

Please let me know your thoughts or experiences in supporting RTL languages. I’d love to hear your stories.


A Mobile App Test Strategy

Test strategy, what a funny concept. Now, this strategy isn’t going to help you win any battles (that is where the word strategy comes from, after all), but for lack of a better, well-understood term, this blog post is a reflection on what I imagine will work for my team*.

*Disclaimer: what might work for my team might not work for yours. People are amazingly diverse, and your team and company context is fundamentally different. Also, this is a wish list of what I think will work; it’s subject to change as we learn and evolve.

Context

First, let’s set the scene. Our scrum team includes 1 Android developer, 1 iOS developer, 2 backend developers, 2 business analysts (1 is our scrum master), 2 testers and a team/tech lead. We are changing our team structure and I’ve come on board as a software engineer in test. Our team closely collaborates with the design team; they are included in our group email threads but don’t come to our retros. We have a 10-day sprint cycle that looks a little like this:

We have a daily standup, a few kick-off meetings at the start of the sprint to lock in what we are working on for the next two weeks, some mid-sprint review/next-sprint refinement sessions, and a few meetings at the end that help tie up what we’ve completed. Consider this a crash course in Scrum/Agile, if you will. Not everyone is required to attend all of these meetings, and I won’t be covering them in detail in this blog post.

Get to the Test Strategy

Yes I know, that was a rambling tangent, but the context is important. Before I get into the good bits, I’ll ask you a question:

Why do we even bother with testing?

Some people say “to ensure the product works as expected”, or “to find bugs”, or “it’s my job to test things”, and these are all OK answers, but they miss the point a little. Here’s my answer:

We test to get feedback on the state of the product. To help us answer the question, “are there any known risks with shipping this product to production?”

Paraphrased from conversations with Michael Bolton, the tester, not the singer

Every part of the following strategy is tied into facilitating feedback. The more timely and accurate the feedback, the better.

Testing and quality is a team responsibility; it’s not just up to one person to be the quality gatekeeper. My role is to help facilitate feedback.

Layer One: The product design feedback loop

This is all a little out of scope of my team’s day-to-day activities, but this is how our design team tests whether we are building the right thing for users.

User/Product Research

This might involve researching our market for current trends. How many of our customers care about their superannuation? What is their financial literacy? What types of problems are they facing? What are our competitors doing, and how does their experience deliver value?

Wireframe/Design prototyping

Eventually someone will need to start sketching out some design ideas. What’s the user flow through a particular feature?

User Testing

This won’t happen for every new design; for example, log in hasn’t gone through this process. Our big new features will go through this type of testing, which helps get feedback on the design and layout. Does it all make sense?

Design and user story creation

Eventually, the design team and the business analyst will work together to create acceptance criteria, refine the UI and get the rest of the team up to speed with the context of a feature. Our user stories and designs are usually shared on a Confluence page and linked to Jira tasks. We use a Given/When/Then structure for our user stories.

Layer Two: The code feedback loop

Exploratory Testing

All testing is exploratory in nature; it’s front and centre, across everything we do. Chaos engineering is a form of it, as is building the code locally. We use our skills, plans and judgement to determine when and how much testing is needed at any point.

Experience Testing

When we do the code review, we will do exploratory testing based on the risk of the feature, timeboxed to a session or two depending on what has been built. We will look at the user stories, brainstorm more edge cases and consider if they are worth testing, checking whether the experience of the feature makes sense and whether there are any ways people can get into sticky, unexpected situations.

Unit Tests

As a developer builds a feature, they will create unit tests based on the user acceptance criteria. Developers will use the tools they are most comfortable with to write these tests. If you’d like to read more, Martin Fowler has this blog post on unit testing.

UI Automation

Here’s a visual risk board for UI automation; each feature is mapped against customer impact vs frequency of use.

I have my visual risk board next to our team, and we use it to prioritise how much testing we build at this layer. We use Espresso for Android and XCUITest for the iOS app.

“Why not Appium?”, I hear you ask

Simple: when test code lives in a repository outside of your production code, you decrease collaboration with the whole team. Also, you can’t easily run your Appium tests during pre-commit testing or locally as a developer. You can follow the interactive visual risk for UI automation exercise here to understand more.
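
To give a feel for what a test at this layer looks like, here is a minimal Espresso sketch (the activity and view IDs are hypothetical, not one of our real tests):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.espresso.matcher.ViewMatchers.withText
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginScreenTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun successfulLoginShowsAccountBalance() {
        // Drive the UI the same way a user would.
        onView(withId(R.id.username_field)).perform(typeText("test.user@example.com"))
        onView(withId(R.id.password_field)).perform(typeText("not-a-real-password"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())

        // Assert that the post-login screen is reached.
        onView(withText("Account balance")).check(matches(isDisplayed()))
    }
}
```

Because these tests live in the same repository as the app code, developers can run them locally or pre-commit, which is the collaboration benefit mentioned above.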

Code Review

When a new API is being developed, I’ll often pair with the developer to do a code review. We will talk about the architecture, brainstorm testing ideas, do a bit of testing (usually through Postman if we are testing an API) and chat about test coverage. Is it adequate? Is anything missing? Can we see the tests fail under the expected conditions?

If it’s a front-end feature, I’ll check out the code locally and use a different emulator/simulator than the developer uses. I’ll give the feature a good shake-out and check the test coverage. I’ll also test for accessibility if it’s a new front-end feature.

Mock Testing

For our mobile app, we are able to do most of our code review testing without ever talking to a backend. The engineers have built mock servers into our apps: when the app would call an API, our mock server returns a canned response. This helps us test that the UI and the flow hang together even when test environments aren’t available. If you’d like to read more, check out this article on mock testing for Android or this one for iOS.

Build Pipelines

We have different pipelines for different applications. We are using TeamCity as our continuous integration tool. Generally, all of our unit tests and UI tests will be run, and maybe our contract tests. I have a few other ideas to increase the value from our build pipelines that I’ll talk about under Chaos Testing. If our main builds start failing, we won’t release the software.

Device Coverage

We don’t necessarily focus on doing device testing for each feature that comes through. I try to pick a different emulator/simulator than the developers do, and I always make sure features get tested on a Samsung. For some features, if they are 3-star features from our risk analysis, we will spend more time testing on a wide variety of devices. We currently have an on-premise mobile device cloud server delivered by Mobile Labs. If you don’t have a device cloud, you could set up your own device farm.

Why Samsung?

Samsung has wide market saturation and they always do funky stuff to the Android UI. The Android emulators are awesome at vanilla Android. However, most people out there aren’t using vanilla Android :(.

Contract Testing

We are moving towards having contract testing in place that lets us know if an API starts to break: if someone changes the JSON payload of an API, our contract will break and someone will know they have more stuff to clean up. We don’t have contract testing for our mobile app yet, but some of our downstream microservices are starting to build these. If you’d like to find out more, read this article by Martin Fowler.

Test Environment

We have an integration test environment that our code is constantly being deployed into. Sometimes that can mean an API is down because it’s being deployed. We do a lot of our API testing in this environment.

Chaos/Crash Testing

With Android there’s a command-line tool called monkey (a UI exerciser); it throws random user input at your UI to try to find where it crashes. I’m hoping to include this in a build pipeline for an overnight build: run it for a few hours on an Android device and see if it crashes, then the next night do the same thing on a different device/OS combination. This will give us reasonable device testing over a sprint. I don’t know of a similar tool for iOS. You can read more about chaos engineering on Wikipedia.

Layer Three: Shipping the product feedback loop

Bug bash

A few days before the end of the sprint, our team and invited guests will sit down and do some exploratory testing on the features that have just been built. If anyone wants to explore a new API that’s been built, they can. If they’ve had their head in unit tests lately, they have the chance to explore some of the new UI. You can read how to run a bug bash to find out more. If major bugs are found here we won’t release the software. We might do a mob programming session when we don’t have enough features for a bug bash.

Demo

On the last day of the sprint we will demo our features to a broader audience. Feedback is gathered and turned into Jira items/research for the design team.

Internal release

Then we release to internal staff members. Many other companies call this “eating your own dog food”. This gives people the chance to raise more feedback before we put the product in front of customers. You can read more on Wikipedia here.

Beta Release

We can release our app to high-value or digitally savvy customers who want to be ahead of the curve. This is a customer engagement strategy as well as a test strategy.

Percentage roll outs

The Google Play Store allows you to do percentage rollouts. Say you roll out the new version to 5%, then monitor production for any new crashes or customer complaints. If it’s all smooth for a few days, you can continue the rollout to 50% and then 100%. Google Play also allows you to halt the rollout if major bugs do occur. The Apple App Store has a similar phased release feature.

Monitoring in production

What metrics should be communicated back to the team? How can we respond to issues in production? I like this quote from a five-minute Google talk back in 2007:

Sufficiently Advanced Monitoring is Indistinguishable from Testing

Ed Keyes

Layer Four: Supporting the product feedback loop

Supporting devices

We should support all of the devices that 80% of our market uses. We will support Android 6 (Marshmallow) and above, and iOS 11 and above. There will probably be some obscure Android devices out there that don’t play nice with our app; Android is a beast like that.

Facilitating customer feedback

There will be an easy way for customers to provide feedback in-app. I have some ideas on how to make that experience better, but there are privacy concerns to consider. We will also be monitoring our Google Play and Apple App Store reviews for bugs.

Triaging feedback

Someone should be monitoring all of this feedback, attempting to reproduce bugs if customers are facing them, and raising them in the team’s backlog for prioritisation next sprint.

Soap Opera Testing

Maybe in the future we could try some soap opera testing with the business? Soap opera testing is a condensed and over-dramatised approach to testing. What are the wackiest scenarios our customers have actually tried? How does our system break? You can read more about this exercise here.

Why the layers?

Consider each of these layers like a net. It won’t catch everything; bugs in production will still happen. But when we have all of these feedback loops layered on top of each other, we get a pretty tight net, where hopefully no major issues get into production.

What about auditing or compliance?

Our source of truth is the code, Jira and Confluence. When we have all of it integrated, we can prove we tested a feature thoroughly without too much extra overhead. An auditor’s mindset and a tester’s mindset are very similar: testers are concerned with product risks, while auditors are concerned with business risk.

Their main questions are, “did you do what you said you’d do? Did you follow your process?” and, “is the existing process adequate?”.

Where are your test cases?

Michael Bolton has a seven-part series on breaking the test case addiction. You can read part one here. You don’t need test cases to prove you did adequate testing; they create unnecessary overhead that detracts from adding business value.

What else is missing?

Security testing is not included in this test strategy. Neither is performance testing. Getting these included can be challenging, and I’m open to your suggestions on how I can incorporate this type of feedback in a timely manner.

What else would you add to your test strategy?