
Testing BBC iPlayer Mobile App

Steven Cross

Senior Test Engineer

Welcome!

I’m Steven Cross, Senior Test Engineer, working within BBC Future Media. I am Test Lead for the mobile BBC iPlayer native application on iOS and Android.

Embedded within the BBC iPlayer development team, I work closely alongside our Project Manager, Product Owners, Business Analyst, Developers, and fellow Test Engineers.

In this blog post, hopefully the first of many, I will be describing how the development team tests the BBC iPlayer mobile app.

The challenge

Most recently we released v4.0.0.x of the BBC iPlayer app for iOS and Android.

This release proved a challenge for the whole team, so let’s look at how we went about testing these new features ahead of release. These days we are very much ‘one team’, focusing on delivering across both iOS and Android platforms.

Our v4.0.0.x release has arguably been the first time we have developed (and tested) concurrently for iOS and Android. Whilst this is great news for our audience, we needed to ensure new features were adequately tested ahead of release, which in turn meant ensuring our test approach was aligned with that goal.

We had lots of new features, which we expected to work alongside existing functionality (since we were only changing certain areas of the app), and to run on a medley of devices (including older models), each of which could be running any of a variety of supported operating systems, e.g. iOS 5.1.1 and above and Android 2.2 and above. Easy!

Moreover, we wanted to start writing automated tests for this release, something we hadn’t previously done in anger for the app. These were to be written so that we had a consolidated set of feature files for the two platforms (there is a sketch of what that might look like after the list of questions below), and were to augment our manual test effort. The overriding objective was to automate those tests which not only could be automated (since not everything can) but which were also relatively cheap to automate (in the sense that the effort in doing so didn’t outweigh the value). The most important message to take away here is to focus on the value of a particular test. We asked ourselves questions such as:

• Is this something we care about?

• Do we want this to be run repeatedly against daily builds?

• Can this be automated?

• Is it quicker and easier to manually test?
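
To make the ‘consolidated set of feature files’ idea a little more concrete, here is a rough sketch of the kind of scenario we might write (the feature, scenario, and step wording are invented for this post rather than taken from our actual suite). The behaviour is described once, in platform-neutral language:

    # Illustrative sketch only: names and steps are invented for this post.
    Feature: Playing a programme from the home screen

      Scenario: Play the first featured programme
        Given I have launched the app
        When I select the first programme in the featured area
        And I start playback
        Then the programme begins to play

Written this way, the same scenario can be run against both iOS and Android builds; only the underlying step implementations need to know which platform they are driving.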

Our ethos is to test as early as possible in order to identify issues sooner rather than later. As any tester will tell you (myself included), we not only love finding bugs, but equally love seeing them fixed.

Defect Management (probably a good topic for another blog post) is crucial. However, regarding those bug fixes I mentioned, we have to take a pragmatic stance when triaging defects. Some may well be prioritised for fixing in the current or forthcoming release (which is great news) whilst others may not be deemed important enough for us to expend effort fixing (which is not so great).

The approach

When we begin to verify the expected behaviour for a particular user story, we start by testing the various scenarios which we, as a team, have written beforehand. Each scenario must adhere to a set of acceptance criteria.

Since our approach includes automation, we then ‘tag’ those scenarios we wish to automate with @automatable, which eventually becomes @automated once the automation is written; the remaining scenarios are, of course, tagged @manual.
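
As a rough illustration (again, the scenarios themselves are invented for this post), a tagged feature file might end up looking something like this:

    # Illustrative sketch only: scenarios are invented for this post.
    Feature: Search and playback

      @automated
      Scenario: Search for a programme by name
        Given I have launched the app
        When I search for a programme by its full name
        Then that programme appears in the search results

      @manual
      Scenario: Playback recovers after an incoming phone call
        Given a programme is playing
        When I receive and then end a phone call
        Then playback resumes from where it was interrupted

The tags also make it easy to pick out just the automated set when running against daily builds, for example by filtering on @automated in a Cucumber-style runner.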

Over and above verifying these scenarios we also manually look at the feature overall and simply try to find as many problems as we can (whilst balancing the need to move a ticket into ‘Done’ and move on to the next ticket). Speaking of workflow, we limit ourselves to a small sample of devices first of all to flush out any obvious areas of concern before then looking at wider test coverage (more on this later).

During this feature testing phase we look to identify bugs, and sometimes uncover scenarios that were never considered up front, leading to a new scenario being created for that particular user story.

We collaborate as much as possible with the rest of the team, surfacing any observations, issues, defects, or simply blockers affecting test progress as early as possible.

In parallel with our feature testing, we have another team of testers who perform ‘iterative regression testing’, which includes verifying newly implemented features across a wider suite of devices and operating systems (plus, as ever, looking at the app overall to ensure nothing else has been broken).

A series of sprints of feature testing and iterative regression testing culminates in a build we refer to as our ‘Release Candidate’. This is what we would be prepared to ship to our audience, but before that happens we perform a final round of regression testing.

Depending on the nature of the release in terms of complexity of change and associated risk, we look to plan which test cases we need to execute. We plan together as a team and agree upon which areas we feel we need to regression test along with selecting which devices and operating systems we want to run these tests against.

Inevitably, despite looking to identify issues as early as possible through the previous phases of testing, we have been known to uncover issues within the release candidate itself. If an issue is deemed something we definitely need to fix ahead of release, we look to complete our initial round of regression before then verifying any final fixes. Since this involves editing code, we then conduct additional regression testing against another release candidate version on the back of these fixes, to raise confidence levels.

Once we are happy with how the app is behaving, we then look to release. Often the new version can take a while to propagate through the various app stores before becoming available for our audience to install and/or update.

I hope this has given you a useful insight into how we undertake our testing for mobile BBC iPlayer.

Steven Cross, Senior Test Engineer, BBC Future Media
