
Testing mobile responsive design

Responsive design is rapidly becoming standard for websites: mobile traffic now represents roughly 12% of global browser share, and is growing. Users and clients expect sites to look good and work well regardless of the device they use to access them. This isn’t just a technical or design challenge to adapt to; it is also a challenge for Quality Assurance.

As a Quality Assurance Manager, I am responsible for ensuring the sites we deliver meet high standards in look, feel and performance. Until responsive design came along, the biggest challenges came from cross-browser testing. The number of browsers available today is greater than ever, and while only a small subset of them holds any meaningful share, the bar for ensuring high quality in your digital products is higher than ever.

As more of our clients have asked for or expressed interest in responsive design, I have researched and developed a set of tools and techniques to meet this new challenge. Along the way I have picked up some practical tips I want to share to help anyone developing their own testing strategy. The task of testing and maintaining responsively designed websites may seem daunting, and it is, especially for small teams with limited budgets.

[Image: mobile responsiveness (photo credit: Kayako)]

Tip 1. Testing with real or virtual devices?

First question: hardware or software? At the time of writing, a number of services offer virtual access to large libraries of different phones and platforms. They typically offer access to all their devices for a fixed yearly fee, or sell “device hours”. The latter is particularly suited to teams who might be doing one or two responsive projects a year and don’t want to pay for constant use.

In principle, having access to thousands of real devices sounds excellent. At first glance, device virtualisation seems the way forward: the choice and range are comprehensive, and for the price, access to real remote devices (as opposed to emulators) is very attractive.

However, after trialling a number of these services, we found many of them to be too slow. There could be several reasons for this, but I suspect the main cause is that we are based in the UK while many providers’ servers are in the US. I would expect this to improve over the next few years as more and more teams need to go responsive, but currently, despite having access to a wide range of devices, using them in any productive way was simply untenable for us.

The other problem with virtual devices is that you do not get the touch interface, a key aspect of a site’s user experience and something you cannot replicate virtually. Interacting with page elements (search bars, drop-downs, radio buttons, etc.) is a vastly different experience from doing so on real devices, and you may miss problems in mission-critical areas of your site.

The bottom line is that you should use real hardware to test a site’s responsive design properly. This point was backed up by Peter-Paul Koch’s talk at Mobilism earlier this year, where his main encouragement was to gather as many real devices as you can to make sure your supported hardware is covered.

Tip 2. What to support?

Before choosing which devices you need in your toolkit, you have to decide what to support. We formulated our support matrix mainly on statistics from two sources: online data providers and our current clients’ analytics data. We considered our position as a UK agency, as well as our clients’ audiences, when producing baseline support criteria. Supported platforms are constantly changing and moving; one of the most important tasks is researching what is best for your audience, both internally and externally.
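As a sketch of that weighting exercise, the snippet below aggregates a purely hypothetical analytics export and keeps every platform/form-factor combination above a chosen traffic threshold. The figures and the 2% cut-off are illustrative assumptions, not our actual criteria.

```python
# Hypothetical analytics export: (platform, form factor) -> share of visits.
# These numbers are made up for illustration only.
visits = {
    ("iOS", "tablet"): 0.08,
    ("iOS", "phone"): 0.05,
    ("Android", "phone"): 0.04,
    ("Android", "tablet"): 0.01,
    ("Other", "phone"): 0.005,
}

SUPPORT_THRESHOLD = 0.02  # illustrative: support anything above 2% of traffic

# Keep combinations above the threshold, most popular first.
supported = sorted(
    (combo for combo, share in visits.items() if share >= SUPPORT_THRESHOLD),
    key=lambda combo: -visits[combo],
)

for platform, form_factor in supported:
    print(f"{platform} {form_factor}: {visits[(platform, form_factor)]:.1%}")
```

Whatever threshold you choose, revisit it regularly: a platform sitting just below the line this quarter may be well above it the next.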

For example, we categorised the following screen sizes:

From this, we developed a support matrix of OS version against screen size where appropriate. For instance, on iOS, it would currently look something like this:

You can see already how the combinations for just one manufacturer present a logistical and budgetary challenge! Our device library has to cover all these major phone and tablet platforms, plus additional devices that represent lower-end feature phones. We keep a record of the devices we have in order to ensure we are meeting our support criteria.
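To illustrate how quickly those combinations multiply, here is a small sketch that enumerates a matrix of OS versions, screen-size buckets and orientations. The version numbers and bucket names are made-up placeholders, not our actual support matrix.

```python
from itertools import product

# Placeholder values for illustration -- not a real support matrix.
os_versions = ["5.x", "6.x"]
screen_sizes = ["small", "medium", "large"]
orientations = ["portrait", "landscape"]

# Every combination is a configuration that may need a pass through the test plan.
matrix = list(product(os_versions, screen_sizes, orientations))

print(f"{len(matrix)} configurations for a single platform")
for combo in matrix:
    print(" / ".join(combo))
```

Two OS versions, three size buckets and two orientations already yield a dozen configurations; add a second platform and a few more versions and the numbers climb fast.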

Tip 3. Tools & devices

Once you’ve decided on the platforms to support, the fun begins: getting the devices. A bit of simple maths shows that it is actually cheaper to buy the devices you need second-hand than to pay a yearly subscription to a major virtual provider. That said, there is still a significant cost associated with buying, maintaining and upgrading your hardware test suite, and budgets must be planned with this in mind.

We’ve taken stock of our currently available devices and organised them as below, noting screen resolution, operating system and manufacturer/brand. We then went about getting our devices in three ways:

  1. Rounding up all agency-owned devices that were available for use
  2. Buying required devices
  3. Asking colleagues to loan their personal devices

In addition to the hardware, we also accrued a set of software tools to help in the QA process. The following were used for initial front-end testing and error submission:

  1. Opera Mobile & Mini emulator
  2. Android emulator
  3. iOS emulator
  4. W3C mobileOK Checker

Again, the majority of testing was done on hardware, but these tools were excellent for taking screenshots and remotely demonstrating issues and errors encountered during development. You should not rely solely on emulation and software to ensure quality, or you may be surprised when you start viewing the site on real devices.

Tip 4. Testing and test preparations

Testing on mobile devices is a similar process to testing on desktop, but there are a few differences to consider:

  1. Orientation – do both orientations display correctly? Do menus, zoom levels or other variables change upon orientation change?
  2. Touch – Are buttons, labels, fields and checkboxes accessible? Do animated or visual elements react in an appropriate, useable manner?
  3. Error reporting – Do you have a method for getting a screenshot of an error on your device? If not, can you reproduce it with a software emulator?
  4. Regressions – Look out for regressions across your different screen widths. Changes to mobile-resolution breakpoints can and will affect the way elements appear at higher-resolution breakpoints. Encourage developers to check where they can; it will save these issues going into the queue.
  5. Budget – Have you budgeted and quoted accordingly? This doesn’t just apply to the extra time required to design and build a responsive site – QA time should not be skimped on!
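For the regression point above, it helps to know exactly which layout breakpoint a given viewport width activates, so you can re-test widths just either side of each boundary after a change. A minimal sketch, assuming three hypothetical min-width breakpoints (the names and pixel values are illustrative, not a recommendation):

```python
# Hypothetical min-width breakpoints in CSS pixels -- adjust to your own design.
BREAKPOINTS = {"mobile": 0, "tablet": 600, "desktop": 1024}

def active_breakpoint(viewport_width: int) -> str:
    """Return the breakpoint with the largest min-width that the viewport satisfies."""
    best = "mobile"
    for name, min_width in BREAKPOINTS.items():
        if viewport_width >= min_width and min_width >= BREAKPOINTS[best]:
            best = name
    return best

# After changing a breakpoint, re-test widths either side of each boundary,
# plus representative widths from your device library.
for width in [320, 599, 600, 1023, 1024, 1280]:
    print(width, "->", active_breakpoint(width))
```

Checking 599 and 600 (or 1023 and 1024) side by side is a quick way to catch elements that shift unexpectedly when a boundary moves.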

In Summary

  • Testing responsive designs is an incredibly time-intensive task that, if carried out in-house, can be a costly process, both in terms of manpower and investment in testing equipment.
  • With the rapid rise in mobile and tablet adoption, investing in Quality Assurance is vital to ensuring a high standard of end user experience, protecting both your initial investment and brand strength.
  • Be sure to budget appropriately for the extra workload. Responsive designs can add another 50% to development time and 20% to testing time, especially if you aim for meaningful coverage of the most popular devices.
  • Collaborate with different teams and work together to meet compromises head-on; sometimes it’s impossible to know how a feature will work until you try it on the device!
