Which UI tests should we automate?

Our website front-end development team is moving to a new open-source JavaScript platform and wants our end-to-end (E2E) automated tests written in the same language, which means a rewrite from C# into JavaScript.  This gives our test team a chance at a do-over when it comes to test automation.
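To give a concrete picture of the target, here is a minimal sketch of what one rewritten E2E test might look like in JavaScript.  It assumes Playwright as the new test framework, and the URLs, selectors, and credentials are purely illustrative; the real test code will depend on whatever the front-end platform team settles on.

    // e2e/signin.spec.js -- hypothetical example of a rewritten E2E test
    const { test, expect } = require('@playwright/test');

    test('existing customer can sign in', async ({ page }) => {
      // Navigate to the sign-in page (URL and selectors are illustrative only)
      await page.goto('https://www.example.com/signin');
      await page.fill('#email', 'test-user@example.com');
      await page.fill('#password', process.env.TEST_USER_PASSWORD);
      await page.click('button[type="submit"]');

      // A signed-in customer should land on their account dashboard
      await expect(page).toHaveURL(/\/account/);
      await expect(page.locator('h1')).toContainText('My Account');
    });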

We have a lot of existing automated tests, but…

We certainly have a large number of automated tests that cover all layers of the stack, including unit tests, API tests, and UI tests.  These tests number in the thousands.  But taking a step back to analyze the entire suite, a number of questions present themselves:

  • What’s our site coverage?
  • How many tests should we have?
  • If 20% of our tests are flaky or routinely fail, what should we do with them?
  • Are we staffed to analyze all test failures?
  • Are we missing any key customer scenarios?
  • Are we executing tests for functionality that customers rarely use?

Role of automated testing

The test pyramid, described by Mike Cohn in his book Succeeding with Agile: Software Development Using Scrum, consists of three layers: unit tests at the base, service (API) tests in the middle, and UI tests at the top.  It’s hard to disagree with the opinion that UI tests should comprise the smallest portion of automated testing.  They are the slowest to run and the most expensive to write and maintain.  That said, they are great at verifying the user’s experience.

Test Pyramid

When should UI tests be automated?

As a UI test automator, it’s not uncommon for me to encounter people who flatly dismiss any benefit of UI automation.  But on our team, we always validate new code deployments, and any tests that aren’t automated have to be executed manually.  It’s interesting that those opposing UI automation aren’t exactly volunteering to run the manual regressions either.

I understand the reluctance to develop UI automation; careful consideration of the following factors is important to prevent short-lived throwaway work.

  • Functionality should always be understood and validated manually first
  • A test should not be too difficult (expensive) to automate
  • The team should be staffed with skilled automation developers
  • High-priority customer scenarios have been identified
  • The product code is stable, so most tests won’t need constant updating
  • The test team is staffed to analyze test results

Tests that shouldn’t be automated

Some tests, of course, will not be good candidates for automation:

  • Unlikely corner cases
  • Exploratory tests (how would you automate those?)
  • Testing for visual appearance or layout

Planning automated regression tests from a customer perspective

Our existing automated UI test suite provides decent coverage of the primary happy-path workflows on our site.  The tests cover the main operations typical of most storefront sites:  creating accounts, resetting passwords, making purchases, and viewing product information.  At the time we created them, we thought our coverage was pretty good.

What we failed to take into account was real data on how customers actually used our site. Basically, we guessed.

We once had an incident over a holiday where one of our purchase paths was completely broken for three days without setting off any alarms.  After the problem was fixed, we started putting together a strike team to close the test coverage gap.  To our surprise, we learned that the product was rarely purchased on our site, except by the test team during validation!

Lesson learned.  In the future we’ll make sure to gather real customer usage data before designing our test coverage.  We’ve identified several sources for this data.

Who better than the customer telephone support team to know which critical paths our site visitors actually use?  We’ve got to get those relationships started and keep them going.

Another source we should have been consulting is Google Analytics.  The business analysts already have data on which pages get the most traffic, yet our QA team had never taken the time to consider it.  That seems like a major oversight.
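As a rough illustration of how that traffic data could feed test planning, here is a sketch that pulls the most-viewed pages from the Google Analytics Data API (GA4) with its official Node.js client.  The property ID, date range, and the idea of ranking pages for test priority are assumptions for illustration; authentication is left to Google’s standard application default credentials.

    // top-pages.js -- hypothetical sketch: rank pages by views to guide test priorities
    const { BetaAnalyticsDataClient } = require('@google-analytics/data');

    async function topPages() {
      const client = new BetaAnalyticsDataClient();   // uses application default credentials
      const [response] = await client.runReport({
        property: 'properties/123456789',             // placeholder GA4 property ID
        dateRanges: [{ startDate: '90daysAgo', endDate: 'today' }],
        dimensions: [{ name: 'pagePath' }],
        metrics: [{ name: 'screenPageViews' }],
        limit: 20,
      });

      // Pages with the most real customer traffic are candidates for the
      // highest-priority automated regression tests.
      for (const row of response.rows) {
        console.log(row.dimensionValues[0].value, row.metricValues[0].value);
      }
    }

    topPages().catch(console.error);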

Ideal future state of our regression test suite

Our test leads put their heads together and came up with some key ideas to remember going forward:

  • Automated tests should be stored with the site code, where developers can easily find and update them
  • The test case priorities should be driven by:
    • Customer site usage data
    • Customer support team input — what are the biggest customer pain points?
    • Regression tests for high-visibility bugs
    • Workflows and site paths ranked by revenue history
  • The test suite should not be driven by:
    • Tests that are easy to automate
    • Guesses as to the most used parts of the site
    • The development team’s acceptance-test automation alone (it’s incomplete)
  • Tests must be able to run in any environment, including production
  • Test transactions must be identifiable for easy cancellation or reversal (see the sketch below)
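As one way to satisfy the last two points, here is a sketch of an environment-agnostic test that tags its order so the transaction can be found and reversed afterward.  The BASE_URL variable, the order-notes field, and the cleanup endpoint are assumptions; the real mechanism depends on how our storefront exposes test markers.

    // e2e/purchase.spec.js -- hypothetical sketch: run anywhere, tag test transactions
    const { test, expect } = require('@playwright/test');

    // The base URL comes from the environment, so the same test can target
    // dev, staging, or production (e.g. BASE_URL=https://www.example.com).
    const BASE_URL = process.env.BASE_URL || 'https://staging.example.com';

    test('customer can complete a purchase', async ({ page }) => {
      // A unique, recognizable marker makes the test order easy to find and reverse.
      const testOrderTag = `E2E-TEST-${Date.now()}`;

      await page.goto(`${BASE_URL}/products/sample-widget`);   // illustrative path
      await page.click('button#add-to-cart');
      await page.goto(`${BASE_URL}/checkout`);

      // Attach the marker to the order (the notes field is an assumption).
      await page.fill('#order-notes', testOrderTag);
      await page.click('button#place-order');
      await expect(page.locator('.order-confirmation')).toContainText('Thank you');

      // Hypothetical cleanup endpoint that cancels any order carrying the tag,
      // so test transactions never pollute revenue reporting.
      await page.request.post(`${BASE_URL}/api/test-orders/cancel`, {
        data: { tag: testOrderTag },
      });
    });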

Conclusion

Automated UI tests do a great job of verifying that the site is working from a customer’s perspective.  Before you tackle the planning for your test suite, take a step back and gather the usage data you need to get the highest impact from your tests.  You’ll sleep better and have a great story for management.
