
The state of UI testing at Mixpanel

source link: https://www.tuicool.com/articles/hit/YfEnqaY

End-to-end tests

At Mixpanel, we've been writing UI tests for a long time. However, they haven't always been easy to set up, write, and debug. When we first began testing the UI, we wrote tests in Python using the Selenium framework. In this setup, the Python tests interact with the browser through the API provided by Selenium. These Selenium commands are then sent to browser-specific drivers for controlling different browsers. These "end-to-end" tests required setting up a web server, database, and various supporting backend services, as well as populating these services with the data needed for the tests. These tests have the benefit of not just testing the UI, but also testing the integration between the backend services needed to render the UI. The intent was for these tests to mimic the experience of an end user visiting Mixpanel's production website.

Figure 1. The setup of Mixpanel's end-to-end tests.

However, these end-to-end tests also have a number of downsides:

  • Front-end developers have to learn the Selenium API, which is quite different from other tools used by front-end developers.
  • This setup introduced latency in several places: 1) latency from when a Selenium command is sent by the test to when it’s executed in the browser, 2) latency of network requests made by the browser to the web server. Having to take these variable delays into account made it harder to write tests that weren’t flaky.
  • There was overhead setting up all the backend services and populating them with the fixture data needed for each test.
  • Tests became harder to maintain/debug, since issues were not limited to just the front-end, and could be from any of the backend services in the stack.

End-to-end tests are currently used to test Mixpanel’s older reports. The components within these older reports were built with Backbone. Many of these components have dependencies on global state and intertwined dependencies with other components. This was partially due to lack of discipline but also because these components were written before JavaScript had the good module and bundler tooling (e.g. Webpack) that it currently has. These entangled dependencies made it hard to test individual components in isolation. This is one of the reasons end-to-end tests were used to test these reports – they required a web server to serve a production-like version of the website with all the necessary dependencies for the components being tested.

WCT tests

In the previous section, we saw how Mixpanel used to write UI tests, and some of the drawbacks of that approach. However, recent front-end developments at Mixpanel have enabled a different approach to UI testing that solves the above-mentioned problems.

In the last 1-2 years, we have started using Web Components as the building blocks for Mixpanel’s newer reports. These reports have a top-level “application” custom element which is composed of other custom elements, all the way down to custom elements representing basic components like buttons and tooltips.

When we started using custom elements, we focused on creating components with well-defined attribute-based interfaces to pass information into the component, and event propagation for the component to communicate with the outside world. This contrasts with the entangled dependencies that exist in Mixpanel's older Backbone components. Creating modular custom elements has now made it possible to write more isolated/modular tests for individual custom elements like buttons and tooltips, while also being able to write higher-level tests for an entire report composed of many custom elements.

These new-style tests are written using the web-component-tester (WCT) browser testing framework, which came out of the Polymer project. Hence, we refer to them as WCT tests. While the end-to-end tests exercise the entire stack, WCT tests are strictly front-end-only UI tests. WCT tests are written in JavaScript, which runs on the same web page as the components they're testing.

Figure 2. The WCT test running environment. The test code is being run in an iframe on the left. Information about success/failure of individual tests is output on the right side.

WCT tests address all the downsides of end-to-end tests mentioned earlier:

  • When writing WCT tests, developers can use the JavaScript DOM APIs and other JavaScript testing libraries like Sinon instead of needing to learn a new set of APIs (Selenium).
  • WCT tests run faster and are less prone to race conditions since the test code runs directly on the web page, whereas the end-to-end tests have a layer of separation between test code and the web page.
  • WCT tests mock out requests to the server and thus have no dependency on a web server, making them much easier to set up, write, and maintain compared to end-to-end tests. (See the Mocking server responses section below for more details.)

Contrast the simplicity of the setup for these WCT tests shown in Figure 3 below with the setup for end-to-end tests from Figure 1.

Figure 3. The setup of WCT tests.

Guidelines when writing WCT tests

We use the WCT framework as an environment for running UI tests, as seen in Figure 2 earlier. However, the framework doesn't enforce how tests should be written or structured, so we've come up with guidelines of our own, which are discussed below.

Mocking server responses

As described earlier, Mixpanel’s end-to-end UI tests required setting up backend services. In contrast, WCT browser tests are front-end-only. Any server requests are stubbed to return mock responses. Our front-end code uses fetch to make network requests. At the beginning of each test, we use the Sinon mocking library to create a mock server that responds to fetch requests that will be made by the test.
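
The shape of such a mock can be sketched with a hand-rolled stub. This is illustrative only, not Mixpanel's actual Sinon-based setup: the helper name, routes, and payload below are hypothetical.

```javascript
// Hypothetical sketch of fetch mocking: route requests by URL to canned
// JSON responses, and fail loudly on any request the test didn't expect.
function createMockFetch(routes) {
  return async function mockFetch(url) {
    if (!(url in routes)) {
      throw new Error(`Unexpected request: ${url}`);
    }
    return {
      ok: true,
      status: 200,
      json: async () => routes[url],
    };
  };
}

// At the beginning of a test, replace the global fetch with the mock.
globalThis.fetch = createMockFetch({
  '/api/segmentation': {series: [1, 2, 3]}, // illustrative endpoint/payload
});
```

Failing on unexpected requests is useful in tests: it immediately surfaces any code path that tries to reach a real server.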

DOM helpers

Interacting with the DOM is a necessity for browser tests. To keep our code DRY, we created a small library of utilities that all tests should use when they need to interact with the DOM. Here are some of the utilities we have:

  • Helpers to wait during a test. For example, nextAnimationFrame is an async function that waits until the next requestAnimationFrame. retryable and condition wait until some condition is met. (They're described in more detail in a later section.)
  • Helpers for interacting with DOM elements. For example, clickElement clicks an element, while sendInput sends text to an input element.
  • Helpers for querying elements in the Shadow DOM, since our custom elements make use of the Shadow DOM. For example, queryShadowSelectors queries for the first matching element in the Shadow DOM, while queryShadowSelectorsAll queries for all matching elements (similar to querySelectorAll) in the Shadow DOM.
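
A minimal sketch of the shadow-querying helper could look like the following. This assumes queryShadowSelectors descends one shadow root per selector in the list; the real helper may handle additional cases.

```javascript
// Sketch of queryShadowSelectors: walk down a list of selectors, descending
// into each matched element's shadow root (falling back to the element
// itself when it has no shadow root).
function queryShadowSelectors(el, selectors) {
  let current = el;
  for (const selector of selectors) {
    if (!current) {
      return null; // an earlier selector found no match
    }
    const root = current.shadowRoot || current;
    current = root.querySelector(selector);
  }
  return current;
}
```

With this shape, a call like queryShadowSelectors(el, [`.pika-prev`]) queries inside el's shadow root, while a longer selector list descends through nested shadow roots one level per selector.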

Element wrappers

When writing browser tests, a large portion of the test code will be for performing actions on components and querying the state of the component after these actions. Often, multiple tests perform similar interactions with the same component. To keep the test code DRY, we created the concept of “element wrappers”.

An element wrapper is a helper class that wraps a DOM element. It provides the methods mentioned above that test code needs to perform actions on the element and query its DOM state.

Besides keeping the test code DRY, another benefit of element wrappers is that they are modular. They allow you to group all the possible interactions with a component in a single place. Similar to how custom elements can be composed of other custom elements, element wrappers can mirror this composability by providing helper methods that return element wrappers for child elements. These child element wrappers can then be used to interact with these child elements.
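
This composability can be sketched as follows. ElementWrapper, Button, and Tooltip here are hypothetical stand-ins; real wrappers would query the (shadow) DOM for their child elements rather than read a plain property.

```javascript
// Sketch of element-wrapper composability.
class ElementWrapper {
  constructor(el) {
    this.el = el; // the wrapped element
  }
}

class Tooltip extends ElementWrapper {
  getText() {
    return this.el.textContent;
  }
}

class Button extends ElementWrapper {
  // A parent wrapper hands back wrappers for its children, mirroring how
  // custom elements compose.
  getTooltip() {
    const tooltipEl = this.el.tooltipEl; // real code would query the shadow DOM
    return tooltipEl ? new Tooltip(tooltipEl) : null;
  }
}
```

Test code then never touches the tooltip's DOM directly; it asks the Button wrapper for a Tooltip wrapper and interacts through that.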

An example of an element wrapper is the Calendar element wrapper, which wraps the <mp-calendar> custom element used for picking dates from a calendar.

Figure 4. Screenshot of the <mp-calendar> custom element.

Below is the implementation of the Calendar element wrapper.

export default class Calendar extends ElementWrapper {
  getPreviousMonthButtonEl() {
    return queryShadowSelectors(this.el, [`.pika-prev`]);
  }
 
  getNextMonthButtonEl() {
    return queryShadowSelectors(this.el, [`.pika-next`]);
  }
 
  isPreviousMonthButtonEnabled() {
    return !this.getPreviousMonthButtonEl().classList.contains(`is-disabled`);
  }
 
  isNextMonthButtonEnabled() {
    return !this.getNextMonthButtonEl().classList.contains(`is-disabled`);
  }
 
  async clickPreviousMonthButton() {
    await mousedownElement(await condition(() => this.getPreviousMonthButtonEl()));
  }
 
  async clickNextMonthButton() {
    await mousedownElement(await condition(() => this.getNextMonthButtonEl()));
  }
 
  getDateEl({year, month, day}) {
    return queryShadowSelectors(
      this.el,
      [`.pika-day[data-pika-year="${year}"][data-pika-month="${month}"][data-pika-day="${day}"]`]
    );
  }
 
  async clickDate({year, month, day}) {
    await mousedownElement(await condition(() => this.getDateEl({year, month, day})));
  }
}

The Calendar element wrapper provides methods like clickDate and clickNextMonthButton for performing actions on the <mp-calendar> custom element. It also provides query methods like isNextMonthButtonEnabled for querying the DOM state of the <mp-calendar> custom element.

Waiting without sleeping

Within our WCT tests, it’s often necessary to write asynchronous code that waits for some condition before continuing the test:

  • The Panel library we use for creating custom elements batches DOM updates to the next requestAnimationFrame  by default for performance reasons. This means any time we perform an action on an element (e.g. clicking a button), the update to the DOM associated with the change happens asynchronously. Since a large portion of browser testing is triggering actions on the web page, needing to wait for the DOM to update is a common occurrence in our tests.
  • fetch requests (even though they're mocked) are asynchronous.
  • Animations will delay a component from reaching its final state.

To deal with the abundance of asynchronous code in our WCT tests, we have opted to use async/await syntax introduced in ES2017. This allows the test code to be more readable by removing the excessive nesting associated with callbacks and (to a lesser extent) Promises.

An anti-pattern when you need to wait within a test is to sleep. Sleeping makes the test brittle and slow, because in most cases you end up sleeping longer than needed. Instead, the test should wait for some explicit condition to be met before continuing execution. In this vein, we created two helper functions for this use case: retryable and condition. Both take a function as input and repeatedly execute it until some condition is met or a predefined timeout elapses. retryable keeps executing the function until it doesn't throw an exception; condition keeps executing the function until it returns a truthy value.
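
These two helpers can be sketched as follows. The timeout and polling interval defaults are illustrative; the real implementations may differ.

```javascript
// condition(fn): re-run fn until it returns a truthy value, then return it.
// Throws if the timeout elapses first.
async function condition(fn, {timeoutMs = 2000, intervalMs = 50} = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = await fn();
    if (result) {
      return result;
    }
    if (Date.now() > deadline) {
      throw new Error(`condition timed out`);
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}

// retryable(fn): re-run fn until it stops throwing, then return its value.
// Re-throws the last error if the timeout elapses first.
async function retryable(fn, {timeoutMs = 2000, intervalMs = 50} = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      if (Date.now() > deadline) {
        throw err;
      }
      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
  }
}
```

Because expect() throws on failure, wrapping an assertion in retryable turns it into a "wait until this becomes true" check, as in the example test below.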

Below is a simplified version of a WCT test in our codebase that follows these guidelines. Comments have been added for explanation purposes.

it(`updates the label when picking date from the calendar`, async function() {
  // Setting up the report.
  const app = await App.setup({
    mockDate: new Date(`2018-02-15T10:00:00-08:00`),
  });
  
  // Open the <mp-date-range-picker>, a component for picking relative
  // (e.g. last 96 hours) or absolute (e.g. Feb 10 - 20) date ranges.
  // `dateRangeType` is `on`, meaning we're picking a single absolute date
  // (e.g. on Feb 10)
  const {screen, picker} = await app.openDateRangePicker({dateRangeType: `on`});
 
  // Wait until the <mp-calendar> is rendered to the DOM. This will keep
  // retrying until `picker.getCalendar()` returns a truthy value
  // (i.e. the element wrapper for <mp-calendar>)
  const calendar = await condition(() => picker.getCalendar());
 
  // Check the button for confirming the date selection is initially disabled.
  // Retry until the expect() passes. When the expect() fails, it'll throw an
  // Error, which will cause the retryable() block to re-run.
  await retryable(() => expect(screen.isButtonBarEnabled()).to.equal(false));
 
  // Pick a date from the <mp-calendar>.
  await calendar.clickDate({year: 2018, month: 1, day: 10});
 
  // Check the confirmation button is now enabled.
  await retryable(() => expect(screen.isButtonBarEnabled()).to.equal(true));
 
  // Click the confirmation button to select the date range.
  await screen.clickButtonBar();
 
  // Check that confirming the date updates a label in the report with the
  // selected date.
  await retryable(() => {
    expect(app.getTimeClause().getLabel()).to.eql(`Feb 10, 2018`);
  });
});

UI testing within CI

The end-to-end and WCT tests run on every pull request. They are also run regularly on master to catch any bad code that slips through the cracks. WCT tests run selectively depending on the code change: if only backend changes are made, no WCT tests run, and if front-end changes affect a single report, only the WCT tests for that report run. The end-to-end tests, in contrast, run for every pull request, since virtually any code change (front-end or backend) could impact them.
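
The per-report selection logic can be sketched as a mapping from source directories to test suites. The function name, directory layout, and suite paths below are hypothetical, not Mixpanel's actual CI code.

```javascript
// Hypothetical sketch of selecting which WCT suites to run in CI based on
// the files changed in a pull request.
function selectWctSuites(changedFiles, reportSuites) {
  const selected = new Set();
  for (const file of changedFiles) {
    for (const [dir, suite] of Object.entries(reportSuites)) {
      if (file.startsWith(`${dir}/`)) {
        selected.add(suite);
      }
    }
  }
  return [...selected];
}

// Illustrative mapping from report source directories to WCT suites.
const REPORT_SUITES = {
  'src/reports/insights': 'test/browser/insights.html',
  'src/reports/funnels': 'test/browser/funnels.html',
};
```

A backend-only change matches no report directory and so selects no WCT suites; a change under a single report's directory selects only that report's suite.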

The end-to-end tests are run in VMs that are set up with all the backend services needed to run them. The tests are run in Chrome on this VM using Xvfb. In contrast, the WCT tests run on Sauce Labs, a platform for running automated browser tests that the WCT framework supports out of the box. Sauce Labs itself allows configuring a list of browser environments to test on. Below is the wct.conf.js (WCT framework configuration file) we use to run our tests on Sauce Labs.

module.exports = {
  suites: [
    `test/browser/index.html`,
  ],
  verbose: false,
  plugins: {
    sauce: {
      disabled: true,
      extendedDebugging: true,
      tunnelOptions: {
        connectRetries: 5,
      },
      browsers: [
        {
          browserName: `chrome`,
          version: `latest`,
          platform: `OS X 10.13`,
        },
        {
          browserName: `firefox`,
          version: `latest`,
          platform: `OS X 10.13`,
        },
        {
          browserName: `safari`,
          version: `latest`,
          platform: `OS X 10.13`,
        },
        {
          browserName: `microsoftedge`,
          version: `latest`,
          platform: `Windows 10`,
        },
      ],
    },
  },
};

As you can see, we run our WCT tests on the latest version of Chrome, Firefox, Safari, and Edge.

Closing remarks

In this post, we looked at the different types of tests we write to test the UI at Mixpanel. In the beginning, we wrote only end-to-end tests, which exercise the entire stack. Despite being ill-suited for the purpose, end-to-end tests were used to test the UI for a long time because they were all we had. However, thanks to better modularization of our front-end code, we are now able to write front-end-only WCT tests for this purpose. Nonetheless, the introduction of WCT tests doesn't obviate the need for end-to-end tests, which still serve the important function of verifying high-level behavior across the stack.

Since WCT tests are easier and less time-consuming to write compared to end-to-end tests, developers have been much more receptive to writing them. The difference in adoption between the two test types can be seen by taking a look at our codebase: we currently have almost 7x as many WCT tests as end-to-end tests, despite the fact that we've only been using WCT for a couple of years. Reducing the friction in writing and maintaining UI tests has therefore increased our regression coverage significantly, making for both happier users and happier front-end engineers.

