Measure Performance

From Wikitech

The Performance Team continuously measures site performance using mediawiki/extensions/NavigationTiming and our WebPageTest instance, but you can also test it yourself by either using the performance APIs our browsers support, or by using tools to collect metrics.

Getting metrics from your browser

Modern browsers have built-in support for performance measurements. We use some of these metrics, collected from real users, to know how Wikipedia performs. You can get those metrics yourself by running JavaScript in your browser console.

One important caveat: most of these metrics are browser-focused rather than user-focused.

Navigation Timing API

The Navigation Timing API is supported in all major browsers. The API gives you information about how the browser processes the page and all its assets.

In version 1 all time metrics are measured as UNIX time, i.e. milliseconds since midnight of January 1, 1970 (UTC). With version 2 all metrics are relative to navigation start.

If your browser supports version 1 you can get the information by doing the following:

var t = window.performance.timing;
console.log({
    navigationStart: 0,
    unloadEventStart: t.unloadEventStart > 0 ? t.unloadEventStart - t.navigationStart : undefined,
    unloadEventEnd: t.unloadEventEnd > 0 ? t.unloadEventEnd - t.navigationStart : undefined,
    redirectStart: t.redirectStart > 0 ? t.redirectStart - t.navigationStart : undefined,
    redirectEnd: t.redirectEnd > 0 ? t.redirectEnd - t.navigationStart : undefined,
    fetchStart: t.fetchStart - t.navigationStart,
    domainLookupStart: t.domainLookupStart - t.navigationStart,
    domainLookupEnd: t.domainLookupEnd - t.navigationStart,
    connectStart: t.connectStart - t.navigationStart,
    connectEnd: t.connectEnd - t.navigationStart,
    secureConnectionStart: t.secureConnectionStart ? t.secureConnectionStart - t.navigationStart : undefined,
    requestStart: t.requestStart - t.navigationStart,
    responseStart: t.responseStart - t.navigationStart,
    responseEnd: t.responseEnd - t.navigationStart,
    domLoading: t.domLoading - t.navigationStart,
    domInteractive: t.domInteractive - t.navigationStart,
    domContentLoadedEventStart: t.domContentLoadedEventStart - t.navigationStart,
    domContentLoadedEventEnd: t.domContentLoadedEventEnd - t.navigationStart,
    domComplete: t.domComplete - t.navigationStart,
    loadEventStart: t.loadEventStart - t.navigationStart,
    loadEventEnd: t.loadEventEnd - t.navigationStart
});

If your browser supports version 2 you can run it like this to get all the entries:

window.performance.getEntriesByType('navigation').forEach(entry => console.log(entry));

Or, if you want deltas (which make it easier to see where the time is spent), you can compute them from the version 1 timings:

var t = window.performance.timing;
console.log({
    domainLookupTime: (t.domainLookupEnd - t.domainLookupStart),
    redirectionTime: (t.fetchStart - t.navigationStart),
    serverConnectionTime: (t.connectEnd - t.connectStart),
    serverResponseTime: (t.responseEnd - t.requestStart),
    pageDownloadTime: (t.responseEnd - t.responseStart),
    domInteractiveTime: (t.domInteractive - t.navigationStart),
    domContentLoadedTime: (t.domContentLoadedEventStart - t.navigationStart),
    pageLoadTime: (t.loadEventStart - t.navigationStart),
    frontEndTime: (t.loadEventStart - t.responseEnd),
    backEndTime: (t.responseStart - t.navigationStart)
});
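With version 2, the same deltas can be computed from the navigation entry itself. A sketch (the fields follow the PerformanceNavigationTiming interface, where all values are already relative to the time origin, so no subtraction of navigationStart is needed):

```javascript
// Compute the same deltas from a Navigation Timing v2 entry.
// For the navigation entry, startTime is 0, so fields like
// domInteractive are already deltas from navigation start.
function navigationDeltas(entry) {
  return {
    domainLookupTime: entry.domainLookupEnd - entry.domainLookupStart,
    redirectionTime: entry.fetchStart - entry.startTime,
    serverConnectionTime: entry.connectEnd - entry.connectStart,
    serverResponseTime: entry.responseEnd - entry.requestStart,
    pageDownloadTime: entry.responseEnd - entry.responseStart,
    domInteractiveTime: entry.domInteractive,
    domContentLoadedTime: entry.domContentLoadedEventStart,
    pageLoadTime: entry.loadEventStart,
    frontEndTime: entry.loadEventStart - entry.responseEnd,
    backEndTime: entry.responseStart
  };
}

// In a browser console you would call it on the real entry:
// navigationDeltas(performance.getEntriesByType('navigation')[0]);
```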

User Timing API

The User Timing API is also supported by all major browsers. It lets developers define custom measurements on the page. We use it today to measure the JavaScript startup time for MediaWiki, and the API will become more important to us if we build a single-page application.

We currently only use marks, not measures. To get the marks we create, you can do the following:


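A minimal console snippet (run it on a page that has set marks via performance.mark, for example a Wikipedia article):

```javascript
// List every User Timing mark on the page with its time
// relative to navigation start.
performance.getEntriesByType('mark').forEach(function (mark) {
  console.log(mark.name + ': ' + mark.startTime.toFixed(1) + ' ms');
});
```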
Resource Timing API

The Resource Timing API is about getting information on all resources downloaded for a page. In version 2 the size of the resource is also included. To get information about resources on different domains, you need to add the Timing-Allow-Origin response header (we do that on Wikimedia domains).
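For example, you can list every resource with its duration and transfer size in the console (transferSize is a version 2 field, so it may be missing in older browsers):

```javascript
// List each downloaded resource with its total duration and
// transfer size; transferSize is 0 for cross-origin resources
// without Timing-Allow-Origin, and undefined in version 1.
performance.getEntriesByType('resource').forEach(function (r) {
  console.log(r.name, Math.round(r.duration) + ' ms',
    (r.transferSize || 0) + ' bytes');
});
```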


Paint Timing API

The Paint Timing API provides information about when the browser starts to paint something on the screen. First paint is interesting because it's more related to the user experience.

So far Chrome and IE have been the only browsers supporting this not-yet-standardized feature, but Firefox support is on its way.

One important thing about first paint: it's from the browser's perspective, so it doesn't take into account the rest of the pipeline that brings pixels to the user's eyes (operating system, motherboard, GPU, screen).

console.log(window.performance.timing.msFirstPaint - window.performance.timing.navigationStart);
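In browsers that ship the standards-track Paint Timing entries, the same information is also available through the generic performance entry API (the entry names are first-paint and first-contentful-paint):

```javascript
// Log the paint timing entries; in browsers without Paint Timing
// support this simply prints nothing.
performance.getEntriesByType('paint').forEach(function (entry) {
  console.log(entry.name + ': ' + entry.startTime.toFixed(1) + ' ms');
});
```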

In Firefox, first paint is called timeToNonBlankPaint and is at the moment behind a preference: you need to set dom.performance.time_to_non_blank_paint.enabled to true in about:config for it to work.

console.log(window.performance.timing.timeToNonBlankPaint - window.performance.timing.navigationStart);

Testing performance on your local machine

There are two ways to test performance on your local machine:

  • Use developer tools to get in-depth information about JavaScript and layout. This is good for investigating bottlenecks or known problems.
  • Collect First Visual Change and Speed Index to make sure that a change you make doesn't regress those values (by testing each change a number of times).

Using developer tools

Using developer tools is perfect for finding in-depth information about JavaScript or CSS performance.


Chrome has a long history of strong developer tools. You should check out the performance tab where you can investigate JavaScript and CSS performance. The best way to learn how to do this well is to watch performance audits done by Google Engineers. Check out Paul Irish investigating CNET, Time and Wikipedia or look at Sam Saccone Profiling Paint Perf or Identifying the JavaScript slowdown.


Firefox 57 (Quantum) is due for release in November this year. Once it is out we will look for good examples of how to use the Firefox devtools.


Windows Performance Toolkit is the way to test on Edge (the Microsoft team recommends against using devtools because it adds too much overhead). You need to invest some time to get into the (powerful) toolkit; let's check if we can find tutorial videos as a starting point. To run Edge on platforms other than Windows, you can use the free virtual machine images Microsoft provides.

Collecting Visual Metrics like SpeedIndex and First Visual Change

One problem with the metrics that you collect from the browser is that they are browser-centric rather than user-centric. Visual metrics instead measure what actually appears on the user's screen.

Speed Index is the average time at which visible parts of the page are displayed. It is expressed in milliseconds and depends on the size of the viewport.
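As a sketch of that definition, Speed Index can be computed from a series of visual-completeness samples, such as those produced by video analysis. The sample format below is an assumption for illustration, not the exact output of any particular tool:

```javascript
// Speed Index is the area above the visual-completeness curve:
// the integral of (1 - completeness) over time. Each sample is
// [time in ms, percent of the final page that is visually complete].
function speedIndex(samples) {
  let index = 0;
  for (let i = 1; i < samples.length; i++) {
    const [prevTime, prevComplete] = samples[i - 1];
    const [time] = samples[i];
    index += (time - prevTime) * (1 - prevComplete / 100);
  }
  return index;
}

// Example: blank until 500 ms, 80% complete until 1000 ms, then done.
console.log(speedIndex([[0, 0], [500, 80], [1000, 100]])); // 600
```

Note how a page that renders most of its content early gets a lower (better) Speed Index than one that stays blank until the load event, even if both finish at the same time.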

To get user-centric metrics, we need to record a video of the screen and analyze the results. To do that we use FFmpeg, ImageMagick and a couple of Python libraries. The easiest way to get it all working is to use a ready-made Docker container with all the software you need.

Testing for changes in Visual Metrics is something you don't need to do for every change, but if you know that the change you are doing can impact performance, you should do it.

Setup with Docker

The Docker container comes ready-made with Firefox and Chrome.

  1. Install Docker
  2. Download the container to your local machine.
    docker pull sitespeedio/browsertime
  3. Run the container against your localhost (on Mac OS X the container needs a special hostname to access localhost on your machine).
    docker run --rm sitespeedio/browsertime <url>

Setting up connectivity

Running tests on your localhost will be super fast and will not match the experience of a real user. To better simulate real user conditions you need to slow down your connection. It's hard to do that inside of Docker, since it depends on the host you run on. It's simpler to change the connectivity outside of Docker.

On Mac OS X you can do that with pfctl and on Linux you can use tc. If you want help to simulate slow networks you can use Throttle.

Throttle has preconfigured connectivity profiles following the same setup as WebPageTest, so it will be easy for you to simulate traffic on 3g/2g connectivity.


To get Throttle up and running you need the latest LTS release of Node.js, and then install it:

npm install sitespeedio/throttle -g
Linux (tc based)

If you test on your localhost on Linux, you need to tell Throttle. You can add delay/latency with the --rtt switch. To add 100 ms latency to all traffic on localhost, run:

throttle --localhost --rtt 100

To remove the latency, stop Throttle:

throttle --stop

You can also use the pre-made connectivity profiles (3g, 3gfast, 3gslow, 2g and cable) like this:

throttle --profile 3g

You will then have the connectivity set as a 3g network.

Mac OS X

On Mac you can specify RTT and up/download speed on all network interfaces (and not only on localhost):

throttle --up 330 --down 780 --rtt 200

And to stop it:

throttle --stop

Full setup

To run this locally, first set the connectivity (this affects every internet access from your local machine, so be sure to turn it off when you are done), then run your tests, and finally remove the connectivity filters.

throttle --up 330 --down 780 --rtt 200
docker run --shm-size=1g --rm -v "$(pwd)":/browsertime sitespeedio/browsertime --video --speedIndex -n 5 -b chrome <url>
throttle --stop

You can choose between running Chrome and Firefox. For Chrome you can also get the trace log by adding --chrome.collectTracingEvents. You can take the trace log and drag and drop it into the Performance tab in Chrome devtools to get the full picture.

The files output by browsertime go into a browsertime-results folder in the working directory by default.

The HAR file

When you test your page, your tool will generate a HAR file that describes how the browser made its requests and how the server responded. Analyzing the HAR file is the best way to understand what's happening.

To get a waterfall view of the HAR file you can use a HAR viewer.

Comparing HAR files

The easiest way to compare HAR files is to layer them on top of each other and toggle the transparency between them. There are online tools that let you do this with uploaded HAR files.