
Data Platform/Systems/Druid/Load test

From Wikitech

This page documents a load test performed on the Analytics team's Pivot instance and the Druid cluster on 2016-09-29.

Experiment

Three data sets were explored in Pivot (https://pivot.wikimedia.org/): Edit History Test (small, simplewiki only, 2001->2016), Pageviews Daily (medium, all wikis, 2015-06->2016-09) and Pageviews Hourly (big, all wikis, 2016-06->2016-09). For each of them, a grid of tests was performed, combining 2 lengths of time range (short, long) with 3 types of query (totals, time-series, table). Note that totals queries are computationally the simplest, time-series queries have medium complexity, and table queries are the most complex, because each of them has 2 breakdowns/splits (each of them with limit=10).
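The experiment grid described above can be sketched as a simple enumeration (the dataset names and labels come from this page; the tuple representation itself is only illustrative):

```python
from itertools import product

datasets = ["Edit History Test", "Pageviews Daily", "Pageviews Hourly"]
time_ranges = ["short (<20% of data set)", "long (40%-80% of data set)"]
query_types = ["totals", "time-series", "table"]

# Each combination is one square of the experiment grid:
# 3 datasets x 2 time ranges x 3 query types = 18 squares.
grid = list(product(datasets, time_ranges, query_types))
print(len(grid))  # 18
```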

The script test/pivot_url_generator.js in https://gerrit.wikimedia.org/r/analytics/pivot was used to generate 100 URLs for each square of the experiment grid. A sample of those URLs was then requested against https://pivot.wikimedia.org and measurements were taken. Caveat: the requests-per-second measurements are approximations, because of issues with executing POST requests with Siege. These issues can be solved by modifying the URL generator script to generate plywood queries directly (the queries that Pivot executes asynchronously).
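As a rough guide to how the two metrics reported below relate, here is a minimal sketch assuming a fixed number of concurrent clients (the function name and inputs are illustrative; they are not part of the actual test harness):

```python
def summarize(latencies_ms, concurrency):
    """Approximate the two reported metrics from raw response times.

    latencies_ms: per-request response times in milliseconds.
    concurrency:  number of simultaneous clients issuing requests.
    """
    mean_ms = sum(latencies_ms) / len(latencies_ms)
    # With `concurrency` clients each issuing requests back to back,
    # throughput is roughly concurrency divided by the mean latency.
    req_per_s = concurrency * 1000.0 / mean_ms
    return mean_ms, req_per_s

# e.g. latencies around 550 ms at 20 concurrent clients give ~36 req/s
```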

Results

Edit History Test

                                       Totals query              Time-series query         Table query
Short time range (<20% of data set)    559 ms/req | 30+ req/s    536 ms/req | 30+ req/s    960 ms/req | 30+ req/s
Long time range (40%-80% of data set)  546 ms/req | 30+ req/s    853 ms/req | 30+ req/s    869 ms/req | 30+ req/s

Pivot machine: No alterations to CPU or memory consumption.

Druid cluster: Minor CPU peaks (<10% of 32 cores) when executing bursts of requests.

Pageviews Daily

                                       Totals query              Time-series query         Table query
Short time range (<20% of data set)    703 ms/req | 30+ req/s    1,108 ms/req | 30 req/s   6,050 ms/req | 30 req/s
Long time range (40%-80% of data set)  1,033 ms/req | 30+ req/s  1,496 ms/req | 30 req/s   11,874 ms/req | 30 req/s

Pivot machine: No alterations to CPU or memory consumption.

Druid cluster: CPU peaks (10%-20% of 32 cores) when executing bursts of requests.

Pageviews Hourly

                                       Totals query              Time-series query         Table query
Short time range (<20% of data set)    1,076 ms/req | 30 req/s   1,453 ms/req | 30 req/s   24,168 ms/req | 10 req/s
Long time range (40%-80% of data set)  7,087 ms/req | 30 req/s   10,553 ms/req | 20 req/s  25,313 ms/req | 10 req/s

Pivot machine: No alterations to CPU or memory consumption.

Druid cluster: CPU peaks (30% of 32 cores) when executing bursts of requests.

Conclusions

  • Pivot is not a bottleneck in any case.
  • Small and medium datasets (Edit History Test, Pageviews Daily) work fine for WMF internal consumption.
  • Large datasets mostly work, but in some cases, at more than 10 queries per second, Druid may have quite high response times and Pivot may display timeout messages.
  • The most demanding queries for Druid are those that have high-cardinality breakdowns. Such queries on medium or large datasets may break Druid. Time-series breakdowns are OK, because Druid is optimized for them.
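To illustrate the shape of a "table query with two breakdowns", here is a sketch of a Druid native groupBy query with two dimensions and limit=10 (the datasource, dimension names, metric, and interval are hypothetical; in practice the plywood layer generates these queries on Pivot's behalf):

```python
import json

# A groupBy over two dimensions approximates a Pivot table query with
# two splits; high-cardinality dimensions here are what stress Druid.
query = {
    "queryType": "groupBy",
    "dataSource": "pageviews_hourly",       # hypothetical datasource name
    "granularity": "all",
    "dimensions": ["project", "country"],   # the two breakdowns/splits
    "aggregations": [
        {"type": "longSum", "name": "views", "fieldName": "view_count"}
    ],
    "intervals": ["2016-06-01/2016-09-30"],
    "limitSpec": {
        "type": "default",
        "limit": 10,                        # limit=10 per split, as above
        "columns": [{"dimension": "views", "direction": "descending"}],
    },
}
print(json.dumps(query, indent=2))
```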