Analytics/Systems/Varnishkafka

From Wikitech

Varnishkafka is a daemon that runs on all the Wikimedia frontend caching hosts. Varnish logs HTTP requests in its own format in shared memory, so Varnishkafka uses the Varnish Log API to read that data, formats it according to a user-defined format string, and finally sends the result to a specific Kafka topic (using librdkafka).
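As a rough illustration of how the pieces fit together, a varnishkafka configuration file pairs a format string with Kafka producer settings (librdkafka properties are passed through with a kafka. prefix). The keys and values below are illustrative, not copied from the production configuration:

```
# Illustrative varnishkafka.conf fragment (example values, not production config).

# Emit each request as a JSON object built from the format string below.
format.type = json
format = %{@hostname}l %{@dt}t %{@http_status!num}s %{@uri}U

# Destination Kafka topic.
topic = webrequest_text

# librdkafka producer properties are prefixed with "kafka." (broker host is an example).
kafka.metadata.broker.list = kafka-jumbo1001.eqiad.wmnet:9092
```

The actual format strings and broker lists used in production are managed by Puppet, so check the Puppet repository for the real values.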

Where is the code?

Gerrit: https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/varnish/varnishkafka

Github: https://github.com/wikimedia/varnishkafka

Testing a code change

Use the Docker image as outlined in https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/varnish/varnishkafka/testing. The wise reader might think: "what about unit tests? These are integration tests!". In T147432 the Analytics team explored the possibility of adding unit tests to Varnishkafka, but the estimated amount of time for the code refactoring, testing, releasing, etc. (this software is critical for the team) was not worth the benefit of having some code tests. The team preferred instead to spend the time on creating a flexible and simple integration testing suite.

Varnishkafka instances

You might see references in Puppet to Varnishkafka instances; this is because the same Varnish request data can be sliced and formatted in different ways for different purposes:

- Webrequest

- Statsv

- Eventlogging

We run the Webrequest instance on all the cache segments (text, upload, misc and maps), while the Statsv and Eventlogging instances run only on the text segment.

Monitoring

The varnishkafka Grafana dashboard uses Prometheus metrics, and hence follows the Prometheus convention of grouping metrics by datacenter. This means the dashboard is rendered separately for eqiad, codfw, eqsin, ulsfo and esams: switch between datacenters by selecting a value in the datasource dropdown in the top left corner of the dashboard.
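When you need to go beyond the dashboard panels, you can query the per-datacenter Prometheus instance directly. The metric name below is hypothetical (check the dashboard's panel definitions for the real one); the query shape is the standard Prometheus way of turning a cumulative error counter into a per-host rate:

```
# Hypothetical metric name; look up the actual one in the dashboard's panel queries.
sum by (instance) (rate(varnishkafka_delivery_errors_total[5m]))
```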

Delivery errors

This error means that Varnishkafka failed to deliver messages to Kafka Jumbo, and hence data has been lost. Usually this means that something is not working properly at the caching layer, or that Kafka Jumbo is in trouble for some reason. Start by checking the Kafka Jumbo dashboard, and see if there is any correlation with Varnishkafka metrics. If everything looks good, check with the SRE/Traffic team whether anything is ongoing at the caching layer.
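Varnishkafka can also write periodic statistics to a local JSON file, which is handy for confirming delivery errors directly on a cache host. The sketch below parses one such stats record; the field names ("varnishkafka", "kafka_drerr") are assumptions about the stats layout, so verify them against a real stats file before relying on this:

```python
import json

def delivery_errors(stats_line: str) -> int:
    """Return the cumulative delivery-error count from one JSON stats record.

    Assumes (hypothetically) a top-level "varnishkafka" object holding a
    "kafka_drerr" counter; adjust to match the real stats file layout.
    """
    record = json.loads(stats_line)
    vk = record.get("varnishkafka", {})
    return int(vk.get("kafka_drerr", 0))

# Example record with the assumed layout:
sample = '{"varnishkafka": {"kafka_drerr": 3, "txerrs": 0}}'
print(delivery_errors(sample))  # → 3
```

A nonzero, growing counter here corroborates the delivery-error alert; a flat counter suggests the problem is elsewhere (e.g. the alerting pipeline itself).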

Important note: the delivery alerts are per datacenter, which can help you narrow down the source of the problem. If you get alerts for webrequest text in all datacenters, something global is probably happening to Kafka or Varnish. If you get an alert only for, say, esams, it is probably a problem local to that datacenter.