Performance/Runbook/Webperf-processor services

This is the runbook for deploying and monitoring the webperf-processor services.

Hosts

The Puppet role for these services is role::webperf::processors_and_site.

navtiming

The navtiming service (written in Python) extracts information for the NavigationTiming and SaveTiming schemas from EventLogging via Kafka and submits the resulting metrics to Graphite via Statsd. The EventLogging data comes from a JS plugin for MediaWiki (beacon js source, MediaWiki extension).
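
As a rough illustration of that pipeline (this is not the actual navtiming code; the topic name, broker, event fields, Statsd endpoint, and metric name below are all assumptions), a minimal Kafka-to-Statsd consumer in Python could look like:

  # Minimal sketch of an EventLogging-Kafka-to-Statsd pipeline.
  # NOT the real navtiming code: the topic, broker, field names,
  # Statsd endpoint, and metric prefix are illustrative assumptions.
  import json
  import socket

  from kafka import KafkaConsumer  # kafka-python

  consumer = KafkaConsumer(
      'eventlogging_NavigationTiming',        # assumed topic name
      bootstrap_servers='localhost:9092',     # assumed broker address
      value_deserializer=lambda v: json.loads(v.decode('utf-8')),
  )
  statsd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

  for message in consumer:
      event = message.value.get('event', {})
      load_ms = event.get('loadEventEnd')
      if load_ms is not None:
          # Statsd timing metric format: "<name>:<value>|ms"
          payload = 'frontend.navtiming.loadEventEnd:%d|ms' % load_ms
          statsd.sendto(payload.encode('utf-8'), ('localhost', 8125))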

Monitor navtiming

Application logs for this service are currently not sent to Logstash. To read them:

  • SSH to the host you want to monitor.
  • Run sudo journalctl -u navtiming -f -n100.

Deploy navtiming

This service runs on the webperf*1 hosts (for example webperf1001 when Eqiad is primary).

To update the service on the Beta Cluster:

  1. SSH to deployment-webperf11.deployment-prep.eqiad.wmflabs.
  2. Run sudo journalctl -u navtiming -f -n100 and keep it open during the following steps.
  3. In a new tab, SSH to deployment-deploy01.deployment-prep.eqiad.wmflabs.
  4. Run cd /srv/deployment/performance/navtiming.
  5. Run git pull.
  6. Run scap deploy.
  7. Review the scap output (in this tab) and the journalctl output (on the webperf host) for any errors.

To deploy a change in production:

  1. Before you start, open a terminal window in which you monitor the service on a host in the current primary data center. For example, SSH to webperf1001 (if Eqiad is primary) and run sudo journalctl -u navtiming -f -n100.
  2. In another terminal window, ssh to deployment.eqiad.wmnet and navigate to /srv/deployment/performance/navtiming.
  3. Prepare the working copy:
    • Ensure the working copy is clean, git status.
    • Fetch the latest changes from Gerrit remote, git fetch origin.
    • Review the changes, git log -p HEAD..@{u}.
    • Apply the changes to the working copy, git rebase.
  4. Deploy the changes; this automatically restarts the service afterward.
    • Run scap deploy

coal

Written in Python.

Application logs are kept locally, and can be read via sudo journalctl -u coal.

Reprocessing past periods

Coal data for an already-processed period can be overwritten safely. To backfill a period after an outage, run coal manually on one of the perf hosts (there is no need to stop the existing process), using a different Kafka consumer group, and pass the --start-timestamp option (note that the timestamp is expressed in milliseconds since the Unix epoch). Once you see that the outage gap has been filled, you can safely stop the manual coal process.
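
Because --start-timestamp expects milliseconds (not seconds) since the Unix epoch, it is worth computing the value explicitly. A small Python helper (the outage start time here is just an example):

  # Compute a --start-timestamp value in milliseconds since the Unix epoch.
  from datetime import datetime, timezone

  outage_start = datetime(2020, 1, 15, 6, 30, tzinfo=timezone.utc)  # example time
  start_ms = int(outage_start.timestamp() * 1000)
  print(start_ms)  # 1579069800000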

statsv

The statsv service (written in Python) forwards data from the Kafka stream for /beacon/statsv web requests to Statsd.

Application logs are kept locally, and can be read via sudo journalctl -u statsv.
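
As a rough sketch of what that forwarding involves (the "name=<value><type>" query-string encoding assumed here is for illustration only, not a reading of the statsv source), translating a beacon query string into Statsd lines could look like:

  # Sketch: turn a statsv-style beacon query string into Statsd lines.
  # The "name=<value><type>" encoding is an assumption for illustration.
  import re
  from urllib.parse import parse_qsl

  def to_statsd_lines(query_string):
      lines = []
      for name, value in parse_qsl(query_string):
          match = re.fullmatch(r'(\d+)(ms|c|g)', value)
          if match:  # skip values that don't parse
              lines.append('%s:%s|%s' % (name, match.group(1), match.group(2)))
      return lines

  print(to_statsd_lines('MediaWiki.foo=123ms&MediaWiki.bar=1c'))
  # ['MediaWiki.foo:123|ms', 'MediaWiki.bar:1|c']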

coal-web

Written in Python.

site

This powers the site at https://performance.wikimedia.org/. A Beta Cluster instance runs at https://performance-beta.wmflabs.org/.