
UDP-based profiling

From Wikitech

What and Where

  • $wgProfiler = new ProfilerSimpleUDP; (in index.php or Settings)
  • $wgUDPProfilerHost = '10.0.6.30'; (in Settings)
  • running on the host set in $wgUDPProfilerHost (in 2014, tungsten):
    • mwprof git repo
      • listens on udp:3811 for profiling packets and provides XML dumps on tcp:3811
      • controlled by upstart, e.g. restart mwprof.collector
      • deployed via Trebuchet to /srv/deployment/mwprof/mwprof
        This information is outdated.
    • profiler-to-carbon
      • polls the collector and inserts count and timing (in ms) data into Whisper DBs
      • controlled by upstart, e.g. restart mwprof/profiler-to-carbon
      • deployed via Trebuchet to /srv/deployment/mwprof/mwprof (yes, inside the repo for mwprof...)
        This information is outdated.
    • /opt/graphite/bin/carbon-cache.py
      • updates the Whisper DB files for Graphite
  • Graphite-based web interface - uses labs LDAP for auth
  • aggregate report web interface
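The hand-off from the collector to Graphite can be sketched in a few lines. The collector's XML schema isn't documented on this page, so the sketch below assumes the metrics have already been parsed into (path, value) pairs; Carbon's plaintext line protocol (what carbon-cache.py accepts, on tcp:2003 by default) is real, but the function names and metric paths are illustrative.

```python
import socket
import time

# Carbon's plaintext protocol: one "metric.path value unix_timestamp" per line.
CARBON_LINE = "{path} {value} {timestamp}\n"

def to_carbon_lines(metrics, timestamp=None):
    """Render (path, value) pairs as Carbon plaintext protocol lines."""
    ts = int(timestamp if timestamp is not None else time.time())
    return "".join(
        CARBON_LINE.format(path=path, value=value, timestamp=ts)
        for path, value in metrics
    )

def push_to_carbon(metrics, host="localhost", port=2003):
    """Write the rendered lines to carbon-cache over TCP (hypothetical helper)."""
    payload = to_carbon_lines(metrics).encode("ascii")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
```

carbon-cache then persists each line into the corresponding Whisper DB file, which is what the Graphite web interface reads back out.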

Using The Graphite Dashboard

Finding Metrics

  • The left sidebar of the Graphite dashboard provides two drop-down menus: "Metric Type", which provides shortcuts or aliases to certain metrics (hardcoded in the dashboard.conf located in puppet/files/graphite), and "Category". Below them is a hierarchical finder of everything under the chosen category. This is all straightforward, with one caveat: what's shown when Category = * is limited to a single level of the hierarchy - you don't want this! If Metric Type is "Everything", make sure to select a class in Category.

  • The dashboard menu option allows sets of graphs to be saved as a named dashboard. Share provides a direct URL to a saved dashboard, and Finder lists all shared dashboards.

Combining Metrics

  • Just drag graphs on top of each other to combine.

Types of Metrics

  • count - the number of calls made in the last minute. Note that MediaWiki profiles 100% of a few types of requests, but most are sampled at about 1.5%.
  • tavg - average time in ms over everything collected in the sampling window: total time / count
  • tp50 - 50th percentile in ms, calculated from a bucket of 300 samples
  • tp90 - 90th percentile
  • tp99 - 99th percentile
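Roughly, these metric types correspond to the following computations. This is a sketch: the nearest-rank percentile method and the treatment of the 300-sample bucket are assumptions for illustration, not taken from mwprof's source.

```python
import math

def tavg(total_time_ms, count):
    """Average time in ms: total time divided by call count."""
    return total_time_ms / count

def percentile(samples, p):
    """Nearest-rank percentile of a bucket of samples (tp50/tp90/tp99).

    Sort the bucket and take the value at rank ceil(p% of N); mwprof's
    exact percentile method may differ.
    """
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]
```

For example, with a bucket of 300 timing samples, tp99 is the 297th-ranked value, so a single pathological request can dominate it while barely moving tavg.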

Examples

  • When the Job Queue depth alerts but jobs appear to be running, it's been difficult to know whether the insertion rate has spiked or execution has slowed. Looking at the pop rate over the insert rate provides an easy-to-parse picture of health.

  • The 8 Slowest Parser functions, based on time averages. (99th percentiles here are too scary.) This shows how to group by and limit across lots of stats.

  • 8 slowest db write queries, by 99% time
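These dashboard examples map onto Graphite render targets. divideSeries and highestAverage are real Graphite functions, but the metric paths and the render_url helper below are illustrative, not the exact targets used on this install.

```python
from urllib.parse import urlencode

def render_url(base, target, width=800, height=400, frm="-24hours"):
    """Build a Graphite /render URL for a given target expression (sketch)."""
    qs = urlencode({"target": target, "width": width, "height": height, "from": frm})
    return f"{base}/render?{qs}"

# Pop rate over insert rate, as in the Job Queue example
# (metric paths are hypothetical):
pop_over_insert = "divideSeries(MediaWiki.jobqueue.pop.count,MediaWiki.jobqueue.insert.count)"

# 8 slowest parser functions by time average: wildcard across the
# group, then keep only the top 8 by average value.
slowest_parsers = "highestAverage(MediaWiki.Parser.*.tavg,8)"
```

The same highestAverage pattern with a tp99 series gives the "8 slowest db write queries, by 99% time" graph; swapping the function swaps which definition of "slow" the graph shows.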

See also