Incidents/2017-05-03 x1 outage

Summary

x1 database outage after the datacenter switchover back from codfw to eqiad, due to too many connections:

  • Too many connections to the database were being held open
  • Connections were being held open waiting for table-level locks (see the diagnostic sketch below)
  • The table-level locks were apparently caused by the query killer killing long-running queries
  • What was the cause of these long-running queries?
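
For reference, this is roughly how such stuck connections could be spotted: a minimal Python sketch, assuming PyMySQL and an account that can read the processlist. The hostname and credentials are placeholders, not the values used during this incident.

```python
#!/usr/bin/env python3
# Minimal sketch: list DB connections stuck waiting on a table-level lock.
# Host and credentials below are placeholders, not real WMF values.
import pymysql

conn = pymysql.connect(
    host="x1-master.example.invalid",  # placeholder hostname
    user="readonly",
    password="secret",
    database="information_schema",
)
try:
    with conn.cursor() as cur:
        # information_schema.PROCESSLIST mirrors SHOW FULL PROCESSLIST.
        cur.execute(
            """
            SELECT id, user, db, time AS seconds, LEFT(info, 120) AS query
            FROM PROCESSLIST
            WHERE state LIKE 'Waiting for table level lock%'
            ORDER BY time DESC
            """
        )
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```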

It should be noted that during the eqiad->codfw switch Cognate was not deployed and the Cognate databases were not populated.

We observed a lot of secondary effects, and a lot of things that could possibly be connected, but we couldn't figure out the root cause that led to this chain of events. The changes made to the Cognate code should not be seen as resolving the underlying issue.

Possible causes

  1. During the initial incident it appeared that Googlebot may have been making large numbers of requests to the Wiktionaries. We don't know exactly how many of these requests triggered DB queries, but the list of interwiki links provided by Cognate is cached in the parser cache, so unless these requests all caused re-parses this is unlikely to be the cause of the issue.
  2. One idea is that a lot of write operations were backed up in the job queue and all hit the DB at once when the JobQueue was restarted. However, the Cognate write rate is so low that we couldn't see how more than a few hundred jobs could have accumulated. It is probably worth checking for similar JobQueue-related issues from other extensions hosted on x1, as the timeline fits rather well with the JobQueue being switched back on (see the sketch after this list).
  3. Increased disk utilization and disk write latency are apparently normal on the DB servers after the switchover. This increased write latency is probably the cause of the extended write query times; queries reaching 60+ seconds still seems far-fetched, but it is definitely a contributing factor. The servers in each datacenter appear to have different setups. Below are some differences on the x1 master for each switch regarding disk / query performance and latency:
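
As a rough illustration of the JobQueue check suggested in point 2, the sketch below polls the approximate per-wiki job queue size via the public MediaWiki API (meta=siteinfo, siprop=statistics). It can only show the current queue depth, not reconstruct the backlog at the time of the incident, and the wiki list is illustrative.

```python
#!/usr/bin/env python3
# Minimal sketch: poll the approximate job queue size of a few wiktionaries via
# the public MediaWiki API. Shows current depth only.
import requests

WIKIS = ["en.wiktionary.org", "fr.wiktionary.org", "de.wiktionary.org"]  # illustrative subset

for wiki in WIKIS:
    resp = requests.get(
        f"https://{wiki}/w/api.php",
        params={
            "action": "query",
            "meta": "siteinfo",
            "siprop": "statistics",
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    stats = resp.json()["query"]["statistics"]
    # "jobs" is MediaWiki's approximate count of queued jobs for the wiki.
    print(f"{wiki}: ~{stats['jobs']} queued jobs")
```

Running something like this in a loop alongside the x1 connection graphs during the next switchover would show whether a job backlog is draining into the cluster when the jobrunners come back.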

Timeline

This is a step-by-step outline of what happened to cause the incident and how it was remedied.

  • 13:58 - Switch back actually starts
  • ... <More SAL logs relating to switchover>
  • 14:15 END: (PASS) - Wipe and warmup caches in eqiad - t04_cache_wipe (switchdc/oblivian@neodymium)
  • 14:15 START: - Switch MediaWiki master datacenter and read-write discovery records from codfw to eqiad - t05_switch_datacenter (switchdc/oblivian@neodymium)
  • ... <More SAL logs relating to switchover>
  • 14:21 START: - Restore the TTL of all the MediaWiki read-write discovery records and cleanup confd stale files - t09_restore_ttl (switchdc/oblivian@neodymium)
  • 14:21 END: (PASS) - Restore the TTL of all the MediaWiki read-write discovery records and cleanup confd stale files - t09_restore_ttl (switchdc/oblivian@neodymium)
  • 14:21:41 - First table-level lock appears (grafana)
  • 14:21:59 - First SlowTimer log: logstash (one log), logstash (whole period)
  • 14:22 START: - Start MediaWiki jobrunners, videoscalers and maintenance in eqiad - t09_start_maintenance (switchdc/oblivian@neodymium)
  • 14:24 END: (PASS) - Start MediaWiki jobrunners, videoscalers and maintenance in eqiad - t09_start_maintenance (switchdc/oblivian@neodymium)
  • 16:30 - jynus@naos: Synchronized wmf-config/db-eqiad.php: Fine-tune per-server load to reduce db connection errors (duration: 01m 27s)
  • 16:51:51 - Spike in DB connections starts (grafana)
  • 16:53:42 - Peak DB connections
  • 17:05 - Last table level lock
  • 17:14 - jynus@naos: Synchronized wmf-config/InitialiseSettings.php: Disable cognate- it is causing an outage on x1 (duration: 01m 06s)

Conclusions

  • Adding monitoring within extensions, even when you don't feel that it is needed, is often helpful further down the line. It is probably worth adding such monitoring to any new extension being deployed on the cluster so that in situations like this we have more information (a minimal sketch follows this list).
  • Read-only modes in code are also a good idea, and avoid having to take all functionality offline.
  • The Cognate extension, and x1 in general, should be watched carefully during the next switchover.
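
As a rough illustration of the lightweight monitoring argued for above, the sketch below emits a plain-UDP statsd counter from a write path. The metric name and statsd endpoint are hypothetical, and a MediaWiki extension would normally go through MediaWiki's statsd data factory rather than raw UDP; this only shows the idea.

```python
#!/usr/bin/env python3
# Minimal sketch: emit a statsd counter ("<metric>:<value>|c") over UDP so that
# write rates show up in Graphite/Grafana. Endpoint and metric name are
# hypothetical placeholders.
import socket

STATSD_ADDR = ("statsd.example.invalid", 8125)  # placeholder endpoint


def count(metric: str, value: int = 1) -> None:
    """Fire-and-forget statsd counter increment."""
    payload = f"{metric}:{value}|c".encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, STATSD_ADDR)


# Example: call this from the code path that writes to the shared x1 tables.
count("cognate.writes")
```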

Actions Taken

Actionables

  • DB
    • The query-killing script can cause table-level locks, causing things to back up. Are these locks needed? Are there alternatives? They contributed to the traffic jam here.
    • Do the database setups in each datacenter differ? If so, why? Could this have contributed to the issue?
  • Cognate Extension
    • PHAB - Cognate does some updates synchronously and others via the JobQueue, which may lead to inconsistencies in the DB. Simply writing synchronously all of the time would reduce odd JobQueue interactions for Cognate.
  • Other
    • Mention in the extension-writing docs that read-only modes might be a good idea.
    • Investigate the state of the JobQueue immediately after it was turned back on as part of the data centre switch: were there lots of Echo jobs, or jobs that would touch the x1 cluster?
    • Investigate the cause of the high write latency on DB masters during / after the switchover. Is this something we need to expect during switches?