Incident documentation/QR201407/group1


Quarterly Review of post-mortems - 2014-07 - Group 1

Questions we want to be able to answer

  • Have all of the issues that came out of the post-mortem been addressed? If not, why not?
  • Are we satisfied with the current state of that part of the infra? Are there further actions to take (upon further reflection)?
  • Anything else?


  • Go through the post-mortems and their respective action items and make sure they have been followed up appropriately.
    • If you have details relevant to the post-mortem in Bugzilla etc., please link to them from the post-mortem.
  • Discuss if there is anything else that we learned from the situation and follow up to better inform future decisions.
  • Notes written up by all, collaboratively, so that others in the organization will learn from these as well.



The post-mortems


Sean, Nuria

  • Status:    on-going - Clarify who is to respond to which EventLogging alerts, and by when.
    • Analytics now owns EL, and can respond to some alerts, but not all. Discussion between Ops and Analytics is still ongoing.
  • Status:    Done - RT #7081 - Move EventLogging database to m2.
  • Status:    Done - Figure out a way to allow joining EventLogging data against enwiki, as this seems to be critical for researchers.
Replication back to db1047 was included in the requirements for moving the EventLogging database to m2.



  • Status:    Unresolved - PrivateSettings.php should be in a repo so we can be sure what's changed.
  • Status:    Done - DB user and password settings should go into PrivateSettings (and not be removed from AdminSettings until everyone relying on that file has converted their jobs).
  • Status:    Declined - Changes made should go out immediately as they do for all configuration files.
  • Status:    Declined - Better coordination



  • Status:    Done - We have set up alarms on EventLogging throughput, so the Analytics team is notified if events per second exceed a certain threshold.
  • Status:    Done - We have set a policy of notifying our users when this type of event happens.
  • Status:    Done - Repopulate data?
    • Analytics talked to the affected teams, and it looks like there is no need to repopulate the lost data.
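The throughput alarm described above can be sketched as a simple threshold check. The function names, threshold value, and message format below are illustrative only — the real alerting runs through the production monitoring stack, not this code:

```python
# Minimal sketch of a throughput alarm: compare the observed
# events-per-second rate against a threshold and emit an alert status.
# The threshold value is hypothetical, not the production setting.

EVENTS_PER_SECOND_THRESHOLD = 600  # illustrative limit

def check_throughput(event_count: int, window_seconds: int) -> str:
    """Return an alert status for the observed EventLogging rate."""
    rate = event_count / window_seconds
    if rate > EVENTS_PER_SECOND_THRESHOLD:
        return f"CRITICAL: {rate:.0f} events/s exceeds {EVENTS_PER_SECOND_THRESHOLD}"
    return f"OK: {rate:.0f} events/s"

print(check_throughput(18000, 60))   # 300 events/s -> OK
print(check_throughput(60000, 60))   # 1000 events/s -> CRITICAL
```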



In the short term:

  • Status:    Done - MAX_USER_CONNECTIONS is 512 for "bloguser" on db1001.
  • Status:    Done - The affected MyISAM tables have been switched to Aria.
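Both short-term fixes map to single MySQL statements. A sketch that builds them as strings — the grant uses the classic GRANT ... WITH MAX_USER_CONNECTIONS form, and the host and table names are placeholders, not the real schema:

```python
# Build the SQL behind the two short-term fixes: cap concurrent
# connections for the "bloguser" account and convert the affected
# MyISAM tables to the crash-safe Aria engine.
# Host and table names below are placeholders.

def cap_user_connections(user: str, host: str, max_conns: int) -> str:
    # Classic MySQL syntax for per-account connection limits.
    return (f"GRANT USAGE ON *.* TO '{user}'@'{host}' "
            f"WITH MAX_USER_CONNECTIONS {max_conns};")

def convert_to_aria(table: str) -> str:
    return f"ALTER TABLE {table} ENGINE=Aria;"

print(cap_user_connections("bloguser", "%", 512))
for table in ("example_posts", "example_comments"):  # placeholder tables
    print(convert_to_aria(table))
```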

In the long term:

  • Status:    on-going Phabricator will use proper InnoDB tables, plus Elasticsearch.
  • Status:    on-going Tendril query sampling will steadily reach more hosts.



  • Status:    on-going - Don't ever deploy stuff on a Friday afternoon.
  • Status:    informational - Cirrus was using SSDs!
    • Why didn't we know? That's not clear; there was evidently a communication breakdown.
    • Short term: we're going to continue using SSDs.
    • Long term: at steady state we don't appear to do much I/O at once; we obviously need more when shards move around. We might be able to avoid needing expensive SSDs, but it might not be worth the time. Not sure.
  • Status:    Done - We're going to have to file a bug upstream to do something about nodes that are *broken* like this one. We've hit this issue before, when we built a node from puppet, didn't set the memory correctly, and it ran out of heap and started running very slowly. The timeouts we added then helped search limp along with the broken node, but it would be nice to automatically take action on a node that is obviously sick.



  • Status:    Declined - Add monitoring for individual job types on single machines.
  • Status:    on-going - Make job-loop aware of the status of launched jobs, or rethink and rewrite it completely.
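As a rough illustration of the second item, here is a job loop that records the exit status of each job it launches rather than firing and forgetting. The commands are stand-ins for real jobs, and this is a sketch of the idea, not the actual job-loop code:

```python
# Sketch of a status-aware job loop: run each command to completion
# and record its exit code, so the loop (and operators) can see
# which jobs failed instead of losing that information.
import subprocess
import sys

def run_jobs(commands):
    """Run each command, returning a list of (command, exit code)."""
    results = []
    for cmd in commands:
        proc = subprocess.run(cmd)
        results.append((cmd, proc.returncode))
    return results

# Stand-in jobs: one that succeeds and one that exits non-zero.
jobs = [
    [sys.executable, "-c", "pass"],
    [sys.executable, "-c", "raise SystemExit(1)"],
]
statuses = run_jobs(jobs)
for cmd, rc in statuses:
    print("ok" if rc == 0 else f"failed (rc={rc})")
```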



  • Status:    Declined - Run puppet nice'd
  • Status:    Declined - A puppet run should not start if a box is under abnormal load.
  • Status:    Declined - Improve how Mediawiki handles a DB host that is flaky rather than completely down
  • Status:    Declined - Migrate parsercache away from being a full RDBMS.



  • Status:    Done - An additional S5 slave has been deployed.
  • Status:    Done - DB traffic sampling has been deployed to S5.
  • Status:    Done - Aude and Hoo deployed



  • Status:    Done - Upgrade swift frontend bandwidth to 10G
  • Status:    Declined - Number of uploads should be a metric in graphite
  • Status:    Done - Increase the number of imagescaler workers once swift bandwidth can withstand it



  • Status:    on-going - Be more careful next time.
  • Status:    Declined - If sync-dir/sync-file/scap don't sync any files, we should log something about it, because that's unusual: warn the operator that the sync they just performed was a no-op.
  • Status:    Done - Add automatic cache warming to CirrusSearch to prevent load spikes when loading cold caches.
  • Status:    Done - Improve CirrusSearch error handling, it's very broken.
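The cache-warming idea above is simple to sketch: replay a set of popular queries against the backend before it takes live traffic, so the first real users don't pay the cold-cache cost. Everything below — the query list and the stub backend — is illustrative; the real implementation lives in CirrusSearch and talks to Elasticsearch:

```python
# Sketch of cache warming: run popular queries once, best-effort,
# so backend caches are hot before live traffic arrives.

POPULAR_QUERIES = ["main page", "barack obama", "wiki"]  # illustrative

def warm_caches(search_fn, queries):
    """Run each query once, ignoring failures; return how many succeeded."""
    warmed = 0
    for q in queries:
        try:
            search_fn(q)
            warmed += 1
        except Exception:
            pass  # warming is best-effort; a failed query is not fatal
    return warmed

# Stub backend standing in for Elasticsearch:
cache = set()
def fake_search(query):
    cache.add(query)  # simulates populating the query cache
    return []

print(warm_caches(fake_search, POPULAR_QUERIES))  # -> 3
```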