A Grafana dashboard showing a "client errors" and a "server errors" graph for the Swift service at the time of the incident
All times are in UTC.
A Grafana dashboard showing the memory usage of the Swift instances at the time of the incident
14:32 (INCIDENT BEGINS)
14:32 jinxer-wm: (ProbeDown) firing: Service thumbor:8800 has failed probes (http_thumbor_ip4) #page - https://wikitech.wikimedia.org/wiki/Runbook#thumbor:8800 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown
14:32 icinga-wm: PROBLEM - Swift https backend on ms-fe1010 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Swift
Were the people responding to this incident sufficiently different than the previous five incidents?
no
Were the people who responded prepared enough to respond effectively?
no
Regarding preparedness: as we just discussed, we don't understand what happened, and the documentation is a decade old
Were fewer than five people paged?
no
Were pages routed to the correct sub-team(s)?
no
Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.
no
Process
Was the incident status section actively updated during the incident?
yes
Was the public status page updated?
yes
Jaime was neither one of the on-callers nor the IC, but he was the first to suggest updating the status page, quite a long time into the outage
Checking who can access the file
Is there a phabricator task for the incident?
yes
https://phabricator.wikimedia.org/T322424
Are the documented action items assigned?
no
The incident is very recent
Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?
no
Tooling
To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.
yes
We don't know what's causing the issue, so there was no way to have a task for it
Were the people responding able to communicate effectively during the incident with the existing tooling?
yes
Did existing monitoring notify the initial responders?
yes
Were the engineering tools that were to be used during the incident available and in service?
no
We didn't have any Cumin cookbooks for restarting the Swift service, so the engineers had to figure out the right commands during the incident
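For illustration only, a minimal sketch of what such a cookbook could look like, using the Spicerack cookbook module interface (argument_parser() / run()) with Cumin host selection. The host query example, the swift-proxy.service unit name, and the batch settings are assumptions made for this sketch, not the commands actually run during the incident.

"""Roll-restart the Swift proxy service on the frontend hosts, one at a time."""
import argparse

__title__ = "Roll-restart swift-proxy on the Swift frontends"


def argument_parser():
    """Return the argument parser for this cookbook."""
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        "query",
        help="Cumin query selecting the Swift frontends, e.g. 'ms-fe1*.eqiad.wmnet' (example only)",
    )
    return parser


def run(args, spicerack):
    """Restart swift-proxy on each matched host, pausing between hosts."""
    frontends = spicerack.remote().query(args.query)
    # Restart one host at a time with a pause so the rest of the proxy pool keeps serving traffic.
    frontends.run_sync("systemctl restart swift-proxy.service", batch_size=1, batch_sleep=30.0)

Having something along these lines in the cookbooks repository would let responders run a single, reviewed cookbook invocation instead of working out restart commands ad hoc mid-incident.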
Were the steps taken to mitigate guided by an existing runbook?