Incidents/2022-11-04 Swift issues

From Wikitech

document status: in-review


Incident metadata (see Incident Scorecard)
Incident ID: 2022-11-04 Swift issues
Start: 2022-11-04 14:32
End: 2022-11-04 15:21
Task: https://phabricator.wikimedia.org/T322424
People paged: 2
Responder count: 10
Coordinators: denisse
Affected metrics/SLOs: No relevant SLOs exist
Impact: Swift has reduced service availability, affecting Commons/multimedia

For 49 minutes the Swift/MediaWiki file backend returned errors for both reads and new uploads (the exact error count and percentage have not yet been quantified).
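Once the proxy request counts are pulled from the Grafana dashboards, the missing error percentage is a simple ratio. A minimal sketch (the counts below are placeholders for illustration, not actual incident data):

```python
def error_rate(errors: int, total: int) -> float:
    """Return the percentage of requests that failed."""
    if total == 0:
        return 0.0
    return 100.0 * errors / total

# Placeholder numbers only -- substitute real counts from the
# Swift proxy dashboards for the 14:32-15:21 UTC window.
print(round(error_rate(12_000, 480_000), 2))  # 2.5
```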


A Grafana dashboard showing a "client errors" and a "server errors" graph for the Swift service at the time of the incident

All times are in UTC.

A Grafana dashboard showing the memory usage of the Swift instances at the time of the incident


The issue was detected automatically: the on-call engineers received a page from Splunk On-Call.

Alerts that fired during the incident:

The alerts that fired helped the engineers resolve the incident.


What went well?

  • Automated monitoring detected the incident
  • Several engineers helped debug the issue

What went poorly?

  • Our documentation for Swift is outdated and needs to be refreshed.

Where did we get lucky?

  • An expert in the Swift service was present
  • We had unused hardware lying around

Links to relevant documentation


Actionables

  • Investigate why the alerts escalated to batphone even though the engineers on call had already ACK'd the initial alert.
  • Add runbooks and documentation on how to troubleshoot these issues.


Incident Engagement Scorecard
Question Answer


People Were the people responding to this incident sufficiently different from the previous five incidents? no
Were the people who responded prepared enough to respond effectively? no There was no preparedness; we discussed that we don't understand what happened and that the documentation is a decade old
Were fewer than five people paged? no
Were pages routed to the correct sub-team(s)? no
Were pages routed to online (business hours) engineers?  Answer “no” if engineers were paged after business hours. no
Process Was the incident status section actively updated during the incident? yes
Was the public status page updated? yes Jaime was neither one of the on-callers nor the IC, but he was the first to suggest updating the status page, quite a long time into the outage


Is there a phabricator task for the incident? yes https://phabricator.wikimedia.org/T322424
Are the documented action items assigned? no The incident is very recent, so action items have not yet been assigned
Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? no
Tooling To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. yes We don't know what caused the issue, so there was no way to have a task for it
Were the people responding able to communicate effectively during the incident with the existing tooling? yes
Did existing monitoring notify the initial responders? yes
Were the engineering tools that were to be used during the incident, available and in service? no We didn't have any Cumin cookbooks for restarting the Swift service, so the engineers had to figure out the right commands during the incident
Were the steps taken to mitigate guided by an existing runbook? no
Total score (count of all “yes” answers above) 6