Incidents/2025-09-25 wdqs

document status: in-review

Summary

Incident metadata (see Incident Scorecard)

Incident ID: 2025-09-25 wdqs
Task: T405545
Start: 2025-09-25 04:38:00
End: 2025-09-25 06:31:00
People paged: 1
Responder count: 1
Coordinators: No coordinator
Affected metrics/SLOs: https://wikitech.wikimedia.org/wiki/SLO/WDQS (Uptime (availability) percentage, Excessive lag percentage, WDQS Updater Lag)
Impact: All internal and external users of the WDQS service were affected. Service was unavailable for the duration of the incident.

SREs were paged overnight (CEST) by an LVS alert: no healthy host was found in the WDQS codfw pool. All servers were in a deadlock state, and any request to query.wikidata.org returned an HTTP 500 (https://phabricator.wikimedia.org/T405545).
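
As an illustration of the failure mode (not the actual production probe configuration), a trivial external check like the sketch below would have failed for the whole outage window. query.wikidata.org/sparql is the real public endpoint; the query, headers, and the script itself are only an example.

 # Minimal sketch of an external WDQS health check (illustrative only).
 # During the incident every such request returned HTTP 500.
 import requests
 
 WDQS_ENDPOINT = "https://query.wikidata.org/sparql"
 
 def probe_wdqs(timeout: float = 10.0) -> bool:
     """Return True if the endpoint answers a trivial SPARQL query with HTTP 200."""
     try:
         resp = requests.get(
             WDQS_ENDPOINT,
             params={"query": "ASK {}", "format": "json"},
             headers={"User-Agent": "wdqs-health-probe-example/0.1"},
             timeout=timeout,
         )
     except requests.RequestException:
         return False
     return resp.status_code == 200
 
 if __name__ == "__main__":
     print("healthy" if probe_wdqs() else "unhealthy")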

Timeline

All times in UTC.

2025-09-24 18:43:xx Alerts fire in #wikimedia-data-platform-alerts with failing wdqs probes (https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/custom&var-module=All). Deadlocks can be seen on codfw wdqs-main hosts; however, enough hosts stay operational that no LVS outage is detected (https://grafana.wikimedia.org/d/000000489/wikidata-query-service?orgId=1&refresh=1m&var-cluster_name=wdqs-main&from=2025-09-24T20:46:04.768Z&to=2025-09-24T21:37:15.665Z&timezone=utc&var-graph_type=(9102%7C919%5B35%5D)&viewPanel=panel-7). This earlier episode may be worth investigating, as it was possibly caused by the same entity that was issuing queries during the actual outage covered below.

2025-09-25 04:25:30 First host (wdqs2007) experiences a Blazegraph deadlock per https://grafana.wikimedia.org/d/000000489/wikidata-query-service?orgId=1&refresh=1m&var-cluster_name=wdqs-main&from=2025-09-25T04:24:39.139Z&to=2025-09-25T04:38:17.184Z&timezone=utc&var-graph_type=%289102%7C919%5B35%5D%29&viewPanel=panel-7 (missing datapoints indicate a deadlocked server)
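
As a side note on the detection heuristic: the "missing datapoints" signal could be automated with something like the sketch below. The Prometheus URL and metric selector are placeholders, not the production values; only the shape of the Prometheus query_range API is assumed.

 # Sketch of the "missing datapoints" heuristic: if a host's Blazegraph metrics
 # stop being scraped, treat it as deadlocked. URL and metric are placeholders.
 from datetime import datetime
 import requests
 
 PROMETHEUS = "https://prometheus.example.org/api/v1/query_range"   # placeholder
 METRIC = 'blazegraph_queries_done_total{cluster="wdqs-main"}'      # placeholder
 
 def hosts_with_gaps(start: datetime, end: datetime, step: int = 30, max_gap: int = 120) -> set:
     """Return instances whose metric series has a gap longer than max_gap seconds."""
     resp = requests.get(PROMETHEUS, params={
         "query": METRIC,
         "start": start.timestamp(),
         "end": end.timestamp(),
         "step": step,
     }, timeout=30)
     resp.raise_for_status()
     gapped = set()
     for series in resp.json()["data"]["result"]:
         timestamps = [t for t, _value in series["values"]]
         if any(b - a > max_gap for a, b in zip(timestamps, timestamps[1:])):
             gapped.add(series["metric"].get("instance", "unknown"))
     return gapped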

2025-09-25 04:36:45 FULL OUTAGE BEGINS (all codfw hosts are deadlocked)

2025-09-25 04:38:xx Failed query rates (all nodes) start to spike (https://grafana.wikimedia.org/goto/96HUfmqNg?orgId=1), the Done Rate (Requests per Second) flattens to 0 (https://grafana.wikimedia.org/goto/utTCfm3NR?orgId=1), and the number of pooled servers drops (https://grafana.wikimedia.org/goto/WuGWG73Hg?orgId=1)

2025-09-25 04:45:xx SRE is paged (LVS)

2025-09-25 06:08:xx Guillaume Lederrey (Data Platform SRE, Manager) opens a thread on Slack; troubleshooting shows that all servers are deadlocked simultaneously.

2025-09-25 06:31:xx Guillaume Lederrey restarts all WDQS servers (https://sal.toolforge.org/log/2SWRf5kBffdvpiTr5EmA)

2025-09-25 06:32:30 Blazegraph is able to perform work on the servers again, per https://grafana.wikimedia.org/d/000000489/wikidata-query-service?orgId=1&refresh=1m&var-cluster_name=wdqs-main&from=2025-09-25T06:31:22.121Z&to=2025-09-25T06:41:02.447Z&timezone=utc&var-graph_type=%289102%7C919%5B35%5D%29&viewPanel=panel-7

2025-09-25 06:33:00 (End of outage) Hosts should be responding to queries normally at this point, although they have some update lag (while deadlocked, the graph database does not receive updates)

2025-09-25 06:37:45 Hosts have caught up on lag
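
Rough back-of-the-envelope from the timestamps above, assuming updater lag grows roughly with wall-clock time while the database cannot apply updates:

 # Back-of-the-envelope calculation from the timeline timestamps above.
 from datetime import datetime, timezone
 
 first_deadlock = datetime(2025, 9, 25, 4, 25, 30, tzinfo=timezone.utc)  # wdqs2007 deadlocks
 restart        = datetime(2025, 9, 25, 6, 31, 0, tzinfo=timezone.utc)   # fleet-wide restart
 caught_up      = datetime(2025, 9, 25, 6, 37, 45, tzinfo=timezone.utc)  # lag back to normal
 
 print("accrued lag:", restart - first_deadlock)   # ~2h05m of updates to replay
 print("catch-up time:", caught_up - restart)      # under 7 minutes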

Detection

SRE received a page from LVS (no healthy hosts in the WDQS pool) and escalated to Guillaume Lederrey.
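
Schematically, the paging condition was the one sketched below (illustrative only, with made-up host counts; the real logic lives in the LVS/PyBal health checking, not in this snippet):

 # Schematic of the condition that paged here: page when no healthy backend
 # remains in the load-balanced pool. Host counts are made up for the example.
 def pool_alert(healthy: int, total: int) -> str:
     """Return an alert level for a pool given health-check results."""
     if healthy == 0:
         return "PAGE: no healthy hosts in pool"
     if healthy < total:
         return f"WARN: only {healthy}/{total} hosts healthy"
     return "OK"
 
 # During this incident every codfw wdqs-main backend was deadlocked:
 print(pool_alert(healthy=0, total=8))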

Conclusions

We don't know the root cause of the issue yet, but WDQS's Blazegraph backend is known to be unstable and to crash occasionally. This was an extreme case in which all servers appear to have crashed (deadlocked) simultaneously.

What went well?

OPTIONAL: (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc

What went poorly?

OPTIONAL: (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc

Where did we get lucky?

OPTIONAL: (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc

Links to relevant documentation

Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.

Actionables

Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.

Add the #Sustainability (Incident Followup) and #SRE-OnFire Phabricator tags to these tasks.

Scorecard

Incident Engagement ScoreCard
Question | Answer (yes/no) | Notes

People
Were the people responding to this incident sufficiently different than the previous five incidents?
Were the people who responded prepared enough to respond effectively?
Were fewer than five people paged?
Were pages routed to the correct sub-team(s)?
Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.

Process
Was the "Incident status" section atop the Google Doc kept up-to-date during the incident?
Was a public wikimediastatus.net entry created?
Is there a Phabricator task for the incident?
Are the documented action items assigned?
Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?

Tooling
To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.
Were the people responding able to communicate effectively during the incident with the existing tooling?
Did existing monitoring notify the initial responders?
Were the engineering tools that were to be used during the incident available and in service?
Were the steps taken to mitigate guided by an existing runbook?

Total score (count of all “yes” answers above)