Incidents/2024-10-24 codfw pc and kafka mirrormaker issues
document status: draft
Summary
Incident ID | 2024-10-24 codfw pc and kafka mirrormaker issues | Start | 10:54
---|---|---|---
Task | T378076 | End | 14:01
People paged | 2 | Responder count | 9
Coordinators | cdanis | Affected metrics/SLOs | No relevant SLOs exist
Impact | User reads and writes were impacted for the duration of the outage | |
The parser cache database for a single section (pc2) in codfw became saturated with connections, causing failures to render and update wiki pages. The connection spike was partly driven by a surge in reparse jobs from the job queue, triggered by a large shared-template update. Compounding this, the newly deployed parsercache hosts had accidentally been set up without the query killer installed. Additionally, an aggressive scraper was generating new parse activity throughout the outage.
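For context on the mitigation at 12:49 in the timeline below: a query killer terminates client queries that exceed a runtime threshold, preventing a pile-up of slow parses from exhausting a server's connections. The sketch below illustrates only the general mechanism; the host, credentials, user filter, threshold, and sweep interval are assumptions, not the production configuration.

```python
"""Minimal query-killer sketch: kill client queries that have run longer
than a threshold. Host, credentials, user filter, and thresholds are
assumptions for illustration, not the production configuration."""

import time

import pymysql

THRESHOLD_SECONDS = 60    # assumed runtime limit for a parsercache query
CHECK_INTERVAL = 10       # assumed sweep interval, in seconds
TARGET_USER = "wikiuser"  # assumed application user; a real killer filters
                          # carefully so it never touches replication threads


def kill_long_queries(conn):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, time FROM information_schema.processlist "
            "WHERE command = 'Query' AND user = %s AND time > %s",
            (TARGET_USER, THRESHOLD_SECONDS),
        )
        for query_id, elapsed in cur.fetchall():
            print(f"killing query {query_id} after {elapsed}s")
            try:
                cur.execute("KILL %s", (query_id,))
            except pymysql.MySQLError:
                pass  # the query may already have finished


def main():
    conn = pymysql.connect(host="pc-host.example", user="querykiller",
                           password="REDACTED", autocommit=True)
    while True:
        kill_long_queries(conn)
        time.sleep(CHECK_INTERVAL)


if __name__ == "__main__":
    main()
```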
Although the underlying outage lasted considerably longer, user impact was limited by depooling mw-web-ro in codfw and pooling it in eqiad at 11:17.
Additionally, as the incident progressed, alerts began to fire for the Kafka MirrorMaker service due to a severe backlog in a single partition.
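Since the MirrorMaker backlog was confined to a single partition, per-partition lag for the mirror's consumer group is the relevant signal. Below is a minimal sketch of computing it with kafka-python; the broker address and consumer group name are assumptions, not the production values.

```python
"""Sketch: per-partition lag for a MirrorMaker consumer group.
The broker address and group name are assumptions, not production values."""

from kafka import KafkaAdminClient, KafkaConsumer

BOOTSTRAP = "kafka-broker.example:9092"  # assumed broker address
GROUP = "kafka-mirror-codfw-to-eqiad"    # assumed consumer group name

admin = KafkaAdminClient(bootstrap_servers=BOOTSTRAP)
consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)

# Offsets the mirror has committed so far, per topic-partition.
committed = admin.list_consumer_group_offsets(GROUP)
# Newest offsets currently on the brokers for those same partitions.
latest = consumer.end_offsets(list(committed))

# Lag = newest broker offset minus the mirror's last committed offset.
for tp, meta in sorted(committed.items()):
    lag = latest[tp] - meta.offset
    if lag > 0:
        print(f"{tp.topic}[{tp.partition}] lag={lag}")

consumer.close()
admin.close()
```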
Timeline
All times in UTC.
2024-10-24
- 08:32 pc1017 crashed
- 08:46 pc1017 booted back up
- 10:11 pc1014 set as master of pc5, substituting pc1017 (notable because the job queue increase starts around this time)
- 10:54 Initial alerts fire for pc2012 (pc2) going read-only, and mw-web-ro pages. OUTAGE BEGINS
- 10:58 MirrorMaker codfw-to-eqiad alerts for 10+min lag
- 10:59 User reports emerge related to outage
- 11:02 Status page is updated
- 11:23 mw-web-ro is depooled in codfw and pooled in eqiad, user-facing issues resolve
- 12:49 query killer is forcibly installed on all parsercache hosts
- 15:04 an impactful scraper is identified
- 15:30 scraper is rate-limited via requestctl
- 15:30 OUTAGE ENDS
2024-10-25
- 10:00 Replication issue on pc5 solved by resetting replication
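The 2024-10-25 entry does not record the exact commands used. As a minimal sketch of one common way to reset MariaDB replication, assuming GTID-based replication and a hypothetical replica host (the actual procedure on pc5 may have differed, e.g. repointing the replica with CHANGE MASTER TO):

```python
"""Sketch only: restart MariaDB replication on a replica after a fault.
Assumes GTID-based replication; host and credentials are hypothetical,
and the actual procedure used on pc5 is not recorded in this report."""

import pymysql

replica = pymysql.connect(host="pc-replica.example", user="repl_admin",
                          password="REDACTED", autocommit=True)

with replica.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("STOP SLAVE")
    # Discard the relay logs; with GTID replication the replica keeps its
    # recorded GTID position and resumes from there on START SLAVE.
    cur.execute("RESET SLAVE")
    cur.execute("START SLAVE")

    # Confirm both replication threads are running again.
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    print(status["Slave_IO_Running"], status["Slave_SQL_Running"])
```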
Detection
The alerts for the service mark the start of the outage:
10:55 <•icinga-wm> PROBLEM - MariaDB read only pc2 #page on pc2012 is CRITICAL: Could not connect to localhost:3306 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Master_comes_back_in_read_only
10:54 <•icinga-wm> PROBLEM - MariaDB Event Scheduler pc2 on pc2012 is CRITICAL: Could not connect to localhost:3306 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Event_Scheduler
10:55 <+icinga-wm> PROBLEM - PyBal backends health check on lvs2014 is CRITICAL: PYBAL CRITICAL - CRITICAL - mw-web_4450: Servers parse2001.codfw.wmnet, wikikube-worker2033.codfw.wmnet, wikikube-worker2102.codfw.wmnet, wikikube-worker2081.codfw.wmnet, parse2009.codfw.wmnet, wikikube-worker2010.codfw.wmnet, parse2020.codfw.wmnet, wikikube-worker2027.codfw.wmnet, kubernetes2006.codfw.wmnet, wikikube-worker2023.codfw.wmnet, parse2013.codfw.wmnet, kubernetes201
10:55 <+jinxer-wm> FIRING: [3x] ProbeDown: Service text-https:443 has failed probes (http_text-https_ip6) #page - https://wikitech.wikimedia.org/wiki/Runbook#text-https:443 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown
Conclusions
This is not the first time we have had a changeprop-involved outage that was non-obvious at first. We typically discover that such outages stem from queue pressure either through responders' muscle memory from previous outages, or after a more detailed analysis surfaces tasks related to the job queue or changeprop.
What went well?
- Switching datacentre quickly limited user-facing impact.
- Alerting caught the issue immediately
What went poorly?
- We have no clear insight into when big template updates happen
- We have seen similar sudden, hard-to-isolate, changeprop-related outages many times in the past
- changeprop needs tooling and observability improvements - the causes change but the system is the same
- The IC role was fuzzy and not clearly held by a single person until cdanis took it on after coming online
Where did we get lucky?
- Only one section of parsercache was impacted
- We had a varied group of people available during the outage
Links to relevant documentation
- …
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
Actionables
- …
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
Add the #Sustainability (Incident Followup) and the #SRE-OnFire Phabricator tag to these tasks.
Scorecard
Section | Question | Answer (yes/no) | Notes
---|---|---|---
People | Were the people responding to this incident sufficiently different than the previous five incidents? | No |
| Were the people who responded prepared enough to respond effectively? | Yes | Except changeprop
| Were fewer than five people paged? | Yes |
| Were pages routed to the correct sub-team(s)? | No |
| Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours. | Yes |
Process | Was the "Incident status" section atop the Google Doc kept up-to-date during the incident? | No |
| Was a public wikimediastatus.net entry created? | Yes |
| Is there a phabricator task for the incident? | Yes | T378076
| Are the documented action items assigned? | No |
| Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? | No | Very common pattern
Tooling | To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. | No |
| Were the people responding able to communicate effectively during the incident with the existing tooling? | Yes |
| Did existing monitoring notify the initial responders? | Yes |
| Were the engineering tools that were to be used during the incident available and in service? | Yes |
| Were the steps taken to mitigate guided by an existing runbook? | No |
| Total score (count of all “yes” answers above) | 8 |