Incidents/2024-05-31 Image hotlinking

document status: draft

Summary

Incident metadata (see Incident Scorecard)

Incident ID: 2024-05-31 Image hotlinking
Start: 2024-05-31 10:47:4
End: 2024-05-31 12:39:53
Task:
People paged: 2
Responder count: 7
Coordinators: kamila_
Affected metrics/SLOs: SLOs exist for Varnish and ATS, but these were still met
Impact: Increased response times for users served by the eqsin datacentre

Hotlinking of an image on Commons caused link saturation in the eqsin datacentre.

Timeline

Link to a specific offset in SAL using the SAL tool at https://sal.toolforge.org/ (example)

All times in UTC.

  • 10:47: Page for port utilisation arrives. OUTAGE BEGINS
  • 11:14: VictorOps page: DDoS detected (eqsin)
  • 12:01: hnowlan adds requestctl/request-actions/cache-upload/hotlink_from_jio_blomen.yaml and requestctl/request-patterns/req/cache_buster_nnn.yaml (an illustrative sketch of such objects follows the timeline). No effect.
  • 12:08: Incident opened. Kamila becomes IC.
  • 12:33: hnowlan manually deploys a Varnish frontend rule.
  • 12:39: <+jinxer-wm> RESOLVED: DDoSDetected: FastNetMon has detected an attack on eqsin #page - https://bit.ly/wmf-fastnetmon - https://w.wiki/8oU - https://alerts.wikimedia.org/?q=alertname%3DDDoSDetected OUTAGE ENDS
  • 12:48: All damaging requests for the URL in question receive HTTP 429 responses.
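
The contents of the two requestctl files added at 12:01 are not reproduced in this report. As a rough, hypothetical sketch only: field names below follow the requestctl object schema documented on Wikitech, but every value is a placeholder, not the rule that was actually deployed.

  # request-patterns/req/cache_buster_nnn.yaml (hypothetical contents)
  method: GET
  url_path: "^/wikipedia/commons/.+/Some_image\\.jpg$"  # placeholder; the real image path is not in this report
  query_parameter: "cb"                                 # placeholder cache-busting parameter
  query_parameter_value: ".*"

  # request-actions/cache-upload/hotlink_from_jio_blomen.yaml (hypothetical contents)
  enabled: true
  cache_miss_only: true    # only act on requests that reach past the cache
  comment: "Throttle hotlinked image requests saturating eqsin"
  expression: "pattern@req/cache_buster_nnn"
  resp_status: 429         # matches the HTTP 429 responses observed at 12:48
  resp_reason: "Too Many Requests"
  sites: [eqsin]

requestctl actions of this kind are translated into VCL on the cache frontends; per the timeline above, the first attempt had no effect, and a Varnish frontend rule was deployed manually at 12:33 instead.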

Detection

This incident was detected via paging for port utilisation in eqsin:

<+jinxer-wm> FIRING: Primary outbound port utilisation over 80%  #page: Alert for device asw1-eqsin.mgmt.eqsin.wmnet - Primary outbound port utilisation over 80%  #page
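
For context, alerts of this shape are typically defined as Prometheus alerting rules. A minimal sketch, assuming a hypothetical recording-rule metric for interface utilisation (the actual metric name and expression behind this page are not shown in this report):

  groups:
    - name: network
      rules:
        - alert: PrimaryOutboundPortUtilisation
          # Hypothetical metric; the real expression is not in this report.
          expr: interface:outbound_utilisation:ratio{site="eqsin"} > 0.80
          for: 5m
          labels:
            severity: page
          annotations:
            summary: "Primary outbound port utilisation over 80%"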

Additionally, FastNetMon detected what it perceived as a DDoS. This was broadly correct, as the behaviour witnessed resembled a simple DDoS attack.

Conclusions

This was a somewhat familiar pattern, as we have seen similar issues in the past on a larger scale.

What went well?

  • ...

OPTIONAL: (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc

What went poorly?

OPTIONAL: (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc

Where did we get lucky?

OPTIONAL: (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc

Links to relevant documentation

Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.

Actionables

Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.

Add the #Sustainability (Incident Followup) and the #SRE-OnFire Phabricator tag to these tasks.

Scorecard

Incident Engagement ScoreCard
(each question is answered yes/no)

People
  • Were the people responding to this incident sufficiently different than the previous five incidents? yes
  • Were the people who responded prepared enough to respond effectively? yes
  • Were fewer than five people paged? yes
  • Were pages routed to the correct sub-team(s)? yes
  • Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours. yes

Process
  • Was the "Incident status" section atop the Google Doc kept up-to-date during the incident? yes
  • Was a public wikimediastatus.net entry created? no
  • Is there a phabricator task for the incident? no
  • Are the documented action items assigned? no
  • Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? no

Tooling
  • To the best of your knowledge, was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. no
  • Were the people responding able to communicate effectively during the incident with the existing tooling? yes
  • Did existing monitoring notify the initial responders? yes
  • Were the engineering tools that were to be used during the incident available and in service? yes
  • Were the steps taken to mitigate guided by an existing runbook? no

Total score (count of all “yes” answers above): 9