
Incidents/2022-02-01 ulsfo network

From Wikitech

document status: in-review


Incident metadata (see Incident Scorecard)

  • Incident ID: 2022-02-01 ulsfo network
  • Start: 2022-01-31 16:05
  • End: 2022-01-31 16:08
  • Task: No task
  • People paged: 20+
  • Responder count: 6+
  • Coordinators: 0
  • Affected metrics/SLOs:
  • Impact: For 3 minutes, clients served by the ulsfo POP were unable to contribute or view uncached pages, resulting in roughly 8000 errors per minute (HTTP 5xx).

A firewall change was pushed to the ulsfo routers, causing ulsfo to lose connectivity to the other POPs and core sites for 3 minutes.


Timeline:

  • 16:03 - apply configuration change on cr3-ulsfo
  • 16:05 - apply configuration change on cr4-ulsfo - outage starts
  • 16:06 - Icinga notifies about connectivity issues to ulsfo - paging
  • 16:08 - change rolled back - outage ends


Root cause:

The change incorrectly restricted BFD traffic to BGP peers only:

+      term allow_bfd {
+          from {
+              source-prefix-list {
+                  bgp-sessions;
+              }
+              protocol udp;
+              port 3784-3785;
+          }
+          then accept;
+      }

However, BFD is also used by the OSPF sessions, which caused them to be torn down. One surprising point is that the issue did not show up in the verification commands (show ospf interface, show ospf neighbor): all neighbors were still listed as present.

show ospf interface

Interface           State   Area            DR ID           BDR ID          Nbrs
ae0.2               PtToPt                                                  1
et-0/0/1.401        PtToPt                                                  1
xe-0/1/1.0          PtToPt                                                  1

show ospf neighbor

Address          Interface              State     ID               Pri  Dead
                 ae0.2                  Full                       128   35
                 et-0/0/1.401           Full                       128   33
                 xe-0/1/1.0             Full                       128   34

While the adjacency was effectively down, as the router logs show:

rpd[16292]: RPD_OSPF_NBRDOWN: OSPF neighbor (realm ospf-v2 xe-0/1/1.0 area state changed from Full to Down due to InActiveTimer (event reason: BFD session timed out and neighbor was declared dead)
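As the log indicates, the failure surfaced at the BFD layer rather than in the OSPF neighbor tables. A check of the BFD sessions themselves would likely have revealed the problem during verification; a sketch using standard Junos operational commands (exact output format varies by release, and these were not necessarily the commands run during the incident):

    show bfd session
    show log messages | match BFD

The first lists each BFD session and its state (a session torn down by the filter would show as Down), and the second surfaces the RPD_OSPF_NBRDOWN / BFD timeout messages seen above.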


  • A more gradual rollout (a longer wait between routers, as well as fewer changes at a time) could have reduced the risk of the issue happening
  • Monitoring properly caught the issue
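Beyond rollout pacing, the filter term itself needs to accept BFD from all protocols that use it, not just BGP. A hypothetical sketch of a corrected term (the ospf-neighbors prefix-list name is assumed for illustration and is not taken from the actual change):

    /* Sketch only: also accept BFD from OSPF neighbors, not only BGP peers.
       "ospf-neighbors" is a hypothetical prefix-list covering OSPF peer addresses. */
    term allow_bfd {
        from {
            source-prefix-list {
                bgp-sessions;
                ospf-neighbors;
            }
            protocol udp;
            port 3784-3785;
        }
        then accept;
    }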


Scorecard:

People
  • Were the people responding to this incident sufficiently different from the previous five incidents? (score 1 for yes, 0 for no): 0
  • Were the people who responded prepared enough to respond effectively? (score 1 for yes, 0 for no): 1
  • Were more than 5 people paged? (score 0 for yes, 1 for no): 1
  • Were pages routed to the correct sub-team(s)? (score 1 for yes, 0 for no): 1
  • Were pages routed to online (business hours) engineers? (score 1 for yes, 0 if people were paged after business hours): 1

Process
  • Was the incident status section actively updated during the incident? (score 1 for yes, 0 for no): 0
  • Was the public status page updated? (score 1 for yes, 0 for no): 0
  • Is there a Phabricator task for the incident? (score 1 for yes, 0 for no): 0
  • Are the documented action items assigned? (score 1 for yes, 0 for no): 0
  • Is this a repeat of an earlier incident? (score 0 for yes, 1 for no): 0

Tooling
  • Were there, before the incident occurred, open tasks that would prevent this incident or make mitigation easier if implemented? (score 0 for yes, 1 for no): 1
  • Were the people responding able to communicate effectively during the incident with the existing tooling? (score 1 for yes, 0 for no): 1
  • Did existing monitoring notify the initial responders? (score 1 for yes, 0 for no): 1
  • Were all engineering tools required available and in service? (score 1 for yes, 0 for no): 1
  • Was there a runbook for all known issues present? (score 1 for yes, 0 for no): 1

Total score: 9