Incident response/Full report template
To use this template, copy it and fill in each section below, replacing the placeholder text.
Document status: draft
| Incident ID | Full report template |
| Start | YYYY-MM-DD hh:mm:ss |
| People paged | |
| Responder count | |
| Impact | Who was affected and how? For user-facing outages: estimate how many queries were lost, which regions were affected, or which types of clients (editors? readers? bots?), etc. Do not assume the reader knows what your service is or who uses it. |
Summary
Summary of what happened, in one or two paragraphs. Avoid assuming deep knowledge of the systems here, and try to differentiate between proximate causes and root causes.
Timeline
Write a step-by-step outline of what happened to cause the incident, and how it was remedied. Include the lead-up to the incident, and any epilogue.
Consider including a graph of the error rate or another surrogate metric.
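If your service exports request metrics to Prometheus, such a graph can be produced with a short script. The sketch below is an illustration only: the Prometheus host, the http_requests_total metric with a status label, and the timestamps are all placeholders; substitute whatever your service actually exports and the real outage window.

```python
# Minimal sketch: pull the 5xx error rate for the incident window out of
# Prometheus and save it as a PNG for the report. The Prometheus URL, the
# metric name, and the timestamps are placeholders.
from datetime import datetime, timezone

import matplotlib.pyplot as plt
import requests

PROMETHEUS = "https://prometheus.example.org"  # placeholder host
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'  # hypothetical metric

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query_range",
    params={
        "query": QUERY,
        "start": "2024-01-01T00:00:00Z",  # a little before OUTAGE BEGINS
        "end": "2024-01-01T00:30:00Z",    # a little after cleanup finished
        "step": "30s",
    },
    timeout=30,
)
resp.raise_for_status()
series = resp.json()["data"]["result"]

for s in series:
    times = [datetime.fromtimestamp(ts, tz=timezone.utc) for ts, _ in s["values"]]
    rates = [float(v) for _, v in s["values"]]
    plt.plot(times, rates)

plt.xlabel("Time (UTC)")
plt.ylabel("HTTP 5xx per second")
plt.title("Error rate during the incident")
plt.savefig("error-rate.png")
```

Attach the resulting image (or a link to the equivalent dashboard panel) next to the timeline so readers can correlate the bullets below with the error curve.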
All times in UTC.
- 00:00 (TODO) OUTAGE BEGINS
- 00:04 (Something something)
- 00:06 (Voila) OUTAGE ENDS
- 00:15 (post-outage cleanup finished)
TODO: Clearly indicate when the user-visible outage began and ended.
Detection
Describe how the issue was first detected. Was automated monitoring the first to detect it, or did a human report an error?
Copy the relevant alerts that fired in this section.
Did the appropriate alert(s) fire? Was the alert volume manageable? Did they point to the problem with as much accuracy as possible?
TODO: If the issue was detected only by a human, an actionable should probably be to "add alerting".
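If your alerts come from Prometheus alerting rules, one way to recover which alerts were firing during the window is to query the built-in ALERTS series. A minimal sketch, again assuming a placeholder Prometheus host and the timestamps from the timeline above:

```python
# Minimal sketch: list the Prometheus alerting rules that were firing during
# the incident window, via the built-in ALERTS series. Host and timestamps
# are placeholders.
import requests

PROMETHEUS = "https://prometheus.example.org"  # placeholder host

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query_range",
    params={
        "query": 'ALERTS{alertstate="firing"}',
        "start": "2024-01-01T00:00:00Z",  # OUTAGE BEGINS
        "end": "2024-01-01T00:15:00Z",    # post-outage cleanup finished
        "step": "60s",
    },
    timeout=30,
)
resp.raise_for_status()

seen = set()
for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    seen.add((labels.get("alertname", ""), labels.get("severity", "")))

for alertname, severity in sorted(seen):
    print(alertname, severity)
```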
Conclusions
What weaknesses did we learn about and how can we address them?
What went well?
- (Use bullet points.) For example: automated monitoring detected the incident, the outage was root-caused quickly, etc.
What went poorly?
- (Use bullet points.) For example: documentation on the affected service was unhelpful, there were communication difficulties, etc.
Where did we get lucky?
- (Use bullet points.) For example: the user's error report was exceptionally detailed, the incident occurred when the most people were online to assist, etc.
How many people were involved in the remediation?
- (Use bullet points.) For example: 2 SREs and 1 software engineer troubleshooting the issue, plus 1 incident commander.
Links to relevant documentation
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
Actionables
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
- To do #1 (TODO: Create task)
- To do #2 (TODO: Create task)
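Phabricator tasks can also be filed from a script through the Conduit API. The sketch below is an illustration only: the host, API token, and task text are placeholders, and you should check your instance's Conduit console for the exact maniphest.edit transaction types it accepts.

```python
# Minimal sketch: create a Phabricator task for an action item via the Conduit
# API's maniphest.edit method. Host, token, and task text are placeholders.
import requests

PHABRICATOR = "https://phabricator.example.org"  # placeholder host
API_TOKEN = "api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder Conduit token

resp = requests.post(
    f"{PHABRICATOR}/api/maniphest.edit",
    data={
        "api.token": API_TOKEN,
        # Conduit accepts PHP-style array parameters in form-encoded requests.
        "transactions[0][type]": "title",
        "transactions[0][value]": "Incident follow-up: add alerting for <service>",
        "transactions[1][type]": "description",
        "transactions[1][value]": "Action item from the YYYY-MM-DD incident report.",
    },
    timeout=30,
)
resp.raise_for_status()
result = resp.json()
print(result.get("error_info") or f"Created T{result['result']['object']['id']}")
```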
Scorecard
| | Question | Score |
| People | Were the people responding to this incident sufficiently different from the previous five incidents? (score 1 for yes, 0 for no) | |
| | Were the people who responded prepared enough to respond effectively? (score 1 for yes, 0 for no) | |
| | Were more than 5 people paged? (score 0 for yes, 1 for no) | |
| | Were pages routed to the correct sub-team(s)? (score 1 for yes, 0 for no) | |
| | Were pages routed to online (business hours) engineers? (score 1 for yes, 0 if people were paged after business hours) | |
| Process | Was the incident status section actively updated during the incident? (score 1 for yes, 0 for no) | |
| | Was the public status page updated? (score 1 for yes, 0 for no) | |
| | Is there a Phabricator task for the incident? (score 1 for yes, 0 for no) | |
| | Are the documented action items assigned? (score 1 for yes, 0 for no) | |
| | Is this a repeat of an earlier incident? (score 0 for yes, 1 for no) | |
| Tooling | Were there, before the incident occurred, open tasks that would have prevented this incident or made mitigation easier if implemented? (score 0 for yes, 1 for no) | |
| | Were the responders able to communicate effectively during the incident with the existing tooling? (score 1 for yes, 0 for no) | |
| | Did existing monitoring notify the initial responders? (score 1 for yes, 0 for no) | |
| | Were all the required engineering tools available and in service? (score 1 for yes, 0 for no) | |
| | Was a runbook present for all known issues? (score 1 for yes, 0 for no) | |
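The per-question scores can be summed into an overall number if a single figure is wanted. A trivial sketch (not part of the template itself), with hypothetical answers filled in:

```python
# Trivial sketch: tally the incident scorecard. Each answer is 0 or 1 per the
# scoring rules in the table above; the values here are hypothetical.
scorecard = {
    "People: responders different from the previous five incidents": 1,
    "People: responders sufficiently prepared": 1,
    "People: five or fewer people paged": 0,
    "Process: incident status section actively updated": 1,
    "Tooling: existing monitoring notified the initial responders": 1,
    # ...one entry per question in the table above...
}

total = sum(scorecard.values())
print(f"Incident score: {total} / {len(scorecard)}")
```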