Portal:Toolforge/Admin/Runbooks/ToolsDBReplication
This runbook covers the different types of problems that can occur with ToolsDB replication from the primary instance to the replica instance.
Error / Incident
This usually comes in the form of an alert in Alertmanager. We currently have these three alerts for different replication problems:
- ToolsDBReplicationMissing (replication is not running as expected)
- ToolsDBReplicationError (replication is erroring)
- ToolsDBReplicationLagIsTooHigh (replication lag is too high)
The alert labels will also indicate which instance is affected.
Debugging
Checking the replication status
The alert is usually triggered on the replica instance (currently tools-db-2). You can SSH to that instance and check the replication status:
dcaro@vulcanus$ ssh tools-db-2.tools.eqiad1.wikimedia.cloud
dcaro@tools-db-2:~$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 49
Server version: 10.4.29-MariaDB-log MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: tools-db-1.tools.eqiad1.wikimedia.cloud
...
Master_Log_File: log.014140
Read_Master_Log_Pos: 4489664
Relay_Log_File: tools-db-2-relay-bin.000002
Relay_Log_Pos: 725
Relay_Master_Log_File: log.014015
Slave_IO_Running: Yes <-- if this is no, the slave is not running
Slave_SQL_Running: Yes <-- same for this
...
There is now a cookbook that does it for you:
cloudcumin1001:~$ sudo cookbook wmcs.toolforge.toolsdb.get_cluster_status
If the replication is running
If both Slave_IO_Running and Slave_SQL_Running are "Yes", replication is working, but the replication lag has increased beyond the alert threshold. The first thing to check is this Grafana chart, which shows the evolution of the replication lag over the last few hours (and days/weeks if you change the time window).
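If you prefer a quick check from the replica itself, you can filter the replication status for the lag value (the grep pattern and the output value below are only illustrative):
dcaro@tools-db-2:~$ sudo mariadb -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
        Seconds_Behind_Master: 1842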
Most of the time the replication lag will eventually decrease by itself until it gets back to zero. It's still worth identifying what caused the issue, so please open a Phabricator task like phab:T343819.
Below you can find some of the issues that have triggered this alert in the past.
If the replication is NOT running
If either Slave_IO_Running or Slave_SQL_Running is not "Yes", the replication has stopped. Last_SQL_Error or Last_IO_Error will show you the error that caused the replication to stop.
This is a more critical situation that will NOT resolve by itself.
Sometimes restarting the replication is enough to fix the issue, so you should try this first:
ssh tools-db-X.tools.eqiad1.wikimedia.cloud
tools-db-X:~$ sudo mariadb
MariaDB [(none)]> START SLAVE;
If the same error occurs after restarting the replication, you can analyze the query that caused the replication to fail and see if it is safe to skip it, or if it can be applied manually. The offending query is often shown in Last_SQL_Error, or you can check the MariaDB logs with journalctl -u mariadb.
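If you conclude that the failing statement is safe to skip (for example because its effect was already applied manually), a common approach is to skip a single event and restart replication. This is only a sketch: whether skipping is safe depends on the specific error, and the procedure can differ if GTID-based replication is in use.
MariaDB [(none)]> STOP SLAVE;
MariaDB [(none)]> SET GLOBAL sql_slave_skip_counter = 1;  -- skip one event from the primary's binlog
MariaDB [(none)]> START SLAVE;
MariaDB [(none)]> SHOW SLAVE STATUS\G  -- check that Slave_SQL_Running is back to "Yes"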
In the worst case scenario, you can recreate a new replica from scratch, using a snapshot of the volume holding the primary data. Documentation on how to do it is available at Portal:Data Services/Admin/Toolsdb#Creating a new replica host.
Common issues
Add new issues here when you encounter them!
Transactions way slower on the secondary than the primary
One or more transactions that were applied (somewhat) quickly on the primary take much longer (hours or days) to be replicated on the replica instance (see for example phab:T341891). According to this Stackoverflow thread, this behaviour can be caused by a missing index on the id column.
If the same database or tool causes this issue on multiple occasions, it's probably worth contacting the tool owners and trying to add the missing index to prevent this from happening again.
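For reference, adding such an index is a one-off DDL statement that can be run on the primary (and will then replicate). The database, table and index names below are purely hypothetical; the real ones come from the tool causing the lag:
dcaro@tools-db-1:~$ sudo mariadb
MariaDB [(none)]> ALTER TABLE s12345__mytool.some_table ADD INDEX idx_id (id);  -- hypothetical database/table; adds the missing index on the id column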
To find out which database is causing the issue, you can check the binary log. In the output of SHOW SLAVE STATUS on the replica (see section above), look at the values of Relay_Master_Log_File and Exec_Master_Log_Pos. Then SSH to the primary and cd into /srv/labsdb/binlogs. There you can use mysqlbinlog to see the corresponding transaction:
dcaro@urcuchillay$ ssh tools-db-1.tools.eqiad1.wikimedia.cloud
dcaro@tools-db-1:~$ sudo -i
root@tools-db-1:~# cd /srv/labsdb/binlogs
root@tools-db-1:/srv/labsdb/binlogs# mysqlbinlog --base64-output=decode-rows --verbose --start-position $EXEC_MASTER_LOG_POS $RELAY_MASTER_LOG_FILE |less
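The $RELAY_MASTER_LOG_FILE and $EXEC_MASTER_LOG_POS placeholders above are the values from SHOW SLAVE STATUS on the replica; you can grab them with something like this (the grep pattern is only illustrative):
dcaro@tools-db-2:~$ sudo mariadb -e 'SHOW SLAVE STATUS\G' | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'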
There is now a cookbook that does it for you:
cloudcumin1001:~$ sudo cookbook wmcs.toolforge.toolsdb.get_current_replica_transaction
If this happens again, please create a task in Phabricator and file it as a subtask of phab:T357624.
Replication stops because of "invalid event"
This happened a couple of times and the following error was shown in the MariaDB logs:
Read invalid event from master: 'Found invalid event in binary log', master could be corrupt but a more likely cause of this is a bug
It's not clear what the root cause was, but restarting the replication with START SLAVE; was enough to fix the issue.
If this happens again, please add a comment to phab:T351457.
Related information
Support contacts
The main discussion channel for this alert is #wikimedia-cloud-admin on IRC.
There is no user impact, as users always connect to the primary (as of 2023), so there's no need to notify anyone outside of the Toolforge admins.
If the situation is not clear or you need additional help, you can also contact the Data Persistence team (#wikimedia-data-persistence on IRC).
Old incidents
Add any incident tasks here!