MariaDB/Troubleshooting

DBAs are working on it

This guide is a work in progress. See also MariaDB/monitoring and MariaDB/Backups#Recovering_a_Snapshot

All dbctl write commands must be run with sudo

Depooling a replica

From one of the cluster management hosts (cumin1002.eqiad.wmnet, cumin2002.codfw.wmnet):

sudo dbctl instance dbXXXX depool
sudo dbctl config commit -m "Depool dbXXXX"

Also make sure you downtime the host. You can either downtime it via cumin, disable paging for that host via puppet (example) or both.

To downtime the host via a cookbook:

sudo cookbook sre.hosts.downtime --hours 72 -r "HW issues" 'db1137*'

If you depool a host, please create a task tagging #DBA so we can take a look at what happened.

Depooling a parsercache host

Parsercache hosts are handled by dbctl as well, but the procedure differs: if one of the hosts dies, we need to swap in a spare host to replace it in the affected section. The commands below are run from one of the cluster management hosts (cumin1002.eqiad.wmnet, cumin2002.codfw.wmnet).

Tip: If you're unsure which section a failed host is in, you can quickly identify it with a command like dbctl section all get | jq 'select(.. | .master? == "pc2011")' (replace with the right host).

Spare hosts can technically be used for any section, but are preconfigured as spares for only a subset of sections and replicate from one of them (i.e., a hot spare). If a cold spare must be used, this is fine, but may impact performance for a few minutes.

You can identify a spare for a given section by running a command like (in this example, searching for a spare for section pc1 in codfw, adjust as needed):

dbctl --scope codfw instance all get | jq 'select(.. | .sections? | has("pc1"))'

This will list hosts with pc1 configured. Look for a host with a note field indicating it is a spare (e.g., pc2014 currently has "note": "Hot spare for pc1 and cold spare for pc2").

To replace a failed host, first identify an appropriate spare host as described above and disable replication on the spare (in this example pc2014, adjust as needed) by doing the following:

ssh pc2014.codfw.wmnet
sudo mysql -e "stop slave; reset slave all"

Now you can switch the section to the spare host with dbctl. This involves pooling the spare in the section and promoting it to master there.

Using the example of replacing pc2011 with pc2014 for pc1 in codfw (again, adjust as needed), from one of the cluster management hosts (cumin1002.eqiad.wmnet, cumin2002.codfw.wmnet):

dbctl instance pc2014 pool --section pc1
dbctl --scope codfw section pc1 set-master pc2014 
dbctl instance pc2011 depool  # optional clarity step: depool the broken host
dbctl config commit -m "Replace pc2011 with pc2014 for pc1"

If you depool a host, please create a task tagging #DBA so we can take a look at what happened.

x2 special topology

If the failed replica is part of x2, there is no need to depool it, as x2 replicas aren't used.

See MariaDB#x2 for more details.

Master comes back in read only

A master coming back in READ ONLY mode is expected after a crash: it is done to prevent accidental corruption (or further corruption after a crash).

Unless you know what you are doing, do not set it back to writable: if unsure, call a DBA

Impact

Reads will remain unaffected, but no writes will be able to go through; the wikis on that master will be in read-only mode.

To find out which wikis those are:

ssh to the host and run: mysql -e "show databases"
Or from any cumin host: db-mysql $instance -e "show databases"
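
To quickly confirm the read-only state itself (dbXXXX is a placeholder):

db-mysql dbXXXX -e "SELECT @@global.read_only;"   # 1 means the host is read only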

What to do

If unsure, call a DBA

If you know what you are doing:

  • Check the reason for the crash:
journalctl -xe -umariadb -f
dmesg
/var/log/messages
  • Check the state of the data:
    • Check errors above
    • Check if tables are marked as corrupted or you see InnoDB crashes
    • Select from the main tables and see if you can get results: revision, text, user, watchlist, actor, comment
    • If storage is the cause, you most likely want to failover to a different host: https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting#Emergency_failover
    • If a memory DIMM is the cause, you most likely want to (a sketch of these steps follows this list):
      • Disable puppet
      • Reduce innodb buffer pool size on my.cnf
      • Restart mariadb
      • Check data from a few tables
      • Check that all replicas are in sync
      • set global read_only=OFF;
      • Create a task to follow up
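
A minimal sketch of those steps, run on the affected host; the database, tables and commands are illustrative only, and the buffer pool change is done by editing my.cnf before the restart:

sudo puppet agent --disable "dbXXXX memory issue"
# edit my.cnf and lower innodb_buffer_pool_size, then:
sudo systemctl restart mariadb
sudo mysql enwiki -e "SELECT MAX(rev_id) FROM revision; SELECT MAX(log_id) FROM logging;"   # spot-check a few tables
sudo mysql -e "SET GLOBAL read_only=OFF;"   # only once replicas are confirmed in sync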

If this happens as part of planned maintenance and it pages, it could be due to an expired downtime or because the host was never downtimed. If that is the case, contact whoever is doing the maintenance; if that is you, remember to:

set global read_only=OFF;

Depooling a master (a.k.a. promoting a new replica to master)

Planned switchover

See an example task and checklist at: https://phabricator.wikimedia.org/T313383

See: Primary database switchover

Emergency failover

Note: It's better to just call a DBA.

If the master is not available, or replication is broken, this is a more complex case. The reason is that replicas will have executed different amounts of transactions and will be in a close, but different, state. E.g. slave1 has executed transactions A, while slave2 has executed transactions A, B and C. In addition to that, if we do not have access to the master's binary log (or it has not properly been synchronized to disk after a crash), we will have to recover from a replica. In theory, with semi-sync replication, no transaction will be lost, and at least one replica will have the change, but all other replicas will be at different coordinates (and binary log positions are only local to each master).

Scenario 1 - master is recoverable: just wait until the master restarts; this avoids headaches and is faster and less disruptive than trying to fail it over.

Scenario 2 - master is not recoverable, but its binary log is (and all replicas have an equal or smaller amount of data):

  1. For each replica: apply the master's binary log events, starting from that replica's last Exec_Master_Log_Pos, so all replicas reach the same state
  2. Follow regular failover steps as mentioned in the scheduled maintenance

Scenario 3 - neither the master nor its binary logs are recoverable (or the master's binary log is behind a replica's binary log): we need to put all servers in the same state, using the most up-to-date replica, then perform the regular failover process. This is the most complicated part without using GTIDs:

  1. Identify the most up-to-date replica by comparing Exec_Master_Log_Pos (see the sketch after this list)
  2. By comparing binary log contents, find the coordinate in the most up-to-date replica's binary log that corresponds to each other replica's current position. This is the tricky part; pt-heartbeat should be able to help find this.
  3. Execute the pending transactions on each replica
  4. Follow the regular steps for regular scheduled maintenance
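
For step 1, a minimal sketch from a cumin host, using db-mysql as elsewhere on this page (the replica host names are placeholders):

for host in db1111 db1112 db1113; do
  echo "== $host =="
  sudo db-mysql $host -e "SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'
done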

Again, these steps can be automated.

Replication lag

See also MySQL#Replication lag for additional tips.

Caused by hardware

This is what a half-failing disk looks like in monitoring (small lag until it becomes critical).

One common cause of lag that is easy to check and repair is hardware issues.

Disks about to fail

Databases have a lot (and I mean a lot) of IO pressure, and while it is not extreme, it means that 3-year-old drives are very prone to failure.

As an operator, you are already familiar with the way drives fail (not very reliably, to be honest). All important databases have a hardware RAID, which means one disk can fail at a time, usually with very little impact. When that happens, the Icinga alert "1 failed LD(s) (Degraded)" tells you it is time to replace at least one disk. Usually there are spares onsite or the servers are under warranty, which means you can create a ticket for ops-eqiad or ops-codfw and let Chris or Papaul know they should take the failed disk out and insert a new one; the hardware RAID should automatically rebuild itself.

To check the RAID status, execute the following (please also check PERCCli, as the newer databases use a new RAID controller and that is the tool to use).

You can simply use the wrapper, which will pick the right tool for you:

sudo /usr/local/lib/nagios/plugins/get-raid-status-perccli

Alternatively, the megacli command is:

 megacli -AdpAllInfo -aALL

And check the section "Devices present"

To identify the particular disk:

 megacli -PDList -aALL

Check in particular the Firmware State (on or off), the S.M.A.R.T. alerts, and the number of medium errors (a few, like a dozen, should not affect performance much, but hundreds of errors in a short timespan are an issue).

Sadly, disks fail in very creative ways, and while our RAID controllers are reliable enough to 1) continue despite medium errors and 2) disable the disk when it fails completely, in an "almost failing" state there could be lag issues. If that is the case, executing:

 megacli -PDOffline -PhysDrv \[#:#\] -aALL

where #:# is enclosure:slot, will take the particular physical drive offline so that it can be replaced later.

For perccli commands, check PERCCli.

Bad or defective BBU

If all the disks look good, it can be that the RAID controller went into WriteThrough mode because of a failed BBU, or because it is in a learning cycle (which shouldn't happen, as learning cycles are disabled in our environment). If the Cache Policy is set to WriteThrough, it will dramatically affect performance. In order to check the current policy (that is, the active one):

megacli -LDInfo -LAll -aAll | grep "Cache Policy:"

If it is not in WriteBack mode, it most likely means that the BBU has failed for some reason, and the default is to switch back to WriteThrough as it is safer. You can check the BBU status with:

megacli -AdpBbuCmd -GetBbuStatus -a0 | grep -e '^isSOHGood' -e '^Charger Status' -e '^Remaining Capacity' -e 'Charging'

If you are in an emergency, you can always force WriteBack, but this can lead to data loss if there is a power failure, so use it carefully:

megacli -LDSetProp -ForcedWB -Immediate -Lall -aAll

For Perccli run the following command, where /c0 is controller 0 (the first controller):

sudo perccli64 /c0/bbu show status

To check the relearn cycle status:

sudo perccli64 /c0/bbu show learn

Overload due to long running queries

If you need to kill queries from a host, connect to any cluster management host (cumin1002.eqiad.wmnet or cumin2002.codfw.wmnet) and run:

db-kill INSTANCE

Please note that this can alleviate the symptoms, but won't fix the root cause.

If one or several hosts are complaining about the max_connections limit, as seen in the processlist and in max_connections errors on the mysql graphs [1] [2] or in application server logs (sometimes also causing "Not enough idle PHP-FPM workers for Mediawiki" as a cascading error), it is possible that one or several queries are taking over all mysql resources, snowballing into blocking others from executing. While the mysql server will likely stay up, as it uses a queue to avoid executing many of them at the same time, the queuing will add extra latency to execution, affecting application server resources too.

This can be caused by a deliberate abuse of expensive functionality or API call, by a bug on application code sending poorly-optimized queries, or a change on the query planner that makes old fast queries now slower.

The best way to fix the issue is to disable, on the MediaWiki side, the functionality that is sending the long-running queries.

MariaDB MediaWiki instances have a query killer enabled that kicks in when it detects idle queries or queries executing for longer than 60 seconds (agreed as the maximum query time allowed for web requests). If a connection overload is detected (by monitoring current connections), an overload killer also kicks in.

You can also log in to the database and run `show processlist`. Find the long-running queries and run `KILL QUERY <id>` (replacing <id> with the Id shown in show processlist).
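
For example, from a cumin host (the instance name and query id below are made up):

sudo db-mysql db1234 -e "SHOW PROCESSLIST;"        # find the Id of the offending query
sudo db-mysql db1234 -e "KILL QUERY 123456789;"    # terminate the running statement (KILL 123456789 would close the whole connection)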

Memory issues

There is an alert "MariaDB memory", that checks that there is not memory over-commitment due to a missconfiguration, heavy activity or a leak. For now the alert is just a check that there is enough free memory (not counting filesystem cache) for client connections, utilities and background tasks. If a host starts alerting (or starts OOMing), reduce the buffer pool or load to go back to the 80-90% memory usage.

MariaDB can resize the buffer pool online, just by typing SET GLOBAL innodb_buffer_pool_size = <new size in bytes>;. This is not recommended while the host is pooled, as it might create stalls/replication lag while the buffer pool is being resized.
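
A minimal example; the new size is only an illustrative value, pick one that fits the host:

sudo mysql -e "SELECT @@innodb_buffer_pool_size;"                   # current size, in bytes
sudo mysql -e "SET GLOBAL innodb_buffer_pool_size = 322122547200;"  # e.g. 300GB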

Wikireplicas memory issues (clouddb*)

Clouddb hosts sometimes alert because less than 5% of memory is available (more information at phab:T365164). This can usually be fixed by restarting the systemd service that is using too much memory. Please note that each clouddb* host has more than one mysql service (e.g. mysql@s4.service and mysql@s6.service).

Check the memory usage with systemctl status and restart the service with systemctl restart.

Important: remember to restart the replication with START SLAVE; after restarting the service, because it does not resume automatically.
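
For example, for the s4 instance (the socket path is an assumption, adjust it to how the instance is set up on the host):

sudo systemctl status mysql@s4.service      # check its memory usage
sudo systemctl restart mysql@s4.service
sudo mysql -S /run/mysqld/mysqld.s4.sock -e "START SLAVE;"   # replication does not resume on its own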

Data loss

Normal reprovisioning

The best (and fastest) way to repair a replica (or a master) is to use the regular provisioning workflow: copy the latest snapshot from the dbprovXXXX hosts. Depending on the section, it should take from a few minutes to 1 hour.

If only a partial recovery is needed (e.g. a single dropped table), logical backups (on the same dbprov* hosts) may be faster and more flexible. A full logical recovery can take from 12 hours to 1 day.

More info at MariaDB/Backups.

Long term backups

If for some reason the short-term backups/provisioning files are not enough, you can get older ones from Bacula. Recover from Bacula to a dbprovXXXX host, then use the same method as above.

More info at Bacula.

Cloning

If for some reason no backups are available but replicas are, we can clone a running mariadb server with xtrabackup, or the files of a stopped one, onto another host (see the transfer.py utility for both file and xtrabackup transfers).
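
As a hedged sketch, a plain file copy of a stopped donor's datadir with transfer.py might look like the following (host names are placeholders; check transfer.py --help for the exact options before running it):

transfer.py db1111.eqiad.wmnet:/srv/sqldata db1112.eqiad.wmnet:/srv/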

Binlogs

Binlogs are not a backup method, but they are files containing the transactions from the last month, on every master and replica. They are helpful for point-in-time recovery when replication is not working, allowing a backup to be rolled forward to an arbitrary point in time (e.g. just before a DROP was sent).

Point-in-time recovery is fully manual at the moment, but its automation is a work in progress.

Data inconsistency between nodes ("drift") / replication broken

The db-compare utility allows manually checking the differences between 2 hosts. It is currently run manually, but it is scheduled to eventually run continuously, comparing hosts for inconsistencies. It can be run from any cumin host.

Aside from a manual check, the other most common way to find inconsistencies is for replication to break as a change is applied to a master that cannot be applied to the replica.

Steps to perform:

  • If an inconsistency happens on a single replica, it is likely that the host's data got corrupted; depool it and research why it happened. If the issue affected only the replica, wipe its data and recover it from the provisioning hosts.
  • If it happens on all replicas, check whether there is master corruption or some other operational error. If it is an operational error (e.g. a table exists on the master but not on the replicas), it could be corrected manually (e.g. create the table by hand). Otherwise, failover the master so it can be depooled, and continue with a different host as the new master.
  • In all cases, it should be clear which host or set of hosts have the right data; the "bad" host(s) should be put out of production, their data deleted and reprovisioned from backup.

NEVER use sql_slave_skip_counter! Not only will you skip full transactions (even if only a single row was problematic, creating more drift issues), you will also make hosts have a different GTID counter. If you have to manually change something on only 1 host, apply the DML with set sql_log_bin=0; so it doesn't go into the binlog/GTID counter/replication.
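
For example, a minimal sketch of a single-host manual fix (the host, database and fix itself are placeholders):

ssh dbXXXX.eqiad.wmnet
sudo mysql enwiki
SET SESSION sql_log_bin=0;  -- keep this session out of the binlog/replication
-- apply the manual fix here, e.g. re-create the missing table or row
SET SESSION sql_log_bin=1;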

Split brain

There is an additional case: a "split brain", where correct data was written both to the master and to the replicas.

This is rare, given that all replicas are set to read only to prevent this issue. It is also difficult to handle: ideally, data should be merged into a single unified version.

If the affected data is derived (non-canonical), e.g. the *links tables, you could temporarily choose a single set of servers, go back to read/write, and then try to merge the differences in the background.

If canonical data is affected (page, revision, user), consider setting the application/section to read only until the data is reconciled, so new versions are not added that could make merging the two data versions more complicated.

Depooling a Labs dbproxy

See Portal:Data Services/Admin/Runbooks/Depool wikireplicas

Using mysqlbinlog

Binlogs are typically stored under /srv/sqldata/ and named db$HOSTNAME-bin.XXX. If you need to explore them, simply ssh to the host and run:

sudo mysqlbinlog -vv --base64-output=DECODE-ROWS db$INSTANCE-bin.XXXXX
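
If you are only interested in a specific time window (e.g. for point-in-time investigation), mysqlbinlog can also filter by datetime; the timestamps below are placeholders:

sudo mysqlbinlog -vv --base64-output=DECODE-ROWS --start-datetime="2024-01-01 12:00:00" --stop-datetime="2024-01-01 12:05:00" db$INSTANCE-bin.XXXXX | less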

Check replication health quickly

Here are some quick snippets to help you check a replica's health:

$ function quick_show_slave() {
    sudo db-mysql $1 -e "show slave status\G" | grep -iE 'second|run|state|log_|Master_Host|Exec' \
      && echo "######" \
      && sudo db-mysql $1 -e "SELECT greatest(0, TIMESTAMPDIFF(MICROSECOND, max(ts), UTC_TIMESTAMP(6)) - 500000)/1000000 FROM heartbeat.heartbeat ORDER BY ts LIMIT 1;" \
      && echo "#######"
  }
$ function watch_slave() { while : ; do quick_show_slave $1 ; sleep 1 ; done }


$ watch_slave db1234
                Slave_IO_State: Waiting for master to send event
                [...]

Or repeat:

$ quick_show_slave db1234
                Slave_IO_State: Waiting for master to send event
                [...]

Be sure that the Exec_Master_Log_Pos and Read_Master_Log_Pos offsets are moving forward. You can also add the snippets to your .rc files for direct usage.