Setting up a MySQL replica

New Server Setup

Hardware RAID

Ensure the hardware RAID is properly configured (RAID-10, 256 kB strip size, writeback cache, no read-ahead):

 root@db59:~# megacli -LDInfo -lALL -aALL | egrep "Level|Current Cache|Strip"
 RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
 Strip Size: 256kB
 Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU

If the cache policy says 'ReadAdaptive' instead of 'ReadAheadNone', run megacli -LDSetProp NORA -Lall -Aall to disable read-ahead.
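
A quick fix-and-verify sequence, reusing the commands above (the output line shown is from the db59 example; wording may differ on other controllers):

 root@db59:~# megacli -LDSetProp NORA -Lall -Aall
 root@db59:~# megacli -LDInfo -lALL -aALL | grep "Current Cache"
 Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU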

XFS

A newly imaged server should have LVM set up correctly, with /dev/mapper/tank-data mounted at /srv. Ensure:

  • mkfs.xfs -f -d sunit=512,swidth=4096 -L data /dev/mapper/tank-data
  • Change the /etc/fstab mount options for /srv from defaults to noatime,nobarrier (see the example entry below)
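
The resulting fstab entry should look roughly like this (the dump/pass fields are illustrative):

 /dev/mapper/tank-data  /srv  xfs  noatime,nobarrier  0  0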

Puppet

Make sure the server is in the correct groups in site.pp:

 node /db(32|36|38|59|60|63|69|70)\.pmtpa\.wmnet/ {
   if $hostname == "db59" {
     class { role::coredb::s1 : mariadb => true }
   } else {
     include role::coredb::s1
   }
 }

Make sure the server is included in the regex for the appropriate datacenter in hieradata/regex.yaml:

 mysql_eqiad:
   __regex: !ruby/regexp /^(db10[0-9][0-9]|dbstore100[1-2]|dbproxy100[1-8]|es101[1-9]|labsdb100[1-8])\.eqiad\.wmnet$/
   cluster: mysql
 
 mysql_codfw:
   __regex: !ruby/regexp /^(db20[0-9][0-9]|dbstore200[1-2]|es201[1-9])\.codfw\.wmnet$/
   cluster: mysql

Make sure the server is defined in the correct shard section of the topology hash in role/coredb.pp. If the server is intended to be an LVM snapshot host, add it to the snapshot section; do this last, after replication is configured and caught up.

 's1' => {
   'hosts' => { 'pmtpa' => [ 'db32', 'db36', 'db38', 'db59', 'db60', 'db63', 'db67', 'db69', 'db70' ],
     'eqiad' => [ 'db1001', 'db1017', 'db1042', 'db1043', 'db1047', 'db1049', 'db1050'] },
   'primary_site' => $::mw_primary,
   'masters' => { 'pmtpa' => "db63", 'eqiad' => "db1017" },
   'snapshot' => [ "db32", "db1050" ],
   'no_master' => [ 'db67', 'db1047' ]
 },

MySQL servers should also be added to the X property on hiera:path.

Transferring Data

On New/Intended Slave

  • ensure puppet runs cleanly
  • cd /srv/sqldata
  • nc -l -p 9210 | tar xvi

On Existing DB

This process is only intended for databases that use InnoDB tables exclusively!

  • Select a source slave based on the intended use of the new slave.
    • If you are creating a slave in the active datacenter intended for site production, use either the secondary master in the failover datacenter (found via DNS: host sX-secondary) or an active production slave. This way replication data will be based on the active write master.
    • If you are creating a slave in the failover datacenter, or for analytics/non-site use in any datacenter, select a slave in the alternate datacenter. Replication in this case needs to come from the secondary master, not the live write master.
  • innobackupex-1.5.1 --stream=tar /srv/sqldata --user=root --slave-info | nc NEW-SERVER 9210
    • on stretch, innobackupex-1.5.1 is replaced by mariabackup --innobackupex
    • This results in a brief write lock towards the end of the process while the table definition files (but not data) are copied. For enwiki, this is around 200 MB of data. For this reason, this shouldn't be run on a write master if slaves are available. (A combined sender/receiver sketch follows this list.)
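
Put together, a transfer might look like this; NEW-SERVER is a placeholder, and the mariabackup line is the stretch-era equivalent noted above:

 # on the new slave -- start the listener first; tar's -i flag ignores the
 # zeroed blocks in the streamed archive
 cd /srv/sqldata
 nc -l -p 9210 | tar xvi

 # on the source slave -- if targeting the secondary master, find it via DNS
 host s1-secondary

 # stream the backup to the new host
 innobackupex-1.5.1 --stream=tar /srv/sqldata --user=root --slave-info | nc NEW-SERVER 9210
 # on stretch, the same via mariabackup's compatibility mode:
 mariabackup --innobackupex --stream=tar /srv/sqldata --user=root --slave-info | nc NEW-SERVER 9210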

On New Slave, After xtrabackup

Do this before starting mysql!

  • innobackupex-1.5.1 --apply-log --use-memory=22G /srv/sqldata
  • chown -R mysql: /srv
  • /etc/init.d/mysql start
    • tail the error log in /srv/sqldata and check for any errors (example below)
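
Something like the following works for watching startup; the exact error-log filename varies by configuration (often <hostname>.err under the datadir):

 tail -f /srv/sqldata/*.err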

Starting Replication

  • Check /srv/sqldata/xtrabackup_slave_info on the new host; it provides a partial CHANGE MASTER statement based on whichever host the target was replicating from. If this is correct, use it, adding the MASTER_HOST and the repl account/password portions that are missing.
    • e.g. CHANGE MASTER to MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_PORT=3306, MASTER_HOST='deployment-db1', MASTER_LOG_FILE='deployment-db1-bin.000026', MASTER_LOG_POS=191406295;
  • If you are building a prod slave and had to run the hot backup from the master, use the information in /srv/sqldata/xtrabackup_binlog_info, as xtrabackup_slave_info won't exist.
  • If you are building a failover slave, or something for analytics and copied from the secondary master, xtrabackup_slave_info will point to the active master; use xtrabackup_binlog_info instead.
    • Important: For a production slave, this should point to the correct binlog position on the actual write master at which to start replication. If creating a slave in the failover datacenter or for analytics, make sure you have replication information for the second-level master. Do not point these dbs to the active master! (A full session sketch follows this list.)
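
A typical session on the new slave, sketched with the coordinates elided (take them from the xtrabackup files as described above):

 cat /srv/sqldata/xtrabackup_slave_info
 mysql -e "CHANGE MASTER TO MASTER_HOST='...', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_PORT=3306, MASTER_LOG_FILE='...', MASTER_LOG_POS=...;"
 mysql -e "START SLAVE"
 mysql -e "SHOW SLAVE STATUS\G"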

Post Replication

  • If part of a cluster that uses heartbeat replication monitoring, restart it: /etc/init.d/pt-heartbeat restart

Adding to Production

  • Edit /home/w/common/wmf-config/db.php
    • Make sure the IP address mapping is present in hostsByName[]
    • Add to the appropriate shard in sectionLoads[] (a rough sketch follows this list)
    • Initially add it with a low weight and deploy, to give it time to warm up.
    • Ensure the db operates well (check the error and slow logs on the db, and db-error.log on fenari) and that replication keeps up under load.
    • Once warm, raise to its full load level, typically the same as other slaves of the same hardware type, or greater depending on capacity.
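
A minimal sketch of the two db.php entries, assuming a hypothetical host db1050 and a placeholder IP (both keys live under $wgLBFactoryConf):

 $wgLBFactoryConf['hostsByName']['db1050'] = '10.64.16.xx'; // placeholder IP
 $wgLBFactoryConf['sectionLoads']['s1']['db1050'] = 50;     // low initial weight while warming up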

Slaves Not In Active Production (db.php)

Make sure they'll receive schema migrations

  • Slaves of the production cluster that don't belong in db.php still need to get schema migrations whenever they are applied in prod, in order to keep replicating.
  • Make sure they're added to the correct section in fenari:/home/w/common/wmf-config/db-secondary.php - it is an override for $wgLBFactoryConf['sectionLoads'] pointing only to slaves not used by the live MW site (sketched below).
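
A minimal sketch of such an override, using db1047 (the research slave from the section below) as the example host:

 $wgLBFactoryConf['sectionLoads']['s1'] = array(
   'db1047' => 1, // not in live rotation; listed here only so migrations reach it
 );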

Making use-specific tweaks

Some DBs differ from the normally generated config. These differences are applied via the puppet-generated my.cnf, and further use-case changes for prod slaves should be made similarly. This section outlines what we do for the research slaves.

Research Slave

Note: These tweaks are reflected in puppet within mysql.pp as $research_dbs

  • Hosts: db42, db1047
  • Purpose: slaves of the s1 cluster dedicated to intense queries by the Summer of Research team (aka community intern statisticians).
  • Contact:
  • Tweaks (sketched as a my.cnf fragment below):
    • no binlog
    • innodb-locks-unsafe-for-binlog
    • slave_transaction_retries = very high
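
A minimal my.cnf sketch of those tweaks; the option names are standard MySQL/MariaDB ones, and the exact retry value is an assumption:

 [mysqld]
 skip-log-bin                             # "no binlog"
 innodb_locks_unsafe_for_binlog = 1
 slave_transaction_retries = 4294967295   # "very high"; exact value is an assumption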