Dumps

Documentation for end-users of the data dumps lives at meta:Data dumps.

For a list of various information sources about the dumps, see Dumps/Other information sources.

Overview

User-visible files appear at http://dumps.wikimedia.org/backup-index.html

Dump activity involves a monitor node (running status sweeps) and arbitrarily many worker nodes running the dumps.

Status

To see which hosts serve the data, see Dumps/Dump servers. To see which hosts generate which dumps, see Dumps/Snapshot hosts.

We want mirrors! For more information see Dumps/Mirror status.

Worker nodes

The worker processes go through the set of available wikis to dump automatically. Dumps are run on a "longest without a dump runs next" schedule. The plan is to have a complete dump of each wiki every 2 weeks, except for enwiki (the English-language Wikipedia), which should have a complete dump once a month.
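
The "longest without a dump runs next" rule amounts to sorting the wikis by the date of their last run and picking the stalest. A minimal python sketch of the idea (the function names are illustrative, not worker.py's actual internals; it assumes the usual layout of one date-named subdirectory per run):

import os

def last_dump_date(dumpdir, wiki):
    # Runs live in date-named subdirectories (YYYYMMDD), so the
    # lexicographic maximum is also the most recent run.
    wikidir = os.path.join(dumpdir, wiki)
    if not os.path.isdir(wikidir):
        return None
    dates = [d for d in os.listdir(wikidir) if d.isdigit()]
    return max(dates) if dates else None

def next_wiki_to_dump(dumpdir, wikis):
    # Never-dumped wikis sort first (None becomes ""), then the
    # wiki with the oldest last run wins.
    return min(wikis, key=lambda w: last_dump_date(dumpdir, w) or "")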

The shell script "worker", which starts one of these processes, simply runs the python script worker.py in an endless loop. Multiple such workers can run at the same time, on different hosts or on the same host.

The worker.py script creates a lock file on the filesystem containing the dumps (as of this writing, /mnt/data/xmldatadumps/) in the subdirectory private/name-of-wiki/lock. No other process will try to write dumps for that project while the lock file is in place.
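
A sketch of the locking idea, assuming the lock file only needs to exist to be honored (what the real script writes into the file, hostname and pid here, is an illustrative guess):

import os
import errno

PRIVATEDIR = "/mnt/data/xmldatadumps/private"

def get_lock(wiki):
    # O_CREAT|O_EXCL makes create-if-absent atomic, so two workers cannot
    # both take the lock. (O_EXCL is not fully reliable over NFS, which is
    # one more reason the monitor cleans up stale locks.)
    lockfile = os.path.join(PRIVATEDIR, wiki, "lock")
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False  # another worker is already dumping this wiki
        raise
    os.write(fd, ("%s %d\n" % (os.uname()[1], os.getpid())).encode())
    os.close(fd)
    return True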

Local copies of the shell script and the python script live on the snapshot hosts in the directory /backups and are run in screen sessions on the various hosts, as the user "backup".

Monitor node

The monitor node checks for and removes stale lock files from dump processes that have died, and updates the central index.html file which shows the dumps in progress and the status of the dumps that have completed (i.e. http://dumps.wikimedia.org/backup-index.html ). It does not start or stop worker processes.

The shell script "monitor", which starts the process, simply runs the python script monitor.py in an endless loop.
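
A minimal sketch of a stale-lock sweep, assuming a running worker touches its lock file periodically so that the file's mtime reflects liveness; the one-hour threshold is made up, not monitor.py's actual value:

import os
import time

PRIVATEDIR = "/mnt/data/xmldatadumps/private"
MAX_LOCK_AGE = 3600  # illustrative threshold, in seconds

def remove_stale_locks():
    for wiki in os.listdir(PRIVATEDIR):
        lockfile = os.path.join(PRIVATEDIR, wiki, "lock")
        try:
            age = time.time() - os.stat(lockfile).st_mtime
        except OSError:
            continue  # no lock file for this wiki; nothing to do
        if age > MAX_LOCK_AGE:
            os.unlink(lockfile)  # owning process presumed dead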

As with the worker nodes, local copies of the shell script and the python script live on the snapshot hosts in the directory /backups but currently are run out of /backups-atg (since this code is not yet in trunk) in a screen session on one host, as the user "backup".

Code

Check /operations/dumps.git, branch 'ariel' for the python code in use. Eventually this will make its way back into master; it's still a bit gross right now.

Getting a copy:

git clone https://gerrit.wikimedia.org/r/p/operations/dumps.git
git checkout ariel

Getting a copy as a committer:

git clone ssh://<user>@gerrit.wikimedia.org:29418/operations/dumps.git
git checkout ariel

Programs used

See also Dumps/Software dependencies.

The scripts call mysqldump, getSlaveServer.php, eval.php, dumpBackup.php, and dumpTextPass.php directly for dump generation. These in turn require backup.inc and backupPrefetch.inc and may call ActiveAbstract/AbstractFilter.php and fetchText.php.
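
For orientation, the heart of a full-history XML dump is a two-pass affair: dumpBackup.php writes a "stub" file of page and revision metadata, and dumpTextPass.php fills in the revision text, prefetching unchanged text from the previous dump where possible. The invocations look roughly like the following (wiki name and filenames are illustrative, and the real command lines carry more options):

php dumpBackup.php --wiki=aawiki --full --stub --output=gzip:stub-meta-history.xml.gz
php dumpTextPass.php --wiki=aawiki --stub=gzip:stub-meta-history.xml.gz --prefetch=bzip2:prior-pages-meta-history.xml.bz2 --output=bzip2:pages-meta-history.xml.bz2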

The generation of the XML files relies on Export.php under the hood and, of course, on the entire MediaWiki infrastructure.

The worker.py script relies on a few C programs for various bz2 operations: checkforbz2footer and recompressxml, both in /usr/local/bin/. These are in the git repo; see [1].
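
checkforbz2footer matters because worker.py should only trust an intact bz2 file from a previous run as prefetch input. A hedged sketch of that check (the exit-code convention assumed here should be verified against the tool's source):

import subprocess

def bz2_is_intact(path):
    # Assumption: checkforbz2footer exits 0 when the file ends with the
    # bz2 stream footer magic, and nonzero otherwise.
    return subprocess.call(["/usr/local/bin/checkforbz2footer", path]) == 0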

Setup

Adding a new worker box

Install the host and add it to site.pp, copying one of the existing snapshot stanzas in puppet. Among other things, this will:

  1. Set up the base MW install without apache running
  2. Add the worker to /etc/exports on dataset2
  3. Add /mnt/data to /etc/fstab of the worker host
  4. Build the utfnormal php module (done for lucid)

For now:

  1. Backups currently run test code out of /backups-atg on each host, so grab a copy of that directory from any existing host and copy it into /backups-atg on the new host. This includes the conf files; you don't need to set them up separately.
    (In transition: this is being moved to /backups. To be updated as soon as the move is complete.)
  2. Check over the configuration file and make sure it looks sane: all the paths point to things that exist, and so on. For the full details see the README.config file in the git repo; an illustrative fragment follows this list.
    • We run enwiki on its own host. If this host is going to do that work, check /backups-atg/wikidump.conf.enwiki.
    • The next 8 or so largest wikis run on their own separate host so they don't backlog the smaller wikis. For those, check /backups-atg/wikidump.conf.bigwikis.
    • The remaining wikis run on one host. Check /backups-atg/wikidump.conf for those.
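
As a rough illustration of what to sanity-check, the conf file holds key=value settings along these lines (the names and values below are hypothetical; README.config has the real option list):

dir=/backups-atg
public=/mnt/data/xmldatadumps/public
private=/mnt/data/xmldatadumps/private
php=/usr/bin/php
keep=10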

Dealing with problems

Space

If the host serving the dumps runs low on disk space, you can reduce the number of old dumps that are kept. Edit the appropriate /backups-atg/wikidump.conf* file on the host running the set of dumps you would like to adjust (enwiki = wikidump.conf.enwiki, the next 8 or so big wikis = wikidump.conf.bigwikis, the rest = wikidump.conf) and change the line that says "keep=<some value>" to a smaller number.
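
For example, if the file currently reads

keep=10

then changing it to keep=7 retains only the last seven dump runs per wiki, and anything older becomes a candidate for cleanup. (The value 10 is just an example; check the file for the actual current setting.)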

Failed runs

Logs are kept for each run. You can find them in the directory for the particular dump, in the file dumplog.txt. Check them for any error messages generated during a given run.

The worker script can send email if a dump does not complete successfully. (Be sure to enable this.) It currently sends email to...

When one or more steps of a dump fail, the index.html file for that dump includes a notation of the failure and sometimes more information about it. Note that one step of a dump failing does not prevent other steps from running unless they depend on the data from that failed step as input.

See Dumps/Rerunning a job for how to rerun all or part of a given dump. This also explains what files may need to be cleaned up before rerunning.

Dumps not running

This covers restarting the dumps after rebooting a host, after rebooting the dataset host with the NFS share where the dumps are written (which may cause the dumps to hang), or after the dumps stop running for any other reason.

If the host crashes while the script is running, the status files are left as-is and the display shows the dump as still running until the monitor node decides the lock file is stale enough to mark it as aborted. To restart, start a screen session on the host as root and fire up the appropriate number of worker scripts with the appropriate config file option. See Dumps/Snapshot hosts for which hosts do what; it lists which commands get run on each host and in how many windows. If the monitor script is not running, restart it in a separate window of the same screen session; see the Dump servers page for the command and for the host it runs on.
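
As a concrete illustration, each screen window ends up running something along these lines (the exact option name for passing the config file is an assumption; check the script's usage output):

cd /backups-atg
python worker.py --configfile /backups-atg/wikidump.conf.bigwikis

and the monitor, in its own window on the one host that runs it:

python monitor.py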

If the worker script encounters more than three failed dumps in a row, it will exit; this avoids generating piles of broken dumps which would later need to be cleaned up. (Whether that threshold is read from the configuration or hardcoded needs checking in the code.) Once the underlying problem is fixed, you can go to the screen session on the host running those wikis and rerun the previous command in all the windows. See Dumps/Snapshot hosts for which hosts do what if you're not sure.
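
The logic amounts to a consecutive-failure counter, roughly like this sketch (where the threshold lives, config or code, is exactly the open question above):

MAX_CONSECUTIVE_FAILURES = 3

def run_worker_loop(wikis, run_dump):
    # run_dump(wiki) runs one complete dump and returns True on success.
    failures = 0
    for wiki in wikis:
        if run_dump(wiki):
            failures = 0  # any success resets the streak
        else:
            failures += 1
            if failures > MAX_CONSECUTIVE_FAILURES:
                break  # bail out rather than pile up broken dumps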

Running a specific dump on request

See Dumps/Rerunning a job for how to run a specific dump. This is done for special cases only.

Deploying new code

See Dumps/How to deploy for this.

Bugs, known limitations, etc.

See Dumps/Known issues and wish list for this.

File layout

Sites are currently identified by their raw database name. A 'friendly' name/hostname may be added in the future to make searching easier.
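
For example, the English-language Wikipedia's dumps live under its database name, enwiki, rather than under a hostname like en.wikipedia.org; an illustrative path (the date is made up):

/mnt/data/xmldatadumps/public/enwiki/20120601/enwiki-20120601-pages-articles.xml.bz2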
