Dumps

We want mirrors! For more information see Dumps/Mirror status.

Docs for end-users of the data dumps are at meta:Data dumps. If you're a Toolforge user and want to use the dumps, check out Help:Shared storage for information on where to find the files.

For a list of various information sources about the dumps, see Dumps/Other information sources.

The following info is for folks who hack on, maintain and administer the dumps and the dump servers.

Setup

Current architecture

Rather than bore you with that here, see Dumps/Current Architecture.

Current hosts

For the hosts serving the data, see Dumps/Dump servers. For the hosts generating dumps, see Dumps/Snapshot hosts. For the hosts providing space via NFS for the generated dumps, see Dumps/Dumpsdata hosts.

Adding a new snapshot host

Install the host and add it to site.pp in the snapshot stanza (see snapshot1005-7). Add the relevant hiera entries, documented in site.pp, according to whether the server will run the en wiki dumps (only one server should do so) or the misc cron jobs (one host should do so, and not the same host that runs the en wiki dumps).
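
As a rough sketch, you can find the existing entries to copy from like this (this assumes a checkout of the operations/puppet repo; take the actual role and hiera key names from the existing snapshot entries, not from here):

grep -n 'snapshot10' manifests/site.pp        # existing snapshot stanzas and their roles
ls hieradata/hosts/ | grep snapshot           # per-host hiera files to use as a model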

Dumps run out of /srv/deployment/dumps/dumps/xmldumps-backup on each server. Deployment is done via scap3 from the deployment server.

Starting dump runs

  1. Do nothing. These jobs run out of cron.
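
If you want to double-check that the jobs are in place, a minimal sketch on a snapshot host might look like this (the cron file location and the user name 'dumpsgen' are assumptions; puppet is the authority on both):

ls /etc/cron.d/ | grep -i dump            # dump-related cron entries installed by puppet
sudo crontab -l -u dumpsgen               # 'dumpsgen' is a guess at the user the dumps run as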

Troubleshooting

Fixing code

The dumps code all lives in the operations/dumps.git repo, branch 'master'. Various supporting scripts that are not part of the dumps proper are in puppet; you can find those in the snapshot module.

Getting a copy as a committer:

git clone ssh://<user>@gerrit.wikimedia.org:29418/operations/dumps.git
cd dumps
git checkout master

ssh to the deployment host.

  1. cd /srv/deployment/dumps/dumps
  2. git pull
  3. scap deploy

Note: you likely need to be in the ops LDAP group to do the scap. Also note that pushed changes will not take effect until the next dump run; any run currently in progress completes using the existing dump code.
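
Putting the steps together, a deploy session might look like the following (the deployment host name and the log message here are placeholders only):

ssh deployment.eqiad.wmnet                # placeholder; use the current deployment host
cd /srv/deployment/dumps/dumps
git pull
scap deploy 'pull in latest dumps fixes'  # scap records the message with the deployment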

Fixing configuration files

Configuration file setup is handled in the snapshot puppet module. You can check the config files themselves at /etc/dumps/confs on any snapshot host.
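
To see what puppet actually rendered, something like this on a snapshot host should do (the exact file names under /etc/dumps/confs are not guaranteed; list the directory first):

ls /etc/dumps/confs/
less /etc/dumps/confs/wikidump.conf.dumps   # file name is an assumption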

Out of space

See Dumps/Dumpsdata hosts#Space issues if we are running out of space on the hosts where the dumps are written as generated.

See Dumps/Dump servers#Space issues if we are running out of space on the dumps web or rsync servers.

Broken dumps

The dumps can break in a few interesting ways.

  1. They no longer appear to be running. Is the monitor running? See below. If it is running, perhaps all the workers are stuck on a stage, waiting for a previous stage that failed. Shoot them all and let the cron job sort it out (see the sketch after this list). You can also look at the error notifications section and see if anything turns up; fix the underlying problem and wait for cron.
  2. A dump for a particular wiki has been aborted. This may be because someone shot the script for behaving badly, or because a host was powercycled in the middle of a run. The next cron job should fix this up.
  3. A dump on a particular wiki has failed. Check the information on error notifications, track down the underlying issue (db outage? MW deploy of bad code? Other?), fix it, and wait for cron to rerun it.
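
When shooting stuck workers, a sketch of what that might look like on a snapshot host (the script name worker.py is an assumption based on the xmldumps-backup layout; verify what is actually running before killing anything):

pgrep -af worker                 # list dump worker processes; adjust the pattern to what is actually running
sudo pkill -f worker.py          # kill them; the next cron run cleans up and resumes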

Error notifications

Email is ordinarily sent if a dump does not complete successfully, going to the ops-dumps@wikimedia.org alias. If you want to follow and fix failures, add yourself to that alias.

Logs are kept of each run. From any snapshot host, you can find the log for a given wiki and run at /mnt/data/xmldatadumps/private/<wikiname>/<date>/dumplog.txt. From these you may glean more about the reasons for the failure.

Logs covering everything else are available in /var/log/dumps/ and may also contain clues.
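
A quick sketch for digging through these (the wiki name and date are placeholders):

grep -i 'error\|fail' /mnt/data/xmldatadumps/private/enwiki/<date>/dumplog.txt   # per-run log for one wiki
grep -ri 'error' /var/log/dumps/ | tail -50                                      # general dump logs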

When one or more steps of a dump fail, the index.html file for that dump includes a notation of the failure and sometimes more information about it. Note that one step of a dump failing does not prevent other steps from running unless they depend on the data from that failed step as input.

Monitoring is broken

If the monitor does not appear to be running (the index.html file showing the dumps status is never updated), check which host should have it running (look for the host with profile::dumps::generation::worker::monitor in its role; at this writing, snapshot1007). The monitor is a service that should be restarted automatically via systemd or upstart, depending on the OS version, so if it has stopped you'll want to see what change broke it.
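
A sketch for tracking this down (the monitor's unit name is not given here, so search for it rather than assuming one):

grep -rn 'profile::dumps::generation::worker::monitor' manifests/ hieradata/   # in operations/puppet: which host runs it
systemctl list-units --all | grep -i dump                                      # on that host: find the service
journalctl -u <monitor-unit> --since '2 hours ago'                             # check why it stopped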

Rerunning dumps

You really really don't want to do this. These jobs run out of cron. All by themselves. Trust me. Once the underlying problem (bad MW code, unhappy db server, out of space, etc) is fixed, it will get taken care of.

Okay, you don't trust me, or something's really broken. See Dumps/Rerunning a job if you absolutely have to rerun a wiki/job.

A dump server (snapshot host) dies

If it can be brought back up within a day, don't bother to take any measures; just get the box back in service. If there are deployments scheduled in the meantime, you may want to remove it from scap targets for mediawiki: edit hieradata/common/scap/dsh.yaml for that.

If it's the testbed host (check the role in site.pp), just leave everything alone; no services will be impacted.

If it will take more than a day to be fixed, swap it for the testbed/canary box, and remove it from scap targets for mediawiki:

  • open manifests/site.pp and find the stanza for the broken snapshot host, grab that role
  • now look for the snapshot host with role(dumps::generation::worker::testbed), and put the broken host's role there
  • in hieradata/hosts, git mv the broken host's file to the testbed hostname, if there is such a file
  • edit hieradata/common/scap/dsh.yaml to remove the broken host as a mediawiki scap target
  • merge all the things (a command sketch follows this list)
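
A sketch of the same steps as commands, run from a checkout of operations/puppet (the hostnames are placeholders; substitute the actual broken and testbed hosts):

$EDITOR manifests/site.pp                # give the testbed host the broken host's role
git mv hieradata/hosts/snapshotNNNN.yaml hieradata/hosts/snapshotMMMM.yaml   # only if the file exists
$EDITOR hieradata/common/scap/dsh.yaml   # drop the broken host from the mediawiki targets
git commit -a -m 'swap broken snapshot host for the testbed host'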

A dumpsdata host dies

Coming soon...

A labstore host dies (web or nfs server for dumps)

These are managed by Wikimedia Cloud Services. Should this situation arise, someone on that team should carry out the procedure below.

At the time of writing there are two labstore boxes that we care about: one serves web to the public plus NFS to the stats hosts; the other serves NFS to Cloud VPS instances/Toolforge.

  • Determine which box went down. You can look at hieradata/common.yaml and the values for dumps_dist_active_web, dumps_dist_nfs_servers, and dumps_dist_active_vps for this (see the check after this list).
  • Remove the host from dumps_dist_nfs_servers.
  • Change dumps_dist_active_vps to the other server, if the dead server was the vps NFS server.
  • Change dumps_dist_active_web to the other server, if the dead server was NOT the vps NFS server (this means it was the stats NFS server, which is all that this setting controls).
  • Forcibly unmount the NFS mount for the dead host everywhere you can in Toolforge. Try Cumin first; if that fails, try clush for Toolforge. See #Notes on NFS issues and Toolforge load for more about this.
    • Hint: If using clush under pressure, try:
      clush -w @all 'sudo umount -fl /mnt/nfs/dumps-[FQDN of down server]'
      on tools-clushmaster-02.tools.eqiad.wmflabs
  • If the dead server was the web server:
    • Switch the values in hieradata/hosts/<deadhostname> and hieradata/hosts/<vps nfs hostname> so that the other server has do_acme: true. Without this, https will likely fail due to an expired certificate.
    • Change the 'dumps' entry in the DNS configuration, and deploy it to the authoritative DNS servers according to https://wikitech.wikimedia.org/wiki/DNS#authdns-update
    • Once that change has had some time to propagate (check the TTL), test to see that it successfully picked up a cert (checking https://dumps.wikimedia.org should work). Trying puppet runs on the working server might be helpful here.
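
Before making changes, a quick check of the current values from a checkout of operations/puppet:

grep -n -A 3 'dumps_dist_nfs_servers' hieradata/common.yaml
grep -n 'dumps_dist_active_web\|dumps_dist_active_vps' hieradata/common.yaml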

Notes on NFS issues and Toolforge load

Both hosts' NFS filesystems are mounted on all hosts that use either server for NFS, and the clients determine which NFS filesystem to use based on a symlink that varies from cluster to cluster. The dumps_dist_active_web setting only affects the symlink to the NFS filesystem on the stats hosts. Likewise, dumps_dist_active_vps only affects the symlink to the NFS filesystem on the VPSes (including Toolforge).
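
For example, on an NFS client you can check where the symlink currently points (the paths here are assumptions; the actual symlink and mount locations differ per cluster):

ls -l /public/dumps                      # where does the dumps symlink point? (path is a guess)
mount | grep -i dumps                    # which dumps NFS filesystems are mounted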

If the server is the vps NFS server (the value of dumps_dist_active_vps), Toolforge is probably losing its mind by now. The best that can be done is to remove it from dumps_dist_nfs_servers, change dumps_dist_active_vps to the working server, and unmount that NFS share everywhere you possibly can. The earlier this is done, the better. Load will be climbing like mad on every Cloud VPS server, including Toolforge nodes, the entire time; this may or may not stop once you have unmounted everything.