Data Engineering/Systems/Cluster/Deploy/Refinery

Refinery is the software infrastructure used on the Analytics Cluster. The source code lives in the analytics/refinery repository.

How to deploy

Refinery now uses Scap, so a deployment is as simple as:

  • IMPORTANT: wait about 5 minutes if you have just done a refinery-source build. A cron job creates the git-fat symlinks, and scap will fail if they are missing. If it does fail, the repository on the deploy target will be left in trouble (e.g. stat1007's /srv/deployment/analytics/refinery will be in an inconsistent state and you will need to run git fat pull or similar; see the recovery sketch after this list).
  • First, ssh to the deployment host: ssh deployment.eqiad.wmnet
  • Make sure scap config is up to date:
    cd /srv/deployment/analytics/refinery/scap
    git pull
  • Tell the #wikimedia-analytics and #wikimedia-operations IRC channels that you are deploying (using !log, for instance).
  • Create a screen/tmux session to prevent network failures from ruining your deployment.
  • Run:
    cd /srv/deployment/analytics/refinery
    git pull
    scap deploy "Regular analytics weekly train [analytics/refinery@$(git rev-parse --short HEAD)]"
    scap deploy -e thin "Regular analytics weekly train THIN [analytics/refinery@$(git rev-parse --short HEAD)]"
    scap deploy -e hadoop-test "Regular analytics weekly train TEST [analytics/refinery@$(git rev-parse --short HEAD)]"
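
If scap did fail because of missing git-fat symlinks and a target such as stat1007 was left in an inconsistent state, a manual clean-up along the following lines usually helps (a sketch only; the jar path is illustrative, pick any artifact you expect to be present):

  ssh stat1007.eqiad.wmnet
  cd /srv/deployment/analytics/refinery
  # Re-fetch the binary artifacts managed by git-fat
  git fat pull
  # Spot-check that an artifact is a real binary rather than a small git-fat stub
  file artifacts/refinery-job.jar   # illustrative path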

Scap will deploy to the canary host first (stat1007) and then if everything goes fine it will ask you to proceed with the rest of the hosts. If you want to double check the status of the deployment, you can run scap deploy-log and check what is happening. Please note that Scap will create another copy of the repository after each deployment, and it will use symlinks to switch versions.
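
For example, to follow a deployment in progress from a second terminal on the deployment host:

  ssh deployment.eqiad.wmnet
  cd /srv/deployment/analytics/refinery
  # Show the log of the current (or most recent) scap deployment
  scap deploy-log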

  • Make sure that all the deployments finished and went fine. Roll back in case of fireworks.
    • WARNING: while the rollback process is generally safe, a rollback following git-fat related issues might leave the refinery working directory in a stale state. This in turn will prevent rolled-back dependencies from being deployed. If unsure how to clean up the working directory, reach out to the team in #wikimedia-analytics (IRC).
  • After the deployment, ssh into an-launcher1002 (or an-test-coord1001 for the test cluster).
    • Make sure to wait until the entire scap deployment is completed before sshing to an-launcher1002. If you cd into the refinery directory before the symlinks change, you will still see the old git log and be very confused.
    • Change dir into /srv/deployment/analytics/refinery and check (git log) that the new code has been pulled. (If you get an error about "dubious ownership", first run sudo -u analytics-deploy /usr/bin/git log -n 1.)
    • Create a screen/tmux session to prevent network failures from interfering with the execution of the following command.
    • Run:
      /srv/deployment/analytics/refinery/bin/refinery-deploy-to-hdfs --verbose --no-dry-run
      This step copies the refinery code to HDFS (see the verification sketch below).
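
To double-check that the deploy landed, something like the following can be run on an-launcher1002 (the HDFS path is an assumption about refinery-deploy-to-hdfs's default target; confirm it with the script's --help if unsure):

  cd /srv/deployment/analytics/refinery
  # Confirm the working copy is at the revision you just deployed
  sudo -u analytics-deploy /usr/bin/git log -n 1
  # Assumed HDFS location: check that a new versioned directory exists
  # and that the 'current' link points at it
  hdfs dfs -ls /wmf/refinery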

Deploying to clouddumps* hosts

We deploy refinery to the clouddumps* hosts separately from our main deploy targets, skipping git-fat objects to save disk space. The clouddumps nodes need the hdfs-rsync script to regularly pull public data from Hadoop. At the moment the scap "thin" environment deploys to the clouddumps* hosts:

scap deploy -e thin
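
For context, the kind of pull the clouddumps hosts run looks roughly like the sketch below; the options and paths are purely illustrative (check bin/hdfs-rsync --help for the real interface):

  # Hypothetical hdfs-rsync invocation: mirror a public HDFS dataset to local disk
  /srv/deployment/analytics/refinery/bin/hdfs-rsync -r --delete \
      hdfs:///wmf/data/archive/some-public-dataset/ \
      file:///srv/dumps/some-public-dataset/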

Deploying to Hadoop test

On the deployment machine: scap deploy -e hadoop-test

Then you need to upload refinery to HDFS from an-test-coord1001 (see above for the refinery-deploy-to-hdfs command); a recap of both steps follows below.
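
Putting the two steps together, a test-cluster deploy looks roughly like this (the log message wording is just an example):

  # On deployment.eqiad.wmnet
  cd /srv/deployment/analytics/refinery
  scap deploy -e hadoop-test "Refinery deploy to the test cluster [analytics/refinery@$(git rev-parse --short HEAD)]"

  # Then, on an-test-coord1001
  /srv/deployment/analytics/refinery/bin/refinery-deploy-to-hdfs --verbose --no-dry-run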

How to deploy Oozie jobs

You can find test / production deployment information here: Analytics/Cluster/Oozie/Administration

For a tutorial / introduction to Oozie, read this page first: Analytics/Cluster/Oozie.