Services/Scap Migration

From Wikitech

All Node.js services should start using Scap3 as their deployment method as soon as possible. This document describes how to migrate from Trebuchet to Scap3. Before starting the process, you should schedule a migration window on the Deployments page and make sure you have support at that time from the relevant teams - ops && (services || releng).

Puppet Patches

Before actually converting your deploy repo to Scap3 (and using it for deployments), you need to add yourself to the scap3 deployment group and prepare your service's Puppet module. Create a patch for the operations/puppet repository with the following changes:

  • In modules/admin/data/data.yaml, find the definition of the deploy-service group and add everyone who will deploy the service to its members list.
  • In modules/<service-name>/manifests/init.pp, add the following line to the service::node resource:
deployment => 'scap3',
  • In hieradata/role/common/deployment_server.yaml, add your repository to the list of scap::sources to clone:
  # The empty hash ({}) uses the default params to scap::source.  See the scap::source docs for usage info.
  <service-name>/deploy: {}
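
For orientation, the group entry you are editing looks roughly like this (an illustrative excerpt only; the real file has more fields, and you should only append usernames to the existing members list):

  # modules/admin/data/data.yaml (illustrative excerpt)
  groups:
    deploy-service:
      members: [existing-deployer, your-shell-username]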

In order for your deployments to work in BetaCluster as well, you need to add the same line to hieradata/labs/deployment-prep/common.yaml under the scap::sources key.

If you used to deploy your service with Trebuchet, you also need to remove your service's definition from hieradata/common/role/deployment.yaml.

Commit your changes, send them for review, and have an Operations engineer ready to review them.

Deployment Repo

Next, you need to prepare the deploy repository for use with Scap3. Create the scap directory inside your deploy repository and put the following in scap/scap.cfg:

[global]
git_repo: <service-name>/deploy
git_deploy_dir: /srv/deployment
git_repo_user: deploy-service
ssh_user: deploy-service
server_groups: canary, default
canary_dsh_targets: target-canary
dsh_targets: targets
git_submodules: True
service_name: <service-name>
service_port: <service-port>
lock_file: /tmp/scap.<service-name>.lock

[wmnet]
git_server: deploy1002.eqiad.wmnet

[deployment-prep.eqiad.wmflabs]
git_server: deployment-tin.deployment-prep.eqiad.wmflabs
server_groups: default
dsh_targets: betacluster-targets
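
Scap reads scap.cfg as an INI-style file in which environment-specific sections (matched against the deployment host's domain) override the global defaults. A small self-contained sketch of that override behaviour, assuming ConfigParser-compatible INI semantics:

```shell
# Sketch only: demonstrate that a value in an environment section wins over
# the global default, using Python's stdlib configparser on an inline
# scap.cfg fragment.
server=$(python3 - <<'EOF'
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[global]
git_server: deploy1002.eqiad.wmnet
dsh_targets: targets

[deployment-prep.eqiad.wmflabs]
git_server: deployment-tin.deployment-prep.eqiad.wmflabs
dsh_targets: betacluster-targets
""")

env = 'deployment-prep.eqiad.wmflabs'
key = 'git_server'
# Prefer the environment section, falling back to [global].
value = cfg.get(env, key) if cfg.has_option(env, key) else cfg.get('global', key)
print(value)
EOF
)
echo "$server"
```

So on a Beta Cluster deployment host, deployments are served from deployment-tin rather than the production deployment server.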

This is the basic configuration Scap3 needs to deploy the service. We still need to tell Scap3 which nodes to deploy to and which checks to perform on each node after deployment. First, the list of nodes: create two files, scap/target-canary and scap/targets. In the former, put the FQDN of the node that will act as the canary deployment node, i.e. the node that receives the new code first; in the latter, put the remaining nodes. For example, if your target nodes are in the SCB cluster, these files should look like this:

$ cat target-canary 

$ cat targets

In the same vein, you need to create the scap/betacluster-targets file which will contain the FQDNs of the targets in BetaCluster.
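
Each of the three files is a plain newline-separated list of FQDNs, one host per line. A quick sketch with placeholder hostnames (substitute the real FQDNs of your cluster):

```shell
# Placeholder hostnames for illustration only; use your service's real
# target FQDNs.
mkdir -p scap
printf 'scb1001.eqiad.wmnet\n' > scap/target-canary
printf 'scb1002.eqiad.wmnet\nscb2001.codfw.wmnet\nscb2002.codfw.wmnet\n' > scap/targets
printf 'deployment-sca01.deployment-prep.eqiad.wmflabs\n' > scap/betacluster-targets
```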

Finally, define the checks that Scap3 runs on each node during deployment (depooling the node before the new code is promoted, repooling it after the service restart, and verifying the service's endpoints) by placing the following in scap/checks.yaml:

    checks:
      depool:
        type: command
        stage: promote
        command: depool-<service-name>
      repool:
        type: command
        stage: restart_service
        command: pool-<service-name>
      endpoints:
        type: nrpe
        stage: restart_service
        command: check_endpoints_<service-name>

Commit your changes, send them to Gerrit for review and merge them.


Before deploying for the first time, make sure the operations/puppet changes have been merged and that Puppet has run on all of your target nodes as well as on the deployment server (deploy1002.eqiad.wmnet). You can then deploy your service by following the deployment guide.
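
As a rough sketch only (the deployment guide is authoritative, and the log message is arbitrary), a first deployment from the deployment server looks like:

$ ssh deploy1002.eqiad.wmnet
$ cd /srv/deployment/<service-name>/deploy
$ git pull && git submodule update --init
$ scap deploy 'Initial Scap3 deployment of <service-name>'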