
Services/FirstDeployment

From Wikitech

This page describes how to deploy your first service. It does not cover how to sign up for an account, write code, engage in ongoing projects, or what to do after you've deployed your service.

For what you need to do to join us in deploying services, see Help:Getting Started.

For information on coding new Mediawiki extensions, see mw: Review queue.

For how to deploy to ongoing projects, see Services/Deployment and How to deploy code.

For help with your service once it's been deployed, see How to perform security fixes and SWAT deploys.

Repositories

When deploying a new service, two repositories are necessary, because the Node.js dependencies shipped with the service need to be pre-built for production.

First, you need a source repository, containing the source code for your service.

Second, you need a deploy repository, which will mirror the code actually deployed in production.

It is very important that the deploy repository mirrors the code deployed in production at all times.

The deploy repository must only be updated directly before deploying or re-deploying the service. Do not update it every time you merge a patch to the master branch of the source repository.

All services are required to be hosted on our Gerrit servers. Gerrit does not have to be your primary development tool, though we strongly encourage its use.

When requesting the repositories for your service, ask for the source repository to be a clone of the service template, or of your own service repository. Ask for the deploy repository to be empty.

The source repository should have the path mediawiki/services/your-service-name, and the deploy repository should have the path mediawiki/services/your-service-name/deploy.

The remainder of this guide assumes that the two repositories have been created, and that you have cloned them using your Gerrit account, i.e. not anonymously, with the following outline:

~/code/
  |- your-service
  `- deploy

Source repo configuration

If you opt to use our service template for the source repository, it includes a script which automatically updates the deploy repository.

However, this script needs to be configured before it will work.

Package.json

First, you need to update the source repository's package.json file. Find the deploy field inside of this file. It contains three interesting properties, the most important of which is dependencies.

Dependencies property

The dependencies property of the deploy field must be kept up to date at all times.

Whenever you need to install extra packages into your development environment to satisfy node module dependencies, make sure to add them to the dependencies property. This will ensure that the deploy repository can successfully build and update.

The _all field of the dependencies property denotes packages which should be installed regardless of the target distribution. You can also add other, distribution-specific package lists, e.g.:

"deploy": {
  "target": "ubuntu",
  "node": "system",
  "dependencies": {
    "ubuntu": ["pkg1", "pkg2"],
    "debian": ["pkgA", "pkgB"],
    "_all": ["pkgOne", "pkgTwo"]
  }
}

In this example, packages pkg1, pkg2, pkgOne and pkgTwo are going to be installed before building the list of dependencies. If, instead, the target is changed to debian, then pkgA, pkgB, pkgOne and pkgTwo are selected.

Other properties

Depending on the machine that your service will be deployed on, you may need to set the target property of the deploy field to indicate the expected distribution; most likely ubuntu or debian.

To specify a version of Node.js different from the official distribution package, set the value of the node property, following the nvm version naming scheme. To explicitly use the official distribution package, set the value of node to system.
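
For example, to pin the service to a specific Node.js version rather than the distribution package, the deploy field could look like this (the version number is hypothetical; use the nvm-style name of the version your service needs):

```json
"deploy": {
  "target": "debian",
  "node": "6.11.1",
  "dependencies": {
    "_all": []
  }
}
```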

Local git repository

You also need to tell the script where to find the local copy of the deploy repository. This is done by setting a path in the git configuration.

To set the path to the local deploy repo, go into your local source repository, and run the following command:

$ git config deploy.dir /absolute/path/to/deploy/repo
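
To double-check that the setting was stored, you can read it back with git config --get; the paths below are hypothetical:

```shell
# Inside your local source repository clone (hypothetical path)
cd ~/code/your-service
# Point the deploy-repo update script at your local deploy checkout
git config deploy.dir "$HOME/code/deploy"
# Read the value back to verify it was stored
git config --get deploy.dir
```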

Deploy repo set-up

If you haven't done so already, initialise the deploy repository:

$ cd ~/code/deploy
$ git review -s
$ touch README.md
$ git add README.md
$ git commit -m "Initial commit"
$ git push -u origin master  # or git review -R if this fails
# go to Gerrit and +2 your change, if needed and then:
$ git pull

Next, prepare the deploy repository for our deployment script, Scap3. Create a scap directory inside your deploy repository, and make a file named scap.cfg. The scap.cfg file should contain the following:

[global]
git_repo: <service-name>/deploy
git_deploy_dir: /srv/deployment
git_repo_user: deploy-service
ssh_user: deploy-service
server_groups: canary, default
canary_dsh_targets: target-canary
dsh_targets: targets
git_submodules: True
service_name: <service-name>
service_port: <service-port>
lock_file: /tmp/scap.<service-name>.lock

[wmnet]
git_server: deploy1001.eqiad.wmnet

[deployment-prep.eqiad.wmflabs]
git_server: deployment-tin.deployment-prep.eqiad.wmflabs
server_groups: default
dsh_targets: betacluster-targets

This is the basic configuration needed by Scap3 to deploy the service. We still need to tell Scap3 which nodes to deploy on, and which checks to perform after deployment on each node.

Listing nodes

Create a new file in the scap directory, named target-canary. This is where you need to put the fully qualified domain name, or FQDN, of the node that will act as the canary deployment node. This is the node that will receive any new code first.

For example, if your target nodes are in the SCB cluster, the file should look like this:

$ cat target-canary 
scb1002.codfw.wmnet

The complete list of all targets is ops-controlled and is derived from Puppet and conftool. Ask ops to set it up before you do your first production deploy.

In the same vein, you need to create a scap/betacluster-targets file, which will contain the FQDNs of the targets in Beta Cluster. Beta Cluster is a production-like environment for testing new code.
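
For example, a scap/betacluster-targets file could look like this (the hostname is hypothetical; use the actual FQDNs of your Beta Cluster targets):

```shell
$ cat scap/betacluster-targets
deployment-scb01.deployment-prep.eqiad.wmflabs
```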

Enabling the checker script

Finally, create a file for the automatic checker script to run after each deployment. Place it in the scap directory, and name it checks.yaml.

checks:
  depool:
    type: command
    stage: promote
    command: depool-<service-name>
  endpoints:
    type: nrpe
    stage: restart_service
    command: check_endpoints_<service-name>
  repool:
    type: command
    stage: restart_service
    command: pool-<service-name>

Commit your changes locally, submit them to Gerrit for review, and merge them once approved.
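
Assuming the repository layout from earlier in this guide, the commit-and-review step might look like this (the commit message is illustrative):

```shell
cd ~/code/deploy
# Stage the Scap3 configuration created above
git add scap/
git commit -m "Add Scap3 configuration"
# Submit the change to Gerrit for review
# (requires git-review, set up earlier with 'git review -s')
git review -R
```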

Docker

The deployment process includes a script that builds the deployment repository using Docker containers, so make sure you have the latest version installed.

Additionally, you need to add your user to the `docker` group after installation. This is so that you don't need to use `sudo` when running the build script:

$ sudo usermod -a -G docker <your-user>

You will need to log out of all terminals in order for the change to take effect.

New service request

Before your service can go live, operations needs to take care of a few more things: machine allocation, IPs, LVS, etc.

To request help from the operations team, file a ticket in the service-deployment-requests project in Phabricator.

The ticket should contain the following information:

  • Name - the name of the service you wish to deploy
  • Description - a paragraph clearly explaining what the service does, and why it is needed
  • Timeline - the desired deployment timeline; note that you should allow a reasonable amount of time (typically at least a quarter if the deployment was not previously planned)
  • Point person - the person responsible for the service; this is the person that will get called when there are problems with the service while it is running in production
  • Technologies - additional information about the service itself, including, but not limited to, the language used for development and any frameworks used
  • Request flow diagram - a link to a diagram that explains the interaction between your service and any other parts of the operational stack inside the production cluster, such as requests made to MediaWiki, RESTBase, etc.

Example requests

See task T105538, task T117560, and task T128463 for good examples of successful new service requests.

Role and profile creation

While you are waiting for the service request to be completed, do not fear: you still have useful things to do. You may start by creating your service's Puppet role and profile in the operations/puppet repository.

First, add your service's deploy repository to the list of repositories deployed in production, by appending the following block to hieradata/common/role/deployment.yaml (note the extra spaces at the beginning of each line):

  <service-name>/deploy:
    upstream: https://gerrit.wikimedia.org/r/mediawiki/services/<service-name>/deploy
    checkout_submodules: true

Next, create modules/profile/manifests/<service-name>.pp; it should contain the following:

# == Class: profile::<service-name>
#
# Describe the service here ...
#
# === Parameters
#
# [*param_name1*]
#   Description of param_name1
#
# [*param_name2*]
#   Description of param_name2
#
class profile::<service-name>(
    $param_name1 = hiera('profile::<service-name>::param_name1'),
    $param_name2 = hiera('profile::<service-name>::param_name2'),
) {

    service::node { '<service-name>':
        port            => <service-port>,
        config          => {
            param_name1 => $param_name1,
            param_name2 => $param_name2,
        },
        healthcheck_url => '',
        has_spec        => true,
        deployment      => 'scap3',
    }

}

Note that only configuration specific to your service should be listed here, and not the whole configuration file, i.e. only the configuration parameters that your service code accesses via app.conf.*.

Instead of inlining it directly in the module, you can also store the configuration as an ERB YAML template in modules/<service-name>/templates/config.yaml.erb. Then, simply use it directly as the config parameter for the service::node resource like so:

        config          => template('<service-name>/config.yaml.erb'),
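
A minimal config.yaml.erb might look like the following sketch; the parameter names are hypothetical and must match both the profile class parameters and what your service reads via app.conf:

```erb
# modules/<service-name>/templates/config.yaml.erb
# Class parameters of the calling profile are available as instance variables
param_name1: <%= @param_name1 %>
param_name2: <%= @param_name2 %>
```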

You will also need a role for your service. Put the following code fragment in manifests/role/<service-name>.pp:

# Role class for <service-name>
class role::<service-name> {

    system::role { 'role::<service-name>':
        description => 'short description',
    }

    include ::profile::<service-name>
}

and add the values for the Hiera lookups inside hieradata/role/common/<service-name>.yaml as follows:

profile::<service-name>::param_name1: "some-value"
profile::<service-name>::param_name2: "some-other-value"

You can now submit the patch for review. Don't forget to mention the service request bug in your commit message.

Access rights

As the service owner and maintainer, you need to be able to log onto the nodes where your service is running. Once the exact list of target nodes is known, you need to file an access request ticket with the following information:

  • Title - Access request for <list-of-maintainers> for <service-name>
  • Description - <list-of-maintainers> needs access to <list-of-nodes> for operating <service-name>. We need to be able to read the logs at /srv/log/<service-name> and be able to start/stop/restart it. The task asking for the service's deployment is {<service-request-task-number>}

This request implies sudo rights on the target nodes, so you will need approval from your manager on this task.