User:Martyav/Portal:Pipeline (staging)

This page serves as a testing ground for work related to the pipeline portal project. The title of this page will change when it is moved to the main namespace. For current and definitive information, please see Deployments, Deployment pipeline, and Help:Cloud Services Introduction.


Welcome to the Wikimedia deployment pipeline!

Work on the deployment pipeline has been under way since mid-2017 and is expected to take several years to complete. It is currently managed under the TEC3 Annual Plan program for 2018–19, and was formerly known as Streamlined Service Delivery.

The intent of the pipeline program is to migrate from a continuous integration workflow to continuous delivery and, eventually, to continuous deployment. To implement this, we are moving our pipeline onto Kubernetes and switching from our in-house scap deployment tool to Helm.
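To make the switch from scap to Helm concrete, here is a minimal, hypothetical sketch (in Python, shelling out to the Helm CLI) of what an install-or-upgrade step for a single service could look like. The release name, chart path, namespace, and image tag are placeholders invented for illustration; they do not describe the pipeline's actual charts or clusters.

# Hypothetical sketch only: release, chart, namespace, and tag are placeholders.
import subprocess

def deploy_service(release: str, chart: str, namespace: str, image_tag: str) -> None:
    """Install or upgrade one service release with Helm.

    'helm upgrade --install' creates the release if it does not exist yet,
    or rolls the running release forward to the new image tag.
    """
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace,
            "--set", f"image.tag={image_tag}",
            "--wait",      # block until the new pods report ready
        ],
        check=True,        # raise if Helm exits with an error
    )

if __name__ == "__main__":
    deploy_service("my-service", "./charts/my-service", "my-service", "2019-01-15-120000")

Because the desired state lives in the chart and its values rather than in an imperative deploy step, rolling back amounts to re-releasing a previous chart/values combination (for example with helm rollback).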

Documentation
  • Get a basic understanding of what we do.
  • We have a weekly schedule of deployments. See what we're up to, or sign up for a deployment window.
  • To work on the pipeline, you need to sign up for a few things first.
  • Ready to get to work? Learn how to create and deploy a new service to the pipeline.
  • We have numerous additional how-to's describing how to deploy different things. For a basic rundown, see How to deploy code.
  • Get familiar with the tools that we use.
  • Understand the ideas behind the pipeline.
  • Learn more about the terminology that we use.
  • Wikimedia deployment has a long and storied history. Read about the deprecated stuff here!

Background

As of January 2019, almost all Wikimedia-deployed code is manually deployed, in various forms:

  • MediaWiki – Code in MediaWiki, its skins and extensions, and dependent libraries (collectively, the monolith) is deployed manually, either via a bulk weekly "train" or by cherry-picked back-ports. One or two flavours of the monolith are in production at a time, with each wiki served entirely by one flavour or the other (0% or 100%) as the roll-out progresses.
  • MediaWiki config – Configuration for the Wikimedia sites is deployed entirely manually. Only one version of the config repo runs in production at once.
  • MediaWiki services – Services code, like Parsoid, the mobile content service, or the maps service, is deployed via a per-tool operation: a service deployer creates a commit combining recently-merged code and any config changes, applies it to a special deployment repository, and triggers its release into production (see the sketch after this list). Only one version of a service runs in production at once, except for short deployment periods with canary servers and a phased roll-out over a few minutes.
  • Operations – A mix: Puppet manifests are auto-deployed on merge as a back-up, but are mostly applied manually by the person who merges them. Other repos, like DNS configuration, are deployed only manually. Only one version of these runs in production at once.
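
For contrast with the Helm sketch above, the per-tool service workflow described in the MediaWiki services item might look roughly like the following. This is a hypothetical outline only: the deployment repository path, the submodule name, and the git and scap invocations are illustrative assumptions, and each real service documents its own exact steps.

# Hypothetical outline of the manual, per-tool service deployment described above.
# The repository location, submodule name, and commands are placeholders, not a real runbook.
import subprocess

DEPLOY_REPO = "/srv/deployment/my-service/deploy"   # assumed deployment repo checkout

def run(cmd):
    """Run a command inside the deployment repository, failing loudly on error."""
    subprocess.run(cmd, cwd=DEPLOY_REPO, check=True)

def deploy(log_message: str) -> None:
    # 1. Create a commit in the deployment repo that pins the recently merged code
    #    (modelled here as a submodule bump) plus any config changes.
    run(["git", "submodule", "update", "--remote", "--", "src"])
    run(["git", "commit", "-a", "-m", log_message])
    # 2. Trigger the release into production; the canary and phased roll-out
    #    behaviour mentioned above happens during this step.
    run(["scap", "deploy", log_message])

if __name__ == "__main__":
    deploy("Update my-service to the latest merged code")

In a real deployment the service deployer would typically also review what is about to go out and watch the canary phase before letting the roll-out continue.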