User:Martyav/Portal:Pipeline (staging)
Material may not yet be complete, information may presently be omitted, and certain parts of the content may be subject to radical, rapid alteration. More information pertaining to this may be available on the talk page.
This page serves as a testing ground for work related to the pipeline portal project. The title of this page will change when it is moved to main. For current and definitive information, please see Deployments, Deployment pipeline and Help:Cloud Services Introduction.
Welcome to the Wikimedia deployment pipeline!
The deployment pipeline has been running since mid-2017 and is expected to take some years to complete. It is currently managed under the TEC3 Annual Plan program for 2018–19, and was formerly known as Streamlined Service Delivery.
The intent of the pipeline program is to migrate from a continuous integration workflow to continuous delivery, and, eventually, to continuous deployment. To implement this, we're moving our pipeline to Kubernetes and switching from our in-house scap deployment tool to Helm.
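To illustrate the shift, a Helm-based deployment replaces a push-style tool like scap with a declarative release against a Kubernetes cluster. The sketch below is hypothetical: the release name, chart path, and value key are illustrative placeholders, not actual Wikimedia chart names, and it assumes an already-configured Kubernetes context.

```shell
# Hypothetical sketch of a Helm release replacing a scap push.
# "mobileapps", the chart path, and "image.tag" are illustrative
# assumptions, not real Wikimedia deployment-chart names.
helm upgrade --install mobileapps ./charts/mobileapps \
  --namespace mobileapps \
  --set image.tag=2019-01-15-production
```

`helm upgrade --install` is idempotent: it creates the release if it does not exist and upgrades it otherwise, and a bad release can be reverted with `helm rollback`, which is part of what makes the continuous-delivery model practical.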
Background
As of January 2019, almost all Wikimedia-deployed code is manually deployed, in various forms:
- MediaWiki – Code in MediaWiki, its skins and extensions, and dependent libraries (collectively, the monolith) is deployed manually, either via a bulk weekly "train" or by cherry-picked back-ports. Between one and two flavours of the monolith run in production at once, with each wiki served either 0% or 100% by the newer flavour as the roll-out proceeds.
- MediaWiki config – Wikimedia site configuration is entirely manually deployed. Only one version of the config repo runs in production at once.
- MediaWiki services – Services code, like parsoid, the mobile content service, or the maps service, is deployed via a per-tool operation, where a service deployer creates a commit combining recently-merged code and any config changes, applies it to a special deployment repository, and triggers its release into production. Only one version of a service runs in production at once, except for short deployment periods with canary servers and phased roll-out over a few minutes.
- Operations – A mix; Puppet manifests are auto-deployed on merge as a back-up, but deployment is mostly done manually by the merger. Other repos, such as DNS configuration, are only deployed manually. Only one version of these repos runs in production at once.