Kubernetes Infrastructure upgrade policy

This document describes the proposed and agreed-upon policy for upgrading the components of the Kubernetes infrastructure(s) at Wikimedia.

Introduction

This document proposes a policy for updates of the various components of the Wikimedia Kubernetes infrastructure(s). The upgrade process is not yet clearly defined or streamlined; defining it would reduce technical debt and allow the Wikimedia Foundation to benefit from upstream innovations more quickly.

The Kubernetes infrastructure(s) in this document are scoped to only those in the production realm of the Wikimedia infrastructure.

At the time of this writing those are:

  • Deployment pipeline production clusters
  • Deployment pipeline staging cluster

At the time of this writing the following future endeavors were known:

  • A 2nd deployment pipeline staging cluster in codfw
  • 1 Kubeflow powered serving cluster per DC, to be owned by the Machine Learning Platform team
  • 1 Kubeflow powered model training cluster (still unclear if there will be 1 per DC), to be owned by the Machine Learning Platform team

Problem statement

There are a number of reasons why we need to upgrade the components of our Kubernetes software stack:

  • To ensure the security of the site and our users’ data, we need to keep the infrastructure free of known security vulnerabilities. In contrast to scheduled maintenance updates, these updates cannot be planned ahead of time and happen based on when security issues are found and disclosed. While our current clusters aren’t exposed to the public and access to them requires already provisioned access and credentials, applying security updates ensures that the infrastructure does not become a stepping stone for an attacker.
  • For our software deployments we also want to benefit from ongoing upstream development and provide new features for our users. Kubernetes in particular, among the components described below, is fast-moving software, constantly adding new functionality and features.
  • Using current releases also matters for our deployment pipeline support and for testing by our deployers. We have found ourselves running components in production that were several versions behind the ones currently recommended by upstream, and those had breaking API changes standing in the way of users testing our Helm charts.
  • Supporting old versions of components comes at a significant, albeit not very visible, internal cost resulting in technical debt. The older our versions are, the more difficult it is to get support in debugging issues, meaning we have to fend for ourselves more than necessary.

As of November 2020, the components of our Kubernetes infrastructures are the following:

  • Kubernetes. Currently running 1.12 and 1.13, with the latest release being 1.20 and the last supported being 1.16.
  • Calico, our networking plugin of choice. Currently running 2.2, while 3.16 was recently released.
  • Helm, our deployment tooling of choice. Currently running 2.16.2; since November 13 2020 we are out of support per Helm’s version support policy.
  • Helmfile, wrapping Helm so we can have deployment state in git. No stated support policy for this one, but we currently run 0.125.2.

Background

Kubernetes release cadence and support

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes is very fast-moving software. The release cycle up to now has been one new release every quarter, with support for the three most recent releases (meaning 9 months of support). While that is quite brutal, there have been efforts to increase the support window to 12 months, and it seems 1.19 is going to be the first version to provide that. We currently have 1.12 and 1.13 in the in-scope clusters.

Calico release cadence and support

Calico is an open source networking and network security solution for containers, virtual machines, and bare-metal workloads. Calico uses standard Linux networking tools to provide two major services for Cloud Native applications:

  • Network connectivity between workloads.
  • Network security policy enforcement between workloads.

Calico is the open source part of Calico Enterprise. It does not have a stated release cadence or support scheme. As such, it is difficult to target a specific release. Up to now, to avoid issues, we have only one component (calico-policy-controller) binding Calico to Kubernetes, via the networking.k8s.io API (the NetworkPolicy object), which is marked generally available (GA).

On their requirements page, they list their supported Kubernetes versions. The history needs a bit of git log --follow to fully unravel, but they’ve been bumping tested releases in groups of 1 or 2 at a time (e.g. d7cadb989 or fab405d8) and relatively closely following upstream Kubernetes development.

At the time of writing, Calico 3.16 supports Kubernetes 1.16, 1.17 and 1.18; Kubernetes 1.19 support seems to be slated for Calico 3.17.
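
For illustration of that single point of coupling, the sketch below creates a NetworkPolicy object through the official Kubernetes Python client. The namespace, labels and port are made up for the example and not taken from our clusters.

```python
# Minimal sketch: creating a networking.k8s.io/v1 NetworkPolicy, the GA API
# object that calico-policy-controller watches and enforces. The namespace,
# labels and port below are made-up example values.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-from-frontend"),
    spec=client.V1NetworkPolicySpec(
        # Pods this policy applies to.
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # Only allow traffic from pods labelled app=frontend ...
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "frontend"}))],
                # ... and only on TCP port 8080.
                ports=[client.V1NetworkPolicyPort(port=8080, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="example", body=policy)
```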

Helm release cadence and support

Helm is an open source software for packaging and deploying applications on Kubernetes. We adopted it early on as it was the de facto standard.

Helm has a version support policy and follows Semantic Versioning. To align with Kubernetes releases, a minor Helm release is made every 4 months (3 releases a year). The two major versions currently in support are version 2 and version 3. As of Helm 3, Helm is assumed to be compatible with n-3 versions of the Kubernetes release it was compiled against. Due to Kubernetes' changes between minor versions, Helm 2's support policy is slightly stricter, assumed to be compatible only with n-1 versions of Kubernetes. It is not recommended to use Helm with a version of Kubernetes newer than the one it was compiled against, as Helm does not make any forward compatibility guarantees. They publish a version skew table that can be found at https://helm.sh/docs/topics/version_skew/

Per that table, major version 2 (we are at 2.16.2) supports up to Kubernetes 1.16, while version 3 (3.4.x) supports up to 1.19.
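
As a rough illustration of those skew rules, here is a small sketch that hard-codes the n-3 / n-1 windows described above; the "compiled against" minors are assumptions for the example.

```python
# Rough sketch of the Helm/Kubernetes skew rules described above: Helm 3 is
# assumed compatible with the Kubernetes minor it was compiled against and the
# three minors below it (n-3); Helm 2 only with n-1. The compiled-against
# minors used below are illustrative assumptions.

def supported_k8s_minors(helm_major: int, compiled_against: int) -> list:
    """Return the Kubernetes 1.x minors a given Helm release supports."""
    window = 3 if helm_major >= 3 else 1
    return list(range(compiled_against - window, compiled_against + 1))

# Helm 2.16 compiled against Kubernetes 1.16 -> supports 1.15 and 1.16.
print(supported_k8s_minors(2, 16))   # [15, 16]
# Helm 3.4 compiled against Kubernetes 1.19 -> supports 1.16 through 1.19.
print(supported_k8s_minors(3, 19))   # [16, 17, 18, 19]
```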

Helmfile release cadence and support

Helmfile is an open source project that allows declaratively tracking the deployed Helm charts in YAML files. Benefits are:

  • Keep a directory of chart value files and maintain changes in version control.
  • Apply CI/CD to configuration changes.
  • Periodically sync to avoid skew in environments.
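
As a minimal sketch of what that declarative tracking looks like in practice, the snippet below parses a heavily simplified, made-up helmfile-style document and lists the pinned releases; the release name, chart and version are invented for the example.

```python
# Sketch: what "deployment state in git" looks like in practice. A
# helmfile-style YAML document pins every release to a chart version and a
# values file; the content below is invented for the example.
import yaml

HELMFILE = """
releases:
  - name: example-service
    namespace: example
    chart: wmf-stable/example-chart
    version: 0.1.7
    values:
      - values/example-service.yaml
"""

for release in yaml.safe_load(HELMFILE)["releases"]:
    print(f'{release["namespace"]}/{release["name"]}: '
          f'{release["chart"]} {release["version"]}')
```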

It does not have a stated release cadence or support cycle. It does follow Semantic Versioning 2.0.0 but is still in an early stage of development and versioned 0.x.

Helmfile does not talk directly to Kubernetes APIs so it’s a component that’s less likely to break after an upgrade. However, it is developing rapidly and is more likely to have various bugs.

Problems with the upgrade process

The upgrade process itself hasn't been particularly problematic so far. We have live-upgraded all the way from 1.6 to 1.12, one release at a time, without encountering issues. However, as we tie our work more and more closely to the Kubernetes infrastructure and adopt or build more tools that use the API or the libraries, this is bound to break. Up to now we have consciously avoided such coupling (e.g. by not yet adopting ingresses or service meshes, and by not providing developers write access to the API) in order to minimize risk during upgrades, but this is clearly going to change.

Kubernetes component upgrades share the same build problems that afflict Golang software in general and other large cloud-native projects such as Envoy. The build process is heavily dependent on Docker, with external Docker images fetched from various image registries and a swarm of, in some cases fast-moving, vendored dependencies. Untangling it is a monumental task that not even distros have managed to bring to fruition. The most recent discussion in Debian is at #971515.

These build process issues have resulted in the creation of one-off build environments in the WMCS infrastructure (and that only because it’s readily available), which are not easily reconstructible. Those add nothing substantial to the build process aside from internet access, which isn’t desirable on production machines. No extra vetting of the code takes place, and we end up using various tricks to add the required vendor directories to the Debian packages, consuming time and effort.

Policy proposal

Version choices

Follow security-supported Kubernetes releases as closely as possible, while making sure to satisfy the version requirements of all components. In practice this means we will always use a version that is on upstream's currently supported list, unless we have some requirement that cannot otherwise be met.
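
As a toy illustration of that rule, the sketch below picks the newest Kubernetes minor that is both upstream-supported and compatible with every other component. The compatibility sets are a simplified example based on the versions discussed above, not an authoritative matrix.

```python
# Toy sketch of the version-choice rule: pick the newest Kubernetes 1.x minor
# that is still upstream-supported AND satisfied by every other component.
# The sets below are simplified example data, not an authoritative matrix.

upstream_supported = {16, 17, 18, 19}        # 1.16 ... 1.19

component_supports = {                        # minors each component can work with
    "calico 3.16": {16, 17, 18},
    "helm 3.4":    {16, 17, 18, 19},
}

candidates = upstream_supported.intersection(*component_supports.values())
print(f"target: 1.{max(candidates)}")         # -> target: 1.18
```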

Using existing upstream binaries

All the components listed above release Linux amd64 binaries. We will utilize those, putting them in Debian packages via a simplified debian/rules process that fetches them and verifies them as much as possible (cryptographic signatures, advertised checksum hashes). Wrapping them in Debian packages maintains the current status quo and lets us continue to utilize our Debian-based infrastructure.
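
A minimal sketch of the fetch-and-verify step is below; the URL and checksum are placeholders, and the real logic would live in debian/rules and would additionally verify the upstream GPG signatures where available.

```python
# Minimal sketch of the fetch-and-verify step described above. The URL and
# checksum are placeholders; the real logic would live in debian/rules and
# would also verify the upstream GPG (.asc) signatures where available.
import hashlib
import urllib.request

TARBALL_URL = "https://example.org/helm-v3.4.0-linux-amd64.tar.gz"  # placeholder
ADVERTISED_SHA256 = "0000...0000"          # value from upstream's checksum file

# On production builder hosts, outbound fetches would go through the webproxy
# cache hosts (e.g. via the https_proxy environment variable).
with urllib.request.urlopen(TARBALL_URL) as resp:
    data = resp.read()

digest = hashlib.sha256(data).hexdigest()
if digest != ADVERTISED_SHA256:
    raise SystemExit(f"checksum mismatch: got {digest}")
print("checksum OK, unpacking into the package build tree")
```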

Utilizing production builder hosts

With the above out of the way, we’ll be able to move back to building the packages for the components we are interested in on production builder hosts. We’ll utilize the webproxy cache hosts sparingly, to fetch the resources we require (e.g. .tar.gz files containing the binaries, .asc signature files, or checksums).

Tracking component version requirements

We will track component versions manually for now, but will consider streamlining the process in the future, e.g. with a spreadsheet that is updated in a triggered but automatic way.
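
A possible starting point for such automation is sketched below. The GitHub release endpoint is real, but the list of deployed versions is a hand-maintained assumption and would need to come from a proper source of truth.

```python
# Sketch of a possible automated check: compare the versions we run against
# upstream's latest GitHub releases. The deployed-version dict is hand-
# maintained example data and would need a real source of truth.
import json
import urllib.request

DEPLOYED = {                          # what we currently run (example values)
    "kubernetes/kubernetes": "v1.13.0",
    "projectcalico/calico":  "v2.2.0",
    "helm/helm":             "v2.16.2",
    "roboll/helmfile":       "v0.125.2",
}

for repo, running in DEPLOYED.items():
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:
        latest = json.load(resp)["tag_name"]
    marker = "OK" if running == latest else "BEHIND"
    print(f"{repo}: running {running}, latest {latest} [{marker}]")
```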