Kubernetes/Images


OCI Images

We only allow running images from the production Docker registry, aka docker-registry.wikimedia.org (aka docker-registry.discovery.wmnet inside production networks), which is publicly available. This is for the following reasons:

  1. Make it easy to do security updates when necessary (just rebuild all the images & redeploy)
  2. Faster deploys, since the registry is on the same network (vs dockerhub, where images are retrieved over the internet)
  3. Access control is provided entirely by us, making us less dependent on dockerhub
  4. We control what's in our images.

This is enforced at the network level, but we plan to add a webhook that enforces it on Kubernetes as well.
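Since the registry is publicly available, images can be pulled from it directly. For example (the image name is taken from the examples below; the tag is an assumption):

docker pull docker-registry.wikimedia.org/wikimedia-buster:latest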

Image building

Images are separated into 3 "generations":

  • base images
  • production images
  • service images

that depend on each other in the above order. A service image (e.g. Mathoid) will depend on a production image (e.g. buster-node10), which will in turn depend on a base image (e.g. wikimedia-buster), forming a tree. Base images are never deployed, but both service images and production images are: the former power a service, the latter provide infrastructure functionality (e.g. metrics collection, TLS termination, etc.).
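As a rough illustration, the layer chain of a service image can be inspected locally with standard Docker commands; a sketch, assuming the example images above are published in the registry under these exact names:

docker pull docker-registry.wikimedia.org/mathoid:latest
docker history docker-registry.wikimedia.org/mathoid:latest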

Base images

These are the first layer of the tree. They are built on the designated production builder host (look at manifests/site.pp to figure out which host has that role) using the command build-base-images. This uses debuerreotype to build the images and push them to the registry; the script we use is a modified version of the process used for the "official" Debian images on dockerhub. Note that you need to be root to build and push Docker images; sudo -i is suggested, since Docker looks for credentials in the user's home directory and they are only present in root's home directory. It's a very simplistic approach, but it works well for this use case.
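A minimal session on the builder host could look like this (a sketch, assuming build-base-images is in root's PATH):

$ sudo -i
# build-base-images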

Production images

The code building these has been written from scratch: a tool called docker-pkg. It is still run on the same builder host as above, but automatically infers versions, dependencies, and what does or does not need to be rebuilt. The repo containing the definitions of those images is operations/docker-images/production-images, and the command /usr/local/bin/build-production-images is used to build them. Again, sudo -i is suggested, since Docker looks for credentials in the user's home directory and they are only present in root's home directory.


$ sudo -i
# cd /srv/images/production-images/
# git pull
# build-production-images

Note: The production-images repo does not have CI at present and changes need to be manually merged.

Service images

These are built by the Deployment pipeline using Blubber. They are created automatically for each piece of software on every merged git commit.
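Blubber generates a Dockerfile from a declarative configuration, so a service image can also be built by hand outside the pipeline. A sketch, assuming a repository carrying the conventional .pipeline/blubber.yaml config with a variant named production (both names are assumptions for any given repo):

blubber .pipeline/blubber.yaml production | docker build --tag my-service --file - .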

Local builds

It is sometimes convenient to test builds on your local workstation. To this end, you need docker-pkg and docker installed, and a checkout of production-images.git.
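A possible setup (the install method and clone URL are assumptions; check the docker-pkg documentation for the supported method):

pip3 install docker-pkg
git clone https://gerrit.wikimedia.org/r/operations/docker-images/production-images
cd production-images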

From the git checkout you can test rebuilding images:

# rebuild all images as needed (no upload to registry)
docker-pkg -c config.yaml build images
# only a subset of images
docker-pkg -c config.yaml build images --select '*myimage*'