
Step 9: Wikimedia Foundation Style Kubernetes

The Wikimedia Foundation runs a number of Kubernetes clusters in production. Kubernetes is used behind the scenes in the WMCS environment to provide compute environments (https://wikitech.wikimedia.org/wiki/Help:Toolforge/Web#Kubernetes) and also as a microservices platform for running the wikis (https://wikitech.wikimedia.org/wiki/Kubernetes).

For the microservices platform, the implementation team decided that raw Dockerfiles and YAML expose too much functionality and add unnecessary complexity to the development and deployment process. Instead, developers use WMF-internal tools to generate WMF-style configuration files, which eliminates much of the complexity and many of the issues listed earlier: a non-root user is enforced, namespaces per application are enforced, and images are limited to the WMF docker image registry, which ensures only approved and up-to-date images are used.

Look for Deployment Pipeline and Blubber on wikitech for more on the goals and details of the architecture.

Let’s step through the wikitech tutorial to see how that works.

This tutorial should work as-is with the setup on the machine we have been using so far. At the end of the tutorial you should have 3 new docker images. The first two simply print “Hello World” and the third is a node.js service that also outputs “Hello World” over HTTP on port 8001. One big difference is that the WMF tools generate a Dockerfile that uses WMF-approved base images, which allows for tighter control over the software in use for security purposes.

At the Foundation, base images are controlled by SRE, similar to OS installs, and are managed for currency and security. You can see the images at https://docker-registry.wikimedia.org/ and you need to specify one of these images to be able to run in production. If you look at the Blubber-generated Dockerfile, you will see another security mechanism: the application runs as a normal user called “runuser” in a group called “runuser” rather than as “root”, which is the default. In the default configuration “runuser” does not have write access to the application directory; see the “insecurely” parameter in the Blubber User Guide https://wikitech.wikimedia.org/wiki/Blubber/User_Guide.

Blubber supports a number of programming languages and package mechanisms such as debian/apt, node/npm and python/pip.
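
For illustration, a minimal Blubber spec for a small node service might look roughly like the following; the base image name, the install path and the variant names are placeholders, so check the Blubber User Guide linked above for the exact, current syntax:

version: v4
base: docker-registry.wikimedia.org/nodejs10-slim
lives:
  in: /srv/service
variants:
  build:
    node:
      requirements: [package.json]
  test:
    includes: [build]
    entrypoint: [npm, test]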

BTW, instead of installing Blubber, one can just call the blubberoid service via curl, for example:

curl -s -H 'content-type: application/yaml' --data-binary @<file>.yaml https://blubberoid.wikimedia.org/v1/<variant>

Where <file> is the Blubber spec file and <variant> is the name of a variant defined in that spec.

Or install the bash function as shown in:  https://wikitech.wikimedia.org/wiki/Blubber/Download.
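
As an example, assuming the spec lives at .pipeline/blubber.yaml and defines a variant called test, the generated Dockerfile can be saved and then built as usual:

  • curl -s -H 'content-type: application/yaml' --data-binary @.pipeline/blubber.yaml https://blubberoid.wikimedia.org/v1/test > Dockerfile
  • docker build --tag helloworldoid-test .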

The docker images generated via Blubber can be run locally on your docker installation for testing and debugging. Once done, shut down the running container(s) with docker ps and docker kill.
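
For example, assuming the node.js image is tagged helloworldoid, a quick local smoke test and cleanup could look like this:

  • docker run -d -p 8001:8001 helloworldoid
  • curl http://localhost:8001/
  • docker ps
  • docker kill <container id from docker ps>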

Hands-on: Running on minikube

We can run the image created in the WMF style on our local minikube via manual deployment and service YAML files and kubectl, but in production a templatized version of helm that preconfigures most parameters is used. To do so we follow most of the tutorial at: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial, which explains how to run HelloWorldOid, the third image created, on k8s.

The tutorial starts from the beginning, including the docker image build steps that we have already executed and can skip. Let’s focus on the essential steps for getting the docker image to run on our local machine.

In order to do so you have to run a WMF utility to create the files for helm. The system you have been using so far should have all required software installed: docker, minikube and helm. You already have the first repository, for HelloWorldOid, cloned, but make sure you get the next three installed/cloned: integration/config, deployment-charts and local-charts (see the example clone commands below).
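
For reference, the additional repositories can be cloned anonymously from Gerrit roughly as follows; the repository paths here are assumptions and may have moved, so double-check them against the linked tutorial:

  • git clone https://gerrit.wikimedia.org/r/integration/config
  • git clone https://gerrit.wikimedia.org/r/operations/deployment-charts
  • git clone https://gerrit.wikimedia.org/r/releng/local-charts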

Start with the docker image for helloworldoid. Change into its directory; the Dockerfile should be right there. In the last step we built an image, tagged it and ran it under docker. You can see the image with “docker images”.

As seen in the previous section, minikube uses a separate local docker installation with its own images. In order to build for that docker we have to switch:

  • eval $(minikube docker-env)
  • docker images

Rebuild the image and retag it so that minikube can access the image locally. In earlier steps we avoided this by uploading the image to Docker Hub and pulling it from there. That still works, but this way we save some cycles and work entirely locally.

  • docker build --tag helloworldoid .
  • docker images

Let’s see if we can run it under minikube using “normal” helm. Copy the files from simpleapache: Chart.yaml, values.yaml, and the templates directory.

Change Chart.yaml to the right name and description, and erase all lines of values.yaml as we will hardcode everything. Then adapt the files in templates: rename simpleapachedeployment.yaml to deployment.yaml and simpleapacheservice.yaml to service.yaml. Change all references of simpleapache to helloworldoid in deployment.yaml, set replicas to 1, set the image to helloworldoid:latest and the imagePullPolicy to Never.

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldoid
  labels:
    app: helloworldoid
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: helloworldoid
  template:
    metadata:
      labels:
        app: helloworldoid
    spec:
      containers:
        - name: helloworldoid
          image: helloworldoid:latest
          imagePullPolicy: Never

Make similar changes to service.yaml: the name and the port.

kind: Service
apiVersion: v1
metadata:
  name: helloworldoid
spec:
  selector:
    app: helloworldoid
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8001
      targetPort: 8001

A “helm install hw helloworldoid” should run an instance of our image.
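
To verify that the release came up, something along these lines should work (the service name helloworldoid comes from the service.yaml above):

  • helm ls
  • kubectl get pods
  • curl $(minikube service helloworldoid --url)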

We now have a way to see how the application behaves under minikube and can, for example, run a load test and monitor resource usage - see the load test sketch below for an example and take a look at some ideas at https://wikitech.wikimedia.org/wiki/User:Alexandros_Kosiaris/Benchmarking_kubernetes_apps
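
As a minimal load test sketch, assuming ApacheBench (ab) is installed locally, point it at the URL reported by minikube:

  • ab -n 1000 -c 10 <service URL from "minikube service list">/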

Note that Heapster no longer works under our installation of k8s; it has been replaced by the metrics-server.
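
With the metrics-server enabled as a minikube addon, pod and node resource usage can be checked with kubectl top, for example:

  • minikube addons enable metrics-server
  • kubectl top pods
  • kubectl top nodes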

Hands-on: The Deployment Pipeline Tutorial - https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Tutorial

The deployment pipeline Kubernetes setup does not support bringing your own Dockerfile or your own deployment and service files. These YAML files are generated for you and automatically include a number of supporting elements used in Kubernetes production, such as TLS encryption in transit via Envoy and monitoring via Prometheus. The generated YAML files require modifications that range from simple to more complicated, depending on the complexity of the application.

Pull a copy of deployment-charts:

  • git clone ssh://gerrit.wikimedia.org:29418/operations/deployment-charts

Run the create_new_service.sh script and provide the following 3 parameters (name, port, imagename) when prompted:

hello, 8001, helloworldoid

This will create a new directory called hello with the necessary helmfiles.

The template assumes that the image comes from the WMF docker registry at docker-registry.wikimedia.org. If the image is only available locally, which is our case here, edit the hello/templates/deployment.yaml file and delete the reference to “{{ .Values.docker.registry }}/”. It should look similar to:

         image: "{{ .Values.main_app.image }}:{{ .Values.main_app.version }}"

BTW, setting the registry to “” in values.yaml does not work, as the “/” will still be prepended.

In addition the image pull policy needs to be set to Never in hello/values.yaml.
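
After these two edits the relevant part of hello/values.yaml should look roughly like this; keep whatever keys the scaffold generated and only adjust the values shown (the pull policy key name may differ slightly, so look for the pull policy entry under docker:):

docker:
  pull_policy: Never
main_app:
  image: helloworldoid
  version: latest
  port: 8001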

We can now run: helm install hello hello/

If there is an error similar to “Values.networkpolicy.egress.enabled>: nil pointer evaluating interface {}.egress”, add the following at the end of the hello/values.yaml file (this is a bug under investigation):

networkpolicy:
  egress:
    enabled: false

With the command minikube service list you can get the URL for hello and test that it works with curl.

At the end delete your release using helm.
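
Putting the last two steps together, a final check and cleanup could look like this (release names as used above):

  • minikube service list
  • curl <URL listed for hello>
  • helm uninstall hello
  • helm uninstall hw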