Kubernetes/Kubernetes Workshop/Kubernetes at the Wikimedia Foundation (WMF)
Overview
At the end of this module, you should be able to:
- Use WMF internal tools to generate WMF-style configuration files.
- Run the Blubber tutorial.
- Run the deployment pipeline tutorial.
Step 1 - The Blubber Tutorial
The Wikimedia Foundation runs several Kubernetes clusters in production. Kubernetes is used behind the scenes in the Wikimedia Cloud Services (WMCS) environment to provide compute environments. In running wikis, WMF makes use of Kubernetes as a microservices platform.
For the microservices platform, the implementation team decided that raw Dockerfiles and hand-written YAML expose too much functionality and add unnecessary complexity to the development and deployment process. Instead, developers use WMF internal tools to generate WMF-style configuration files. These customized configuration files eliminate a large chunk of the complex issues you saw earlier.
For example, containers run as a non-root user, each application runs in its own namespace in the cluster, and you can only use the approved, up-to-date images available in the WMF Docker image registry.
Running the Blubber tutorial does not require any additional software, and at the end of the tutorial, you should have three new Docker images. The first two images will give an output of Hello World, while the third image, a node.js service, will also provide an output of Hello World over HTTP on port 8001.
A huge difference is that the WMF tools generate the Dockerfile for you. The generated Dockerfile implements security best practices and uses only WMF-approved base images, allowing firmer control over the software.
The Site Reliability Engineering (SRE) team controls WMF's base images, much as you would manage the installation and updating of your own operating system (OS) for security purposes.
You can view the images available in the registry and specify one of them when you want to run a production workload. In the Blubber-generated Dockerfile, the application runs by default as a regular user called runuser, which belongs to a group also called runuser, rather than as root. In the default configuration, runuser does not have write access to the application's directory; see the insecurely parameter in Blubber's User Guide if you need to change this.
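To make this concrete, the security-related lines in a Blubber-generated Dockerfile look roughly like the sketch below. This is an illustration only: the exact base image tag, uid/gid handling, and working directory are assumptions, and Blubber's real output differs in detail.
# Illustrative sketch, not Blubber's literal output
FROM docker-registry.wikimedia.org/bullseye:latest
# create the unprivileged runuser user and group described above
RUN groupadd -r runuser && useradd -r -g runuser runuser
# the application runs as runuser, not root
USER runuser
WORKDIR /srv/service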
Blubber supports several programming languages and package mechanisms, such as Debian/apt, Node/npm, and Python/pip. Instead of installing Blubber, you can call the blubberoid service via curl:
$ curl -s -H 'content-type: application/yaml' --data-binary @<file>.yaml https://blubberoid.wikimedia.org/v1/<variant>
or you can install the bash function by following this guide.
Note:
- The value for <file> is the spec file for Blubber.
- The value for <variant> is XXX.
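For orientation, a minimal Blubber spec might look like the sketch below. The key names follow Blubber's v4 configuration format, but the base image, paths, variant name, and entrypoint are illustrative assumptions rather than the tutorial's actual spec; the <variant> in the blubberoid URL is one of the keys under variants: (hello in this sketch).
version: v4
base: docker-registry.wikimedia.org/bullseye
lives:
  in: /srv/service
variants:
  hello:
    # copy the local build context into the image
    copies: [local]
    entrypoint: [node, server.js]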
You can run the docker images generated via Blubber locally on your Docker installation for testing and debugging. Shut down the Docker image(s) after you are done with this step:
$ docker ps
$ docker kill <container-id>
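For example, the node.js variant could be started and tested locally like this before you shut it down (assuming it is tagged helloworldoid and listens on port 8001, as described above):
$ docker run --rm -d -p 8001:8001 helloworldoid
$ curl http://localhost:8001/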
Step 2 - Running Images on minikube
Using manual deployment, you can run the Blubber image you created in the step above on your local minikube, following WMF guidelines. In production, however, you use a Helm template with preconfigured parameters. You have to complete this tutorial before proceeding with the rest of the steps in this module.
As the previous step in this module shows, minikube uses its own local Docker installation with its own set of images.
1. Switch to minikube's Docker installation:
$ eval $(minikube docker-env)
$ docker images
2. Rebuild and retag the helloworldoid image so that minikube can access it locally:
$ docker build --tag helloworldoid .
$ docker images
3. To run the Docker image on minikube using regular Helm, copy the following files from the simpleapache application: Chart.yaml, values.yaml, and the templates directory.
4. Rename the simpleapachedeployment.yaml file to deployment.yaml and simpleapacheservice.yaml to service.yaml.
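After copying and renaming, the chart directory might look like this (the directory name is your choice; this sketch assumes it matches the image name):
helloworldoid/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    └── service.yaml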
5. In the deployment.yaml file, change all references of simpleapache to helloworldoid, set the value of replicas to 1, set the image's name to helloworldoid:latest, and set the value of imagePullPolicy to Never, as sketched below.
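The edited parts of deployment.yaml might end up looking roughly like this (a sketch; field placement and labels follow whatever the original simpleapache template used, and the container port assumes the 8001 from Step 1):
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: helloworldoid
          image: helloworldoid:latest
          # never pull, use the locally built image
          imagePullPolicy: Never
          ports:
            - containerPort: 8001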
6. The contents of the new service.yaml file are:
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: helloworldoid
spec:
  selector:
    app: helloworldoid
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8001
      targetPort: 8001
7. Run an instance of your image:
$ helm install hw .
NAME: hw
LAST DEPLOYED: Day Month Date HR:MIN:SEC YEAR
NAMESPACE: default
STATUS: deployed
…………………………………………………………
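To confirm the release is running, you can check the pod and look up the service URL (the pod name suffix will differ on your machine; the service name matches the metadata above):
$ kubectl get pods
$ minikube service helloworldoid --url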
You can now take note of the application's behavior while it runs on minikube. For example, you can run a load test and monitor resource usage. You can also browse through this wiki page for more information.
Note: heapster no longer works on our installation of k8s, and the metrics-server has replaced it.
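A minimal way to generate some load and watch resource usage, assuming the service name from the steps above (the loop is only an illustration, not a proper load test):
$ URL=$(minikube service helloworldoid --url)
$ for i in $(seq 1 1000); do curl -s "$URL" > /dev/null; done
$ kubectl top pods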
Hands-on Demo: The Deployment Pipeline Tutorial
The WMF deployment pipeline does not support bringing your own Dockerfile or deployment and service files. Instead, the pipeline generates the needed files for you. These YAML files include several supporting elements used in WMF's Kubernetes production setup, such as TLS encryption in transit via Envoy and monitoring via Prometheus. The generated YAML files require modifications that range from simple to more complicated, depending on the complexity of the application.
In this demo, you will work with the deployment pipeline. To get access to a repository on Gerrit, see the access request wiki page.
1. Pull a copy of the deployment-charts repository and install the ruby-git package:
$ git clone ssh://gerrit.wikimedia.org:29418/operations/deployment-charts
$ sudo apt install ruby-git
2. Run the create_new_service.sh script and provide the values of the following parameters when prompted (name, port, imagename):
name: hello
port: 8001
imagename: helloworldoid
$ ./create_new_service.sh
Please input the PORT on which the service will run
> 8001
Please input the NAME of the service
> hello
Please input the IMAGE full label for the service
> helloworldoid
………………………………………………...
With the values above, the script creates a new directory called hello containing the necessary Helm files. The template assumes that the image comes from the WMF Docker registry.
3. If the image is only available locally (which is the case here), edit the hello/templates/_container.tpl file and delete the reference to {{ .Values.docker.registry }}/. The new value is: image: "{{ .Values.main_app.image }}:{{ .Values.main_app.version }}".
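Concretely, the edit in hello/templates/_container.tpl amounts to something like this (a sketch; the surrounding template in deployment-charts may wrap this line differently):
# before
image: "{{ .Values.docker.registry }}/{{ .Values.main_app.image }}:{{ .Values.main_app.version }}"
# after: local image, no registry prefix
image: "{{ .Values.main_app.image }}:{{ .Values.main_app.version }}"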
4. Set the value of pull_policy to Never in the hello/values.yaml file.
5. Run your image:
$ helm install hello .
NAME: hello
LAST DEPLOYED: Day Month Date HR:MIN:SEC Year
NAMESPACE: default
STATUS: deployed
……………………………………..
6. Get the URL for hello and make a curl request:
$ minikube service list
$ curl <URL>
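For example, assuming the generated service is called hello (the URL minikube prints will differ on your machine):
$ URL=$(minikube service hello --url)
$ curl "$URL"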
7. Delete your release after running this demo:
$ helm delete hello
Note:
- Setting the image to “” in values.yaml does not work, as the / will still be prepended.
- If there is an error similar to “Values.networkpolicy.egress.enabled>: nil pointer evaluating interface {}.egress”, add the following at the end of your hello/values.yaml file (this is a bug under investigation):
networkpolicy:
  egress:
    enabled: false
Next Module
Module 10: Using WMF Kubernetes to run your services