From Wikitech

Step 10: Running your own service under WMF Kubernetes

Draft version: it might not fully work yet!


Let’s step through the deployment of a new sample service called “calc”. It is a simple HTTP server, written in Python 3, that provides basic calculator functionality via an API.

A call such as curl http://localhost:8080/api?2+5 returns a JSON-formatted answer of 7. Opening the URL directly in a browser shows a form for interactive use.
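Conceptually, the handler behind /api has to parse and evaluate an arithmetic expression. The following is only a minimal sketch of that core idea, not the actual server.py implementation (which uses ply for parsing); it evaluates basic arithmetic safely with Python's ast module:

```python
import ast
import operator

# Sketch only: NOT the real server.py implementation (that one uses ply).
# Walks the syntax tree of an expression and applies basic arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(evaluate("2+5"))  # 7
```

The JSON wrapping and the HTTP plumbing are what the rest of server.py adds around such a core.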

The code is hosted at https://github.com/wkandek/calc and can be executed locally with “python3 server.py”. Requirements are Python 3, the ply module (used for parsing) and the psutil module (used for memory reporting). Both modules can be installed through pip; Debian and Ubuntu also provide native packages.

For anything beyond local testing we need to get a repository on Wikimedia’s gerrit. See https://www.mediawiki.org/wiki/Gerrit/New_repositories/Requests to request a project/repository on gerrit.

We have done that and the code is also hosted on gerrit at https://gerrit.wikimedia.org/g/blubber-doc/example/calculator-service

Hands-on: download and test

Clone the repository. Run the service with "python3 server.py" and try it via browser or curl. The tests directory contains a number of tests to check that the code is functional. The tests can be run with “pytest tests/test01.py” (or pytest-3 on my Ubuntu machine) and depend on the pytest and requests modules. They also expect the service to be running and listening on port 8080.

There are also local tests within the server.py program itself that can be executed by running pytest server.py. These tests exercise only the parser and the calculation part; they do not test the webserver portion.
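Such in-file tests might look like the following sketch. The function name calculate() is a hypothetical stand-in, not the actual name used in server.py:

```python
# Sketch of pytest-style in-file tests; calculate() is a hypothetical
# stand-in for the parser/calculation functions in server.py.
def calculate(expression: str) -> float:
    # trivial stand-in implementation, for illustration only
    return eval(expression, {"__builtins__": {}}, {})

def test_addition():
    assert calculate("2+5") == 7

def test_division():
    assert calculate("10/4") == 2.5
```

pytest collects any function whose name starts with test_, which is why running pytest server.py picks up tests defined inside the server module itself.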

Hands-on: dockerize

The service is simple to run under Docker: select a Python 3 based image, install the ply and psutil modules, copy the single source file and run it. The following Dockerfile does that:

FROM python:latest
COPY server.py  /
RUN pip3 install ply psutil
EXPOSE 8080
CMD ["python3", "server.py"]
  • docker build . --tag calc
  • docker run -d -p 8080:8080 calc:latest
  • curl http://localhost:8080/api?4+4

We can run the provided tests in the test directory via pytest. This will make a number of http calls to the server and add, subtract, multiply and divide basic numbers.

  • pytest tests/test01.py # installed via apt on Ubuntu the command is called pytest-3; installed via pip it is pytest, which is confusing
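The tests in tests/test01.py make real HTTP calls against the running server. The following self-contained sketch shows the same pattern using only the standard library, with a stub server standing in for calc (the stub's behavior is an assumption; the real tests use the requests module against the actual service on port 8080):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for the calc service, for illustration only.
class StubCalc(BaseHTTPRequestHandler):
    def do_GET(self):
        expr = self.path.split("?", 1)[1]            # e.g. "2+5"
        result = str(eval(expr, {"__builtins__": {}}, {}))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.encode())
    def log_message(self, *args):                    # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubCalc)      # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api?2+5"
body = urllib.request.urlopen(url).read().decode()
print(body)  # 7
server.shutdown()
```

Against the real service you would replace the stub with http://localhost:8080 and assert on the JSON body instead.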


To test the code within Docker, which will become useful for the upcoming automated build on WMF's build pipeline, use the following Dockerfile. It copies the test01.py file and changes the Docker entrypoint to a script called Entrypoint.sh, which starts both the server and the test program:

FROM python:latest
COPY Entrypoint.sh /
COPY tests/test01.py /
COPY server.py /
RUN pip3 install ply psutil pytest requests
EXPOSE 8080
CMD ["./Entrypoint.sh"]

Entrypoint.sh:

#!/bin/sh
python3 server.py testing & pytest test01.py
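The important detail is the container's exit status: the shell script exits with the status of pytest, and CI treats exit code 0 as a pass. A Python sketch of the same control flow (the two commands are trivial stand-ins for server.py and pytest):

```python
import subprocess
import sys

# Sketch of Entrypoint.sh's control flow: start the server in the
# background, run the test suite in the foreground, and report the test
# runner's exit status (which becomes the container's exit code in CI).
server = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"])  # stand-in for server.py
tests = subprocess.run(
    [sys.executable, "-c", "print('9 passed')"])            # stand-in for pytest
server.terminate()
server.wait()

exit_code = tests.returncode  # 0 means the tests passed
print(exit_code)
```

This is why the production image, whose server never exits, is not suitable for the test pipeline: it never produces an exit status for CI to inspect.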

Then do a build and run cycle. Sample output:

============================= test session starts ==========================
platform linux -- Python 3.8.5, pytest-6.2.2, py-1.10.0, pluggy-0.13.1
rootdir: /
collected 9 items
test01.py .........                                                      [100%]
============================== 9 passed in 0.05s ==========================

Hands-on: WMF dockerize

To get the service to run under production Kubernetes we need to use Blubber to generate our Dockerfile, which bases it on a Wikimedia Foundation image. Browsing docker-registry.wikimedia.org you can find a Python 3 based image called “python3”. Blubber also allows you to install Python 3 modules via pip.

The following Blubber YAML file works and generates a usable Dockerfile. Note that the server generates local files for the parser and needs filesystem access, hence insecurely: true. Bonus: try removing that setting and tracking down the issue it generates…

It is best to minimize the production installation. That might require some experimentation and re-creation of the application to determine the minimal module footprint, especially if you are developing on a non-Debian system. A bare-bones debian:buster Docker image, used interactively via “docker run -it debian:buster /bin/bash”, might be a useful starting point. Disciplined use of “venv” might also work well.

The minimal dependency coming out of that process is the ply module.

?? In general you should use the native Debian packages for modules rather than the python/pip way of installing them, as you will then be working with a tested stack of software. In our case ply is available as python3-ply at the Debian level, so that would be the way to go. ??

Let's check out both paths...

For the pip based install we pass in dependencies via the requirements.txt file. Here it contains only two lines:

ply==3.1.1
pytest==6.2.2

The pytest module is only necessary for testing and does not need to be installed in the production copy. Let's keep that in mind for later when we integrate with the CI system.

Use the WMF python3 image, install ply and pytest, and do a round of interactive testing to see if everything works.

version: v4
base: docker-registry.wikimedia.org/python3:0.0.2
runs: { insecurely: true }
apt: { packages: [python3-setuptools] }
variants:
 buildcalc:
     python:
         version: python3
         requirements: [requirements.txt]
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["python3", "server.py"]

A call to blubber gets us the Dockerfile:

  • curl -s -H 'content-type: application/yaml' --data-binary @calcblubber.yaml https://blubberoid.wikimedia.org/v1/buildcalc > Dockerfile
FROM docker-registry.wikimedia.org/python3:latest AS calc
USER "root"
ENV HOME="/root"
ENV DEBIAN_FRONTEND="noninteractive"
RUN apt-get update && apt-get install -y "python3-setuptools" && rm -rf /var/lib/apt/lists/*
RUN python3 "-m" "easy_install" "pip" && python3 "-m" "pip" "install" "-U" "setuptools" "wheel" "tox"
RUN groupadd -o -g "65533" -r "somebody" && useradd -l -o -m -d "/home/somebody" -r -g "somebody" -u "65533" "somebody" && mkdir -p "/srv/app" && chown "65533":"65533" "/srv/app" && mkdir -p "/opt/lib" && chown "65533":"65533" "/opt/lib"
RUN groupadd -o -g "900" -r "runuser" && useradd -l -o -m -d "/home/runuser" -r -g "runuser" -u "900" "runuser"
USER "somebody"
ENV HOME="/home/somebody"
WORKDIR "/srv/app"
ENV PIP_FIND_LINKS="file:///opt/lib/python" PIP_WHEEL_DIR="/opt/lib/python"
RUN mkdir -p "/opt/lib/python"
COPY --chown=65533:65533 ["requirements.txt", "./"]
RUN python3 "-m" "pip" "wheel" "-r" "requirements.txt" && python3 "-m" "pip" "install" "--target" "/opt/lib/python/site-packages" "-r" "requirements.txt"
COPY --chown=65533:65533 ["./server.py", "./server.py"]
ENV PATH="/opt/lib/python/site-packages/bin:${PATH}" PIP_NO_INDEX="1" PYTHONPATH="/opt/lib/python/site-packages"
ENTRYPOINT ["python3", "server.py"]
LABEL blubber.variant="calc" blubber.version="0.8.0+cb55e3b"

Rebuild and test with pytest-3 tests/test01.py:

  • docker build . --tag calc
  • docker run -d -p 8080:8080 calc:latest
  • curl http://localhost:8080/api?4+4

Also try using pytest instead of python3 in the entrypoint line to see how a test runs under a WMF-style Docker image.

Broken as of 2021-01-22: the ply installation fails.

Alternative: install via debian package python3-ply

version: v4
base: docker-registry.wikimedia.org/python3:0.0.2
runs: { insecurely: true }
apt: { packages: [python3-ply] }
variants:
 buildcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["python3", "server.py"]

And rebuild:

  • curl -s -H 'content-type: application/yaml' --data-binary @calcblubber.yaml https://blubberoid.wikimedia.org/v1/buildcalc > Dockerfile
  • docker build . --tag calc
  • docker run -d -p 8080:8080 calc:latest
  • curl http://localhost:8080/api?4+4

To execute the tests under docker we can use the variant mechanism in the expanded blubber file:

version: v4
base: docker-registry.wikimedia.org/python3:0.0.2
runs: { insecurely: true }
apt: { packages: [python3-ply] }
variants:
 testcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["pytest-3", "server.py"]
 buildcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["python3", "server.py"]
  • curl -s -H 'content-type: application/yaml' --data-binary @calcblubber.yaml https://blubberoid.wikimedia.org/v1/testcalc > Dockerfile
  • docker build . --tag calc
  • docker run calc:latest - this will run pytest-3 server.py and exercise the local tests

Note: installed via pip, pytest runs as pytest; installed via apt it is called pytest-3.

Hands-on: Get the code into the WMF Deployment Pipeline

To get our code running on the production kubernetes cluster we use the Deployment Pipeline. The Deployment Pipeline is a project developed by the WMF Release Engineering team that provides a structured set of tools automating the building, testing, publishing and executing of docker images.

You have already seen one of the tools in action: Blubber, which generates the Dockerfile to use.

The Deployment Pipeline is implemented on top of open source tools such as gerrit, jenkins, zuul and kubernetes, with added locally developed tools that give it a coherent workflow.

For more information see: Wikitech:Deployment_pipeline.

We will go through a full implementation of the deployment pipeline lifecycle and include as many details as possible. It is a multi-step, complex process, but in essence a one-time investment to adapt your code and procedures to the Deployment Pipeline’s norms and enable friction-free releases.

To start with, our code first has to be hosted on gerrit. We can request a project/repository on gerrit through the Gerrit Request page. We request “blubber-doc/example/calculator-service”.

Once the repository is created, certain features need to be enabled (Submit? What else)

The integration of our project with the deployment pipeline happens through the config.yaml file in the .pipeline directory of the repository. We will use a file that performs two basic functions in two pipelines called test_pl and publish_pl. These names are free form and follow no special formatting; you can select them in a way that documents their function. Our file references both the calcblubber-debian.yaml file that we used to generate our Dockerfile and the testcalc and buildcalc variants specified in it.

The pipeline’s intended use is:

  • test_pl builds an image and tests it with pytest server.py (or pytest-3 if installed the Debian apt way); build references the variant to build (testcalc) and run is set to true.
  • publish_pl pushes a production grade image to the WMF repository, so that it can be used in a deployment. Here the variant is different (buildcalc): it builds an image that executes the webserver in server.py rather than just testing it.
    • Note that running the production image in test will not be successful, as CI expects the image to exit with a return code of 0 for a clean run, or with an error code where applicable. The production image, however, never exits, so it is not a good image to test.

.pipeline/config.yaml

pipelines:
 test_pl:
   blubberfile: calcblubber-debian.yaml
   stages:
     - name: run-test
       build: testcalc
       run: true
 publish_pl:
   blubberfile: calcblubber-debian.yaml
   stages:
     - name: production
       build: buildcalc
       publish:
         image:
           tags: [stable]

.pipeline/calcblubber-debian.yaml

version: v4
base: docker-registry.wikimedia.org/python3:0.0.2
runs: { insecurely: true }
apt: { packages: [python3-ply, python3-pytest] }
variants:
 testcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["pytest-3", "server.py"]
 buildcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["python3", "server.py"]

The repository itself needs to contain the server.py file.

In addition, jenkins and zuul have to be informed about the new pipelines by adding information to the respective config files: jjb/project-pipelines.yaml and zuul/layout.yaml. This process is documented here: PipelineLib/Guides/How to configure CI for your project.

Notice that the page uses a pipeline called “test” as an example, whereas we are using “test_pl” and “publish_pl”.

jjb/project-pipelines.yaml

- project:
   # blubber-doc/examples/calculator-service
   name: calculator-service
   pipeline:
     - test_pl
     - publish_pl
   jobs:
     # trigger-calculator-service-pipeline-test_pl
     # trigger-calculator-service-pipeline-publish_pl
     - 'trigger-{name}-pipeline-{pipeline}'
     # calculator-service-pipeline-test_pl
     # calculator-service-pipeline-publish_pl
      - '{name}-pipeline-{pipeline}'

Notice that in the zuul file “test” refers to a stage and not to a pipeline defined in the example.

zuul/layout.yaml

  - name: blubber-doc/example/calculator-service
    test:
      - trigger-calculator-service-pipeline-test_pl
    gate-and-submit:
      # all test jobs must have a gate-and-submit pipeline defined
      - noop
    postmerge:
      - trigger-calculator-service-pipeline-publish_pl

These files are all under source control in gerrit and you can edit them yourself. In case of problems, reach out to the Release Engineering team; they can help.

In any case, after the edits are approved, release engineering needs to perform a number of steps to tell jenkins and zuul about the new pipelines.

Now a push to the repository will execute the test_pl pipeline, causing an image to be built and the tests to be run. A link to the output of the test run is added to gerrit, or you can look for it on the integration.wikimedia.org server via the web interface.

Sample output (redacted for length); see a full copy at: https://integration.wikimedia.org/ci/job/calculator-service-pipeline-test_pl/5/console

Started by upstream project "trigger-calculator-service-pipeline-test_pl" build number 5
…
Cloning repository git://contint2001.wikimedia.org/blubber-doc/example/calculator-service
Will build the detached SHA1
...
[Pipeline] { (test_pl: run-test)
FROM docker-registry.wikimedia.org/python3:0.0.2 AS testcalc
USER "root"
ENV HOME="/root"
ENV DEBIAN_FRONTEND="noninteractive"
RUN apt-get update && apt-get install -y "python3-ply" "python3-pytest" && rm -rf /var/lib/apt/lists/*
RUN groupadd -o -g "65533" -r "somebody" && useradd -l -o -m -d "/home/somebody" -r -g "somebody" -u "65533" "somebody" && mkdir -p "/srv/app" && chown "65533":"65533" "/srv/app" && mkdir -p "/opt/lib" && chown "65533":"65533" "/opt/lib"
RUN groupadd -o -g "900" -r "runuser" && useradd -l -o -m -d "/home/runuser" -r -g "runuser" -u "900" "runuser"
USER "somebody"
ENV HOME="/home/somebody"
WORKDIR "/srv/app"
COPY --chown=65533:65533 ["./server.py", "./server.py"]
ENTRYPOINT ["pytest-3", "server.py"]
LABEL blubber.variant="testcalc" blubber.version="0.8.0+cb55e3b"
Success code from [200‥200]
[Pipeline] writeFile
[Pipeline] sh
+ docker build --pull --force-rm=true --label jenkins.job=calculator-service-pipeline-test_pl --label jenkins.build=5 --label zuul.commit=eadb086ffa60900ff3ecf1ba250d642e6470e939 --label ci.project=blubber-doc-example-calculator-service --label ci.pipeline=test_pl --file .pipeline/Dockerfile.205xby44 .
[Pipeline] echo
step: run, config: ['name':'run-test', 'build':['variant':'testcalc', 'context':'.'], 'run':['image':'${.imageID}', 'arguments':[], 'env':[:], 'credentials':[:]]]
[Pipeline] echo
exec docker run --rm sha256:'12f253acaab5'
[Pipeline] timeout
Timeout set to expire in 20 min
[Pipeline] {
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] sh
+ set +x
============================= test session starts ==========================
platform linux -- Python 3.5.3, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
rootdir: /srv/app, inifile:
collected 4 items
server.py ....
=========================== 4 passed in 0.12 seconds =======================
stage run-test completed. exported: ['stage':'run-test', 'imageID':'12f253acaab5']
+ docker rmi --force 12f253acaab5
stage teardown completed. exported: ['stage':'teardown']
Finished: SUCCESS

Great - we have run a test CI job successfully. Notice that we have not needed code reviews for our project so far, as we are still in the testing phase.

Hands-on: improve the testing

The tests/test01.py file uses the requests module to make “real” HTTP client requests and verifies their correctness, testing the JSON returned, etc. To run this test we need to run the server and the test client concurrently. We make the following modifications:

  • The entrypoint is no longer python3 server.py but a shell script called Entrypoint.sh
  • We copy test01.py from the tests directory
  • We copy Entrypoint.sh
  • We need the requests module in addition to the other python modules
  • Entrypoint.sh starts python3 server.py in the background and then runs pytest-3 test01.py

The changes are all in Entrypoint.sh and calcblubber-debian.yaml

calcblubber-debian.yaml

version: v4
base: docker-registry.wikimedia.org/python3:0.0.2
runs: { insecurely: true }
apt: { packages: [python3-ply, python3-pytest, python3-requests] }
variants:
 testcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
         - from: local
           source: ./tests/test01.py
           destination: ./test01.py
         - from: local
           source: ./Entrypoint.sh
           destination: ./Entrypoint.sh
     entrypoint: ["./Entrypoint.sh"]
 buildcalc:
     copies:
         - from: local
           source: ./server.py
           destination: ./server.py
     entrypoint: ["python3", "server.py"]

Entrypoint.sh

#!/bin/sh
python3 server.py testing & pytest-3 test01.py

Test the change locally:

  • curl -s -H 'content-type: application/yaml' --data-binary @calcblubber-debian.yaml https://blubberoid.wikimedia.org/v1/testcalc > Dockerfile
  • $ docker build .
    • Successfully built a1ec5c0e98fd
  • $ docker run -d a1ec5c0e98fd
    • 7dae4a88553eae6bf84a1e43ebd09099c3a079f2e1d8987f70616eb603710b0e
  • $ docker logs 7dae4a88553eae6bf84a1e43ebd09099c3a079f2e1d8987f70616eb603710b0e
=========================== test session starts =====================
platform linux -- Python 3.5.3, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
rootdir: /srv/app, inifile:
collected 9 items
test01.py .........
========================= 9 passed in 0.12 seconds ===================
  • See: https://integration.wikimedia.org/ci/job/calculator-service-pipeline-test_pl/10/console for a recent run

Hands-on: get the image into the production repository and test it

Once the tests have run successfully in CI, we want to create a production Docker image. By +2ing the change in gerrit we cause the second CI pipeline to be executed, the one we specified in the gate-and-submit and postmerge sections of the zuul config file. A Docker image will be built, this time with a different entrypoint that just runs the server.py program, and then the image is pushed to the WMF repository.

See: https://integration.wikimedia.org/ci/job/calculator-service-pipeline-publish_pl/lastBuild/ for details.

The Docker images can be browsed at https://docker-registry.wikimedia.org/. Note that the homepage is rebuilt roughly hourly, so updates can be delayed.

You should run and test the created image locally on minikube using the following kubectl YAML files. Note that we pull the image from the Foundation's docker registry server.

Now would also be the time to specify replicas and CPU and memory limits. Our application is small, so we use only 1 replica with 0.1 CPU and 64 MB as our limits. For a larger application, deeper performance tests should guide those numbers.

calcdeployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
 name: calc
 labels:
   app: calc
spec:
 replicas: 1
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: calc
 template:
   metadata:
     labels:
       app: calc
   spec:
     containers:
      - name: calc
        image: docker-registry.wikimedia.org/wikimedia/blubber-doc-example-calculator-service:stable
        imagePullPolicy: Always
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "64Mi"
            cpu: "100m"

calcservice.yaml

kind: Service
apiVersion: v1
metadata:
 name: calc
spec:
 selector:
    app: calc
 type: LoadBalancer
 ports:
 - protocol: TCP
   port: 80
   targetPort: 8080

After applying both files on your local minikube with kubectl apply, you should be able to access the application via the URL that the command “minikube service calc” returns.

Hands-on: Security Review

Get a security review of the application.

Hands-on: Configuring helm for production releases

Now that we have a working image built to WMF specification, we can prepare for its release into production. This is a multi-step process requiring configuration files to be generated and a number of decisions and conversations with SRE. It is typically a one-time process; once all parameters are defined, subsequent code releases are quick and independent.

First, let the SRE team know about the new service by filing a Phabricator task. See https://wikitech.wikimedia.org/wiki/Services/FirstDeployment#New_service_request for more information; ignore the parts about puppet, as they do not apply to microservices on kubernetes.

The task for calculator-service is T273807.

Second, we need to create a set of helm configuration files for production kubernetes using the create_new_service.sh script in the deployment-charts repository. This creates the calculator-service directory, which contains the helm template files that will define the release.

Beyond just running the container as we have done before, this process provides additional functionality; for example, it automatically fronts the service with an envoy proxy that provides HTTPS termination and usage monitoring for the application.

The process is described here. SRE ServiceOps provides assistance if needed, but the basic steps are to clone the deployment-charts repo and run create_new_service.sh and provide name, port and imagename, in our case calculator-service, 8080 and blubber-doc/example/calculator-service.

Example output:

./create_new_service.sh
Please input the name of the service
calculator-service
Please input the port the application is listening on
8080
Please input the docker image to use:
blubber-doc-example-calculator-service:stable
~/deployment-charts/charts/calculator-service ~/gwm/deployment-charts
~/deployment-charts
~/deployment-charts/charts/calculator-service/templates ~/gwm/deployment-charts
~/deployment-charts
~/deployment-charts/charts/calculator-service/templates ~/gwm/deployment-charts
~/deployment-charts
You can edit your chart (if needed!) at ./deployment-charts/charts/calculator-service

The files created are the baseline configuration for running under kubernetes. They deal with the service itself and are prepared for a number of mandatory (for example TLS encryption) and recommended (prometheus monitoring) supporting areas of the service.

The following files have been created and provide configuration options for:

  • calculator-service/Chart.yaml - metadata about the chart itself (name, version, description)
  • calculator-service/values.yaml - the base file for setting ports, etc.
  • calculator-service/default-network-policy.yaml - network policies restricting the traffic the service may send and receive
  • calculator-service/templates/configmap.yaml - for configuring the equivalent kubernetes concepts
  • calculator-service/templates/deployment.yaml - for configuring the equivalent kubernetes concepts
  • calculator-service/templates/secret.yaml - for configuring the equivalent kubernetes concepts
  • calculator-service/templates/service.yaml - for configuring the equivalent kubernetes concepts
  • calculator-service/templates/tests/test-service-checker.yaml - uses the swagger specification of the service to provide a test of the service during helm lint; our service does not have a swagger spec yet, so we will disable that test.

Our chart files need editing:

  • We do not use statsd, so all references to monitoring need to be deleted:
    • Values.monitoring in the templates deployment, networkpolicy and configmap
    • Delete the config/prometheus-statsd.conf file and the config directory
  • We do not have a swagger specification for our API (yet…), so delete templates/tests/test-service-checker.yaml as well
  • We do not use prometheus for monitoring, so set that to false in charts/deployment.yaml
  • In the charts/calculator-service/values.yaml file:
    • Set cpu to 0.1 for all, memory to 100Mi for all
    • Define the 2 ENV vars CALC_VERSION and CALC_TESTMODE in public
    • Service deployment: production/minikube choice
    • Readiness probe path: /healthz
    • Delete the monitoring: section
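The readiness probe path /healthz implies that the server answers that path with HTTP 200 once it is ready to serve traffic. A minimal sketch of such a handler follows; this is hypothetical, and server.py's actual implementation may differ:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical /healthz handler; server.py's real implementation may differ.
# Kubernetes marks the pod Ready once this path returns HTTP 200.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()
    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/healthz").status
print(status)  # 200
server.shutdown()
```

If the probe fails, kubernetes withholds traffic from the pod until the path starts returning 200.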

Commit your changes and select a reviewer from the Service Operations team.

Review the results of the helm lint that is executed when you commit, to see if everything passed and a +2 was awarded by jenkins-bot.

Once the review is done, which is likely to take multiple cycles depending on the complexity of the service, you can +2 the change (no submit necessary) and a bot will copy the charts into our official ChartMuseum (https://github.com/helm/chartmuseum) installation at https://helm-charts.wikimedia.org/. You can check for the chart by downloading the index.yaml file at https://helm-charts.wikimedia.org/stable/index.yaml. For more information on ChartMuseum at WMF see: https://wikitech.wikimedia.org/wiki/ChartMuseum and https://wikitech.wikimedia.org/wiki/ChartMuseum#Interacting_with_ChartMuseum

? deployment.yaml: latest vs stable?

Summary: We have used the deployment pipeline tools to configure our service in production. That required filing a ticket with SRE, defining a TLS port, running the create_new_service.sh script to generate the necessary helm config files, modifying those files to define our service's characteristics around monitoring, TLS, etc., and shepherding it through the review cycle. Finally, our helm chart is stored in ChartMuseum, ready for the next step.

Hands-on: Create a helmfile

On top of helm we use an additional mechanism called helmfile. Helmfile applies a template to the helm charts; the template changes the charts depending on the environment we are pushing the deployment to. For example, in staging we may want to run with a different value for replicas or loglevel than in production. The helmfile allows us to set these different variables.

A helmfile is configured through YAML files. Take a look at the file for calculator-service in helmfile.d/services/calculator-service/helmfile.yaml, where we specify 1 replica for staging and 2 replicas for production. If you are working on your own service, start by copying an existing file.

For more information see https://wikitech.wikimedia.org/wiki/Deployments_on_kubernetes#Deploying_with_helmfile and Deploy a service to staging.

Once the helmfile is in gerrit under XXX, reviewed and committed, we are ready to go and can release our code/image to the various environments:

  • Staging: helmfile xxx yyy
  • Production

But how does outside traffic get to our service? That is one of the few setup steps that has to be done manually, because we use our own custom installation of LVS-based load balancers, which are not integrated into kubernetes. SRE will set up the load balancers and the necessary DNS for the service.

??Note: this only works after the charts have already been created, as the helm linter used to check the syntax depends on the existence of the charts.??