Portal:Toolforge/Admin/Networking and ingress
This page describes the design, topology and setup of Toolforge's networking and ingress, specifically those bits related to webservices in the new kubernetes cluster.
The following diagram is used to better describe the network topology and the different components involved.
When a user visits a webservice hosted in Toolforge, the following happens:
- nginx+lua: The first proxy is what we know as dynamicproxy. The nginx+lua setup knows how to send requests to the legacy k8s cluster and the webservices grid; there is a fall-through route for the new k8s cluster.
This proxy provides SSL termination for both domains, including the new toolforge.org. There are DNS A records in both domains pointing to the floating IP associated with the active proxy VM.
- haproxy: Next, haproxy knows which nodes are actually alive in the new k8s cluster. It proxies both the k8s api-server on tcp/6443 and the web ingress on tcp/30000.
There is a DNS A record with the name k8s.tools.eqiad.wmflabs pointing to the IP of this VM.
- nginx-ingress-svc: There is an nginx-ingress service of type NodePort, which means every worker node listens on tcp/30000 and directs requests to the nginx-ingress pod.
- nginx-ingress pod: The nginx-ingress pod uses ingress objects to direct the request to the appropriate service, but the ingress objects need to exist beforehand.
- api-server: Ingress objects are created using the k8s API served by the api-server. They are automatically created using the webservice command, and the k8s API allows users to create and customize them too.
- ingress-admission-controller: There is a custom admission controller webhook that validates ingress config objects to enforce valid configurations.
- ingress object: After the webhook, the ingress objects are added to the cluster and the nginx-ingress pod can consume them.
- tool svc: The ingress object contains a mapping between a URL/path and a service. The request is now in this tool-specific service, which knows how to finally direct the query to the actual tool pod.
- tool pod: The request finally arrives at the actual tool pod.
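The nginx-ingress NodePort service mentioned above could be sketched roughly as follows. This is an illustrative fragment only: the selector labels and the 8080 target port are assumptions based on the description in this page, not the exact deployed manifest.

```yaml
# Hypothetical sketch of the nginx-ingress NodePort service.
# nodePort 30000 matches the port haproxy forwards web traffic to;
# selector and target port values are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30000   # every worker node listens here
```

Because the service is of type NodePort, kube-proxy opens tcp/30000 on every worker node, which is what allows haproxy to use any live worker as a backend.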
This section contains specific information about the different components involved in this setup.
There are mainly 2 different kinds of elements: those running outside kubernetes and those running inside.
Information about components running outside kubernetes.
This component is described in another wikitech page.
This setup is fairly simple, deployed by puppet using the role
We have a DNS name k8s.tools.eqiad1.wikimedia.cloud (new) or k8s.tools.eqiad.wmflabs (legacy) with an A record pointing to the active VM.
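As an illustration, such a record could look like this in BIND zone-file syntax. The IP address and TTL are hypothetical; the actual records are managed by the cloud DNS service.

```
; Illustrative A record for the active haproxy VM (values are not real).
k8s.tools.eqiad1.wikimedia.cloud.  300  IN  A  172.16.0.99
```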
There should be a couple of VMs in a cold-standby setup (only one is active at any given time).
The haproxy configuration involves 2 ports, the "virtual servers":
- 6443/tcp for the main kubernetes api-server
- 30000/tcp for the ingress
Each "virtual server" has several backends:
- in the case of the api-server, backends are the controller nodes.
- in the case of the ingress, backends are all the worker nodes.
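Putting the two "virtual servers" together, the haproxy configuration could be sketched as below. This is a minimal illustration, not the deployed config: node names, IPs and balancing options are assumptions.

```
# Hypothetical haproxy.cfg fragment; server names and IPs are illustrative.

# api-server: backends are the control nodes
listen k8s-api
    bind *:6443
    mode tcp
    balance roundrobin
    server k8s-control-1 172.16.0.10:6443 check
    server k8s-control-2 172.16.0.11:6443 check

# web ingress: backends are all the worker nodes (NodePort 30000)
listen k8s-ingress
    bind *:30000
    mode tcp
    balance roundrobin
    server k8s-worker-1 172.16.0.20:30000 check
    server k8s-worker-2 172.16.0.21:30000 check
```

The health checks (`check`) are what let haproxy know which nodes are alive, as described above.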
Explanation of the different components inside kubernetes.
We use calico as the network overlay inside kubernetes. There is not a lot to say here, since we mostly use the default calico configuration; we only specify the CIDR of the pod network. There is a single yaml file containing all the configuration, which is deployed by puppet.
This file is
modules/toolforge/templates/k8s/calico.yaml.erb in the puppet tree and
/etc/kubernetes/calico.yaml in the final control nodes.
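The pod-network CIDR mentioned above is set through the `CALICO_IPV4POOL_CIDR` environment variable of the calico-node daemonset in that yaml file. The fragment below is illustrative; the CIDR value shown is calico's upstream default, not the actual Toolforge pod network.

```yaml
# Fragment of the calico-node DaemonSet container spec in calico.yaml.
# The CIDR value is illustrative only.
env:
  - name: CALICO_IPV4POOL_CIDR
    value: "192.168.0.0/16"
```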
To load (or refresh) the configuration inside the cluster, use:
root@k8s-control-1:~# kubectl apply -f /etc/kubernetes/calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org unchanged
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org unchanged
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers unchanged
serviceaccount/calico-kube-controllers unchanged
Some things to take into account:
- Mind that all configuration is based on a specific version of calico. At the time of this writing, the version is v3.8.
- The version of calico we use hardcodes calls to iptables-legacy, so in Debian Buster we were forced to switch everything to that, rather than the default iptables-nft. See
We use nginx to process the ingress configuration in the k8s cluster. There is a single yaml file containing all the configuration, which is deployed by puppet.
This file is
modules/toolforge/files/k8s/nginx-ingress.yaml in the puppet tree and
/etc/kubernetes/nginx-ingress.yaml in the final control nodes.
To load (or refresh) the configuration into the cluster, use:
root@k8s-control-01:~# kubectl apply -f /etc/kubernetes/nginx-ingress.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress unchanged
role.rbac.authorization.k8s.io/nginx-ingress unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress unchanged
deployment.apps/nginx-ingress configured
service/ingress-nginx unchanged
Some things to take into account:
- Mind that all configuration is based on a specific version of kubernetes/ingress-nginx. At the time of this writing, the version is v0.25.1.
- There are several modifications to the configuration as presented in the docs, to fit our environment. The changes are documented in the yaml file.
- One of the most important changes is that we configured the nginx-ingress pods to listen on 8080/tcp rather than the default (80/tcp).
- The nginx-ingress deployment requires a very specific PodSecurityPolicy to be deployed beforehand (likely
- We don't know yet which scale factor we need for the nginx-ingress deployment (i.e., how many pods to run).
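The 8080/tcp change mentioned above could be expressed in the deployment roughly as follows. This is an assumed sketch: the ingress-nginx controller accepts an `--http-port` flag, but the exact argument list and container name in our yaml file may differ.

```yaml
# Illustrative fragment of the nginx-ingress deployment spec,
# showing one way to move the controller off the default 80/tcp.
containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --http-port=8080   # default is 80; we listen on 8080 instead
    ports:
      - name: http
        containerPort: 8080
```

Listening on an unprivileged port like 8080 also means the controller does not need to run as root to bind its socket.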
We have a tool called fourohfour which is set as the default backend for nginx-ingress. This tool presents a user-friendly 404 page.
Ingress objects can be created in 2 ways:
- directly using the kubernetes API
- by using the webservice command.
Objects have the following layout. The Toolforge tool name hello is used as an example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello
  namespace: hello
spec:
  rules:
    - host: hello.toolforge.org
      http:
        paths:
          - backend:
              serviceName: hello
              servicePort: 8080
This ingress object points to a service, which should have this layout:
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: hello
spec:
  selector:
    app: hello
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
Note that both objects are namespaced to the concrete tool namespace.
ingress admission controller
This k8s API webhook checks ingress objects before they are accepted by the API itself. It enforces valid configurations and prevents ingress setups that may break webservices running in kubernetes, such as pointing URLs/paths at tools that are not ready to handle them.
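The webhook is registered with the API server through a ValidatingWebhookConfiguration object, which could look roughly like the sketch below. All names, the namespace, the path and the CA bundle placeholder are assumptions for illustration; only the general mechanism (intercepting CREATE/UPDATE of ingress objects) is taken from the description above.

```yaml
# Hypothetical registration of the ingress admission webhook.
# Names, namespace and path are illustrative assumptions.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-admission
webhooks:
  - name: ingress-admission.example.invalid
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1beta1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    failurePolicy: Fail
    clientConfig:
      service:
        name: ingress-admission
        namespace: ingress-admission
        path: /
      caBundle: "<base64-encoded CA certificate>"
```

With `failurePolicy: Fail`, an ingress object that the webhook rejects (or cannot reach) is never stored in the cluster, so the nginx-ingress pod only ever sees validated configurations.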
The code is written in golang, and can be found here:
How to test the setup
TODO: add info.
- Toolforge k8s RBAC and PodSecurityPolicy -- documentation page
- phabricator T228500 - Toolforge: evaluate ingress mechanism -- original ticket to design the ingress (epic)
- phabricator T234037 - Toolforge ingress: decide on final layout of north-south proxy setup -- original ticket to design the outer proxy layout
- phabricator T234231 - Toolforge ingress: decide on how ingress configuration objects will be managed -- original ticket to design ingress object management
- phabricator T235252 - Toolforge: SSL support for new domain toolforge.org -- original ticket to handle SSL support for the domain toolforge.org and the new k8s cluster in general
- phabricator T234617 - Toolforge: introduce new domain toolforge.org -- original ticket to introduce the domain toolforge.org
- phabricator T234032 - Toolforge ingress: create a default landing page for unknown/default URLs -- about the fourohfour tool
- Resolution on new domains usage for WMCS
- Deploying Toolforge kubernetes