Portal:Cloud VPS/Admin/Magnum

Magnum is the OpenStack Kubernetes-as-a-Service offering. It can be used to easily deploy Kubernetes clusters to a Cloud VPS project.

Deploy a cluster

openstack coe cluster create <cluster name> --cluster-template <cluster template name> --master-count 1 --node-count <worker count>
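
For example, assuming a cluster template named k8s23 already exists in the project (all names here are placeholders), a two-worker cluster could be created and its build progress watched with:

openstack coe cluster create mycluster --cluster-template k8s23 --master-count 1 --node-count 2
# poll until the status reaches CREATE_COMPLETE
openstack coe cluster list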

Get kube config file

After the cluster is built, fetch the config file needed to connect with kubectl:

openstack coe cluster config <cluster name> --dir <path to put "config" file in>
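
For example (paths and names are placeholders), write the config into a directory and point kubectl at it to verify connectivity:

openstack coe cluster config mycluster --dir ~/mycluster
export KUBECONFIG=~/mycluster/config
kubectl get nodes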

Notes

At the time of writing, clusters can only be deployed with a single control node. T326436 tracks support for more.

Templates

Magnum clusters are deployed using a template that defines what the cluster will look like to OpenStack. This includes which image will be used, networking information, control and worker node sizes, and which version of Kubernetes the cluster will run.
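
Existing templates can be inspected with the standard coe subcommands, for example (the template name is a placeholder):

# list templates visible to the current project
openstack coe cluster template list
# show the full definition of a single template
openstack coe cluster template show k8s23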

Create a template

For an eqiad1 cluster, the following will define the correct networking information:

openstack coe cluster template create <template name> \
--image <image name> \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
[--docker-volume-size <size> \]
--master-flavor <flavor> \
--flavor <flavor> \
--coe kubernetes \
--labels kube_tag=<version>-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
[--public]

A generic example would look like:

openstack coe cluster template create k8s23 \
--image Fedora-CoreOS-38 \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g2.cores1.ram2.disk20 \
--flavor g2.cores1.ram2.disk20 \
--coe kubernetes \
--labels kube_tag=v1.23.15-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
--public

For codfw1dev you would want different networking options:

openstack coe cluster template create core-37-k8s23-100g \
--image Fedora-CoreOS-38 \
--external-network wan-transport-codfw \
--fixed-subnet cloud-instances2-b-codfw \
--fixed-network lan-flat-cloudinstances2b \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g2.cores1.ram2.disk20 \
--flavor g2.cores1.ram2.disk20 \
--coe kubernetes \
--labels kube_tag=v1.23.15-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
--public

Template options

Some of these options are optional or warrant extra explanation.

--docker-volume-size defines how large a volume, if any, to attach to each node; this volume is used for storing Docker images. If none is specified, the disk space available to the image itself (as part of its flavor definition) is used instead.

--public defines whether the template can be seen beyond the project it was created in.

Valid values for the kube_tag label can be found at https://hub.docker.com/r/rancher/hyperkube/tags

Scale a cluster

openstack coe cluster resize <cluster name> <number of workers>
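
For example, to scale a cluster named mycluster (a placeholder) to three workers:

openstack coe cluster resize mycluster 3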

Upgrade a cluster

(This has not been seen to work outside of devstack)

openstack coe cluster upgrade <cluster name> <template to upgrade to>
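
For example (cluster and template names are placeholders), upgrading to a template that pins a newer kube_tag:

openstack coe cluster upgrade mycluster k8s24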

If the upgrade gets stuck, you may need to uncordon any cordoned nodes and then run:

openstack coe cluster resize <cluster name> <current number of workers>

to reset the cluster status. Then run the upgrade again.
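
A sketch of the uncordon step, assuming kubectl is still responding; the node name is an example only:

# list nodes and look for SchedulingDisabled in the STATUS column
kubectl get nodes
# uncordon each cordoned node (names will differ per cluster)
kubectl uncordon mycluster-abcd1234-node-0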

While updating, kubectl won't work, but services will still be running and available.

Add Cinder CSI to a cluster

Set up Cinder CSI following the upstream manifests: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md#using-the-manifests

example cloud.conf:

[Global]
application-credential-id = <application credential id>
application-credential-secret = <application credential secret>
domain-name = default
auth-url = https://openstack.eqiad1.wikimediacloud.org:25000/v3
tenant-id = <project id>
region = eqiad1-r
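
The application credential referenced above can be created with the openstack CLI; a minimal sketch (the credential name is arbitrary). Note that the secret is only displayed at creation time:

openstack application credential create csi-cinder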

tenant-id can be found from:

openstack server show <id of a k8s cluster node>

It will be listed as project_id.
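
If you only want the bare value, the CLI's output-formatting flags can extract it directly:

openstack server show <id of a k8s cluster node> -f value -c project_id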

# encode cloud.conf so it can be pasted into the Kubernetes secret
base64 -w 0 cloud.conf
git clone https://github.com/kubernetes/cloud-provider-openstack.git
cd cloud-provider-openstack
vim manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml # replace base64 part with base64 output from above
kubectl create -f manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
kubectl apply -f manifests/cinder-csi-plugin/
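
Once the manifests are applied, a quick sanity check might look like the following (the pod name prefix is an assumption based on the upstream manifests):

# controller and node plugin pods should be running in kube-system
kubectl get pods -n kube-system | grep csi-cinder
# the driver should be registered as cinder.csi.openstack.org
kubectl get csidrivers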


Set up Cinder as the default storage class with the following sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  availability: nova

kubectl apply -f sc.yaml
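
To verify that the default storage class works end to end, a throwaway PersistentVolumeClaim can be created; a minimal sketch (pvc.yaml, all names are placeholders). Because no storageClassName is set, the claim should bind using the default class, backed by a newly created Cinder volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

kubectl apply -f pvc.yaml
kubectl get pvc test-pvc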