Portal:Cloud VPS/Admin/Magnum

Overview

Magnum is the OpenStack Kubernetes-as-a-service (k8saas) offering. It can be used to easily deploy Kubernetes clusters into a Cloud VPS project.

Deploy a Cluster

openstack coe cluster create <cluster name> --cluster-template <cluster template name> --master-count 1 --node-count <worker count>
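
For example, a hypothetical three-worker cluster built from a template named k8s23 (both names here are placeholders):

openstack coe cluster create mycluster --cluster-template k8s23 --master-count 1 --node-count 3

Creation takes several minutes; progress can be watched with openstack coe cluster list.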

Get kube config file

After the cluster is built, fetch the config file needed to connect with kubectl:

openstack coe cluster config <cluster name> --dir <path to put "config" file in>
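
A minimal usage sketch, assuming a cluster named mycluster (a placeholder) and writing the config file into the current directory:

openstack coe cluster config mycluster --dir .
export KUBECONFIG=$(pwd)/config
kubectl get nodes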

Notes

At the time of writing, clusters can only be deployed with a single control-plane node. T326436 tracks support for more.

Templates

Magnum clusters are deployed using a template that defines what the cluster will look like to OpenStack. This includes which image will be used, networking information, control and worker node sizes, and which version of Kubernetes will be deployed when using the cluster.
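
Templates already available to the current project can be listed with:

openstack coe cluster template list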

Create a template

For an eqiad1 cluster, the following will define the correct networking information:

openstack coe cluster template create <template name> \
--image <image name> \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
[--docker-volume-size <size> \]
--master-flavor <flavor> \
--flavor <flavor> \
--coe kubernetes \
--labels kube_tag=<version>-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
[--public]

A generic example would look like:

openstack coe cluster template create k8s23 \
--image magnum-fcos-37-20221127 \
--external-network wan-transport-eqiad \
--fixed-network lan-flat-cloudinstances2b \
--fixed-subnet cloud-instances2-b-eqiad \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g2.cores1.ram2.disk20 \
--flavor g2.cores1.ram2.disk20 \
--coe kubernetes \
--labels kube_tag=v1.23.15-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
--public

For codfw1dev you would want different networking options:

openstack coe cluster template create core-37-k8s23-100g \
--image magnum-fcos-37-20221127 \
--external-network wan-transport-codfw \
--fixed-subnet cloud-instances2-b-codfw \
--fixed-network lan-flat-cloudinstances2b \
--dns-nameserver 8.8.8.8 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--docker-volume-size 100 \
--master-flavor g2.cores1.ram2.disk20 \
--flavor g2.cores1.ram2.disk20 \
--coe kubernetes \
--labels kube_tag=v1.23.15-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true \
--floating-ip-disabled \
--public

Template Options

Some options are optional:

--docker-volume-size defines how large a volume, if any, to attach to each node; this volume is used for storing Docker images. If none is specified, the disk space available to the image itself (as part of its flavor definition) is used instead.

--public defines whether the template can be seen beyond the project it was created in.

Valid kube_tag values can be found at https://hub.docker.com/r/rancher/hyperkube/tags
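
To inspect an existing template's settings before reusing it (the template name here matches the eqiad1 example above):

openstack coe cluster template show k8s23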

Scale a cluster

openstack coe cluster resize <cluster name> <number of workers>
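
For example, growing the hypothetical mycluster to five workers:

openstack coe cluster resize mycluster 5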

Upgrade a cluster

(This has not been seen to work outside of devstack)

openstack coe cluster upgrade <cluster name> <template to upgrade to>

If this gets stuck, you may need to uncordon any cordoned nodes (see the kubectl sketch below) and then run:

openstack coe cluster resize <cluster name> <current number of workers>

to reset the cluster status. Then run the upgrade again.
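
A sketch of finding and uncordoning nodes with kubectl (the node name is a placeholder):

kubectl get nodes
kubectl uncordon <node name>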

While the upgrade is running, kubectl won't work, but services will still be running and available.