Ganeti

Introduction

Ganeti is a clustered virtual machine management tool built on top of existing virtualization technologies such as Xen or KVM and other open source software. While it supports both KVM and Xen, at WMF only KVM is enabled as a hypervisor. The primary Ganeti web page is http://www.ganeti.org/.

At WMF, Ganeti is used as a cluster management tool for production-network VPSes running services that help us run our clusters. After an evaluation of OpenStack vs Ganeti, Ganeti was chosen as the better fit for the job at hand.

Architecture Overview

Ganeti is architected as a shared nothing virtualization cluster with job management. There is one master node (per-site) that receives all jobs to be executed (create a VM, delete a VM, stop/start VMs, etc). The master node can be swapped between a preconfigured number of master candidates in case of a hardware failure. This avoids a single point of failure for cluster management operations.

For VM storage we use the DRBD backend. This provides "RAID-1 over the network" block devices which ensure that in the event of failure of a single hardware node, VMs may be restarted with minimal disruption on their secondary (backup) node. This also provides the capability to migrate VMs between hardware nodes (e.g. for hardware maintenance) without requiring a centralized storage backend.
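
For example, a DRBD-backed VM can be live-migrated to its secondary node with a single command (a generic sketch; substitute the instance FQDN):

   # live-migrate a running VM from its primary node to its (DRBD) secondary node
   sudo gnt-instance migrate -f <fqdn>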

Within a cluster, there's the notion of a node group: practically, a group of hardware nodes. Think of it as nothing more than a subcluster division. Operations are usually constrained within a node group and do not cross its boundaries unless specifically instructed to. In our deployment, node groups are aligned with the physical row location of the hardware inside the datacenter (where possible). This allows us to diversify VMs across physical row locations within a single site, taking advantage of the additional fault tolerance this can provide. In caching pop locations there is a single "default" node group.

A high-level overview of the architecture is at http://docs.ganeti.org/ganeti/2.12/html/_images/graphviz-246e5775f608681df9f62dbbe0a5d4120dc75f1c.png and more discussion of it is in http://docs.ganeti.org/ganeti/2.12/html/design-2.0.html

Ganeti Clusters

A cluster is identified by an FQDN (e.g. ganeti01.svc.eqiad.wmnet) which corresponds to an IPv4 address. That IPv4 address is "floating", meaning that it is owned by the current master.

Administration always happens via the master. It is the only node where all commands can be run and hosts the API. Failover of a master is easy but manual. See below for more information on how to do it.

Hardware Conventions

As of Dec 2019 we have standardized on a RAID-5 host configuration in eqiad/codfw and an SSD-backed RAID-1 configuration in caching pop sites.

Connect to an existing cluster

Just ssh to its FQDN

  ssh ganeti01.svc.eqiad.wmnet

Building a new cluster

Host Preparation

Ensure hardware virtualization is enabled in BIOS

Hardware virtualization must be enabled for KVM. If you encounter errors relating to KVM, double check that hardware virtualization is enabled inside BIOS > System Settings > Processor Settings

Ensure 'ganeti' Volume Group is present

Our current Ganeti partman layout creates a large RAID device and formats it as swap to work around a shortcoming in the Debian installer (fixme).

When a new ganeti host is built, you must manually remove the large swap device on /dev/md2, create a PV on /dev/md2, and create a VG called 'ganeti'. Be sure to remove the stale swap entry from /etc/fstab.
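
A minimal sketch of those manual steps (run as root on the new host; assumes the swap array is /dev/md2 as described above):

   swapoff /dev/md2           # stop using the RAID device as swap
   pvcreate /dev/md2          # turn it into an LVM physical volume
   vgcreate ganeti /dev/md2   # create the 'ganeti' volume group the cluster expects
   # finally, edit /etc/fstab and remove the stale swap entry for the device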

Ensure network bridge(s) are present

Bridges are used to connect virtual machines to the network interface of the hardware node. Currently these are manually configured. Each bridge corresponds to a network VLAN, on either a physical or tagged interface.

Looking at the simpler case of the caching pop deployment, there is a single bridge called 'private' that corresponds with the private network interface. This bridge interface assumes the IP of the node.

Example configuration:

 #/etc/network/interfaces
 
 # The loopback network interface
 auto lo private
 iface lo inet loopback
 
 iface ens3f0np0 inet manual
 
 iface private inet static
     bridge_ports 	ens3f0np0
     bridge_stp	off
     bridge_maxwait	0
     bridge_fd	0
         address 10.20.0.31/24
         gateway 10.20.0.1
         dns-nameservers 10.3.0.1
         dns-search ulsfo.wmnet
         up /sbin/ip token set ::10:20:0:31 dev private
         up ip addr add 2620:0:862:102:10:20:0:31/64 dev private
 
 ## The original network interface configuration
 #allow-hotplug ens3f0np0
 #iface ens3f0np0 inet static
 #	address 10.20.0.31/24
 #	gateway 10.20.0.1
 #	dns-nameservers 10.3.0.1
 #	dns-search esams.wmnet
 #     pre-up /sbin/ip token set ::10:20:0:31 dev ens3f0np0
 #     up ip addr add 2620:0:862:102:10:20:0:31/64 dev ens3f0np0

Ensure intra-cluster SSH is permitted by the iptables/ferm firewall

Cluster nodes must be able to ssh to one another in order for Ganeti to function. Double-check that puppet has configured this by attempting to SSH between the Ganeti cluster hosts being deployed.
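
A quick manual check (hypothetical hostnames; substitute the nodes actually being deployed):

   # from one cluster node, confirm root SSH to another node works
   sudo ssh ganeti1002.eqiad.wmnet hostname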

Generate an RAPI certificate for the cluster

Ganeti RAPI TLS certificates are generated and managed with Cergen. You'll need to...

  • Add an entry to the ganeti yaml certificate manifest. Don't use a password, as ganeti-rapi did not support this (at the time of writing)
  • Generate the certificate
  • Add the key file to the puppet private/secret ssl certificate store
  • Add a dummy key file to the labs/private ssl certificate store. !!! do not use the real key here, use a dummy key!!! See other existing ganeti keys in labs/private for an example
  • Add the public certificate to the puppet files/ssl certificate store.

Initialize the new cluster

An example of initializing a new cluster:

  sudo gnt-cluster init \
  --no-ssh-init \
  --enabled-hypervisors=kvm \
  --vg-name=ganeti \
  --master-netdev=private \
  --hypervisor-parameters kvm:kvm_path=/usr/bin/qemu-system-x86_64,kvm_flag=enabled,serial_speed=115200,migration_bandwidth=64,migration_downtime=500,kernel_path= \
  --nic-parameters=link=private \
  ganeti01.svc.codfw.wmnet

The above is the way we currently have our clusters configured.

Synchronize ssh_host_rsa_key across cluster nodes

Since the ganeti service address floats between master candidate nodes, the host key for the cluster service address must remain the same across master changes. To accomplish this we must synchronize /etc/ssh/ssh_host_rsa_key and /etc/ssh/ssh_host_rsa_key.pub across the Ganeti cluster in that site.

Copy the contents of /etc/ssh/ssh_host_rsa_key and /etc/ssh/ssh_host_rsa_key.pub from the master node to the other nodes within the cluster, and add a known_hosts entry for this shared host key to /var/lib/ganeti/known_hosts. Then bounce sshd on each node.
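
A rough sketch of that synchronization, run from the current master (hypothetical node names; adjust to the cluster being built):

   for node in ganeti1002.eqiad.wmnet ganeti1003.eqiad.wmnet; do
       # copy the master's host key pair to every other node
       sudo scp /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key.pub ${node}:/etc/ssh/
       # restart sshd so the new host key is picked up
       sudo ssh ${node} systemctl restart ssh
   done
   # then add a known_hosts entry for the shared key to /var/lib/ganeti/known_hosts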

Add nodes to cluster

Run gnt-node add $hostname_to_be_added for each node that should be added to the cluster

Verify cluster health

Run `gnt-cluster verify` to perform a health check on the cluster. Address any errors that may occur before moving on.

Ensure microcode flags are passed down to VM instances

In addition we need to adapt the QEMU settings so that new microcode flags are passed down to the instance:

   sudo gnt-cluster modify -H kvm:cpu_type=IvyBridge\\,+pcid\\,+invpcid\\,+spec-ctrl\\,+ssbd\\,+md-clear

Create a test VM

The cluster should now be ready for VM creation, follow the 'create a VM' steps below and address any errors that may occur.

Modify an existing cluster

Modifying the cluster to change defaults, hypervisor parameters, limits, the security model, etc. is possible. An example of modifying the cluster is given below.

  sudo gnt-cluster modify -H kvm:kvm_path=/usr/bin/qemu-system-x86_64,kvm_flag=enabled,kernel_path=

To get an idea of what is actually modifiable do a:

  sudo gnt-cluster info

and then look up the various options in the Ganeti documentation [1]

Destroy the cluster

Destroying the cluster is a one way street. Do not do it lightly. An example of destroying a cluster:

 sudo gnt-cluster destroy  --yes-do-it

Do note that various things will be left behind. For example /var/lib/ganeti/queue/ will not be deleted. It's up to you if you want to delete it or not, depending on the case.

Add a node

Adding a new hardware node to the cluster to increase capacity:

  sudo gnt-node add ganeti1002.eqiad.wmnet

Listing cluster nodes

Listing all hardware nodes in a cluster:

  sudo gnt-node list

That should return something like:

  Node                   DTotal  DFree MTotal MNode MFree Pinst Sinst
  ganeti1001.eqiad.wmnet 427.9G 427.9G  63.0G  391M 62.4G     0     0
  ganeti1002.eqiad.wmnet 427.9G 427.9G  63.0G  289M 62.5G     0     0
  ganeti1003.eqiad.wmnet 427.9G 427.9G  63.0G  288M 62.5G     0     0
  ganeti1004.eqiad.wmnet 427.9G 427.9G  63.0G  288M 62.5G     0     0

The columns are, respectively: disk total, disk free, memory total, memory used by the node itself, memory free, instances for which this node is primary, and instances for which this node is secondary.

The node group information can be obtained via a command like

   sudo gnt-node list -o name,group

which would return the name and group of each node:

   Node                   Group
   ganeti1001.eqiad.wmnet row_C
   ganeti1002.eqiad.wmnet row_C
   ganeti1003.eqiad.wmnet row_C
   ganeti1004.eqiad.wmnet row_C
   ganeti1005.eqiad.wmnet row_A
   ganeti1006.eqiad.wmnet row_A
   ganeti1007.eqiad.wmnet row_A

Detecting the master node

The current master node of a cluster is printed in the MOTD when logging into a Ganeti server. It can also be detected with

 sudo gnt-node list -o name,master_candidate,master
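
Ganeti also has a dedicated sub-command that simply prints the current master (sketch; verify it is available in the installed version):

   sudo gnt-cluster getmaster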

View the job queue

Ganeti has a job queue built in. Most of the time it works fine, but if something is taking too long it might be helpful to check what's going on in the job queue:

  gnt-job list

and getting a job id from the result

  gnt-job info #ID

Cluster upgrades

Hardware/software upgrades on a Ganeti cluster can happen with zero downtime to VM operations. The procedure to do so is outlined below. In case a shutdown/reboot is needed, the procedure to empty the node is described in the rolling reboot section.

Do the software upgrade (if needed)

  apt upgrade, apt install <component>, debdeploy, whatever is the correct method

throughout the cluster. It should have no repercussions on any VM. Barring a Ganeti bug in the upgraded version, the cluster itself should also have no problems. Between minor versions (e.g. 2.12 -> 2.15) it may be required to run some upgrade script; read the changelog/upgrade notes. The Debian maintainer builds the package in a way that both versions will be installed, and until you run said script the old version will still be used.

Rolling reboot

Doing a rolling reboot of the cluster is easy. Empty every node, reboot it, check that it is online, proceed to the next. The one thing to take care of is not to reboot the master without failing it over first.
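
A rough per-node sketch of that loop, using the commands described in the node operations section below (hypothetical node name; run from the master):

   sudo gnt-node migrate -f ganeti1004.eqiad.wmnet   # empty the node of primary instances
   # reboot ganeti1004 and wait for it to come back online, then:
   sudo gnt-cluster verify-disks                     # must report "No disks need to be activated."
   # proceed to the next node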

Failover the master

Choose a master candidate that suits you. You can list the master candidates with

  sudo gnt-node list -o name,master_candidate

Log in to it and run

  sudo gnt-cluster master-failover

The cluster IP will now be served by the new node and the old one is no longer the master.

Cluster rebalancing

There might be times when the cluster looks, or actually is, unbalanced. That will be true after a rolling reboot of the nodes. Rebalancing is easy and baked into Ganeti; all it takes is running a command:

  sudo hbal -L -X -G <node_group>

Please run it in a screen session. It might take quite a while to finish. The jobs have been submitted, so losing that session is fine, but it's still prudent. A list of node groups can be displayed with "sudo gnt-group list".

The cluster will calculate a current score, run some heuristic algorithms to try and minimize that score and then execute the commands required to reach that state.

Cluster certificates

Ganeti creates and manages an internal Certificate Authority (CA). This CA is used for internal component communication and it's overall rather well hidden in day-to-day operations. There are a CA and node (aka client) certs that are distributed to the nodes. All nodes have the CA and the key to the CA; this is part of the ability to assume the master functionality. We've decided not to mess with this CA as it is internal to the software and cluster state, and we don't want to expose the key of the Puppet CA to the Ganeti nodes.

The CA issues certs for the client nodes, the SPICE functionality (we don't use that) as well as the RAPI (remote API). Since we treat that CA as a black box, we don't want to use those. So, in order to allow the rest of the infrastructure to use RAPI, we run ganeti-rapi with the -K parameter and instead use a certificate we ship using Puppet. This can be confusing, so keep an eye out.

Renew cluster certificates

The cluster certificates can be renewed at any point in time. While time consuming, it is a safe command to run (keep in mind, however, that other operations cannot be run during that timeframe). The command to do so is:

  sudo gnt-cluster renew-crypto --new-cluster-certificate --new-node-certificates --new-rapi-certificate --new-spice-certificate --reason="<insert name> renewing crypto"

Note that we renew the cluster CA, the client certs, the SPICE certificate and the RAPI certificate. Don't be alarmed about renewing the RAPI certificate. As pointed out above, while the CA does issue one, we override the command that starts ganeti-rapi and don't use the file created by the CA. So we can safely renew that too.

It will ask you to run gnt-cluster verify. Do so. If you see any errors, try to correct them and repeat.

Expired cluster certificates

So, you've just encountered something like

 Error checking node ganeti2004.codfw.wmnet: Error 60: server certificate verification failed. CAfile: /var/lib/ganeti/server.pem

What has happened is that the CA (and client certs) have expired. First, don't panic. While it must be fixed, it's not going to cause issues immediately. Relax and read through this section to understand what needs to be done.

Normally, we would just use the command in the section above to renew everything, but if we are in this position we can't do that right away and need to perform a couple more actions first.

Prep work:

There are a couple of things that might interfere with your work: Puppet and the Ganeti cronjobs. Disable both before starting, otherwise you might end up scratching your head as race conditions interfere with your work. To do so, run disable-puppet "<insert comment>" on all hosts via Cumin and move /etc/cron.d/ganeti to some other place temporarily.
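
A hedged sketch of that prep work (the Cumin host selector and the comment are placeholders; adjust them to the affected cluster):

   # from a cumin host: stop puppet on all nodes of the affected cluster
   sudo cumin 'ganeti2*.codfw.wmnet' 'disable-puppet "renewing expired ganeti certs - <your name>"'
   # on each node: park the ganeti cronjobs out of the way temporarily
   sudo mv /etc/cron.d/ganeti /root/ganeti.cron.disabled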

  1. Make sure you are on the master (if you used the ganeti01.svc.<site>.wmnet service URL it should already be so).
  2. Write down the nodes participating in the cluster. Running the following should do it:
      sudo gnt-node list
  3. Stop ganeti across the cluster. A cumin command 'systemctl stop ganeti' should do it. Don't worry, VMs will keep on running.
  4. Create a new CA. The easiest way to do so is:
      chmod 0600 /var/lib/ganeti/server.pem && openssl req -new -newkey rsa:2048 -days 1825 -nodes -x509 -keyout /var/lib/ganeti/server.pem -out /var/lib/ganeti/server.pem -batch && chmod 0400 /var/lib/ganeti/server.pem
  5. Copy that cert to all hosts. Since the nodes have SSH allowed between them, you can just do that in a for loop:
      for i in ganetiX001.<site>.wmnet ganetiX002.<site>.wmnet ... ; do scp /var/lib/ganeti/server.pem $i:/var/lib/ganeti/server.pem ; ssh $i chmod 0400 /var/lib/ganeti/server.pem ; done
  6. Start ganeti across the cluster.
  7. Run a gnt-node list and note that this time the capacity of the hosts and usages are displayed normally and not with a question mark.
  8. Now run the gnt-cluster renew-crypto command from above.
  9. Follow the point above about running gnt-cluster verify.
  10. Correct errors if any and repeat.
  11. You are out of the woods.

Now re-enable puppet and move back /etc/cron.d/ganeti from wherever you placed it.

Node operations

Reboot/Shutdown a node for maintenance

Select a node that needs a reboot/shutdown for brief hardware maintenance and empty it of primary instances:

   sudo gnt-node migrate -f ganeti1004.eqiad.wmnet

Now a

   sudo gnt-node list

should return 0 primary instances for the node. It is safe to reboot it or shut it down for a brief amount of time for hardware maintenance.

VMs without DRBD disk template

Note that there is a chance this isn't true. We have a few instances that can't be migrated to other nodes. Those are things for which replicated storage (aka DRBD) is either overkill (e.g. d-i-test) or a liability (e.g. etcd hosts, where the extra IO latency from DRBD causes issues). For those VMs, which have the plain disk template, it's safe to just reboot the host; Ganeti will DTRT.

After reboot, before you migrate the next node run

    sudo gnt-cluster verify-disks

It should display "No disks need to be activated." (possibly multiple times, one per Ganeti node group) before the next node can be rebooted (this ensures that DRBD is fully synced).

Shutdown a node for a prolonged period of time

Should the node be going down for an undetermined amount of time, also move the secondary instances

   sudo gnt-node migrate -f <node_fqdn>
   sudo gnt-node evacuate -s  <node_fqdn> # Removes it as a secondary as well

The second command means moving around DRBD pairs and syncing disk data. It is bound to take a long time, so find something else to do in the meantime.

Now a

   sudo gnt-node list 

should return 0 for both primary and secondary instances. Before powering off the node, we need to remove it from the cluster as well:

   sudo gnt-node remove <node_fqdn>

NOTE: Do not forget to re-add it after it is fixed (if it ever is)

   sudo gnt-node add <node_fqdn>

Failed hardware node

When a host is having problems (hardware/kernel/otherwise) and is unreachable, there are a number of possible avenues to solve the issue, but an empirically good way out is:

  • Just powercycle the host. If that works, it's probably the fastest way out. Most services should be set up highly available anyway; if one is not, we should either set it up that way or not care too much when it fails. If this works, you are done; if not, keep on reading.
  • If the above doesn't work (the node never comes back up), start the VMs on another host. This can be done with
   sudo gnt-node failover -f <node_fqdn>

In some cases you may have to ignore the consistency checks (this has never happened in our setup) by passing --ignore-consistency. Again, all important services are set up in highly available setups (and could easily be reimaged), so this will only severely bite VMs that are not set up that way.

  • Remove the host from the cluster
   sudo gnt-node remove <node_fqdn>
  • Debug/fix/RMA the node.

VM operations

Listing VMs

   gnt-instance list 

Create a VM

Creating a VM is easy. Most of the steps are the same as for production so keep in mind the regular process as well.

However, the wmf-auto-reimage script assumes, as of 2019, that you are working with a physical host with an IPMI interface, so it cannot be used for VMs. So while the network boot installation will be unattended, VM creation is explained below, and Puppet cert signing and other steps are done with the manual procedure.

Verify cluster resource availability

Before starting check if the VM resources that you need are available in the cluster. Here's an example:

   elukey@ganeti1003:~$ sudo gnt-node list
   Node                   DTotal  DFree MTotal MNode MFree Pinst Sinst
   ganeti1001.eqiad.wmnet 707.4G   9.2G  62.9G 37.6G 24.9G    11    11
   ganeti1002.eqiad.wmnet 707.4G  69.2G  62.9G 40.4G 21.5G    10    11
   ganeti1003.eqiad.wmnet 707.4G 244.6G  62.9G 39.4G 21.2G    11    11
   ganeti1004.eqiad.wmnet 707.4G 285.1G  62.9G 55.8G  6.8G    10     8
   ganeti1005.eqiad.wmnet   2.1T 225.8G  62.8G 16.8G 46.0G     6    18
   ganeti1006.eqiad.wmnet   2.1T 806.7G  62.8G 55.0G  7.5G    13     5
   ganeti1007.eqiad.wmnet   2.1T   1.1T  62.8G 44.2G 16.6G    12    12
   ganeti1008.eqiad.wmnet   2.1T 834.4G  62.8G 47.8G 11.3G    13     7

First, check Netbox to get an idea of which Ganeti nodes are in the various rows. Then check the DFree (disk free) and MFree (memory free) columns of the Ganeti nodes in the above table. In theory there should be sufficient disk/memory space on all nodes in the row that you are planning to use, otherwise you might get failures when creating the VM.

If you want to confirm that the reason VM creation fails is really a lack of resources, you can look into the Ganeti commands.log. For example, in eqiad you would check the current master node ganeti1003:

[ganeti1003:~] $ sudo grep failure /var/log/ganeti/commands.log
OpPrereqError: ("Can't compute nodes using iallocator 'hail': Request failed: Group row_C (preferred): No valid allocation solutions, failure reasons: FailMem: 9, FailDisk: 3", 'insufficient_resources')

Assign a hostname/IP

Same process as for hardware. Assign the IP/hostname and make sure DNS changes are live before going forward. This means you will also get to decide which row this VM will live in. We don't have Ganeti in all rows, so make sure you allocate the IP in a row that has Ganeti. You can get the rows via

   sudo gnt-group list

which should return something like the output below

   Group Nodes Instances AllocPolicy NDParams
   row_A     3        20 preferred   ovs=False, ssh_port=22, ovs_link=, spindle_count=1, exclusive_storage=False, cpu_speed=1,ovs_name=switch1, oob_program=
   row_C     4        19 preferred   ovs=False, ssh_port=22, ovs_link=, spindle_count=1, exclusive_storage=False, cpu_speed=1,ovs_name=switch1, oob_program=

This means we can place VMs in rows A and C for this DC. Keep the name of the group (e.g. row_A, row_C) you will need it when creating the VM.

If you are adding a new type of server with a yet unused naming convention, please edit Infrastructure_naming_conventions and add it. You should also add Cumin aliases to cover selecting the new hostnames. See Cumin#Global_grammar_host_selection and edit modules/profile/templates/cumin/aliases.yaml.erb in operations/puppet. If you are making a new 'cluster', you'll also need to add a $cluster_$dc definition to hieradata/common/monitoring.yaml.

Create the VM

Ganeti VMs are created with the sre.ganeti.makevm cookbook. You can use it from any Cumin host like the following:

elukey@cumin1001:~$ sudo cookbook sre.ganeti.makevm --help
usage: cookbook [-h] [--vcpus VCPUS] [--memory MEMORY] [--disk DISK]
                [--network {public,private,analytics}]
                {codfw_A,codfw_B,eqiad_A,eqiad_C} fqdn

Create a new Virtual Machine in Ganeti

Examples:
    Create a VM with 1 VCPU and 3 gigabytes of ram and 100 gigabytes of disk on codfw:

        makevm codfw_B mytestvm.codfw.wmnet --vcpus 1 --memory 3 --disk 100

positional arguments:
  {codfw_A,codfw_B,eqiad_A,eqiad_C}
                        Ganeti cluster identifier codfw_A, codfw_B, eqiad_A,
                        eqiad_C
  fqdn                  The FQDN for the VM.

optional arguments:
  -h, --help            show this help message and exit
  --vcpus VCPUS         The number of virtual CPUs to assign to VM (default:
                        1).
  --memory MEMORY       The number of gigabytes of ram to allocate to the VM
                        (default: 1).
  --disk DISK           Number of gigabytes of disk to allocate to the VM
                        (default: 10).
  --network {public,private,analytics}
                        Specify if a private, analytics or public IP address
                        is required (default: private).



elukey@cumin1001:~$ sudo cookbook sre.ganeti.makevm eqiad_A an-tool1006.eqiad.wmnet --vcpus 2 --memory 4 --disk 150 --network analytics

Please note that creating a new instance takes several minutes, most of the time due to the provisioning of the disk space. The makevm script is supposed to return the output of the Ganeti command periodically to give regular feedback about the status of the VM creation, but the cookbook returns all the output of the Ganeti command at the end (see T203963#5297888 for more info), so it might appear that everything is hanging while it is not. You can ssh to the Ganeti master node and check the /var/log/ganeti logs to get the current status of your VM (this is needed until the cookbooks are able to support running cumin commands with interactive output).

Update the DHCP config with the MAC of the new VM

The makevm cookbook will print the MAC address of the virtual NIC of the freshly created instance at the end. Next you need to add a DHCP entry with that MAC by editing the modules/install_server/files/dhcpd/linux-host-entries.ttyS0-115200 file.
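
A hypothetical entry, assuming the standard ISC dhcpd host syntax used in that file (hostname, MAC address and FQDN below are placeholders):

   host an-tool1006 {
       hardware ethernet aa:00:00:11:22:33;
       fixed-address an-tool1006.eqiad.wmnet;
   }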

After merging the patch, you can force a puppet run on the installation servers to speed up the installation process:

 sudo cumin install* 'run-puppet-agent'

Update partman config (if needed)

The netboot.cfg file configures the partition setup of a server. For all Ganeti VMs that's consistently flat.cfg virtual.cfg. When adding a new instance you need to verify whether a partman config is assigned. Most server types use a globbing scheme like idp*, and if you're adding a further IDP host you can simply skip this step. If you however add a server with a new standalone name, then you need to merge a Puppet change for modules/install-server/files/autoinstall/netboot.cfg.
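
A hypothetical fragment of such a change, assuming netboot.cfg maps hostname globs to partman recipes with a shell-style case entry (check the existing file for the exact form; the hostname glob is a placeholder):

   mynewservice*) echo flat.cfg virtual.cfg ;;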

After that you need to run Puppet on the apt* servers:

  sudo cumin apt* 'run-puppet-agent'

Start the VM

  gnt-instance start <fqdn>

and connect to the console

  gnt-instance console <fqdn>

Ctrl+] to leave the console

Set boot order to disk

Assuming the installation is going well, but before it finishes, you need to set the boot order back to disk. This is a limitation of the current version of the Ganeti software and is expected to be solved (upstream is aware).

  gnt-instance modify \
  --hypervisor-parameters=boot_order=disk \
  <fqdn>

Note: when the VM has finished installing, it will shut down automatically. The Ganeti software includes HA checks and will promptly restart it. We rely on this behaviour to have the VM successfully installed. However, if you list the VMs during this phase you will see the VM in ERROR_down. Don't worry, this is expected.

Note 2: If you need to set a boot_order back to PXE to reinstall, it's "boot_order=network" (For KVM the boot order is either "floppy", "cdrom", "disk" or "network".)

Sign Puppet certs

For baremetal hosts, this part is handled by the reimage script (wmf-auto-reimage). As of 2020, VMs are not supported, so we need to do it as in the manual installation instructions for Ganeti VMs.

The process is as follows:

On a puppetmaster, use:

install_console FQDN

To obtain a root shell. Next run:

puppet agent -tv

To initiate Puppet on the new host. The first Puppet run triggers a request for the certificate; it should show up when running "sudo puppet cert -l" on puppetmaster1001. Next, approve the certificate with

sudo puppet cert -s FQDN

Once the first puppet run has completed, run the interface_automation/ImportPuppetDB Netbox script with the hostname (not FQDN) to synchronize the VM interfaces to Netbox. This will be automated but for now is still manual.

Assign role to the VM in puppet

As usual. If the VM will not be installed immediately, turn it off (see shutdown below). Running VMs that are not in Puppet will trigger alerts.

Delete a VM

The preferred method is to run the decommissioning cookbook Decom_script that will take care of everything, including deleting the VM itself. See also the actions performed by the sre.hosts.decommission cookbook.

To instead manually and irrevocably delete a VM, run:

  gnt-instance remove <fqdn>

In all cases please remember to clean up Puppet/DHCP/DNS entries afterwards.

Shutdown/startup a VM

  gnt-instance startup <fqdn>
  gnt-instance shutdown <fqdn>

Note: In the shutdown command, ACPI will be used to achieve a graceful shutdown of the VM. A 2-minute timeout exists, however, after which the VM will be forcefully shut down. In case you prefer not to wait those 2 minutes, --timeout exists and can be used like so:

  gnt-instance shutdown --timeout 0 <fqdn>

Get a console for a VM

You can log into the "console" for a Ganeti instance via

  gnt-instance console <fqdn>

The console can be left with "ctrl + ]"

Resize a VM

Increase/Decrease CPU/RAM

Make sure first that the cluster has adequate space for whatever resource you want to increase (if you do want to increase and not decrease a resource). This is done manually by a combination of grafana statistics for CPU/Memory utilization and the output of gnt-node list for disk space utilization. After that you can issue the following command to increase/decrease the memory size and number of Virtual CPUs assigned to a VM

 gnt-instance modify -B memory=<X>[gm],vcpus=<N> <fqdn>

where X and N are numbers. X can be suffixed by g or m for gigabytes or megabytes (please don't do terabytes ;)).
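
For example, to give a VM 4 GB of RAM and 2 virtual CPUs (hypothetical FQDN):

   gnt-instance modify -B memory=4g,vcpus=2 an-tool1006.eqiad.wmnet

Note that, as with other backend parameter changes, the new values typically only take effect once the instance is restarted.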

Adding disk space

Adding space to an existing disk is possible, but it is not advisable. Only do so sparingly and when you know it's the best course of action; a good example is if you know that the VM will be reimaged afterwards. The reason is that growing a disk with a partition table in it means that the partition table will need to be updated as well (and after that the filesystem too). This is a non-automated, error-prone process and requires at least 2 reboots (one for the VM to see the new disk size, and one for the VM to safely use the new partition table; the latter can be avoided manually, but it's usually cheaper to just reboot the VM).

But do note that the resizing of partitions and filesystems is up to you, as ganeti can't do it for you. The command would be:

 gnt-instance grow-disk <fqdn> <#> X[gmt]

where # is the index of the disk (0-indexed). You can get the disks allocated to a VM using gnt-instance info <fqdn>. Again, X is a number suffixed with g, m or t for gigabytes/megabytes/terabytes.
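
For example, to grow the first disk of a VM by 20 GB (hypothetical FQDN; the amount given is added to the current size):

   gnt-instance grow-disk an-tool1006.eqiad.wmnet 0 20g
   # the VM still needs a reboot to see the new size, and the partition table
   # and filesystem must then be resized manually as described above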

Adding a disk

Adding a disk is also easy if you want to avoid the mess that will result from having to resize partitions/filesystems. Thus it is recommended over resizing a disk. The command would be:

 gnt-instance modify --disk add:size=X[gmt] <fqdn>

Again, X is a number suffixed with g, m or t for gigabytes/megabytes/terabytes.

Removing a disk

In case a VM no longer requires a disk, it can be removed. The command would be:

 gnt-instance modify --disk <#>:remove <fqdn>

where # is the index of the disk (it's 0 indexed). You can get the disks allocated to a VM using gnt-instance info <fqdn>

Reinstall / Reimage a VM

Just like a physical server OS reinstall, this will destroy the contents of the machine and requires appropriate netboot configs to be in place. Proceed with caution!

Also note that the wmf-auto-reimage script does not work for virtual machines, so a manual procedure has to be followed, as explained below.

 FQDN=<fqdn>

Shutdown the VM

 gnt-instance shutdown $FQDN

Set boot device to network

 gnt-instance modify --hypervisor-parameters=boot_order=network $FQDN

Start instance and attach to console (ctrl-] to detach)

 gnt-instance start $FQDN && gnt-instance console $FQDN

After the OS install has finished (or while it is successfully under way from a separate terminal) set the boot device back to disk

 gnt-instance modify --hypervisor-parameters=boot_order=disk $FQDN

Finally, after the install finishes, boot the system into the fresh OS install and attach to the console (ctrl-] to detach)

 gnt-instance start $FQDN && gnt-instance console $FQDN

After you get the server login you need to proceed with the manual installation procedure.

Renumber (aka change network) a VM

Say you want to change the networking configuration of a VM. It's usually better to create a new VM in the correct row instead. However, there are some benefits to renumbering a VM, like avoiding having to re-instantiate a database or redo some other lengthy setup process. Some other indicative reasons would be:

  • Spreading a cluster of VMs offering a service across more network rows.
  • Rebalancing VMs across rows to free up capacity in a problematic row.

Steps are:

  1. Make sure the VM isn't powering some service (e.g. depool from pybal, if the VM is in a cluster, remove from the rest of the cluster's configuration)
  2. Choose a new row in the DC. The same rules as for creating a VM apply.
  3. Schedule downtime.
  4. Update DNS. Make sure you don't forget updating IPv6 :-)
  5. Update the networking configuration in /etc/network/interfaces. This is an error-prone process with no review. Make sure you get it right, otherwise you are in for some painful debugging.
  6. Update the /etc/hosts referencing the host with the new IPv4 address
  7. If possible, power off the VM.
  8. Run the following command
sudo gnt-instance change-group --to=<row> <instance>

Wait it out.

Power on or reboot the VM

Notes

All of the commands that have a Y/N prompt can be forced with a -f. For example the following will spare you the prompt

  gnt-instance remove -f <fqdn>

All commands are actually jobs. If you would rather not wait for the command to finish, --submit will do the trick

  gnt-instance shutdown --submit <fqdn>

API