Portal:Cloud VPS/Admin/VM access
This page describes the types of VMs hosted in cloud-vps, and the process for gaining shell access to each.
As of 2025-08-22, only puppet-managed VMs can be accessed with cumin. VMs of other types will likely present as unreachable or may cause cumin to time out.
Puppet managed VMs
The vast majority of the VMs in cloud-vps are created from standard Debian base images and are managed by puppet. These instances can generally be identified by their base image, which will have an unadorned name like "debian-12.0-bookworm" or "debian-12.0-bookworm (replaced xxxx-xx-xx)".
ssh using ldap keys
If all is well with the instance, sssd and pam provide ssh access using keys stored in a user's LDAP account. This form of access is detailed at length in the user documentation.
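For reference, user-side access typically goes through a project bastion. A minimal ~/.ssh/config sketch — the bastion hostname and instance naming pattern here are assumptions, so check the user documentation for the current values:

```
# Assumed bastion hostname and instance domain; see the user docs.
Host *.eqiad1.wikimedia.cloud
    ProxyJump bastion.wmcloud.org
    User your-shell-username
```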
ssh using root keys
In addition to pam-configured ssh keys, cloud-vps roots can also log in with a key that puppet installs in /etc/ssh/userkeys/root. These keys come from modules/profile/templates/wmcs/instance/root-authorized-keys.erb in the operations/puppet repo.
virsh console access
As a last resort, the cloud-init script that runs during initial VM boot starts a logged-in local console session on all VMs. To access this console, first determine which hypervisor is hosting the VM (either via the instance overview in Horizon or with openstack server show <id>), then ssh directly to that hypervisor and use virsh console:
andrew@cloudcontrol1011:~$ sudo wmcs-openstack server show 3b4d17a7-8b66-4246-8e38-b86b7c39c471 | grep hypervisor
| OS-EXT-SRV-ATTR:hypervisor_hostname | cloudvirt1063.eqiad.wmnet
andrew@cloudvirt1063:~$ sudo virsh console 3b4d17a7-8b66-4246-8e38-b86b7c39c471
Connected to domain 'i-000eb8d2'
Escape character is ^] (Ctrl + ])
root@util-abogott-trixie:~# hostname -f
util-abogott-trixie.testlabs.eqiad1.wikimedia.cloud
root@util-abogott-trixie:~#
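The hypervisor lookup above is easy to script. A sketch that extracts just the hostname field, assuming the table layout shown in the transcript:

```shell
# Sample line in the format printed by `wmcs-openstack server show | grep hypervisor`
line='| OS-EXT-SRV-ATTR:hypervisor_hostname | cloudvirt1063.eqiad.wmnet |'
# Split on '|' and trim whitespace from the value column
echo "$line" | awk -F'|' '{gsub(/ /, "", $3); print $3}'
```

Note that openstack server show also supports -f value -c OS-EXT-SRV-ATTR:hypervisor_hostname to print the bare field directly, without any grep/awk postprocessing.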
Unmanaged VMs
Select projects have access to unpuppetized base images, or may have 'bring your own' images. For the most part these images are supported on an 'at your own risk' basis, and admins may or may not be able to gain console access.
private key ssh
It is very unlikely that these VMs will have pam or ldap integration configured. Typically access is gained by injecting an ssh key at boot time with cloud-init; you can discover the name of this key from nova, but of course it is up to the creating user to keep track of the private key.
andrew@cloudcontrol1011:~$ sudo wmcs-openstack server show 99fb70b3-7371-40a0-b7e2-f47cec9d3ded | grep key_name
| key_name | mykey
virsh console access
Depending on specifics of the base image, it may or may not be possible to access the VM using 'virsh console' as described above. The console may produce a login prompt, in which case a default user/password may be configured depending on the base image used.
Trove database instances
Trove database instances can only be found in the 'Trove' project, and are based off of a special-purpose Ubuntu base image. If everything is working well, users will only interact with these VMs via trove APIs, and the VMs themselves are largely disposable as the data itself is stored on a cinder volume. The Trove VM serves largely to run docker containers which contain the actual database server software.
root key access
For emergencies, each trove VM is configured with a root key named trove-debug. The corresponding private key is in the private puppet repo, modules/secret/secrets/ssh/wmcs/trove/openstack-trove-debug-key-eqiad1.
Note that access via ssh will also require messing with the host's security groups to provide ssh access; this is closed by default.
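Taken together, emergency access might look like the following sketch. The security group name, instance IP, source IP, and key path are placeholders; the rule-create flags are from the standard openstack CLI:

```shell
# Allow ssh from your source address on the instance's security group
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip <your-ip>/32 <security-group>
# Then connect with the trove-debug private key from the private puppet repo
ssh -i <path-to-trove-debug-key> root@<instance-ip>
```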
Octavia amphorae
An Octavia load balancer is based on a VM running haproxy. In Octavia's terminology, this load-balancing server is called an 'amphora'. Amphorae are only ever found in the 'octavia' project and they are based off of a custom amphora-specific base image.
root key access
As with Trove workers, these servers are created with a default public key, in this case named amphorakey. The private key can be found in the private puppet repo but is pre-installed on each cloudcontrol as /etc/octavia/certs/id_rsa, so the standard process for accessing an amphora is to ssh from a cloudcontrol:
andrew@cloudcontrol1011:~$ sudo ssh -i /etc/octavia/certs/id_rsa ubuntu@172.16.24.166
Welcome to Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-139-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
Last login: Thu Jun 5 22:11:57 2025 from 172.20.1.25
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@amphora-b78a77df-0cf9-472f-a438-1121d10e9978:~$
Magnum workers
Magnum also uses worker VMs based off of a custom ready-made base image. Unlike with Trove or Octavia, however, these VMs are hosted in the same project as the requested magnum cluster. This means they're more likely to show up in cumin queries and look like broken VMs.
Currently (2025-08-22) these base images are based off of Fedora CoreOS. In future releases (likely within a year) they will convert to Ubuntu as we migrate to the cluster-api helm driver.
private ssh key
When creating a new magnum cluster, the user has the option of injecting a public key into each worker. If they do so, those workers can be accessed with that key, much like any other unmanaged VM.
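For example, the key is injected at cluster-creation time via the coe CLI; the template, keypair, and cluster names here are hypothetical:

```shell
# --keypair names a nova keypair whose public half is injected into each worker
openstack coe cluster create --cluster-template <template> \
    --keypair mykey my-cluster
```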
virsh console access
Magnum workers are not logged into their console by default, so virsh console is not useful for accessing these servers.
kubectl debug
Magnum worker nodes can be reached via kubectl if you have the k8s config with access keys. That process is described here:
https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#node-shell-session
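In short, per those upstream docs, a node shell session looks like this (node name hypothetical):

```shell
# Starts a debugging pod on the node; the node's root filesystem
# is mounted at /host inside the pod.
kubectl debug node/my-worker -it --image=ubuntu
```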