Portal:Cloud VPS/Admin/Maintenance
This page is for routine maintenance tasks performed by Cloud Services operators. For troubleshooting immediate issues, see the troubleshooting page.
Toolforge has a similar page for its own maintenance.
cloudvirt reboot checklist
- Notify users on cloud-announce -- one week in advance if possible
- 'schedule downtime for this host and all services' in icinga
- 'schedule downtime for this host and all services' for checker.tools.wmflabs.org in icinga
- If VMs will be affected:
- Collect a list of nodes and their current state on the cloudvirt in question:
openstack server list --all-tenants --host <hostname>
- depool all affected tool exec nodes
- failover tools nodes as needed https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/Admin#Failover
- failover nova-proxy as needed
- Kubernetes nodes should generally be fine as long as only one cloudvirt is rebooted at a time
- Reboot host
- Wait for host to reboot, verify ssh access still works
- If VMs were affected:
- VMs should boot after the reboot. If nova doesn't do it for whatever reason, it's a bug. You can use the list collected in the previous step to bring each VM back to its previous state (see the sketch after this checklist). Wait 5-10 seconds after each restart to avoid flooding the Nova control plane.
- repool all affected exec nodes
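A minimal sketch of that capture-and-restart flow, assuming the instances were ACTIVE beforehand (the file name is illustrative):
# Before the reboot: save each VM's ID and state
openstack server list --all-tenants --host <hostname> -f value -c ID -c Status > /root/vms-<hostname>.txt
# After the reboot: start anything that was ACTIVE and hasn't come back on its own
# ('server start' on an already-ACTIVE instance just returns an error you can ignore)
while read -r id status; do
    [ "$status" = "ACTIVE" ] && { openstack server start "$id"; sleep 10; }
done < /root/vms-<hostname>.txt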
Live Migrating Virtual Machines
All hypervisors using the Puppet role `role::wmcs::openstack::eqiad1::virt_ceph` support live migration. Note that the libvirt CPU architecture and capabilities must match between the source and destination hypervisors.
Live migration command:
$ openstack server migrate <virtual machine uuid> --live <target hypervisor>
After running the command Nova will begin the migration process. The Nova compute log files `/var/log/nova/nova-compute.log` on the source and destination hypervisors can be used to follow along or troubleshoot the migration process.
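You can also watch from the API side with standard openstackclient output filters; the status reads MIGRATING while the migration runs, and once it is ACTIVE again the host field should name the target hypervisor:
$ openstack server show <virtual machine uuid> -c status -c OS-EXT-SRV-ATTR:host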
Openstack Upgrade test plan
Upgrading openstack mostly consists of updating config files, changing openstack::version in hiera, and then running puppet a bunch of times. In theory each individual openstack service is compatible with the n+1 and n-1 versions, so the components don't have to be upgraded in a particular order.
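The hiera side of that is a one-line change (the version string here is just an illustration):
openstack::version: 'victoria'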
That said, we have a test cluster, so it's best to run a test upgrade there before rolling things out to prod. Here are things to test (the read-only checks can be scripted; see the example after the list):
- Keystone/Ldap
- Openstack service list
- Openstack endpoint list
- Create new account via wikitech
- Set up 2fa for new account
- Verify new user can log in on wikitech
- Create new project via wikitech
- Set keystone cmdline auth to new user
- Verify new user has no access to new project
- Keystone commandline roles
- Assign a role to the new account
- Remove role from new account
- Wikitech project management
- Add new user to a project
- Promote user to projectadmin
- Verify new user can log in on Horizon
- Verify new user can view instance page for new project
- Demote user to normal project member
- Nova
- Instance creation and boot with different glance images
- Verify dns entry created
- Verify ldap record created
- ssh access
- check puppet run output
- Assignment/Removal of floating IPs
- Security groups
- Remove instance from ssh security group, verify ssh is blocked
- Replace instance in ssh security group, verify ssh works again
- Add/remove source group and verify that networking between existing and new instances in the same project changes accordingly
- Instance deletion
- Verify dns entry cleaned up
- Verify ldap record removed
- Glance
- Openstack image list
- Create new image
- Test instance creation with new image
- Designate
- Assign/remove dns entry from Horizon
- Dynamic Proxy
- Create/delete proxy
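The read-only checks in the list above can be smoke-tested from a cloudcontrol node of the test cluster before clicking through the rest, for example:
root@cloudcontrol2001-dev:~# source novaenv.sh
root@cloudcontrol2001-dev:~# openstack service list
root@cloudcontrol2001-dev:~# openstack endpoint list
root@cloudcontrol2001-dev:~# openstack image list
root@cloudcontrol2001-dev:~# openstack server list --all-projects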
Admin/Maintenance scripts
Information on our admin/maintenance scripts.
credentials
Puppet installs novaenv.sh on the openstack controller so that you can run nova and glance shell commands without having to add a thousand args to the commandline:
root@cloudcontrol1003:~# source novaenv.sh
or:
user@cloudcontrol1003:~$ source <(sudo cat /root/novaenv.sh)
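Either way, a quick read-only call such as 'openstack token issue' confirms that the credentials loaded correctly:
user@cloudcontrol1003:~$ openstack token issue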
wmcs-openstack
You can run the openstack command under sudo using our custom wrapper, which simply loads credentials on the fly:
user@cloudcontrol1003:~$ sudo wmcs-openstack server list --all-projects
[...]
migration scripts
A collection of scripts to move VM instances around.
wmcs-cold-migrate
The wmcs-cold-migrate tool will shut down an instance, copy it to the specified target host, and boot it on the new host.
root@cloudcontrol:~# nova list --all-tenants --host <source>
root@cloudcontrol:~# wmcs-cold-migrate <args> 7d4a9768-c301-4e95-8bb9-d5aa70e94a64 <destination>
This can take quite a while, so run it in a 'screen' session. At the end of the migration, a prompt will ask for confirmation before the final cleanup (including deletion of the origin disk).
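For example (detach with Ctrl-a d, reattach with screen -r):
root@cloudcontrol:~# screen -S cold-migrate
root@cloudcontrol:~# wmcs-cold-migrate <args> 7d4a9768-c301-4e95-8bb9-d5aa70e94a64 <destination>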
This can be used in other deployments. This example shows how to use it in the codfw1dev deployment:
root@cloudcontrol2001-dev:~# wmcs-cold-migrate --datacenter codfw --nova-db-server localhost --nova-db nova 85d106d7-5c4e-4955-871d-a88dfb5d2a1e cloudvirt2001-dev
To move VMs in the old main region:
root@cloudcontrol1004:~# wmcs-cold-migrate --region eqiad --nova-db nova 5a41a2b1-5bdd-4d52-ba1c-72273b4fe6f3 cloudvirt1005
Check --help for details of the input arguments and their default values.
wmcs-cold-nova-migrate
TODO: fill me. Apparently not working.
wmcs-live-migrate
TODO: fill me.
wmcs-region-migrate
See Neutron migration.
wmcs-region-migrate-security-groups
See Neutron migration.
wmcs-region-migrate-quotas
See Neutron migration.
novastats
A collection of scripts to fetch info from nova.
wmcs-novastats-imagestats
The wmcs-novastats-imagestats script can be run periodically to list which images are currently in use; it can also answer the question 'what instances use image xxx?' As obsolete images are abandoned, they can be deleted from glance to save disk space.
wmcs-novastats-alltrusty
TODO: fill me.
wmcs-novastats-flavorreport
TODO: fill me.
wmcs-novastats-puppetleaks
TODO: fill me.
wmcs-novastats-capacity
TODO: fill me.
wmcs-novastats-dnsleaks
See detecting leaked DNS records.
wmcs-novastats-proxyleaks
TODO: fill me.
DNS
Admin scripts related to DNS operations.
wmcs-makedomain
Used to create subdomains under the wmflabs.org domain or any other primary domain. The base domain belongs to the wmflabsdotorg project (or some other project), so this script creates the subdomain there and then transfers ownership to the desired project.
We need this because designate doesn't allow creating subdomains in a given project if the superdomain belongs to a different project.
root@cloudcontrol1004:~# wmcs-makedomain --help
usage: wmcs-makedomain [-h] [--designate-user DESIGNATE_USER]
[--designate-pass DESIGNATE_PASS]
[--keystone-url KEYSTONE_URL] --project PROJECT
[--domain DOMAIN] [--delete] [--all]
[--orig-project ORIG_PROJECT]
Create a subdomain and transfer ownership
optional arguments:
-h, --help show this help message and exit
--designate-user DESIGNATE_USER
username for nova auth
--designate-pass DESIGNATE_PASS
password for nova auth
--keystone-url KEYSTONE_URL
url for keystone auth and catalog
--project PROJECT project for domain creation
--domain DOMAIN domain to create
--delete delete domain rather than create
--all with --delete, delete all domains in a project
--orig-project ORIG_PROJECT
the project that is oiginally owner of the superdomain
in which the subdomain is being created. Typical
values are either wmflabsdotorg or admin. Default:
wmflabsdotorg
root@cloudcontrol1004:~# source novaenv.sh
root@cloudcontrol1004:~# wmcs-makedomain --project traffic --domain traffic.wmflabs.org
root@cloudcontrol1004:~# wmcs-makedomain --project toolsbeta --domain toolsbeta.eqiad1.wikimedia.cloud --orig-project admin
root@cloudcontrol1003:~# wmcs-makedomain --project huggle --domain huggle.wmcloud.org --orig-project cloudinfra
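Deletion uses the same script with the --delete flag, for example:
root@cloudcontrol1004:~# wmcs-makedomain --project traffic --domain traffic.wmflabs.org --delete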
This script is what we use to handle this general-user FAQ entry: Help:Horizon_FAQ#Can_I_create_a_new_DNS_domain/zone_for_my_project,_or_records_under_the_wmflabs.org_domain?
wmcs-wikireplica-dns
This script is used to create and update FQDN and domain names related to wikireplicas and other services.
See Portal:Data_Services/Admin/Wiki_Replica_DNS for more details.
TODO: should we simply merge wmcs-wikireplica-dns docs here?
wmcs-populate-domains
When a new designate/pdns node is set up, designate does not pre-emptively populate domains in pdns; it only updates a domain when that domain is edited.
This script enumerates domains (aka 'zones' in designate) and creates a dummy record in each. It then cleans up those domains. That seeming no-op prompts designate to create and sync each domain with pdns.
Like most wmcs admin scripts, this should be run on a cloudcontrol node with standard novaadmin environment settings.
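Per zone, what the script automates is roughly equivalent to creating and removing a throwaway record by hand with the standard designate CLI (the record name and IP below are placeholders):
root@cloudcontrol1003:~# openstack recordset create <zone> dummy.<zone> --type A --record 192.0.2.1
root@cloudcontrol1003:~# openstack recordset delete <zone> dummy.<zone>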
others
Other useful scripts.
wmcs-spreadcheck
This script checks whether different instance types are spread across enough different cloudvirt servers. It is useful for a project with many similar VM instances that should run on separate cloudvirts to limit the impact of a hypervisor failure.
It is usually set up as an NRPE check for icinga. The script requires a yaml configuration file like this:
project: tools
classifier:
  bastion-: bastion
  checker-: checker
  elastic-: elastic
  flannel-etcd-: flannel-etcd
  k8s-etcd-: k8s-etcd
  mail: mail
  paws-worker-: paws
  prometheus-: prometheus
  proxy-: proxy
  redis-: redis
  services-: services
  sgebastion-: sgebastion
  sgeexec-: sgeexec
  sgegrid-master: sgemaster
  sgegrid-shadow: sgemaster
  sgewebgrid-generic-: sgewebgrid-generic
  sgewebgrid-lighttpd-: sgewebgrid-lighttpd
  static-: static
  worker-: worker
And it is usually used like this:
root@cloudcontrol1003:~# wmcs-spreadcheck --config /etc/wmcs-spreadcheck-tools.yaml
wmcs-vm-extra-specs
When a virtual machine is created the requested flavor data is copied to the instance, and any future updates to the flavor are ignored. This script will connect to the Nova database and directly modify extra specs.
$ wmcs-vm-extra-specs --help
usage: wmcs-vm-extra-specs [-h] [--nova-db-server NOVA_DB_SERVER]
                           [--nova-db NOVA_DB]
                           [--mysql-password MYSQL_PASSWORD]
                           uuid
                           {quota:disk_read_bytes_sec,quota:disk_read_iops_sec,quota:disk_total_bytes_sec,quota:disk_total_iops_sec,quota:disk_write_bytes_sec,quota:disk_write_iops_sec}
                           spec_value
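For example, to cap an instance's read IOPS (the uuid and value are illustrative):
$ wmcs-vm-extra-specs 7d4a9768-c301-4e95-8bb9-d5aa70e94a64 quota:disk_read_iops_sec 5000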
wmcs-instancepurge
Some projects have limited VM lifespan. For those projects, a systemd timer periodically runs the wmcs-instancepurge script. That script checks the age of each VM in the project and deletes VMs over a certain age (after emailing warnings for a few days first.)
To expose a new project to lifespan limits, add a section to the profile::openstack::eqiad1::purge_projects key in puppet/hiera/common/profile/openstack/eqiad1.yaml. Be sure to include documentation about why the project is being auto-purged.
profile::openstack::eqiad1::purge_projects:
  - project: sre-sandbox
    days-to-nag: 10
    days-to-delete: 15