Portal:Cloud VPS/Admin/Host aggregates
Cloud VPS uses host aggregates to assign different hypervisors to different use-cases. For a quick overview of which host is assigned where, run this on a controller node:
user@cloudcontrol1004:~$ for i in $(sudo wmcs-openstack hypervisor list -f value -c ID); do sudo wmcs-openstack hypervisor show $i -c service_host -c aggregates ; done
+--------------+---------------+
| Field        | Value         |
+--------------+---------------+
| aggregates   | ['ceph']      |
| service_host | cloudvirt1021 |
+--------------+---------------+
+--------------+----------------------+
| Field        | Value                |
+--------------+----------------------+
| aggregates   | ['standard', 'ceph'] |
| service_host | cloudvirt1022        |
+--------------+----------------------+
[..]
Sometimes you want to see the aggregates of a host that is no longer online. For that, iterate over the aggregates instead:
user@cloudcontrol1004$ host="cloudvirt1027"; for agg in $(sudo wmcs-openstack aggregate list -c ID -f value); do sudo wmcs-openstack aggregate show -f value -c hosts "$agg" | grep -q "$host" && echo "Host found in aggregate $agg"; done
Host found in aggregate 4
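The loop prints the aggregate's ID; to resolve that to a name (the ID 4 here comes from the output above):
user@cloudcontrol1004:~$ sudo wmcs-openstack aggregate show 4 -f value -c name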
How it Works
In brief: Each aggregate is given one or more properties. If a flavor also has a property assigned, VMs using that flavor can only be scheduled in aggregates with that property. Complete documentation about host aggregates can be found in the upstream documentation.
Here is an example of creating an aggregate to associate with particular flavors:
root@cloudcontrol1003:~# openstack aggregate create --zone nova db
+-------------------+----------------------------+
| Field             | Value                      |
+-------------------+----------------------------+
| availability_zone | nova                       |
| created_at        | 2020-02-27T22:46:59.843845 |
| deleted           | False                      |
| deleted_at        | None                       |
| id                | 5                          |
| name              | db                         |
| updated_at        | None                       |
+-------------------+----------------------------+
root@cloudcontrol1003:~# openstack aggregate set --property db-host='true' db
root@cloudcontrol1003:~# openstack aggregate add host db cloudvirt1019
+-------------------+-----------------------------------------------------------+
| Field             | Value                                                     |
+-------------------+-----------------------------------------------------------+
| availability_zone | nova                                                      |
| created_at        | 2020-02-27T22:46:59.000000                                |
| deleted           | False                                                     |
| deleted_at        | None                                                      |
| hosts             | [u'cloudvirt1019']                                        |
| id                | 5                                                         |
| metadata          | {u'db-host': u'true', u'availability_zone': u'nova'}      |
| name              | db                                                        |
| updated_at        | None                                                      |
+-------------------+-----------------------------------------------------------+
root@cloudcontrol1003:~# openstack flavor set --property aggregate_instance_extra_specs:db-host=true largedb
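To confirm that the flavor now carries the extra spec:
root@cloudcontrol1003:~# openstack flavor show largedb -c properties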
Note: If a flavor has no aggregate property associated with it, the Nova scheduler will schedule VMs of that flavor anywhere, including on spare or out-of-service hosts. To avoid that, Icinga will raise a warning if it finds unassigned flavors.
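You can also check for unassigned flavors by hand. This one-liner is a rough sketch of what the alert looks for; it simply flags flavors whose properties lack the aggregate_instance_extra_specs prefix:
user@cloudcontrol1004:~$ for f in $(sudo wmcs-openstack flavor list --all -f value -c ID); do sudo wmcs-openstack flavor show "$f" -f value -c properties | grep -q aggregate_instance_extra_specs || echo "flavor $f has no aggregate property"; done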
Properties
Properties may be created as needed; these currently include the following:
standard
The 'standard' property is associated with the 'standard' host aggregate. This property predates our adoption of Ceph and is now largely unused.
db-host
Large database server flavors maintained by WMCS staff are tagged with 'db-host'. They are typically scheduled on hardware that's used exclusively for database hosting.
wdqs
The wdqs project has some very large flavors targeted to particular special-purpose hardware. These flavors are tagged with 'wdqs'.
ceph
Hypervisors that are configured to use the Ceph storage cluster for virtual machine storage.
localdisk
Hypervisors that support VMs with local storage (possibly in addition to VMs with Ceph storage). Note that most of our newer hypervisors have little to no storage space for VMs; this property should be assigned only to hypervisors with large available storage.
network-agent
The Neutron network agent in use, either 'linuxbridge' or 'ovs'. This property is planned to be temporary and should be removed once T364457 is complete.
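To see which properties are set on each aggregate in a single listing:
user@cloudcontrol1004:~$ sudo wmcs-openstack aggregate list --long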
Aggregates
Each aggregate can be tagged with zero or more properties. If an aggregate has zero properties, that indicates that nothing should be scheduled on associated hardware (possibly because it is broken or held in reserve).
standard
The 'standard' aggregate is where almost all self-service VMs will be scheduled. It is associated with the 'standard' property and corresponds to the 'filter pool' that predated our use of host aggregates.
db
The 'db' aggregate is associated with the 'db-host' property. Hypervisors in the 'db' aggregate are reserved for WMCS-managed database hosting VMs.
wdqs
Hypervisors in the 'wdqs' aggregate are reserved for use by the wdqs project.
spare
Hypervisors in the 'spare' aggregate are being kept empty in case of an unexpected failure of a 'standard' host. Nothing will be automatically scheduled on these hosts. One spare host is generally enough; typically we reserve the very newest host as a spare because it likely has the highest capacity.
maintenance
Hypervisors in the 'maintenance' aggregate are hosts with known hardware problems. They are either empty or in the process of being drained; nothing new will be scheduled here.
toobusy
Hypervisors in the 'toobusy' aggregate are hosts in active service that have been removed from the 'standard' aggregate to prevent new VMs from being scheduled there. This is useful when the Nova scheduler misfires and overloads a hypervisor.
ceph
Hypervisors that are configured to use the Ceph storage cluster for virtual machine storage.
localdisk
Hypervisors that support creation of VMs with local root drives. We need exactly three servers in this aggregate in order to support VMs that are etcd nodes. This aggregate should be used very sparingly, probably ONLY for etcd nodes. Hypervisors in this aggregate are more complicated to maintain since they cannot be easily drained via live migration.
network-linuxbridge
Hosts using the Linux bridge Neutron agent; membership sets the network-agent=linuxbridge property. This aggregate is planned to be temporary and should be removed once T364457 is complete (see the cleanup example after this list).
network-ovs
Hosts using the Open vSwitch Neutron agent; membership sets the network-agent=ovs property. This aggregate is planned to be temporary and should be removed once T364457 is complete.
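Once T364457 is complete, the temporary property can be dropped from these aggregates with aggregate unset, for example:
root@cloudcontrol1003:~# openstack aggregate unset --property network-agent network-linuxbridge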
Common Actions
To remove a host from the standard aggregate and add it to the maintenance aggregate (thus preventing new VMs being scheduled there):
root@cloudcontrol1003:~# openstack aggregate add host maintenance <hostname>
root@cloudcontrol1003:~# openstack aggregate remove host standard <hostname>
Note: do not remove a host from the 'standard' aggregate without adding it to a different one. Non-scheduling aggregates like 'maintenance' and 'spare' are used to document the status of a host. Also, please log any change in aggregates with !log admin.
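To return a repaired host to service, reverse the operation, again adding before removing so the host is never left without an aggregate:
root@cloudcontrol1003:~# openstack aggregate add host standard <hostname>
root@cloudcontrol1003:~# openstack aggregate remove host maintenance <hostname>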
To inspect which hosts are part of an aggregate:
root@cloudcontrol1005:~# openstack aggregate show ceph
To create a new flavor that uses the standard aggregate:
root@cloudcontrol1003:~# openstack flavor create --ram 127861 --disk 3460 --vcpus 16 <flavorname>
root@cloudcontrol1003:~# openstack flavor set --property aggregate_instance_extra_specs:standard=true <flavorname>
You can also get per-hypervisor information:
user@cloudcontrol1004:~$ sudo wmcs-openstack hypervisor show cloudvirt1031.eqiad.wmnet -f value -c aggregates
['ceph']
But that won't work for offline hypervisors. The up/down state can be listed with:
user@cloudcontrol1004:~$ sudo wmcs-openstack hypervisor list
+----+--------------------------------+-----------------+-------------+-------+
| ID | Hypervisor Hostname            | Hypervisor Type | Host IP     | State |
+----+--------------------------------+-----------------+-------------+-------+
| 1  | cloudvirt1021.eqiad.wmnet      | QEMU            | 10.64.20.40 | up    |
| 2  | cloudvirt1022.eqiad.wmnet      | QEMU            | 10.64.20.41 | up    |
| 3  | cloudvirt1019.eqiad.wmnet      | QEMU            | 10.64.20.38 | up    |
| 4  | cloudvirt1020.eqiad.wmnet      | QEMU            | 10.64.20.39 | up    |
| 5  | cloudvirt1023.eqiad.wmnet      | QEMU            | 10.64.20.42 | up    |
| 6  | cloudvirt1024.eqiad.wmnet      | QEMU            | 10.64.20.43 | up    |
| 7  | cloudvirt1018.eqiad.wmnet      | QEMU            | 10.64.20.34 | up    |
| 8  | cloudvirt1017.eqiad.wmnet      | QEMU            | 10.64.20.33 | up    |
| 9  | cloudvirt1016.eqiad.wmnet      | QEMU            | 10.64.20.32 | up    |
| 11 | cloudvirt1014.eqiad.wmnet      | QEMU            | 10.64.20.30 | up    |
| 17 | cloudvirt1025.eqiad.wmnet      | QEMU            | 10.64.20.49 | down  |
| 18 | cloudvirt1013.eqiad.wmnet      | QEMU            | 10.64.20.29 | up    |
| 19 | cloudvirt1030.eqiad.wmnet      | QEMU            | 10.64.20.54 | down  |
| 20 | cloudvirt1029.eqiad.wmnet      | QEMU            | 10.64.20.53 | down  |
| 21 | cloudvirt1026.eqiad.wmnet      | QEMU            | 10.64.20.50 | down  |
| 22 | cloudvirt1027.eqiad.wmnet      | QEMU            | 10.64.20.51 | down  |
| 23 | cloudvirt1028.eqiad.wmnet      | QEMU            | 10.64.20.52 | down  |
| 24 | cloudvirt1012.eqiad.wmnet      | QEMU            | 10.64.20.28 | up    |
| 34 | cloudvirt-wdqs1001.eqiad.wmnet | QEMU            | 10.64.20.44 | up    |
| 35 | cloudvirt-wdqs1003.eqiad.wmnet | QEMU            | 10.64.20.46 | up    |
| 36 | cloudvirt-wdqs1002.eqiad.wmnet | QEMU            | 10.64.20.45 | up    |
| 40 | cloudvirt1032.eqiad.wmnet      | QEMU            | 10.64.20.74 | up    |
| 43 | cloudvirt1031.eqiad.wmnet      | QEMU            | 10.64.20.73 | up    |
| 46 | cloudvirt1033.eqiad.wmnet      | QEMU            | 10.64.20.75 | up    |
| 49 | cloudvirt1034.eqiad.wmnet      | QEMU            | 10.64.20.76 | up    |
| 52 | cloudvirt1036.eqiad.wmnet      | QEMU            | 10.64.20.78 | up    |
| 55 | cloudvirt1037.eqiad.wmnet      | QEMU            | 10.64.20.79 | up    |
| 58 | cloudvirt1035.eqiad.wmnet      | QEMU            | 10.64.20.77 | up    |
| 61 | cloudvirt1038.eqiad.wmnet      | QEMU            | 10.64.20.80 | up    |
| 64 | cloudvirt1039.eqiad.wmnet      | QEMU            | 10.64.20.81 | up    |
+----+--------------------------------+-----------------+-------------+-------+