Portal:Cloud VPS/Admin/Keepalived

This page contains information on how to introduce L3 network high availability for arbitrary services running on virtual machines in Cloud VPS by using keepalived.

The documentation here is based on original notes by Jason Hedden.

Puppet manifest

In the Puppet profile you want to be HA-enabled, declare the keepalived class. You should have at least three hiera keys for configuring the service:

class profile::myprofile (
    Array[Stdlib::Fqdn] $keepalived_vips     = lookup('profile::myprofile::keepalived::vips',     {default_value => ['localhost']}),
    Array[Stdlib::Fqdn] $keepalived_peers    = lookup('profile::myprofile::keepalived::peers',    {default_value => ['localhost']}),
    String              $keepalived_password = lookup('profile::myprofile::keepalived::password', {default_value => 'notarealpassword'}),
) {
    class { 'someawesomeservice': }

    class { 'keepalived':
        auth_pass => $keepalived_password,
        # all cluster members except this host itself
        peers     => delete($keepalived_peers, $::fqdn),
        # resolve each VIP FQDN to its IPv4 address
        vips      => $keepalived_vips.map |$host| { ipresolve($host, 4) },
    }
}
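
Once Puppet has applied this profile on all the peers, keepalived should elect one VM as the VRRP master and attach the VIP(s) to one of its network interfaces. A quick sanity check on each VM (a sketch; the hostname is a placeholder) looks like:

user@vm1:~$ sudo systemctl status keepalived
[..]
user@vm1:~$ ip -4 address show
[..]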

The profile::myprofile::keepalived::vips key holds an array of FQDNs which are going to be made HA by keepalived. In profile::myprofile::keepalived::peers you include all the servers in the keepalived cluster, i.e., all the VMs that will share the VIPs. Finally, profile::myprofile::keepalived::password should contain a random string that enables authentication in the cluster, and should probably live in the labs/private.git repo.

Example hiera config:

profile::wmcs::paws::keepalived::peers:
- paws-k8s-haproxy-1.paws.eqiad.wmflabs
- paws-k8s-haproxy-2.paws.eqiad.wmflabs
profile::wmcs::paws::keepalived::vips:
- k8s.svc.paws.eqiad1.wikimedia.cloud
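
The password key is a secret and should not live in the public hiera; a hypothetical entry in the labs/private.git hierarchy (the value below is just a placeholder) would look like:

profile::wmcs::paws::keepalived::password: some-long-random-string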

Neutron configuration

This documentation assumes the VIP is a VM instance address (i.e., 172.16.x.x). To future-proof the setup, it is best practice to pre-allocate the VIP address in Neutron, so the address is reserved and never allocated to anything else by mistake:

user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id=myproject port create --network 7425e328-560c-4f00-8e99-706f3fb90bb4 my-vip-port
+-----------------------+-----------------------------------------------------------------------------+
| Field                 | Value                                                                       |
+-----------------------+-----------------------------------------------------------------------------+
[..]
| fixed_ips             | ip_address='172.16.1.171', subnet_id='a69bdfad-d7d2-4cfa-8231-3d6d3e0074c9' |
| name                  | my-vip-port                                                                 |
| network_id            | 7425e328-560c-4f00-8e99-706f3fb90bb4                                        |
| port_security_enabled | True                                                                        |
| project_id            | myproject                                                                   |
| status                | DOWN                                                                        |
+-----------------------+-----------------------------------------------------------------------------+

Note that an IP address was allocated, in this example 172.16.1.171.
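
The reservation can be reviewed at any time with port show, reusing the project and port name from above:

user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id=myproject port show my-vip-port
[..]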

On the actual Neutron ports of the VM instances, allow traffic using that allocated address:

user@cloudcontrol1004:~$ sudo wmcs-openstack port set --allowed-address ip-address=172.16.1.171 $PORT_UUID_VM1
[..]
user@cloudcontrol1004:~$ sudo wmcs-openstack port set --allowed-address ip-address=172.16.1.171 $PORT_UUID_VM2
[..]
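
To double-check that the allowed address was recorded on each port, the allowed_address_pairs field can be inspected (a sketch, reusing the same port UUID variables):

user@cloudcontrol1004:~$ sudo wmcs-openstack port show $PORT_UUID_VM1 -c allowed_address_pairs
[..]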

The port UUID for each VM can be found by running sudo wmcs-openstack --os-project-id myproject port list and searching for the VM IP address as reported by sudo wmcs-openstack --os-project-id myproject server list.
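
For example (same project name as above; the grep pattern is the VM address you are looking for):

user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id myproject server list
[..]
user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id myproject port list | grep <vm-ip-address>
[..]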

DNS

Make sure the VIP FQDN resolves to the address allocated by Neutron for the port.
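
A quick check (using the example VIP FQDN from the hiera config above) is to resolve the name and verify that the answer matches the address allocated for the Neutron port:

user@cloudcontrol1004:~$ dig +short k8s.svc.paws.eqiad1.wikimedia.cloud
[..]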
