Wikimedia Cloud Services team/EnhancementProposals/Production Cloud services relationship

WARNING: the final version of this document lives at Cross-Realm_traffic_guidelines. It is kept here for historical reasons.

This page describes the relationship between the production and cloud networks and realms, along with policies, guidelines and best practices for planning new services and for reworking old ones.

The production network (production services) and the CloudVPS network are different realms in terms of users, scope, security, access, maintenance, etc. Adding new connections or bridges between the two realms is undesirable (and will be rejected) unless it follows the guidelines presented in this document.

However, the situation is not that simple. As of this writing, the hardware that runs the CloudVPS service itself (openstack components, etc.) lives in the production network. These cloud hardware servers are managed like any other production service (ssh, puppet, monitoring, cumin, install, etc.).

Also, there may be several currently-running services that don't follow the patterns presented here. These will be reviewed on a case-by-case basis.

General case

A service to be offered only to CloudVPS VM instances should run inside CloudVPS.

Network flows originate and terminate inside the CloudVPS virtual network. There are no cross-realm connections.

The upstream openstack project refers to this kind of use case as east-west network traffic.

requirements

  • Nothing special other than the usual: capacity on CloudVPS, etc.

example: toolsdb

The toolsdb service previously ran on a hardware server in the production private network. Network connections from clients to the database crossed realms.

It was eventually moved inside CloudVPS. A VM instance was created to offer the exact same service, without any need to cross realms.

example: cloudinfra

The cloudinfra openstack project was created to host several cloud-specific services for VMs, such as shared NTP, MX and puppetmaster services.


Generic network access cloud --> prod

A CloudVPS VM instance uses the cloud edge network to contact a production service.

The network flows through both the production network backbone (core routers, etc.) and the CloudVPS edge network (neutron, cloudsw, etc.), and the client is seen as any other internet peer from the production service's point of view. Therefore, the two realms are not actually bridged directly.

The upstream openstack project refers to this kind of use case as north-south network traffic.

requirements

This use case must meet the following:

  • The production service uses a public IPv4 address.
  • The client CloudVPS VM instance uses either the general NAT egress address or a floating IP. The private VM address doesn't reach the production service.
  • The production service might be meant only for CloudVPS. Since it uses a public IPv4 address, firewalling should exist to allow incoming connections only from CloudVPS (from the general NAT egress address); see the sketch after this list.
  • It has been evaluated whether the service could be moved inside the cloud instead (i.e., the general case).
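
As an illustration of the firewalling requirement above, here is a minimal sketch (Python, standard library only) of the source-address check such a policy encodes. The NAT egress address shown is a hypothetical placeholder; in practice the rule lives in the production host's firewall configuration (e.g. ferm/iptables), not in application code.

  import ipaddress

  # Hypothetical CloudVPS general NAT egress address, for illustration only.
  CLOUD_NAT_EGRESS = ipaddress.ip_network("185.15.56.1/32")

  def connection_allowed(client_ip: str) -> bool:
      """Accept a connection only if it comes from the CloudVPS NAT egress
      address, the only source address a CloudVPS client should present."""
      return ipaddress.ip_address(client_ip) in CLOUD_NAT_EGRESS

  assert connection_allowed("185.15.56.1")       # CloudVPS client behind NAT
  assert not connection_allowed("203.0.113.7")   # arbitrary internet peer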

Additional considerations:

  • the production service is likely behind LVS, which is the standard way public production services are offered to the internet.

example: dumps

The dumps service uses a public IPv4 address. CloudVPS client VM instances accessing the dumps service do so as any other internet client.

The dumps NFS service, however, is not offered to the whole internet, but only to CloudVPS clients (and Analytics systems). This is enforced using firewall policies.

The dumps service doesn't need to know the client VM private address, so VMs use the general cloud egress NAT to access the service (TODO: we hope this sentence will be true in the near future, see phab:T272397).

example: wikis

CloudVPS client VM instances access wikis like any other internet peer.

(TODO: we hope this sentence will be true in the future, but it is not as of this writing, since VMs currently skip the general egress NAT; see News/CloudVPS_NAT_wikis).


Generic network access prod --> cloud

This case covers a CloudVPS VM offering a service that is meant to be consumed from the production network.

The network connections flow through both the production network backbone (core routers, etc.) and the CloudVPS edge network (neutron, cloudsw, etc.), and the client is seen as any other internet peer from the CloudVPS VM instance's point of view. Therefore, the two realms are not actually bridged directly.

The upstream openstack project refers to this kind of use case as north-south network traffic.

requirements

This use case must meet the following:

  • if the service is HTTP/HTTPS, then the VM instance uses a web proxy created in horizon.
  • if the service is not HTTP/HTTPS, then the VM instance uses a floating IP address.
  • if only production is intended to consume the service, neutron security groups should be used to firewall the access (see the sketch below).

Both the production service and CloudVPS use public addresses, so internal private addresses aren't seen on either side of the connection.
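
For the non-HTTP case, the sketch below shows how such a restriction might be expressed with the openstacksdk Python library: a neutron security group that admits only a production source range on the service port. The cloud entry name, source prefix and port are hypothetical placeholders, not real values.

  import openstack

  # Connect using credentials from clouds.yaml; "cloudvps" is a
  # hypothetical cloud entry name, for illustration only.
  conn = openstack.connect(cloud="cloudvps")

  # Hypothetical production source prefix; the real one would come from
  # the production network documentation.
  PROD_SOURCE = "198.51.100.0/24"

  # Security group admitting only production clients on the service port.
  sg = conn.network.create_security_group(
      name="prod-only-service",
      description="Allow only production clients to reach this service",
  )
  conn.network.create_security_group_rule(
      security_group_id=sg.id,
      direction="ingress",
      ethertype="IPv4",
      protocol="tcp",
      port_range_min=8080,  # hypothetical service port
      port_range_max=8080,
      remote_ip_prefix=PROD_SOURCE,
  )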

example: puppet catalog compiler

The puppet catalog compiler (PCC) is a service that runs natively in CloudVPS. It is consumed by some production services (jenkins, gerrit, etc.). It uses a web proxy created using horizon (https://puppet-compiler.wmflabs.org/).
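
Since the web proxy URL is public, a production client consumes PCC like any other internet endpoint. A minimal sketch, assuming the requests library is available:

  import requests

  # The web proxy is a public endpoint, so the production client reaches
  # it like any other internet peer; no realm bridging is involved.
  response = requests.get("https://puppet-compiler.wmflabs.org/", timeout=10)
  print(response.status_code)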

example: SAL

The Server Admin Log tool runs in Toolforge and is widely used by production. There is, however, nothing special in this tool to restrict access to production.


Using isolation mechanisms

This case covers an architecture in which a service is offered to/from the cloud realm using hardware servers instead of software-defined (virtual) components. The hardware components are part of the cloud realm.

The most prominent case of this situation is the openstack services and hosts themselves. We create one or more logical/physical (software or hardware) isolation layers to prevent unwanted escalation from one realm to the other.

Most of the other models in this document cover either north-south native services or extremely simple east-west native services that can easily run inside the cloud. However, we often find ourselves in a dead end when trying to adapt east-west native services that cannot run inside the cloud to a north-south approach covered by other models in this document. In general, we find it challenging to adopt the other models when a given service involves a combination of:

  • heavy data replication or storage requirements (in the TB range)
  • extremely granular networking access (such as per-VM file locking or similar)
  • extremely granular policing (such as firewalling, rate-limiting or similar)
  • chicken-egg problems (running a mission-critical cloud component inside the cloud itself)
  • other complex architectures that require dedicated hardware.

In short, this case covers a way to have logical/physical components that are dedicated to CloudVPS, are considered part of the cloud realm, and therefore aren't natively part of the production realm.

The upstream openstack project refers to this kind of use case by a variety of names, depending on the actual implementation details:

  • provider hardware, hardware in service of openstack software-defined virtual machine clients.
  • external network, external to the openstack software-defined realm. Also mentioned in some documents as provider networks.
  • basically, an east-west service running on hardware, without being virtual/software-defined inside the cloud itself.

requirements

This use case must meet the following:

  • there is no reasonable way the service can be covered by the other models described in this document.
  • the hardware servers must be double-homed, meaning they have two NIC ports: one for the control plane (ssh, install, monitoring, etc.) and the other for the data plane (where the actual CloudVPS traffic flows).
  • the control and data planes are separated by an isolation barrier that has been identified as valid, secure and strong enough to meet the security demands.
  • if the situation requires it, there is a dedicated physical VLAN/subnet defined in the switch devices to host the affected services.
  • if there is a dedicated VLAN/subnet, it has L3 addressing that is also dedicated to CloudVPS and is clearly separated from other production realm networks, i.e., using 172.16.x.x addressing (see the sketch after this list).
  • if there is a dedicated VLAN/subnet, all the network flows are subject to firewalling and network policing for granular access control.
  • CloudVPS VM clients accessing the services may or may not use NAT to do so.
  • in case the service needs some kind of backend data connection to a production service, such a connection will use the normal network border crossing, with egress NAT on the cloud realm side and standard service mechanisms (like LVS) on the production realm side.
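
To make the dedicated-addressing requirement concrete, the sketch below (Python, standard library only) checks the invariant it implies: a cloud-dedicated subnet sits inside 172.16.0.0/12 and overlaps no production network. The production prefix listed is an illustrative placeholder.

  import ipaddress

  # Cloud realm space per the guideline above (172.16.x.x addressing).
  CLOUD_REALM = ipaddress.ip_network("172.16.0.0/12")

  # Illustrative production private range; the authoritative list lives
  # in the production network documentation, not in this sketch.
  PRODUCTION_NETWORKS = [ipaddress.ip_network("10.0.0.0/8")]

  def valid_cloud_subnet(prefix: str) -> bool:
      """A dedicated cloud VLAN subnet must sit inside the cloud realm
      space and must not overlap any production realm network."""
      net = ipaddress.ip_network(prefix)
      return net.subnet_of(CLOUD_REALM) and not any(
          net.overlaps(prod) for prod in PRODUCTION_NETWORKS
      )

  assert valid_cloud_subnet("172.16.128.0/24")    # dedicated cloud subnet
  assert not valid_cloud_subnet("10.64.20.0/24")  # production private subnet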

Examples of isolation layers (a sketch of the first follows the list):

  • a linux network namespace
  • a docker container
  • a KVM virtual machine
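
As a toy illustration of the first mechanism, the sketch below shells out to iproute2 from Python to create a network namespace and run a command inside it. Processes in the namespace see only the interfaces explicitly added to it, which is (loosely) the property neutron relies on to keep the realms apart. The namespace name is arbitrary, and root privileges are required.

  import subprocess

  def run_isolated(netns: str, command: list[str]) -> None:
      """Create a Linux network namespace, run a command inside it,
      then tear the namespace down. Requires root and iproute2."""
      subprocess.run(["ip", "netns", "add", netns], check=True)
      try:
          # Inside the namespace, only interfaces explicitly moved into
          # it are visible; by default just a loopback device (down).
          subprocess.run(["ip", "netns", "exec", netns] + command, check=True)
      finally:
          subprocess.run(["ip", "netns", "delete", netns], check=True)

  run_isolated("demo-realm", ["ip", "link", "show"])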

example: openstack native services

Openstack native services use several layering mechanisms to isolate the two realms, for example:

  • neutron: linux network namespaces + vlans. For a VM to cross realms, it would need to escape the vlan and/or the linux network namespace in which the neutron virtual router lives.
  • nova: kvm + vlans. For a VM to cross realms, it would need to escape the vlan isolation and/or the kvm hypervisor.


See also