Portal:Cloud VPS/Admin/notes/Neutron ideal model
This page holds information on the ideal model for Neutron. This model has two requirements:
- it serves Cloud VPS use cases
- it can work securely with the rest of the production infrastructure
This also affects the Toolforge service, since it is just a Cloud VPS project. Other services, especially Data Services (wiki-replicas, Quarry, PAWS, etc.), might be affected by any changes to the model.
The idea is to reach a common understanding and agreement, and to coordinate next steps in a shared plan.
Requirements and use cases
A good understanding of the use cases and requirements is key to developing a proper solution.
access to instances
Access to VMs (SSH, tcp/22) is unrestricted. Users connect through an SSH bastion server that proxies to the actual instances. The Cloud VPS bastion servers usually live in the bastion Cloud VPS project.
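As an illustration only (the hostnames, internal domain and username below are placeholders, not necessarily the real bastion names), a client-side SSH configuration that proxies through a bastion could look roughly like this:

# ~/.ssh/config (sketch; replace the placeholder names with real values)
Host cloud-bastion
    HostName bastion.example.wmflabs.org    # placeholder bastion FQDN
    User your-shell-username

# Instances in the (placeholder) internal instance domain go through the bastion
Host *.example.eqiad.wmflabs
    User your-shell-username
    ProxyJump cloud-bastion

ProxyJump needs OpenSSH 7.3 or later; older clients can achieve the same with a ProxyCommand using "ssh -W".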
networking connections
Networking connections to take into account when developing the ideal model. It is especially important for us to know the special cases in which VMs connect outbound to "production" hosts other than the OpenStack physical infrastructure.
- LDAP is used for both authentication and authorization in Cloud VPS projects (uid information, ssh public key storage, sudoer rules lists, etc)
- Shinken needs to connect to the nova API to know about running instances.
- Various Toolforge tools (for example openstack-browser) need to be able to query the OpenStack APIs using unprivileged credentials (see the sketch after this list).
- NFS storage is used by a large number of Cloud VPS projects and their instances (especially Toolforge)
- DNS server for instances? (TODO: fill me)
- Anything that the Foundation's production network exposes to the general internet should also be available to Cloud VPS instances
- Toolforge and other projects need to be able to access MediaWiki and RESTBase APIs for production wikis
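As a sketch of the openstack-browser/Shinken style of access mentioned above (the endpoint, account and project names here are placeholders, not real credentials), an unprivileged read-only query against the nova API with openstacksdk could look like this:

import openstack

# Connect with an unprivileged, read-only account (all names/URLs are placeholders)
conn = openstack.connect(
    auth_url='https://openstack.example.wmflabs.org:5000/v3',
    username='observer',
    password='read-only-password',
    project_name='observer-project',
    user_domain_name='Default',
    project_domain_name='Default',
)

# Enumerate instances across all projects, as openstack-browser or Shinken would need to
for server in conn.compute.servers(all_projects=True):
    print(server.name, server.status)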
LVS
Using LVS in Cloud VPS is an interesting use case, but it may be complex to achieve and to secure.
See the main page Neutron with LVS for details.
Ideal model
In the future, an ideal model for Cloud VPS by means of Neutron would have these implementation details and/or restrictions:
- complete addressing separation from production infrastructure (specifically for VM addressing).
- production servers would see any Cloud VPS clients as coming from the internet.
- we have complete control over which connections between prod and Cloud VPS require special handling, and why (i.e., connections to any production services which would not be accessible from random hosts on the internet).
- supporting services (like NFS, wiki-replicas, LDAP, etc) are also handled with care.
- transport networking between Cloud VPS and the prod routers uses public addressing instead of private addressing.
- full IPv6 support in Cloud VPS, meaning that every single VM has its own IPv6 address allocated.
- HTTP/HTTPS connections going outside Cloud VPS (including Toolforge) and reaching prod wikis are clearly identified (by IP addresses, User-Agent, etc.); see the sketch after this list.
- supporting services are self-contained in Cloud VPS to the extent we can (e.g., proxies, databases, etc.).
- our network security policy (filtering) is clear and robust.
- VLAN isolation is clear. This means that Cloud VPS stuff isn't in shared VLANs with other, unrelated services.
- control boxes don't use public addressing (e.g., cloudcontrol1001.wikimedia.org vs cloudcontrol1003.eqiad.wmnet). TODO: should the OpenStack public HTTP endpoints be behind the WMF proxies?
- TODO: Do we want to keep a single network for all instances to be scattered across, or provide per-project networks? Considerations may include IPv6 (see point 6), LVS (see T207554), and ease of separating different projects' internal services.
- Cloud VPS uses BGP to advertise its prefixes to the prod routers, instead of static routes.
- Cloud VPS is distributed across rows (the current L2 failure domains).
- TODO: should Prometheus metrics related to Cloud VPS (both VMs and hardware) be stored on a different server than the prod ones?
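Regarding the point about identifying outbound HTTP/HTTPS traffic, a minimal sketch (the tool name, URL and contact address are made up) of a Toolforge tool identifying itself to the prod wikis via a descriptive User-Agent could be:

import requests

# Descriptive User-Agent so the prod wikis can tell which tool this traffic belongs to
HEADERS = {
    'User-Agent': 'example-tool/0.1 (https://tools.wmflabs.org/example-tool; example-tool.maintainers@tools.wmflabs.org)',
}

resp = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={'action': 'query', 'meta': 'siteinfo', 'format': 'json'},
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()['query']['general']['sitename'])

Identification by source IP would complement this once Cloud VPS egress traffic is no longer mixed with prod addressing.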
Things that are good in the current model
The current model (as implemented by eqiad1, since main is going to be deprecated soon) has several things done the right way:
- Actually using Neutron. Worth noting.
- VMs are using the 172.16.x.x range, which means we no longer use any addresses in the 10.0.0.0/8 range. This is the cloud-instances2-b-eqiad subnet (VLAN 1105) (points #1 and #10 in the ideal model).
Things to improve in the current model
Our current model is already in place, and we see lots of room for improvement before we reach the ideal model.
Some concerns have already been registered in Phabricator tasks:
- T207536 - Move various support services for Cloud VPS currently in prod into their own instances (and subtasks)
- T206261 - Routing RFC1918 private IP addresses to/from WMCS floating IPs
- T209011 - Change routing to ensure that traffic originating from Cloud VPS is seen as non-private IPs by Wikimedia wikis
- T174596 - dmz_cidr only includes some wikimedia public IP ranges, leading to some very strange behaviour
- T207663 - Renumber cloud-instance-transport1-b-eqiad to public IPs
Specifically, some issues we may agree we have and may need to reconsider:
- OpenStack physical hardware servers are in prod subnets (DC rows) for management. It is not clear whether this should be part of the ideal model.
- the dmz_cidr mechanism takes precedence over floating IPs (see below).
dmz_cidr considerations
The dmz_cidr mechanism, which is currently in place in both main and eqiad1, allows VMs to interact with prod servers without any NAT. In fact, according to the current configuration in eqiad1, all of prod can be reached by VMs in this deployment without SNAT.
This wide opening is not ideal, since we don't really know who makes these connections, what they are for, or why they exist.
Also, this mechanism takes precedence over floating IPs. If we already have a floating IP for a VM (and we can identify that VM by its public IP), why skip the NAT then?
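To illustrate why the exemption wins (a simplified, hand-written sketch, not the exact rules generated on the network host; all addresses are illustrative), the NAT rules conceptually look like this, and the first matching rule wins:

# 1) dmz_cidr exemption: traffic from VMs to the listed prod ranges is ACCEPTed in the
#    nat table, i.e. it leaves with the VM's real 172.16.x.x address (no SNAT)
iptables -t nat -A POSTROUTING -s 172.16.0.0/21 -d 10.0.0.0/8 -j ACCEPT
# 2) floating IP SNAT for a specific VM: only reached if rule 1 did not match
iptables -t nat -A POSTROUTING -s 172.16.1.23/32 -j SNAT --to-source 198.51.100.10
# 3) default: everything else is NATed behind the shared routing_source_ip
iptables -t nat -A POSTROUTING -s 172.16.0.0/21 -j SNAT --to-source 198.51.100.1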
supporting services
The list of supporting services is neither clear nor complete.
- NFS servers (TODO: add IPs and perhaps some other information)
- labstore1001/2 - Spares that will be decommissioned with labstore1003
- labstore1003 - Scratch and maps mounts - to be decommissioned
- labstore1004/5 - Tools, project and misc NFS (DRBD cluster)
- labstore1006/7 - Dumps distribution web and NFS (non-replicated copies of each other)
- cloudstore1008/9 - The future labstore1003, currently in build phase
- labstore2003/4 - Backup servers in CODFW for the labstore1004/5 DRBD volumes (one project, the other misc)
- LDAP servers (TODO: elaborate - this service has outgrown purely labs usage and is relied upon by several important prod misc services)
- Wiki-replicas (TODO: fill me. Is this a supporting service or just a service offered to CloudVPS)
- ToolsDB
- OSMDB
- m5
- metrics/monitoring (TODO: elaborate)
- Web servers (by means of labweb servers)
Plans
We will have to develop a plan to move from the current model to the ideal one.
- 2018-11-29 meeting notes
- 2018-12-20 meeting was cancelled.
- We agreed to have another meeting in January 2019.
See also
Other pages that might be relevant to better understand this topic: