Portal:Cloud VPS/Admin/notes/Neutron ideal model

This page holds information on the ideal model for Neutron. This model has two requirements:

  • serves CloudVPS use cases
  • can work securely with other prod infrastructure

This also affects the Toolforge service, since it is just a CloudVPS project. Other services, especially Data Services (wiki-replicas, quarry, paws, etc.), might be affected by any changes to the model.

The idea is to reach a common understanding and agreement, and to coordinate next steps in a shared plan.

Requirements and use cases

A good understanding of the use cases and requirements is key to developing a proper solution.

access to instances

Access to VMs (ssh tcp/22) is unrestricted. Users use an SSH bastion server to proxy to the actual instances. The CloudVPS bastion servers usually live in the bastion Cloud VPS project.
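
A minimal client-side sketch of this proxying, assuming OpenSSH's ProxyJump option; the hostnames and the username below are illustrative placeholders, not the canonical bastion names:

 # Illustrative ~/.ssh/config snippet: hop through a Cloud VPS bastion
 # to reach instances. Hostnames and the username are placeholders.
 Host cloud-bastion
     HostName bastion.example.wmflabs.org
     User your-shell-username

 Host *.eqiad.wmflabs
     ProxyJump cloud-bastion
     User your-shell-username

With an entry like this, "ssh someinstance.eqiad.wmflabs" transparently jumps through the bastion over tcp/22.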

networking connections

These are networking connections to take into account when developing the ideal model. It is especially important for us to know the special cases in which VMs connect outbound to servers, other than the OpenStack physical infrastructure, that are considered "production" hosts.

  • LDAP is used for both authentication and authorization in Cloud VPS projects (uid information, ssh public key storage, sudoer rules lists, etc)
  • Shinken needs to connect to the nova API to know about running instances.
  • Various Toolforge tools (for example openstack-browser) need to be able to query the OpenStack APIs using unprivileged credentials (see the sketch after this list).
  • NFS storage is used by a large number of Cloud VPS projects and their instances (especially Toolforge)
  • DNS server for instances? (TODO: fill me)
  • Anything that the Foundation's production network exposes to the general internet should also be available to Cloud VPS instances
    • Toolforge and other projects need to be able to access MediaWiki and RESTBase APIs for production wikis
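
As an illustration of the unprivileged API access mentioned in the list above, here is a minimal sketch using the openstacksdk Python library; the auth URL, project, and credentials are placeholders, not the real observer account:

 # Minimal sketch: list instances using read-only credentials, similar in
 # spirit to what openstack-browser does. All values below are placeholders.
 import openstack

 conn = openstack.connect(
     auth_url="https://openstack.example.org:25000/v3",
     project_name="some-project",
     username="readonly-user",
     password="not-a-real-password",
     user_domain_name="Default",
     project_domain_name="Default",
 )

 # Listing is limited to what the credentials are allowed to see;
 # cross-project listing depends on the deployment's policy.
 for server in conn.compute.servers():
     print(server.name, server.status)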

LVS

Using LVS in Cloud VPS is an interesting use case, but it may be complex to achieve and to do securely.

See the main page Neutron with LVS for details.

Ideal model

In the future, an ideal model for Cloud VPS by means of Neutron would have these implementation details and/or restrictions:

  1. complete addressing separation from production infrastructure (specifically for VM addressing).
  2. production servers would see any Cloud VPS clients as coming from the internet.
  3. we have complete control over which connections (and why) happen between prod and CloudVPS that require special handling (i.e., connections to any production services which would not be accessible from random hosts on the internet).
  4. supporting services (like NFS, wiki-replicas, LDAP, etc) are also handled with care.
  5. transport networking between CloudVPS and the prod routers uses public addressing instead of private addressing.
  6. full IPv6 support in Cloud VPS, meaning that every single VM has its own IPv6 address allocated.
  7. HTTP/HTTPS connections going outside CloudVPS (including Toolforge) and reaching the prod wikis are clearly identified (by IP addresses, User-Agent, etc.); see the sketch after this list.
  8. supporting services are self-contained in CloudVPS to the extent we can (i.e., proxies, databases, etc.).
  9. our network security policy (filtering) is clear and robust.
  10. VLAN isolation is clear. This means that Cloud VPS stuff isn't in shared VLANs with other, unrelated services.
  11. control boxes don't use public addressing (i.e., cloudcontrol1001.wikimedia.org vs cloudcontrol1003.eqiad.wmnet). TODO: should the openstack public HTTP endpoints be put behind the WMF proxies?
  12. TODO: Do we want to keep a single network for all instances to be scattered across, or provide per-project networks? Considerations may include IPv6 (see point 6), LVS (see T207554), and ease of separating different projects' internal services.
  13. CloudVPS uses BGP to advertise its prefixes to the prod routers, instead of static routes.
  14. CloudVPS is distributed across rows (the current L2 failure domains).
  15. TODO: should Prometheus metrics related to CloudVPS (both VMs and hardware) be stored on a different server than the prod ones?
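
Regarding point 7, a minimal sketch of how a tool running inside CloudVPS/Toolforge could make its outbound requests to the prod wikis identifiable through the User-Agent header, assuming Python and the requests library; the tool name and contact address are placeholders:

 # Minimal sketch for point 7: make outbound HTTP(S) traffic to the prod
 # wikis attributable via a descriptive User-Agent. Values are placeholders.
 import requests

 HEADERS = {
     "User-Agent": "example-tool/0.1 (https://example-tool.toolforge.org; maintainer@example.org)"
 }

 resp = requests.get(
     "https://en.wikipedia.org/api/rest_v1/page/summary/Wikipedia",
     headers=HEADERS,
     timeout=10,
 )
 resp.raise_for_status()
 print(resp.json().get("extract"))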

Things that are good in the current model

The current model (as implemented by eqiad1, since main is going to be deprecated soon) has several things done the right way:

  • Actually using Neutron. Worth noting.
  • VMs are using the 172.16 range, which means we no longer use any addresses in the 10.0.0.0/8 range. This is the cloud-instances2-b-eqiad subnet (VLAN 1105) (points #1 and #10 in the ideal model).

Things to improve in the current model

Our current model is already in place, and we see a lot of margin for improvement before we reach the ideal model.
Some concerns have already been registered in Phabricator tasks.

Specifically, some issues that we may agree we have and may need to reconsider:

  • openstack physical hardware servers are in prod subnets (DC rows) for management. It is not clear if this should be part of the ideal model.
  • the dmz_cidr mechanism takes precedence over floating IPs (see below).

dmz_cidr considerations

The dmz_cidr mechanism, which is currently working in both main and eqiad1, allows VMs to interact with prod servers without any NAT. In fact, according to the current configuration in eqiad1, all of prod can be reached by VMs in this deployment without SNAT.

This wide opening is not ideal, since we don't really know who makes these connections, what they are for, or why they exist.

Also, this mechanism takes precedence over floating IPs. If we already have a floating IP for a VM (and we can identify this VM by its public IP), why skip the NAT then?
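
Conceptually, dmz_cidr behaves like a source-NAT exemption inserted before the regular SNAT rule in the router's nat table. The sketch below only illustrates that ordering; the prefixes and the routing source address are placeholders, not the rules actually generated by our customized l3 agent:

 # Illustration only; prefixes and addresses are placeholders.
 # Destinations covered by dmz_cidr are ACCEPTed in nat POSTROUTING before
 # any SNAT rule matches, so they bypass NAT entirely, even for VMs that
 # have a floating IP assigned.
 iptables -t nat -A POSTROUTING -s 172.16.0.0/21 -d 10.0.0.0/8 -j ACCEPT
 iptables -t nat -A POSTROUTING -s 172.16.0.0/21 -j SNAT --to-source 198.51.100.1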

supporting services

The list of supporting services is neither clear nor complete.

  • NFS servers (TODO: add IPs and perhaps some other information)
    • labstore1001/2 - Spares that will be decommissioned with labstore1003
    • labstore1003 - Scratch and maps mounts - to be decommissioned
    • labstore1004/5 - Tools, project and misc NFS (DRBD cluster)
    • labstore1006/7 - Dumps distribution web and NFS (non-replicated copies of each other)
    • cloudstore1008/9 - The future labstore1003, currently in build phase
    • labstore2003/4 - Backup servers in CODFW for the labstore1004/5 DRBD volumes (one project, the other misc)
  • LDAP servers (TODO: elaborate - this service has outgrown purely labs usage and is relied upon by several important prod misc services); see the sketch after this list
  • Wiki-replicas (TODO: fill me. Is this a supporting service or just a service offered to CloudVPS)
  • ToolsDB
  • OSMDB
  • m5
  • metrics/monitoring (TODO: elaborate)
  • Web servers (by means of labweb servers)
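
As an illustration of what instances need from the LDAP servers (uid information, SSH public keys, sudoers rules), here is a hedged sketch of a lookup as an instance might perform it; the hostname, base DN and attribute names are assumptions based on a typical ldapPublicKey-style schema, not a verified description of our tree:

 # Illustration only: hostname, base DN and attribute names are assumptions.
 ldapsearch -x -H ldap://ldap-ro.example.wikimedia.org \
     -b 'ou=people,dc=wikimedia,dc=org' \
     '(uid=some-user)' uid uidNumber sshPublicKey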

Plans

We will have to develop a plan to move from the current model to the ideal one.

See also

Other pages that might be relevant to better understand this topic: