
Obsolete:Portal:Toolforge/Admin/Dynamicproxy

This page contains historical information from 2024. It may be outdated or unreliable.

This page contains information on the Dynamicproxy component in the Toolforge infrastructure.

NOTE: This page was created by consolidating content from Portal:Toolforge/Admin/Webservice and Portal:Toolforge/Admin#WebProxy into this single page.

Components

Dynamicproxy is a custom-made ingress mechanism that handles communication between end users on the internet and the internal webservices running in Toolforge.

There are several key components in the dynamicproxy setup:

  • nginx, doing the actual proxy operations and SSL termination.
  • urlproxy.lua, the code that nginx uses to read redis data (for webservices running in the grid).
  • redis, for storing real-time data about proxy backends (for webservices running in the grid).
  • proxylistener.py, custom daemon that keeps redis data up-to-date (for webservices running in the grid).
  • webservice, the custom command that end users run to start/stop webservices running in Toolforge.
  • webservice-runner, the entry point script for grid-hosted webservice jobs (described in more detail below).

TL;DR:

  • Nginx running on tools-proxy-* instances terminates TLS connections for toolforge.org.
  • The nginx config includes a lua module named urlproxy.lua which implements lookup logic to find a backend host:port to handle the request (for webservices running in the grid).
  • The data that urlproxy.lua uses for this lookup is stored in Redis (for webservices running in the grid).
  • The data in Redis comes from the webservice system:
    • For a job grid hosted webservice:
      • webservice-runner runs on the job grid as the entry point of the submitted job
      • webservice-runner contacts a proxylistener.py service running on the active tools-proxy-* instance via TCP on port 8282
      • proxylistener.py updates Redis with the host:port sent by webservice-runner
    • For a Kubernetes hosted webservice:
      • the urlproxy.lua script simply defaults to sending requests to the Kubernetes ingress (see the sketch just after this list)
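
The lookup logic can be sketched roughly as follows. This is not the real Lua code (that lives in modules/dynamicproxy/files/urlproxy.lua); it is a minimal Python approximation, assuming the prefix:<toolname> hash format visible in the debug section below (a path pattern mapped to a http://host:port backend) and a hypothetical address for the Kubernetes ingress fallback.

# Minimal sketch of the lookup urlproxy.lua performs; illustrative only.
import re

import redis

# Assumption: the local redis on the active proxy, default port.
r = redis.StrictRedis(host="localhost", port=6379, decode_responses=True)

# Hypothetical address standing in for the Kubernetes ingress.
K8S_INGRESS = "http://k8s-ingress.invalid:30000"


def lookup_backend(toolname, request_path):
    """Return the backend URL that should serve this request."""
    # Grid-hosted webservices are registered as a hash at prefix:<toolname>,
    # mapping a path pattern (usually '.*') to a http://host:port backend.
    routes = r.hgetall("prefix:%s" % toolname)
    for pattern, backend in routes.items():
        if re.match(pattern, request_path):
            return backend
    # No grid registration found: default to the Kubernetes ingress, which
    # knows how to reach Kubernetes-hosted webservices.
    return K8S_INGRESS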

Interesting files:

  • modules/dynamicproxy/templates/urlproxy.conf
  • modules/dynamicproxy/files/urlproxy.lua
  • modules/toollabs/files/proxylistener.py
  • modules/toollabs/files/kube2proxy.py
  • modules/dynamicproxy/files/invisible-unicorn.py (only used for domainproxy?)

Components in the tools-proxy servers

The tools-proxy servers use a number of tools to feed information about web services into redis and to remove it when it no longer applies. The dynamicproxy class in puppet then reads from redis (via lua) to provide the list endpoint (described below) as well as the actual proxying of the web services themselves, exposed under the proxy's domain name as path items.

Grid web services

Grid services are added and removed using the proxylistener python script. This service runs on port 8282 and is used to register and deregister service locations with the proxy (by talking to the local redis service directly). The logs are found at /var/log/proxylistener.
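
Conceptually, proxylistener.py accepts TCP connections on port 8282 and translates the messages it receives into updates of the prefix:<tool> hashes in the local redis. The sketch below is not the real daemon; it is a minimal illustration, and the one-line "register <tool> <backend-url>" / "unregister <tool>" message format is invented for the example (the real protocol is in modules/toollabs/files/proxylistener.py).

# Illustrative stand-in for proxylistener.py: accept connections on 8282
# and keep the prefix:<tool> hashes in the local redis up to date.
import socketserver

import redis

r = redis.StrictRedis(host="localhost", port=6379, decode_responses=True)


class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Invented wire format, e.g. "register dplbot http://host:33555"
        line = self.rfile.readline().decode().strip()
        parts = line.split()
        if len(parts) == 3 and parts[0] == "register":
            r.hset("prefix:%s" % parts[1], ".*", parts[2])
        elif len(parts) == 2 and parts[0] == "unregister":
            r.hdel("prefix:%s" % parts[1], ".*")


if __name__ == "__main__":
    socketserver.TCPServer(("0.0.0.0", 8282), Handler).serve_forever()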

In addition to this, the webgrid exec nodes all have the portgrabber.py library and the portreleaser script. These are still used at the end of jobs, in particular jobs killed by grid engine and friends, via an epilog configured for the web queues. The epilog runs /usr/local/bin/portreleaser, which contacts the proxylistener script on port 8282. The /usr/local/bin/portgrabber script does not appear to be run directly anymore.

Kubernetes web services

The front proxy defaults to sending requests to the Kubernetes ingress. The Kubernetes ingress knows how to contact each tool webservice running there. See Portal:Toolforge/Admin/Kubernetes/Networking_and_ingress for more details.

Lua scripts

To manipulate the proxy, the nginx lua modules are used. The details of this are all in the dynamicproxy module in puppet: modules/dynamicproxy/manifests/init.pp

TODO: document dynamicproxy a bit better

Components in other servers

Components running on servers other than the tools-proxy servers themselves, mostly the more user-facing parts.

In bastion nodes

Toolforge bastions have the webservice command installed, which is the end-user interface to manage the lifecycle of webservices.

In grid nodes

Each webgrid job submission runs the webservice-runner script when a new web service is launched via the webservice command on a bastion. The script is part of the tools-webservice Debian package: https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-webservice

The webservice-runner script contacts the active tools-proxy node on port 8282 to register a random port in the range 1024-65535 with the active proxy (set via the active_proxy_host hiera variable on the old grid and the profile::toolforge::active_proxy_host hiera variable on the new grid), which records it in the proxy's local redis.
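
As an illustration of the client side of that registration, the sketch below picks a random high port and announces it to the active proxy over TCP port 8282. This is not the real webservice-runner code, and the hostname and message format are assumptions (the message format matches the invented one in the proxylistener sketch above).

# Illustrative client side of the registration done by webservice-runner.
import random
import socket

ACTIVE_PROXY = "tools-proxy-05.tools.eqiad.wmflabs"  # assumption: value of active_proxy_host


def register(toolname, hostname):
    # Pick the random high port mentioned above and tell the proxy about it.
    port = random.randint(1024, 65535)
    backend = "http://%s:%d" % (hostname, port)
    with socket.create_connection((ACTIVE_PROXY, 8282)) as sock:
        sock.sendall(("register %s %s\n" % (toolname, backend)).encode())
    return port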

(TODO: determine if the portgrabber.py script in the toolforge profile actually does anything considering the above -- it may for some other services?)

In grid cron servers

The tools/manifest/webservicemonitor.py script runs after being installed by the tools-manifest package: https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-manifest

Webservicemonitor ensures that services are running in the gridengine environment using a combination of submit host commands (such as qstat) and access to the proxy's list endpoint at <active_proxy_host>:8081/list. It will only function properly if it can read that endpoint correctly.
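
A quick way to see what webservicemonitor sees is to fetch that endpoint directly. A minimal sketch, assuming the endpoint returns a JSON document (the exact response format is not documented here, so adjust as needed):

# Fetch the proxy's list endpoint, the same data webservicemonitor consumes.
# Assumption: the response body is JSON; adjust if the real format differs.
import json
import urllib.request

ACTIVE_PROXY_HOST = "tools-proxy-05"  # placeholder for <active_proxy_host>

with urllib.request.urlopen("http://%s:8081/list" % ACTIVE_PROXY_HOST) as resp:
    routes = json.load(resp)
print(routes)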

The script works as a kind of reconciliation loop by comparing the list of registered tool->host:port mappings gathered from the active proxy server, the list of jobs running on the grid from qstat, and service.manifest files gathered from the tool $HOME directories. The manifest data is the primary driver for this reconciliation. Each manifest is checked to determine if it:

  • contains a "web: ..." declaration
  • contains a "backend: gridengine" declaration
  • contains a "distribution: ..." declaration matching the distribution of the instance running webservicemonitor

If any of the checks fail, the manifest is skipped.

The qstat output is searched for a running job matching the "web: ..." type and the tool name. If a matching job is found, its state is checked to determine if it should be treated as 'running' (state contains r, s, w, or h).

If the job is found not to be running, or the tool is not in the list of known proxy backends, the job is eligible for submission to the job grid. Before actually submitting the job, the manifest is checked to see if "too many" restart attempts have happened since the last time the job was seen to be running. (FIXME: pretty sure this is actually broken in the code. The tracking data is not persisted.)
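
Put together, one reconciliation pass looks roughly like the sketch below. This is a simplification, not the real webservicemonitor.py code; the inputs and the start_webservice() helper are placeholders for the qstat parsing, manifest loading and job submission that the real script performs.

# Simplified reconciliation pass in the spirit of webservicemonitor.py.
RUNNING_STATES = set("rswh")  # qstat states treated as 'running'


def start_webservice(tool, manifest):
    # Placeholder: the real monitor would (re)submit the grid job here,
    # subject to the restart-attempt check mentioned above.
    print("would restart webservice for", tool)


def reconcile(manifests, grid_jobs, proxy_backends, distribution):
    """manifests: tool -> parsed service.manifest dict
    grid_jobs: tool -> {'state': ...} gathered from qstat
    proxy_backends: set of tools registered with the active proxy"""
    for tool, manifest in manifests.items():
        # Skip manifests this monitor instance is not responsible for.
        if "web" not in manifest:
            continue
        if manifest.get("backend") != "gridengine":
            continue
        if manifest.get("distribution") != distribution:
            continue

        job = grid_jobs.get(tool)
        running = job is not None and set(job["state"]) & RUNNING_STATES

        # Resubmit if the grid job is missing/not running, or the proxy
        # has no registered backend for the tool.
        if not running or tool not in proxy_backends:
            start_webservice(tool, manifest)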

In kubernetes worker nodes

The webservice command on bastions can also simply launch pods on Kubernetes behind a deployment that is namespaced for the tool. Nothing special happens on the proxy side here, since tools-webservice talks directly to the Kubernetes API to register each webservice as an ingress object.
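
To inspect those ingress objects for a given tool, they can be listed via the Kubernetes API. A minimal sketch using the official Python client, assuming a kubeconfig with read access and a per-tool namespace named tool-<toolname> (treat both as assumptions):

# List the ingress objects that tools-webservice creates for a tool.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

toolname = "example-tool"  # hypothetical tool name
for ing in networking.list_namespaced_ingress("tool-%s" % toolname).items:
    print(ing.metadata.name, [rule.host for rule in (ing.spec.rules or [])])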


SSL termination

The SSL termination for Toolforge webservices happens in the dynamicproxy component, specifically in nginx.

The nginx daemon should redirect http:// requests to the equivalent https:// URL.

TODO: add information about the different domains and how certificates are handled. Link to Portal:Toolforge/Admin/SSL_certificates, etc.

Operations

This section contains documentation on common operations admins can do related to the dynamicproxy component.

deployment

To build a new dynamicproxy server, simply create a VM instance using the role::wmcs::toolforge::proxy role.

TODO: hiera? puppet caveats? TODO: floating IPs? DNS records?

failover

This needs review. It may be better to move the IP with the openstack floating ip subcommand instead of the server subcommand, since the server subcommand only communicates with Neutron indirectly.

The simplest failover consists of:

  • associate the floating IP with the new proxy server.
  • refresh hiera keys to point to the now-active proxy server.
  • make sure DNS records (public and internal, mind split-DNS) are correct.
  • run puppet on all VMs in the tools project so they learn about the new proxy server.

Details on the steps:

1. Switch the floating IP for toolforge.org (currently 185.15.56.11) from one proxy to the other (if tools-proxy-01 is the current active one, switch to tools-proxy-02, and vice versa). The easiest way to verify that the routing is proper (other than just hitting toolforge.org) is to tail /var/log/nginx/access.log on the proxy machines. This is an invasive operation.

:# Find instance with floating ip
user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id tools server list | grep -C 0 '185.15.56.11' | awk '{print $4, $2}'

:# Move floating IP from one instance to another
user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id tools server remove floating ip <current instance UUID>  185.15.56.11
[.. wait a few seconds for Neutron to catch up ..]
user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id tools server add floating ip <intended instance UUID> 185.15.56.11

If this is a temporary failover (e.g. in order to reboot the primary proxy) then this is all that's needed -- as soon as the IP is assigned back to the primary, service will resume as usual. If, on the other hand, the original proxy is going to be down for more than a few minutes, continue with the following steps to make the switch official. Any proxy should be able to serve as the web frontend; registering and deregistering services is handled by the "active_proxy_host", which needs to be updated for longer failovers, see the next step.

2. Stop redis on the currently active proxy to prevent changes being lost while the hiera value is updated. This should help prevent some problems, though launching webservices will be broken until a redis primary is back up. Use hiera (in Horizon) to set the active proxy host (profile::toolforge::active_proxy_host) to the hostname (not FQDN) of the newly active proxy. This updates /etc/active-proxy and tells each of the tools-proxy servers which one is the active replication primary. There is a period of time where values on individual tools project instances will not be consistent, which is a possible race condition for active changes; that is the reason for shutting down redis.

3. Run puppet on the DNS recursor hosts (cloudservices1003 and cloudservices1004). This is required for internal hits to toolforge.org to resolve. See https://phabricator.wikimedia.org/diffusion/OPUP/browse/production/modules/role/manifests/labs/dnsrecursor.pp;32dac026174c4eda1383713fda818a6a30c5e981$54

4. Force a puppet run on all the proxies and webgrid nodes using Cumin:

:# in the tools project
user@cloud-cumin-01:~$ sudo cumin --force -x 'O{project:tools name:tools-.*webgrid-.*} or O{project:tools name:tools-proxy-.*}' 'run-puppet-agent'
[..]

:# in the toolsbeta project
user@cloud-cumin-01:~$ sudo cumin --force -x 'O{project:toolsbeta name:toolsbeta-.*webgrid-.*} or O{project:toolsbeta name:toolsbeta-proxy-.*}' 'run-puppet-agent'
[..]

5. Ensure puppet has run on all proxy hosts and that the replication roles have switched.

6. Test toolforge.org and restart a webservice.

debug

Some debugging hints.

Redis status:

root@tools-proxy-05:~# redis-cli hgetall prefix:dplbot
.*
http://tools-webgrid-lighttpd-1205.tools.eqiad.wmflabs:33555

root@tools-proxy-05:~# grep /var/lib/redis/tools-proxy-05-6379.aof -e 'dplbot' -C 5
(...)
HDEL
$13
prefix:dplbot
$2
.*
*4
(...)
HSET
$13
prefix:dplbot
$2
.*
$60
http://tools-webgrid-lighttpd-1202.tools.eqiad.wmflabs:44504

Nginx status:

root@tools-proxy-05:~# tail -f /var/log/nginx/access.log
[..]

typical issues

Some typical issues we found when managing the dynamicproxy setup.

  • If you add new proxy hosts, you will need to update the hiera variable toollabs::proxy::proxies and then restart ferm on the proxy hosts and flannel etcd servers, or the changes won't take effect at the firewall level.
  • Problems with flannel: Did you check UDP? phab:T213711#4878941

See also