MediaWiki On Kubernetes
MediaWiki-on-Kubernetes (or mw-on-k8s for short) is an initiative to transition the MediaWiki at WMF deployment from dedicated Application servers to Kubernetes. This page contains information about what is changing as part of this transition.
What traffic is currently served by MediaWiki on k8s
As of today, the following sites are fully served by MediaWiki on k8s (config file):
- test2.wikipedia.org
- test.wikidata.org
- www.mediawiki.org
- office.wikimedia.org
- vrt-wiki.wikimedia.org
- all closed wikis
- 4% of all traffic
Server groups
MediaWiki on Kubernetes is deployed to both the main eqiad and codfw datacenters, in the wikikube Kubernetes clusters. While the situation is in constant evolution, we currently have the following server groups:
- mw-debug: for external requests with X-Wikimedia-Debug, like the old mwdebug VMs. Accessible using the k8s-experimental option of the WikimediaDebug browser extension.
- mw-web: for external requests from web browsers (via the CDN).
- mw-api-ext: for external requests to the API (via the CDN).
- mw-api-int: for internal requests to the API (from other services).
- mw-jobrunner: for internal requests from the JobQueue runners (except videoscaling jobs).
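Each server group corresponds to its own Kubernetes namespace (e.g. mw-web, as used in the troubleshooting examples below). A minimal sketch for listing them from a deployment server, assuming admin access through kube-env as shown further down:

sudo -i
kube-env admin eqiad                   # point kubectl at the eqiad wikikube cluster
kubectl get namespaces | grep '^mw-'   # mw-debug, mw-web, mw-api-ext, mw-api-int, mw-jobrunner, ...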
What is in a MediaWiki pod
Each MediaWiki pod in Kubernetes contains 8 containers:
- mediawiki-main-tls-proxy: running Envoy, as service mesh and TLS terminator.
- mediawiki-main-httpd: running the Apache httpd daemon.
- mediawiki-main-app: running the PHP daemon.
- mediawiki-main-mcrouter: running mcrouter.
- mediawiki-main-rsyslog: running rsyslog, to collect MediaWiki logs (for Logstash) and Apache access logs.
- mediawiki-main-{php-fpm,mcrouter,httpd}-exporter: the Prometheus exporters for PHP, mcrouter and Apache httpd.
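To verify this against a live pod, a hedged one-liner from a deployment server (the pod name is just an example taken from the listing further down; substitute one from kubectl get pods):

kubectl -n mw-web get pod mw-web.eqiad.main-7f65fbb9c8-69sm2 \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'   # prints one container name per line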
For more detailed information please see our MediaWiki On Kubernetes/How it works page.
How to manage changes to the infrastructure
Given we're in a transition phase between the old Puppet-managed systems and MediaWiki running on Kubernetes, we have tried to keep as much as possible shared between the two. This means that deploying Puppet changes for app servers influences future mw-on-k8s deploys. However, Puppet merely prepares the host where the Kubernetes images are built; it does not perform a Kubernetes deployment itself. Applying infrastructure-level changes to the Kubernetes services and doing a code deployment follow exactly the same procedure, but additional care is needed when merging infrastructure changes.
Things that propagate from Puppet to MediaWiki-on-Kubernetes include:
- The list of logging brokers, and the udp2log host.
- The list of service proxy endpoints to offer, and the list of all available ones (derived from service::catalog).
- The list of MediaWiki sites and Apache configuration parameters (e.g. the domain names for Apache vhosts), but not the Apache config template itself!
- The list of memcached servers.
- The GeoIP and GeoIPInfo data.
So, whenever you want to change any of the above things, you will need to:
- Check what your change would modify on a role::deployment_server::kubernetes host. If it changes a file under /etc/helmfile-defaults/mediawiki, then the following applies to your change.
- Only merge the change during a "MediaWiki infrastructure" window routinely scheduled on the Deployments calendar, or otherwise well outside of any MediaWiki code deployment window. This allows both SREs and MediaWiki developers/deployers to monitor their deployments independently and avoid unexpected consequences.
- After the change is merged, ensure a Puppet run happens on the MediaWiki deployment server, then re-deploy all MediaWiki service groups on Kubernetes (a sketch of this follows below).
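A minimal sketch of that last step, assuming the run-puppet-agent wrapper is available on the deployment server (otherwise a plain puppet agent run will do) and that a scap-driven re-deploy is acceptable:

# On the MediaWiki deployment server, after the Puppet change is merged
sudo run-puppet-agent       # refresh the files under /etc/helmfile-defaults/mediawiki
scap sync-world --k8s-only  # re-deploy all MediaWiki server groups on Kubernetes, leaving Apache targets untouched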
How to deploy MediaWiki on Kubernetes
Manual deployment
scap sync-world --stop-before-sync
rebuilds the image and updates the Helmfile release files, so you can call helmfile apply afterwards to manually update the releases.
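As an illustration, a subsequent manual apply for a single server group might look like the sketch below; the helmfile.d path and environment name are assumptions and may differ on your deployment server:

# Hypothetical location of the mw-web helmfile; adjust to your setup
cd /srv/deployment-charts/helmfile.d/services/mw-web
helmfile -e eqiad diff    # review the pending changes first
helmfile -e eqiad apply   # roll them out to the eqiad wikikube cluster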
Automatic deployment
scap sync-world --k8s-only
rebuilds the image and only triggers the deployment to mw-on-k8s, leaving the regular Apache targets untouched.
Full image rebuild and deployment
This can be useful especially when we need to update a base image:
scap sync-world --k8s-only -D full_image_build:true
Troubleshooting
Dashboards
Logs
- Apache2 AccessLog dashboard (OpenSearch)
- php-fpm slowlog dashboard (OpenSearch)
- php-fpm errorlog dashboard (OpenSearch)
Graphs
MediaWiki REPL
Any deployer can launch, from a deployment server, a REPL shell (either shell.php or eval.php) using the mw-debug-repl script:
you@deploy1002 $ sudo mw-debug-repl
Error: a wiki should be provided on the command line
mw-debug-repl - launch a MediaWiki REPL in the kubernetes mw-debug environment.
Usage: mw-debug-repl [-e] [-d <datacenter>] [-w <wiki-name>|<wiki-name>]
OPTIONS:
-e Launch eval.php, instead of the default shell.php as REPL
-d|--datacenter Pick a specific datacenter (by default the master will be picked)
-w|--wiki Pick a wiki. For compatibility reasons, the flag can be omitted.
-h|--help Show this help message
EXAMPLES:
Launch an eval.php shell for itwiki in eqiad
$ sudo /usr/local/bin/mw-debug-repl -e -d eqiad --wiki itwiki
# Also valid:
$ sudo /usr/local/bin/mw-debug-repl -e --datacenter eqiad itwiki
Launch shell.php for enwiki
$ sudo /usr/local/bin/mw-debug-repl enwiki
$ sudo /usr/local/bin/mw-debug-repl --wiki enwiki
you@deploy1002 $ sudo /usr/local/bin/mw-debug-repl enwiki
Finding a mw-debug pod in eqiad...
Now running shell.php for enwiki inside pod/mw-debug.eqiad.pinkunicorn-59b5df7ffd-h2xxq...
Psy Shell v0.11.10 (PHP 7.4.33 — cli) by Justin Hileman
> echo $wmgServerGroup
kube-mw-debug⏎
>
Get a shell on a production pod
you@deploy1002 $ sudo -i
root@deploy1002 # kube-env admin eqiad
root@deploy1002 # kubectl -n mw-web get pods
NAME READY STATUS RESTARTS AGE
mw-web.eqiad.canary-68998c7b48-2jt8g 8/8 Running 0 2d13h
mw-web.eqiad.canary-68998c7b48-r5hng 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-69sm2 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-7vtrl 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-9rr9l 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-gjgrk 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-j88nr 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-lpq9f 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-lwhwj 8/8 Running 0 2d13h
mw-web.eqiad.main-7f65fbb9c8-xr64t 8/8 Running 0 2d13h
root@deploy1002 # kubectl -n mw-web exec pod/mw-web.eqiad.main-7f65fbb9c8-69sm2 -c mediawiki-main-app -it -- /bin/bash
www-data@mw-web:/$
strace a production PHP process
root@deploy1002 # kubectl -n mw-web describe pod mw-web.eqiad.main-7f65fbb9c8-69sm2 | grep Node:
Node: kubernetes1023.eqiad.wmnet/10.64.32.21
root@deploy1002 # kubectl -n mw-web exec pod/mw-web.eqiad.main-7f65fbb9c8-69sm2 -c mediawiki-main-app -- ps -eo pid,pidns,args
PID PIDNS COMMAND
1 4026533595 php-fpm: master process (/etc/php/7.4/fpm/php-fpm.conf)
1345418 4026533595 php-fpm: pool www
1346141 4026533595 php-fpm: pool www
1346193 4026533595 php-fpm: pool www
1346539 4026533595 php-fpm: pool www
1346750 4026533595 php-fpm: pool www
1346818 4026533595 php-fpm: pool www
1348934 4026533595 php-fpm: pool www
1349988 4026533595 php-fpm: pool www
2367349 4026533595 ps -eo pid,pidns,args
Let's strace PID 1346750 in the container. The container doesn't have strace, but we can run it on the host. However, the process lives in a PID namespace, so we first have to figure out what its PID is in the root namespace. On the host, the NStgid line of /proc/$pid/status lists the PID in both the root namespace and the container's namespace. We can confirm we found the right process by checking that its PID namespace is the same.
$ ssh kubernetes1023.eqiad.wmnet
you@kubernetes1023 $ grep NStgid.*1346750 /proc/*/status
/proc/69585/status:NStgid: 69585 1346750
you@kubernetes1023 $ sudo ps -p 69585 -o pidns
PIDNS
4026533595
you@kubernetes1023 $ sudo strace -p 69585
strace: Process 69585 attached