Every Toolforge tool can run a dedicated <toolname>.toolforge.org website. Toolforge provides the webservice command, which is used to start and stop the web server for each tool. Toolforge supports websites written in several programming languages, including PHP, Python, Node.js, Java, Ruby, and others. Toolforge also provides support services which can help protect your website's visitors from tracking by third-party services.
The webservice command uses convention over configuration for some aspects of how the website is deployed. You'll find details for different programming languages below.
Using the webservice command
You can use the webservice command to start, stop, restart, and check the status of a web server.
$ ssh login.toolforge.org
$ become my_cool_tool
$ webservice start
Run webservice --help to get a full list of arguments.
Without any additional arguments or configuration files,
webservice start will currently start a PHP 7.3 Kubernetes container serving content from your tool's $HOME/public_html directory using lighttpd as the web server software.
The webservice command has the concept of a "template" file which can be used to store arguments (and eventually other structured content) for starting a webservice. The code will look for a --template=... command line argument and fall back to looking for a $HOME/service.template file. The $HOME/service.template file is what most tools will be expected to use, but we may find interesting uses for multiple templates in a single tool as well.
A webservice template file is a YAML document. It can contain these settings:
- backend: the backend to use (equivalent to --backend=...)
- cpu: the CPU reservation to ask for on Kubernetes (equivalent to --cpu=...)
- mem: the memory reservation to ask for on Kubernetes (equivalent to --mem=...)
- replicas: the number of Pod replicas to use (equivalent to --replicas=...)
- type: the type of webservice to start (equivalent to the positional type argument)
- extra_args: extra arguments to pass to the backend (not used by most backends)
By saving the desired startup state in a file, you can restart the service with plain webservice stop and webservice start commands and get the same configuration every time.
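As an illustration, a tool that always wants a Python webservice on Kubernetes with extra resources might use a $HOME/service.template like the sketch below. The type name python3.7 and the resource values are assumptions for this example; run webservice --help to see the types and limits actually available to your tool.

```yaml
# $HOME/service.template (illustrative values)
backend: kubernetes
type: python3.7
cpu: 1
mem: 2Gi
replicas: 1
```

With this file in place, a plain webservice start will pick up the same backend, type, and resource settings every time.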
Choosing a backend
The Kubernetes backend provides more modern software versions and is the default backend. The Grid Engine backend is used primarily by legacy tools which were developed before Kubernetes was available. Toolforge administrators recommend that you try using Kubernetes first for new tools and only use the Grid Engine backend if there is a technical limitation that prevents your tool from running inside Kubernetes.
Both the Kubernetes and Grid Engine backends share common infrastructure services for serving web sites. Toolforge has an Nginx server configured as a proxy server which handles all inbound requests to your tool's web server. This proxy server takes care of providing TLS termination and then reverse proxies the inbound request to your tool's web service. Web servers running on Kubernetes have a second Nginx proxy server running as the "Ingress" component inside the Kubernetes cluster. See Portal:Toolforge/Admin/Kubernetes/Networking and ingress for detailed information about the network and web request routing used by the Toolforge Kubernetes cluster.
Toolforge also includes a 404 handler service which will respond to HTTP requests for tools which do not exist and tools which are not currently running a web service. This service is implemented as the fourohfour tool which runs on the Kubernetes backend.
Kubernetes (k8s) is a platform for running containers. Kubernetes web servers have access to newer versions of most software than the Grid Engine provides. K8s also provides a more robust system for restarting tools automatically following an application crash.
Maintainer visible differences from Grid Engine based Web services
- Each process runs inside a Docker container, orchestrated by Kubernetes.
- Provides better resource isolation (one tool cannot take down other tools by consuming all RAM or CPU)
- Better health checking (monitoring built into Kubernetes, not a hack we wrote)
- Less complex proxy setup, leading to fewer proxy related outages / issues
- Containers available based on newer Debian versions (Buster)
- Newer software versions than those available with Debian Stretch
- It is not possible to interact with the Grid Engine from Kubernetes (no jsub or other grid commands inside the containers)
- The Kubernetes backend has additional arguments for setting resource limits:

  -m MEMORY, --mem MEMORY      Set a higher Kubernetes memory limit
  -c CPU, --cpu CPU            Set a higher Kubernetes CPU limit
  -r REPLICAS, --replicas REPLICAS
                               Set the number of pod replicas to use
The Grid Engine backend runs your web server as a job on a Debian Stretch grid exec node. This is similar to the way that jsub runs any grid job you submit, but there is a separate exec queue on the grid for jobs started by webservice.
Switching between Kubernetes and Grid Engine
From Kubernetes to Grid Engine
$ webservice --backend=kubernetes stop
$ webservice --backend=gridengine start
From Grid Engine to Kubernetes
$ webservice --backend=gridengine stop
$ webservice --backend=kubernetes <type> start
Default web server (lighttpd + PHP)
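As a quick smoke test of the default runtime, you can drop a minimal PHP page into $HOME/public_html, the directory the default lighttpd + PHP service serves from. The page content here is purely illustrative:

```shell
# Create a minimal page for the default lighttpd + PHP runtime.
# The default service serves files from $HOME/public_html.
mkdir -p "$HOME/public_html"
cat > "$HOME/public_html/index.php" <<'PHP'
<?php echo "Hello from Toolforge!"; ?>
PHP
```

After webservice start, the page should be reachable at https://<toolname>.toolforge.org/.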
Node.js web services
Other / generic web servers
You can run other web servers that are not directly supported. This can be accomplished using the generic webservice type on the Grid Engine backend or a runtime-specific type on the Kubernetes backend.
webservice --backend=kubernetes golang start|stop|restart|shell SCRIPT
webservice --backend=kubernetes jdk11 start|stop|restart|shell SCRIPT
webservice --backend=kubernetes ruby25 start|stop|restart|shell SCRIPT
webservice --backend=gridengine generic start|stop|restart SCRIPT
To start a webserver that is launched by a script at
/data/project/toolname/code/server.bash, you would launch it with:
$ webservice --backend=gridengine generic start /data/project/toolname/code/server.bash
Your script will be passed an HTTP port to bind to in an environment variable named PORT. This is the port that the Nginx proxy will forward requests for https://YOUR_TOOL.toolforge.org/ to. When using the Kubernetes backend, PORT will always be 8000. When using the Grid Engine backend, PORT will change each time the
webservice start or
webservice restart command is run.
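A minimal launcher script might look like the sketch below. The choose_port helper is illustrative; it only shows how a script can honor the PORT variable described above, falling back to 8000 (the Kubernetes default) when PORT is unset:

```shell
#!/bin/bash
# Sketch of a generic launcher script, e.g. one passed to
# `webservice --backend=gridengine generic start`.
# Toolforge exports PORT; fall back to 8000 (the Kubernetes default) otherwise.
choose_port() {
    echo "${PORT:-8000}"
}

port="$(choose_port)"
echo "Starting web server on port ${port}"
# Replace the next line with your real server command, for example:
# exec /usr/bin/python3 -m http.server "${port}"
```

The important part is that the server binds to the port Toolforge hands it, not a hard-coded one, so the Nginx proxy can reach it on either backend.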
Common tasks and guides
Hosting large files
Tool storage uses NFS, which has limited storage and network bandwidth. If your tool needs to serve a large, static file (for example a container image or tarball), please store that file in the 'Download' project rather than in your tool's home directory.
The Download project hosts https://download.wmcloud.org, a public read-only web server for large file storage. If you would like a file added, create a Phabricator task or contact WMCS staff directly.
Serving static files
Files placed in a tool's
$HOME/www/static directory are available directly from the URL
tools-static.wmflabs.org/toolname. This does not require any action on the tool's part — putting the files in the appropriate folder (and making the directory readable) should 'just work'.
You can use this to serve static assets (CSS, HTML, JS, etc) or to host simple websites that don't require a server-side component.
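For example, publishing a stylesheet could look like the sketch below; the css/site.css path and file content are illustrative. The chmod step matters because the static file server runs as a different user than your tool:

```shell
# Publish a stylesheet so it is served from
# https://tools-static.wmflabs.org/<toolname>/css/site.css
STATIC_DIR="${HOME}/www/static"
mkdir -p "${STATIC_DIR}/css"
printf 'body { margin: 0; }\n' > "${STATIC_DIR}/css/site.css"
# The static web server is a separate user, so files must be world readable
# (the X keeps directories traversable without marking files executable).
chmod a+rX "${STATIC_DIR}" "${STATIC_DIR}/css" "${STATIC_DIR}/css/site.css"
```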
Load external assets using our CDN services
- Libraries (Browse libraries)
- Toolforge provides an anonymizing reverse proxy to cdnjs.
- Fonts (Search fonts)
- Toolforge provides an anonymizing reverse proxy to Google fonts.
- Maps (Documentation)
- Wikimedia provides maps servers with data from OpenStreetMap.
Runtime memory limits
- Kubernetes: 2GiB for most runtimes (Java's limit is 4GiB).
- Grid Engine: 4GiB
Requesting additional tool memory
Kubernetes web servers start with a default limit on both runtime memory and CPU power. These limits vary slightly based on which runtime language (PHP, Python, Java, etc.) you are using. The --cpu and --mem command line arguments can be used to increase these defaults up to the quota limit for your tool's Kubernetes namespace. See Kubernetes#Quotas and Resources for instructions on requesting an increased quota for your tool.
For Grid Engine webservices, request more tool memory by opening a Phabricator task. A Cloud Services administrator will review your request and can create a /data/project/.system/config/$TOOLNAME.web-memlimit configuration file that adjusts the limit.
An Nginx proxy sits between your webservice and the user. By default this proxy buffers the response sent from your server. For some use cases, including streaming large quantities of data to the browser, this can be undesirable. Buffering can be disabled on a per-request basis by sending an
X-Accel-Buffering: no header in your response.
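As a sketch of what "sending the header in your response" means, a CGI-style handler could emit the header before streaming its body. The handler function and body below are illustrative, not part of any Toolforge runtime:

```shell
# Sketch: emit the X-Accel-Buffering header before streaming the body,
# so the Nginx proxy passes chunks through instead of buffering them.
stream_response() {
    printf 'Content-Type: text/plain\r\n'
    printf 'X-Accel-Buffering: no\r\n'   # disable proxy buffering for this response
    printf '\r\n'                        # blank line ends the headers
    for i in 1 2 3; do
        echo "chunk ${i}"                # body arrives at the client as it is written
    done
}

stream_response
```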
A default image will be served by the shared proxy layer if your webservice returns a 404 Not Found response when asked for /favicon.ico. This default icon is the same as the one found at https://tools-static.wmflabs.org/toolforge/favicons/favicon.ico.
A default response will be served by the shared proxy layer if your webservice returns a 404 Not Found response when asked for /robots.txt. The default robots.txt response denies access to all compliant web crawlers. We decided that this "fail closed" approach would be safer than a "fail open" default that tells all crawlers to crawl all tools.
Any tool that does wish to be indexed by search engines and other crawlers can serve its own /robots.txt content. See https://www.robotstxt.org/ for more information on /robots.txt in general.
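For example, a tool that wants to allow all compliant crawlers everywhere could serve this standard permissive /robots.txt (an empty Disallow rule means nothing is off limits):

```
User-agent: *
Disallow:
```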
Communication and support
We communicate and provide support through several primary channels. Please reach out with questions and to join the conversation.
- Phabricator workboard (#Cloud-Services): task tracking and bug reporting
- IRC channel (#wikimedia-cloud): general discussion and support
- Mailing list (cloud@): information about ongoing initiatives, general discussion and support
- Announcement emails (cloud-announce@): information about critical changes (all messages mirrored to cloud@)
- News wiki page (News): information about major near-term plans
- Cloud Services blog (Clouds & Unicorns): more detail about some of our work
- Wikimedia Technology Blog (techblog.wikimedia.org): news and stories from the Wikimedia technical movement