
This page describes the technical aspects of deploying Maps service on Wikimedia Foundation infrastructure.


Maps service component diagram
Maps service deployment diagram

The maps service consists of three components: Kartotherian, a Node.js service that serves map tiles; Tilerator, a non-public service that prepares vector tiles (data blobs) from the OSM database into Cassandra storage; and TileratorUI, an interface for managing Tilerator jobs. There are four servers in the maps group, maps-test200{1,2,3,4}.codfw.wmnet, each running Kartotherian (port 6533, NCPU instances), Tilerator (port 6534, NCPU/2 instances), and TileratorUI (port 6535, 1 instance). In addition, there are four Varnish servers per datacenter in the cache_maps group.
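A quick way to check from one of the maps hosts that all three services are listening (a bash-only probe using the ports listed above; no extra tools assumed):

```shell
# Probe each service port on localhost using bash's built-in /dev/tcp.
for port in 6533 6534 6535; do
  if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```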


Importing database

maps2001 is actually not the best server for this; we should swap it with maps2002, which has 12 cores and 96GB RAM.

  • Find the file with the latest available date, but do NOT use "latest", as that file might change at any moment.
  • curl -x webproxy.eqiad.wmnet:8080 -O <planet .osm.pbf URL>
  • curl -x webproxy.eqiad.wmnet:8080 -O <planet .osm.pbf.md5 URL>
  • md5sum -c planet-151214.osm.pbf.md5
  • PGPASSWORD="$(< ~/osmimporter_pass)" osm2pgsql --create --slim --flat-nodes nodes.bin -C 40000 --number-processes 8 --hstore planet-151214.osm.pbf -H maps-test2001 -U osmimporter -d gis
  • Additional steps to import shapes and create some indexes / functions / ... are documented in the Kartotherian sources.


  • Tables are created by osm2pgsql, no need for an initial DDL script.
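The checksum step above can be tried end to end with a dummy file; the file name is illustrative, and the real planet dump verifies the same way:

```shell
# Create a dummy dump and its .md5 file, then verify with md5sum -c,
# mirroring the verification of the downloaded planet file.
echo "fake planet data" > planet-151214.osm.pbf
md5sum planet-151214.osm.pbf > planet-151214.osm.pbf.md5
md5sum -c planet-151214.osm.pbf.md5   # prints: planet-151214.osm.pbf: OK
```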


Kartotherian serves map tiles by getting vector data from Cassandra, applying a style to it, and returning raster images. It is also capable of serving a "static image" (a map with a given width/height/scaling/zoom), and can serve vector tiles directly for on-the-client rendering (WebGL maps).

To see the tiles without the Varnish cache, connect to Kartotherian using an ssh tunnel, e.g. ssh -L 6533:localhost:6533 maps-test2001.codfw.wmnet, and browse to http://localhost:6533


Tilerator is a backend vector tile pre-generation service. It picks up jobs from a Redis job queue, generating vector tiles from a Postgres DB via SQL queries and storing them in Cassandra. Postgres databases are set up on each of the maps hosts, one master and three slaves. Technically, Tilerator is not even a generator but rather a "batch copying" service: it takes tiles from one configured source (e.g. a tile generator backed by SQL) and puts them into another source (e.g. the Cassandra tile store).


TileratorUI is used to inspect maps, including internal data sources, and to add jobs to the Tilerator job queue. TileratorUI is actually the same code as Tilerator, just started with a different configuration. Connect to TileratorUI using an ssh tunnel, e.g. ssh -L 6535:localhost:6535 maps-test2001.codfw.wmnet, and navigate to http://localhost:6535. There you can view any style (use set style to change it), or schedule a job by setting all relevant fields and Alt+Clicking the tile you want to schedule.


Quick cheat sheet

  • Style is specified in upper left corner
    • Set it to genview to view tiles generated on the fly. Caution: if you zoom out to low zoom levels, tiles can take more than 10 minutes to generate.
  • Alt+click (Option+click on Mac) on map to enqueue regeneration jobs.
    • This requires src and dst to be set. For the most basic operation, on-demand regeneration of tiles, set src to gen and dst to whatever Cassandra keyspace is used for tile storage (currently v3).
    • By default, only the tile clicked on will be regenerated.
    • Set fromZ and beforeZ to regenerate a bunch of layers under the clicked tile.
  • Click on source to view the currently active sources configuration.

See full Tilerator documentation for all commands & parameters.

Dynamic tile sources


To create a new Cassandra data source, POST something like this to /sources as a text body. The default table name is tiles. If the table or keyspace does not exist yet, you have to use the createIfMissing parameter.
  uri: cassandra://
    keyspace: v2
    table: tiles2a
    cp: [maps-test2001.codfw.wmnet, maps-test2002.codfw.wmnet, maps-test2003.codfw.wmnet, maps-test2004.codfw.wmnet]
    username: {var: cassandra-user}
    password: {var: cassandra-pswd}
#    repfactor: 4
#    durablewrite: 0
#    createIfMissing: true
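Assuming the service is reached through the TileratorUI ssh tunnel described above (the port and Content-Type here are assumptions, not confirmed), posting such a definition could look like:

```shell
# Save the source definition to a file, then POST it to /sources as a
# text body. localhost:6535 assumes the TileratorUI ssh tunnel is open.
cat > /tmp/new-source.yaml <<'EOF'
uri: cassandra://
  keyspace: v2
  table: tiles2a
EOF
curl -X POST -H 'Content-Type: text/plain' \
  --data-binary @/tmp/new-source.yaml \
  http://localhost:6535/sources \
  || echo "POST failed (is the ssh tunnel open?)"
```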

Dynamic Layer Generator

To generate just a few layers from the database, create a layer filter and a layer mixer:
  gentmp:
    uri: bridge://
    xml:
      npm: ["osm-bright-source", "data.xml"]
    xmlSetDataSource:
      if:
        dbname: gis
        host: ""
        type: postgis
      set:
        host: localhost
        user: {var: osmdb-user}
        password: {var: osmdb-pswd}
    xmlLayers: [admin, road]

  mixtmp:
    uri: layermixer://
    params:
      sources: [{ref: v2}, {ref: gentmp}]
Once set, POST a job to copy mixtmp into the storage v2, e.g.

src=mixtmp dst=v2 baseZoom=0 fromZoom=5 beforeZoom=6 parts=10

Generating Tiles

Generate all tiles for zooms 0..7, using generator gen, saving into v3 everything including the solid tiles, up to 4 jobs per zoom.

src=gen dst=v3 parts=4 baseZoom=0 fromZoom=0 beforeZoom=8 saveSolid=1

Generate tiles only if they already exist in the v2 source, and save them into v3, for zooms 8..15, 60 jobs per zoom.

src=gen dst=v3 parts=60 baseZoom=0 fromZoom=8 beforeZoom=16 sourceId=v2

Bulk Copying

The fastest way to copy a large number of tiles from one source to another is to use a large number of parts and specify saveSolid=true (skips solid tile detection). E.g. to copy all z16 tiles from v2 to v3, use src=v2 dst=v3 zoom=16 parts=60 saveSolid=true


  • Clear the Postgres data directory and init the database from backup (replace maps2001.codfw.wmnet with the actual postgres master):

rm -rf /srv/postgresql/9.4/main/* && sudo -u postgres pg_basebackup -X stream -D /srv/postgresql/9.4/main/ -h maps2001.codfw.wmnet -U replication -W

Puppetization and Automation


  • passwords and postgres replication configuration are set in the Ops private repo (root@palladium:~/private/hieradata/role/(codfw|eqiad)/maps/server.yaml)
  • other configuration in puppet/hieradata/role/(codfw|common|eqiad)/maps/*.yaml
  • cassandra::rack is defined in puppet/hieradata/hosts/maps*.yaml
  • the role::maps::master / role::maps::slave roles are associated to the maps nodes (site.pp)

Manual steps

  • On first start of a Cassandra node, it is necessary to clean the data directory: rm -rf /srv/cassandra/*. See the Cassandra documentation for more details.
  • To initialize the first Cassandra node, we need to add the local node to the list of seeds by manually editing /etc/cassandra/cassandra.yaml and restarting Cassandra:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # seeds is actually a comma-delimited list of addresses.
              # Ex: "<ip1>,<ip2>,<ip3>"
              # Omit own host name / IP in multi-node clusters.
              - seeds: "<local node IP>"  # add the local node here to initialize the first Cassandra node
  • change the cassandra super user password to match the one configured in private repo using cqlsh:
 cqlsh <maps1001.eqiad.wmnet> -u cassandra
 ALTER USER cassandra WITH PASSWORD '<password>';
  • Setup of user access / rights for cassandra
 cat /usr/local/bin/maps-grants.cql | cqlsh <maps1001.eqiad.wmnet> -u cassandra
  • Setup replication of Cassandra system_auth according to documentation.
  • Initial data load of OSM into postgresql is done by running /usr/local/bin/osm-initial-import on the postgresql master node. Cassandra should be shut down during the initial import to free memory for osm2pgsql.
osm-initial-import \
    -d <date_of_import> \
    -p <password_file> \
    -s  <state_file_url> \
    -x webproxy.eqiad.wmnet:8080
  • If the postgresql master already has data, the slave initialization will time out in puppet. It then needs to be run manually:
service postgresql@9.4-main stop

rm -rf /srv/postgresql/9.4/main
mkdir /srv/postgresql/9.4/main
chown postgres: /srv/postgresql/9.4/main/
chmod 700 /srv/postgresql/9.4/main/

sudo -u postgres pg_basebackup \
  --xlog-method=stream \
  --pgdata=/srv/postgresql/9.4/main/ \
  --host=<postgres_master> \
  --username=replication \
  --write-recovery-conf

service postgresql@9.4-main start

  • Initial creation of the Cassandra keyspace: to prevent accidental modification of the schema, the Tilerator source configuration does not allow schema creation by default. The sources file used by tilerator / kartotherian is configured in /etc/(kartotherian|tilerator|tileratorui)/config.yaml (look for the sources: key). This is a reference to a sources file in the kartotherian / tilerator source directory, for example /srv/deployment/tilerator/deploy/src/sources.prod2.yaml.

The easiest way to create a new keyspace is to run Tilerator with a custom sources file that instructs it to create the missing keyspace. For example, create a temporary file, e.g. /home/yurik/my-source-file, with the following configuration (replace v3 with the keyspace declared in the sources configuration file):

  uri: cassandra://
    keyspace: v3
    cp: {var: cassandra-servers}
    username: {var: cassandra-user}
    password: {var: cassandra-pswd}
    # These parameters are only used if keyspace needs to be created:
    repfactor: 4
    durablewrite: 0
    createIfMissing: true

And run this bash script:

# Use the TileratorUI configuration (including password variables) but
# override its sources file with the custom one.
node /srv/deployment/tilerator/deploy/src/scripts/tileshell.js \
  --config /etc/tileratorui/config.yaml \
  --source /home/yurik/my-source-file

Tileshell will not exit on its own, so Ctrl+C it after it reports "done".

  • On an existing server, record all existing tiles as a list of tile indexes (path and generatorId need to be adapted):
# List all tiles in the "v5" source (-j.generatorId) at zoom 14 (-j.zoom),
# writing them to a file as one-number indexes (0..4^zoom-1) rather than
# "zoom/x/y" triples (--dumprawidx). If the dumptiles file already
# exists, override it.
node /srv/deployment/tilerator/deploy/src/scripts/tileshell.js \
  --config /etc/tileratorui/config.yaml \
  -j.generatorId v5 \
  -j.zoom 14 \
  --dumptiles /home/yurik/all-tiles-14.txt \
  --dumprawidx
  • Instead of generating the entire zoom level, you may want to generate just the tiles in a list (all parameters might need to be adapted):
# Take tile indexes from a file (-j.filepath): unique, sorted, one per
# line, values 0..4^zoom-1. -j.fileZoomOverride declares that all
# indexes in the file belong to zoom 14; without it, the file must
# contain zoom/x/y triplets. Generate zoom levels 10 <= zoom < 16,
# copying tiles from the "gen" source to the "v3" source. If a tile
# already exists in "v3" but "gen" produces an empty tile, delete it
# in "v3".
node /srv/deployment/tilerator/deploy/src/scripts/tileshell.js \
  --config /etc/tileratorui/config.yaml \
  -j.filepath /home/yurik/all-tiles-14.txt \
  -j.fileZoomOverride 14 \
  -j.fromZoom 10 -j.beforeZoom 16 \
  -j.generatorId gen -j.storageId v3
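The one-number indexes used above can be related to zoom/x/y coordinates. A minimal sketch, assuming a simple row-major ordering (index = y * 2^zoom + x); the actual convention used by tileshell should be verified before relying on this:

```shell
# Convert zoom/x/y to a single tile index and back, assuming
# row-major ordering (index = y * 2^zoom + x).
zoom=14; x=4823; y=6160
index=$(( y * (1 << zoom) + x ))
echo "index: $index"                                               # index: 100930263
echo "x=$(( index % (1 << zoom) )) y=$(( index / (1 << zoom) ))"   # x=4823 y=6160
```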

Building Kartotherian and Tilerator

While Kartotherian uses the same build system as service-template-node, it also needs npm shrinkwrap to prevent a migration from Mapnik 3.5.13 to 3.5.14 caused by an updated CLIB dependency. After your system is configured for the regular service-template-node build, you need to do these annoying steps to actually build it (identical for both Kartotherian and Tilerator):

# Make sure the source is checked out to the latest master
deploy$ cd src
src$ git pull
src$ cd ..
# Update npm dependencies
deploy$ npm update
deploy$ npm shrinkwrap
deploy$ mv npm-shrinkwrap.json ..
deploy$ git clean -f -d
deploy$ git checkout master
deploy$ git submodule update
deploy$ mv ../npm-shrinkwrap.json .
# Compare npm-shrinkwrap.json to the previous version, revert Mapnik's update back to 3.5.13
deploy$ meld .
deploy$ git add npm-shrinkwrap.json
deploy$ git commit
deploy$ git push # or git review and +2 via gerrit
deploy$ cd ../kartotherian

# During the build Docker container will now use the latest npm-shrinkwrap
kartotherian$ ./server.js build --deploy-repo --force
kartotherian$ cd ../deploy
deploy$ git review