Wikidata query service

From Wikitech
Wikidata Query Service components

Wikidata Query Service is the Wikimedia implementation of a SPARQL server, based on the Blazegraph engine, that serves queries for Wikidata and other data sets. See the User Manual for a more detailed description.

See also

Development environment

You will need Java and Maven; Java can be installed from:


The source code is in the Gerrit project wikidata/query/rdf. To start working on the Wikidata Query Service codebase, clone this repository:

git clone 

or the GitHub mirror:

git clone

or if you want to push changes and have a Gerrit account:

git clone ssh:// 

After cloning, update the submodules:

$ cd wikidata-query-rdf
[../wikidata_query_rdf]$ git submodule update --init
Submodule 'gui' registered for path 'gui'


Then you can build the distribution package by running:

cd wikidata-query-rdf
./mvnw package

and the package will be in the dist/target directory. Or, to run the Blazegraph service from the development environment (e.g. for testing), use:

bash war/

Add the "-d" option to run it in debug mode. If the build fails because your Maven version differs from the one the build enforces, you can skip the enforcer check:

 mvn package -Denforcer.skip=true

To run the Updater, use:

 bash tools/

The build relies on Blazegraph packages stored in Archiva; the source is in the wikidata/query/blazegraph Gerrit repository. See the instructions on MediaWiki for the case where dependencies need to be rebuilt.

See also documentation in the source for more instructions.

Build Blazegraph

If changes are needed to the Blazegraph source, they should be checked into the wikidata/query/blazegraph repo. After that, a new Blazegraph sub-version should be built and WDQS switched to using it. The procedure to follow:

  1. Commit fixes (watch for extra whitespace changes!)
  2. Update README.wmf with a description of the changes made against upstream
  3. The Blazegraph source in the master branch will be on a snapshot version, e.g. 2.1.5-wmf.4-SNAPSHOT; set it to the non-snapshot version: mvn versions:set -DnewVersion=2.1.5-wmf.4
  4. Make local build: mvn clean; bash scripts/; mvn -f bigdata-war/pom.xml install -DskipTests=true
  5. Switch the Blazegraph version in the main pom.xml of the WDQS repo to 2.1.5-wmf.4 (do not push it yet!). Build and verify everything works as intended.
  6. Commit the version change in Blazegraph, push it to the main repo. Tag it with the same version and push the tag too.
  7. Deploy to Archiva: mvn -f pom.xml -P deploy-archiva deploy -P Development; mvn -f bigdata-war/pom.xml -P deploy-archiva deploy -P Development
  8. Commit the version change in WDQS, and push to gerrit. Ensure the tests pass (this would also ensure Blazegraph deployment to Archiva worked properly).
  9. After merging the WDQS change, follow the procedure below to deploy new WDQS version.
  10. Bump Blazegraph master version back to snapshot - mvn versions:set -DnewVersion=2.1.5-wmf.5-SNAPSHOT - and commit/push it.
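Assuming the 2.1.5-wmf.N versioning scheme used above, the Maven steps of this procedure can be sketched as a dry-run script. It only prints the commands (swap echo for real execution), and the version numbers are illustrative:

```shell
#!/bin/sh
# Dry-run sketch of the Maven steps in the Blazegraph release procedure above.
# Version numbers are illustrative; adjust to the actual next sub-version.
RELEASE=2.1.5-wmf.4
NEXT_SNAPSHOT=2.1.5-wmf.5-SNAPSHOT

run() { echo "$@"; }  # replace 'echo' with real execution when running for real

run mvn versions:set -DnewVersion="$RELEASE"                     # step 3
run mvn -f bigdata-war/pom.xml install -DskipTests=true          # step 4, local build
run mvn -f pom.xml -P deploy-archiva deploy -P Development       # step 7, deploy
run mvn -f bigdata-war/pom.xml -P deploy-archiva deploy -P Development
run mvn versions:set -DnewVersion="$NEXT_SNAPSHOT"               # step 10, back to snapshot
```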



We're currently running on the following servers:

  • public cluster, eqiad: wdqs1003, wdqs1004, wdqs1005
  • public cluster, codfw: wdqs2001, wdqs2002, wdqs2003
  • internal cluster, eqiad: wdqs1006, wdqs1007, wdqs1008
  • internal cluster, codfw: wdqs2004, wdqs2005, wdqs2006

These clusters are in active/active mode (traffic is sent to both), but due to how we route traffic with GeoDNS, the primary cluster (usually eqiad) sees most of the traffic.

Server specs are similar to the following:

  • CPU: dual Intel(R) Xeon(R) CPU E5-2620 v3
  • Disk: 1600 GB raw RAIDed SSD space
  • RAM: 128GB


Icinga group

Grafana dashboard:

Grafana frontend dashboard:

WDQS dashboard:



The source code is in the Gerrit project wikidata/query/rdf; GitHub mirror:

The GUI source is in the wikidata/query/gui project, which is a submodule of the main project. The deployment version of the GUI is in the production branch, which is cherry-picked from the master branch when necessary. The production branch should not contain test & build service files (which currently means some cherry-picks have to be merged manually).

Labs Deployment

Note that deployment is currently via git-fat (see below), which may require some manual steps after checkout. This can be done as follows:

  1. Check out wikidata/query/deploy repository and update gui submodule to current production branch (git submodule update).
  2. Run git-fat pull to instantiate the binaries if necessary.
  3. rsync the files to deploy directory (/srv/wdqs/blazegraph)

Use the role role::wdqs::labs for installing WDQS. You may also want to enable role::labs::lvm::srv to provide adequate disk space in /srv.

Command sequence for manual install:

git clone
cd deploy
git fat init
git fat pull
git submodule init
git submodule update
sudo rsync -av --exclude .git\* --exclude scap --delete . /srv/wdqs/blazegraph

Production Deployment

Production deployment is done via git deployment repository wikidata/query/deploy. The procedure is as follows:

  1. mvn package the source repository.
  2. mvn deploy -Pdeploy-archiva in the source repository - this deploys the artifacts to archiva. Note that for this you will need repositories wikimedia.releases and wikimedia.snapshots configured in ~/.m2/settings.xml with archiva username/password.
  3. Copy the new files (also found in dist/target/service-*) into the deploy repo above and commit them. Note that since git-fat uses Archiva as primary storage, there can be a delay between the files being deployed to Archiva and their appearing on rsync, ready for git-fat deployment.
  4. Use scap deploy to deploy the new build.
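The Archiva credentials mentioned in step 2 go in ~/.m2/settings.xml; a minimal sketch is below. The server ids must match the repository ids used by the build (wikimedia.releases and wikimedia.snapshots, per the step above); the username and password values are placeholders.

```xml
<settings>
  <servers>
    <server>
      <id>wikimedia.releases</id>
      <username>YOUR_ARCHIVA_USER</username>
      <password>YOUR_ARCHIVA_PASSWORD</password>
    </server>
    <server>
      <id>wikimedia.snapshots</id>
      <username>YOUR_ARCHIVA_USER</username>
      <password>YOUR_ARCHIVA_PASSWORD</password>
    </server>
  </servers>
</settings>
```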

The puppet role that needs to be enabled for the service is role::wdqs.

It is recommended to test deployment checkout on beta (see above) before deploying it in production.

GUI deployment

GUI deployment files are in the repository wikidata/query/deploy-gui, branch production. It is a submodule of wikidata/query/deploy, linked as the gui subdirectory.

To build the deployment GUI version, use grunt deploy in the gui subdir. This generates a patch for the deploy repo that needs to be merged in Gerrit (currently manually). Then update the gui submodule in wikidata/query/deploy to the latest production head and commit/push the change. Deploy as described above.

Data reload procedure

  1. Go to icinga and schedule downtime:
  2. Depool: HOME=/root sudo depool
  3. Remove data loaded flag: rm /srv/wdqs/data_loaded
  4. Stop the updater: sudo service wdqs-updater stop
  5. Turn on maintenance: touch /var/lib/nginx/wdqs/maintenance
  6. Stop Blazegraph: sudo service wdqs-blazegraph stop
  7. Prepare data for loading (can be done in advance at any time)
  8. Remove old db: rm /srv/wdqs/wikidata.jnl
  9. Start blazegraph: sudo service wdqs-blazegraph start, check that /srv/wdqs/wikidata.jnl is created.
  10. Check logs: sudo journalctl -u wdqs-blazegraph -f
  11. Load data: bash -n wdq -d /srv/wdqs/dump/data
  12. Restore data loaded flag: touch /srv/wdqs/data_loaded
  13. Start updater: sudo service wdqs-updater start
  14. Check logs: sudo journalctl -u wdqs-updater -f
  15. Reload categories from the weekly dump: /usr/local/bin/ or, if it needs to be done manually: bash categories; bash categories
  16. Reload daily diffs:
    1. For the first day after the weekly dump's day: {TS} fromDump{WEEKLYTS}-, where TS is today's date in YYYYMMDD format and WEEKLYTS is the date of the weekly dump (one day earlier than TS). E.g.: 20181007 fromDump20181006-. Note the dash at the end of the prefix.
    2. For each following day: {TS} where TS is the day of the diff. E.g. 20181008.
    3. If possible, it is recommended to reload the data close to the date of the weekly dump, to minimize the number of dailies that need to be loaded. Alternatively, one can perform the weekly reload as soon as the new weekly dump is ready.
  17. Wait until the updater catches up
  18. Turn off maintenance: rm /var/lib/nginx/wdqs/maintenance
  19. Repool: HOME=/root sudo pool
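The TS/WEEKLYTS date arithmetic in step 16 can be derived with GNU date rather than by hand; a small sketch using the example dates from the text (assumes GNU date for the -d option):

```shell
#!/bin/sh
# Compute WEEKLYTS (one day before TS) for the first daily diff after a
# weekly dump. Requires GNU date; TS is in YYYYMMDD format.
TS=20181007
WEEKLYTS=$(date -d "$TS -1 day" +%Y%m%d)
echo "daily: $TS  prefix: fromDump${WEEKLYTS}-"
```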

Data transfer procedure

Transferring data between nodes is typically faster than recovering from a dump. Port 9876 is open between the wdqs nodes of the same cluster for this purpose. The procedure is as follows.

  1. depool source and destination nodes: sudo HOME=/root depool
  2. shutdown wdqs-blazegraph and wdqs-updater on both source and destination hosts
  3. on destination node: nc -l -p 9876 | pigz -c -d | tee >( sha256sum > /dev/stderr ) | pv -b -r > /srv/wdqs/wikidata.jnl
  4. on source node: cat /srv/wdqs/wikidata.jnl | tee >( sha256sum > /dev/stderr ) | pigz -c | nc -w 3 <destination_fqdn> 9876
  5. verify the transfer with sha256sum
  6. copy /srv/wdqs/ to the destination node
  7. on destination node: touch /srv/wdqs/data_loaded
  8. on destination node: sudo chown blazegraph: /srv/wdqs/wikidata.jnl
  9. restart wdqs-blazegraph and wdqs-updater on both source and destination
  10. pool source and destination nodes: sudo HOME=/root pool
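The checksum verification in steps 3-5 can be exercised locally; the sketch below uses gzip in place of pigz and a plain pipe in place of the nc connection, but the verification logic (the sha256 of the source journal must match that of the reconstructed destination file) is the same:

```shell
#!/bin/sh
# Local sketch of the checksum-verified transfer: gzip stands in for pigz,
# a pipe stands in for the nc link between source and destination nodes.
set -e
src=$(mktemp); dst=$(mktemp)
printf 'some journal data\n' > "$src"

# "source node" compresses, "destination node" decompresses, as nc would carry it
gzip -c "$src" | gzip -cd > "$dst"

# step 5: the transfer is good when both checksums match
sum_src=$(sha256sum "$src" | cut -d' ' -f1)
sum_dst=$(sha256sum "$dst" | cut -d' ' -f1)
[ "$sum_src" = "$sum_dst" ] && echo "checksums match"
```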

For copying the categories instance data, use the same procedure with the following changes:

  • The file name is categories.jnl
  • Instance that needs to be down/restarted is wdqs-categories (no need to touch updater)


Scaling strategy



If you need more info, talk to User:Smalyshev, User:Gehel or anybody from mw:Discovery team.