Wikidata query service

Wikidata Query Service is the Wikimedia implementation of a SPARQL server, based on the Blazegraph engine, that serves queries for Wikidata and other data sets. See the User Manual for a more detailed description.

See also: https://www.mediawiki.org/wiki/Wikidata_query_service/Implementation
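
For illustration, queries can be sent to the public SPARQL endpoint over plain HTTP. A minimal sketch using curl (the endpoint URL and the example query, which lists a few items that are instances of house cat, are only illustrative):

  # Ask for JSON results; curl URL-encodes the query for us
  curl -G 'https://query.wikidata.org/sparql' \
       --header 'Accept: application/sparql-results+json' \
       --data-urlencode 'query=SELECT ?item ?itemLabel WHERE {
         ?item wdt:P31 wd:Q146 .
         SERVICE wikibase:label { bd:serviceParam wikibase:language "en" }
       } LIMIT 5'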

Hardware

We're currently running three servers in eqiad (wdqs1003, wdqs1004 and wdqs1005) and three servers in codfw (wdqs2001, wdqs2002 and wdqs2003). The two clusters run in active/active mode (traffic is sent to both), but because of how we route traffic with GeoDNS, the eqiad cluster sees most of the traffic.

Server specs are similar to the following:

  • CPU: dual Intel(R) Xeon(R) CPU E5-2620 v3
  • Disk: 800GB of raw RAIDed SSD space
  • RAM: 128GB

Monitoring

Icinga group

Grafana dashboard: https://grafana.wikimedia.org/dashboard/db/wikidata-query-service

Grafana frontend dashboard: https://grafana.wikimedia.org/dashboard/db/wikidata-query-service-frontend

WDQS dashboard: http://discovery.wmflabs.org/wdqs/

Deployment

Sources

The source code is in the gerrit project wikidata/query/rdf; GitHub mirror: https://github.com/wikimedia/wikidata-query-rdf

The GUI source is in the wikidata/query/gui project, which is a submodule of the main project. The deployment version of the GUI lives in the production branch, which is cherry-picked from the master branch when necessary. The production branch should not contain test & build service files (which currently means some cherry-picks have to be merged manually).
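
For local work on the sources, a minimal checkout sketch (the clone URL is the usual gerrit anonymous URL and is only assumed here):

  # Clone the main repository and pull in the GUI submodule
  git clone https://gerrit.wikimedia.org/r/wikidata/query/rdf
  cd rdf
  git submodule update --init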

Labs Deployment (beta)

Note that deployment currently uses git-fat (see below), which may require some manual steps after checkout. The steps are as follows (a shell sketch follows the list):

  1. Check out the wikidata/query/deploy repository and update the gui submodule to the current production branch (git submodule update).
  2. Run git-fat pull to instantiate the binaries if necessary.
  3. rsync the files to the deploy directory (/srv/wdqs/blazegraph).
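
A sketch of those three steps as shell commands (the clone URL and rsync options are illustrative, and git-fat is assumed to be installed):

  # 1. Check out the deploy repository with the gui submodule on its production branch
  git clone https://gerrit.wikimedia.org/r/wikidata/query/deploy
  cd deploy
  git submodule update --init
  # 2. Replace the git-fat placeholders with the real binaries
  git fat init
  git fat pull
  # 3. Sync everything into the service directory
  rsync -av --exclude=.git ./ /srv/wdqs/blazegraph/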

See also Wikidata Query service beta.

Production Deployment

Production deployment is done via the git deployment repository wikidata/query/deploy. The procedure is as follows (a shell sketch follows the list):

  1. Run mvn package in the source repository.
  2. Run mvn deploy -Pdeploy-archiva in the source repository - this deploys the artifacts to archiva. Note that for this you will need the repositories wikimedia.releases and wikimedia.snapshots configured in ~/.m2/settings.xml with the archiva username/password.
  3. Install the new files (also found in dist/target/service-*-dist.zip) into the deploy repo above and commit them. Note that since git-fat uses archiva as primary storage, there can be a delay between the files being deployed to archiva and their appearing on rsync, ready for git-fat deployment.
  4. Use scap deploy to deploy the new build.
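
A sketch of the whole sequence (checkout paths and the commit message are illustrative):

  # In the source repository (wikidata/query/rdf)
  mvn package
  mvn deploy -Pdeploy-archiva   # needs wikimedia.releases / wikimedia.snapshots in ~/.m2/settings.xml
  # Copy the new build into a checkout of the deploy repository and commit it
  unzip -o dist/target/service-*-dist.zip -d ~/src/wikidata/query/deploy   # illustrative checkout path
  cd ~/src/wikidata/query/deploy
  git add -A
  git commit -m 'Update WDQS build'
  # Once the files are available for git-fat (see the note on the archiva/rsync delay above), deploy
  scap deploy 'Update WDQS build'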

The puppet role that needs to be enabled for the service is role::wdqs.

It is recommended to test the deployment checkout on beta (see above) before deploying it in production.

GUI deployment

GUI deployment files are in the repository wikidata/query/deploy-gui, branch production. It is a submodule of wikidata/query/deploy, linked as the gui subdirectory.

To build the deployment GUI version, run grunt deploy in the gui subdirectory. This generates a patch for the deploy repo that needs to be merged in gerrit (currently manually). Then update the gui submodule in wikidata/query/deploy to the latest production head and commit/push the change. Deploy as described above.
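
A sketch of that sequence (paths and the commit message are illustrative; it assumes the gui submodule is configured to track the production branch):

  # In the gui subdirectory of the source checkout: build and generate the deploy patch
  cd gui
  grunt deploy
  # After the generated patch is merged in gerrit, update the submodule in the deploy repo
  cd ~/src/wikidata/query/deploy           # illustrative checkout path
  git submodule update --remote gui        # pull the latest production head of the gui submodule
  git add gui
  git commit -m 'Update GUI to latest production'
  # Push for review, then deploy as described in the production deployment section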

Data reload procedure

  1. Go to Icinga and schedule downtime for the host being reloaded, e.g. https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?host=wdqs2002
  2. Depool: HOME=/root sudo depool
  3. Remove data loaded flag: rm /srv/wdqs/data_loaded
  4. Stop the updater: sudo service wdqs-updater stop
  5. Turn on maintenance: touch /var/lib/nginx/wdqs/maintenance
  6. Stop Blazegraph: sudo service wdqs-blazegraph stop
  7. Prepare data for loading (can be done in advance at any time; see the sketch after this list)
  8. Remove old db: rm /srv/wdqs/wikidata.jnl
  9. Start Blazegraph: sudo service wdqs-blazegraph start and check that /srv/wdqs/wikidata.jnl is created.
  10. Check logs: sudo journalctl -u wdqs-blazegraph -f
  11. Load data: bash loadData.sh -n wdq -d /srv/wdqs/dump/data
  12. Restore data loaded flag: touch /srv/wdqs/data_loaded
  13. Start updater: sudo service wdqs-updater start
  14. Check logs: sudo journalctl -u wdqs-updater -f
  15. Reload categories: /usr/local/bin/reloadCategories.sh or, if it needs to be done manually: bash createNamespace.sh categories; bash forAllCategoryWikis.sh loadCategoryDump.sh categories
  16. Wait until the updater catches up
  17. Turn off maintenance: rm /var/lib/nginx/wdqs/maintenance
  18. Repool: HOME=/root sudo pool
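
For step 7, the data is usually prepared by munging a Wikidata RDF dump with the munge.sh script shipped in the dist; a rough sketch (dump URL, paths and flags are assumptions, check them against the current dist before use):

  # Fetch the latest full TTL dump (illustrative location)
  cd /srv/wdqs/dump
  wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.ttl.gz
  # Split and preprocess the dump into the chunks that loadData.sh expects
  bash munge.sh -f latest-all.ttl.gz -d /srv/wdqs/dump/data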

Data transfer procedure

Transferring data between nodes is typically faster than recovering from a dump. Port 9876 is open between the wdqs nodes of the same cluster for that purpose. The procedure is as follows:

  1. Depool the source and destination nodes: sudo HOME=/root depool
  2. Shut down wdqs-blazegraph and wdqs-updater on both source and destination hosts
  3. On the destination node: nc -l -p 9876 | pigz -c -d | tee >( sha256sum > /dev/stderr ) | pv -b -r > /srv/wdqs/wikidata.jnl
  4. On the source node: cat /srv/wdqs/wikidata.jnl | tee >( sha256sum > /dev/stderr ) | pigz -c | nc -w 3 <destination_fqdn> 9876
  5. Verify the transfer with sha256sum (see the note after this list)
  6. On the destination node: touch /srv/wdqs/data_loaded
  7. On the destination node: sudo chown blazegraph: /srv/wdqs/wikidata.jnl
  8. Restart wdqs-blazegraph and wdqs-updater on both source and destination
  9. Repool the source and destination nodes: sudo HOME=/root pool
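
For step 5, one way to make the comparison easy is to write each sha256sum to a file instead of stderr and then check that both hosts recorded the same hash (file paths are illustrative; the pipelines are otherwise identical to steps 3 and 4):

  # Destination node (step 3, with the checksum written to a file)
  nc -l -p 9876 | pigz -c -d | tee >( sha256sum > /tmp/wikidata.jnl.sha256 ) | pv -b -r > /srv/wdqs/wikidata.jnl
  # Source node (step 4, with the checksum written to a file)
  cat /srv/wdqs/wikidata.jnl | tee >( sha256sum > /tmp/wikidata.jnl.sha256 ) | pigz -c | nc -w 3 <destination_fqdn> 9876
  # The transfer is good when the hash in /tmp/wikidata.jnl.sha256 is identical on both hosts
  cat /tmp/wikidata.jnl.sha256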

Contacts

If you need more info, talk to User:Smalyshev, User:Gehel or anybody from mw:Discovery team.

Usage