There are a number of Redis clusters and instances in Wikimedia production.

  • redis_maps (maps* hosts) used by Maps.
  • redis_misc (rdb* hosts) used by multiple services, detailed below.
  • webperf (mwlog1001 host) used by Arc Lamp for collecting PHP profiling samples.

Outside production, we have MediaWiki-Vagrant and MediaWiki-Vagrant in Cloud VPS which are configured by default to use a Redis instance for local object caching and session store.

Cluster redis_maps

See Maps.

Cluster redis_misc

The redis::misc role provides our general-purpose master-replica cluster in the eqiad and codfw data centers. Each rdb* node runs 5 instances (ports 6378, 6379, 6380, 6381, 6382) because Redis is single-threaded, so multiple instances are needed to make use of more than one CPU core. A mapping of usages is below.

The servers are set up as 2 independent pairs. This is for HA purposes, but it is up to each application to use them that way; not all applications are able to do so.

Consumers:

  • Changeprop: Uses Redis for rate limiting (actively uses both instances).
  • changeprop-jobqueue: Uses Redis for job deduplication (actively uses both instances).
  • ORES: Uses Redis for caching and queueing (one active instance).
  • docker-registry: (one active instance).

Pair 1

Port   redis db   Usage
6378   0
6379   0          changeprop/cpjobqueue/api-gateway
6380   0
6381   0          unallocated
6382   0

Pair 2

Port   redis db   Usage
6378   0          ORES cache
6379   0          changeprop/cpjobqueue/api-gateway
6380   0          ORES queue
6381   0          unallocated
6382   0          docker-registry

Servers

Each master has a replica. Masters use odd numbers (e.g. rdb1005) and replicas the subsequent even number (e.g. rdb1006). Master and replica instances use the same ports; e.g. rdb0003:6379 would replicate to rdb0004:6379.
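
The odd/even naming convention above can be sketched as a small helper. This is a purely illustrative function, not part of any Wikimedia tooling:

```python
# Hypothetical helper illustrating the odd/even host-naming convention:
# masters are odd-numbered (rdb1005), replicas the next even number (rdb1006).
import re

def replica_for(master: str) -> str:
    """Given a master host name like 'rdb1005', return its replica, 'rdb1006'."""
    m = re.fullmatch(r"(rdb)(\d+)", master)
    if not m:
        raise ValueError(f"unexpected host name: {master}")
    prefix, digits = m.group(1), m.group(2)
    num = int(digits)
    if num % 2 == 0:
        raise ValueError(f"{master} is even-numbered, i.e. already a replica")
    # Preserve the zero-padding of the original number.
    return f"{prefix}{num + 1:0{len(digits)}d}"

print(replica_for("rdb1005"))  # rdb1006
```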

Services

Change propagation (or changeprop) is a service that runs on Kubernetes nodes. It listens to topics on Kafka for events and translates them into HTTP requests to various systems. It is also responsible for triggering cache evictions on services such as RESTBase. Changeprop talks to Redis via Nutcracker.

Related puppet code

  • hieradata/role/common/redis/misc/master.yaml
  • hieradata/role/common/redis/misc/slave.yaml
  • modules/role/manifests/redis/misc/master.pp
  • modules/role/manifests/redis/misc/slave.pp

Other Info

Using Redis

Connecting

redis-cli is installed on all servers where redis-server is installed. Running it will leave you at a Redis prompt where you can enter commands interactively.

Some useful commands

  • AUTH <somepass> authenticate
  • INFO status information, including:
    # Replication
    role:slave
    master_host:10.64.0.24
    master_port:6379
    master_link_status:up
    master_last_io_seconds_ago:0
    <snip>
    # Keyspace
    db0:keys=9351936,expires=9291239,avg_ttl=0
  • KEYS <pattern-here> list of all keys matching the given pattern. Use this sparingly! On a large keyspace this query can take seconds to complete, and it blocks the server while it runs.
  • QUIT closes the connection.
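
Because KEYS blocks the server while it walks the entire keyspace, SCAN is the safer, incremental alternative on a production instance: the client repeats SCAN with the cursor returned by the previous call until the server returns cursor 0. A minimal sketch of that cursor loop, shown here against a stub client rather than a live server (the stub is hypothetical; a real client such as redis-py exposes the same scan signature):

```python
def scan_all(client, match="*", count=100):
    """Iterate keys with SCAN instead of KEYS: keep calling SCAN with the
    returned cursor; the iteration is done when the cursor comes back as 0."""
    cursor = 0
    while True:
        cursor, keys = client.scan(cursor, match=match, count=count)
        yield from keys
        if cursor == 0:
            break

class StubClient:
    """Stand-in for a real Redis client, so the loop runs without a server."""
    def __init__(self, keys):
        self._keys = keys
    def scan(self, cursor, match="*", count=100):
        batch = self._keys[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(self._keys):
            next_cursor = 0  # signals the end of the iteration
        return next_cursor, batch

client = StubClient([f"key:{i}" for i in range(250)])
print(sum(1 for _ in scan_all(client)))  # 250
```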

Using Redis from other Services

Some services require, or can optionally use, Redis; the redis_misc cluster is the appropriate place for that.

As noted above, each pair of Redis servers in each data center has five separate instances on different ports, most of which are not in use. The first step to using a Redis server for a production service is to choose an unused instance/port pair, which can be located by examining Hiera data for what is currently in use. A relatively straightforward way to do this is to run git grep '\Wrdb[12]' within a Puppet tree, which shows every use of an rdb address. A similar procedure can be used to find a port that is unallocated.
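
The same search can be scripted. The sketch below tallies which rdb host:port pairs appear in a tree of YAML files; the file layout, regex, and function name are illustrative, not canonical Wikimedia tooling:

```python
# Rough sketch of finding allocated rdb host:port pairs in a Puppet
# checkout. The regex and paths are assumptions for illustration only.
import re
import tempfile
from pathlib import Path

RDB_RE = re.compile(r"\b(rdb[12]\d{3})(?::(\d{4}))?")

def ports_in_use(puppet_root):
    """Return the set of (host, port) pairs referenced in YAML files."""
    used = set()
    for path in Path(puppet_root).rglob("*.yaml"):
        for host, port in RDB_RE.findall(path.read_text(errors="ignore")):
            if port:
                used.add((host, int(port)))
    return used

# Demonstrate against a throwaway directory standing in for a checkout.
tmp = tempfile.mkdtemp()
Path(tmp, "example.yaml").write_text("hosts: ['rdb1005:6379', 'rdb1011:6382']\n")
print(sorted(ports_in_use(tmp)))  # [('rdb1005', 6379), ('rdb1011', 6382)]
```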

Once a host/port combination has been chosen for each data center, it is simply a matter of referring to them from the Puppet code that will use them.

Using Redis from a service requires a password. The password may be obtained from the Hiera key ::passwords::redis::main_password in hieradata/role/common/redis/misc/master.yaml in the private repository. The current convention is to introduce a new private Hiera key to store the password for your service, though this is inefficient and subject to change.
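
On the wire, a client authenticates by sending the AUTH command in the RESP format: an array of bulk strings. A stdlib-only sketch of that encoding (the password shown is a placeholder, not a real production value; any real service would use a proper client library instead of hand-rolling frames):

```python
def resp_command(*parts: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings, the frame
    format a client sends for commands such as AUTH <password>."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# The frame a client sends when authenticating (placeholder password):
print(resp_command("AUTH", "example-password"))
# b'*2\r\n$4\r\nAUTH\r\n$16\r\nexample-password\r\n'
```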

Other references

Commands are straightforward; they depend on the data type (hash, set, list, etc.). Here's a quick reference.

Configuration is likewise fairly straightforward, with the possible exception of the snapshotting, AOF, and memory settings; here's the sample config file.

Former clusters

Cluster redis_sessions

The "redis_sessions" cluster was co-located on the main mc* hosts that also serve memcached, and was used by MediaWiki.

The cluster had a capacity of 8GB in total (16 shards with 520MB each, downsized to 8 shards as of April 2021, T280582).

The cluster was stable in its utilization at a fairly constant 3GB of live data at any given time (as of July 2021, T212129#6283230).

Past consumers in MediaWiki:

  • MainStash backend, a generic interface used by various features and extensions to store secondary data that should persist for multiple weeks without LRU eviction. The MainStash backend was moved out to the x2 database as part of T212129.
  • Prior to 2020, MediaWiki core session data was stored in Redis via $wgSessionCacheType; it has since moved to Cassandra (T206016).
  • Prior to Oct 2021, GettingStarted extension, which stored lists of articles for new editors to edit.
  • Prior to Jul 2022, CentralAuth authentication tokens (short-lived). Moved to memcached via mcrouter-primary-dc (T278392).
  • Prior to Jul 2022, CentralAuth session data. Moved to Cassandra (T267270).
  • Prior to Aug 2022, Rdbms-ChronologyProtector offsets (short-lived). Moved to dc-local memcached (T314453).

The decommission task is T267581: Phasing out "redis_sessions" cluster.

See also

  • memcached
  • nutcracker (AKA twemproxy), the proxy used by all application servers to contact memcached (it stopped proxying Redis in 2015, but does so again as of 2016)
  • Redis explained