PoolCounter

PoolCounter is a network daemon which provides mutex-like functionality, with a limited wait queue length. If too many servers try to do the same thing at the same time, the wait queue overflows and some configurable action might be taken by subsequent clients, such as displaying an error message or using a stale cache entry.

It was created to avoid massive waste of CPU from many servers parsing the same article in parallel when the cache of a popular article is invalidated (the "Michael Jackson problem"), but has since been put to other uses as well, such as limiting thumbnail scaling requests.

MediaWiki uses PoolCounter via an abstract interface (see $wgPoolCounterConf) which allows alternative implementations.

Source

The implementation used by Wikimedia is in Extension:PoolCounter.

  • The server source is in the /daemon directory. It can be installed via APT as the poolcounter package. The Debian packaging that comes with the PoolCounter extension is in Debian-native form, so rebuilding the package is just svn export && dpkg-buildpackage.
  • The client source is in the /includes/poolcounter directory.

MediaWiki core also ships a Redis-based implementation, and Thumbor contains an experimental Python client for the daemon.

Architecture

The server is a single-threaded C program based on libevent. It does not use autoconf; it just has a makefile which is suitable for a normal Linux environment. It currently has no daemonization code, so it is backgrounded by start-stop-daemon -b.

In MediaWiki, the client must be a subclass of PoolCounter and the class holding the application-specific logic must be a subclass of PoolCounterWork. See mw:Manual:$wgPoolCounterConf#Usage for details.
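
In practice, most callers use PoolCounterWorkViaCallback, a PoolCounterWork subclass which takes callbacks instead of requiring a dedicated subclass. The following is a rough sketch only; the lock key and helper functions are illustrative assumptions, not production code:

$work = new PoolCounterWorkViaCallback(
    'ArticleView',                // work type; must be a key in $wgPoolCounterConf
    'enwiki:pcache:idhash:12345', // lock key (illustrative)
    [
        'doWork' => function () {
            // Hypothetical: the expensive operation, run while holding the lock.
            return renderExpensively();
        },
        'doCachedWork' => function () {
            // Hypothetical: another process finished the same work (ACQ4ANY wakeup).
            return fetchFromCache();
        },
        'fallback' => function () {
            // Hypothetical: the queue was full or the lock timed out.
            return fetchStaleFromCache();
        },
    ]
);
$result = $work->execute();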

Protocol

The network protocol is line-based, with parameters separated by spaces (spaces in parameters are percent-encoded). The client opens a connection, sends a lock acquire command, does the work, sends a lock release command, then closes the connection. The following commands are defined:

ACQ4ANY <key> <active worker limit> <total worker limit> <timeout>
This is used to acquire a lock when the client is capable of using the cache entry generated by another process. If the active pool worker limit is exceeded, the server will give a delayed response to this command. When a client completes its work, all processes which are waiting with ACQ4ANY will immediately be woken so that they can read the new cache entry.
ACQ4ME <key> <active worker limit> <total worker limit> <timeout>
This is used to acquire a lock when cache sharing is not possible or not applicable, for example when an article rendering request involves a non-default stub threshold. When a lock of this kind is released, only one waiting process will be woken, so as to keep the worker population the same.
RELEASE <key>
Releases a lock.
STATS [FULL|UPTIME]
Shows statistics.

The possible responses for ACQ4ANY/ACQ4ME:

LOCKED
The lock was successfully acquired. The client is expected to do the work, then send RELEASE.
DONE
Sent to wake up a waiting client once the work it was waiting for has been completed.
QUEUE_FULL
There are more workers than <total worker limit>.
TIMEOUT
There are more workers than <active worker limit>, and no slot was freed up after waiting for <timeout> seconds.
LOCK_HELD
The client tried to acquire a lock while already holding one.

For RELEASE:

NOT_LOCKED
The client asked to release a lock that did not exist.
RELEASED
The lock was successfully released.

For any command:

ERROR <message>
Returned when a command is malformed or otherwise invalid.
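
To make the exchange concrete, here is a minimal PHP sketch of a raw protocol session, assuming a daemon listening on localhost port 7531; the key and pool limits are made up for illustration:

<?php
$conn = fsockopen( 'localhost', 7531, $errno, $errstr, 2 );
if ( !$conn ) {
    die( "connect failed: $errstr\n" );
}
// ACQ4ME <key> <active worker limit> <total worker limit> <timeout>
fwrite( $conn, "ACQ4ME enwiki:pcache:12345 4 20 10\n" );
$response = trim( fgets( $conn ) );
if ( $response === 'LOCKED' ) {
    // ... do the expensive work while holding the lock ...
    fwrite( $conn, "RELEASE enwiki:pcache:12345\n" );
    echo trim( fgets( $conn ) ), "\n"; // expect RELEASED
} else {
    // QUEUE_FULL, TIMEOUT or DONE: fall back to a stale cache entry or an error page.
    echo "no lock: $response\n";
}
fclose( $conn );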

Configuration

The server does not require configuration. Configuration of pool sizes, wait timeouts, etc. is done dynamically by the client. Installation of the poolcounter package is done via puppet, specifically by role::poolcounter.

The client settings we use are in Wikimedia's operations/mediawiki-config repository (in wmf-config/PoolCounterSettings.php):

$wgPoolCountClientConf

servers 
An array of server IP addresses. Adding multiple servers causes locks to be distributed on the client side using a consistent hashing algorithm.
timeout 
The connect timeout in seconds.
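
For illustration, a minimal sketch (the addresses and timeout are made-up values, not Wikimedia's):

$wgPoolCountClientConf = [
    // Locks are distributed over these servers by consistent hashing.
    'servers' => [ '10.64.0.1', '10.64.0.2' ],
    // Connect timeout in seconds.
    'timeout' => 0.5,
];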

$wgPoolCounterConf

The key in this configuration array identifies the type of work, and is passed as the first parameter to PoolCounterWorkViaCallback (the MediaWiki class which does the work). Currently only parsing is defined; the parsing job has the ID "ArticleView". The following parameters must be given for each type (a sketch of a full configuration follows this list):

class 
must be PoolCounter_Client
timeout 
The amount of time in seconds that a process should wait for a lock before it gives up and takes some other action. In the current implementation, the other action is to return stale HTML to the user, if it is available. If there is no stale cache entry, an error will be shown.
workers 
The maximum number of processes that may simultaneously hold the lock. Setting this to a value greater than 1 helps to prevent malfunctioning servers from degrading service time, at the expense of wasted CPU.
maxqueue 
The maximum number of processes that may wait for the lock. If this is exceeded, the effect is the same as an instant timeout. Setting this to a sufficiently low value prevents a lock which is held for a very long period of time from jeopardising the stability of the cluster as a whole.
slots 
Maximum number of workers working on this task type, regardless of key.
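
Putting these together, a configuration entry might look like the following sketch (the numbers are illustrative, not Wikimedia's production values):

$wgPoolCounterConf = [
    'ArticleView' => [
        'class' => 'PoolCounter_Client',
        'timeout' => 15,   // seconds to wait for the lock before giving up
        'workers' => 2,    // max processes that may hold the lock simultaneously
        'maxqueue' => 100, // max processes that may wait; beyond this, instant timeout
    ],
];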

Testing

$ echo 'STATS FULL' | nc -w1 tarin 7531 
uptime: 633 days, 15209h 42m 26s
total processing time: 85809 days 2059430h 0m 24.000000s
average processing time: 0.957994s
gained time: 1867 days 44820h 50m 24.000000s
waiting time: 390 days 9365h 18m 24.000000s
waiting time for me: 389 days 9343h 3m 28.000000s
waiting time for anyone: 22h 14m 53.898438s
waiting time for good: 520 days 12503h 48m 24.000000s
wasted timeout time: 473 days 11375h 2m 44.000000s
total_acquired: 7739031655
total_releases: 7736374042
hashtable_entries: 119
processing_workers: 119
waiting_workers: 216
connect_errors: 0
failed_sends: 1
full_queues: 10294544
lock_mismatch: 227
release_mismatch: 0
processed_count: 7739031536

The hour counts in this output may be wrong due to a bug which was fixed in version control but never deployed, since the package was not rebuilt.