Obsolete:Parser cache expansion

See Parser cache for current information.
This page contains historical information. It may be outdated or unreliable.
2011

Our parser cache is currently small and RAM-based, utilizing memcached. There may be significant performance improvements to be had with a disk-based cache, which would make far more aggressive caching feasible.
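
For orientation, the access pattern here is the usual cache-aside one: look up the rendered output by key, parse on a miss, and store the result with an expiry, subject to eviction when the pool runs out of space. A minimal sketch in Python using the pymemcache client (illustrative only; the key scheme and parser stand-in below are assumptions, and the real implementation is MediaWiki PHP):

    from pymemcache.client.base import Client

    mc = Client(("127.0.0.1", 11211))

    def parse_wikitext(page_id: int) -> str:
        # Placeholder for the expensive parse step.
        return f"<p>rendered HTML for page {page_id}</p>"

    def get_parser_output(page_id: int, ttl: int = 86400) -> str:
        key = f"pcache:{page_id}"       # illustrative key; the real scheme differs
        html = mc.get(key)
        if html is not None:
            return html.decode()        # cache hit: no parser CPU spent
        html = parse_wikitext(page_id)  # cache miss: pay the parsing cost
        mc.set(key, html, expire=ttl)   # may be evicted early if the pool fills up
        return html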

Statistics

Tim's rrdtool graphs of parser cache statistics show the hit ratio hovering around 30%:

http://noc.wikimedia.org/cgi-bin/pcache-hit-rate.py?period=1d

He believes almost all of the yellow area in that graph is due to inadequate cache size. Parser CPU spent on cacheable content profiles at about 58% of total CPU time. Not all of that is recoverable (some misses, such as the first parse after an edit, are unavoidable), but with a larger parser cache our overall CPU usage could be reduced by up to roughly 40%.
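
As a back-of-envelope check on that figure (the recoverable fraction below is an assumption, chosen so that the result lands near the roughly 40% quoted above):

    parser_cpu_share = 0.58  # measured: parser CPU on cacheable content, as a share of total CPU
    recoverable = 0.7        # assumption: fraction of that work a much larger cache could absorb
    print(f"potential overall CPU saving: ~{parser_cpu_share * recoverable:.0%}")  # ~41%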

The request rate is around 600 req/s at daily peak:

http://noc.wikimedia.org/cgi-bin/pcache-request-rate.py?period=1d

The entire memcached pool is 158GB. The parser cache shares that space with a lot of bulky objects such as the revision cache and preprocessor XML. Compare this to a single text squid server, for example sq71:

http://torrus.wikimedia.org/torrus/CDN?path=%2FSquids%2Fpmtpa%2Ftext%2Fsq71.wikimedia.org%2Fbackend%2FUsage%2FClient_requests

It has 250GB of cache space, and it's serving 450 req/s of objects roughly the same size as those in the parser cache. A single server like this could do a better job of serving the parser cache than our entire memcached cluster.

The additional latency incurred by moving this cache to disk is not an issue: for external storage we see a latency of 13ms, compared to our Article::view() average real time of 835ms (less than 2% of the total).

Hardware requirements

  • At least 1TB of storage space to start, with an easy expansion path.
  • It should be fault-tolerant, so that we can increase our utilisation of Apache CPU without worrying that a cache clear could lead to an overload.
  • Number of servers: ?

Technology to investigate

Status quo

  • memcached - possible to throw more hardware at it, but expansion is tricky, and it is entirely RAM-based, which makes it expensive for this use (see the sketch below)
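
As an illustration of why expansion is tricky (the mechanism below is an assumption for illustration, not something stated on this page): if clients pick a server by naive modulo hashing, growing the pool remaps most keys, which behaves much like a cache clear. A minimal Python sketch:

    import hashlib

    def server_for(key: str, pool_size: int) -> int:
        # Naive modulo hashing over an MD5 digest (illustrative only).
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % pool_size

    # Illustrative cache keys; the exact key format is an assumption.
    keys = [f"enwiki:pcache:{page_id}" for page_id in range(100000)]

    old_pool, new_pool = 10, 11  # add one server to a ten-server pool
    moved = sum(1 for k in keys if server_for(k, old_pool) != server_for(k, new_pool))
    print(f"{moved / len(keys):.0%} of keys now map to a different server")  # roughly 90%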

In the running

Considered but rejected

  • memcachedb - doesn't seem to have built-in cache eviction support and doesn't appear to be an active project
  • Ehcache