Caching overview

From Wikitech

Cache Clusters

We currently host the following Varnish cache clusters at all of our datacenters:

  • cache_text - Primary cluster for MediaWiki and various app/service (e.g. RESTBase, phabricator) traffic
  • cache_upload - Serves upload.wikimedia.org and maps.wikimedia.org exclusively (images, thumbnails, map tiles)

Old clusters that no longer exist:

  • cache_bits - Used to exist just for static content and ResourceLoader; now decommissioned (traffic went to cache_text)
  • cache_mobile - Was like cache_text but only for (m|zero)\. mobile hostnames; now decommissioned (traffic went to cache_text)
  • cache_parsoid - Legacy entrypoint for Parsoid and related *oid services; now decommissioned (traffic goes via cache_text to RESTBase)
  • cache_maps - Served maps.wikimedia.org exclusively, which is now serviced by cache_upload
  • cache_misc - Miscellaneous lower-traffic / support services (e.g. phabricator, metrics, etherpad, graphite); now moved to cache_text

Headers

X-Cache

X-Cache is a comma-separated list of cache hostnames, with information such as hit/miss status for each entry. The header is read right to left: the rightmost entry is the outermost cache (the one closest to the client), and entries further left are progressively deeper towards the applayer. The rightmost cache is the in-memory cache; all others are disk caches.

In case of a cache hit, the number of times the object has been returned is also specified. Once "hit" is encountered while reading right to left, everything to the left of it describes the cached object that got hit: those entries record whether the deeper caches missed, passed, or hit when that object was first pulled into the hitting cache. For example:

X-Cache: cp1066 hit/6, cp3043 hit/1, cp3040 hit/26603
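
The right-to-left reading described above can be sketched in a few lines of Python (illustrative only, not part of Wikimedia tooling):

```python
def parse_x_cache(header: str):
    """Parse an X-Cache header into (hostname, status, hit_count) tuples,
    ordered outermost cache first (i.e. reading the header right to left).
    hit_count is None for entries without a /N suffix (miss, pass, int)."""
    entries = []
    for entry in reversed(header.split(",")):
        host, status = entry.strip().split()
        count = None
        if "/" in status:                  # e.g. "hit/6"
            status, n = status.split("/")
            count = int(n)
        entries.append((host, status, count))
    return entries

# The example header above: the outermost (in-memory) cache is cp3040,
# which has served this object 26603 times.
print(parse_x_cache("cp1066 hit/6, cp3043 hit/1, cp3040 hit/26603"))
```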

An explanation of the possible information contained in X-Cache follows.

Not talking to other servers

  • hit: a cache hit in cache storage. There was no need to query a deeper cache server (or the applayer, if already at the last cache server)
  • int: locally-generated response from the cache, for example a 301 redirect. The cache did not use a cached object and did not need to contact another server

Talking to other servers

  • miss: the object might be cacheable, but we don't have it
  • pass: the object was uncacheable, talk to a deeper level

Some subtleties on pass: different caches (e.g. in-memory vs. on-disk) might disagree on whether an object is cacheable. A pass on the in-memory cache (for example, because the object is too big) could be a hit for an on-disk cache. Also, it is sometimes not clear that an object is uncacheable until the moment we fetch it. In that case, we cache the fact that the object is uncacheable for a short while. In Varnish terminology, this is a hit-for-pass.

If we don't know an object is uncacheable until after we fetch it, the request initially looks identical to a normal miss, which means coalescing: other requests for the same object wait for the first response. But that first fetch returns an uncacheable object, which cannot answer the requests that queued behind it. They therefore all get serialized, and we've destroyed the performance of hot (high-parallelism) objects that are uncacheable. hit-for-pass is the answer to that problem. When that first request (made with no knowledge) gets an uncacheable response, we create a special cache entry that says something like "this object cannot be cached, remember it for 10 minutes". All remaining requests for the next 10 minutes then proceed in parallel without coalescing, because it is already known that the object isn't cacheable.
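
A toy Python model of the hit-for-pass idea (this is not Varnish's actual implementation; the TTL and URLs are made up for illustration):

```python
import time

HIT_FOR_PASS_TTL = 600  # seconds: "remember uncacheability for 10 minutes"

class Cache:
    """Toy model of hit-for-pass: instead of coalescing requests behind a
    miss for an object already known to be uncacheable, store a short-lived
    marker meaning "do not cache, do not coalesce"."""
    def __init__(self):
        self.hit_for_pass = {}  # url -> marker expiry timestamp

    def lookup(self, url, now=None):
        now = time.time() if now is None else now
        expiry = self.hit_for_pass.get(url)
        if expiry is not None and expiry > now:
            return "pass"   # known-uncacheable: fetch in parallel, no coalescing
        return "miss"       # unknown: coalesce requests behind one fetch

    def record_uncacheable(self, url, now=None):
        """Called after a fetched response turns out to be uncacheable."""
        now = time.time() if now is None else now
        self.hit_for_pass[url] = now + HIT_FOR_PASS_TTL

cache = Cache()
print(cache.lookup("/wiki/Example", now=0))        # miss: first request fetches
cache.record_uncacheable("/wiki/Example", now=0)   # response was uncacheable
print(cache.lookup("/wiki/Example", now=60))       # pass: no coalescing
print(cache.lookup("/wiki/Example", now=601))      # miss: marker expired
```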

The content of the X-Cache header is recorded for every request in the webrequest log table.

Graph

 __________________
| browser/the webz |
|__________________|
    |
    |
    |
  ____________________
 |  LVS               |
 |    (load balancer) |
 |____________________|
     |
     |
     |
     |
    __________________________________________________________
   |  Edge Frontend (Varnish)                                 |
   |     Short-lived cache (~10sec, mostly to prevent DDOS)   |
   |     Stored in memory                                     |
   |__________________________________________________________|
         |
         |
         |
       _______________________________
      | Edge Backend (Varnish/ATS)    |
      |  Long-lived cache             |
      |  Stored on disk               |
      |_______________________________|
            |
            |
            |
           _______________________________________
          | Apaches (MediaWiki PHP)               |
          |                                       |
          |   * Cache-Control for page view HTML: |
          |      max-age is 14 days               |
          |   * wikitext parsercache:             |
          |      Expires at 30 days               |
          |       / wgParserCacheExpireTime       |
          |_______________________________________|

Cache software

We currently (July 2019) use Varnish as the frontend, in-memory caching software, while Apache Traffic Server is responsible for the on-disk, persistent cache.

MediaWiki

Invalidating content

For Varnish:

  • When pages are edited, their canonical URL is proactively purged from Varnish by MediaWiki.
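
A purge of this kind can be illustrated with a raw HTTP PURGE request (a sketch only; Wikimedia's production purging pipeline is more involved, and the hostnames here are hypothetical):

```python
def build_purge_request(path: str, host: str) -> bytes:
    """Construct a raw HTTP/1.1 PURGE request for the given URL path,
    with the Host header naming the wiki whose page was edited."""
    return (f"PURGE {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode()

# e.g. send this over a TCP socket to a (hypothetical) cache frontend:
print(build_purge_request("/wiki/Example_page", "en.wikipedia.org"))
```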

For ParserCache: Values in ParserCache are verifiable by revision ID. Edits will naturally update it.

  • puppet: manifests/misc/maintenance.pp
    • class misc::maintenance::parsercachepurging
      • Set to 30 days (expire age=2592000)
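
The expire age value is expressed in seconds; a quick conversion (plain Python, for illustration only) shows it corresponds to 30 days:

```python
# Convert the puppet "expire age" value (seconds) to days.
SECONDS_PER_DAY = 24 * 60 * 60   # 86400
expire_age = 2592000             # value from the puppet class above

print(expire_age // SECONDS_PER_DAY)  # → 30
```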

Past events

  • 2013: Prevent white-washing of expired page-view HTML.
    • Various static aspects of a page are not tracked or versioned; as such, when the max-age expires, an If-Modified-Since request must not be answered with 304 Not Modified even if the database entry of the wiki page was unchanged.
    • More info: https://phabricator.wikimedia.org/T46570
  • 2016: Decrease max object ttl in Varnish