This page describes Wikimedia's media storage infrastructure.
FIXME: Remove information on Ceph; remove reference to pmtpa data center
When we talk about "media storage", we refer to the storage & serving of user-uploaded content (typically images, PDF, video & audio files), served from upload.wikimedia.org. It includes both originally uploaded content and content generated from it. The files can be broadly grouped into the following categories:
- "Originals": originally uploaded content
- Thumbnails: arbitrarily sized thumbnails of original content, scaled on demand by image scalers
- Transcoded videos: conversions of originally uploaded videos into multiple formats (Ogg, WebM) at multiple preset resolutions (360p, 720p etc.)
- Rendered content: server-rendered output such as timelines, math formulas and captchas
The media storage architecture involves the following closely coupled components.
The usual tiered layers of Varnish HTTP caching proxies serve the upload.wikimedia.org domain.
The upload Varnish setup is special in a few ways:
- Because of its importance for serving large video files, there are special provisions for handling Range requests, which also required a special Varnish version.
- The config contains rewriting rules to handle the conversion from upload.wikimedia.org URLs to ms-fe Swift API URLs.
- The config has special support for handling 404 responses from media storage for thumbnail URLs, retrying the request via an image scaler instead.
The last two were written to replace the previous Swift middleware (written in Python), both to prepare for the Ceph transition and to eliminate some of the cascading-failure loops the middleware suffered from during otherwise simple incidents. As of July 2013, both are implemented but inactive, pending the full Ceph roll-out.
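The 404-retry rule can be sketched in Python, with the fetch function injected so the flow is easy to follow; the real implementation is Varnish VCL, and the host names here are placeholders:

```python
def fetch_thumbnail(fetch, thumb_url, storage_host, scaler_host):
    """Sketch of the Varnish 404-retry rule for thumbnail URLs.

    fetch(host, url) -> (status, body). Try media storage first; on a
    404, retry the same URL against an image scaler, which renders the
    thumbnail on demand. Host names are hypothetical.
    """
    status, body = fetch(storage_host, thumb_url)
    if status == 404:
        # Storage miss: have an image scaler generate the thumbnail.
        status, body = fetch(scaler_host, thumb_url)
    return status, body
```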
This is the actual storage layer where files/objects are being stored to and retrieved from.
Historically, media storage was composed of a few NFS servers (ms7 for originals, ms5 for thumbs, and ms6 for a thumbs cache in esams) that all MediaWiki application servers mounted, with MediaWiki writing files using regular filesystem calls. This was unscalable, fragile and inelegant. In 2012, a combined effort of the platform and technical operations teams replaced this with a separate infrastructure that MediaWiki talks to via a separate API.
OpenStack Swift was picked as the new platform, and the Swift API was chosen for its simplicity and because it is native to the Swift implementation. Because of certain Swift limitations, in particular around geographically-aware replication between datacenters (which affected the eqiad migration), as well as Swift shortcomings with data consistency and performance, as of 2013 Ceph with its Swift-compatible layer (radosgw) is also being evaluated for the same purpose, with pmtpa running Swift and eqiad running Ceph; a final decision between the two is to be taken in late 2013.
Image & video scalers
Image scalers are a special group of application servers that are otherwise normal and run MediaWiki. Their sole purpose is to receive thumb scaling requests for arbitrary originals & sizes and scale them down on demand. While there are a number of constraints in place for resource usage & security purposes, they perform resource-intensive operations on foreign content and thus can frequently misbehave, which is why they are grouped separately.
Video scalers are similar, but because of the nature of their work they operate as part of job queue processing rather than on a per-request basis.
Files are grouped into containers that have 5 components: a project (wikipedia), a language (en), a repo, a zone and, optionally, a shard.
Project can also be "global" for certain global items. Note that there are a few exceptions to the project names, the most notable being Wikimedia Commons, which has a project name of "wikipedia" for historical reasons.
Rendered content (timeline, math, score and captcha) all have their zone set to render and their repo set to their respective category. Regular media files have repo set to local. Zones are public, for public unscaled media, thumb for thumbnails/scaled media, transcoded for transcoded videos, temp for temporary files created by e.g. UploadStash, and deleted for unscaled media that have had their on-wiki entries deleted. These are defined and categorized in the MediaWiki configuration option $wgFileBackends.
Historically, files were put under directories on a filesystem, and directories were sharded per wiki in a two-level hierarchy of 16 shards per level, totaling 256 uniformly sharded directories. In the Swift era, the hope was that such a sharding scheme would be unneeded, as the backend storage would handle that complexity. This hope ultimately proved untrue: for certain wikis, the number of objects per container was large enough to create scalability problems in Swift. To address this, multiple containers were created for those large projects. These were sharded into a flat (one-level) set of 256 shards (00-ff), with the exception of the deleted zone, which was sharded into 1296 shards (00-zz). The list of large projects that have sharded containers is currently defined in three places: a) MediaWiki's $wmfSwiftBigWikis, b) Swift's shard_container_list (proxy-server.conf, via puppet) and c) the rewrite configuration for Varnish in puppet.
The previous two-level scheme is kept as the name of the object in all containers, as well as in the public upload.wikimedia.org URLs, irrespective of whether the project is large enough to have sharded containers. This was done for compatibility reasons, as well as to keep the option of sharding more containers in the future if they grow large enough. For those that are sharded, the name of the shard matches the object's second-level shard, and the shard of derived content (e.g. a thumbnail) remains the same as the shard of the original that produced it.
A few examples:
- Wikimedia Commons' File:She_Has_a_Name_2012_-_Death.jpg has an image that maps (via the database) to http://upload.wikimedia.org/wikipedia/commons/f/f5/She_Has_a_Name_2012_-_Death.jpg. The Swift object name will be f/f5/She_Has_a_Name_2012_-_Death.jpg and, since Commons is large enough to have sharded containers, it will be located under the container name wikipedia-commons-local-public.f5. The 800px thumbnail of that file will have a URL of http://upload.wikimedia.org/wikipedia/commons/thumb/f/f5/She_Has_a_Name_2012_-_Death.jpg/800px-She_Has_a_Name_2012_-_Death.jpg and will be present under the wikipedia-commons-local-thumb.f5 container.
- elwiki's File:Plateia_syntagmatos_Athina.jpg maps to http://upload.wikimedia.org/wikipedia/el/0/01/Plateia_syntagmatos_Athina.jpg. The Swift object name will be 0/01/Plateia_syntagmatos_Athina.jpg, but since it is a small wiki, the containers the original & thumb are placed in are wikipedia-el-local-public and wikipedia-el-local-thumb (no shard).
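The naming scheme in the examples above can be expressed as a small function. This is a sketch: the two-level path follows MediaWiki's md5-based upload hashing, but the set of sharded wikis shown here is illustrative, not the production $wmfSwiftBigWikis list:

```python
import hashlib

# Wikis large enough to have sharded containers; illustrative subset
# of what $wmfSwiftBigWikis defines in production.
SHARDED_WIKIS = {("wikipedia", "commons")}

def storage_location(project, lang, zone, filename, repo="local"):
    """Return (container, object_name) for a media file.

    MediaWiki shards by the md5 of the file name: the object name is
    "<h0>/<h0h1>/<filename>", and on sharded wikis the container gets
    a ".<h0h1>" suffix matching the second-level shard.
    """
    h = hashlib.md5(filename.encode("utf-8")).hexdigest()
    object_name = f"{h[0]}/{h[:2]}/{filename}"
    container = f"{project}-{lang}-{repo}-{zone}"
    if (project, lang) in SHARDED_WIKIS:
        container += f".{h[:2]}"
    return container, object_name
```

Note how the thumb zone of a sharded wiki reuses the same shard suffix as the original, since the shard is derived from the original's file name.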
When a user requests a page from a public wiki, links to scaled media needed for the page (e.g. http://upload.wikimedia.org/project/language/thumb/x/xy/filename.ext/NNNpx-filename.ext) are generated, but the scaled media themselves are not generated at that time. As the thumb sizes are arbitrary, it is not possible to pregenerate them either, therefore the only way to handle this is to generate them on demand and cache them. On the MediaWiki side, this is accomplished by using Thumbor to generate thumbnails.
When Varnish can't find a copy of the requested thumbnail (whether it has never been requested before or it has fallen out of the Varnish cache), Varnish hits the Swift proxies.
For private wikis, Varnish doesn't cache thumbnails, because MediaWiki-level authentication is required to ensure that the client has access to the desired content (is logged into the private wiki). Therefore, Varnish passes the requests to MediaWiki, which verifies the user's credentials. Once authentication is validated, MediaWiki proxies the HTTP request to Thumbor. A shared secret key between MediaWiki and Thumbor is used to increase security.
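One common way to implement such a shared-secret scheme is an HMAC signature over the requested path; the sketch below is illustrative of the idea, not the actual MediaWiki/Thumbor protocol, and the secret value is obviously hypothetical:

```python
import hashlib
import hmac

# Hypothetical shared secret, known to both MediaWiki and Thumbor.
SHARED_SECRET = b"example-secret"

def sign_request(path):
    """MediaWiki side: sign the thumbnail path before proxying to Thumbor."""
    return hmac.new(SHARED_SECRET, path.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_request(path, signature):
    """Thumbor side: accept only requests bearing a valid signature."""
    expected = sign_request(path)
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)
```

Since only MediaWiki holds the secret, Thumbor can reject any request that did not pass through MediaWiki's authentication check first.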
This information is outdated.
We currently have Swift running in pmtpa and Ceph running in eqiad. MediaWiki has the capability of running with multiple backends, one of them being the primary (where reads come from and whose file-operation success MediaWiki cares most about). This is configured by means of the FileBackendMultiWrite setting for $wgFileBackends, after creating a local-ceph and a local-swift SwiftFileBackend instance.
This is currently disabled because of Ceph's stability issues, which, due to the synchronous nature of FileBackendMultiWrite, would propagate to production traffic. Until this is enabled, we have two ways of syncing files:
- For original media content, MediaWiki has a journal mechanism that keeps all changes in a database table, and scripts exist to replay that journal to the other store.
- For all other content, we have a tool of our own called swiftrepl (in operations/software) which traverses containers on both sides and syncs them.
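The container-traversal approach can be sketched as a comparison of object listings. This is a simplified, hypothetical rendition: the real swiftrepl also compares ETags/metadata, handles deletions, and talks to actual Swift/Ceph endpoints:

```python
def sync_container(source, destination):
    """Sketch of a swiftrepl-style one-way sync for a single container.

    source and destination are dicts mapping object name -> content,
    standing in for Swift/Ceph object listings. Objects missing from
    the destination are copied over; the list of copied names is
    returned.
    """
    copied = []
    for name, content in source.items():
        if name not in destination:
            destination[name] = content
            copied.append(name)
    return copied
```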
First request for a thumbnail image
- Request for http://upload.wikimedia.org/project/language/thumb/x/xy/filename.ext/NNNpx-filename.ext is received by an LVS server.
- The LVS server picks an arbitrary Varnish frontend server to handle the request.
- Frontend Varnish looks for cached content for URL in in-memory cache.
- Frontend Varnish computes hash of URL and uses that hash to select a consistent backend Varnish server.
- The consistent hash routing ensures that all frontend Varnish servers will select the same backend Varnish server for a given URL to eliminate duplication in the backend cache layer.
- Frontend Varnish requests URL from backend Varnish.
- Backend Varnish looks for cached content for URL in SSD-based cache.
- Backend Varnish requests URL from media storage cluster.
- Request for URL from media storage cluster received by an LVS server.
- The LVS server picks an arbitrary frontend Swift server to handle the request.
- The frontend Swift server rewrites the URL to map from the wiki URL space into the storage URL space.
- The frontend Swift server requests the new URL from the Swift cluster.
- Since the thumbnail does not exist yet, Swift returns a 404 response, which is caught by the frontend Swift server.
- The frontend Swift server constructs a URL to request the thumbnail from an image scaler server via /w/thumb_handler.php.
- The image scaler server requests the original image from Swift.
- This goes back to the same LVS -> Swift frontend -> Swift backend path as the thumb request came down from the Varnish backend server.
- The image scaler transforms the original into the requested thumbnail image.
- The image scaler stores the resulting thumbnail in Swift.
- The image scaler returns the thumbnail as an HTTP response to the frontend Swift server's request.
- The frontend Swift server returns the thumbnail image as an HTTP response to the backend Varnish server.
- The backend Varnish server stores the response in its SSD-backed cache.
- The backend Varnish server returns the thumbnail image as an HTTP response to the frontend Varnish server.
- The frontend Varnish server stores the response in its in-memory cache.
- The frontend Varnish server returns the thumbnail image as an HTTP response to the original requestor.
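The frontend-to-backend selection in the steps above relies on hashing the URL so that every frontend picks the same backend. A minimal sketch of that property (Varnish uses its own director logic; the simple modulo mapping and server names here are illustrative only):

```python
import hashlib

def pick_backend(url, backends):
    """Hash the URL and map it deterministically to one backend.

    Because the choice depends only on the URL and the backend list,
    every frontend Varnish makes the same pick, so each URL is cached
    on exactly one backend. Real consistent hashing additionally
    minimizes reshuffling when the backend list changes; plain modulo
    is shown here for brevity.
    """
    h = int(hashlib.md5(url.encode("utf-8")).hexdigest(), 16)
    return backends[h % len(backends)]
```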
Removing archived files
Occasionally, there is a need to eradicate the content of files that have been deleted & archived on the wikis (e.g. content that is illegal to distribute). To serve this purpose, there is a MediaWiki maintenance script, eraseArchivedFile.php, that handles the deletion of both the content and its thumbnails from all configured FileBackend stores, as well as the purging of those from frontend HTTP caches. The script takes either the filename as input:
user@mwmaint1001:~$ mwscript eraseArchivedFile.php --wiki commonswiki --filename 'Example.jpg' --filekey '*'
Use --delete to actually confirm this script
Purging all thumbnails for file 'Example.jpg'...done.
Finding deleted versions of file 'Example.jpg'...
Would delete version 'f6mypp1mxmrj2aoxfucxwo2sj8eb9ww.jpg.jpg' (20130604053028) of file 'Example.jpg'
Done
or the filekey (e.g. as given in a Special:Undelete URL) as an argument:
user@mwmaint1001:~$ mwscript eraseArchivedFile.php --wiki commonswiki --filekey 'f6mypp1mxmrj2aoxfucxwo2sj8eb9ww.jpg'
Use --delete to actually confirm this script
Purging all thumbnails for file 'Example.jpg'...done.
Would delete version 'f6mypp1mxmrj2aoxfucxwo2sj8eb9ww.jpg.jpg' (20130604053028) of file 'Example.jpg'
(note how it needs to be invoked with --delete to confirm all actions)