Schema changes

Schema changes on a live database are hard, and they should not be done lightly. That does not mean schema changes should be avoided; on the contrary, good database design is essential for security and performance.

If you are a MediaWiki hacker, please note that the following only applies to changes that will be deployed to WMF servers; MediaWiki schema changes and standards are not covered here. However, it has been agreed that update.php is not run on our servers, as it would cause huge production issues. For that reason, the following workflow has been created in order to handle schema change requests as fast as possible.

What is not a schema change

  • Creating new tables - please follow the Creating_new_tables process.
  • Dropping tables.
    • These are lower priority and normally do not block anyone, while being even more dangerous than an ALTER. File them as a regular #DBA team task, and please only create the ticket (or tag #DBA) once the tables can actually be removed (e.g. no pending tickets to be merged/deployed).
  • Other "cleanup" schema changes (e.g. a column that is no longer used but does not block code). While these are real schema changes, if they do not block code deployments, just use the #DBA tag to avoid prioritizing them over actual blockers.
  • Data migrations - long-running scripts that move data between tables, or convert its format or location. There is some advice available, mostly on avoiding data loss or replication lag.

Workflow of a schema change

  1. Add the schema-change tag to the relevant Phabricator task the moment you propose a change to the table design.
  2. Make sure to involve as many relevant people as possible before merging. For example, core tables such as "revision" or "user" are used by many developers, including 3rd-party extension developers, so changing them will require wide consensus (e.g. wikitech-l messages and a thorough review on a Phabricator ticket). Please also include at least one DBA, even for changes that may look trivial, as they can provide constructive feedback if warned in time. Sometimes simple schema changes require a lot of preparation for deployment.
  3. Once the solution has been agreed among all developers (usually that means the schema change has been merged to HEAD, but disabled in configuration), and you need to apply it to the live databases, create a separate, specific task (a subtask of the main issue) tagged with #schema-change-in-production and also with the #DBA project, providing the following information, once per batch (group of alters to be done at the same time); a hypothetical example batch follows this list:
    1. The ALTER TABLEs to run (usually, a link to a commit diff is the best way)
    2. Where to run those changes (a specific dblist file or the complete list of wikis, in text, to apply the change to). "All wikis" or "everywhere" are not specific enough and have been the cause of outages in the past.
    3. When to run those changes (whether it depends on a particular code deployment or can be done at any time)
    4. Whether the schema change is backwards compatible (i.e. compatible with the currently deployed code, so it can be performed at any time)
    5. Whether the schema change has already been tested on some of the test/beta wikis. Usually, as a last test, the change should be applied to testwiki first.
    6. If it involves new columns or tables, whether the data should be made available on the labs replicas and/or dumps, or withheld because it contains private or sensitive data (consult Legal if you are unsure). The same question applies if it involves deletion of data previously available on labs.
    7. Whether this schema change is blocking a feature rollout
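
As an illustration, a well-specified batch could look like the following sketch (the table and column names are hypothetical, and a real request would normally link the commit diff rather than paste the DDL):

    -- 1. ALTERs to run (normally provided as a link to a commit diff):
    ALTER TABLE example_table
        ADD COLUMN ex_new_flag TINYINT UNSIGNED NOT NULL DEFAULT 0;
    -- 2. Where: s3.dblist (never just "all wikis" or "everywhere")
    -- 3. When: any time; not tied to a particular code deployment
    -- 4. Backwards compatible: yes, currently deployed code ignores the column
    -- 5. Tested: already applied on beta; testwiki to be done first in production
    -- 6. Labs replicas/dumps: no private data, safe to replicate
    -- 7. Blocking a feature rollout: no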

Example ticket: https://phabricator.wikimedia.org/T119752

The alter will be performed as soon as possible within the available resources. If the schema change has to be done within a particular timeframe (for example, depending on a code deployment), pinging the DBAs well in advance is strongly suggested (an easy change can take 1 week, but a very complex one can take months).

Additional notes on scheduling (a.k.a. How long will my schema change take to be deployed?)

Schema changes are usually attended to in the order they are requested (assuming they can be applied right away). If in doubt, file the request and notify #DBA as early as possible. SREs follow a 3-month goal schedule, so task prioritization and support needs are discussed at the end of March, June, September and December. Teams that explicitly communicated with SRE managers or the DBA team before the 3-month period will have a guaranteed window and dedicated time in the following 3 months. Those that communicate late in the 3-month schedule will be queued in order and will be susceptible to delays due to variable workload and variable available time (e.g. vacations).

Complex schema changes - those that require a) multiple schema changes and data migrations to happen serially, and/or b) a primary server or primary datacenter switch - can take even more time, and you should coordinate with the DBAs to keep track of the progress.

Related tags

Wikimedia projects

These are used to request the attention of an operator and are proactively monitored by WMF DBAs:

  • DBA tag (part of SRE Data Persistence): Tickets related to database administration of the WMF, and issues found on the databases. E.g. "Please recover table X on day Y", "db1022 is down", "db1047 has lag".
  • Schema-change-in-production tag: A schema change has been fully agreed upon (equivalent to +2, even if it is not yet merged), and we are waiting for the DBAs to perform the change on the WMF databases. Please note that a complex schema change can take up to several months to be performed once it has started to be applied.

MediaWiki projects

These are freely managed by MediaWiki developers and are not proactively monitored by WMF DBAs, as they may or may not be merged, and may or may not be used by WMF wikis:

  • MediaWiki-Database component: Tickets related to programming/fixing issues of all kinds of database backends. DBAs can help with this and will be happy to, but they have to be notified.
  • schema-change tag: a change to the MediaWiki schema that is a work in progress. Wikimedia DBAs should ideally be consulted about possible optimizations, potential problems, etc.

Advice on schema changes

  • All tables must have a primary key. That is not a suggestion; it is MediaWiki policy. Primary keys prevent duplicate rows, allow better performance for ROW-based replication, and are required by many tools to perform online schema changes. Add one to your table before it gets too big.
  • Do not break compatibility with existing code. Usually, that forces two code deployments: one where the code is compatible with both the new and the old structure, and a final one. While it is a lot more effort, take into account that schema changes are not transactional; they can take hours or even days to be deployed on all servers, on all wikis. Adding new columns at the end of the table is another way to stay compatible, as code should be compatible with that. Once data has been migrated or filled in, the old columns can be dropped (see the sketch after this list).
  • If a table is deployed to multiple wikis/databases, any schema changes must be applied to all wikis where that table is present for consistency.
  • New indexes, at least for core functionality, should be documented in tables.json. This improves maintainability and makes it possible to detect redundant indexes or perform further alters with better context on what each index is used for.
  • DBAs are here to help, not to be an obstacle. If you have any doubt about a schema change (data types, performance, etc.), just ask.
  • Adding new columns at the end is preferred, although this is not a hard rule.
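
To make this advice concrete, here is a minimal sketch of a backwards-compatible migration in SQL (all table and column names are hypothetical):

    -- Every table must have a primary key (MediaWiki policy); add one early:
    ALTER TABLE example_table ADD PRIMARY KEY (ex_id);

    -- Deployment 1: code reads ex_old_format when ex_new_format is NULL.
    -- The new column goes at the end, so currently deployed code keeps working:
    ALTER TABLE example_table
        ADD COLUMN ex_new_format VARBINARY(255) DEFAULT NULL;

    -- (a maintenance script backfills ex_new_format here)

    -- Deployment 2: code reads only ex_new_format; the old column can then
    -- be dropped in a later, separate schema change:
    ALTER TABLE example_table DROP COLUMN ex_old_format;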

Dangers of schema changes

  • Certain ALTER operations require writes to be stopped while they are ongoing. For the master, this means that new edits and updates to the wikis are stopped or fail; for the slaves, it means replication lag, which can lead to other kinds of bad consequences, such as querying stale data or, with the protections put in place, not being able to read at all (as master reads are avoided as much as possible).
  • ALTER TABLEs on large tables can take days to execute (currently, ALTERing the enwiki revision table takes 4-5 days per server, and we have 20 enwiki servers).
  • Even if an online alter table is possible (with the built-in online ALTER TABLE functionality or pt-online-schema-change), or the table is small enough that online changes do not matter, MySQL has a failsafe called metadata locking that prevents SELECTs from running with stale metadata information. This metadata locking means that, while SELECTs are ongoing, the ALTER TABLE cannot start and is queued; while the ALTER TABLE is queued, all subsequent transactions are also blocked (see the sketch after this list). Please note that tools like pt-online-schema-change, while online, also require metadata locking on the original table due to trigger creation. This happens on tables such as the image table on Commons, and other popular tables.
  • Schema changes can impact the performance of the server while ongoing, making it more susceptible to lag problems or overload. In order to perform a schema change successfully, the deployment has to be prepared by warming up the tables on the slaves to minimize lag issues, and by configuring the online schema change tool adequately.
  • Any schema change, like any code deployment, is a source of potential application breakage, sometimes in the strangest ways: deleting an apparently unused index can lead to a server failure if it was, in fact, used and the application starts performing full table scans, maxing out the connection count. Adding a new index can cause a better index to stop being used by confusing the MySQL query planner. Please note that testing on beta is a requirement, but not a guarantee that things will work in production, due to the high amount of traffic and the large number of rows on certain tables (the query plan is dynamic and depends on the state of the table; it is not static).
  • Schema changes, unlike code, cannot be rolled back; only the reverse schema change can be executed, assuming data has not been lost in the process. What is worse, changes done with ALTER TABLE, if they fail or are cancelled, start rolling back, which can take as long as, or longer than, applying the changes in the first place.
  • Sometimes adding new columns has a lot of overhead in space consumed, and that can lead to server failure if the schema change fails due to lack of disk space. This particularly affects large tables such as revision or page, as certain schema changes require copying all table data, temporarily doubling its size while the schema change is ongoing.
  • Schema changes have to be coordinated so that two incompatible changes are not running at the same time. Also, grouping schema changes that happen on the same table usually achieves a huge speed-up and leaves the application in a degraded state for less time.
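
As a sketch of the metadata-locking danger described above (the added column is hypothetical; image is the real Commons table mentioned earlier):

    -- Session 1: a long-running query holds a shared metadata lock on the table.
    SELECT /* slow report query */ COUNT(*) FROM image;

    -- Session 2: the ALTER, even an "online" one, queues behind session 1,
    -- showing "Waiting for table metadata lock" in the processlist:
    ALTER TABLE image ADD COLUMN img_example TINYINT NOT NULL DEFAULT 0;

    -- Session 3: every new query on the table, however trivial, now queues
    -- behind the pending ALTER, and connections start piling up:
    SELECT img_size FROM image WHERE img_name = 'Example.png';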

For all those reasons, which can be summarized as assuring our systems' reliability, schema changes should be carefully reviewed and applied by a database administrator, or by someone in Operations/Platform Engineering, to avoid larger issues.

This page is a part of the SRE Data Persistence technical documentation