Wikimedia Cloud Services team/EnhancementProposals/cinder backups

This page contains several options we are currently considering for introducing cinder backups.

Context

We're currently planning to migrate valuable data onto cinder volumes, such as the contents of our NFS shares and the Trove databases that will replace ToolsDB.

Cinder volumes are created from ceph storage, so we need a way to back up the contents of the volumes outside of the ceph storage farm.

Some related phabricator tasks:

Storage size

Our NFS shares have the following sizes:

  • tools: using 6TB of 8TB
  • maps: using 5TB out of 8TB
  • other projects (not tools or maps): using 2TB out of 5TB
  • scratch: using 2TB out of 4TB
  • total: using 15TB out of allocated 25TB

Our cloud DBs have the following sizes:

  • toolsdb: using 2.5TB out of 3.4TB
  • osmdb: using 1.2TB out of 1.6TB
  • wikilabels: using 2GB out of 1.6TB
  • total: using 4TB out of 6.6TB

Totals:

  • in use: 19TB
  • allocated: 31.6TB

Storage needs for the new system (the arithmetic is sketched after this list):

  • if we need to store 2 full backups + several incrementals, we would need: ~130TB (total allocated * 4)
  • if we need to store 1 full backup + several incrementals, we would need: ~95TB (total allocated * 3)
  • if we need to store 2 full backups + several incrementals for the actual data usage we have today, we would need: ~76TB (total in use * 4)
  • if we need to store 1 full backup + several incrementals for the actual data usage we have today, we would need: ~57TB (total in use * 3)
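
For reference, the figures above are just the stated multipliers applied to the two totals. A minimal sketch of the arithmetic (the 3x/4x factors are the ones from the list, not an independent estimate):

  # Rough sizing arithmetic for the backup storage, using the totals above.
  ALLOCATED_TB = 31.6  # total allocated (NFS shares + cloud DBs)
  IN_USE_TB = 19       # total currently in use

  # 2 fulls + incrementals -> factor 4; 1 full + incrementals -> factor 3.
  # (the figures in the list above are these products, rounded up)
  for label, base in (("allocated", ALLOCATED_TB), ("in use", IN_USE_TB)):
      for fulls, factor in ((2, 4), (1, 3)):
          print(f"{fulls} full backup(s) + incrementals, {label}: {base * factor:.1f}TB")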

Our cloudbackup servers have the following capacity:

  • cloudbackup2001: using 18TB out of 214TB (195TB free)
  • cloudbackup2002: using 10TB out of 214TB (204TB free)
  • cloudbackup1001: 24TB (minus RAID usage) [still in procurement]
  • cloudbackup1002: 24TB (minus RAID usage) [still in procurement]

Options

We're currently considering a couple of options.

Option 1: use cinder backup API + POSIX backup driver on cloudbackups servers

In this option, we would configure the cinder-backup service with the POSIX storage driver and run it on the cloudbackup servers.

We would then create an LVM volume on each cloudbackup server to host the backed-up data.

The data would flow directly between the ceph farm and the cloudbackup servers.

The backup/restore logic would be driven through the cinder backup API.
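
As an illustration of what driving this through the API could look like, below is a minimal sketch using openstacksdk. The cloud entry, volume name and backup name are hypothetical, and error handling is omitted:

  import openstack

  # Assumes a clouds.yaml entry named "cloudvps" whose credentials are
  # allowed to use the backup API; the names below are made-up examples.
  conn = openstack.connect(cloud="cloudvps")

  # Create a backup of an existing volume via the cinder backup API.
  # (The API also supports incremental backups on top of a full one.)
  volume = conn.block_storage.find_volume("tools-nfs-data")
  backup = conn.block_storage.create_backup(
      volume_id=volume.id,
      name="tools-nfs-data-weekly",
  )
  conn.block_storage.wait_for_status(backup, status="available")

  # Later, restore the backup into a freshly created volume.
  conn.block_storage.restore_backup(backup, name="tools-nfs-data-restored")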

Pros

  • Self-service: users can trigger backups by themselves, and we can still restrict who is allowed to use the backup API.
  • Little to no code maintenance on our side. The backup logic is mostly implemented by openstack cinder upstream.

Cons

  • The backup features in the cinder backup API seem rather basic: no compression, no data deduplication, no encryption, etc.

Option 2: use backy2

This is similar to what we use for virtual machine disks stored in ceph.

In this option, the backup servers would run a systemd timer that backs up the configured volumes onto the backup servers themselves. The traffic would flow mostly between the ceph cluster and the backup nodes.

This will require:

  • Connectivity/auth to ceph
  • Configuration on puppet side of what should be backed up
  • Writing/adapting some code (probably not a lot; a rough sketch follows below)
  • A well documented (hopefully automated) way to restore the backups

This might require:

  • Connectivity/auth to openstack, if we want to do some VM<->volume mapping (e.g. specifying the VMs/projects we want backed up instead of the rbd volumes directly).
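
As a rough idea of the code that would need writing/adapting, the systemd timer could call a small wrapper along the lines of the sketch below. The pool name, image list and exact backy2 invocation are assumptions for illustration, not a finished design:

  import subprocess
  from datetime import datetime, timezone

  # Hypothetical values: the ceph pool holding the cinder volumes and the
  # rbd images to back up, which would come from puppet-managed config.
  POOL = "cinder-volumes"
  IMAGES = ["volume-aaaa-1111", "volume-bbbb-2222"]


  def backup_image(pool: str, image: str) -> None:
      """Snapshot an rbd image and hand the snapshot over to backy2."""
      snap = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H%M%S")
      subprocess.run(["rbd", "snap", "create", f"{pool}/{image}@{snap}"], check=True)
      # backy2 reads the snapshot and stores deduplicated blocks on this host.
      subprocess.run(
          ["backy2", "backup", f"rbd://{pool}/{image}@{snap}", image],
          check=True,
      )


  if __name__ == "__main__":
      # This is what the systemd timer unit would run on the backup servers.
      for img in IMAGES:
          backup_image(POOL, img)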

Pros

  • backy2 deduplicates blocks, so it might save some space (though deduplication is far more useful when backing up OS volumes, as those are very similar across many VMs).

Cons

  • We have to build and maintain the code.
  • There's no multi-tenancy or horizon integration, so it would require considerable work to offer it as "backup as a service" to our users.

Discarded options

Using the cinder backup API + swift

The cinder backup API supports backing up volumes to a swift API, and this is in fact the default setting upstream.

However, our current swift deployment stores its data on the same ceph storage farm.

Backing up to swift would therefore keep the backups inside the exact same system (ceph) we are trying to protect.

See also

TODO