Portal:Data Services/Admin/Labstore


Labstore is the naming prefix for a class of servers that fulfill different missions. The common thread is off-compute-host storage serving Cloud VPS instances and Tools.

Clusters (or groups intended as a cluster that are currently a SPoF)

Misc Cluster

Servers: labstore1003 (active), cloudstore1008 & cloudstore1009 (work-in-progress replacements, see phab:T209527)

  • An NFS share large enough to be used as general scratch space across projects
    • /data/scratch
  • The Maps project also temporarily hosts tile generation on a share here.
  • (proposed) Quota limited rsync backup service for Cloud VPS tenants (phab:T209530)

Components: NFS

Secondary Cluster

Servers: labstore1004, labstore1005

  • Tools project share that is used operationally for deployment
  • Tools home share that is used for /home in Toolforge
  • Misc home and project shares that are used by all NFS enabled projects

Components: DRBD, NFS, nfs-manage, maintain-dbusers, nfs-exportd, BDSync

Dumps

Servers: labstore1006, labstore1007

Components: ???

Offsite backup

Servers: labstore2003, labstore2004

  • labstore2003 acts as a backup server for the "tools-project" logical volume from labstore100[45]
  • labstore2004 acts as a backup server for the "misc project" logical volume from labstore100[45]

Components: BDSync, Backups

Components

DRBD

DRBD syncs block devices between two hosts in real time.

Useful commands

DRBD status
cat /proc/drbd
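
For a per-resource view, drbdadm (shipped with DRBD) reports connection state, role, and disk state:

drbdadm cstate all
drbdadm role all
drbdadm dstate all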

NFS
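
Beyond the tooling below, the standard NFS utilities give a quick health check of an NFS server host; the service name here assumes Debian's nfs-kernel-server package:

showmount -e localhost
nfsstat -s
systemctl status nfs-kernel-server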

nfs-manage

This script is meant as the entry point to bringing up and taking down the DRBD/NFS stack in its entirety.

nfs-manage status
nfs-manage up
nfs-manage down

nfs-exportd

Dynamically generates the contents of /etc/export.d to mirror active projects and shares as defined in /etc/nfs-mounts.yaml.

This daemon fetches project information from OpenStack to determine the IPs of the instances and add them to the export ACLs.

See ::labstore::fileserver::exports.

WARNING: there is a known issue: if some OpenStack component (for example, Keystone) is misbehaving, nfs-exportd may see 0 instances in every project and thus flush all exports. To prevent this, a special case was added that skips the update when the resulting export list is empty. There is also a cron job that backs up the exports to /etc/exports.bak.
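
As a quick sanity check after nfs-exportd has run (the exact file layout under /etc/export.d may differ from this sketch), compare the generated configuration with what the kernel is actually exporting:

ls /etc/export.d/      # generated per-project export files
exportfs -v            # what the NFS server is currently exporting
less /etc/exports.bak  # the cron backup mentioned above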

maintain-dbusers

We maintain the list of accounts that can access the Wiki Replicas on the labstore server in the secondary cluster that is actively serving the Tools project share. The script writes a $HOME/replica.my.cnf file containing MySQL connection credentials into each user and project home, using LDAP to get the list of accounts to create.

The credential files are created with the immutable bit set with chattr to prevent deletion by the Tool account.
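
For illustration (the tool name and path below are hypothetical), the immutable bit can be inspected and temporarily cleared when a credential file has to be regenerated:

lsattr /data/project/exampletool/replica.my.cnf     # 'i' in the flags marks the file immutable
chattr -i /data/project/exampletool/replica.my.cnf  # clear it only while regenerating the file
chattr +i /data/project/exampletool/replica.my.cnf  # set it again afterwards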

The code pattern here is a central data store (the db) that is read and written by various independent functions. These functions are not 'pure' - they could even be separate scripts - and they mutate the DB in some way. They are also supposed to be idempotent: if they have nothing to do, they do nothing.

Most of these functions should be run in a continuous loop, maintaining mysql accounts for new tool/user accounts as they appear.

populate_new_accounts

  • Find the list of tools/users (from LDAP) that aren't in the `accounts` table
  • Create a replica.my.cnf for each of these tools/users
  • Make an entry in the `accounts` table for each of these tools/users
  • Make entries in `account_host` for each of these tools/users, marking them as absent

create_accounts

  • Look through `account_host` table for accounts that are marked as absent
  • Create those accounts, and mark them as present.

If we need to add a new labsdb, we can do so the following way:

  • Add it to the config file
  • Insert entries into `account_host` for each tool/user with the new host.
  • Run `create_accounts`

In normal usage, just a continuous process running `populate_new_accounts` and `create_accounts` in a loop will suffice.
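
As a hedged sketch (the unit name here is an assumption and may not match the current Puppet setup), the continuous process can be checked like any other service:

systemctl status maintain-dbusers
journalctl -u maintain-dbusers --since "1 hour ago"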

TODO:

 - Support for maintaining per-tool restrictions (number of connections + time)

Known Issues

A small portion of users who had legacy accounts in the old labsdb cluster did not get accounts created when we transitioned away from the old labsdb account management system.

We can resolve this by removing the accounts in labsdb1001 and labsdb1003 and regenerating accounts and credentials.

BDSync

We use the WMF bdsync package on both the source hosts and the destination backup hosts. The backup hosts periodically run a job that syncs a block device from a remote target to a local LVM device.

Backups

Uses bdsync (an rsync-like tool for block devices) over SSH to copy block devices over the network.
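
As a hedged illustration (the volume group and LV names follow the mount example below and may differ per host), the local LVM targets that receive the synced data can be listed on a backup host:

$ lvs backup
$ lsblk /dev/backup/tools-project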

Mounting a backup

When mounting a backup of a DRBD device, you have to tell the OS which filesystem it contains, since filesystem auto-detection only sees "DRBD".

Example:

$ mount -t ext4 /dev/backup/tools-project /mnt/tools-project/

Additionally, it is very important to unmount the backup as soon as work is done with it, because the backup jobs will fail if the device is mounted.
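
For example, to check that nothing is still using the backup and then release it:

$ fuser -vm /mnt/tools-project
$ umount /mnt/tools-project/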