Portal:Data Services/Admin/Shared storage

Read-write NFS shares are currently hosted on per-project Cloud VPS VMs. In addition, the clouddumps servers host the dumps files in a read-only fashion. This documentation is for the read-write shares; the dumps shares are documented on the clouddumps page.

The system strictly uses NFSv4 and, where possible (i.e. Debian Buster and newer clients), NFSv4.2 to take advantage of the locking improvements introduced in 4.1 and 4.2. Using a strict v4 setup allows the firewall to accept only port 2049, which keeps things clean and avoids running rpcbind/portmap on clients (see the port 111 vulnerability). However, exports are defined in the old NFSv3 style, without the virtual filesystem feature of v4. The reasons for that at this time are:

  • There is little value to the virtual filesystem unless you think your clients need to be able to discover other shares.
  • You can use v3 style exports in v4, despite this being rarely documented online.
  • Using the virtual filesystem requires mounting any volumes to be shared under a specific hierarchy on the server, which typically results in a lot of bind mounts. When trying to fail over, those bind mounts refuse to unmount unless you can force all clients to stop writing files and holding locks for a while. Turning the filesystem read-only would cause far more breakage than simply not using bind mounts and letting the failover proceed smoothly. In the past, the workaround was to skip the failover script, run the commands by hand, and reboot the server when it refused to unmount the volumes. Using v3-style exports fixes this, since a umount -f will eventually unmount the share in that setup. (A sketch of a v3-style export and matching client mount follows below.)
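
As a rough sketch of the v3-style approach (the share path, client network, and server hostname here are hypothetical, not taken from the real export list), a server-side export entry and the corresponding NFSv4.2 client mount might look like:

  # Hypothetical v3-style export entry on the server; the real entries under
  # /etc/export.d are generated by nfs-exportd, not written by hand.
  /srv/project/shared/home  172.16.0.0/21(rw,sync,no_subtree_check,root_squash)

  # Hypothetical client-side mount, pinned to NFSv4.2 so only port 2049 is needed.
  sudo mount -t nfs4 -o vers=4.2,rw,hard nfs-primary.example.wmcloud.org:/srv/project/shared/home /mnt/project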

Operations

NFS volume cleanup

This has moved to Portal:Toolforge/Admin/Runbooks/ToolsNfsAlmostFull.

NFS client operations

This information is outdated.

When significant changes are made on an NFS server, clients that mount it often need actions taken on them to recover from whatever state they are suddenly in. To this end, the cumin host file backend works in tandem with the nfs-hostlist script. That script generates a list of VMs, filtered by project and by the specified mounts, on which NFS is mounted. Currently, you must be on a cloudinfra cumin host to run these commands. The list of cumin hosts can be seen on Cumin#WMCS Cloud VPS infrastructure.

The nfs-hostlist script takes several options (some are required):

  • -h Show help
  • -m <mount> A space-delimited list of "mounts" as defined in the /etc/nfs-mounts.yaml file generated from puppet (it won't accept wrong answers, so this is a pretty safe option)
  • --all-mounts Anything NFS mounted (but you can still limit the number of projects)
  • -p <project> A space-delimited list of OpenStack projects to run against. This will be further trimmed according to the mounts you selected. (If you used -m maps and -p maps tools, you'll only end up with maps hosts)
  • --all-projects Any project mentioned in /etc/nfs-mounts.yaml, but you can still filter by mounts.
  • -f <filename> Write the host list to a file. Without this, the script prints to STDOUT.

Example:

  1. First, create your host list based on the mounts or projects you know you will be operating on. For example, if you were making a change only to the secondary cluster, which currently serves maps and scratch, you might generate a host list with the command:
    bstorm@cloud-cumin-01:~$ sudo nfs-hostlist -m maps scratch --all-projects -f hostlist.txt
    
    Note that root/sudo is needed because this interacts with cumin's query setup to get hostnames. It will take quite a while to finish because it also calls openstack-browser's API to read Hiera settings.
  2. Now you can run a command with cumin across all hosts in hostlist.txt similar to
    bstorm@cloud-cumin-01:~$ sudo cumin --force -x 'F{/home/bstorm/hostlist.txt}' 'puppet agent -t'
    

It is sensible to generate the host list shortly before the changes take place, so that you can respond quickly with cumin when needed.
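
As a hypothetical follow-up check (the mount path below is illustrative only), the same host list can be reused to verify that clients recovered, for example by confirming that the NFS mount still responds:

  # Hypothetical check: the short timeout makes stale or hung NFS mounts show up as failures.
  bstorm@cloud-cumin-01:~$ sudo cumin --force -x 'F{/home/bstorm/hostlist.txt}' 'timeout 10 ls /mnt/nfs > /dev/null'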

How to enable NFS for a project

Follow this runbook.

Reference

nfs-exportd

Every 5 minutes, nfs-exportd regenerates the contents of /etc/export.d to mirror the active projects and shares defined in /etc/nfs-mounts.yaml.

The daemon fetches project information from OpenStack to learn the instances' IP addresses and adds them to the exports ACL.
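
To inspect the result on the NFS server, something like the following can be used (the exact file names under /etc/export.d vary per project; exportfs is the standard NFS utility, not specific to this setup):

  # List the generated export files and show what the kernel is currently exporting.
  ls /etc/export.d/
  sudo exportfs -v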

See cloudnfs::fileserver::exports.

There is a known issue: if some OpenStack component (for example, Keystone) is misbehaving, the API call will typically return a 401. Don't let such failures slip past the traceback; we want the service to raise exceptions and fail rather than silently remove exports. There is also a cron job that backs up the exports to /etc/exports.bak.
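
If you suspect exports have been dropped, one rough way to compare the live export list against the backup (assuming the backup is in a format close enough to exportfs output to eyeball the diff) is:

  # Hypothetical sanity check: show how the current exports differ from the backed-up copy.
  sudo exportfs -s | diff /etc/exports.bak -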

Backups

TODO: cinder-backups