Keyholder is a set of scripts that allow a group of users to use an SSH key without sharing the private key with the members of the group. This is accomplished by running a locked-down instance of ssh-agent and running ssh-agent-proxy in front of it. The proxy allows trusted users to list the ssh-agent's identities and to send the agent signing requests. Access to a given key is restricted to members of a given Unix group; requests from users who are not members are rejected.
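The group-based access rule can be illustrated with a small shell sketch. This is illustrative only, not the actual ssh-agent-proxy implementation; the `in_group` helper is invented for the example:

```shell
# Conceptual sketch of the proxy's authorization rule: a signing request
# for a key is allowed only when the requesting user belongs to the Unix
# group configured for that key. (Illustrative only; not the real proxy.)

# in_group USER GROUP -> exit 0 if USER is a member of GROUP
in_group() {
    id -Gn "$1" | tr ' ' '\n' | grep -qx "$2"
}

user="$(id -un)"
# Every user belongs to their own primary group, so this request passes:
if in_group "$user" "$(id -gn)"; then
    echo "signing request allowed for $user"
else
    echo "signing request rejected for $user"
fi
```

The real proxy makes this decision per key, based on a key-to-group mapping in its configuration.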
Administration of Keyholder is done via a shell script, keyholder, located under /usr/local/sbin, which is typically not in a normal user's $PATH.
keyholder -- Manage shared SSH agent
  keyholder status -- lists service status and the fingerprints of all identities currently represented by the agent
  keyholder add KEY -- adds a private key identity to the agent
  keyholder arm -- adds all keys in /etc/keyholder.d
  keyholder disarm -- deletes all identities from the agent
  keyholder start/stop/restart -- starts / stops / restarts the keyholder service
Keys are stored under /srv/private/modules/secret/secrets/keyholder/ in the Puppet#Private puppet repo.
We use Keyholder for SCAP deploys. Keyholder starts automatically, but a user with root must load both the mwdeploy and deploy-service identities; this is by design. If the agent is not armed, an Icinga check will issue the following alert:
PROBLEM - Keyholder SSH agent on tin is CRITICAL: Keyholder is not armed. Run `keyholder arm` to arm it.
To arm the agent, log in to deploy1002 and run keyholder arm. The agent will automatically attempt to load the secret deployment key, which is protected with a passphrase. The passphrases for production are stored in Pwstore; for WMCS OpenStack projects they are usually stored on the project-specific puppetmaster.
|Key||Passphrase location|
|cumin_openstack_master||production for WMCS|
|cloud_cumin_master||production for WMCS|

WMCS project passphrases:
|Key||Cluster||Passphrase|
|eventlogging||beta cluster||passphrase is: 'eventlogging'|
|dumpsdeploy||beta cluster||passphrase is (without the quote marks): 'some boring passphrase'|
You can verify that the agent is armed by running keyholder status.
SCAP deployers do not have access to the private key or the passphrase, so if the deployment server is rebooted, SCAP deployments will be blocked until a root user arms the agent.
A deployer can then use the proxy to connect to a host:
SSH_AUTH_SOCK=/run/keyholder/proxy.sock ssh -oIdentitiesOnly=yes -oIdentityFile=/etc/keyholder.d/<somekey> <somehost>
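For repeated use, the socket and key options can be wrapped in a small shell function. This wrapper is hypothetical (not part of Keyholder); the key name mwdeploy is one of the identities mentioned above, and the host name is a placeholder:

```shell
# Hypothetical convenience wrapper (not shipped with Keyholder): injects
# the proxy socket and the chosen identity file into a single ssh call.
keyholder_ssh() {
    key="$1"; shift
    SSH_AUTH_SOCK=/run/keyholder/proxy.sock \
        ssh -oIdentitiesOnly=yes -oIdentityFile="/etc/keyholder.d/$key" "$@"
}

# Example usage (host name is a placeholder):
# keyholder_ssh mwdeploy <somehost> uptime
```

Setting SSH_AUTH_SOCK per command, rather than exporting it in the shell, avoids accidentally routing unrelated ssh sessions through the proxy.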
Generating a key for a new identity or updating an existing one
Use the following commands on puppetmaster1001 (so the key never touches the network or another host before being set up in the private repo):
When you generate the new SSH key, set the comment (-C) to the path the key will have under /etc/keyholder.d/ on the target hosts:
puppetmaster1001# cd /root
puppetmaster1001# ssh-keygen -t ed25519 -b 256 -C "/etc/keyholder.d/<identity>" -f <identity>
# check keys look fine
puppetmaster1001# mv <identity> <identity>.pub /srv/private/modules/secret/secrets/keyholder/
puppetmaster1001# cd /srv/private/modules/secret/secrets/keyholder/
puppetmaster1001# git add <identity> <identity>.pub
puppetmaster1001# git commit
application_host# run-puppet-agent
deployment_host# run-puppet-agent  # note: there are 2 deployment hosts, one for each DC as of this writing
deployment_host# keyholder restart  # 'keyholder add' also works; restart was used here because an existing key was being updated
deployment_host# keyholder arm
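Before committing a freshly generated key, it can be sanity-checked with ssh-keygen -l. The sketch below generates a throwaway key in a temporary directory purely to illustrate the check; note that it uses an empty passphrase, whereas real Keyholder deployment keys are passphrase-protected:

```shell
# Illustrative only: generate a throwaway key (empty passphrase, unlike
# real Keyholder keys) and confirm its type and comment look right.
tmp="$(mktemp -d)"
ssh-keygen -t ed25519 -C "/etc/keyholder.d/example_identity" \
    -f "$tmp/example_identity" -N "" -q

# Prints something like: 256 SHA256:... /etc/keyholder.d/example_identity (ED25519)
out="$(ssh-keygen -lf "$tmp/example_identity.pub")"
echo "$out"

rm -r "$tmp"
```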
If using ED25519 keys is not possible because of incompatibilities (e.g. network equipment or access to PSUs), it is possible to fall back to generating RSA keys:
puppetmaster1001# ssh-keygen -t rsa -b 4096 -C "/etc/keyholder.d/<identity>" -f <identity>
- Scap3 documentation for ssh access https://doc.wikimedia.org/scap/scap3/ssh-access.html