I usually have files named according to the "username@host" scheme and an ssh_config that picks them up based on the scheme. If username@host doesn't exist, it falls back to some username@default key (first id_ed255yaddayadda, then id_rsa etc.)
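That fallback pattern can be expressed directly in ~/.ssh/config, since ssh tries IdentityFile entries in order. A sketch (names and paths here are hypothetical, not the commenter's actual setup):

```
Host git.example.com
    User alice
    IdentityFile ~/.ssh/alice@git.example.com
    IdentitiesOnly yes

# fallback for everything else
Host *
    IdentityFile ~/.ssh/alice@default_ed25519
    IdentityFile ~/.ssh/id_rsa
```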
I recommend one per "service" or one per account per service. i.e. one for Github, one for Codeberg, etc.
For individual hosts, I also tend to group by a security boundary, like personal hosting, friends, guest network, work, etc. If you can get away with it, use the CA feature, so you have an SSH CA setup for home, etc.

I try to think about the failure mode if a key is stolen/lost/etc. Ideally it exposes only a small subset of what you can do.

For storage, I use 1Password, but anything similar works great: Secretive, Yubikey, Bitwarden, etc.
z0mbix | 23 hours ago
I either use Secretive (Mac only) to store my private key in the Secure Enclave and therefore have a key per device, or I use Tailscale SSH and don’t have to worry about SSH keys at all
https://secretive.dev/
jamesog | 23 hours ago
+1 to this. I only use Secretive now, but I don't need to distribute my keys. I use a Yubikey as a CA and sign each device's public key: https://jamesog.net/2023/03/03/yubikey-as-an-ssh-certificate-authority/
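The signing flow in that setup boils down to a few ssh-keygen invocations. A minimal sketch with plain files standing in for the Yubikey-held CA key (identities, principals, and paths are hypothetical):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# CA keypair (in the linked setup this lives on the Yubikey instead of on disk)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/user_ca" -C "user-ca"

# A device keypair, e.g. one generated on a laptop
ssh-keygen -q -t ed25519 -N '' -f "$tmp/laptop" -C "laptop"

# Sign the device's public key: -I is the certificate identity,
# -n the allowed principals, -V the validity window
ssh-keygen -q -s "$tmp/user_ca" -I laptop-2025 -n myuser -V +52w "$tmp/laptop.pub"

# Inspect the resulting certificate (ssh-keygen writes laptop-cert.pub)
ssh-keygen -L -f "$tmp/laptop-cert.pub"
```

On each server, sshd then only needs TrustedUserCAKeys pointing at the CA public key; individual device keys never have to be distributed.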
seabre | 22 hours ago
Using a yubikey as a certificate authority is a great idea.
pta2002 | 22 hours ago
I had no idea this was possible! This looks great, my 8 or so SSH keys are getting unwieldy.
Foxboron | 21 hours ago
macOS supports this natively these days.
https://gist.github.com/arianvp/5f59f1783e3eaf1a2d4cd8e952bb4acf
heckie | 22 hours ago
+1 to secretive on my macbook.
Otherwise, I use bitwarden now that it has first-class support for SSH keys. their ssh agent is acceptable enough for making dotfiles easily portable to a new desktop-ish device.
I'm a bit lazy and don't really do "one key per device" and moreso one key per secrets medium. I played with yubikey ssh key/auth for a bit, but it's just a bit too clunky.
gcollazo | 21 hours ago
Just learned about Secretive, thanks. The 1Password ssh agent was the last thing keeping me there. I might try migrating away.
altano | 20 hours ago
Wow, super interesting.
If you lose your machine you lose your key, I assume? How do you handle that?
atmosx | 15 hours ago
By securely storing backup keys offsite?
By the way, the SE protects against this kind of situation. So the HW doesn’t matter as much as the key itself. And the keys are accessible only by you.
Another negative is that you can’t work “remotely” by SSHing to that computer since the biometric 2FA will fail.
FIGBERT | 4 hours ago
I used to use Secretive, but transitioned away as I wanted to use x25519 keys rather than ECDSA. I think the key type is limited by macOS, not the application, but it looks like there has been no progress there and I can't go back until that is an option.

Foxboron | 21 hours ago
I have some opinions.
I wrote ssh-tpm-agent to store hardware-bound SSH keys. https://github.com/Foxboron/ssh-tpm-agent. Blog post: https://linderud.dev/blog/store-ssh-keys-inside-the-tpm-ssh-tpm-agent/

Next up is that I don't want long-lived ssh keys. I want short-lived ssh certificates bound to the device. I wrote ssh-tpm-ca-authority to try and solve this, but I was not super happy with how it turned out. https://github.com/Foxboron/ssh-tpm-ca-authority.
These days I run a smallstep CA in my home infrastructure, but I want this to issue short-lived hardware bound ssh certificates.
My current goal is to try and repurpose the device-attest-01 ACME challenge to issue ssh certificates. For this to work I wrote my own attestation CA that slots into step-ca, and wrote up my own acme client and an agent that currently only does PKCS11 client mTLS certificates to my browser. All of this is very WIP and I plan to write up more things about this soon. https://github.com/Foxboron/attezt
My goal is to hack support for a Content-Type: ssh-certificate header into the ACME certificate retrieval chain in smallstep to issue ssh certificates, and then ram these into my ssh-tpm-agent through the agent socket.

We'll see how far I get.
martinkirch | 2 hours ago
I've set it up, but did not manage to have it ask me for a PIN :/ Then I watched your FOSDEM'25 talk, where you show a private key, and that got me wondering: why even set a password at that point?

Anyway, as someone who's always thought yubikeys are too expensive and too easy to lose/forget, this is something I've been looking for. Many thanks!
Foxboron | an hour ago
Uhh, that might be a bug. Figuring out a valid askpass binary is a bit hit and miss. I also suspect I might not be bubbling up an error properly somewhere.

https://github.com/Foxboron/ssh-tpm-agent/blob/master/askpass/askpass.go#L43
Check if you have the binary somewhere along those paths. If you do and it's not listed, please file an issue and I'll add it.
So, this is a bit complicated.
The TPM keys can be bound to a valid boot state of your machine. ssh-tpm-agent does not support this yet, so any keys you make are not restricted to a valid and known configuration of your machine. That type of problem is hard and I haven't sat down to figure out a good UX around this.

What this means is that if you have a discrete (read: physical) TPM device on your machine, someone could get your key blob, take the TPM device, transfer both to another motherboard and start using your key. Or if you don't use Secure Boot, anyone with a valid linux ISO can boot and abuse the key on your machine. Setting a PIN is at that point required to make sure the key can't be abused this way.
The gist with a valid ssh TPM key for my github is here :) https://gist.github.com/Foxboron/e15fcaa3c497c40c4c8e75130f551e2e
abbe | 9 hours ago
Thanks for ssh-tpm-agent. I wonder if -sk keys can be implemented with ssh-tpm-agent?

Foxboron | 7 hours ago
How are you imagining this to work?
abbe | 6 hours ago
An SSH_SK_PROVIDER implementation which will communicate with the TPM, and delegate touch/verify to external processes, which could be critical in decrypting TPM-based credentials.

OR an ssh-agent implementation, as we have now, doing the same.

Foxboron | 6 hours ago
Right, I've only focused on the ssh-agent protocol, and I'm not soooo sure how well the SSH_SK_PROVIDER stuff is going to work with Golang.

This is deceptively complicated because Linux can't actually provide you with a safe fingerprint/touch system, as they are just /dev/ nodes with no way to validate if someone is faking it or not.

There is infrastructure for this in the TPM spec, but I have not seen any code using it.
[OP] mt | a day ago
Post inspired by me looking in shame at my mess of a .ssh directory, in which lies a trove of keys, with no memory of what half of these are keys to, but too afraid to delete.
So my practices are not working that well, but in general i try to compartmentalize, using a key for one “section”, usually a host.
vpr | 22 hours ago
Key per host definitely needs a wrapper script on generation; no way anyone can keep track of that otherwise. There's no real security advantage unless they're all encrypted with different passwords, though.
What threat model are you defending against?
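A generation wrapper like the one mentioned above can be tiny. A sketch (the naming convention is made up for illustration; real use would prompt for a passphrase instead of passing -N ''):

```shell
#!/bin/sh
# newkey: generate a per-host ed25519 key with a self-describing filename and
# comment, then print the ~/.ssh/config stanza that pairs with it.
# usage: newkey git.example.com
newkey() {
    host="$1"
    user="${USER:-$(id -un)}"
    key="$HOME/.ssh/${user}@${host}_ed25519"
    mkdir -p "$HOME/.ssh"
    ssh-keygen -q -t ed25519 -N '' -f "$key" -C "${user}@$(hostname) for ${host} ($(date +%F))"
    printf 'Host %s\n    IdentityFile %s\n    IdentitiesOnly yes\n' "$host" "$key"
}
```

The comment embedded in the public key (-C) is what later tells you which entry in a remote authorized_keys belongs to which machine.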
k749gtnc9l3w | 22 hours ago
If you use IdentitiesOnly and specify keys for domains/subnets in .ssh/config, you make cross-context identity matching not absolutely free for a privacy-attacking adversary (especially between leaky contexts, like forges where some metadata API shows user keys). This forces listing the use cases in .ssh/config, making tracking feasible, too!

Otherwise… well, if things like the Debian key generation story are expected to happen again but with shorter timespans, such key management limits the impact. Although, I guess, you still need to keep track of per-key targets, as knowing which keys are affected but not knowing how to rotate them sounds like more pain than gain.
regalialong | 10 hours ago
I think there's a caveat there, if you view SSH keys as pure key material, that's totally a valid stance.
But you can sign git commits with an SSH key, so having a stable and well-known identity for forges is useful in that context.
k749gtnc9l3w | 10 hours ago
You might still want multiple stable identities for different forge contexts that take work to match, or want the attacker getting public keys from a non-forge SSH target to not get forge matches for free.
But also yeah, I know how to match my identities — but getting to high confidence takes some cost, and many trackers don't want noticeable per-entity costs so let them get what they pay for.
sarcasticadmin | 23 hours ago
Separate hardware tokens (Yubikeys) for personal and work. Keys never live on any machine and are portable. Never going back.
vpr | 22 hours ago
How do you manage backups?
An old job had us generate them on "never networked" machines and then import them to the key.
sarcasticadmin | 22 hours ago
I do the same thing. That way I have 2 hardware tokens (main and backup) with the same key and then a separate offline backup.
davidg | 19 hours ago
I'm planning to do the same. The latest release of libssh (0.12) added support for using FIDO/U2F keys, so once I've added support for that to the terminal I use I'll try switching. Will have to get a second Yubikey as a backup sometime, but I've been planning to do that anyway to backup the non-SSH things I already use my Yubikey for.
grahamc | a day ago
I have a 1Password-managed SSH key that I use for pretty much everything. For company-managed infrastructure, we use short term SSH key certificates issued by Vault for granting and managing access.
mdaniel | 17 hours ago
I used to recommend Teleport over Vault but since the AGPL switch it becomes harder in organizations that are AGPL-averse
The other side of that coin is that SSM actually works on non-AWS hosts (although at this moment it still uses AWS for the control plane) as well as allowing connectivity without necessitating inbound traffic to the hosts
bryfry | 23 hours ago
I use gpg-agent and only have 2 yubikey-backed keys that I use (1 device/key is stored for backup in a fire safe). I followed the drduh YubiKey Guide which is excellent.
skobes | 22 hours ago
A trick I learned for git is:
git clone -c core.sshCommand="ssh -i ~/.ssh/mykey_ed25519" [url]
This configures the cloned repo to use the specified key to talk to the remote. No need to mess with ssh-agent or .ssh/config. Particularly useful if you have keys for multiple Github accounts.
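If the repo is already cloned, the same setting can be applied after the fact with a plain git config call (reusing the key path from the comment above; the throwaway repo here just makes the sketch self-contained):

```shell
# Demo: per-repo key pinning on an existing clone
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"

# Pin the SSH key for this repo only; IdentitiesOnly stops the agent
# from offering other keys first
git config core.sshCommand "ssh -i ~/.ssh/mykey_ed25519 -o IdentitiesOnly=yes"

# Show the command git will now use for fetch/push
git config core.sshCommand
```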
cx | 10 hours ago
If you do want to do it via ~/.ssh/config, you can do it like (host alias and repo path here are illustrative):

Host github-alt
    HostName github.com
    User git
    IdentityFile ~/.ssh/mykey_ed25519
    IdentitiesOnly yes

then when you clone, instead of

git clone git@github.com:someuser/somerepo.git

do

git clone git@github-alt:someuser/somerepo.git
ig0rm | 11 hours ago
If you need to work with multiple Github accounts on the same machine it's more convenient to use conditional git config and put all account-specific settings (different emails at least) there:
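A sketch of that pattern (directory layout, emails, and key path are hypothetical):

```
# ~/.gitconfig
[user]
    email = me@personal.example
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work

# ~/.gitconfig-work
[user]
    email = me@company.example
[core]
    sshCommand = ssh -i ~/.ssh/work_ed25519
```

Any repo under ~/work/ then picks up the work identity and key automatically.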
benoliver999 | 23 hours ago
I use ssh-agent with keys stored in KeepassXC. Works better than I'd thought it would.
ThinkChaos | 22 hours ago
I do that with a couple keys for different identities, and my ssh config determines which one is used based on host rules.
The key (heh) part is you can point IdentityFile to the public key so the private doesn't need to be on disk.

benoliver999 | 22 hours ago
Yeah I have different personal and work keepasses, the idea being I can just hand over the database to the next guy when I leave the company.
viraptor | 23 hours ago
One key for work, one key personal. On yubikeys. With the copy printed out on paper in a folder with all the other recovery stuff.

I'm not a fan of the per-host or other multiple-keys approach. If you're tempted, you should know exactly why you're going that way and be able to explain the threat model for it / what's improved. (My reason for splitting out work is that they may require rotating the key at some point for BS compliance reasons and I don't want to rotate my personal key.)
chloe | 23 hours ago
I use the 1Password SSH agent to log in to my servers (Windows, macOS, Linux). Works remarkably well and I don't have to store ~/.ssh keys locally.
atmosx | 15 hours ago
IMO this is the best option if you want access to your keys in a distributed fashion (ie from multiple hardware devices). Carrying a Yubikey around sounds like a PITA to me.
Sharparam | 7 hours ago
Mine just lives on my keychain and it's no effort at all to bring it around. It's attached with a little plastic quick release thing so it's easy to take off and put back on.
jperras | 23 hours ago
A long, long time ago I wrote a post about this. It's mostly still relevant: https://nerderati.com/2011-03-17-simplify-your-life-with-an-ssh-config-file/
The tl;dr: you can use ~/.ssh/config for most basic use cases. These days password managers such as 1Password change the game a little bit, though, so you might want to investigate that.

Rovanion | 21 hours ago
I store my SSH key in my hardware Yubikey and use an ssh-agent, and it is fantastic!
Diti | 19 hours ago
I use tailscale ssh; I’m already authenticated by the VPN so it doesn’t make sense to re-authenticate again (and in the very few cases where it doesn’t, check mode forces a re-auth). For the few servers not on Tailscale, an SSH key derived from the authentication subkey of my PGP key.

By the way, you can use your own OpenID Connect provider for authenticating to Tailscale. I’m using Codeberg (with TOTP 2FA enabled); see Codeberg as an OIDC Provider for Tailscale.
Irene | 17 hours ago
I store them on my yubikeys, using gpg mode not the newer certificate-based mode for ssh. I have a distinct ssh key for each yubikey, which allows me to rotate them when the yubikeys are physically destroyed (my cats found one, a while back...). The private keys never exist on a computer, they are only ever held on the hardware tokens.
I use gpg-agent in its ssh-agent emulation mode, which allows me to forward the agent as usual.
boramalper | 11 hours ago
Not what you are asking here but I thought you and others may find it relevant and interesting nevertheless: you can sign your git commits using your SSH keys too (guide), which reduces the need of maintaining a separate PGP/GPG identity.
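The git side of that is only a few config settings (key path is hypothetical; SSH signing needs git 2.34 or newer):

```shell
# Tell git to sign with SSH instead of OpenPGP
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# For `git log --show-signature` to verify, git also needs an allowed
# signers file mapping identities to keys, e.g.:
#   git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
# with lines like: me@example.com ssh-ed25519 AAAA...
```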
rjzak | 22 hours ago
I used this guide to help me set up SSH keys (and GPG) on Yubikeys. This approach also allows for backups (multiple keys). https://github.com/drduh/YubiKey-Guide
k749gtnc9l3w | 22 hours ago
I have one key per «context», different keys on different client devices. Keys are backed up as «append-only data». They live in ~/.ssh, listed in ~/.ssh/config with appropriate targets, and IdentitiesOnly is on. I use ssh-agent for unlocking once per session (where hibernation counts as breaking the session). I do have some scripting integration for unlocking the password manager and loading keys into ssh-agent, but key file storage is independent.

m_a_r_k | 21 hours ago
I used to use one key, one machine.
these days I use my gpg key/agent, which is on a security key. I have my master key on one hardware token, and signing/auth/encryption keys on one I carry. Offline paper backups in safe.
this means if anything happens I can get a new yubikey/nitrokey/whatever, and authorize it from my offline master. it should work everywhere
olex | 21 hours ago
I have 3 keys: personal, work, git-sign. They all live in ~/.ssh, backed up in my KeePassXC and on a separate flash drive. In my ~/.ssh/config I set which keys (among other settings) to use for each host.
yorickpeterse | 21 hours ago
I have one SSH key pair per device, stored only on said device. The passphrase is stored in Bitwarden and the OS' keychain so the SSH key is unlocked automatically upon login. Since I have multiple devices and add those to the relevant services (e.g. both my desktop and laptop have access to GitHub, my servers, etc) I don't worry about backing up the SSH keys, as in the worst case I can just generate a new one and bootstrap it using another device.
aae | 19 hours ago
One per system. I sync public keys with scripts that pull from my public website; GitHub/GitLab post them publicly as well. When systems go away, I delete the public key from the site and they get revoked when synced.
parisosuch | 18 hours ago
Proton Pass, one master password I use nowhere else but have it memorized.
mdaniel | 16 hours ago
I wasn't aware they had this functionality, so if it interests others, too: https://protonpass.github.io/pass-cli/commands/ssh-agent/
although they sure do like doing this bullshit https://github.com/protonpass/pass-cli
toastal | 13 hours ago
I have been meaning to use the keys from my OnlyKey hardware (open source, unlike Yubico’s offering), but my last laptop (the one that caught on fire) didn’t have USB-A so it was always too much of a pain to deal with.
ki9 | 12 hours ago
One key per host, and I put the pubkeys in ldap instead of .ssh/authorized_keys. SSH servers read the pubkeys from ldap when authenticating, using AuthorizedKeysCommand in sshd_config.

hmaddocks | 10 hours ago
I have a key per site/service and I store them in 1Password. 1Password is super annoying but I live with it.
j11g | 9 hours ago
I store them in Bitwarden.
kylewlacy | 9 hours ago
I just use one key per machine, using whichever OS-level key store makes sense to auto-unlock or OS-level auth-- Kwallet since I'm on KDE, but Keychain on macOS, or whatever Windows has, etc. It's not the strongest security compared to a separate key per context/host or whatever... but realistically, I can't think of many situations where I'd be compromised in a way where some but not all my keys would be stolen. (Yubikey-based or TPM-based SSH keys sounds pretty appealing though... this thread has given me some homework)
But massive quality-of-life hint I learned: the SSH keys from your GitHub account are public! Same for Codeberg, probably other forges too. So if you're setting up a new server and want to get SSH up and running real quick, you can do:
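The trick relies on the forges serving each account's public keys at a fixed URL (on GitHub it's `https://github.com/<username>.keys`; Codeberg works similarly). A sketch with "alice" as a placeholder username and the actual fetch left commented out:

```shell
#!/bin/sh
# Build the URL for a GitHub account's public keys; "alice" is a placeholder.
user="alice"
url="https://github.com/${user}.keys"
echo "would fetch: $url"
# curl -fsS "$url" >> ~/.ssh/authorized_keys   # the real one-liner
```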
Easy way to give me access to a server! ...which you can do if you'd like, but I'd encourage you to add your own keys too =)
Foxboron | 7 hours ago
I have some ideas on how to standardize on the URL through some .well-known mechanism. I just need to write down the proposal properly and write up a key retrieval CLI thing. https://github.com/C2SP/C2SP/issues/192
tazjin | 7 hours ago
One key per host, with the key material on yubikeys.
There are some systems that cannot deal with these keys (looking at you, Gerrit!), and these require an individual approach each time...
brianm | a day ago
For my own stuff, where I can configure sshd, I use epithet. Where I need some stable key, instead of being able to use certs, I keep my keys in a yadm secret in my dotfile repo.
nortti | 5 hours ago
My basic setup for "client" devices is pretty conventional: one primary user ssh key generated on-device, stored in ~/.ssh/id_ed25519, passphrase-protected, using ssh-agent. Currently on my primary computer I use KeePassXC to load the key into the ssh-agent (the passphrase to unlock it is stored in the password vault, but the keyfile itself is still in the traditional location).
Having a key-per-client is not really a compartmentalization thing for me, as much as it is a way to incrementally adopt better defaults. I can't really see myself rotating my key on a system that I'm using day-to-day due to the disruption, but setting up a new system is always disruptive to some extent, so I might as well e.g. upgrade from an rsa to ed25519 key while I'm dealing with it anyways. It also helps ~/.ssh/authorized_keys maintenance, by transforming "is this key still required" to "is this computer still in use".
For automated processes running on servers or as some other user than my primary login, I provision more limited-access SSH keys and store them outside of .ssh, with a descriptive name, and have the process manually specify the keyfile to use each time. That's mainly to be intentional about the usage of the credentials, as otherwise it'd be too easy for some other process to run with them, especially as I can't reasonably store them passphrase-protected.
freddyb | 23 hours ago
I usually have files named according to the "username@host" scheme and an ssh_config that picks them up based on the scheme. If username@host doesn't exist, it falls back to some username@default key (first id_ed255yaddayadda, then id_rsa, etc.)
diktomat | 6 hours ago
Is there some trick/automation behind this or do you just configure each user@host pair individually?
freddyb | 5 hours ago
More or less like so (thrown together, not an actual config because I'm on the wrong device):
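A minimal sketch of what such a config could look like (the file layout is assumed from the description, not their actual config); ssh tries each IdentityFile in order and silently skips ones that don't exist, which gives the fallback behavior:

```
# ~/.ssh/config (sketch)
Host *
    # Per-account key, if one exists for this remote user@host pair...
    IdentityFile ~/.ssh/%r@%h
    # ...otherwise fall back to the default keys.
    IdentityFile ~/.ssh/id_ed25519
    IdentityFile ~/.ssh/id_rsa
```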
And then
%h is the remote host name and %r the remote username (not local user name!)
jak2k | 22 hours ago
I have two keys in my KeePass database: One for git signing and git server authentication and one for connecting to my server via SSH.
zie | 4 hours ago
I recommend one per "service" or one per account per service, e.g. one for GitHub, one for Codeberg, etc.
For individual hosts, I also tend to group by a security boundary, like personal hosting, friends, guest network, work, etc. If you can get away with it, use the CA feature, so you have a SSH CA setup for home, etc.
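The CA feature mentioned here boils down to ssh-keygen's `-s` flag; a sketch with throwaway names (home_ca, id_home, and the principal "alice" are all placeholders), after which servers only need to trust the CA public key via TrustedUserCAKeys instead of listing individual keys:

```shell
#!/bin/sh
# Create a CA, a user key, and issue a 52-week certificate for principal "alice".
set -e
dir=$(mktemp -d); cd "$dir"
ssh-keygen -q -t ed25519 -f home_ca -N '' -C 'home CA'   # the CA keypair
ssh-keygen -q -t ed25519 -f id_home -N ''                # a user keypair
ssh-keygen -q -s home_ca -I alice-home -n alice -V +52w id_home.pub
ls id_home-cert.pub    # the certificate sshd will accept
```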
I try to think about the failure mode if a key is stolen/lost/etc. Ideally it exposes only a small subset of what you can do.
For storage, I use 1Password, but anything similar works great: Secretive, Yubikey, Bitwarden, etc.