At the urging of the tech lead at the time, my team has pursued the ultimate ssh trick: not using it.
i.e. we intentionally do not enable (human) ssh access to the production hosts. In an autoscaling AWS world, logging into an individual machine by hand is the last thing you want to be doing. So we are learning the (sometimes difficult!) lessons of how to rely only on what our logging, tracing, monitoring, and deployment automation (including snapshotting) can afford us. I suspect sooner or later we will break down and swap in a login-enabled image to diagnose some sticky problem, but -- as much as I resented the idea when he presented it -- it's an interesting discipline. Anyone else living by that principle?
This is great, and at first glance I think it works at scale, but a lot of shops outside the 'startup world' -- older IT shops still operating like it's the 90s or 2000s -- will find that hard to do. When you have 5 monster workhorse servers instead of 50 VMs, where you could spin up 50 more within 30 minutes, it's hard to get away from direct server access for the devops team.
I like the idea of it, given how expendable servers are.
I'd love to hear any instances where you have SSH'ed in, or what kind of bugs would prompt you to want to.
Maybe because you didn't have some monitoring on a specific aspect of the server, or an application bug where you needed to use strace - something like that?
This is the same principle I have started to use with Docker. The container runs a single application and writes its logs, but besides that there is no way to get into it: no ssh, nothing.
A lot of places disable it during production hours of operation and enable it for maintenance windows. That's done via access control rather than by actually disabling the service, though.
The ~/.ssh/config file was something I discovered when my company was going through some changes and I had 2 different usernames for accessing internal systems.
    Host hosta hostb hostc
        User usera

    Host *
        User userb
Since my local system username was different from the username on many of the remote systems, I could wildcard to a different default username and then keep a list of servers that should use my other username.
The bad thing is, as the blog post shows, "Host" above is really just an alias. So if I have an entry like:
    Host hosta.mycompany.com
        User usera
and then try to do "ssh hosta", even if hosta resolves to hosta.mycompany.com, it won't match the config entry, since config entries are matched before any DNS lookup happens.
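One way around that (a minimal sketch, using the placeholder names from above) is to make the short name the Host alias and point HostName at the fully-qualified name:

    Host hosta
        HostName hosta.mycompany.com
        User usera

Then "ssh hosta" matches the stanza and still ends up connecting to hosta.mycompany.com.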
So, I would say that the easiest solution here is tab completion. Suitably modern bash-completion should expand entries in ~/.ssh/config (as well as ~/.ssh/known_hosts if it's not hashed).
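If your distribution's bash-completion doesn't already do this, a rough fallback sketch for ~/.bashrc (assuming an ordinary ~/.ssh/config with Host lines) would be something like:

    # Offer non-wildcard Host aliases from ~/.ssh/config when completing "ssh <TAB>"
    _ssh_config_hosts() {
        local hosts
        hosts=$(awk '/^Host / {for (i = 2; i <= NF; i++) if ($i !~ /[*?]/) print $i}' ~/.ssh/config 2>/dev/null)
        COMPREPLY=($(compgen -W "$hosts" -- "${COMP_WORDS[COMP_CWORD]}"))
    }
    complete -F _ssh_config_hosts ssh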
Any list of 'SSH tricks' should contain: 'ssh -D 2001 user@host.com'. This creates a SOCKS proxy on localhost:2001 that goes through host.com. For example, I use a digital ocean instance hosted in the US to tunnel through, so that I can watch hulu (I'm from Europe).
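For anyone who hasn't tried it, a quick sketch of what that looks like in practice (host.com and port 2001 are just the examples from above, and ifconfig.me stands in for any "what's my IP" service):

    # Dynamic port forwarding: a SOCKS proxy on localhost:2001, tunnelled via host.com
    ssh -N -D 2001 user@host.com

    # Point a client at the proxy; the reported address should now be host.com's
    curl --socks5-hostname localhost:2001 https://ifconfig.me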
There are also lots of legitimate uses for a SOCKS proxy, such as sshing to a useful machine in a corp network and using something like FoxyProxy to direct *.intranet sites via the SOCKS proxy.
It's not all about fricken video piracy :)
Also commonly used in academia when you are not on campus but need access to IP-restricted journals. Or to freak out your lab mates and start the robot from home.
Requires OpenSSH > 6.0 to work consistently with ControlPersist. I still have problems on CentOS 6.x boxes, so I still use nc in a ProxyCommand for those hosts.
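For reference, the nc-based ProxyCommand looks roughly like this (bastion and legacy-host are hypothetical names; on newer clients you'd use "ProxyCommand ssh -W %h:%p bastion" instead):

    Host legacy-host
        # Hop through the bastion; nc just pipes the TCP stream to the real target
        ProxyCommand ssh bastion nc %h %p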
I have the following in my ~/.ssh/config, which for those who don't know, maintains the connection even after you disconnect so you can reconnect really quickly:
    ControlMaster auto
    ControlPath /tmp/%r@%h:%p
    ControlPersist yes
Having said that, sometimes I need to remove the entry from /tmp to reconnect if my network settings have changed.
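Rather than deleting the socket by hand, you can usually ask ssh to clean it up for you (assuming a reasonably recent OpenSSH):

    ssh -O check user@host   # is there a live master connection for this host?
    ssh -O exit user@host    # tell the stale master to exit, then just reconnect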
Have you used ControlPath with a very long hostname? I haven't worked out how to stop one of my IPv6 hosts failing with that setup - the DNS RR is really long (causes problems with IRC, too!)
Hmm. I have one control file with 44 chars in it. That is probably the longest one I have hit. You could always give it a dns cname alias or something.
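On newer OpenSSH (6.7+, if I remember right) there's also the %C token, a hash of the connection parameters, which keeps the socket name short no matter how long the hostname is:

    # %C expands to a hash of local host, remote host, port, and user
    # (make sure ~/.ssh/sockets exists first)
    ControlPath ~/.ssh/sockets/%C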
man pages are nice references, but in a lot of cases they seem to be kind of obtuse. They'll have a short technically correct description of every individual flag, but not really what you'd use the flag for, or how it combines with other flags.
Most man pages need a lot more examples. I like blog posts like this, which are pretty example-driven. They lay out an actual use case, and then walk through the flags necessary to achieve that goal. A man page starts with the flags, and assumes you'll know when and why to use them.
FWIW, I think it can be helpful to discover features of tools in smaller chunks, in the context of a blog post. man pages can be very dry and it's not always easy to think of all the possible ways some particular flag could help you.
I'm not necessarily defending this particular post, just making a general point about the more esoteric features of something like ssh :)
Actually, apparently less doesn't have a man page; I had to use less --help.
Anyways, it turns out there's a ton of functionality I never thought to look for. I only ever see "| less" and that's all it ever was to me. There's nothing more I really expected out of it than that, really.
But that's just how it goes, for the most part. You learn the basics about how to use a given tool, enough to serve the purpose you originally sought it out for, and then that's it, everything else is just noise. less lets me easily scroll through whatever output I pipe to it, ssh gets me onto another server, what more would I need or expect?
But then an article like this comes along and prompts me to look more closely at something I had been taking for granted, showing it to be much more versatile than I thought.
I liked the mention of Ansible (which thankfully abstracts away the need to log into a server via SSH altogether), but the author left out the fact that you can easily use any ansible module (250+ right now, more added all the time[1]) to manage your servers ad-hoc.
Or use the same syntax to build a playbook that you can run to manage infrastructure with the `ansible-playbook` command. Since Ansible uses SSH as its transport (in most cases -- you can do it other ways), if you can connect to a server via SSH (and who can't?), you can have it completely managed/version-controlled pretty simply.
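A couple of sketches of what ad-hoc management looks like (the inventory file and group names are made up, and on older Ansible versions the privilege-escalation flag is spelled --sudo rather than --become):

    # Check connectivity to every host in the "webservers" group
    ansible webservers -i hosts.ini -m ping

    # Restart a service across the whole group, escalating privileges
    ansible webservers -i hosts.ini -m service -a "name=nginx state=restarted" --become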
Or if there are multiple matches for file or directory names, it'll list them like bash normally would. This seamless integration is so awesome that I highly recommend ssh keys (and Cygwin for Windows users).
Often (non-IT) companies' firewalls do not allow anything but HTTP and HTTPS traffic and you have to go through proxies. That implies that you cannot get to the outside using SSH. In my days as a freelance consultant, I used Corkscrew (http://www.agroman.net/corkscrew/) to get SSH access.
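The usual setup, as far as I remember, is a ProxyCommand entry in ~/.ssh/config (the proxy host and port here are placeholders for whatever the corporate proxy actually is):

    Host outside-host
        # Tunnel the SSH connection through the HTTP proxy using CONNECT
        ProxyCommand corkscrew proxy.example.com 8080 %h %p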
sshuttle (https://github.com/apenwarr/sshuttle) is one of my favorite SSH tricks. It behaves like a VPN more than other ssh-based proxies that I've used.
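Typical usage is something like this (the remote host and subnets are placeholders):

    # Route traffic for 10.0.0.0/8 through user@host over SSH, VPN-style
    sshuttle -r user@host 10.0.0.0/8

    # Or forward everything, DNS lookups included
    sshuttle --dns -r user@host 0/0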
It doesn't really matter how many you have, you still need to protect them. Encrypt your laptop, lock your screen when you get up for a break, etc, etc.
Personally, I use one key pair per source user/device pair and then comment in the authorized keys where each public key is from. To me, it is more likely that a user/machine pair will be compromised than a key. This makes cutting a compromised box or account off easy. :-)
Personally, I like to have different keys that I treat with different levels of care/paranoia. I'm not particularly worried about leaving my github key 'added' to my ssh agent 24/7, but I don't do that with my work production key.
I prefer minimising the amount of work required to replace compromised keys. e.g.: I have a private SSH key on my work-supplied computer which I consider to be "compromised" for private purposes, but it's perfectly usable for work inside that company.
Use IdentityFile to specify which key to use with which remote host and you're golden.
edit: I also have that private SSH key on my personal computers so I can use SSH when working from home. Then when I stop working with that company I can simply remove that key, rather than generating a new SSH key and redistributing public keys to hosts that I use regularly.
edit edit: Using IdentityFile also helps automate the process of redistributing keys when you decide to generate a new one.
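Concretely, something like this in ~/.ssh/config (the key file names and host patterns are just examples):

    Host github.com
        IdentityFile ~/.ssh/id_github
        IdentitiesOnly yes

    Host *.corp.example.com
        IdentityFile ~/.ssh/id_work
        IdentitiesOnly yes

IdentitiesOnly stops the agent from offering every loaded key to every host, which also avoids tripping servers that cap the number of authentication attempts.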
When I stopped working for $company they had to remove my public key from all their authorized_keys files. There is no need for me to re-generate my private key...
Ouf, no thanks. I have a key per location. I have a single key for the machines I log in to at work. Another key for AWS. Another key for github. I also have separate keys for all of the above services for my semi-persistent local virtual machines.
I know. I was clarifying for the comment I replied to that there are situations where the steps you're citing are relevant, because there are situations where you have multiple keys on the same user/system.