Securing remote access via SSH, dealing with passwords
SSH is one of the best inventions since sliced bread. With the advent of networked computing, remote access to a system was a given. Networks are susceptible to sniffing, so it wasn't until SSH that you could rely on a secure terminal session. Use of SSH could have prevented Kevin Mitnick's spoofing attack on Tsutomu Shimomura in 1995.
A sad fact of SSH is that we're still using passwords. Passwords are pretty hopeless, by and large. No one likes having to remember passwords, let alone different passwords for different things. Passwords can also be lost, forgotten or stolen. They're not ideal for their job, which is helping you prove that you're you.
Using secure passwords, and better still replacing passwords with the more secure authentication methods discussed here, is of particular importance in a web hosting environment. Web hosting servers sit on a public network, where by default any of the millions of Internet users can attempt to maliciously compromise the security of the server.
That said, SSH is indispensable, and there's plenty we can do to improve the situation. If you've already been compromised, you probably want to do some cleanup right now; this article may help: Security Cleanup of a Compromised Hacked or Cracked Host
The first problem most people have with passwords is that they're too short. Older systems used to have an 8-character limit on passwords; anything longer would be ignored. Modern computers are very fast and can test every possible 8-character password in no time at all (this is called "brute-forcing"). This is made even worse by the fact that many people choose short AND easily-guessable passwords. If you're trying to break someone's password, you'll do well by trying dictionary words first, along with names of pets, kids, relatives, holiday destinations, etc.
If you're still unconvinced, security guru Bruce Schneier ran an article a couple of years ago analysing 34,000 stolen Myspace account passwords. The average password was 8 characters long, using only letters and numbers. That's a lot of possible passwords, but it doesn't really matter, seeing as plenty are easily guessable.
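To put rough numbers on that, shell arithmetic is enough to show the size of the search space, and how much it shrinks if fewer character classes are used:

```shell
# number of possible 8-character passwords over a 62-symbol
# alphabet (a-z, A-Z, 0-9)
echo $((62**8))   # 218340105584896, about 2 x 10^14

# if only lowercase letters and digits are used, the alphabet
# shrinks to 36 symbols
echo $((36**8))   # 2821109907456, about 3 x 10^12
```

Big numbers, but an offline cracker testing millions of candidates per second gets through them in reasonable time, and a dictionary attack skips most of the space entirely.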
There's a cute saying about writing that goes something like, "sentences are like a girl's skirt: the shorter the better, but they should cover the most important parts". Passwords are the exact opposite, they should fall all the way to the ground! (maybe with some nice knife pleats, but I digress)
The problem with short passwords has been known for a long time, and most large organisations have policies enforcing the use of longer passwords. This can effectively thwart dictionary and brute-force attacks, but opens the door to another class of problems. No one will ever guess "uhe8734ytc83y4t77tynycntse4ys" as your password, but when a password is too long to remember, everyone's first instinct is to write it down somewhere. All too often this is on a piece of paper close to the computer: stuck to the monitor for easy reading (!), under the keyboard, in a desk drawer, etc.
This is well and good if you're at home and the system you're logging into is on the other side of the world, but it just doesn't cut it in an environment where those who might set out to exploit you work in the same office.
Adding to this problem is the fact that long passwords are a hassle to type. If passwords are a hassle, people are especially inclined to pick easier (ie. weaker) passwords, defeating the purpose.
It seems that what we're after are passwords that are:
- easy to remember
- resistant to attack
We can easily go a long way towards meeting these criteria. At Anchor, we use software to automatically generate randomised passwords for our users when we make a new account, or they need a reset. This is mostly for administrative efficiency, but we take advantage of the chance to produce more secure passwords. We could ask the user for a password they'd like, but as we know, users are notoriously bad at picking good passwords.
[Image: a random password generated by our system]
The software strings together random dictionary words, separated by numbers and punctuation. You'll also notice the uppercase 'J' in 'Jack' in the example. We've found these passwords are easy for users to remember, and even comical at times. A memorable password removes any need to keep it written down.
It's quite long, meaning it's less susceptible to shoulder surfing if used in a public place. The use of punctuation and numbers increases the character space of the password and largely rules out guessing. Coupled with the length, this defeats the smarter brute-forcing techniques in crackers like John the Ripper, which try various "nearby" derivatives of dictionary words.
One final consideration is that our users don't get to choose it (they can change the password later if they feel like it, though). An easily-overlooked advantage is that the password is assured to be different from other passwords the user has. In the case of a compromise or password theft, the attacker doesn't get a free pass to the victim's other accounts (eg. online banking, medical matters).
This isn't perfect, but it's much closer to something you can be comfortable about using.
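As a rough sketch of the scheme described above (the wordlist and separators here are hard-coded placeholders; a real generator would draw from a large dictionary such as /usr/share/dict/words):

```shell
#!/bin/bash
# Sketch of a word-based password generator: random dictionary words
# joined by digits and punctuation, with one word capitalised.
words=(correct horse battery staple jack window marble copper)
seps=('!' '%' '+' '4' '7' '2')

# pick: put a random element of the named array into $REPLY
pick() {
    local -n arr=$1                    # nameref, bash 4.3+
    REPLY=${arr[RANDOM % ${#arr[@]}]}
}

pick words; w1=$REPLY
pick words; w2=$REPLY
pick words; w3=$REPLY
pick seps;  s1=$REPLY
pick seps;  s2=$REPLY

w2=${w2^}    # capitalise one word, like the 'J' in 'Jack'

password="${w1}${s1}${w2}${s2}${w3}"
echo "$password"
```

A real implementation would also want a cryptographically sound source of randomness rather than bash's $RANDOM, but the shape of the output is the same.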
Public key authentication
So far we've covered passwords, which are now better, but they're still lacking: they still hinge on you proving that you know something. A better system is to use what's referred to as a "token", something you have in your possession. This is where public keys come in (technically a "keypair", consisting of a private key and a public key).
Public keys use asymmetric cryptography, meaning different keys are used for encrypting and decrypting data. The practical upshot of this is that you can prove that you possess the private key without showing it to anyone. This works well for SSH. You give the server your public key (it's not a secret, you can give it to anybody), then whenever you want to login, you can prove that you're in possession of the matching private key (your PC does some heavy-duty maths on your behalf).
If an attacker wants to login as you, they now need to steal your private key. It's not written down so they can't just pick it up or remember it, but it's still on your computer. If they can use your computer, they can steal your key. To prevent this, the key should be encrypted with a password. Yes, it's back to passwords again, but this is hugely more difficult to attack, now that you've got a long password that's easy to remember and hard to guess.
It's out of scope to cover the exact procedure for using public keys here, but good guides are easy to find and it's simple to set up.
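The short version, using the standard OpenSSH tools (hostnames here are placeholders; when ssh-keygen prompts for a passphrase, set a strong one, as that's the password protecting your private key):

```shell
# generate a keypair; you'll be prompted for a passphrase
ssh-keygen -t rsa -b 4096

# install the public key on the server
# (appends it to ~/.ssh/authorized_keys there)
ssh-copy-id user@server.example.com

# subsequent logins authenticate with the key
ssh user@server.example.com
```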
Depending on how paranoid you are, you can also carry your keys on a removable USB drive. This helps keep them under your control, plus it's just plain cool.
Location-based access control
One of the core philosophies in security is Defence in Depth. This means that you don't rely on any one security measure to protect you; you use multiple layers. Castles are a great example of this: you'd have a moat, high walls, gates, guards, traps, etc. Attackers might be able to defeat one or some of these, but to get in they'd have to defeat ALL of them. That's defence in depth.
In most situations, you know who's going to be logging on to your system. For most of our customers, chances are it'll be themselves working from their home/office, and maybe their developers from their offices. So why would we be interested in letting some zombie computer in Russia or China even bother trying? Obviously we're not, and we can enforce it. In fact, this can be done in a number of places, which is a good defense against potential undiscovered bugs in different pieces of software.
If users are connecting from a fixed address or set of addresses, we can configure the firewall to only respond to requests from those addresses. Any other connections will be silently ignored, as though SSH isn't running at all. But what if the firewall is down? To cover this we can add TCP Wrappers, which lets us implement similar restrictions. TCP Wrappers also allows you to easily log connections and perform other actions if required.
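As a sketch (all addresses here are placeholders), the firewall side might look like this, with the TCP Wrappers equivalent shown in the comments:

```shell
# accept SSH only from known addresses, silently drop everything else
iptables -A INPUT -p tcp --dport ssh -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport ssh -s 198.51.100.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport ssh -j DROP

# TCP Wrappers equivalent:
#   /etc/hosts.allow:  sshd : 192.0.2.10 198.51.100.
#   /etc/hosts.deny:   sshd : ALL
```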
These measures aren't very fine-grained, however; there's no support for per-user restrictions. Our first stop here is with Pluggable Authentication Modules (PAM). PAM isn't strictly coupled to SSH, it's an authentication layer provided by the OS. By making a small change to our PAM configuration, we can restrict access on a per-user basis.
First, create the file /etc/security/ssh_users:

```
# Each entry is a single line, and consists of three colon-separated fields:
#  1. an allow/deny specifier (plus/minus respectively)
#  2. the username
#  3. the source of the connection
+:username:.my.home.isp.net
+:someotherguy:office.in.america.net
+:anotherguy:220.127.116.11
-:ALL:ALL
```
Then edit your /etc/pam.d/sshd, adding a line to the start of the "account" entries:

```
account    required    pam_access.so accessfile=/etc/security/ssh_users
```
The syntax is quite flexible, and you can specify remote hosts as fully qualified hostnames, domain suffixes, IP addresses and IP ranges. For full details you should read the system's manual page for pam_access. This configuration of PAM can only be done at a system-wide level, though, not as an ordinary user. As one final measure, you can do something similar as a user if you use public keys to login: SSH supports a "from" directive on your key, which states where it may be used from. This is specified in your authorized_keys file, prepended to each key listing.
```
from="*.anchor.net.au,*.randw1.nsw.optusnet.com.au" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAetcetcetc=
```
With this, I'm allowing access from work, and a very specific Optus exchange (this gets around the fact that the IP address isn't static). By disabling the password on that account, I can enforce that access is ONLY possible with a public key, protected by a strong password, from a limited number of well-controlled locations.
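Disabling the password can be done per-account, or for the whole server in sshd_config (a sketch; the username is a placeholder):

```shell
# lock one account's password, leaving key-based logins working
passwd -l someuser

# or refuse password authentication server-wide,
# in /etc/ssh/sshd_config:
#   PasswordAuthentication no
```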
Defence against brute-force attacks
If you have to use passwords, you're really hoping no one guesses yours. SSH already imposes a 2-second delay on failed password attempts, making online guessing vastly slower than an offline attack, but it doesn't actually stop someone trying. It also fills up your logs, which is just plain annoying, and in an extreme situation can leave you running out of disk space.
Brute-force attacks are really obvious when you're watching them, so it'd be nice to do something when you spot one. There are a few ways to do this; the one we deploy on most of our dedicated servers is called sshd_sentry, a Perl script that watches the SSH logs. When it detects too many failed logins for an account in a short period of time, it adds a temporary entry to the TCP Wrappers denial list. We find this quite effective, and it helps protect against situations where SSH gets flooded with malicious login attempts and stops responding for a short while.
Another option available in modern kernels is an iptables match module called "recent". While not able to differentiate between users, it offers detection at a very early stage, in the firewall, before a connection gets far enough to waste the SSH daemon's time. The firewall maintains a table of recent activity, and can be configured to drop any requests from a remote IP address if it decides they're opening too many connections to be considered legitimate.
The best thing is that it's dead easy to implement: just a few lines of iptables rules. It's convenient to create a separate chain for the brute-force matching, as it offers extra flexibility should you need to cover more services.
```shell
# make the chain
iptables -N brute

# whitelist specific IPs/ranges
iptables -A brute -s 18.104.22.168/20 -j RETURN

# if they've "hit" twice in 15 sec, just drop the packets and stop processing
iptables -A brute -m recent --update --name brute --seconds 15 --hitcount 2 -j DROP

# assuming they're not already dropped, record a "hit"
iptables -A brute -m recent --name brute --set

# if they get through, return to our regular rules
iptables -A brute -j RETURN

# elsewhere in our ruleset...

# new SSH connections get checked for brute-forcing; we ONLY test new
# connections, otherwise we'd disrupt legitimate connections right after
# they're established
iptables -A INPUT -p tcp --dport ssh -m state --state NEW -j brute

# allow existing connections, or connections that got through the "brute"
# chain without being harmed
iptables -A INPUT -p tcp --dport ssh -j ACCEPT
```
There's a potential denial-of-service attack here if an attacker knows your IP address and spoofs requests to the SSH server, but judicious use of the recent module's --rttl flag can dodge this, along with whitelisting.
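With --rttl, the rule only counts a hit if the packet's TTL matches the one seen when the address was first recorded, which makes naive source-address spoofing much harder. As a sketch, the drop rule becomes:

```shell
# as before, but only count hits whose TTL matches the original entry;
# spoofed packets rarely get this right
iptables -A brute -m recent --update --name brute --seconds 15 --hitcount 2 --rttl -j DROP
```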
Appropriate access and security
The great thing about all this is that you can mix and match measures to suit your situation and increase your security (thus reducing the chance of a compromise). As an example, it's widely recommended that remote access to the root account be disabled, but we believe there's more to it. We do in fact log in to servers as root on a regular basis; we'd go nuts trying to manage servers otherwise. The catch is that only password-protected public keys are used, and each staff member has their own for manageability. Access is only allowed from our office, which has a VPN link direct to the datacentre. The security here is a mix of technology and policy, something which is often overlooked.
The important thing is that it's appropriate for the task at hand. At the other end of the scale, we're not too concerned about an account that's only used for mail; we're happy enough to put a decent password on it and check our email from internet cafes.
As a practical example, the user running your website doesn't need the ability to run sudo (which allows escalation of privileges), so they shouldn't have it. If your site is broken into, the last thing you want is the attacker using the system against you. For this reason, we create separate user accounts for site content and management tasks.
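A minimal sketch of that separation (the account names are placeholders):

```shell
# one account owns and serves the site content; it gets no sudo rights
useradd -m sitewww

# a separate account for management and deployment tasks
useradd -m siteadmin

# only siteadmin (if anyone) belongs in sudoers; verify with:
#   grep -r sitewww /etc/sudoers /etc/sudoers.d/
```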
By employing some (or all!) of the measures above, you can have an extremely secure system that isn't an onerous burden to log in to on a day-to-day basis. On a well-implemented system the additional security will pose little extra hassle for legitimate users, but is clammed up tight against would-be intruders, who have to break multiple systems to get in (there's that defence in depth principle again).
We've seen our fair share of compromises of machines at Anchor, but these are few and far between. Most are on servers not managed by our team, and all of them could have been prevented. While not a comprehensive list, there are some features we notice in these systems that make them hard to secure effectively.
- Weak passwords are the Number 1 enemy that we've seen
- Systems with lots of users (lots of potential targets)
- Systems where accounts aren't directly managed by us (more chance for users to pick weak passwords)
- It may not be practical or feasible to setup public key authentication (non-techy userbase, remote sites make workstation setup difficult)
- Fast-changing userbase makes password/key management difficult/impractical
- Logins shared by multiple users (makes accountability a nightmare)
- Users needing to connect from varying locations (cripples the use of location-based controls)
- Administrators simply not being security-conscious, or unaware of these great security measures
If you're an Anchor customer and would like to ensure your dedicated server is using some (or ALL!) of the above principles, please contact us. If you're not, we hope you found this article useful, and feel free to contact us to discuss any of these concepts.
Further reading

- Generating long passwords that are hard to remember too, useful for databases where you'll use them in your code anyway
- Recent-match module for the iptables firewall/packet filter, with examples
- "Homepage" of sshd_sentry
- Fail2ban, another popular brute-force mitigation tool
- Articles about strong passwords