A sample notification from my office firewall: a list of perps who tried to hack my system last night. Every Internet-facing system sees thousands of such attempts daily. The ones you don’t see are the ones you need to worry about.
As a responsible computer owner, you must take steps to protect your hardware from compromise: pretending that the bad guys don’t want to use your system as a zombie works about as well as walking down Main Street naked and pretending no one will look. If you let them use your system, then you are part of the problem.
Mad as in crazy. No one in my family suffers from insanity: mostly we enjoy it. It’s hereditary, you know, parents catch it from their children.
I spent the first six weeks of this year sick in bed, which eliminated my chance to do annual maintenance on my network. Recently I received email informing me that my network was being used as a base for brute-force attacks and port scans. After verifying from my firewall logs that SSH traffic was leaving my network just as the other sysop said, I started tracking down the culprit computer. The long and short of it was that several of the Microsoft systems in my network did have the usual viruses, which I removed. Yes, they had Microsoft’s Security Essentials anti-virus software installed, but that is not enough: they get re-infected daily in normal use. My desktop had been reformatted and upgraded to Mint 14 the month before, and it still showed clean. But the CentOS file server that I installed seven years ago, and which had just become Internet-facing, was now dirty. Very dirty.
I removed 79 viruses from the file server and realized that what I probably needed to do was the upgrade I had intended to do four years ago. Scant resources of time and money continuously frustrate my efforts to be optimal 8). I also found more interesting things on the file server, which I will share with you here in case you encounter a similar situation. The most likely way this machine was compromised: I had been asked to make it available via the Internet so that staff could work remotely, I complied quickly at the end of the day, and in my haste I made some serious mistakes. This machine had been installed several years prior, and my assumptions about its configuration were wrong. ASSume makes an ass out of U&Me. Amazingly, there were no SSH log entries on the file server showing the hackers’ use of the system.
I assumed that I had DenyHosts on this machine, but it was not installed. DenyHosts bans an IP (by making an entry in /etc/hosts.deny; yes, I know that’s old skool, but it works nicely) after a specified number of failed login attempts. Its absence alone was a lethal mistake: it left the box open to brute-force password attempts all night, and eventually one succeeds, since we live in a continuous fog of hack attempts. Brute-force attacks try to log in as root with every conceivable password, beginning with ‘password’ and ‘123456’ and moving right along until something works. Normally remote root login is not a problem for my systems because I ban ‘root’ as a remote login: you must log in with another account and then shell to root. HOWEVER, making another lethal error, I did not realize that this distribution allowed root login over SSH by default. I found and corrected this error, but too late.
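For reference, the entries DenyHosts writes to /etc/hosts.deny look like this (the addresses here are examples from the documentation-only TEST-NET ranges, not real perps):

```shell
# /etc/hosts.deny (excerpt) -- each line blocks one banned client from sshd
sshd: 203.0.113.45
sshd: 198.51.100.7
```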
In retrospect, having a system that runs seven years without doing anything much to it is good compared to for-profit server OSes which require daily busywork and reboots, but most *nix admins really won’t want to associate with me after hearing I’m this much of a hack. So don’t tell them. Seven years ago, when I configured this system, I knew less than I know today.
Looking further into things, I decided to add a nightly task that updates and scans every computer on the premises. The computers are programmed in BIOS to turn on with an RTC alarm, after which they boot to Linux. A cron job then runs a safe interval later, updates the system, scans for and removes any malware, and turns the machine back off. I added it as a cron job owned by root using crontab, but along the way I also looked at /etc/cron.d, /etc/cron.daily, and so forth. There was a really interesting script in the /etc/cron.daily directory that collected the SSH access logs, tarred them to a temp file, deleted the SSH logs, and emailed the temp file to a @gmail.com address. It also ran port scans. Ah. Culprit found, and this explained why I could not see anything in the SSH log files. This script was not detected as a hack by clamscan. I deleted the script. Should have saved it, I know.
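A minimal sketch of such a nightly maintenance script, run as root from cron. The script path, log location, and the power-off step are my assumptions here; adjust for your own distro and schedule:

```shell
#!/bin/sh
# /usr/local/sbin/nightly-maint.sh -- run from root's crontab a safe
# interval after the RTC alarm powers the machine on. A sketch, not a
# drop-in: requires root and a Debian-family package manager.

# put scan logs somewhere unusual to complicate hacking automation
LOG=/var/local/.maint/scan-$(date +%F).log
mkdir -p "$(dirname "$LOG")"

# update and upgrade the system
apt-get update && apt-get -y dist-upgrade

# scan the whole filesystem, removing anything infected
clamscan -r --remove --infected / > "$LOG" 2>&1

# power the machine back off until the next RTC wake-up
shutdown -h now
```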
What clamscan did find was four rootkits. Oddly, I could not delete these four files even as root. I mounted the disks in a new Linux box (booting from another drive, not the infected drives) and examined them. The four files could not be deleted in this configuration either, even as root.
Now, I am not used to being told as root that I cannot do something. Usually I am told that I was obeyed, and allowed later to discover that the system did what I asked it to do rather than what I wanted it to do. But here is the bottom line: *nix files do have owners and permissions, which are controlled with chown and chmod. HOWEVER, they also have “attributes”, which can be viewed with lsattr and controlled with chattr. These attributes can make a file “append only”, “immutable”, and more, so that even root cannot delete it.
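The mechanics look roughly like this (a sketch: the filename is an example, and clearing the immutable bit requires root):

```shell
# inspect a file's extended attributes; an 'i' in the flags column
# means the immutable bit is set
lsattr /srv/files/suspect.so

# while the immutable bit is set, even root cannot remove the file:
rm -f /srv/files/suspect.so        # fails: "Operation not permitted"

# clear the immutable bit, then the file can be removed normally
chattr -i /srv/files/suspect.so
rm -f /srv/files/suspect.so
```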
Of course by the time I found even the first part of this I realized that there would be more hidden than I would discover, and so I would need to archive the old disk drives in the safe and install a new file server on new drives (seven years is long enough to recommend an upgrade to new drives).
So here is a summary of SOME of the things I have learned to do in preparing a system for an Internet facing job. Most of them are not new to me now, but I share them to help:
1. Use an up-to-date distro and remove all packages not needed for the machine’s core purpose. Run “apt-get update”, “apt-get dist-upgrade”, and “clamscan -r --remove --infected /” nightly. After being hacked, you must use different account logins and passwords: assume all prior login information has been reverse engineered and published on the Internet. It probably has.
1.a. Automate daily updates, upgrades, and scans. Put the scan logs in an unusual location to complicate hacking automation. Insert the cron job that automates virus scanning and updates with “crontab -e” and follow the instructions in the file. IF you still have any Microsoft systems on your network, realize that they will have viruses and other malware installed daily in normal use. You must scan all such machines nightly and remove the viruses, running from a non-Microsoft partition. The best configuration would be uniform hardware in diskless workstations that network-boot from virtual machines, which can be more easily maintained and protected.
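A root crontab entry along these lines does the job in one line; this is a sketch, and the 03:30 run time and log path are my assumptions:

```shell
# root crontab entry (added via "crontab -e"):
# min hour dom mon dow  command
30 3 * * * apt-get update && apt-get -y dist-upgrade && clamscan -r --remove --infected / >> /var/local/.maint/scan.log 2>&1
```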
1.b. Set the firewall to allow only those ports absolutely needed for the machine’s specific purpose.
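On a Debian-family box, ufw makes this straightforward; a sketch for a machine whose only job requires SSH (run as root, and the allowed port is an example):

```shell
# deny everything inbound by default, allow outbound
ufw default deny incoming
ufw default allow outgoing

# open only the ports this machine's specific purpose requires
ufw allow 22/tcp

# turn the firewall on and review the result
ufw enable
ufw status verbose
```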
2. Install DenyHosts before the Internet can see the machine (“apt-get update && apt-get install denyhosts”). Set it to email you reports. You can then keep tabs on how many hacks you are getting daily. If that number drops all at once, you probably need to look into why.
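DenyHosts reads its settings from /etc/denyhosts.conf; a sketch of the lines that control banning and the email reports (the address and thresholds here are examples, not recommendations):

```shell
# /etc/denyhosts.conf (excerpt)
SECURE_LOG = /var/log/auth.log       # where sshd logs failed attempts
HOSTS_DENY = /etc/hosts.deny         # old skool, but it works nicely
DENY_THRESHOLD_INVALID = 5           # failed tries for unknown accounts
DENY_THRESHOLD_ROOT = 1              # be ruthless about root attempts
ADMIN_EMAIL = admin@example.com      # daily report goes here
```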
3. Be sure the /etc/ssh/sshd_config file has “PermitRootLogin no”. If possible, also change the port to something other than 22. Port scanners will still find it, but make it interesting for them. I have no idea why the powers that be would ever set PermitRootLogin to yes by default.
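The relevant lines in /etc/ssh/sshd_config look like this (the port number is an example; restart sshd afterward for the changes to take effect):

```shell
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no      # log in as a normal user, then shell to root
Port 2222               # scanners will still find it, but make them work
Protocol 2              # SSH2 only
```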
4. Try to avoid installing samba. All *nix boxes, Mac and Linux alike, can easily connect over SSH. If all access goes through SSH2, then you can focus on SSH security. Samba can introduce vulnerabilities.
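For file access without samba, sshfs mounts a remote directory over SSH; a sketch, where the hostname and paths are placeholders:

```shell
# mount the file server's share over SSH instead of samba
sshfs user@fileserver:/srv/files /mnt/files

# ... work with /mnt/files like any local directory ...

# unmount when done
fusermount -u /mnt/files
```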
5. Build it on a Virtual Machine if you have adequate hardware — you can restore a compromised VM from a good backup by merely copying the files from backup.
6. Segment your network. We (now) break ours into multiple sections according to employee job function. Some of our internal subnets have only a single computer, to allow work-from-home access for high-level (financial, security, etc.) workers without granting anyone else access to those areas. Wireless can also be separated from wired, with different access points on physically separate wiring for staff and for the public. Only one of our segments has Windows computers: the “public” segment. Windows is hacked daily, and our nightly scans (hopefully) kill all the new bugs each day. Once a machine is infected it can explore your network, but by segmenting your public area physically (not just with different subnets: use physically separate electrical wiring, firewalls, and servers) you make the task of infecting your critical infrastructure a little more interesting.
7. Don’t assume your workers are too lazy, too stupid, or too foolish to honor sensible security steps. Educate your people on why they need a password that is at least eight characters long. Share why the wireless passwords change every so often. Tell them why there is a staff wireless and a public wireless, and why they are not to tell anyone the staff wireless password. Explain why they are not allowed to cruise the net on that special PC in finance. Share why Windows XP is not allowed in the facility and why all notebooks must have approved, up-to-date anti-virus software running on them, yes, even on Macs. If all else fails, tell them “That’s just how we roll.” For some reason that works when all else fails.
If you have more ideas to contribute on this topic, please feel free to comment. I read each comment before it is approved for public visibility, so please be patient. Registration is required to comment. I do not otherwise use, sell, share, or divulge registered users’ information; however, I certainly understand if you do not want to bother. But please understand that I do this to control what is posted, because experience has shown that it is best to do so.