
Ramblings of a mad Admin


Mad as in crazy. No one in my family suffers from insanity: mostly we enjoy it. It’s hereditary, you know, parents catch it from their children.

I spent the first six weeks of this year sick in bed, which eliminated my chance to do annual maintenance on my network. Recently I received email informing me that my network was being used as a base for brute force attacks and port scans. After verifying from my firewall logs that SSH traffic was leaving my network just as the other sysop said, I started tracking down the culprit computer. The long and short of it was that several of the Microsoft systems in my network did have the usual viruses, which I removed. Yes, they had Microsoft's Security Essentials antivirus stuff installed, but that is not enough: they get re-infected daily in normal use. My desktop had just been reformatted and upgraded to Mint 14 the month before, and it still showed clean. And the CentOS file server that I installed seven years ago, which had just become Internet facing, was now dirty. Very dirty.

I removed 79 viruses from the file server and realized that what I probably needed to do was the upgrade I had intended to do four years ago. Scant resources of time and money continuously frustrate my efforts to be optimal 8). I also found more interesting things on the file server, which I will share with you here in case you encounter a similar situation. The most likely way this machine was compromised: I had been asked to make it available via the Internet so that staff could work remotely, I complied quickly at the end of the day, and in my haste I made some serious mistakes. The machine had been installed several years prior, and my assumptions about its configuration were wrong. ASSume makes an ass out of U & Me. Amazingly, there were no SSH log entries on the file server showing the hackers' use of the system.

I assumed that I had DenyHosts on this machine, but it was not installed. DenyHosts bans an IP (by making an entry in /etc/hosts.deny; yes, I know that's old skool, but it works nicely) after a specified number of failed attempts to log in. This alone was a lethal mistake: it left the box vulnerable to brute force password attempts all night, and eventually one of them succeeded, since we live in a continuous fog of hack attempts. Brute force attacks try to log in as root with every conceivable password, beginning with 'password' and '123456' and moving right along until something works. Normally remote login as root is not a problem on my systems because I ban 'root' as a remote login: you must log in with another account and then shell to root. HOWEVER, making another lethal error, I did not recognize that this distribution allowed root login by default. I found and corrected this error, but too late.
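For the curious, a DenyHosts ban is nothing exotic; it is just a line appended to /etc/hosts.deny. A minimal example of what one looks like (the address below is from a documentation range, not a real attacker):

    # /etc/hosts.deny -- DenyHosts appends entries like this one
    # after the configured number of failed logins:
    sshd: 203.0.113.45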

In retrospect, having a system that runs seven years without doing anything much to it is good compared to for-profit server OSes which require daily busywork and reboots, but most *nix admins really won’t want to associate with me after hearing I’m this much of a hack. So don’t tell them. Seven years ago, when I configured this system, I knew less than I know today.

Looking further into things, I decided to add a nightly task that updates and scans every computer on the premises: each computer is programmed in BIOS to power on with an RTC alarm and boot to Linux, and a cron job runs a safe interval after that, updates the system, scans for and removes any malware, and powers the machine back off. I added it as a cron job owned by root using crontab, but along the way I also looked at /etc/cron.d, /etc/cron.daily, and so forth. There was a really interesting script in the /etc/cron.daily directory that collected the SSH access logs, tarred them to a temp file, deleted the SSH logs, and emailed the temp file to a @gmail.com address. It also ran port scans. Ah. Culprit found, and this explained why I could not see anything in the SSH log files. This script was not detected as a hack by clamscan. I deleted the script. I should have saved it, I know.
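Back to the nightly task itself: for what it is worth, here is a minimal sketch of the kind of job I mean. It assumes a Debian-style system with clamscan installed; the schedule, script path, and log location are all illustrative, not gospel:

    # /etc/cron.d/nightly-maintenance (the name and time are illustrative):
    # run at 03:30, a safe margin after the BIOS RTC alarm boots the box
    30 3 * * * root /usr/local/sbin/nightly-maintenance.sh

    #!/bin/sh
    # /usr/local/sbin/nightly-maintenance.sh
    # Update the system, scan for malware, stash the log somewhere
    # unusual, then power back off.
    mkdir -p /var/local/.scanlogs
    apt-get update -q
    apt-get -y dist-upgrade
    clamscan -r --remove --infected / > /var/local/.scanlogs/$(date +%F).log 2>&1
    shutdown -h now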

What clamscan did find was four rootkits. Oddly, I could not delete the four flagged files even as root. I mounted the disks in a new Linux box (booting from another drive, not the infected ones) and examined them. The files could not be deleted in this configuration either, even as root.

Now, I am not used to being told as root that I cannot do something. Usually I am told that I was obeyed, and allowed later to discover that the system did what I asked it to do rather than what I wanted it to do. But here is the bottom line: *nix files do have owners and permissions, which are controlled with chown and chmod. HOWEVER, they also have "attributes", which can be viewed with lsattr and controlled with chattr. These attributes can make a file "append only", "immutable", and more, so that even root cannot delete it.
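A quick demonstration of the idea on an ext-family filesystem (the file name is made up; run this as root):

    touch /tmp/demo
    chattr +i /tmp/demo       # set the immutable attribute
    rm -f /tmp/demo           # fails with "Operation not permitted", even as root
    lsattr /tmp/demo          # the 'i' in the flags column is the culprit
    chattr -i /tmp/demo       # clear the attribute...
    rm /tmp/demo              # ...and now the delete works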

Of course, by the time I found even the first part of this I realized that more would stay hidden than I would ever discover, and so I would need to archive the old disk drives in the safe and install a new file server on new drives (seven years of service is reason enough to recommend new drives anyway).

So here is a summary of SOME of the things I have learned to do when preparing a system for an Internet-facing job. Most of them are not new to me now, but I share them to help:

1. Use an up-to-date distro, and remove all packages not needed for the machine's core purpose. Run "apt-get update", "apt-get dist-upgrade", and "clamscan -r --remove --infected /" nightly. After being hacked, you must use different account logins and passwords; assume all prior login information has been reverse engineered and published on the Internet. It probably has.

1.a. Automate the daily updates, upgrades, and scans (the nightly script sketched earlier is one way). Put the scan logs into an unusual location to complicate hacking automation. Insert the cron job to automate virus scanning and updates with "crontab -e" and follow the instructions in the file. IF you still have any Microsoft systems on your network, realize that they will have viruses and other malware installed daily in normal use. You must scan all such machines nightly and remove the viruses, running from a non-Microsoft partition. The best configuration would be uniform hardware in diskless workstations that network-boot from virtual machines, which can be more easily maintained and protected.

1.b. Set the firewall to allow only those ports absolutely needed for the machine's specific purpose (a minimal iptables sketch follows this list).

2. Install DenyHosts before the Internet can see the machine. Set it to email you reports; you can then keep tabs on how many hack attempts you are getting daily. If that number drops all at once, you probably need to look into why. "apt-get update", "apt-get install denyhosts" (a sample configuration follows this list).

3. Be sure the /etc/ssh/sshd_config file has "PermitRootLogin no" (see the snippet after this list). If possible, also change the port to something other than 22. Port scanners will still find it, but make it interesting for them. I have no idea why the powers that be would ever set PermitRootLogin to yes by default.

4. Try to avoid installing Samba. All *nix boxes, Mac and Linux, can easily connect with SSH, and if all access is through SSH2 then you can focus on SSH security. Samba can introduce vulnerabilities. (A quick sshfs example follows this list.)

5. Build it on a virtual machine if you have adequate hardware: you can restore a compromised VM from a known-good backup by merely copying the files back (a hedged example follows this list).

6. Segment your network. We break ours (now) into multiple sections according to employee job function. Some of our internal subnets have only a single computer, to allow work-from-home access for high-level (financial, security, etc.) workers without granting others any access to those areas. Wireless can also be separated from wired, with different access points on physically separate wiring for staff and for public access. Only one of our segments has Windows computers: the "public" segment. Windows is hacked daily, and our nightly scans (hopefully) are killing all the new bugs each day. Once a machine is infected it can explore your network, but by segmenting your public area physically (not just with different subnets: physically separate wiring, firewalls, and servers) you make the task of infecting your critical infrastructure a little more interesting.

7. Don't assume your workers are too lazy, too stupid, or too foolish to honor sensible security steps. Educate your people on why they need a password that is at least eight characters long. Share why the wireless passwords change every so often. Tell them why there is a staff wireless and a public wireless, and why they are not to tell anyone the staff wireless password. Explain why they are not allowed to cruise the net on that special PC in finance. Share why Windows XP is not allowed in the facility and why all notebooks must have approved, up-to-date anti-virus software working on them, yes, even on Macs. If all else fails tell them "That's just how we roll." For some reason that works when all else fails.
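To put a little flesh on item 1.b, here is a minimal iptables sketch of the default-deny idea. It is illustrative only; a real ruleset needs more care, and the SSH port shown is just an example:

    # Allow loopback, replies to our own traffic, and SSH
    # (or whatever this machine's core purpose needs):
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    # Set the default-deny policies last, so a remote session survives:
    iptables -P INPUT DROP
    iptables -P FORWARD DROP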
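For item 2, installation is two commands, and the email reports are configured in /etc/denyhosts.conf. The values below are examples, and option names and the service name can vary a little between versions:

    apt-get update
    apt-get install denyhosts
    # then, in /etc/denyhosts.conf:
    #   ADMIN_EMAIL = admin@example.com    # where the reports go (example address)
    #   DENY_THRESHOLD_ROOT = 1            # ban root guesses on the first try
    service denyhosts restart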
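For item 3, the two relevant lines of /etc/ssh/sshd_config look like this (the port number is only an example), and sshd must be restarted to pick them up:

    # /etc/ssh/sshd_config
    PermitRootLogin no
    Port 2222          # example only; anything but 22 thins out the automated noise

    service sshd restart     # 'service ssh restart' on Debian-family systems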
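And for item 4, SSH already covers remote file access without Samba. With the sshfs package installed, mounting a remote directory takes one command; the user, host, and paths here are made up for illustration:

    sshfs alice@fileserver:/srv/share /mnt/share   # mount a remote dir over SSH
    fusermount -u /mnt/share                       # unmount when done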
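For item 5, restoring a compromised guest really can be a file copy. This sketch assumes a libvirt/KVM host, and the guest name and paths are invented for illustration:

    # Stop the compromised guest, copy the known-good image back, restart:
    virsh shutdown fileserver
    cp /backup/fileserver.img /var/lib/libvirt/images/fileserver.img
    virsh start fileserver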

If you have more ideas to contribute on this topic, please feel free to comment. I read each comment before it is approved for public visibility, so please be patient. Registration is required to comment. I do not otherwise use, sell, share, or divulge registered users' information; however, I certainly understand if you do not want to bother. But please understand that I do this to control what is posted, because experience has shown that it is best to do so.


Cloud Services Breached, data stolen, at Epsilon

Article at http://www.eweek.com/c/a/Security/Epsilon-Data-Breach-Highlights-Cloud-Computing-Security-Concerns-637161

This is why we are skeptical of buying "cloud" services (that is, use of a file server) off site in some far eastern country. We have all been able to install our own file servers and web servers for years, and we feel that the risk of trusting an outside group to securely store our private company information is unreasonable for the purported business advantage. After all, an American computer consulting firm can be hired to install Linux and Apache for around $150/hour and be done in maybe half a day. Individuals can be hired as 1099 contractors for a third of that, with mostly the same results: a working Linux file server or web server, no license hassles, almost no virus problems.

Individuals supplying their own information to an at-risk foreign server is one thing (and consumers are foolish), but placing customer data or private corporate data "in the cloud" is just asking for Sarbanes-Oxley trouble.

The data stolen in breaches like this one fuels waves of scam email. I have found a way that DOES shut these scams down; the criminals cannot get around it, and all it requires is the participation of you, the Fed Up Business user.

1. Install Google Chrome on your computer, or another web browser in which you can easily control which web sites are allowed to run JavaScript.

2. These next steps will turn JavaScript off by default. There is a really easy one-click way to turn it back on for the sites you trust, which I will show you in a moment. Here is how you shut it off in Google Chrome: click the wrench in the upper right corner, then click Options (it is labeled Options on Windows, Preferences on Linux).

3. Click Under the Hood, then Content Settings, then JavaScript, and finally select the radio button that says DO NOT ALLOW any site to run JavaScript. I'll show you how to let sites you trust run JavaScript in a moment. Just click it.

Turn off JavaScript as a default.

4. Click the close buttons to return to browsing. JavaScript will now be OFF unless you tell it otherwise.

The reason this works is that the scammers depend upon your browser trusting them to do anything they want on your computer. Their link actually jumps off to another web site, which runs JavaScript to sneak malware onto your computer via the infamous "drive-by download". Turning off JavaScript by default stops them.

NOW, how do you enable JavaScript for sites you trust that need it, for example your bank or Facebook? EASY. In the upper right corner of the URL box, next to the wrench and the star, a special symbol appears, but only on web sites that want JavaScript while it is turned off. Just click it. On that web site, JavaScript will be allowed to work. On new web sites that you do not yet know whether to trust, JavaScript will not work until you click the little icon.

Click the little icon to allow JavaScript.

This is a simple solution that every businessperson can use to protect herself or himself from exploitation. It is also good policy in general, because many of the viruses and other malware planted on people's computers need JavaScript in order to be installed. By turning off JavaScript by default, you are protected, at least until you click the little button to turn it back on.

So be sure, be cynical, and practice safe software.