Linux System Cracked - Lessons #27559 11 Jul 08 12:33 PM
Jack McGregor (OP; Member; Joined: Jun 2001; Posts: 11,794)
I'm about to make an unscheduled trip to beautiful Omaha, all because a Linux system got compromised by an Internet attack. Fortunately, the system is running well enough to keep the application going (A-Shell being nearly invincible!), but it's otherwise nearly crippled: it can't execute simple commands like ps, rpm, or vi (vue works!), all the networking commands are broken, the backup program's user interface is broken, etc. To compound matters, the problem started so long ago that it isn't even clear how far back we need to go with the restore (or whether to just reload the OS). But I thought it might be nice to share a few warnings/thoughts/reminders about how to avoid this kind of situation:

1. Don't expose TELNET to the Internet. (That was not the problem here.) Use SSH instead. The same goes for FTP - use SSH (SFTP) instead, and block the TELNET and FTP services from the Internet. (You do need to leave TELNET running for local connections in order to use SSH tunneling, though.)
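
One way to do that under RHEL, assuming telnet runs out of xinetd (the default): restrict the service to the loopback interface so it's still available for SSH-tunneled sessions but never reachable from outside. A sketch - your stock /etc/xinetd.d/telnet may differ slightly:

```
# /etc/xinetd.d/telnet -- keep telnet for local/tunneled sessions only
service telnet
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        bind            = 127.0.0.1
        only_from       = 127.0.0.1
}
```

Run "service xinetd reload" after editing for the change to take effect.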

2. Make sure that you don't have obvious login/password combinations (like user/user, staff/staff, etc.). We're not actually sure if this is how they broke in, but we can see from the logs hundreds of attempts to log in with various typical usernames (user, staff, admin, john, fred, etc.). It should go without saying that the root password had better be secure.
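
Those attempts are easy to spot in the auth log. A minimal sketch (the /var/log/secure path is an RHEL assumption, and the sample lines below are synthetic, made up for the demo):

```shell
# Summarize failed password attempts per username from an sshd auth log.
failed_logins() {
    grep 'Failed password' "$1" |
        sed -n 's/.*Failed password for \(invalid user \)\{0,1\}\([^ ]*\).*/\2/p' |
        sort | uniq -c | sort -rn
}

# Demo on a small synthetic sample; real use: failed_logins /var/log/secure
cat > /tmp/sample_secure <<'EOF'
Jul 10 02:11:01 host sshd[4101]: Failed password for invalid user staff from 203.0.113.9 port 50310 ssh2
Jul 10 02:11:05 host sshd[4102]: Failed password for root from 203.0.113.9 port 50311 ssh2
Jul 10 02:11:09 host sshd[4103]: Failed password for invalid user staff from 203.0.113.9 port 50312 ssh2
EOF
failed_logins /tmp/sample_secure   # highest-count username sorts first
```

A sudden jump in these counts is exactly the red flag mentioned above.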

3. Use a firewall, and configure it properly. In this case, they had a Cisco firewall, and it was supposed to be blocking incoming connections from outside except for a few special IP addresses (like mine), but apparently a lightning strike caused it to lose its rules and revert to wide-open incoming access. If possible, require all the remote users to come from fixed IP addresses (which are easy to configure in even the simplest Internet routers).
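
For belt and suspenders, the same restrictions can live on the Linux box itself, so a fried external firewall doesn't leave you wide open. A sketch in the RHEL-style rules file format (the addresses are placeholders, and the chain name is the RHEL 4/5 default):

```
# /etc/sysconfig/iptables (fragment) -- allow SSH only from known fixed
# addresses, and refuse telnet/ftp from anywhere
-A RH-Firewall-1-INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 22 -j DROP
-A RH-Firewall-1-INPUT -p tcp --dport 23 -j DROP
-A RH-Firewall-1-INPUT -p tcp --dport 21 -j DROP
```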

4. Employ some kind of log-checking mechanism. Any suggestions here would be most welcome! RHEL includes a tool called LogWatch, which can be used to display or output a semi-consolidated listing of all or selected log activity, but you still have to arrange for it to execute on a daily basis and for someone to review the output. Better still would be software that dynamically looks for exceptions, but there doesn't seem to be a clear, obvious, and simple-to-configure choice that I can see. There are commercial utilities such as Tripwire (file-integrity checking), and there are open-source utilities such as swatch (log watching), but they all require a fair amount of time spent figuring out what you want them to look for and how to get them to do it. (In this case, merely reporting failed login attempts would probably have been sufficient to raise the red flag.)
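
Lacking anything fancier, even a dumb daily cron script would have raised that flag. A minimal sketch; the threshold, log path, and recipient are all assumptions to adjust:

```shell
#!/bin/sh
# /etc/cron.daily/failed-logins -- mail root if failed logins pile up.
# LOG path assumes RHEL; THRESHOLD of 20/day is an arbitrary starting point.
LOG=/var/log/secure
THRESHOLD=20
COUNT=$(grep -c 'Failed password' "$LOG" 2>/dev/null)
COUNT=${COUNT:-0}   # treat a missing log as zero attempts
if [ "$COUNT" -gt "$THRESHOLD" ]; then
    echo "$COUNT failed login attempts on $(hostname)" |
        mail -s "Login alert: $(hostname)" root
fi
```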

5. Use a "real" backup tool (I still like BackupEdge) that allows you to fully restore the root filesystem. This is probably what I will end up using. But even this is not helpful if you don't have a good rotation of backup tapes, including weekend or month-end tapes going back at least a couple of months. (In this case, we know the problem started more than a month ago, but aren't sure exactly when, because that's as far back as the logs go.)

6. Tip: edit /etc/logrotate.conf to change the default "rotate 4" to a higher value, increasing the number of archived copies of the logs.
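
For example (the stock RHEL default rotates weekly and keeps 4 archives, i.e. about a month):

```
# /etc/logrotate.conf (fragment)
weekly
rotate 12       # was 4; keeps ~3 months of archived logs instead of one
```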

7. Configure your disks so that the application is in a separate filesystem (e.g. /vm or /u). Not only does this isolate the application and the root filesystem from problems originating in the other, it also makes it a lot easier to do an emergency restore (or reload) of the root filesystem without exposing the application to the risk of unwanted overwriting.
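
In /etc/fstab terms, something like this (device names and mount point are illustrative; putting /u on a second physical disk, as discussed further below, is even better):

```
# /etc/fstab (fragment) -- application on its own filesystem
/dev/sda1   /       ext3    defaults    1 1
/dev/sda2   swap    swap    defaults    0 0
/dev/sdb1   /u      ext3    defaults    1 2
```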

8. Install security patches. There was a time when Linux was considered reasonably immune to exploitable security flaws (and it is still much, much better than Windows, which requires updates on a near-daily basis to remain secure), but if your server is exposed to the Internet, you need to think about updates. (This is one of the advantages of going with a commercial distribution, like RHEL, which comes with a subscription for patches.)

9. Turn off any services you aren't using - such as httpd, etc.
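
Under RHEL, the chkconfig and service commands make this quick (httpd here is just an example; check that nothing depends on a service before disabling it):

```
# list everything that starts at boot
chkconfig --list | grep ':on'
# stop a service now, and keep it from starting at the next boot
service httpd stop
chkconfig httpd off
```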

Re: Linux System Cracked - Lessons #27560 15 Jul 08 03:13 PM
Frank (Member; Joined: Sep 2002; Posts: 5,471)
Thanks Jack... will have to employ a lot of these suggestions...

How did the rescue wind up working out?

Re: Linux System Cracked - Lessons #27561 17 Jul 08 11:01 AM
Jack McGregor (OP; Member; Joined: Jun 2001; Posts: 11,794)
Just got back, although the actual rescue didn't take that long, because I was able to boot on a RecoverEDGE CD and fully initialize/restore the root file system from a tape made about 6 months ago. Some system admin tasks had to be repeated (adding/updating users, printers, etc.) but because the A-Shell application was on a separate filesystem and on a separate physical drive, even the total reinitialization of the system drive had no effect on the application.

A couple of additional observations:

1. Under RHEL 4/5, a 5GB root may not be big enough, particularly if you subscribe to updates, unless you carefully pare down all the packages you don't need. When I originally spec'd the system out, I figured 5GB should be more than enough, but with all of the package updates, backup archives, etc., they outgrew it a long time ago. My workaround at the time was to create symbolic links from a couple of the really big directories in the root filesystem to a secondary filesystem /u2. But that complicated the recovery, which is simplest if you can just restore the root filesystem. This is where RecoverEDGE turned out to be very useful, since the CD boot module contained a full set of disk reformatting tools, allowing me to reconfigure the hard drive to increase the root filesystem from 5GB to 15GB.

2. Putting the system and application files on separate physical drives or mirrors (not just separate filesystems) turns out to have additional benefits. I did it originally for performance reasons, but in this case it also made it easy to completely reorganize the primary disk partitions without touching the application. Had the problem been a disk crash or the loss of an entire mirror/RAID set, the two-drive scenario would still have been a big advantage. If the failed drive was the system drive, you can do a simplified root filesystem restore, or even reload the OS from scratch, without touching the application. And if it was the application drive that failed, the recovery will be quick and easy (assuming daily backups), since it eliminates the need to do an emergency boot or hassle with the OS stuff. Another benefit of this separation is that you can do quick intra-day backups of critical files from the application drive to the system drive (assuming it has room for a spare filesystem).

So to summarize those two points, I recommend a root partition of at least 15 GB, and that you separate the application onto a separate physical drive (or mirror/RAID set) if you can, but in no case should you let the application share a filesystem with the OS. Assuming that the physical drives have plenty of space, add a spare filesystem to the system drive which can be used to back up critical files from the application, giving you another route to quickly restore in the case of problems with the application drive.
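
That intra-day copy can be as simple as a cron entry. A sketch, where the paths are assumptions (/u being the application filesystem and /u2 the spare filesystem on the system drive):

```
# /etc/cron.d/appsync -- hourly copy of critical application files from
# the application drive to a spare filesystem on the system drive
0 * * * * root rsync -a --delete /u/critical/ /u2/appsync/
```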

3. Review your backup media utilization. I still like the Tandberg streamer tapes, which are pretty reliable, although they do sometimes fail. What often happens is that you start out with a dozen or so and sensible daily/weekly rotations, then after a year, you find that a couple of tapes have been tossed, a couple lost, a couple put aside for safe-keeping, leaving only a small and inadequate rotation. Don't put yourself in the position of depending on a single piece of media - if that fails, you should be able to use the prior day's media (which only works if the same tape isn't used every night)!

See http://www.microsabio.net/ubb2/ultimatebb.cgi?ubb=get_topic;f=9;t=000089 for a discussion of using the Internet as an additional backup medium.


Moderated by  Jack McGregor, Ty Griffin 

Powered by UBB.threads™ PHP Forum Software 7.7.3