Saturday, November 19, 2011

Linux Administrator's Security Guide - IV



bash keeps a history file of commands executed in ~username/.bash_history; this file can make for extremely interesting reading, as oftentimes admins will accidentally type a password in at the command line. Apache handles all of its logging internally, configurable from httpd.conf and extremely flexible as of the release of Apache 1.3.6 (which supports conditional logging). Sendmail handles its logging requirements via syslogd, but also has the option (via the command line -X switch) of logging all SMTP transactions straight to a file. This is highly inadvisable as the file will grow enormous in a short span of time, but it is useful for debugging. See the sections in network security on Apache and Sendmail for more information.

General log security

Generally speaking you do not want to allow users to see the log files of a server, and you especially do not want them to be able to modify or delete them. Most log files are owned by the root user and group and have no permissions assigned for other, so in most cases the only user able to modify the logs will be root (and if someone cracks the root account all bets are off). There are a few extra security precautions you can take, however, the simplest being to use "chattr" (CHange ATTRibutes) to set the log files to append only. This way, in the event of a problem like a /tmp race that allows people to overwrite files on the system, they cannot significantly damage the log files. To set a file to append only use:
chattr +a filename 
Only the superuser has access to this function of chattr. If you set all your log files to append only you must remember that log rotation programs will fail, as they will not be able to zero the log file. Add a line to the rotation script to unset the append only attribute:
chattr -a filename
and add a line after the log rotation to reset the append only flag. If you keep log files on the system you may also wish to set them immutable so they cannot be tampered with as easily. To set a file immutable simply use:
chattr +i filename
and this will prevent any changes (due to /tmp races, etc.) to the file unless the attacker has root access (in which case you're already in a world of hurt). To remove the immutable flag use:
chattr -i filename
Only the root user has access to the immutable flag.
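Putting the rotation pieces together, a minimal wrapper script might look like this (a sketch only; the log names and the call to logrotate are assumptions, adjust for your rotation program):

#!/bin/bash
# remove the append only flag so the rotation program can zero the logs
chattr -a /var/log/messages /var/log/secure
# rotate the logs as usual
/usr/sbin/logrotate /etc/logrotate.conf
# reset the append only flag on the fresh log files
chattr +a /var/log/messages /var/log/secure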

System logging

One feature of Linux (and most unices) is the syslog and klog facilities, which allow software to generate log messages that are then passed to a log daemon and handled (written to a local file, a remote server, given to a program, and so on).
sysklogd / klogd
In a nutshell klogd handles kernel messages; depending on your setup this can range from almost none to a great deal (if, for example, you turn on process accounting). It then passes most messages to syslogd for actual handling (that is, syslogd places the data in a physical file). The man pages for sysklogd, klogd and syslog.conf are pretty good, with clear examples. One exceedingly powerful and often overlooked ability of syslog is to log messages to a remote host running syslog. Since you can define multiple locations for syslog messages (i.e. send all kern messages to the /var/log/messages file, to the console, and to a remote host or multiple remote hosts) this allows you to centralize logging on a single host and easily check log files for security violations and other strangeness. There are several problems with syslogd and klogd however, the primary ones being how easily an attacker who has gained root access can delete or modify the log files, and that there is no authentication built into the standard logging facilities.
The standard log files that are usually defined in syslog.conf are:
/var/log/messages
/var/log/secure
/var/log/maillog
/var/log/spooler
The first one (messages) typically gets the majority of information: user logins, TCP_WRAPPERS dumps information here, IP firewall packet logging typically dumps information here, and so on. The second typically records entries for events like users changing their UID/GID (via su, sudo, etc.), failed attempts when passwords are required, and so on. The maillog file typically holds entries for every pop/imap connection (user login and logout) and the header of each piece of email that goes in or out of the system (from whom, to where, msgid, status, and so on). The spooler file is not often used anymore as the number of people running usenet or uucp has plummeted; uucp has been basically replaced with ftp and email, and most usenet servers are typically extremely powerful machines handling a full, or even partial, newsfeed, meaning there aren't many of them (typically one per ISP, or more depending on size). Most home users and small/medium sized businesses will not (and should not, in my opinion) run a usenet server; the amount of bandwidth and machine power required is phenomenal, let alone the security risks.
You can also define additional log files, for example you could add:
kern.* /var/log/kernel-log
And you can selectively log to a separate log host:
*.emerg @syslog-host
mail.* @mail-log-host
This would result in all kernel messages being logged to /var/log/kernel-log, which is useful on headless servers since by default kernel messages go to /dev/console (i.e. visible only to someone logged in at the machine). In the second case all emergency messages would be logged to the host "syslog-host", and all the mail log files would be sent to the "mail-log-host" server, allowing you to easily maintain centralized log files of various services. The default syslogd that ships with most Linux distributions is horribly insecure: log files are easily tampered with (or outright destroyed), and logging across the network is completely insecure as well as dangerous for the servers involved. I do not advise relying on the stock syslog if you actually have a need for reliable logging (i.e. the ability to later view log files in the event of a break-in).
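Note that to actually accept messages from the network, the syslogd shipped with most Linux distributions (sysklogd) must be started with the -r switch on the central log host, as it ignores remote messages by default:

syslogd -r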
The default file permissions on the log files are usually read/write for root, and nothing for anyone else. In addition you can (and should) set the files append only (remember to take logrotate into account though, as it needs to zero the files). This will prevent any deletions or modifications to the log files unless root unsets the append only attribute first.
secure-syslog
The major problem with syslog is that tampering with log files is trivial (setting the log files append only with "chattr +a" helps, but if an attacker gains root they can unset the attribute). There is however a secure version of syslogd, available at http://www.core-sdi.com/english/freesoft.htm (these guys generally make good tools and have a good reputation, and in any case it is open source software, for those of you who are truly paranoid). It allows you to cryptographically sign logs to ensure they haven't been tampered with. Ultimately, however, an attacker can still delete the log files, so it is a good idea to send them to another host, especially in the case of a firewall, to prevent the hard drive being filled up.
next generation syslog
Another alternative is "syslog-ng" (Next Generation Syslog), which seems much more customizable than either syslog or secure-syslog. It supports digital signatures to prevent log tampering, and can filter based on the content of the message, not just the facility it comes from or its priority (something that is very useful for cutting down on volume). Syslog-ng is available at: http://www.balabit.hu/products/syslog-ng/.
Nsyslogd
Nsyslogd supports TCP and SSL for logging to remote systems. It runs on a variety of UNIX platforms and you can download it from: http://coombs.anu.edu.au/~avalon/nsyslog.html.

Log monitoring

Log files are not much good unless you actually check them once in a while; this is an almost impossible task for most of us, however, due to the sheer volume of log files. There are a variety of tools to automate these tasks.
Psionic Logcheck
Psionic Logcheck will go through the messages file (and others) on a regular basis (invoked via crontab usually) and email out a report of any suspicious activity. It is easily configurable with several 'classes' of items: active penetration attempts, which it screams about immediately, bad activity, and activity to be ignored (for example DNS server statistics or SSH rekeying). Psionic Logcheck is available from: http://www.psionic.com/abacus/logcheck/.
colorlogs 
colorlogs will color code log files, allowing you to easily spot suspicious activity. Based on a config file it looks for keywords and colors the lines (red, cyan, etc.); it takes input from STDIN so you can use it to review log files quickly (by using "cat", "tail" or other utilities to feed the log file through the program). You can get it at: http://www.resentment.org/projects/colorlogs/.
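For example (assuming the script is installed in your path under the name "colorlogs"):

tail -f /var/log/messages | colorlogs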
WOTS
WOTS collects log files from multiple sources and will generate reports or take action based on what you tell it to do. WOTS looks for regular expressions you define and then executes the commands you list (mail a report, sound an alert, etc.). WOTS requires you have Perl installed and is available from: http://www.vcpc.univie.ac.at/~tc/tools/.
swatch
swatch is very similar to WOTS, and the log files configuration is very similar. You can download swatch from: ftp://ftp.stanford.edu/general/security-tools/swatch/.

Kernel logging

The lowest level of logging possible is at the kernel level. Generally speaking users cannot disable or avoid this type of logging, and usually are not even aware it exists (a definite advantage).
auditd

Shell logging

A variety of command shells have built in logging capabilities.
bash
I will also cover bash, since it is the default shell in most Linux installations and thus its logging facilities are generally used. bash has a large number of variables you can configure at run time or during its use that modify how it behaves, everything from the command prompt style to how many lines to keep in the history file.
HISTFILE
name of the history file, by default it is ~username/.bash_history
HISTFILESIZE
maximum number of commands to keep in the file; it rotates them as needed.
HISTSIZE
the number of commands to remember (i.e. when you use the up arrow key).
The variables are typically set in /etc/profile, which configures bash globally for all users; however, the values can be overridden by users via the ~username/.bash_profile file, and/or by manually using the export command to set variables such as export EDITOR=emacs. This is one of the reasons that user directories should not be world readable: the .bash_history file can contain a lot of valuable information to a hostile party. You can also set the file itself non world readable, set your .bash_profile not to log, set the file non writeable (thus denying bash the ability to write and log to it), or link it to /dev/null (this is almost always a sure sign of suspicious user activity, or a paranoid user). For the root account I would highly recommend setting HISTFILESIZE and HISTSIZE to a low value such as 10. On the other hand, if you want to log users' shell history and otherwise tighten up security, I would recommend setting the configuration files in the user's home directory immutable using the chattr command, and setting the log files (such as .bash_history) to append only. Doing this however opens up some legal issues, so make sure your users are aware they are being logged and have agreed to it, otherwise you could get into trouble.
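As a sketch, the relevant settings might look like this (the values are illustrative, and the username is hypothetical):

# in /etc/profile - keep root's history short
export HISTFILESIZE=10
export HISTSIZE=10
# as root - make a user's history file append only so it cannot be truncated
chattr +a /home/username/.bash_history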



Attack detection

Baselines

One major oversight made by a lot of people when securing their machines is that they forget to create a baseline of the system, that is, a profile of the system, its usage of resources, and so on in normal operation. For example, something as simple as "netstat -a -n > netstat-output" can give you a reference to later check against and see if any ports are open that should not be. Memory usage and disk usage are also good things to keep an eye on; a sudden surge in memory or disk usage could mean the system is being starved of resources. It might be a user accident, a malicious user, or a worm program that has compromised your system and is now scanning other systems. Various tools exist to measure memory and disk usage: vmstat, free, df, du, all of which are covered by their respective man pages.
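A minimal baseline script along these lines (the directory and filenames are arbitrary) simply snapshots the interesting numbers so you can diff against them later:

#!/bin/bash
# record normal system state for later comparison
mkdir -p /root/baseline
netstat -a -n > /root/baseline/netstat-output
df -k > /root/baseline/df-output
free > /root/baseline/free-output
ps aux > /root/baseline/ps-output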
At the very minimum make a full system backup, and regularly back up config files and log files; this can also help you pinpoint when an intrusion occurred (user account "rewt" was added before the April 4th backup, but isn't in the March 20th backup). Once a system is compromised typically a "rootkit" is installed; these consist of trojaned binaries and are near impossible to remove safely, so you are better off formatting the disk and starting from scratch. There is of course a notable exception to this rule: if you were diligent and used file/directory integrity tools such as L5 you will be able to pinpoint the affected files easily and deal with them.
There are also a variety of tools that do not quite fit under the headings here, but are aimed at attack detection. One is the Linux Intrusion Detection System (LIDS) project, more information is listed here.

File system monitoring

So you've secured your machines and done all the things that needed to be done. How do you make sure they're actually doing what they are supposed to do, or prove to someone that the setup is as secure as you say it is? Well, you conduct an audit. This can be as simple as reviewing the installed software, configuration files and other settings, or as complex as putting together or hiring a tiger team (or ethical hackers, or whatever buzzword(s) you prefer) to actively try to penetrate your security. If they can't then you did your job well (or they suck), and if they do get in, you know what needs to be fixed (this is also a good method to show the CIO that security is not a one shot affair; it is a constant battle). One thing almost all attackers do is modify system files; once you detect a break-in, how do you know which files are OK and which are not? Short of a complete reinstall the only way to be sure (and even then it's not always 100%) is to use software to create signatures of files that cannot be forged, so you can compare them later on.
Tripwire
Tripwire is no longer an open source tool. I have absolutely NO problem with commercial software; however, when you expect me to rely on a program to provide security, and neither I (nor anyone else really) can view the source (it is available under some special license agreement, probably an NDA), I must decline. Tripwire costs approximately $70 for Linux and is only available as an RPM package aimed at Red Hat Linux (Tripwire is $500 for other operating systems). I feel this is rather on the high side for a piece of software that can easily be replaced with alternatives such as L5 or Gog&Magog. Tripwire is available at: http://www.tripwiresecurity.com/.
AIDE
AIDE is a Tripwire replacement that attempts to be better than Tripwire. It is GPL licensed, which makes it somewhat more desirable than Tripwire from a trust point of view. It supports several hashing algorithms, and you can download it from: http://www.cs.tut.fi/~rammer/aide.html.
L5
There is an alternative to Tripwire, however: L5, available at ftp://avian.org/src/hacks/. It is completely free and very effective. I would definitely recommend this tool.
Gog&Magog 
Gog&Magog creates a list of system file properties: owner, permissions, an MD5 signature of the file and so on (similar to Tripwire). You can then have it automatically compare this list and ensure any changed files come to your attention quickly. As well, it makes recovering from a break-in simpler as you'll know which files were compromised. You can download Gog&Magog from: http://www.multimania.com/cparisel/gog/.
Sentinel
Sentinel is a program that scans your hard drive and creates checksums of the files you request. It uses a non patented algorithm (a RIPEMD-160 MAC) and has an optional graphical front end (nice). You can get it at: http://zurk.netpedia.net/zfile.html.
SuSEauditdisk
SuSEauditdisk is a bootable disk with integrity checking tools and checksums, providing a very secure method to check for damage. It ships standard with SuSE, can easily be ported to other Linux distributions, and is GPL licensed. You can get SuSEauditdisk from: http://www.suse.de/~marc/.
ViperDB 
ViperDB checks setuid/setgid programs and folders and can notify you (via syslog) of any changes, or reset their permissions and ownership to what they should be. ViperDB creates a series of databases (flat text files actually) in each directory root; for example /etc/.ViperDB might contain:
/etc/login.defs,1180,-,root,rw-,root,r--,r--,Apr,15,18:03
/etc/minicom.users,1048,-,root,rw-,root,r--,r--,Mar,21,19:11
/etc/CORBA,1024,d,root,rwx,root,r-x,r-x,Jun,14,16:51
/etc/X11,1024,d,root,rwx,root,r-x,r-x,Jun,14,23:05
/etc/cron.d,1024,d,root,rwx,root,r-x,r-x,Apr,14,17:09
Unfortunately ViperDB doesn't seem to handle subdirectories, so you will have to add them to the viperdb.ini file with something like:
find /etc/ -type d >> /usr/local/etc/viperdb.ini
viperdb.pl has 3 options: -init (creates a set of databases), -check (checks files against the databases, sends any messages to syslog, and then recreates the databases) and -checkstrict (checks files against the databases, resets permissions if necessary, sends any messages to syslog, and then recreates the databases). What this means is that if you use -check, you will get a warning that, say, /etc/passwd is now world writeable, but since it recreates the databases, the next time you run viperdb you will NOT get a warning. I would advise running viperdb in -checkstrict mode only, and make sure you run viperdb with the -init option after manipulating any file or folder permissions in protected directories. ViperDB is available for download from: http://www.resentment.org/projects/viperdb/.
Sxid
Sxid checks setuid and setgid files for changes, generates MD5 signatures of the files, and generally allows you to track any changes made. You can get it at: ftp://marcus.seva.net/pub/sxid/.
nannie
nannie is a relatively simple tool that relies on stat to build a list of what files should look like (size, timestamps, etc.). It creates a list containing the filename, inode, link information and so on; it makes a useful, albeit simple, burglar alarm. You can get it from: ftp://tools.tradeservices.com/pub/nannie/.
confcollect
confcollect is a simple script that collects system information such as routing tables, installed RPMs and the like. You can download it from: http://www.skagelund.com/confcollect/.
Pikt
Pikt is an extremely interesting tool; it is actually more of a scripting language aimed at system administration than a simple program. Pikt allows you to do things such as kill off idle user processes, enforce mail quotas, monitor the system for suspicious usage patterns (off hours, etc.), and much more. About the only problem with Pikt is a steep learning curve, as it uses its own scripting language, but ultimately I think mastering this language will pay off if you have many systems to administer (especially since Pikt currently runs on Solaris, Linux and FreeBSD). Pikt is available at: http://pikt.uchicago.edu/pikt/.
Backups
Something people forget about, but you can compare the current files to old backups. Many backup formats (tape, floppy, CDR, etc.) can be made read only, so a backup of a newly installed system provides a good benchmark to compare things to. The utilities "diff" and "cmp" can be used to compare files against each other. See the backup section for a full listing of free and commercial software.
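For example, with a read only backup mounted on /mnt/backup (a hypothetical mount point), checking a suspect binary is a one-liner; cmp is silent if the files are identical:

cmp /bin/login /mnt/backup/bin/login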

Network monitoring / attack detection

If the last section has you worried, you should be. There are however many defenses, active and passive, against those types of attacks. The best ways to combat network scans are to keep software up to date, only run what is needed, and heavily restrict the rest through the use of firewalls and other mechanisms.
Luckily in Linux these tools are free and easily available; again I will only cover open source tools, since the idea of a proprietary firewall and the like is rather worrying. The first line of defense should be a robust firewall, followed by packet filters on all Internet accessible machines, liberal use of TCP_WRAPPERS, logging, and, more importantly, automated software to examine the logs for you (it is infeasible for an administrator to read log files nowadays).
DTK
The Deception ToolKit is a set of programs that emulate well known services in order to provide a false set of readings to attackers. The hope is to confuse and slow down attackers by leading them to false conclusions. You can download DTK from: http://all.net/dtk/.
Psionic PortSentry
The third component of the Abacus suite, it detects and logs port scans, including stealthy scans (basically anything nmap can do it should be able to detect). Psionic PortSentry can be configured to block the offending machine (in my opinion a bad idea, as it could be used for a denial of service attack on legitimate hosts), making completion of a port scan difficult. As this tool is in beta I would recommend against using it; however with some age it should mature into a solid and useful tool. Psionic PortSentry is available at: http://www.psionic.com/abacus/portsentry/.
Psionic HostSentry
While this software is not yet ready for mass consumption I thought I would mention it anyway, as it is part of a larger project (the Abacus project, http://www.psionic.com/abacus/). Basically Psionic HostSentry builds a profile of user accesses and then compares that to current activity in order to flag any suspicious activity. Psionic HostSentry is available at: http://www.psionic.com/abacus/hostsentry/.
scanlogd
scanlogd monitors network packets and if a threshold is exceeded it logs the packets. You can get it at: http://www.openwall.com/scanlogd/.
Firewalls
Most firewalls support logging of data, and ipfwadm/ipchains are no exception; using the -l switch you get a syslog entry for each matching packet, and using automated filters (Perl is good for this) you can detect trends, hostile attempts and so on. Since most firewalls (UNIX based, and Cisco in any case) log via the syslog facility, you can easily centralize all your firewall packet logging on a single host (with a lot of hard drive space, hopefully).
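For example, an ipchains rule that denies and logs incoming telnet connection attempts might look like this (a sketch; the wide open source and destination addresses are illustrative):

# deny and log (-l) any incoming packets to the telnet port
ipchains -A input -p tcp -s 0.0.0.0/0 -d 0.0.0.0/0 23 -l -j DENY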
TCP-WRAPPERS
Wietse's TCP_WRAPPERS allow you to restrict connections to various services based on IP address and so forth, but even more importantly they allow you to configure a response; you can have it email you, finger the offending machine, and so on (use with caution however). TCP_WRAPPERS comes standard with most distributions and is available at: ftp://ftp.porcupine.org/pub/security/.
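For example, an /etc/hosts.deny entry along these lines will mail you about every denied connection (a sketch; it assumes tcpd was built with the language extensions enabled, and paths may vary):

ALL: ALL: spawn (echo "connection attempt from %h to %d" | /bin/mail -s "tcpd alert" root) &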
Klaxon
While mostly obsoleted by TCP_WRAPPERS and firewall logging, klaxon can still be useful for detecting port scans and the like if you don't want to totally lock down the machine. Klaxon is available at: ftp://ftp.eng.auburn.edu/pub/doug/.
NFR
NFR (Network Flight Recorder) is much more than a packet sniffer; it actually logs data and detects attacks, scans and so on in real time. This is a very powerful tool and requires a significant investment of time, energy and machine power to run, but it is at the top of the food chain for detection. NFR is available at: http://www.nfr.com/.
Intrusion Detection Papers
"FAQ: Network Intrusion Detection Systems" is an excellent FAQ that covers all the major (and many minor) issues with IDS systems. Available at: http://www.robertgraham.com/pubs/network-intrusion-detection.html.

Dealing with attacks

So you've done your homework: you installed Tripwire, DTK, and so on. Now what do you do when your pager starts going off at 3am and tells you that someone just made changes on the primary NIS server? Dealing with an attack depends on several factors. Is the attack in progress? Did you discover your company plan being sent out by the mail server to a hotmail address? Did you get called in to find a cluster of dead servers? What are your priorities? Restoring service? Ensuring confidential data is safe? Prosecuting the attacker(s)? Several things to keep in mind:
  • Response from the admin will depend heavily on the environment they are in. The attacker may have compromised the administrative accounts, so sending email may not work.
  • Most sites usually don't want to report attacks (successful or not) due to the potential embarrassment and related public relations problems.
  • Most quick attacks, denial of service attacks and the like are spoofed. Tracking down the real attacker is very difficult and resource intensive.
  • Even if all goes well there is a chance law enforcement will seize your equipment as evidence, and hold it, not something to be taken lightly.
  • Do you know how the attacker got in (i.e. NFR recorded it)? If so, you might just want to plug the holes and go on.
  • Try not to ignore attacks, but at the same time there are many people running garbage attacks in an effort to waste administrators' time and energy (and possibly distract them from more subtle attacks).
Also, before you deal with an attack you should consult your company policy. If you don't have one, consult your manager, the legal department, etc. It's also a good idea to have a game plan for dealing with attacks (i.e., the mail server is first priority, checking fileservers is number two, who do you notify, etc.); this will prevent a lot of problems when it happens (be prepared). The O'Reilly book "Practical Unix and Internet Security" covers this topic in great detail so I'm not going to rehash it. Go buy the book.
An excellent whitepaper on this is also available, see Appendix D, “How to Handle and Identify Network Probes”.

Packet sniffers

Packet sniffing is the practice of capturing network data not destined for your machine, typically for the purpose of viewing confidential/sensitive traffic such as telnet sessions or people reading their email. Unfortunately there is no reliable way to detect a packet sniffer since it is mostly a passive activity; however, by utilizing network switches and fiber optic backbones (which are very difficult to tap) you can minimize the threat. There is also a tool called AntiSniff that probes network devices and sees if their response indicates an interface in promiscuous mode. Sniffers are also invaluable if your network is under attack and you want to see what is going on. There is an excellent FAQ on sniffing at: http://www.robertgraham.com/pubs/sniffing-faq.html.
tcpdump
The granddaddy of packet sniffers for Linux, this tool has existed as long as I can remember, and is of primary use for debugging network problems. It is not very configurable and lacks the advanced features of newer packet sniffers, but it can be useful. Most distributions ship with tcpdump.
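For example, to watch telnet traffic to and from a particular host (the address and interface are illustrative):

tcpdump -i eth0 host 10.0.0.5 and port 23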
sniffit
My favorite packet sniffer, sniffit is very robust, has nice filtering capabilities, will convert data payloads into ASCII text for easy reading (like telnet sessions), and even has a graphical mode (nice for monitoring overall activity/connections). Sniffit is available at: http://sniffit.rug.ac.be/~coder/sniffit/sniffit.html.
Ethereal
A nice looking network protocol analyzer (a.k.a. a souped up sniffer) with an interface very similar to NT's network monitor. It allows easy viewing of data payloads for most network protocols (tftp, http, Netbios, etc.). It is based on GTK, meaning you will probably want to be running GNOME to use it. I haven't tested it yet (but intend to). It is available at: http://ethereal.zing.org/.
Snort
Snort is a nice packet sniffing tool that can be used to detect various attacks as well. It can watch for activity such as Queso TCP/IP fingerprinting scans, Nmap scans, and the like. Snort is available from: http://www.clark.net/~roesch/security.html.
SPY
SPY is an advanced multi protocol sniffer that runs on various platforms. It is not a free program; however, there is a single user license available for non commercial use with a maximum of 5 hosts. Commercially it costs around $6000 US, but from a quick look at its capabilities I would say it is worth it if you need an industrial grade sniffer. You can get it from: http://pweb.uunet.de/trillian.of/Spy/.
packetspy
packetspy is another libpcap based sniffer. You can get it from: http://www.bhconsult.com/packetspy/.
Other sniffers
There are a variety of packet sniffers for Linux, based on the libpcap library among others; here is a short list:

Packet sniffer detection

In theory most operating systems leave telltale signs when packet sniffing (that is to say, their network interfaces respond in certain non standard ways to network traffic). If the attacker is not too savvy, or is using a compromised machine, then chances are you can detect them. On the other hand, if they are using a specially built cable or an induction ring, there is no chance of detecting them unless you trace every physical piece of network cable and check what is plugged into it.
AntiSniff
As mentioned before, AntiSniff is a tool that probes network devices to try to see if they are running in promiscuous mode, as opposed to normal modes of operation. It is supposedly effective and will work against most sniffers. You can get it from: http://www.l0pht.com/antisniff/.

Scanning / intrusion tools

Overview

Over the last few years the number of security tools for Windows and UNIX has risen dramatically; even more surprising is the fact that most of them are freely available on the Internet. I will only cover the free tools since most of the commercial tools are ridiculously expensive, are not open source, and in many cases have been shown to contain major security flaws (like storing passwords in clear text after installation). Any serious cracker/hacker will have these tools at their disposal, so why shouldn't you?
There are several main categories of tools: ones that scan hosts from within that host, and ones that scan other hosts and report back variously what OS they are running (using a technique called TCP/IP fingerprinting), what services are available, and so on. At the top of the food chain are the intrusion tools that actually attempt to execute exploits and report back whether they worked or not. Lastly I include the exploits category; while not strictly intrusion tools per se, they do exist and you should be aware of them.
There are also many free tools and techniques you can use to conduct a self audit and ensure that the systems react as you think they should (we all make errors, but catching them quickly and correcting them is part of what makes a great administrator). Tools such as nmap, Nessus, crack, and so forth can be quickly employed to scan your network(s) and host(s), finding any obvious problems quickly. I also suggest you go over your config files every once in a while (I try to 'visit' each server once a month; sometimes I discover a small mistake, or something I forgot to set previously). Keeping systems in a relative state of synchronization (I just recently finished moving ALL my customers to kernel 2.2.x and ipchains) will save you a great deal of time and energy.

Host scanners

Host scanners are software you run locally on the system to probe for problems. 
Cops
Cops is extremely obsolete and its original home on CERT's ftp site is gone. It is mentioned for historical accuracy only.
Tiger
Tiger is still under development, albeit slowly. Texas Agricultural and Mechanical University used to require that a UNIX host pass tiger before it was allowed to connect to the network from offsite. You can get it from: ftp://net.tamu.edu/pub/security/TAMU/.
check.pl
check.pl is a nice Perl program that checks file and directory permissions and will tell you about any suspicious or 'bad' ones (setuid, setgid, writeable directories, etc.). Very useful, but it tends to find a lot of false positives. It's available at: http://opop.nols.com/proggie.html.

Network scanners

Network scanners are run from a host and pound away on other machines, looking for open services. If you can find them, chances are an attacker can too. These are generally very useful for ensuring your firewall works.
Strobe
Strobe is one of the older port scanning tools; quite simply, it attempts to connect to various ports on a machine(s) and reports back the result (if any). It is simple to use and very fast, but doesn't have any of the features newer port scanners have. Strobe is available for almost all distributions as part of the distribution or as a contrib package; the source is available at: ftp://suburbia.net/pub/.
Nmap
Nmap is a newer and much more fully featured host scanning tool. It features advanced techniques such as TCP/IP fingerprinting, a method by which the returned TCP/IP packets are examined and the host OS is deduced based on various quirks present in all TCP/IP stacks. Nmap supports a number of scanning methods, from normal TCP scans (simply trying to open a connection as normal) to stealth scanning and half-open SYN scans (great for crashing unstable TCP/IP stacks). This is arguably one of the best port scanning programs available, commercial or otherwise. Nmap is available at: http://www.insecure.org/nmap/index.html. There is also an interesting article on nmap and some of its more advanced features available at: http://raven.genome.washington.edu/security/nmap.txt.
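For example, a half-open SYN scan with OS fingerprinting turned on (the address is illustrative; -sS requires root):

nmap -sS -O 10.0.0.5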
Network Superscanner
Portscanner
Portscanner is a nice little port scanner (surprise!) that has varying levels of output, making it easy to use in scripts and by humans. It's open source and free to use; you can get it at: http://www.ameth.org/~veilleux/portscan.html.
Queso
Queso isn't a scanner per se, but it will tell you with a pretty good degree of accuracy what OS a remote host is running. Using a variety of valid and invalid TCP packets to probe the remote host, it checks the response against a list of known responses for various operating systems and will tell you which OS the remote end is running. You can get Queso from: http://www.apostols.org/projectz/queso/.
spidermap
spidermap is a set of perl scripts to help automate scans and make them more selective. You can get it from: http://www.secureaustin.com/spidermap/.

Intrusion Scanners

Intrusion scanners are one evolutionary step up from network scanners. These software packages will actually identify vulnerabilities, and in some cases allow you to actively try and exploit them. If your machines are susceptible to these attacks, you need to start fixing things, as any attacker can get these programs and use them.
Nessus
Nessus is relatively new but is fast shaping up to be one of the best intrusion scanning tools. It has a client/server architecture: the server currently runs on Linux, FreeBSD, NetBSD and Solaris, and clients are available for Linux and Windows, plus there is a Java client. Communication between the server and client is ciphered for added security; all in all, a very slick piece of code. Nessus supports port scanning and attacking based on IP addresses or host name(s). It can also search through network DNS information and attack related hosts at your behest. Nessus is relatively slow in attack mode, which is hardly surprising. However, it currently has over 200 attacks and a plug-in language so you can write your own. Nessus is available from: http://www.nessus.org/.
Saint
Saint is the sequel to SATAN, a network security scanner made (in)famous by the media a few years ago (there were great worries that bad people would take over the Internet using it). Saint also uses a client/server architecture, but uses a www interface instead of a client program. Saint produces very easy to read and understand output, with security problems graded by priority (although not always correctly), and also supports add-in scanning modules, making it very flexible. Saint is available from: http://www.wwdsi.com/saint/.
Cheops
While not a scanner per se, Cheops is useful for detecting a host's OS and dealing with a large number of hosts quickly. Cheops is a "network neighborhood" on steroids; it builds a picture of a domain or IP block, which hosts are running and so on. It is extremely useful for preparing an initial scan as you can locate interesting items (HP printers, Ascend routers, etc.) quickly. Cheops is available at: http://www.marko.net/cheops/.
Ftpcheck / Relaycheck
Two simple utilities that scan for ftp servers and mail servers that allow relaying, good for keeping tabs on naughty users installing services they shouldn't (or simply misconfiguring them). Available from: http://david.weekly.org/code/.
SARA
Security Auditor's Research Assistant (SARA) is a tool similar in function to SATAN and Saint. SARA supports multiple threads for faster scans, stores its data in a database for ease of access, and generates nice HTML reports. SARA is free for use and is available from: http://home.arc.com/sara/.
BASS
BASS, the "Bulk Auditing Security Scanner", allows you to scan the Internet for a variety of well known exploits. It was basically a proof of concept that the Internet is not secure. You can get it from: http://www.securityfocus.com/data/tools/network/bass-1.0.7.tar.gz.

Firewall scanners

There are also a number of programs now that scan firewalls and execute other penetration tests in order to find out how a firewall is configured. 
Firewalk
Firewalk is a program that uses traceroute style packets to scan a firewall and attempt to deduce the rules in place on that firewall. By sending out packets with various time-to-live values and seeing where they die or are refused, a firewall can be tricked into revealing rules. There is no real defense against this, apart from silently denying packets instead of sending a rejection message, which hopefully will reveal less. I would advise utilizing this tool against your systems, as the results can help you tighten up security. Firewalk is available from: http://www.packetfactory.net/firewalk/.

Exploits

I won't cover exploits specifically, since there are hundreds, if not thousands, of them floating around for Linux. I will simply cover the main archival sites.
One of the primary archive sites for exploits, it has almost anything and everything, convenient search engine and generally complete exploits.



Software

RPM 

RPM is a software management tool originally created by Red Hat and later GPL'ed and given to the public (http://www.rpm.org/). It forms the core of administration on most systems, since one of the major tasks for any administrator is installing and keeping software up to date. Various estimates place most of the blame for security break-ins on bad passwords and old software with known vulnerabilities. This isn't exactly surprising, and when you consider that the average server contains 200-400 software packages, you begin to see why keeping software up to date can be a major task.
The man page for RPM is pretty bad; there is no nice way of putting it. The book "Maximum RPM" (ISBN: 0-672-31105-4), on the other hand, is really wonderful (freely available at http://www.rpm.org/ in PostScript format). I would suggest this book for any Red Hat administrator, and can safely say it is required reading if you plan to build RPM packages. The basics of RPM are pretty self explanatory; packages come in an rpm format with a simple filename convention:
package_name-package_version-rpm_build_version-architecture.rpm
nfs-server-2.2beta29-5.i386.rpm
would be "nfs-server", version "2.2beta29", the fifth build of that rpm (i.e. it has been packaged and built 5 times, with minor modifications, changes in file locations, etc.), for the Intel architecture, and it's an rpm file.
Command    Function
-q         Queries packages / database for info
-i         Installs software
-U         Upgrades or installs the software
-e         Extracts (removes) the software from the system
-v         Be more verbose
-h         Print hash marks, a.k.a. done-o-dial
Command Example         Function
rpm -ivh package.rpm    Installs 'package.rpm', verbose, with hash marks
rpm -Uvh package.rpm    Upgrades 'package.rpm', verbose, with hash marks
rpm -qf /some/file      Checks which package owns a file
rpm -qpi package.rpm    Queries 'package.rpm', lists info
rpm -qpl package.rpm    Queries 'package.rpm', lists all files
rpm -qa                 Queries the RPM database, lists all installed packages
rpm -e package-name     Removes 'package-name' from the system (as listed by rpm -qa)

Red Hat Linux 5.1 shipped with 528 packages and Red Hat Linux 5.2 shipped with 573, which when you think about it is a heck of a lot of software (SuSE 6.0 ships on 5 CDs; I haven't bothered to count how many packages). Typically you will end up with 200-300 packages installed (more apps on workstations; servers tend to be leaner, but this is not always the case). So which of these should you install and which should you avoid if possible (like the r services packages)? One thing I will say: the RPMs that ship with Red Hat distributions are usually pretty good, and typically last 6-12 months before they are found to be broken.
There is a list of URLs and mailing lists where distribution specific errata and updates are available later on in this document.

dpkg

The Debian package system is a package management system similar to RPM; however it lacks some of the functionality, although overall it does an excellent job of managing software packages on a system. Combined with the dselect utility (being phased out) you can connect to remote sites, scroll through the available packages, install them, and run any configuration scripts needed (like say for gpm), all from the comfort of your console. The man page for dpkg ("man dpkg") is quite extensive.
The general format of a Debian package file (.deb) is:
packagename_packageversion-debversion.deb
ncftp2_2.4.3-2.deb
Unlike rpm files, .deb files are not labeled with the architecture (not a big deal, but something to be aware of).
Command    Function
-I         Queries a package
-i         Installs software
-l         Lists installed software (equivalent to rpm -qa)
-r         Removes the software from the system
Command Example         Function
dpkg -i package.deb     Installs package.deb
dpkg -I package.deb     Lists info about package.deb (like rpm -qpi)
dpkg -c package.deb     Lists all files in package.deb (like rpm -qpl)
dpkg -l                 Shows all installed packages
dpkg -r package-name    Removes 'package-name' from the system (as listed by dpkg -l)

Debian has 1500+ packages available with the system. You will learn to love dpkg (functionally it has everything necessary; I just miss a few of the bells and whistles that rpm has, though on the other hand dselect has some features I wish rpm had).
There is a list of URLs and mailing lists where distribution specific errata is available later on in this document.

tarballs / tgz 

Most modern Linux distributions use a package management system to install, keep track of, and remove software on the system. There are however many exceptions. Slackware does not use a true package management system per se, but instead has precompiled tarballs (a compressed tar file containing files) that you simply unpack from the root directory to install, some of which have install scripts to handle any post install tasks such as adding a user. These packages can also be removed, but functions such as querying, or comparing installed files against package files (to find tampering, etc.), are pretty much not there. Or perhaps you want to try the latest copy of X, and no-one has yet gotten around to making a nice .rpm or .deb file, so you must grab the source code (also usually in a compressed tarball), unpack it and install it. This presents no more real danger than a package, as most tarballs have MD5 and/or PGP signatures associated with them that you can download and check.

The real security concern with tarballs is the difficulty in sometimes tracking down whether or not you have a certain piece of software installed, determining the version, and then removing or upgrading it. I would advise against using tarballs if at all possible. If you must use them, it is a good idea to make a list of files on the system before you install, and one afterwards, and then compare them using 'diff' to find out what file was placed where. Simply run 'find /* > /filelist.txt' before and 'find /* > /filelist2.txt' after you install the tarball, and use 'diff -q /filelist.txt /filelist2.txt > /difflist.txt' to get a list of what changed. Alternatively 'tar -tf blah.tar' will list the contents of the file, but with most tarballs you'll be running an executable install script or compiling and installing the software, so a simple file listing will not give you an accurate picture of what was installed or modified.

Another method for keeping track of what you have installed via tar is to use a program such as 'stow'; stow installs the package to a separate directory (/opt/stow/ for example) and then creates links from the system to that directory as appropriate. Stow requires that you have Perl installed and is available from: http://www.gnu.ai.mit.edu/software/stow/stow.html.
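Putting the above together, the whole before and after check is simply:

find /* > /filelist.txt
# ... install the tarball ...
find /* > /filelist2.txt
diff -q /filelist.txt /filelist2.txt > /difflist.txt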
Command                 Function
tar -tf filename.tar    Lists files in filename.tar
tar -xf filename.tar    Extracts files from filename.tar

Checking file integrity

Something I thought I would cover semi-separately is checking the integrity of software retrieved from remote sites. Usually people don't worry, but recently ftp.win.tue.nl was broken into and the TCP_WRAPPERS package (among others) was trojaned. 59 downloads occurred before the site removed the offending packages and initiated damage control procedures. You should always check the integrity of files you download from remote sites; some day a major site will be broken into and a lot of people will suffer a lot of grief.
RPM integrity
RPM packages can be (and typically are) PGP signed by the author. This signature can be checked to ensure the package has not been tampered with and is not a trojaned version. This is described in great detail in chapter 7 of "Maximum RPM" (online at http://www.rpm.org/), but consists of adding the developer's key to your public PGP keyring and then using the -K option, which will grab the appropriate key from the keyring and verify the signature. This way, to trojan a package and sign it correctly, an attacker would have to steal the developer's private PGP key and the password to unlock it, which should be near impossible.
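For example, using the package from the naming example earlier (exact output varies between rpm versions):

rpm -K nfs-server-2.2beta29-5.i386.rpm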
dpkg integrity
dpkg supports MD5, so you must somehow get the MD5 signatures through a trusted channel (like PGP signed email). MD5 ships with most distributions.
PGP signed files
Many tarballs are distributed with PGP signatures in separate ASCII files; to verify them, add the developer's key to your keyring and then use PGP with the -o option. This way, to trojan a package and sign it correctly, an attacker would have to steal the developer's private PGP key and the password to unlock it, which should be near impossible. PGP for Linux is available from: ftp://ftp.zedz.net/.
GnuPG signed files
Also used is GnuPG, a completely open source version of PGP that uses no patented algorithms. You can get it from: http://www.gnupg.org/.
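For example, with GnuPG, assuming the developer's public key is in key.asc and the detached signature is package.tar.gz.asc:

gpg --import key.asc
gpg --verify package.tar.gz.asc package.tar.gz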
MD5 signed files
Another way of signing a package is to create an MD5 checksum. The reason MD5 would be used at all (since anyone could create a valid MD5 signature of a trojaned software package) is that MD5 is pretty much universal and not controlled by export laws. The weakness is that you must somehow distribute the MD5 signatures in advance securely, and this is usually done via email when a package is announced (vendors such as Sun do this for patches).
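Checking is as simple as running md5sum on the downloaded file and comparing the output by eye against the announced value (the filename is illustrative):

md5sum tcp_wrappers_7.6.tar.gz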

Automating software updates

NSBD
NSBD (not-so-bad-distribution) is a method to automatically distribute and update software securely over the network. You can get it from: http://www.bell-labs.com/project/nsbd/.

Automating updates with RPM

AutoRPM
AutoRPM is probably the best tool for keeping RPMs up to date; simply put, you point it at an ftp directory and it downloads and installs any packages that are newer than the ones you have. Please keep in mind, however, that if someone poisons your DNS cache you will be easily compromised, so make sure you use the ftp site's IP address and not its name. Also you should consider pointing it at an internal ftp site with packages you have tested and have tighter control over. AutoRPM requires that you install the libnet package Net::FTP for Perl, and it is available from: http://www.kaybee.org/~kirk/html/linux.html.
Rhlupdate
Rhlupdate will also connect to an ftp site and grab any needed updates; the same caveats apply as above, and again it requires that you install the libnet package Net::FTP for Perl. It is available at: ftp://missinglink.darkorb.net/pub/rhlupdate/.
RpmWatch
RpmWatch is a simple Perl script that will install updates for you; note it will not suck down the packages you need, so you must mirror them locally or make them accessible locally via something like NFS or CODA. RpmWatch is available from: http://www.iaehv.nl/users/grimaldo/info/scripts/.

Automating updates with dpkg

Debian's software package management tools (dpkg and apt-get) support automated updates of packages and all their dependencies from a network ftp server. Simply create a script that is called by cron once a day (or more often if you are paranoid) that does:
#!/bin/bash
PATH=/usr/bin
# refresh the list of available packages from the sites in /etc/apt/sources.list
apt-get update
# download and install newer versions of all installed packages
apt-get upgrade
The only additional thing you will need to do is configure your download sites in /etc/apt/sources.list and general apt configuration in /etc/apt/apt.conf. You can download apt from: http://www.debian.org/Packages/stable/admin/apt.html.
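On Debian the simplest way to have such a script run daily is to drop it into /etc/cron.daily/ (mode 755); alternatively an /etc/crontab entry such as the following works (the script path is an assumption, and you will probably want apt-get's -y switch for unattended runs):

0 4 * * * root /usr/local/sbin/apt-update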

Automating updates with tarballs / tgz

No tools found; please tell me if you know of any (although beyond mirroring, automatically unpacking and running "./configure ; make ; make install", nothing really comes to mind, i.e. a ports collection similar to BSD's).

Tracking software installation

Usually when software is installed from source, as opposed to from a package, it has a tendency to go all over the place. Removing it can be an extremely troublesome task.
installwatch
installwatch monitors what a program does and logs any changes it makes to the system to syslog. It is similar to the "time" program in that it runs the program in a wrapped form so that it can monitor what happens; you run the program as "installwatch /usr/src/something/make" for example (optionally you can use "-o filename" to log to a specific file). installwatch is available from: http://datanord.datanord.it/~pdemauro/installwatch/.
instmon
instmon is run before and after you install a tarball/tgz package (or any package for that matter). It generates a list of files changed that you can later use to undo any changes. It is available from: http://hal.csd.auth.gr/~vvas/instmon/.

Converting file formats

Another way to deal with packages is to convert them. There are several utilities to convert rpm files to tarballs, rpms to debs, and so on.
alien
alien is probably the best utility around for converting package files; it handles rpms, debs and tarballs very well. You can download it from: http://kitenet.net/programs/alien/.
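For example, converting in either direction (the package names are illustrative):

alien --to-deb package.rpm
alien --to-rpm package.deb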
slurp
slurp takes an interesting approach: it behaves somewhat like installwatch, but also has some of the features of alien. It monitors the system as you install a package and creates an rpm file from this. You can get slurp from: http://students.vassar.edu/~jajohnst/slurp/.

Finding software

One major problem with Linux is finding software that did not ship with your distribution. Searching the Internet is a pain. There are some resources however that can ease the pain:

Secure programming

This whole guide exists because Linux and the software running on Linux systems is either insecurely written or insecurely set up. Many issues, such as buffer overruns, are due to bad programming and carelessness. These problems become especially bad when the software in question is setuid to run as root, or as any other privileged user or group. There are a variety of techniques and other measures that can be taken to make software safer.
Secure Linux Programming FAQ
This guide covers a lot of general techniques for secure programming as well as some Linux specific items. You can get it at: http://www.dwheeler.com/secure-programs/.
Secure UNIX Programming FAQ
This document covers a variety of techniques to make programs more secure, as well as some pretty low level items like inherited trust, sharing credentials, and so on. This document is available at: http://www.whitefang.com/sup/ and I highly recommend reading it if you plan to program in Linux (or UNIX in general).
Secure Internet Programming
Secure Internet Programming (SIP) is a laboratory (for lack of a better word) that studies computer security, and more specifically problems with mobile code such as Java and ActiveX. They have a number of interesting projects going, and many publications online that make excellent reading. If you are going to be writing Java code I would say you have to visit this site: http://www.cs.princeton.edu/sip/.
Writing Safe Setuid Programs
Writing Safe Setuid Programs is an extremely comprehensive work that covers most everything and is available in HTML format for easy reference. A must read for anyone that uses setuid software, let alone codes it. Available at: http://olympus.cs.ucdavis.edu/~bishop/secprog.html.
userv
userv allows programs to invoke other programs in a more secure manner than is typically used. It is useful for programs that require higher levels of access than a normal user, but to which you don't want to give root access. Available at: http://www.chiark.greenend.org.uk/~ian/userv/.

Testing software

There are a variety of common errors programmers make that leave software vulnerable to attacks. There are also tools to help find these problems and show the existence of other issues.
fuzz
Written by Ben Woodward, fuzz is a semi-intelligent program that feeds a program garbage, random, and other pseudo-hostile inputs and sees how it reacts (i.e. does it dump core and have a heart attack?). fuzz is available from: http://fuzz.sourceforge.net.

Compiler patches

There are several sets of patches for compilers to increase security.
Stackguard
Stackguard is a set of patches for GCC that compiles programs to prevent them from writing to locations in memory they shouldn't (a simplistic explanation; the Stackguard website has much better details). Stackguard does break some functionality, however; programs like gdb and other debuggers will fail, but this is generally not a concern for high security production servers. You can get Stackguard from: http://www.immunix.org/.
Stack Shield 
Stack Shield is an alternate method of protecting Linux binaries from buffer overflows; however I have not yet tried it. You can get it at: http://www.angelfire.com/sk/stackshield/.

Viruses

Overview

Linux is not susceptible to viruses in the same ways that a DOS/Windows or Mac platform is. In UNIX, security controls are a fundamental part of the operating system. For example, users are not allowed to write promiscuously to any location in memory that they choose, something that DOS/Windows and the Mac allow.
To be fair there are viruses for UNIX. However, the only Linux one I have seen was called "bliss"; it had an uninstall option ("--uninstall-please") and had to be run as root to be effective. Or to quote an old UNIX favorite: "if you don't know what an executable does, don't run it as root". Worms are much more prevalent in the UNIX world, the first major occurrence being the Morris Internet worm, which exploited a vulnerability in sendmail. Current Linux worms exploit broken versions of imapd, sendmail, WU-FTPD and other daemons. The simplest fix is to keep up to date and not make daemons accessible unless necessary. These attacks can be very successful, especially if they find a network of hosts that are not up to date, but typically their effectiveness fades out as people upgrade their daemons. In general I would not specifically worry about these two items, and there is definitely no need to buy anti-virus software for Linux.
Worms have a long and proud tradition in the UNIX world; by exploiting known security holes (very few exploit new/unknown holes) and replicating, they can quickly mangle a network(s). There are several worms currently making their way around Linux machines, mostly exploiting old Bind 4.x and old IMAP software. Defeating them is as easy as keeping software up to date.
Trojan horses are also popular. Recently ftp.win.tue.nl was broken into and the TCP_WRAPPERS package (among others) was modified to email passwords to an anonymous account. This was detected when someone checked the PGP signature of the package and found that it wasn't quite kosher. The moral of the story? Use software from trusted sites, and check the PGP signature(s).

Disinfection of viruses / worms / trojans

Back up your data, format and reinstall the system from known good media. Once an attacker has root on a Linux system they can literally do anything, from compromising gcc/egcs to loading interesting kernel modules at boot time. Do not run untrusted software as root. Check the PGP signatures on files you download, etc. An ounce of prevention will pretty much block the spread of viruses, worms and trojans under Linux.
The easiest method for dealing with viruses and the like is to use system integrity tools such as tripwire, L5, and Gog&Magog; with these you can easily find which files have been compromised and restore/replace/update them. There are also many anti-virus scanners available for Linux (but generally speaking there aren't any Linux viruses).
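On RPM based systems the package manager also gives you a rough integrity check for free; a sketch (note that an attacker with root access can modify the RPM database too, so this is no substitute for tripwire and friends):
rpm -Va
Each line of output flags a changed file; a '5' in the output string means the MD5 checksum no longer matches, i.e. the file's contents have been altered.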

Virus Scanners for Linux

As stated above viruses aren't a real concern in the Linux world; however, virus scanners that run on Linux can be useful. Filtering email and other content at the gateways to your network (everyone has Windows machines) can provide an extra line of defense, since the platforms providing the defense cannot (hopefully) be compromised by the threat they defend against. You may also wish to scan files stored on Linux file servers that are accessed by Windows clients. Luckily there are several good anti-virus programs available for Linux.
Sophos Anti-Virus
Sophos Anti-Virus is a commercial virus scanner that runs on a variety of Windows and UNIX platforms. It is free for personal use and relatively inexpensive for commercial use. You can get it at: http://www.sophos.com/.
AntiVir
AntiVir is another commercial virus scanner that runs on a variety of Windows platforms and Linux. You can get it from: http://www.hbedv.com/.
InterScan VirusWall
Trend Micro has ported this product to Linux and offers it for free download on their site. You can get it from: http://www.antivirus.com/products/isvw/.
F-Secure Anti-Virus
Data Fellows has ported their anti-virus scanner to Linux as well. You can get it at: http://www.europe.datafellows.com/products/
AVP
Kaspersky Lab has also ported their anti-virus scanner over to Linux, currently in beta, available at: http://www.kasperskylab.ru/eng/products/linux.html

Virus scanning of email

Also see the email server section for setting up virus scanning of incoming email (very useful if you have Windows clients).



Vendor / support contact information

Caldera OpenLinux
Debian GNU/Linux
LinuxCare
Support: http://www.linuxcare.com/
Support: 1-888-546-4878
NetMAX
Red Hat Linux
Slackware
Stormix
SuSE
TurboLinux

Backups

Overview

I don't know how many times I can tell people, but it never ceases to amaze me how often people are surprised that if they do not back up their data it will be gone, whether the drive suffers a head crash or they hit 'delete' without thinking. Always back up your system, even if it's just the config files; you'll save yourself time and money in the long run.
To back up your data under Linux there are many solutions, all with various pros and cons. There are also several industrial strength backup programs; the better ones support network backups, which are a definite plus in a large non-homogeneous environment.

Non-commercial backup programs for Linux

Tar and Gzip
Oldies but still goldies: tar and gzip. Why? Because like vi you can darn near bet the farm on the fact that any UNIX system will have tar and gzip. They may be slow, klunky and starting to show their age, but they are universal tools that will get the job done. I find with Linux the installation of a typical system takes 15-30 minutes depending on the speed of the network/cdrom, configuration another 5-15 (assuming I have backups or it is very simple), and data restoration takes as long as it takes (definitely not something you should rush). A good example: I recently backed up a server and then proceeded to blow the filesystem away (and remove 2 physical HD's that I no longer needed), then installed Red Hat 5.2, and reconfigured all 3 network cards, Apache (for about 10 virtual sites), BIND and several other services in about 15 minutes. If I had done it from scratch it would have taken me several hours. Simply:
tar -cvf archive-name.tar dir1 dir2 dir3....
to create the tarball of all your favorite files (typically /etc, /var/spool/mail/, /var/log/, /home, and any other user/system data), followed by a:
gzip -9 archive-name.tar
to compress it as much as possible (granted, hard drive space is cheaper than a politician's promise, but compressing it makes it easier to move around). You might want to use bzip2, which is quite a bit better than gzip at compressing text, but it is quite a bit slower. I typically then make a copy of the archive on a remote server, either by ftping it or emailing it as an attachment if it's not too big (e.g. the backup of a typical firewall is around 100k or so of config files).
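Putting it all together, a minimal sketch of the routine (the date stamp, host name and path are illustrative; scp is shown for the copy since, unlike ftp, it encrypts the transfer):
tar -cvf backup-19991201.tar /etc /var/spool/mail /var/log /home
gzip -9 backup-19991201.tar
scp backup-19991201.tar.gz admin@backuphost:/backups/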
rsync
rsync is an ideal way to move data between servers. It is very efficient for keeping large directory trees in sync (not real time mind you), and is relatively easy to configure and secure. rsync does not encrypt the data however, so you should use something like IPSec if the data is sensitive. rsync is covered here.
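A minimal sketch of mirroring a tree to a backup server (the host and paths are illustrative; -a preserves permissions and ownership, -z compresses the data over the wire, and --delete removes files on the far end that no longer exist locally):
rsync -avz --delete -e ssh /home/ backup@backuphost:/backups/home/
Using -e ssh encrypts the transfer, which works around rsync's lack of built-in encryption without needing IPSec.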
Amanda
Amanda is a client/server based network backup program with support for most unices and Windows (via SAMBA). Amanda is BSD style licensed and available from: http://www.amanda.org/.
afbackup
Afbackup is another client/server backup program, with a generally GPL license with one minor exception: development of the server portion on Windows is forbidden. Afbackup has server support for Linux, HP-UX and Solaris, and has clients for those platforms and Windows. You can download it at: ftp://ftp.zn-gmbh.com/pub/linux/.
Burt
Burt is a Tcl/Tk based set of extensions that allow for easy backups of UNIX workstations, which allows it to run on pretty much any system. Burt is a client/server architecture and appears pretty scalable; it is available at: http://www.cs.wisc.edu/~jmelski/burt/.

Commercial backup programs for Linux

BRU
BRU (Backup and Restore Utility) has been in the Linux world for about as long as Linux Journal (they have had ads in there since the beginning as far as I can tell). This program affords a relatively complete set of tools in a nice unified format, with a command line and a graphical front end (easy to automate, in other words). It supports full, incremental and differential backups, as well as catalogs, and can write to a file or tape drive; basically a solid, simple, easy to use backup program. BRU is available at http://www.estinc.com/features.html.
Quickstart
Quickstart is aimed more at making an image of the system so that when the hard drive fails you can quickly re-image a blank disk and have a working system. It can also be used to 'master' a system and then load other systems quickly (as an alternative to, say, Red Hat's kickstart). It's reasonably priced as well and garnered a good review in Linux Journal (Nov 1998, page 50). You can get it at: http://www.estinc.com/qsdr.html.
Backup Professional
CTAR
CTAR:NET
PC ParaChute
Arkeia
Arkeia is a very powerful backup program with a client-server architecture that supports many platforms. This is an 'industrial' strength product appropriate for heterogeneous environments; it was reviewed in Linux Journal (April 1999, page 38), and you can download a shareware version online and give it a try. The URL is: http://www.arkeia.com/.
Legato Networker
Legato Networker is another enterprise class backup program, with freely available (but unsupported) Linux clients. Legato Networker is available at: http://www.legato.com/Products/html/legato_networker.html and the Linux clients are available from: ftp://ftp.legato.com/pub/Unsupported/Linux_Client/.
Perfect Backup
Perfect Backup supports almost all Linux distributions and has crash recovery. You can get it from: http://www.merlinsoftech.com/nonflash/merlinhome.htm.

Pro's and con's of backup media

There are more things to back data up onto than you could drive a Range Rover over, but here are some of the more popular/sane alternatives:
Hard Drive
  Pros: It's fast. It's cheap ($20-$30 USD per gig). It's pretty reliable, and RAID is a viable option. 20 gig drives are $350 USD now.
  Cons: It might not be big enough, and they do fail, usually at the worst possible time. Harder to take offsite as well.
CDROM
  Pros: Not susceptible to EMP, and everyone in the developed world has a CDROM drive. Media is also pretty sturdy and cheap ($2 USD per 650 megs or so).
  Cons: CDROM's have a finite shelf life of 5-15 years, and not all recordables are equal. Keep away from sunlight, and make sure you have a CDROM drive that will read them.
Tape
  Pros: It's reliable, you can buy BIG tapes, tape carousels and tape robots, and they're getting cheap enough for almost everyone to own one.
  Cons: Magnetic media with a finite life span, and some tapes can be easily damaged (you get what you pay for); also make sure the tapes can be read on other tape drives (in case the server burns down....).
Floppies
  Pros: I'm not kidding, there are rumors some people still use these to back up data. Great for config files.
  Cons: It's a floppy. They go bad and are very small.
Zip Disks
  Pros: I have yet to damage one, nor have my cats. They hold 100 megs, which is good enough for most single user machines. The IDE and SCSI models are passably fast.
  Cons: Not everyone has a zip drive, and they are magnetic media. The parallel port models are abysmally slow. Watch out for the click of death.
Jazz Drives
  Pros: 1 or 2 gig removable hard drives; my SCSI one averages 5 meg/sec writes.
  Cons: They die. I'm on my third drive. The platters also have a habit of going south if used heavily. And they aren't cheap.
SyQuest
  Pros: 1.6 gigs, sealed platter; sealed cartridges are more reliable.
  Cons: Same failure modes as above, and the company did recently declare bankruptcy, so no warranty service.
LS120
  Pros: 120 megs, and cheap; gaining in popularity.
  Cons: Slow. I'm not kidding. 120 megs over a floppy controller, on something advertised as "up to 3-4 times faster than a floppy drive".
Printer
  Pros: Very long shelf life. Handy for showing consultants and as reference material. Cannot be easily altered.
  Cons: Requires a standard Mark 1 human being as a reading device. You want to retype a 4000 entry password file? OCR is another option as well.

The Linux kernel

Overview
Linux (or GNU/Linux, according to Stallman, if you're referring to a complete distribution) is actually just the kernel of the operating system. The kernel is the core of the system; it handles access to the hard drive, security mechanisms, networking and pretty much everything. It had better be secure or you are screwed.
In addition to this we have hardware problems like the Pentium F00F bug, and problems inherent to the TCP/IP protocol, so the Linux kernel has its work cut out for it. Kernel versions are labeled as X.Y.Z: Z is the minor revision number, Y defines whether the kernel is a test (odd number) or production (even number) series, and X defines the major revision (we have had 0, 1 and 2 so far). I would highly recommend running a 2.2.x kernel; as of December 1999 this is 2.2.13. The 2.2.x series of kernels has major improvements over the 2.0.x series. Using the 2.2.x kernels also allows you access to newer features such as ipchains (instead of ipfwadm) and other advanced security features. The 2.0.x series has also been officially discontinued as of June 1999. To find out what the latest kernels are, simply finger @linux.kernel.org:
[seifried@mail kernel-patches]$ finger @linux.kernel.org
[linux.kernel.org]

The latest stable version of the Linux kernel is: 2.2.13
The latest beta version of the Linux kernel is: 2.3.29
The latest prepatch (alpha) version *appears* to be: 2.3.30-3

Upgrading and Compiling the Kernel

Upgrading the kernel consists of getting a new kernel and modules, editing /etc/lilo.conf, and rerunning LILO to write a new MBR. The kernel will typically be placed into /boot, and the modules in /lib/modules/kernel.version.number/.
Getting a new kernel and modules can be accomplished in two ways: by downloading the appropriate kernel package and installing it, or by downloading the source code from ftp://ftp.kernel.org/ (please use a mirror site) and compiling it.
Compiling and installing a kernel:
cd /usr/src
There should be a symlink called "linux" pointing to the directory containing the current kernel; remove it if there is one, and if there isn't, no problem. You might want to "mv" the linux directory to /usr/src/linux-kernel.version.number and create a link pointing /usr/src/linux at it.
Unpack the source code using tar and gzip as appropriate so that you now have a /usr/src/linux with about 50 megabytes of source code in it. The next step is to create the kernel configuration (/usr/src/linux/.config); this can be achieved using "make config", "make menuconfig" or "make xconfig". My preferred method is "make menuconfig" (for this you will need the ncurses and ncurses devel libraries). This is arguably the hardest step; there are hundreds of options, which can be categorized into two main areas: hardware support and service support. For hardware support make a list of the hardware that this kernel will be running on (i.e. P166, Adaptec 2940 SCSI controller, NE2000 Ethernet card, etc.) and turn on the appropriate options. As for service support, you will need to figure out which filesystems (fat, ext2, minix, etc.) you plan to use, and the same for networking (firewalling, etc.).
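A minimal sketch of the unpack and symlink steps, assuming /usr/src/linux is currently a real directory and you are moving from 2.2.9 to 2.2.13 (the version numbers and tarball path are illustrative):
cd /usr/src
mv linux linux-2.2.9    #(preserve the old source tree)
tar xzvf /tmp/linux-2.2.13.tar.gz       #(unpacks into /usr/src/linux)
mv linux linux-2.2.13
ln -s linux-2.2.13 linux
cd linux
make menuconfig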
Once you have configured the kernel you need to compile it. The following commands make the dependencies (ensuring that libraries and so forth get built in the right order), clean out any information from previous compiles, build the kernel, build the modules and install the modules.
make dep                #(makes dependencies)
make clean              #(cleans out previous cruft)
make bzImage            #(make zImage pukes if the kernel is too big, and 2.2.x kernels tend to be pretty big)
make modules            #(creates all the modules you specified)
make modules_install    #(installs the modules to /lib/modules/kernel.version.number/)
You then need to copy /usr/src/linux/arch/i386/boot/bzImage (or zImage) to /boot/vmlinuz-kernel.version.number. Then edit /etc/lilo.conf, adding a new entry for the new kernel; setting it as the default image (using the default=X directive, otherwise it will boot the first kernel listed) is the safest way, since if it fails you can reboot and go back to the previous working kernel.
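A sketch of the copy step, assuming the new kernel is 2.2.13 (copying System.map as well is customary, so that tools like klogd can resolve kernel symbols):
cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.2.13
cp /usr/src/linux/System.map /boot/System.map-2.2.13
An /etc/lilo.conf that keeps the old kernel around as a fallback might look like: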
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=linux
image=/boot/vmlinuz-2.2.9
        label=linux
        root=/dev/hda1
        read-only
image=/boot/vmlinuz-2.2.5
        label=linuxold
        root=/dev/hda1
        read-only
Once you have finished editing /etc/lilo.conf you must run /sbin/lilo to rewrite the MBR (Master Boot Record). When LILO runs you will see output similar to:
Added linux *
Added linuxold
It will list the images recorded in the MBR and indicate with a * which one is the default (typically the default to load is the first image listed, unless you explicitly specify one using the default directive).

Kernel versions

Currently the stable kernel release series is 2.2.x, and the development series is 2.3.x. The 2.1.x development series of kernels is not recommended; there are many problems and inconsistencies. The 2.0.x series of kernels, while old and lacking some features, is relatively solid; unfortunately the upgrade from 2.0.x to 2.2.x is a pretty large step, so I would advise caution. Several software packages must be updated: libraries, ppp, modutils and others (they are covered in the kernel docs / rpm dependencies / etc.). Additionally, keep the old working kernel and add an entry in lilo.conf for it as "linuxold" or something similar, and you will be able to easily recover in the event 2.2.x doesn't work out as expected. Don't expect the 2.2.x series to be bug free; 2.2.9 will be found to contain flaws and will become obsolete, like every piece of software in the world.
There are a variety of kernel level patches that can enhance the security of a Linux system. Some prevent buffer overflow exploits, others provide strong crypto.

Kernel patches

There are a variety of kernel patches directly related to security.
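Applying any of these is broadly similar; a minimal sketch, with a hypothetical patch file name (always read the instructions shipped with the specific patch first):
cd /usr/src/linux
patch -p1 < /path/to/secure-linux.diff
You then reconfigure and rebuild the kernel as described earlier.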
Secure Linux kernel patch
This patch solves a number of issues and provides another level of security for the system. The patch is available for the 2.0 and 2.2 kernel series. You can get it from: http://www.openwall.com/linux/.
International kernel patch
This patch (over a megabyte in size!) adds a huge amount of strong crypto and related items. It includes several encryption algorithms that were AES candidates (including MARS from IBM). You can get it from: http://www.kerneli.org/.
Linux Intrusion Detection System Patch (LIDS)
This patch adds a number of interesting capabilities, primarily aimed at attack detection. You can "lock" file mounts, firewall rules, and a variety of other interesting options are available. You can get it from: http://www.soaring-bird.com.cn/oss_proj/lids/.
Linux trustees (ACL) project
The Linux trustees (ACL) project is a series of kernel patches and utilities to configure ACL access to the filesystem. This solution is still a bit klunky, as it keeps the permissions in a file and acts as a filtering layer between the file and the users; it isn't actually a proper ACL enabled filesystem (but it is a start). You can get it at: http://www.braysystems.com/linux/trustees.html.
RSBAC
Rule Set Based Access Control is a comprehensive set of patches and utilities to control various aspects of the system, from filesystem ACL's and up. You can get it from: http://www.rsbac.de/rsbac/.
LOMAC
LOMAC (Low Water-Mark Mandatory Access Control for Linux) is a set of kernel patches to enhance Linux security. You can get it at: ftp://ftp.tislabs.com/pub/lomac/.
auditd
auditd allows you to use the kernel logging facilities (a very powerful tool). You can log mail messages, system events and the normal items that syslog would cover, but in addition you can cover events such as specific users opening files, the execution of programs (including setuid programs), and so on. If you need a solid audit trail then this is the tool for you; you can get it at: ftp://ftp.hert.org/pub/linux/auditd/.
Fork Bomb Defuser
A loadable kernel module that allows you to control the maximum number of processes per user and the maximum number of forks; very useful for shell servers with untrusted users. You can get it from: http://rexgrep.tripod.com/rexfbdmain.htm.
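If patching the kernel is not an option, a much cruder userland control along the same lines is the shell's built-in resource limit; a sketch using bash (the number is an arbitrary example):
ulimit -u 100   #(cap the number of processes this user may run)
Placed in a system-wide shell startup file this blunts casual fork bombs, though unlike the kernel module it only covers processes started through that shell.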

Debugging the Linux kernel

KDB v0.6 (Built-in Kernel Debugger)
An SGI kernel debugger, available at: http://oss.sgi.com/projects/kdb/.
kGDB (Remote kernel debugger)
SGI has written a tool that allows you to do kernel debugging remotely, which is a big step up from being tied to the console. You can get it at: http://oss.sgi.com/projects/kgdb/.

Checklists

Internet connection checklist
  • Turn off all unnecessary services
  • Use firewalling to block access to services if possible
  • Use TCP_WRAPPERS to restrict access to services (a sketch follows this list)
  • Run nmap and nessus against the host minimally
  • SSL wrap services such as POP and IMAP
  • Use SSH instead of Telnet
  • Ensure software is up to date
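A minimal sketch of the TCP_WRAPPERS item, denying everything by default and then allowing only what you need (the daemon and network below are illustrative; the daemon must be run from inetd via tcpd or be compiled with libwrap support):
# /etc/hosts.deny
ALL: ALL
# /etc/hosts.allow
in.ftpd: 10.0.0.0/255.255.255.0
You can then check your work from the outside with something like nmap -sT hostname.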

Appendix A: Books and magazines

Sendmail - http://www.oreilly.com/catalog/sendmail2/
Linux Network Admin Guide (NAG) - http://www.oreilly.com/catalog/linag/
Running Linux - http://www.oreilly.com/catalog/runux2/noframes.html
DNS & BIND - http://www.oreilly.com/catalog/dns3/
Apache - http://www.oreilly.com/catalog/apache2/
Learning The Bash Shell - http://www.oreilly.com/catalog/bash2/
Building Internet Firewalls - http://www.oreilly.com/catalog/fire/
Computer Crime - http://www.oreilly.com/catalog/crime/
Computer Security Basics - http://www.oreilly.com/catalog/csb/
Cracking DES - http://www.oreilly.com/catalog/crackdes/
Essential System Administration - http://www.oreilly.com/catalog/esa2/
Linux in a Nutshell - http://www.oreilly.com/catalog/linuxnut2/
Managing NFS and NIS - http://www.oreilly.com/catalog/nfs/
Managing Usenet - http://www.oreilly.com/catalog/musenet/
PGP - http://www.oreilly.com/catalog/pgp/
Practical Unix and Internet Security - http://www.oreilly.com/catalog/puis/
Using and Managing PPP - http://www.oreilly.com/catalog/umppp/
Virtual Private Networks - http://www.oreilly.com/catalog/vpn2/
Red Hat/SAMS also publish several interesting books:
Maximum RPM (available as a postscript document on http://www.rpm.org/)
Red Hat User's Guide (available as HTML on ftp://ftp.redhat.com/)
SNMP, SNMPv2 and RMON - W. Stallings (ISBN: 0-201-63479-1)
Magazines:
Linux Journal (of course, monthly)
Sys Admin (intelligent articles, monthly)
Perl Journal (quarterly)
Information Security - http://www.infosecuritymag.com/

Appendix C: Other Linux security documentation

The Linux CIPE + Masquerading mini-HOWTO - http://metalab.unc.edu/LDP/HOWTO/mini/Cipe+Masq.html

Appendix D: Online security documentation

SECURITY RISK ANALYSIS AND MANAGEMENT - http://www.norman.com/local/whitepaper.htm
An Introduction to Information Security - http://www.certicom.com/ecc/wecc1.htm
Guidelines for the Secure Operation of the Internet - http://sunsite.cnlab-switch.ch/ftp/doc/standard/rfc/12xx/1281
How to Handle and Identify Network Probes - http://www.network-defense.com/papers/probes.html
Free Firewall and related tools (large) - http://sites.inka.de/sites/lina/freefire-l/index_en.html
Internet FAQ Consortium (You want FAQ's? We got FAQ's!) - http://www.faqs.org/
An Architectural Overview of UNIX Network Security - http://www.alw.nih.gov/Security/Docs/network-security.html
The human side of computer security (an article on social engineering) - http://www.sunworld.com/sunworldonline/swol-07-1999/swol-07-security.html
General security research and development - http://www.sekure.net/
Some general whitepapers and articles - http://www.enteract.com/~lspitz/pubs.html
Coast hotlist (huge list of resources) - http://www.cerias.purdue.edu/coast/hotlist/

Appendix E: General security sites

SecurityPortal, which has a Linux section, this document and my weekly column (it's a great site!) - http://www.securityportal.com/
Open Security Solutions - http://www.opensec.net/
Security Mailing Lists - http://www.iss.net/vd/mail.html
8 Little Green Men - http://www.8lgm.org/
Robert's Cryptography, PGP & Privacy Links - http://www.interlog.com/~rguerra/www/

Appendix F: General Linux sites

Linux Administration Made Easy (LAME) - http://www.LinuxNinja.com/linux-admin/