Saturday, November 19, 2011

Linux Administrator's Security Guide - I

Getting started
General concepts, server versus workstations
There are many issues that affect the actual security setup of a computer. How secure does it need to be? Is the machine networked? Will there be interactive user accounts (telnet/SSH)? Will users be using it as a workstation or is it a server? The last one has a big impact since "workstations" and "servers" have traditionally been very different beasts, although the line is blurring with the introduction of very powerful and cheap PCs, as well as operating systems that take advantage of them. The main difference in today's world between computers is usually not the hardware, or even the OS (Linux is Linux, NT Server and NT Workstation are close family, etc.), it is in what software packages are loaded (Apache, X, etc.) and how users access the machine (interactively, at the console, and so forth). Some general rules that will save you a lot of grief in the long run:
1. Keep users off of the servers. That is to say: do not give them interactive login shells, unless you absolutely must.
2. Lock down the workstations; assume users will try to 'fix' things (heck, they might even be hostile, temp workers/etc).
3. Use encryption wherever possible to keep plain text passwords, credit card numbers and other sensitive information from lying around.
4. Regularly scan the network for open ports/installed software/etc. that shouldn't be there, and compare the results against previous scans.
Remember: security is not a solution; it is a way of life (or a procedure if you want to tell your manager).
Generally speaking, workstations/servers are used by people who don't really care about the underlying technology; they just want to get their work done and retrieve their email in a timely fashion. There are however many users that will have the ability to modify their workstation, for better or worse (install packet sniffers, warez ftp sites, www servers, IRC bots, etc.). To add to this, most users have physical access to their workstations, meaning you really have to lock them down if you want to do it right.
1. Use BIOS passwords to lock users out of the BIOS (they should never need to be in there; also remember that older BIOSes have universal passwords).
2. Set the machine to boot from the appropriate harddrive only.
3. Password the LILO prompt.
4. Do not give the user root access, use sudo to tailor access to privileged commands as needed.
5. Use firewalling so even if they do setup services they won’t be accessible to the world.
6. Regularly scan the process table, open ports, installed software, and so on for change.
7. Have a written security policy that users can understand, and enforce it.
8. Remove all sharp objects (compilers, etc) unless needed from a system.
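The sudo approach from point 4 can be sketched in /etc/sudoers (always edit it via visudo; the username here is hypothetical). This grants only shutdown and reboot, nothing else:

```
# Allow user 'jdoe' to shut down and reboot this workstation, and nothing else
jdoe    ALL = /sbin/shutdown, /sbin/reboot
```

Any command not listed is denied, so a typo in a user's request fails safely instead of granting root.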
Remember: security in depth.
Properly set up, a Linux workstation is almost user-proof (nothing is 100% secure), and generally a lot more stable than a comparable Wintel machine. With the added joy of remote administration (SSH/Telnet/NSH) you can keep your users happy and productive.
Servers are a different ball of wax altogether, and generally more important than workstations (if one workstation dies, one user is affected; if the email/www/ftp/etc. server dies your boss phones up in a bad mood). Unless there is a strong need, keep the number of users with interactive shells (bash, pine, lynx based, whatever) to a bare minimum. Segment services up (have a mail server, a www server, and so on) to minimize single points of failure. Generally speaking a properly setup server will run and not need much maintenance (I have one email server at a client location that has been in use for 2 years with about 10 hours of maintenance in total). Any upgrades should be planned carefully and executed on a test system first. Some important points to remember with servers:
1. Restrict physical access to servers.
2. Follow a policy of least privilege; users can break fewer things this way.
3. Regularly check the servers for changes (ports, software, etc.); automated tools are great for this.
4. Software changes should be carefully planned/tested, as they can have adverse effects (kernel 2.2.x no longer uses ipfwadm, for example; wouldn't it be embarrassing if you forgot to install ipchains).
Minimization of privileges means giving users (and administrators for that matter) the minimum amount of access required to do their job. Giving a user "root" access to their workstation would make sense if all users were Linux savvy and trustworthy, but they generally aren't (on both counts). And even if they were it would be a bad idea, as chances are they would install some broken or insecure piece of software. If all a user needs to be able to do is shutdown/reboot the workstation, then that is the amount of access they should be granted. You certainly wouldn't leave accounting files on a server with world readable permissions just so that the accountants can view them; this concept extends across the network as a whole. Limiting access will also limit damage in the event of an account penetration (have you ever read the post-it notes people put on their monitors?).

Installation of Linux

A proper installation of Linux is the first step to a stable, secure system. There are various tips and tricks to make the install go easier, as well as some issues that are best handled during the install (such as disk layout).

Choosing your install media

This is the #1 issue that will affect speed of install and to a large degree safety. My personal favorite is ftp installs since popping a network card into a machine temporarily (assuming it doesn't have one already) is quick and painless, and going at 1+ megabyte/sec makes for quick package installs. Installing from CD-ROM is generally the easiest, as they are bootable, Linux finds the CD and off you go, no pointing to directories or worrying about filename case sensitivity (as opposed to doing a harddrive based install). This is also original Linux media and you can be relatively sure it is safe (assuming it came from a reputable source), if you are paranoid however feel free to check the signatures on the files.
  • FTP - quick, requires network card, and an ftp server (Windows box running something like warftpd will work as well). 
  • HTTP - also fast, and somewhat safer than running a public FTP server for installs
  • Samba - quick, good way if you have a windows machine (share the CDROM out). 
  • NFS - not as quick, but since NFS is usually implemented in most existing UNIX networks (and NT now has an NFS server from MS for free) it's mostly painless. NFS is the only network install supported by Red Hat’s kickstart.
  • CDROM - if you have a fast CDROM drive, your best bet, pop the CD and boot disk in, hit enter a few times and you are done. Most Linux CDROM’s are now bootable.
  • Harddrive - generally the most painful, windows messes up filenames/etc, installing from an ext2 partition is usually painless though (catch 22 for new users however). 

CD ISO images

If you want to burn your own CD of distribution X, then head over to and burn it onto CD.

Red Hat kickstart

Red hat provides a facility for automating installs, which can be very useful. Simply put you create a text file with various specifications for the install, and point the Red Hat installer at it, then sit back and let it go. This is very useful if rolling out multiple machines, or giving users a method of recovery (assuming their data files are safe). You can get more information at:

It ain't over 'til...

So you've got a fresh install of Linux (Red Hat, Debian, whatever; please, please, DO NOT install really old versions and try to upgrade them, it's a nightmare), but chances are there is a lot of extra software installed, and packages you might want to upgrade, or things you had better upgrade if you don't want the system compromised in the first 15 seconds of uptime (in the case of BIND/Sendmail/etc.). Keeping a local copy of the updates directory for your distribution is a good idea (there is a list of errata for distributions at the end of this document), and making it available via NFS/ftp or burning it to CD is generally the quickest way to make it available. There are also other items you might want to upgrade, for instance imapd or bind. You will also want to remove any software you are not using, and/or replace it with more secure versions (such as replacing RSH with SSH).
Bastille Linux
If you are running Red Hat Linux you might want to use the Bastille Linux hardening script available at:

System security

Physical access

This area is covered in depth in the "Practical Unix and Internet Security" book, but I'll give a brief overview of the basics. Someone turns your main accounting server off, turns it back on, boots it from a specially made floppy disk and transfers payroll.db to a foreign ftp site. Unless your accounting server is locked up what is to prevent a malicious user (or the cleaning staff of your building, the delivery guy, etc.) from doing just that? I have heard horror stories of cleaning staff unplugging servers so that they could plug their cleaning equipment in. I have seen people accidentally knock the little reset switch on power bars and reboot their servers (not that I have ever done that). It just makes sense to lock your servers up in a secure room (or even a closet). It is also a very good idea to put the servers on a raised surface to prevent damage in the event of flooding (be it a hole in the roof or a super gulp slurpee). 

The computer BIOS

The computer's BIOS is one of the lowest-level components; it controls how the computer boots and a variety of other things. Older BIOSes are infamous for having universal passwords, so make sure your BIOS is recent and does not contain such a backdoor. The BIOS can be used to lock the boot sequence of a computer to C: only, i.e. the first harddrive; this is a very good idea. You should also use the BIOS to disable the floppy drive (typically a server will not need to use it), which prevents users from copying data off of the machine onto floppy disks. You may also wish to disable the serial ports in users' machines so that they cannot attach modems; most modern computers use PS/2 keyboards and mice, so there is very little reason for a serial port in any case (plus they eat up IRQs). The same goes for the parallel port: allowing users to print in a fashion that bypasses your network, or giving them the chance to attach an external CDROM burner or harddrive, can decrease security greatly. As you can see this is an extension of the policy of least privilege and can decrease risks considerably, as well as making network maintenance easier (fewer IRQ conflicts, etc.). There are of course programs to get the BIOS password from a computer; one such program, available for DOS and Linux, can be found at:


LILO

Once the computer has decided to boot from C:, LILO (or whichever bootloader you use) takes over. Most bootloaders allow for some flexibility in how you boot the system, LILO especially so, but this is a double-edged sword. You can pass LILO arguments at boot time, the most damaging (from a security point of view) being "image-name single", which boots Linux into single user mode and, by default in most distributions, dumps you to a root prompt in a command shell with no prompting for passwords or other pesky security mechanisms. Several techniques exist to minimize this risk.
delay=X
This controls how long (in tenths of a second) LILO waits for user input before booting the default selection. One of the requirements of C2 security is that this interval be set to 0 (obviously a dual-boot machine blows most security out of the water). It is a good idea to set this to 0 unless the system dual boots something else.
prompt
Forces the user to enter something; LILO will not boot the system automatically. This could be useful on servers as a way of disabling reboots without a human attendant, but typically if an attacker has the ability to reboot the system they could rewrite the MBR with new boot options anyway. If you add a timeout option, however, the system will continue booting after the timeout is reached.
restricted
Requires a password to be used if boot time options (such as "linux single") are passed to the boot loader. Make sure you use this one on each image (otherwise the server will need a password to boot, which is fine if you're never planning to remotely reboot it).
password=XXXXX
Requires the user to input a password; used in conjunction with restricted. Also make sure lilo.conf is no longer world readable, or any user will be able to read the password.
Here is an example of lilo.conf from one of my servers.
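The lilo.conf listing itself did not survive in this copy; what follows is a reconstruction consistent with the description below (the map/install lines and root device are typical defaults of the era, not taken from the original):

```
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=100
default=linux
image=/boot/vmlinuz-2.2.5
        label=linux
        root=/dev/hda1
        read-only
        restricted
        password=s0m3_pAsSw0rD_h3r3
```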
This boots the system using the /boot/vmlinuz-2.2.5 kernel, stored on the first portion (right after the MBR) of the first IDE harddrive of the system. The prompt keyword would normally stop unattended rebooting, but since a timeout is set the system will boot "linux" after 10 seconds with no problem; because restricted is set in the image, you would be prompted for the password ("s0m3_pAsSw0rD_h3r3") if you entered "linux single". Combine this with a BIOS set to only boot from C: and password protected and you have a pretty secure system. One minor security measure you can take to secure the lilo.conf file is to set it immutable, using the "chattr" command. To set the file immutable simply:
chattr +i /etc/lilo.conf
and this will prevent any changes (accidental or otherwise) to the lilo.conf file. If you wish to modify lilo.conf you will need to unset the immutable flag first (and remember to re-run /sbin/lilo after any change so it takes effect):
chattr -i /etc/lilo.conf
only the root user has access to the immutable flag.



"Pluggable Authentication Modules for Linux is a suite of shared libraries that enable the local system administrator to choose how applications authenticate users." Straight from the PAM documentation, I don't think I could have said it any better. But what does this actually mean? For example; take the program “login”, when a user connects to a tty (via a serial port or over the network) a program answers the call (getty for serial lines, usually telnet or SSH for network connections) and starts up the “login” program, “login” then typically requests a username, followed by a password, which it checks against the /etc/passwd file. This is all fine and dandy until you have a spiffy new digital card authentication system and want to use it. Well you will have to recompile login (and any other apps that will do authentication via the new method) so they support the new system. As you can imagine this is quite laborious and prone to errors. 
PAM introduces a layer of middleware between the application and the actual authentication mechanism. Once a program is PAM'ified, any authentication method PAM supports will be usable by the program. In addition to this, PAM can handle account and session data, which is something normal authentication mechanisms don't do very well. For example, using PAM you can easily disallow login access by normal users between 6pm and 6am, and when they do login you can have them authenticate via a retinal scanner. By default Red Hat systems are PAM aware, and newer versions of Debian are as well (see below for a table of PAM'ified systems). Thus on a system with PAM support all I have to do to implement shadow passwords is convert the password and group files, and possibly add one or two lines to some PAM config files (if they weren't already added). Essentially, PAM gives you a great deal of flexibility when handling user authentication, and will support other features in the future such as digital signatures, with the only requirement being a PAM module or two to handle it. This kind of flexibility will be required if Linux is to be an enterprise-class operating system. Distributions that do not ship as "PAM-aware" can be made so, but it requires a lot of effort (you must recompile all your programs with PAM support, install PAM, etc.); it is probably easier to switch straight to a PAM'ified distribution if this will be a requirement. PAM usually comes with complete documentation, and if you are looking for a good overview you should visit:
Another benefit of a PAM-aware system is that you can now make use of an NT domain to do your user authentication, meaning you can tie Linux workstations into an existing Microsoft-based network without having to, say, buy NIS/NIS+ for NT and go through the hassle of installing that.
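The 6pm-to-6am example above can be sketched with the pam_time module; the fragment below is a hypothetical illustration (the service name and tty pattern are assumptions, and the time.conf syntax follows the pam_time documentation: services;ttys;users;times):

```
# /etc/pam.d/login (add to the account stack)
account    required     pam_time.so

# /etc/security/time.conf: allow non-root logins on ttys only 06:00-18:00
login ; tty* ; !root ; Al0600-1800
```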

Distribution Version PAM Support
Red Hat 5.0, 5.1, 5.2, 6.0 Completely
Debian 2.1 Yes
Caldera 1.3, 2.2 Completely
TurboLinux 3.6 Completely
SuSE 6.2 Yes

There are more distributions that support PAM and are not listed here, so please tell me about them.
PAM Cryptocard Module
Pam Smart Card Module
Pam module for SMB


Passwords

In all UNIX-like operating systems there are several constants, and one of them is the file /etc/passwd and how it works. For user authentication to work properly you need (minimally) some sort of file(s) with UID to username mappings, GID to groupname mappings, passwords for the users, and other miscellaneous info. The problem with this is that everyone needs access to the passwd file; every time you do an ls it gets checked, so how do you store all those passwords safely, yet keep them world readable? For many years the solution has been quite simple and effective: simply hash the passwords and store the hash; when a user needs to authenticate, take the password they enter, hash it, and if it matches then it was obviously the same password. The problem with this is that computing power has grown enormously and I can now take a copy of your passwd file and try to brute force it open in a reasonable amount of time. So to solve this several solutions exist:
  • Use a 'better' hashing algorithm like MD5. Problem: can break a lot of things if they’re expecting something else. 
  • Store the passwords elsewhere. Problem: the system/users still need access to them, and it might cause some programs to fail if they are not setup for this. 
Several OS's take the first solution, Linux has implemented the second for quite a while now, it is called shadow passwords. In the passwd file, your passwd is simply replaced by an 'x', which tells the system to check your passwd against the shadow file. Anyone can still read the passwd file, but only root has read access to the shadow file (the same is done for the group file and its passwords). Seems simple enough but until recently implementing shadow passwords was a royal pain. You had to recompile all your programs that checked passwords (login, ftpd, etc, etc) and this obviously takes quite a bit of effort. This is where Red Hat shines through, in its reliance on PAM.
To implement shadow passwords you must do two things. The first is relatively simple, changing the password file, but the second can be a pain. You have to make sure all your programs have shadow password support, which can be quite painful in some cases (this is a very strong reason why more distributions should ship with PAM).
Because of Red Hat's reliance on PAM for authentication, to implement a new authentication scheme all you need to do is add a PAM module that understands it and edit the config file for whichever program (say login), allowing it to use that module to do authentication. No recompiling, and a minimal amount of fuss and muss, right? In Red Hat 6.0 you are given the option during installation to choose shadow passwords, or you can implement them later via the pwconv and grpconv utilities that ship with the shadow-utils package. Most other distributions also have shadow password support, and implementation difficulty varies somewhat. Now for an attacker to look at the hashed passwords they must go to quite a bit more effort than simply copying the /etc/passwd file. Also make sure to occasionally run pwconv and grpconv in order to ensure all passwords are in fact shadowed; sometimes passwords will get left in /etc/passwd, and not be moved to /etc/shadow as they should be, by some utilities that edit the password file.
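A quick way to spot entries that were left unshadowed is to look for anything other than an "x" (or "*") in the second field of the password file. Here is a sketch using awk; it runs against a made-up sample file so it is safe to try, but in real use you would point it at /etc/passwd (the account names and hash are fabricated):

```shell
# Build a sample passwd-format file; on a shadowed system field 2 is "x".
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
alice:$1$ab$FaKeHaSh:500:500:Alice:/home/alice:/bin/bash
EOF
# Print any account whose hash is still stored in the passwd file itself.
awk -F: '$2 != "x" && $2 != "*" && $2 != "" {print $1}' /tmp/passwd.sample   # → alice
```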

Cracking passwords

In Linux the passwords are stored in a hashed format; this does not, however, make them irretrievable. Chances are you cannot reverse engineer the password from the resulting hash, but you can hash a list of words and compare the results. If the results match then you have found the password; this is why good passwords are critical, and dictionary words are a terrible idea. Even with a shadowed passwords file the passwords are still accessible by the root user, and if you have improperly written scripts or programs that run as root (say a www based CGI script) the password file may be retrieved by attackers. The majority of current password cracking software also allows running on multiple hosts in parallel to speed things up.
John the ripper
An efficient password cracker available from:
The original widespread password cracker (as far as I know), you can get it at:
VCU (Velocity Cracking Utilities) is a Windows-based program to aid in cracking passwords; "VCU attempts to make the cracking of passwords a simple task for computer users of any experience level." You can download it from:
I hope this is sufficient motivation to use shadow passwords and a stronger hash like MD5 (which Red Hat 6.0 supports, I don’t know of other distributions supporting it).

Password storage

This is something many people don't think about much. How can you securely store passwords? The most obvious method is to memorize them; this however has its drawbacks. If you administer 30 different sites you generally want to have 30 different passwords, and a good password is 8+ characters in length and generally not the easiest thing to remember. This leads to many people using the same passwords on several systems (come on, admit it). One of the easiest methods is to write passwords down. This is usually a BIG NO-NO; you'd be surprised what people find lying around, and what they find if they are looking for it. A better option is to store passwords in an encrypted format, usually electronically on your computer or Palm Pilot; this way you only have to remember one password to unlock the rest, which you can then use. Something as simple as PGP or GnuPG can be used to accomplish this.
Gpasman is an application that requires GTK (relatively easy to install on a non-GNOME system, just load up the GTK libs). It encrypts your passwords using the RC2 algorithm. Upon startup of the program you type in your master password, and (assuming it is correct) you are presented with a nice list of your user accounts, sites, passwords and a comment field. Gpasman is available at:
Strip is a palm pilot program for storing passwords securely and can also be used to generate passwords. It is GNU licensed and available from:

File / Filesystem security


A solid house needs a solid foundation, otherwise it will collapse. In Linux's case this is the ext2 (EXTended, version 2) filesystem. Pretty much your everyday standard UNIX-like filesystem. It supports file permissions (read, write, execute, sticky bit, suid, sgid and so on), file ownership (user, group, other), and other standard things. Some of its drawbacks are: no journaling, and especially no Access Control Lists, which are rumored to be in the upcoming ext3. On the plus side, Linux has excellent software RAID, supporting Level 0, 1 and 5 very well (RAID isn't security related, but it certainly is safety/stability related). There is an excellent HOWTO on filesystems in regards to Linux at:
The basic utilities to interact with files are: "ls", "chown", "chmod" and "find". Others include ln (for creating links), stat (tells you about a file) and many more. As for creating and maintaining the filesystems themselves, we have "fdisk" (good old fdisk), "mkfs" (MaKe FileSystem, which formats partitions), and "fsck" (FileSystem ChecK, which will usually fix problems). So, what is it we are trying to prevent hostile people (usually users, and/or network daemons fed bad info) from doing? A Linux system can be easily compromised if access to certain files is gained; for example, the ability to read a non-shadowed password file results in the ability to run the encrypted passwords against crack, easily finding weak passwords. This is a common goal of attackers coming in over the network (poorly written CGI scripts seem to be a favorite). Alternatively, if an attacker can write to the password file, he or she can seriously disrupt the system, or (arguably worse) get whatever level of access they want. These conditions are commonly caused by "tmp races", where a setuid program (one running with root privileges) writes temporary files, typically in /tmp, but far too many do not check for the existence of the file first, allowing an attacker to place a link in /tmp pointing to the password file; when the setuid program is run, kaboom, /etc/passwd is wiped out or possibly appended to. There are many more attacks similar to this, so how can we prevent them?
Simple: set the file system up correctly when you install. The two common directories that users have write access to are /tmp and /home, splitting these off onto separate partitions also prevents users from filling up any critical filesystem (a full / is very bad indeed). A full /home could result in users not being able to log in at all (this is why root’s home directory is in /root). Putting /tmp and /home on separate partitions is pretty much mandatory if users have shell access to the server, putting /etc, /var, and /usr on separate partitions is also a very good idea.
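One way to sketch such a layout is an /etc/fstab along these lines (the device names are purely illustrative; the nosuid and nodev options further limit what users can do on the partitions they can write to):

```
# /etc/fstab (fragment)
/dev/hda5   /home   ext2   defaults,nosuid,nodev   1 2
/dev/hda6   /tmp    ext2   defaults,nosuid,nodev   1 2
/dev/hda7   /var    ext2   defaults                1 2
```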
The primary tools for getting information about files and filesystems are all relatively simple and easy to use. "df" shows disk usage, and "df -i" shows inode usage (inodes contain information about files, such as their location on the disk drive, and you can run out of them before you run out of disk space if you have many small files). That situation results in error messages of "disk full" when in fact "df" will show there is free space ("df -i" however would show the inodes are all used). This is similar to file allocation entries in Windows: vfat actually stores names in 8.3 format, using multiple entries for long filenames, with a max of 512 entries per directory, so too many long filenames and the directory is 'full'. The "du" utility will tell you the size of directories, which is very useful for finding out where all that disk space has disappeared to; usage is "du" (lists everything in the current directory and below it that you have access to) or "du /dir/name", optionally with "-s" for a summary, which is useful for directories like /usr/src/linux. To gain information about specific files the primary tool is ls (similar to DOS's "dir" command): "ls" shows just file/dir names, "ls -l" shows information such as file perms, size and so on, and "ls -la" also shows directories and files beginning with ".", typical for config files and directories (.bash_history, .bash_logout, etc.). The "ls" utility has a few dozen options for sorting based on size, date, in reverse order and so forth; "man ls" for all the details. For details on a particular file (creation date, last access, inode, etc.) there is "stat", which simply tells all the vital statistics on a given file(s), and is very useful to see if a file is in use, etc.
To manipulate files and folders we have the typical utilities like cp, mv and rm (CoPy, MoVe and ReMove), as well as tools for manipulating security information. chown is responsible for CHanging OWNership of files: the user and group a given file belongs to (the "other" category is always everyone else, similar to Novell or NT's 'everyone' group). chmod (CHange MODe) changes a file's attributes, the basic ones being read, write and execute; there are also setuid and setgid (run the program with the user or group ID of the file's owner, oftentimes root), the sticky bit and so forth. With proper use of assigning users to groups, chmod and chown you can emulate ACLs to a degree, but it is far less flexible than Sun/AIX/NT's file permissions (although ACL support is rumored for ext3). Please be especially careful with setuid/setgid, as any problem in such a program/script can be magnified greatly.
I thought I should also mention "find". It finds files (essentially it will list files), and can also filter based on permissions/ownership (and on size, date, and several other criteria). A couple of quick examples for hunting down setuid/setgid programs:
to find all setuid programs:
find / -perm +4000
to find all setgid programs:
find / -perm +2000
The biggest part of file security however is user permissions. In Linux a file is 'owned' by 3 separate entities, a User, a Group, and Other (which is everyone else). You can set which user owns a file and which group it belongs to by:
chown user:group object
where object is a file, directory, etc. If you want to deny execute access to all of the 3 owners simply: 
chmod x="" object
where x is a|u|g|o (All/User/Group/Other); this forces the permissions to be equal to "" (null, nothing, no access at all), and object is a file, directory, etc. This is by far the quickest and most effective way to rip out permissions and totally deny access to users/etc. (="" forces it to clear them). Remember that root can ALWAYS change file perms and view/edit/run the file; Linux does not yet provide safety to users from root (which many would argue is a good thing). Also whoever owns the directory the object is in (be they a user/group/other with appropriate perms on the parent directory) can also potentially edit permissions (and since root owns / it can make changes that can traverse down the filesystem to any location).
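Here is a quick sketch of the mechanics above on a scratch file (it assumes GNU coreutils for the octal display via "stat -c"):

```shell
# Create a scratch file, then strip all access from group and other.
touch /tmp/perm-demo
chmod u=rw,g=,o= /tmp/perm-demo
# Show the resulting octal mode: owner rw, nothing for anyone else.
stat -c '%a' /tmp/perm-demo   # → 600
```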

Secure file deletion

One thing many of us forget is that when you delete a file, it isn't actually gone. Even if you overwrite it, reformat the drive, or otherwise attempt to destroy it, chances are it can be recovered, and typically data recovery services only cost a few thousand dollars, so it might well be worth an attacker's time and money to have it done. The trick is to scramble the data by repeatedly flipping the magnetic bits (a.k.a. the 1's and 0's) so that when finished no traces of the original data remain (i.e. no magnetic bits still charged the way they originally were).
Two programs (both called wipe) have been written to do just this.
wipe (
wipe securely deletes data by overwriting the file multiple times with various bit patterns, i.e. all 0's, then all 1's, then alternating 1's and 0's and so forth. You can use wipe on files or on devices; if used on files, remember that filenames, creation dates, permissions and so forth will not be deleted, so make sure you wipe the device if you absolutely must remove all traces of something. You can get wipe from:
wipe (
This one also securely deletes data by overwriting files multiple times; this one does not, however, support wiping devices. You can get it at:
PGP also supports secure file deletion.
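In a pinch the same idea can be crudely sketched with dd, overwriting a file's blocks in place before unlinking it. This is a single zero-pass only; real wipe tools do many passes with varied patterns, and this does nothing about old copies of the data the filesystem may have relocated:

```shell
f=/tmp/secret.dat
echo "s3cr3t data" > "$f"
# Overwrite the file's contents in place with zeros, keeping its length
# (conv=notrunc stops dd from truncating the file first).
dd if=/dev/zero of="$f" bs=1 count=$(wc -c < "$f") conv=notrunc 2>/dev/null
rm -f "$f"
```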

Access control lists (ACL’s)

One major missing component in Linux is a filesystem with Access Control Lists (ACLs) instead of the standard User, Group, Other with its dozen or so permissions. ACLs enable you to control access to the filesystem in a much more fine-grained fashion; for example, on a file you may want to grant the user "bob" full access, "mary" read access, the sales group "change", the accounting group "read", and nothing for everyone else. Under existing Linux permissions you could not do this, hence the need for ACLs. ACL support is currently available as kernel patches.
The Linux trustees (ACL) project

System Files

The password file is arguably the most critical system file in Linux (and most other UNIX's). It contains the mappings of username, user ID and the primary group ID that person belongs to. It may also contain the actual password however it is more likely (and much more secure) to use shadow passwords to keep the passwords in /etc/shadow. This file MUST be world readable, otherwise commands even as simple as ls will fail to work properly. The GECOS field can contain such data as the real name, phone number and the like for the user, the home directory is the default directory the user gets placed in if they log in interactively, and the login shell must be an interactive shell (such as bash, or a menu program) and listed in /etc/shells for the user to log in. The format is:
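The format line itself is missing from this copy; the standard seven colon-separated fields of /etc/passwd are:

```
username:password:UID:GID:GECOS:home_directory:shell
```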
Passwords are stored using a one-way hash (the default hash used is crypt; newer distributions support MD5, which is significantly stronger). Passwords cannot be recovered from the encrypted result; however, you can attempt to find a password by brute force, hashing strings of text and comparing them. Once you find a match, you know you have the password. This in itself is generally not a problem; the problem occurs when users choose easily guessed passwords. The most recent survey results showed that 25% of passwords could be broken in under an hour, and what is even worse is that 4% of users choose their own name as the password. Blank fields in the password file are left empty, so you would see "::"; the first four fields (name, password, uid and gid) must always be present.
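The dictionary attack described above is simple to sketch; here md5sum stands in for the crypt()/MD5 password hash, and the username and guesses are made up for illustration:

```shell
# the stored value is a one-way hash; it cannot be reversed...
stored=$(printf '%s' 'sarah' | md5sum | awk '{print $1}')   # user "sarah" chose her own name

# ...but an attacker can hash candidate passwords and compare
for guess in root password letmein sarah; do
    if [ "$(printf '%s' "$guess" | md5sum | awk '{print $1}')" = "$stored" ]; then
        echo "cracked: $guess"
    fi
done
```

Salting and slower hashes raise the cost per guess, but nothing saves a password that is in the attacker's dictionary.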
The shadow file holds the username and password pairs, as well as account information such as the expiry date and any other special fields. This file should be protected at all costs and only the root user should have read permission to it.
The groups file contains all the group membership information, and optional items such as the group password (typically stored in gshadow on current systems). This file too must be world readable for the system to behave correctly. The format is:
group_name:password:GID:comma,separated,member,list
A group may contain no members (i.e. it is unused), a single member or multiple members, and the password is optional (and typically not used).
Similar to the shadow password file, this file contains the groups, passwords and members. Again, this file should be protected at all costs and only the root user should have read permission to it.
This file (/etc/login.defs) allows you to define some useful default values for various programs such as useradd and password expiry. It tends to vary slightly across distributions and even versions, but typically is well commented and tends to contain sane default values.
The shells file contains a list of valid shells, if a user’s default shell is not listed here they may not log in interactively. See the section on telnetd for more information.
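The check the login programs perform can be imitated from the shell; the scratch file below stands in for /etc/shells (an assumption for illustration):

```shell
# scratch stand-in for /etc/shells
shells=$(mktemp)
printf '/bin/bash\n/bin/sh\n' > "$shells"

usershell=/bin/bash      # e.g. the last field of the user's passwd entry
# grep -qx: match the whole line exactly, quietly
if grep -qx "$usershell" "$shells"; then
    echo "interactive login permitted"
else
    echo "interactive login refused"
fi
```

Pointing a user's shell at something not listed (e.g. /bin/false) is a quick way to deny interactive logins without removing the account.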
This file contains a list of ttys that root can log in from. Console ttys are usually /dev/tty1 through /dev/tty6. Serial ports (if you want to log in as root over a modem, say) are typically /dev/ttyS0 and up. If you want to allow root to login via the network (a very bad idea; use sudo) then add /dev/ttyp1 and up (if 30 users are logged in and root tries to login, root will be coming from /dev/ttyp31). Generally you should only allow root to login from /dev/tty1, and it is advisable to disable the root account altogether; before doing this, however, please install sudo or another program that allows root access to commands.
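A sketch of locking this down; SECURETTY points at a scratch file here so the example is safe to rehearse, and as root you would point it at /etc/securetty instead:

```shell
SECURETTY=$(mktemp)       # use /etc/securetty on the real system (as root)
# keep a backup, then allow root logins on the first console only
cp "$SECURETTY" "$SECURETTY.bak"
echo "tty1" > "$SECURETTY"
# login(1) will now refuse root on any tty not listed in the file
```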

Encrypting services / data

Encrypting data files and email

Several encryption programs are available to encrypt your data, some at the file level (PGP, GnuPG, etc.) and some at the drive level (the Cryptographic File System, for example). These systems are very appropriate for the storage of secure data, and to some degree for the transmission of secure data. However, both ends will require the correct software and compatible versions, and an exchange of public keys will somehow have to take place, which is unfortunately an onerous task for most people. In addition, you have no easy way of trusting someone's public key unless you receive it directly from them (such as at a key-signing party), or unless it is signed by someone else you trust (but how do you get the trusted signer's key securely?). Systems for drive encryption such as CFS (Cryptographic FileSystem) are typically easy to implement, and only require the user to provide a password or key of some form to access their files. There is a really good article on choosing key sizes that raises some issues you probably hadn't considered; I would recommend reading it.
PGP (Pretty Good Privacy)
The granddaddy of public encryption, this is by far one of the most popular programs, as it is supported under Unix, Windows and Macintosh. Unfortunately it has now been commercialized, which has resulted in a loss of quality for users. I personally believe any software used to encrypt or otherwise secure data MUST be open source, or how else can you be sure it is secure? PGP is now sold by Network Associates and I cannot in good faith recommend it as a security mechanism for the secure storage and transmission of files. PGP is available for download from:
GnuPG (Gnu Privacy Guard)
The alternative to PGP, GnuPG (GPG) is a direct replacement that is fully open source and GNU licensed (as if the name didn't give it away). This tool is available at:, as source code or precompiled binaries for Windows, and RPMs. There is also an article here on GnuPG that I wrote.
pgp4pine is a PGP shell for pine that allows easy usage of PGP/GnuPG from within pine. Signing / encrypting and so on is made easier. You can get it from:
HardEncrypt is a one-time pad generator and a set of tools to use it. In theory one-time pads are an almost unbreakable form of encryption. Using a set of random, cryptographically secure data, you completely mangle your private data; to decrypt it you need the one-time pad. This form of encryption is ideal for communication of sensitive data, with one catch: you must first transfer the one-time pad to the other party. You can download HardEncrypt from:
secret-share allows you to break a file up into as many pieces as you want, all of which are needed to successfully rebuild the file. All but one of the pieces are random data that is encrypted, obfuscating it somewhat. You can download it from:

Encrypting your harddrive

CFS (Cryptographic Filesystem)
CFS allows you to keep data on your harddrive in an encrypted format, and is significantly easier to use than a file encryption program (such as PGP) if you have many files and directories you want to keep away from curious people. The official distribution site is at:, RPMs are available at:, and Debian binaries are at:
TCFS is a kernel-level data encryption utility, similar to CFS. It has several advantages over CFS, however: as it is implemented at the kernel level it is significantly faster, and it is tightly integrated with NFS, meaning you can serve data securely on a local machine or across the network. It decrypts data on the client machine, so when used over a network the password and the like are never passed over the network. The only catch is that it hasn't yet been ported to the 2.2 kernel series. You can get TCFS from:
PPDD allows you to create a disk partition that is encrypted; it can be either an actual partition or a loopback device (which resides in a file, but is mounted as a filesystem). It uses the Blowfish algorithm, which is relatively fast and proven. You can get PPDD from:
Encrypted Home Directory
Encrypted Home Directory works similarly to CFS, but it is aimed at providing a single encrypted directory. Essentially it creates a file of size X in /crypt/ with your UID, and mounts it on a loopback device so you can access it. The trick is that the data is encrypted and decrypted on the fly as you access it (just like CFS). The only catch is that the software is still in development, so back up any important data. You can download it from:
BestCrypt is a commercial product, with source code, available for Windows and Linux. You can get it here:

Network encryption

There are a number of sources for information on SSL. Generally where SSL is applicable it is in the individual resource (i.e. WWW). For a good FAQ go here: OpenSSL is an open source implementation of the SSL libraries that is available from:

Sources of random data

In order for encryption to be effective, especially on a large scale such as IPSec across many hosts, good sources of random, cryptographically secure data are needed. In Linux we have /dev/random and /dev/urandom which are good but not always great. Part of the equation is measuring 'random' events, manipulating that data and then making it available (via (u)random). These random events include: keyboard and mouse input, interrupts, drive reads, etc. 
However, as many servers have no keyboard/mouse, and new "blackbox" products often contain no harddrive, sources of random data become harder to find. Some sources, like network activity, are not entirely appropriate because attackers may be able to measure it as well (granted this would be a very exotic attack, but enough to worry people nonetheless). There are several sources of random data that can be used (or at least they appear random); radioactive decay and radio frequency manipulation are two popular ones. Unfortunately the idea of sticking a radioactive device in a computer makes most people nervous, and using manipulated radio frequencies is prone to error and the possibility of outside manipulation. For most of us this isn't a real concern; however, for IPSec gateway servers handling many connections it can be a problem. One potential solution is the Pentium III, which has a built-in random number generator that measures thermal variance in the CPU. I think as we progress, solutions like this will become more common.
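Even on a headless box the kernel pool is still usable; a sketch of pulling key material from /dev/urandom and hex-encoding it:

```shell
# read 16 bytes from the kernel's entropy pool and hex-encode them
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$key"    # 32 hex characters of keying material
```

/dev/random blocks until enough entropy has been gathered; /dev/urandom never blocks, at some theoretical cost in quality, which is why it is the usual choice for non-critical keys.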

Hiding data on your harddrive

One issue many people forget is that the very act of encrypting data can draw attention. For example, if a corporate administrator scanned workstations for files ending in .pgp, and you were the only one with files like that....
StegHide hides data in files such as sound and picture files where not all of the bits in a byte are used. Since the data is encrypted it will appear random, and proving that the data is actually there is difficult. The only downside is that to store a one-megabyte file you need a sound/picture file of several megabytes, which can be cumbersome (but hard drives and high-speed access are becoming cheap, so it's a moot point). You can get StegHide at:
Steganographic File System actually hides data on your harddrive, making it difficult to prove that it even exists. This can be very useful as the attacker first has to find the data, let alone break the strong encryption used to protect it. You can get StegFS from:
OutGuess hides data in image files, meaning you can send files in a way that won't attract too much attention (and can't really be proved either). You can get it from:

Network security


Network security is a pretty broad topic, so I've broken it down into a couple of sections. In this area I cover the bottom 4 or so layers (transport, network, datalink, physical) of the 7 layer OSI protocol stack, the top 3 (application, presentation, session) are in the network server section and so forth (roughly speaking). I also cover some of the basic network configuration files, since an explanation is always useful.

PPP security

PPP provides TCP-IP, IPX/SPX, and NetBEUI connections over serial lines (which can, of course, be attached to modems). It is the primary method most people use to connect to the Internet (virtually all dial-up accounts are PPP). A PPP connection essentially consists of two computing devices (a computer, a Palm Pilot, a terminal server, etc.) connected over a serial link (usually via modems). Both ends invoke PPP, authentication is handled (by one of several methods), and the link is brought up. PPP has no real support for encryption, so if you require a secure link you must invest in some form of VPN software. 
Most systems invoke PPP in a rather kludgy way: you "log in" to the equipment (terminal server, etc.) and then PPP is invoked as your login shell. This of course means your username and password are sent in clear text over the line, and you must have an account on that piece of equipment; in this case PPP does not handle the authentication at all. A somewhat safer method is to use PAP (Password Authentication Protocol). With PAP the authentication is handled internally by PPP, so you do not require a "real" account on the server. The username and password are still sent in clear text, but at least the system is somewhat more secure due to the lack of "real" user accounts.
The third (and best) method of authentication is to use CHAP (Challenge Handshake Authentication Protocol). With CHAP the server sends a challenge, and the client responds with a hash of the challenge combined with the shared secret, so the password itself never crosses the link. Thus your username and password are relatively safe from snooping; however, actual data transfers are sent normally. One caveat with CHAP: Microsoft's implementation uses DES instead of MD5, making it slightly 'broken' when connecting with a Linux client. There are patches available to fix this, however. PPP ships with almost every Linux distribution as a core part of the OS, and the Linux PPP-HOWTO is available at:
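The challenge-response idea behind CHAP can be sketched in a few lines; md5sum stands in for the CHAP hash, and the secret value is made up for illustration:

```shell
secret="shared-secret"    # known to both ends, never transmitted
# the server sends a random challenge in the clear
challenge=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')

# client answers with hash(challenge + secret)
response=$(printf '%s%s' "$challenge" "$secret" | md5sum | awk '{print $1}')

# server computes the same value from its own copy of the secret
expected=$(printf '%s%s' "$challenge" "$secret" | md5sum | awk '{print $1}')
[ "$response" = "$expected" ] && echo "authenticated"
```

An eavesdropper sees only the challenge and the hash; without the secret they cannot produce a valid response to the next (different) challenge.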

TCP-IP security

TCP-IP was created in a time and place where security wasn't a strong concern. Initially the 'Internet' (then called ARPANET) consisted of very few hosts, all academic sites, big corporations or government in nature. Everyone knew everyone else, and getting on the Internet was a pretty big deal. The TCP-IP suite of protocols is remarkably robust (it hasn't failed horribly yet), but unfortunately it has no real provisions for security (i.e. authentication, verification, encryption and so on). Spoofing packets, intercepting packets, reading data payloads, and so on is remarkably easy in today's Internet. The most common attacks are denial of service attacks, since they are the easiest to execute and the hardest to defeat, followed by packet sniffing, port scanning, and other related activities.
Hostnames don't always point at the right IP addresses, and IP addresses don't always reverse-lookup to the right hostname. Do not use hostname-based authentication if possible. Because DNS cache poisoning is relatively easy, relying on an IP address for authentication reduces the problem to one of spoofing, which is somewhat more secure but by no means truly secure. There are no mechanisms in widespread use to verify who sent data and who is receiving it, except by use of session- or IP-level encryption (IPSec/IPv6 and other VPN technologies are starting to gain momentum however).
You can start by denying inbound data that claims to originate from your network(s), as this data is obviously spoofed. And to prevent your users, or people who have broken into your network, from launching spoofed attacks, you should block all outbound data that is not from your IP addresses. This is relatively simple and easy to manage, but the vast majority of networks do not do it (I spent about a year pestering my ISP before they started). If everyone on the Internet had egress filters (that is, restricted outbound traffic to that which is from their internal IP addresses), spoofing attacks would be impossible, and tracing attackers back to the source would be far easier. You should also block the reserved networks (127.*, 10.*, etc.); I have noticed many attacks from the Internet with packets labeled as from those IP ranges. If you use network address translation (like IPMASQ) and do not have it properly firewalled, you could be easily attacked or used to relay an attack to a third party.
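A hedged sketch of such filters with ipchains, assuming eth0 is the external interface and 10.0.0.0/8 is your internal network (substitute your own addresses):

```shell
# ingress: drop packets arriving from outside that claim internal
# or reserved source addresses (they are spoofed by definition)
ipchains -A input -i eth0 -s 10.0.0.0/8 -j DENY
ipchains -A input -i eth0 -s 127.0.0.0/8 -j DENY

# egress: only let traffic out if it carries one of our addresses
ipchains -A output -i eth0 -s ! 10.0.0.0/8 -j DENY
```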
If you must communicate securely with people, consider using VPN technology. The only available technology that has wide acceptance and is slated to become the standard (in IPv6) is IPSec; it is an open standard supported by many vendors, and most major vendors have working implementations native to their OS (although some are crippled to comply with US export laws). Please see Appendix B or the Encrypting Services and Data section for more details.
IPSec is covered in its own section. I think it is the future of VPN technology (it is the most commonly supported standard as of today, and an integral part of IPv6).
IPv6 provides no security per se, but it does have built-in hooks for future security enhancements, IPSec support and so on. If used on a network it would of course make life more difficult for an attacker, as IPv6 use is not yet widespread. If you want to learn more visit: Linux currently supports IPv6 pretty much completely (one of the few OSes that does).
HUNT Project
The HUNT Project is a set of tools for manipulating TCP-IP connections (typically on an Ethernet LAN); it can reset connections, spy on them and do otherwise "naughty" things. It also includes a variety of ARP-based attacks and other mischievous sources of fun. You can get HUNT at:

Basic config files and utilities

inetd.conf is responsible for starting services, typically ones that do not need to run continuously, or that are session based (such as telnet or ftpd). This is because the overhead of running a service constantly (like telnet) would be higher than the occasional start-up cost (firing up in.telnetd) when a user wants to use it. For services like DNS that handle many quick connections, the overhead of starting the service every few seconds would be higher than running it constantly. Also, for services such as DNS and email, time is critical; a few seconds' delay starting an ftp session won't hurt much. The man page for inetd.conf covers the basics ("man inetd.conf"). The service itself is called inetd and is run at boot time, so you can easily stop/start/reload it by manipulating the inetd process. Whenever you make changes to inetd.conf you must restart inetd to make the changes effective; killall -1 inetd will restart it properly. Lines in inetd.conf can be commented out with a # as usual (this is a very simple and effective way of disabling services like rexec). It is advisable to disable as many services in inetd.conf as possible; typically the only ones in use will be ftp, pop and imap. Telnet and the r services should be replaced with SSH, and services like systat/netstat and finger give away far too much information. Access to programs started by inetd can be easily controlled with TCP_WRAPPERS.
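Disabling a service is just commenting out its line and signalling inetd; the sketch below rehearses this on a scratch copy (on the real system you would edit /etc/inetd.conf itself):

```shell
conf=$(mktemp)              # stand-in for /etc/inetd.conf
cat > "$conf" <<'EOF'
ftp    stream tcp nowait root /usr/sbin/tcpd in.ftpd
telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
EOF

# comment out the telnet entry
sed -i 's/^telnet/#telnet/' "$conf"
grep '^#telnet' "$conf"     # confirm it is disabled

# on the real file you would then reload the daemon:
# killall -1 inetd
```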
The services file is a list of port numbers, the protocol and the corresponding name. The format is:
service-name port/protocol aliases # optional comment
for example:
time 37/udp timserver
rlp 39/udp resource # resource location
name 42/udp nameserver
whois 43/tcp nicname # usually to sri-nic
domain 53/tcp
domain 53/udp
This file is used for example when you run 'netstat -a', and of course not used when you run 'netstat -an'
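Resolving a port back to its name, as netstat -a does, is a straight lookup against this format; a sketch using a scratch copy of the file (the entries mirror the example above):

```shell
services=$(mktemp)          # stand-in for /etc/services
cat > "$services" <<'EOF'
time    37/udp  timserver
domain  53/tcp
domain  53/udp
EOF

port=53 proto=tcp
# field 2 is "port/protocol"; print the matching service name
awk -v p="$port/$proto" '$2 == p {print $1}' "$services"
```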
Using TCP_WRAPPERS makes securing your servers against outside intrusion a lot simpler and less painful than you would expect. TCP_WRAPPERS is controlled from two files, /etc/hosts.allow and /etc/hosts.deny:
hosts.allow is checked first, and the rules are checked from first to last. If it finds a rule that explicitly allows you in (i.e., a rule allowing your host, domain, subnet mask, etc.) it lets you connect to the service. If it fails to find any rules that pertain to you in hosts.allow, it then checks hosts.deny for a rule denying you entry. Again it checks the rules in hosts.deny from first to last, and the first rule it finds that denies you access (i.e., a rule disallowing your host, domain, subnet mask, etc.) means it doesn't let you in. If it fails to find a rule denying you entry, it then by default lets you in. If you are paranoid like me, the last rule (or the only rule, if you are going with a default policy of non-optimistic security) should be:
in hosts.deny:
ALL: ALL
which means all services, all locations; any service not explicitly allowed is then blocked (remember the default is to allow). You might also want to default deny access to, say, telnet and leave ftp wide open to the world. To do this you would have:
in hosts.allow:
in.telnetd: 10.0.0. # allow access from my internal network of 10.0.0.*
in.ftpd: ALL # allow access from anywhere in the world
in hosts.deny:
in.telnetd: ALL # deny access to telnetd from anywhere
or if you wish to be really safe:
ALL: ALL # deny access to everything from everywhere
This may affect services such as ssh and nfs, so be careful! 
You may wish to simply list all the services you are using separately:
If you leave on a service that you shouldn't have in inetd.conf, and DO NOT have a default deny policy, you could be up the creek. It is safer (and a bit more work, but in the long run less work than rebuilding the server) to have default deny rules for firewalling and TCP_WRAPPERS; thus if you leave something on by accident, by default there will be no access to it. If you install something that users need access to and you forget to put allow rules in, they will quickly complain that they can't get access and you will be able to rectify the problem quickly. Erring on the side of caution and accidentally denying something is a lot safer than leaving it open.
The man pages for TCP_WRAPPERS are very good and can be viewed with:
man hosts.allow
man hosts_allow
and/or (they are the same man page):
man hosts.deny
man hosts_deny
One minor caveat with TCP_WRAPPERS that recently popped up on Bugtraq, TCP_WRAPPERS interprets lines in hosts.allow and hosts.deny in the following manner:
1) strip off all \'s (line continuations), making all the lines complete (also note the max length of a line is about 2k; in some cases it is better to use multiple lines).
2) strip out lines starting with #'s, i.e. all commented-out lines. Thus:
# this is a test
# in.ftpd: \
in.telnetd:
this means the "in.telnetd:" line would be ignored as well, since the trailing \ joins it onto the commented-out line above it.

What is running and who is it talking to?

You can't start securing services until you know what is running. For this task ps and netstat are invaluable; ps will tell you what is currently running (httpd, inetd, etc.) and netstat will tell you the status of ports (at this point we're interested in ports that are open and listening, that is, waiting for connections). We can take a look at the various config files that control network services.
The program ps shows us process status (information available in the /proc virtual filesystem). The most commonly used options are "ps -xau", which shows pretty much all the information you'd ever want to know. Please note: these options vary across UNIX systems; Solaris, SCO, etc. all behave differently (which is incredibly annoying). The following is typical output from a machine (using "ps -xau").
bin 320 0.0 0.6 760 380 ? S Feb 12 0:00 portmap
daemon 377 0.0 0.6 784 404 ? S Feb 12 0:00 /usr/sbin/atd
named 2865 0.0 2.1 2120 1368 ? S 01:14 0:01 /usr/sbin/named -u named -g named -t /home/named
nobody 346 0.0 18.6 12728 11796 ? S Feb 12 3:12 squid
nobody 379 0.0 0.8 1012 544 ? S Feb 12 0:00 (dnsserver)
nobody 380 0.0 0.8 1012 540 ? S Feb 12 0:00 (dnsserver)
nobody 383 0.0 0.6 916 416 ? S Feb 12 0:00 (dnsserver)
nobody 385 0.0 0.8 1192 568 ? S Feb 12 0:00 /usr/bin/ftpget -S 1030
nobody 392 0.0 0.3 716 240 ? S Feb 12 0:00 (unlinkd)
nobody 1553 0.0 1.8 1932 1200 ? S Feb 14 0:00 httpd
nobody 1703 0.0 1.8 1932 1200 ? S Feb 14 0:00 httpd
root 1 0.0 0.6 776 404 ? S Feb 12 0:04 init [3]
root 2 0.0 0.0 0 0 ? SW Feb 12 0:00 (kflushd)
root 3 0.0 0.0 0 0 ? SW Feb 12 0:00 (kswapd)
root 4 0.0 0.0 0 0 ? SW Feb 12 0:00 (md_thread)
root 64 0.0 0.5 736 348 ? S Feb 12 0:00 kerneld
root 357 0.0 0.6 800 432 ? S Feb 12 0:05 syslogd
root 366 0.0 1.0 1056 684 ? S Feb 12 0:01 klogd
root 393 0.0 0.7 852 472 ? S Feb 12 0:00 crond
root 427 0.0 0.9 1272 592 ? S Feb 12 0:19 /usr/sbin/sshd
root 438 0.0 1.0 1184 672 ? S Feb 12 0:00 rpc.mountd
root 447 0.0 1.0 1180 644 ? S Feb 12 0:00 rpc.nfsd
root 458 0.0 1.0 1072 680 ? S Feb 12 0:00 /usr/sbin/dhcpd
root 489 0.0 1.7 1884 1096 ? S Feb 12 0:00 httpd
root 503 0.0 0.4 724 296 2 S Feb 12 0:00 /sbin/mingetty tty2
root 505 0.0 0.3 720 228 ? S Feb 12 0:02 update (bdflush)
root 541 0.0 0.4 724 296 1 S Feb 12 0:00 /sbin/mingetty tty1
root 1372 0.0 0.6 772 396 ? S Feb 13 0:00 inetd
root 1473 0.0 1.5 1492 1000 ? S Feb 13 0:00 sendmail: accepting connections on port 25
root 2862 0.0 0.0 188 44 ? S 01:14 0:00 /usr/sbin/holelogd.named /home/named/dev/log
root 3090 0.0 1.9 1864 1232 ? S 12:16 0:02 /usr/sbin/sshd
root 3103 0.0 1.1 1448 728 p1 S 12:16 0:00 su -
root 3104 0.0 1.3 1268 864 p1 S 12:16 0:00 -bash
root 3136 0.0 1.9 1836 1212 ? S 12:21 0:04 /usr/sbin/sshd
The interesting ones are: portmap, named, Squid (and its dnsserver, unlinkd and ftpget child processes), httpd, syslogd, sshd, rpc.mountd, rpc.nfsd, dhcpd, inetd, and sendmail (this server appears to be providing gateway services, email and NFS file sharing). The easiest way to learn how to read ps output is to go over the ps man page and learn what the various fields are (most are self-explanatory, such as %CPU, while some like SIZE are a bit obscure: SIZE is the number of 4k memory 'pages' a program is using). To figure out what the running programs are, a safe bet is 'man <command name>', which almost always gives you the manual page pertaining to that service (such as httpd). You will notice that services like telnet, ftpd, identd and several others do not show up even though they are on. This is because they are run from inetd, the 'superserver'. To find these services, look at /etc/inetd.conf or your "netstat -vat" output.
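Pulling the inetd-run services out of the config file takes one filter: keep only the uncommented entries. A sketch against a scratch copy of inetd.conf:

```shell
conf=$(mktemp)              # stand-in for /etc/inetd.conf
cat > "$conf" <<'EOF'
# telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
ftp     stream tcp nowait root /usr/sbin/tcpd in.ftpd
pop-3   stream tcp nowait root /usr/sbin/tcpd ipop3d
EOF

# first field of every uncommented, non-blank line is a live service
awk '!/^#/ && NF {print $1}' "$conf"
```

Here the output lists ftp and pop-3; telnet, being commented out, will never be started.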
netstat tells us pretty much anything network-related that you can imagine. It is especially good at listing active connections and sockets. Using netstat we can find which ports on which interfaces are active. The following output is from a typical server using "netstat -vat".
Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address Foreign Address State 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
tcp 0 0* LISTEN 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
udp 0 0* 
raw 0 0* 
raw 0 0*
Numeric output is in my opinion easier to read (once you memorize /etc/services). The interesting fields for us are the first field (the type of service), the fourth field (the IP address of the interface and the port), the foreign address (anything other than a wildcard means someone is actively talking to it), and the port state. The first line is a remote client talking to the web server on this machine (port 80). We then see the www server listening on all interfaces (0.0.0.0), port 80, followed by the DNS server running on all 3 interfaces, a samba server (139), a mail server (25), an NFS server (2049) and so on. You will notice that the ftp server (21) is listed even though it is run out of inetd and not currently in use (i.e. no one is actively ftping in). This makes netstat especially useful for finding out what is active on a machine, making an inventory of active and inactive network-related software on the server much easier.
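This inventory is most useful when compared over time, so new listeners stand out. A sketch of diffing two saved port snapshots; the snapshot contents here are made up for illustration (generate real ones with something like netstat -an | grep LISTEN | sort):

```shell
baseline=$(mktemp) today=$(mktemp)
printf 'tcp 0.0.0.0:25 LISTEN\ntcp 0.0.0.0:80 LISTEN\n' > "$baseline"
printf 'tcp 0.0.0.0:25 LISTEN\ntcp 0.0.0.0:80 LISTEN\ntcp 0.0.0.0:6667 LISTEN\n' > "$today"

# any new line is a service that was not there before
diff "$baseline" "$today" || echo "port list changed -- investigate"
```

Run from cron against a known-good baseline, this catches things like the surprise IRC daemon on port 6667 above.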
lsof is a handy program similar in idea to ps, except that it prints out what files (including network sockets) are open. Unfortunately the average lsof run puts out a lot of information, so you will need to use grep or pipe it through less ("lsof | less") to make it easier to read.
squid 9726 root 4u inet 78774 TCP localhost:2074->localhost:2073 (ESTABLISHED)
squid 9726 root 5u inet 78777 TCP localhost:2076->localhost:2075 (ESTABLISHED)
squid 9726 root 6u inet 78780 TCP localhost:2078->localhost:2077 (ESTABLISHED)
squid 9726 root 7w CHR 1,3 6205 /dev/null
squid 9726 root 14u inet 78789 TCP host1:3128 (LISTEN)
squid 9726 root 15u inet 78790 UDP host1:3130 
squid 9726 root 16u inet 78791 UDP host1:3130
squid 9726 root 12u inet 167524 TCP host1:3128->host2:3630 (ESTABLISHED)
squid 9726 root 17u inet 167528 TCP host1:3424-> (SYN_SENT)
This example shows that we have Squid running, listening on ports 3128 and 3130; the last two lines show an open connection from an internal host to the Squid server and the resulting outbound connection Squid has made to fulfill the request. host1 is the Squid server and host2 is the client machine making the request. This is an invaluable tool for getting a precise image of what is going on network-wise with your server. lsof ships with some distributions. Please note that versions of lsof compiled for kernel version 2.0.x will not work with kernel 2.2.x and vice versa, as there were too many changes. The primary site for lsof is at:

Routing security

There are a variety of routing software packages available for Linux. Most of them support the newer routing protocols, which have a much higher degree of security than older protocols such as RIP.
routed is one of the standard routing packages available for Linux. It supports RIP (about the oldest routing protocol still in service), and that's it. RIP is very simple: routers simply broadcast their routing tables to neighboring routers, resulting (in theory) in a complete routing table that contains entries for every destination on the Internet. This method is fundamentally insecure, and very inefficient outside of small secure networks (in which case it probably is not needed). Securing it is really not possible; you can firewall ports 520 (RIP) and 521 (RIPng), however this can result in routes you want not getting through, and attackers can still spoof routes. Running this service is a very bad idea.
gated is a more advanced piece of routing software than routed. It supports RIP versions 1 and 2, DCN HELLO, OSPF version 2, EGP version 2, and BGP versions 2 through 4. Currently the most popular routing protocol seems to be BGP (Border Gateway Protocol), with OSPF gaining popularity (OSPF has built-in security, is very efficient, and quite a bit more complicated).
MRT (Multi-threaded Routing Toolkit) is a routing daemon and test toolkit that can handle IPv4 and IPv6. You can get it at:
zebra is much more featured than gated, and sports a nice Cisco-style command line interface. It runs as a daemon and is multithreaded for performance; each protocol (RIP, OSPF, etc.) has its own configuration, and you can run multiple protocols simultaneously (although this could lead to confusion/problems). There is a master configuration port, and a port for each protocol:
zebrasrv                2600/tcp                # zebra service
zebra           2601/tcp                # zebra vty
ripd            2602/tcp                # RIPd vty
ripngd          2603/tcp                # RIPngd vty
ospfd           2604/tcp                # OSPFd vty
bgpd            2605/tcp                # BGPd vty
ospf6d          2606/tcp                # OSPF6d vty
I would advise firewalling these ports. Access is controlled by a login password, and access to command functions requires another password (using the same syntax as Cisco, “enable”). You can download zebra from:

Network servers

Authentication - NIS/NIS+, Kerberos, Radius
Certificate - OpenCA, pyCA
DHCP - ISC DHCP, Moreton Bay DHCP Server
File / print - NFS, Samba, LPD
News - Usenet news, INN
proxy - Socks, Squid, DeleGate
Shell - Telnet, SSH, SSL-Telnet, etc
Time - NTP
User information - Finger, Identd
WWW based email
X windows

Network Based Authentication


There are a variety of methods to share authentication data between systems. This is an increasingly common task as networks become more distributed, access points more numerous, and users scatter away from the traditional corporate LAN.


NIS stands for Network Information Service (it was formerly known as "yellow pages", or YP). Essentially NIS and NIS+ provide a means to distribute password files, group files, and other configuration files across many machines, providing account and password synchronization (among other services). NIS+ is essentially NIS with several enhancements (mostly security related); otherwise they are very similar.
To use NIS you set up a master NIS server that will contain the records and allow them to be changed (adding users, etc.); this server distributes the records to slave NIS machines that hold a read-only copy of the records (but they can be promoted to master and set read/write if something bad happens). Clients of the NIS network essentially request portions of the information and copy it directly into their configuration files (such as /etc/passwd), making it accessible locally. Using NIS you can provide several thousand workstations and servers with identical sets of usernames, user information, passwords and the like, significantly reducing administration nightmares.
However this is also part of the problem: in sharing this information you make it accessible to attackers. NIS+ attempts to resolve this issue, but it is an utter nightmare to set up. NIS+ uses secure RPC, which can make use of single DES encryption (which I personally feel is too weak to even bother with). I would not recommend relying on the single DES encryption of secure RPC to secure your NIS data.
An alternative strategy would be to use some sort of VPN (like FreeS/WAN, which seems to solve a lot of problems) and encrypt the data before it gets onto the network. There is an NIS / NIS+ HOWTO at:, and O’Reilly has an excellent book on the subject, "Managing NFS and NIS". NIS / NIS+ runs over RPC, which uses port 111, both tcp and udp. This should definitely be blocked at your network border, but doing so will not totally protect NIS / NIS+. Because NIS and NIS+ are RPC-based services they tend to use higher port numbers (i.e. above 1024) in a somewhat random fashion, making firewalling them rather difficult. The best solution is to place your NIS server(s) on an internal network that is blocked completely from talking to the Internet, inbound and outbound. There is also an excellent paper on securing NIS available from:
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 111
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 111
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 111
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 111
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 111
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 111


Kerberos is a modern network authentication system based on the idea of handing a user a ticket once they have authenticated to the Kerberos server (similar to NT’s use of tokens). Kerberos is available from: The Kerberos FAQ is available at: Kerberos is appropriate for large installations as it scales better and is more secure than NIS / NIS+. Kerberizing programs such as telnet, imap and pop can be achieved with some effort; Windows clients with Kerberos support are harder to find, however.


Radius is a commonly used protocol to authenticate dial-in users, and other types of network access.
Livingston Radius
Ascend RADIUSd SQL patch
Cistron RADIUS server

Certificate authority software for Linux


Certificates are becoming increasingly popular, and if you have a high enough demand, running your own certificate server can save a lot of time and money. There are also projects where it would be nice to have a certificate authority but using a commercial one may be too expensive; in these cases the open source alternatives can be quite good.

Certificate authority software

OpenCA is a project to provide a comprehensive set of tools for an "out of the box" certificate authority capable of handling X.509 certificates. It is available from:
pyCA is a collection of software scripts written in Python to set up a certificate authority. You can download it from:

Network services - DHCP


DHCPD is something all network admins should use. It allows you to serve information to clients regarding their network settings, typically meaning that the only client setup needed for networking is leaving the defaults and turning the machine on. It also allows you to reconfigure client machines easily (say, move them to a new address range, or a new set of DNS servers). In the long run (and even the short run) DHCP will save you enormous amounts of work, money and stress. I run it at home with only 8 client machines and have found life to be much easier. DHCPD is maintained by the ISC and is at:
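As a sketch of what serving those settings looks like, a minimal dhcpd.conf subnet declaration might resemble the following (the 10.0.0.0/24 network, addresses and option values are all invented examples, not a recommendation):

```
# Hypothetical /etc/dhcpd.conf fragment; all addresses are placeholders
subnet 10.0.0.0 netmask 255.255.255.0 {
        range 10.0.0.100 10.0.0.200;            # pool handed out to clients
        option routers 10.0.0.1;                # default gateway
        option domain-name-servers 10.0.0.2;    # DNS server(s)
        default-lease-time 86400;               # one day
}
```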
I also highly recommend you run DHCPD version 2.x (3.x is in testing); it has a lot of new features, and is easier to set up and work with. The absolute latest versions of it tend to be a bit neurotic, however; be warned, it is beta software. Definitely firewall DHCPD off from the Internet. DHCP traffic should only be on local segments, possibly forwarded to a DHCP server on another segment, and the only DHCP traffic you would see coming over the Internet would be an attack/DoS (an attacker might reserve all your IPs, thus leaving your real clients high and dry). If you are forwarding DHCP traffic over the Internet, DON'T. This is a really bad idea for a variety of reasons (primarily performance / reliability, but security as well).
I recommend the DHCPD server be only a DHCP server, locked up somewhere (if you rely on DHCP for your network and the DHCP server fails, your network is in serious trouble), and allowed to do its job quietly. If you need to span subnets (i.e., you have multiple ethernet segments, only one of which has a DHCP server physically connected to it) use a DHCP relay (NT has one built in, the DHCP software for Linux has this capability, etc.). There are also several known problems with NT and DHCP: NT RAS has a rather nasty habit of sucking up IP addresses like crazy (I have seen an NT server grab 64 and keep them indefinitely), because it is trying to reserve IPs for the clients that will be dialing in. This may not seem like a real problem, but it can (and has) led to resource starvation (specifically, the pool of IP addresses can be exhausted). Either turn NT's RAS off or put it on its own subnet; the MAC address it sends to the DHCP server is very strange (it spells out RAS in the first few bytes) and is not easy to map out.
DHCPD should definitely be firewalled from external hosts, as there is no reason an external host should be querying your DHCP server for IPs. Making it available to the outside world could allow an attacker to starve the DHCP server of addresses (assuming you use a dynamic pool of addresses, your internal network could be out of luck) or to learn about the structure of your internal network. DHCP runs on port 67, udp, because the amounts of data involved are small and a fast response is critical.
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 67
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 67
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 67
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 67
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 67
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 67

DHCP servers

Chroot'ing DHCPD
DHCPD consists of 2 main executables:
- dhcpd - the DHCP server itself
- dhcrelay - a DHCP relay (to relay requests to a central DHCP server, since DHCP is based on broadcasts, which typically don't (and shouldn't) span routers/etc.)
DHCPD requires 2 libraries:
- /lib/
- /lib/
A config file:
- /etc/dhcpd.conf - configuration info, location of boot files, etc.
And a few other misc. files:
- /etc/dhcpd.leases - a list of active leases
- a startup file; you can modify the one it comes with or roll your own
The simplest way to set up dhcpd chroot'ed is to install dhcpd (the latest one, preferably) and move/edit the necessary files. A good idea is to create a directory (such as /chroot/dhcpd/), preferably on a separate filesystem from /, /usr, etc. (symlinks...), and then create a file structure under it for dhcpd. The following is an example; simply replace /chroot/dhcpd/ with your choice. You must of course execute these steps as root for it to work.
# Install dhcpd so we have the appropriate files
rpm -i dhcpd-2.0b1pl0-1.i386.rpm
# Create the directory structure
cd /chroot/dhcpd/ # or wherever
mkdir ./etc
mkdir ./usr
mkdir ./usr/sbin
mkdir ./var
mkdir ./var/dhcpd
mkdir ./lib
# Start populating the files
cp /usr/sbin/dhcpd ./usr/sbin/dhcpd
cp /etc/dhcpd.conf ./etc/dhcpd.conf
cp /etc/rc.d/init.d/dhcpd ./etc/dhcpd.init
cp /etc/rc.d/init.d/functions ./etc/functions
# Now to get the latest libraries, change as appropriate
cp /lib/ ./lib/ 
cp /lib/ ./lib/
# And create the necessary symbolic links so that they behave
# Remember that dhcpd thinks /chroot/dhcpd/ is /, so use relative links
Then modify or create your startup script.
Once this is done, simply remove the original startup file and create a symlink from where it was to the new one, and dhcpd will behave 'normally' (that is, it will be automatically started at boot time) while in fact being separated from your system. You may also wish to remove the 'original' DHCPD files lying about; however, this is not necessary.
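The startup-script swap described above can be rehearsed safely in a scratch directory before touching the real init scripts; in the sketch below all paths are stand-ins for /etc/rc.d/init.d/dhcpd (the original) and /chroot/dhcpd/etc/dhcpd.init (the chroot'ed copy):

```shell
#!/bin/sh
# Rehearse the startup-script swap in a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/chroot/dhcpd/etc" "$tmp/init.d"
echo '#!/bin/sh' > "$tmp/chroot/dhcpd/etc/dhcpd.init"  # stand-in startup script
rm -f "$tmp/init.d/dhcpd"                              # remove the 'original'
ln -s "$tmp/chroot/dhcpd/etc/dhcpd.init" "$tmp/init.d/dhcpd"
readlink "$tmp/init.d/dhcpd"                           # shows where it points
```

On a real system the same two commands (rm of the original script, ln -s to the chroot'ed copy) are all that is needed.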
If you have done the above properly you should have a /chroot/dhcpd/ (or other dir if you specified something different) that contains everything required to run dhcpd.
And a ps -xau should show something like:
root 6872 0.0 1.7 900 532 p0 S 02:32 0:00 ./usr/sbin/dhcpd -d -q 
root 6873 0.0 0.9 736 288 p0 S 02:32 0:00 tee ./etc/dhcpd.log
Moreton Bay DHCP Server

Network services - DNS


DNS is an extremely important service for IP networks; I would not hesitate to say probably the MOST important network service (without it no one can find anything else). It also requires connections coming in from the outside world, and due to the nature and structure of DNS, the information DNS servers claim to have may not be true. The main provider of DNS server software (BIND, whose named daemon is the de facto standard) is currently looking at adding a form of DNS information authentication (basically using RSA to cryptographically sign the data, proving it is 'true'). If you plan to administer DNS servers I would say that O’Reilly and Associates’ “DNS & BIND” is required reading.

DNS servers

Most distributions are finally shipping bind 8.x, however none (to my knowledge) have shipped it setup for non-root, chroot'ed use by default. Making the switch is easy however:
-u specifies which UID bind will switch to once it is bound to port 53 (I like to use a user called 'named' with no login permissions, similar to 'nobody').
-g specifies which GID bind will switch to once it is bound to port 53 (I like to use a group called 'named', similar to 'nobody').
-t specifies the directory that bind will chroot itself to once started. /home/named is a good bet; in this directory you should place all the libraries and config files bind will require. 
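Putting these three options together, the invocation in your init script ends up looking something like the following (the user, group and directory names are just the examples suggested above; adjust to taste):

```
named -u named -g named -t /home/named
```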
An even easier way of running bind chroot'ed is to download the bind-chroot package, available as a contrib package for most distributions, and install it. Before installation you will need a user and group called named (which is what the bind server changes its UID/GID to); simply use groupadd and useradd to create them. Some packages use holelogd to log bind information to /var/log/messages (as bind would normally do). If this isn’t available you will have to install it by hand, which is a chore. In addition, the default configuration file for bind in these packages is usually set up securely (i.e., you cannot query bind for the version information).
Another aspect of bind is the information it contains about your network(s). When a person queries a DNS server for information, they typically send a small request for one piece of information, for example the IP address for a given hostname. And there are zone transfers, where a DNS server requests all the information for a given domain, grabs it, and can then make it available to others (as in the case of a secondary DNS server). This is potentially very dangerous; it can be as dangerous as, or more dangerous than, shipping a company phone directory to anyone that calls up and asks for it. Bind version 4 didn't really have much security: you could limit transfers to certain servers, but not selectively enough to be truly useful. This has changed in Bind 8; documentation is available at: To make a long story short, in Bind 8 there are global settings and most of these can also be applied on a per-domain basis. You can easily restrict transfers AND queries, log queries, set maximum data sizes, and so on. Remember, when restricting zone transfers you must secure ALL name servers (the master and the secondaries), as you can transfer zones from a secondary just as easily as from a master.
Here is a relatively secure named.conf file (stolen from the bind-chroot package):
options {
	// The following paths are necessary for this chroot
	directory "/var/named";
	dump-file "/var/tmp/named_dump.db";	// _PATH_DUMPFILE
	pid-file "/var/run/";	// _PATH_PIDFILE
	statistics-file "/var/tmp/named.stats";	// _PATH_STATS
	memstatistics-file "/var/tmp/named.memstats";	// _PATH_MEMSTATS
	// End necessary chroot paths
	check-names master warn;	/* default. */
	datasize 20M;
};
zone "localhost" {
	type master;
	file "master/localhost";
	check-names fail;
	allow-update { none; };
	allow-transfer { any; };
};
zone "0.0.127.in-addr.arpa" {
	type master;
	file "master/127.0.0";
	allow-update { none; };
	allow-transfer { any; };
};
// Deny and log queries for our version number except from localhost
zone "bind" chaos {
	type master;
	file "master/bind";
	allow-query { localhost; };
};
zone "." {
	type hint;
	file "";
};
zone "" {
	type master;
	file "zones/";
	allow-transfer {;; };
};
DNS runs on port 53, using both udp and tcp; udp is used for normal domain queries (it's lightweight and fast), tcp is used for zone transfers and queries with large responses. Thus firewalling tcp is relatively safe and will definitely stop any zone transfers, but the occasional large DNS query might not work. It is better to use named.conf to control zone transfers.
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 53
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 53
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 53
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 53
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 53
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 53
The above would block zone transfers and large queries; the following would block normal queries (but not zone transfers, so if locking it down remember to use both sets of rules):
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 53
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 53
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 53
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 53
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 53
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 53
Chroot'ing DNS
Dents is a GPL DNS server, currently in testing stages (release 0.0.3). Dents is being written from the ground up with support for SQL backends and integration with SNMP, and uses CORBA for its internals. All in all it should give Bind a serious run for its money; I plan to test and evaluate it, but until then you’ll just have to try it yourself. Dents is available at:

Email servers


Simple Mail Transfer Protocol (SMTP) is one of the more important services provided on the Internet. Almost all companies now have or rely upon email, and by extension SMTP servers. There are many SMTP server packages available; the oldest and most tested is Sendmail (now commercially supported, etc.), and there are two new contenders, Postfix and Qmail, both of which were written from scratch with security in mind. Firewalling SMTP is straightforward; it runs on port 25, tcp:
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 25
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 25
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 25
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 25
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 25
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 25

Email servers

Sendmail is another one of those services most of us have a love/hate relationship with. We hate to admin it and we'd love to replace it (actually I have started removing Sendmail from machines I admin and replacing it with Postfix). 
Sendmail has earned itself a very bad reputation for security; however, I find it hard to blame software when I find systems running ancient versions of Sendmail. The root of the problem (if you'll pardon a bad pun) is that almost everyone runs Sendmail as root (and something like 70% of Internet email is handled by Sendmail machines, so there are a lot of them), so as soon as a bug is found, finding a system to exploit it on is not all that hard. The last few releases of Sendmail have been quite good: no root hacks, etc., and with the new anti-spam features Sendmail is finally coming of age. More information on Sendmail and source code is available from:
Chroot'ing Sendmail is a good option, but a lot of work, and since it runs as root the effectiveness of this is rather debatable (root can break out of a chroot'ed jail). I personally think switching to Postfix or Qmail is a better expenditure of effort. 
Keeping Sendmail up to date is relatively simple. I would recommend minimally version 8.9.3 (the 8.9 series has more anti-spam features; 8.8.x has most of these features as well, assuming a properly set up configuration file). Most distributions ship 8.8.x, although the newer releases are generally shipping 8.9.x. You can also get the source from:, but compiling Sendmail is not for the faint of heart or those that do not have a chunk of time to devote to it.
Sendmail only needs to be accessible from the outside world if you are actually using it to receive mail from other machines and deliver it locally. If you only want to run Sendmail so that local mail delivery works (i.e. a stand-alone workstation, test server or other) and so mail can easily be sent to other machines, simply firewall off Sendmail, or better, do not run it in daemon mode (where it listens for connections). Sendmail can be run in a queue-flushing mode, where it simply 'wakes up' once in a while and processes local mail, either delivering it locally or sending it off on its way across the 'net. To set Sendmail to run in queue mode:
edit your Sendmail startup script and change the line that has:
sendmail -bd -q1h
to:
sendmail -q1h
Please note: if you use your system to send lots of email you may wish to set the queue flush time lower, perhaps “-q15m” (flush the queue every 15 minutes). Outbound and system-internal mail will now behave just fine, which, unless you run a mail server, is perfect.
Now for all those wonderful anti-spam features in sendmail. Sendmail's configuration files consist of (this applies to Sendmail 8.9.x):
Primary config file, also tells where other config files are.
You can define the location of the other configuration files in it; typically people place them in /etc/ or /etc/mail/ (which makes things a little less cluttered).
Access list database, allows you to reject email from certain sources (IP or domain), and control relaying easily. My access file looks like this:
which means 10.0.0.* (hosts on my internal network) are allowed to use the email server to send email to wherever they want, and all email to or from the listed spam domain is rejected. There are lists online of known spammers, typically 5,000-10,000 entries long; these can seriously impede sendmail performance (as each connection is checked against the list). On the other hand, having your sendmail machine used to send spam is even worse.
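An access file matching that description might look like the following sketch (the spam domain is a made-up placeholder; 10.0.0 matches the 10.0.0.* network). After editing it, rebuild the database with `makemap hash /etc/mail/access < /etc/mail/access` and restart sendmail:

```
# example /etc/mail/access entries; spammer.example is a placeholder domain
10.0.0              RELAY
spammer.example     REJECT
```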
The aliases file allows you to control delivery of mail local to the system, and is useful for backing up incoming users’ email to a separate spool. Most mailing list software uses this file to get mail sent to lists delivered to the programs that actually process them. Remember to run the command “newaliases” after editing this file, and to then restart sendmail.
Domain table listing domains that you handle; useful for virtual hosting.
The configuration file for Majordomo; I would personally recommend SmartList over Majordomo.
A file containing names of hosts for which we receive email; useful if you host more than one domain.
The location of the help file (telnet to port 25 and type in "HELP").
The virtual user table; maps incoming users, i.e. maps an address at one of your domains to a local account or another email address.
Sendmail 8.9.x (and previous versions) does not really support logging of all email very nicely (something required in today's world for legal reasons by many companies). This is one feature being worked on for the release of Sendmail 8.10.x. Until then there are two ways of logging email: the first is somewhat graceful and logs email coming IN to users on a per-user basis. The second method is not graceful and involves a simple raw log of all SMTP transactions into a file; you would have to write some sort of processor (probably in Perl) to make the log useful.
Mail (incoming SMTP connections, to be more precise) is first filtered by the access file; here we can REJECT mail from certain domains/IPs, and RELAY mail from certain hosts (i.e. your internal network of Windows machines). Any local domains you actually host mail for will need to go into the list of local host names. Assuming mail has met the rules and is queued for local delivery, the next file that gets checked is the virtusertable; this is a listing of email addresses mapped to account names or other email addresses, e.g. entries pointing to alias-seifried, to listuser, and to a catch-all mangled-emails account.
The last rule is a catch-all, so mangled email addresses do not get bounced and are instead sent to a mailbox. Then the aliases file is checked; if an entry is found it does what it says, otherwise the mail is delivered to a local user's mailbox. My aliases file entry for seifried is:
alias-seifried: seifried, "/var/backup-spool/seifried"
This way my email gets delivered to my normal mailbox and to a backup mailbox (in case I delete an email I really didn't mean to, or, worst-case scenario, Microsoft Outlook decides to puke someday and hose my mailboxes). This would also be useful for corporations, as you now have a backup of all incoming email on a per-user basis, and can allow users (or not) to access the file containing the backed-up mail.
One caveat: when using a catch-all rule for a domain, you must create an alias for EACH account, and for each mailing list. Otherwise, when sendmail looks through the list and doesn't find a specific entry, it will send the mail to the mailbox specified by the catch-all rule. For this reason alone you might not wish to use a catch-all rule.
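The virtusertable described above uses a simple two-column format; a hypothetical example (example.org and the account names are placeholders, not from the original):

```
# example /etc/mail/virtusertable entries; example.org is a placeholder domain
user@example.org        alias-user
list@example.org        listuser
@example.org            mangled-emails
```

The last entry is the catch-all for the domain, subject to the caveat above.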
The second method is very simple: you simply start sendmail with the -X option and specify a file to log all transactions to. This file will grow very large very quickly; I would NOT recommend using this method to log email unless you absolutely must.
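Starting sendmail with raw transaction logging would look something like the following (the log path is an invented example):

```
sendmail -bd -q1h -X /var/log/smtp-transactions
```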
Dynamic Relay Authorization Control
Dynamic Relay Authorization Control (DRAC) ties into your POP/IMAP server to temporarily grant SMTP relay access to hosts that successfully authenticate and pick up mail (the assumption being that these hosts will send mail, and not abuse the privilege). You can get it from:
Postfix is a mail transfer agent (MTA) aimed at security, speed, and ease of configuration: generally, things Sendmail fails miserably at. I would highly recommend replacing Sendmail with Postfix. The only portion of Postfix that runs as root is a master control program, aptly called “master”; it calls several other programs: one to move mail into the queue (“pickup”), one to manage the queue, wait for incoming connections, handle deferred mail deliveries and so on (“qmgr”), one to actually send and receive the mail (“smtpd”) and so on. Each part of Postfix is very well thought out, and usually does one or two tasks very well. For example, instead of the Sendmail model where queued mail simply gets dumped into /var/spool/mqueue, in Postfix there is a world-accessible directory called “maildrop” which is checked by “pickup”, which feeds the data to “cleanup”, which moves the mail (if it’s properly formatted and so on) to a secure queue directory for actual processing.
The primary configuration files are held in /etc/postfix, and there are several primary configuration files you must have:
master.cf controls the behavior of the various “helper” programs: whether they are chroot’ed, the maximum number of processes they may run, and so forth. It’s probably best to leave the defaults on most mail servers unless you need to do some tuning for high loads or for securing the server (i.e. chroot’ing it).
main.cf is as close as you will get to Sendmail's primary config file (for purpose; the layout is quite different). It is well commented and sets all the major variables, as well as the locations and formats of various files containing information such as virtual user mappings.
Here is a list of variables and file locations you will typically have to set; the /etc/postfix/main.cf file is usually heavily commented. Please note the following example entries are not a complete configuration.
# what is the machine's hostname? (host.example.org is a placeholder)
myhostname = host.example.org
# what is the domain name? (example.org is a placeholder)
mydomain = example.org
# what do I label mail as “from”?
myorigin = $mydomain 
# which interfaces do I run on? All of them usually.
inet_interfaces = all
# a file containing a list of host names and fully qualified domains names I 
# receive mail for, usually they are listed like: 
# mydestination = localhost, $myhostname, etc
# but I much prefer to keep them listed in a file.
mydestination = /etc/postfix/mydestination
# map of incoming usernames. “man 5 virtual”
virtual_maps = hash:/etc/postfix/virtual
# alias mappings (like /etc/aliases in sendmail), “man 5 aliases”
alias_maps = hash:/etc/postfix/aliases
# alias database, you might have different settings. “man 5 aliases”
alias_database = hash:/etc/postfix/aliases
# how to deliver email to users: traditional Mailbox format, or Maildir (here
# Maildir/ in the user's home directory).
home_mailbox = Maildir/
# where to keep mail, usually /var/spool/mail/ but you can easily change it
mail_spool_directory = /var/spool/mail
# what command do we use to deliver email? /usr/bin/procmail is the default but if 
# you want to use scanmail which is the AMaViS anti-virus tie in software simply put:
mailbox_command = /usr/sbin/scanmails
# who do I relay email for, again you can list them, or keep them in a file (one 
# per line).
relay_domains = /etc/postfix/relaydomains
# list of local networks (by default we relay mail for these hosts).
mynetworks =,
# what do we display to people connecting to port 25? By default it displays the
# version number which I do not.
smtpd_banner = $myhostname ESMTP $mail_name
Generally speaking, any files that simply list one item per line (like /etc/postfix/mydestination or /etc/postfix/relaydomains) are stored as flat text files. Files that contain mappings (i.e. aliases, where you have entries like “root: someuser”) should be turned into hashed database files for speed (you can specify the type of file as hash, dbm, etc.).
Like most IBM products, Postfix has a very funky license, but it appears to be mostly open source and free. Postfix is available at: You can get binary Postfix packages from:
Sendmail Pro
Sendmail Pro is a commercial version of Sendmail with support, and is available at: I haven’t been able to get a demo or find anyone using it so I’m not 100% sure as to how close it is to the “original” Sendmail, although the company has told me it uses the same code base.
Qmail (like Postfix) was created as a direct response to perceived flaws in Sendmail. Qmail's license has a no-binary-distribution clause, meaning you must install it from source code. You must also get the author's permission before you make and distribute any changes (not nice, but such is life). Very little code in Qmail runs as root, and it is very modular compared to Sendmail (which is a pretty monolithic piece of code). You can download it from:
Zmailer is a GPL mailer available at: It has crypto hooks and generally looks like it is well built.
DMail is a commercial mail server, and is not open source. You can download a trial version from:
nullmailer sends mail to smart hosts (relays) so that the local machine doesn't have to run any mail server software. It's at:
MasqMail queues mail while offline and then sends it when you connect to your ISP. It can be configured for multiple ISP's with return addresses and so on. You can download it at:

POP servers

POP (Post Office Protocol) is a relatively simple protocol that allows you to retrieve email from a server and delete it. The basic commands are USER and PASS (used to log in), LIST (to list emails and sizes), RETR (to retrieve an email) and DELE (to delete an email).
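A minimal session using those commands looks something like this (C: is the client, S: the server; the username, message number and size are invented for illustration):

```
C: USER alice
S: +OK
C: PASS secret
S: +OK logged in
C: LIST
S: +OK 1 messages (1205 octets)
S: 1 1205
S: .
C: RETR 1
S: +OK 1205 octets
S: ...message text...
S: .
C: DELE 1
S: +OK message 1 deleted
C: QUIT
S: +OK bye
```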
WU IMAPD (contains the default popd for most distros)
POP and IMAP are fundamentally related but very different, so I have split them apart. POP stands for “Post Office Protocol” and simply allows you to list messages, retrieve them, and delete them. There are many POP servers for Linux available; the stock one that ships with most distributions is ok for the majority of users. The main problems with POP are similar to many other protocols: usernames and passwords are transmitted in the clear, making it a very good target for packet sniffing. POP can be SSL’ified; however, not all mail clients support SSL-secured POP. Most POP servers come configured to use TCP_WRAPPERS, which is an excellent method for restricting access; please see the earlier section on TCP_WRAPPERS for more information. POP runs as root (since it must access user mailboxes) and there have been a number of nasty root hacks in various POP servers in the past. POP runs on ports 109 and 110 (109 is basically obsolete though), using the tcp protocol. The Washington University IMAPD server also comes with a pop server and is generally the ‘stock’ pop server that ships with most Linux distributions. You can get it from:
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 110
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 110
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 110
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 110
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 110
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 110
Cyrus is an imap server (it also supports pop and kpop) aimed at ‘closed’ environments, that is to say environments where users will not have any access to the mail server other than via the imap or pop protocols. This allows Cyrus to store the mail in a much more secure manner and allows for easier management of larger installations. Cyrus is not GNU licensed but is relatively “free”, and is available from: There is also a set of add-on tools for Cyrus available from:
IDS (It Doesn’t Suck) POP is a lighter popd replacement aimed at smaller installations. It is GPL and available from:
GNU pop3d
A pop daemon written to be small and fast, GNU licensed. Available from:
Qpopper is freeware produced by Qualcomm (the makers of Eudora). I would not recommend it. The source code is available at: You can get it from:


IMAP is a much more advanced mail protocol, allowing you to retrieve email from the server and manage it on the server (you can create folders to store messages on the server). This protocol is much more useful than POP: multiple email boxes are a bit more graceful, multiple people using one email box is workable, and it is better for travelling users, since you download the headers first (subject, etc.) and can more selectively retrieve email.
WU IMAPD (contains the default imapd for most distros)
IMAP is POP on steroids. It allows you to easily maintain multiple accounts, have multiple people access one account, leave mail on the server, just download the headers, or bodies and no attachments, and so on. IMAP is ideal for anyone on the go or with serious email needs. The default POP and IMAP servers that most distributions ship (bundled together into a single package named imapd oddly enough) fulfill most needs. 
IMAP also starts out as root; imapd typically drops to the privilege of the user accessing it, but it cannot easily be set to run as a non-root user, since it has to open mailboxes (and in IMAP’s case create folders, files, etc. in the user’s home directory), so it cannot drop privileges as soon as one would like. Nor can it easily be chroot'ed (IMAP needs access to /var/spool/mail and to the user’s home directory). The best policy is to keep the software up to date. And if at all possible, firewall pop and imap from the outside world; this works well if no one is on the road and needs to collect their email via the Internet. Washington University (WU) IMAPD is available from:
IMAP runs on port 143 and most IMAPD servers support TCP_WRAPPERS, making it relatively easy to lock down. 
# 10.0.0.0/8 and some.trusted.host below are placeholders for your own networks/hosts
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 143
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 143
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 143