Saturday, November 19, 2011

Linux Administrator's Security Guide - II

ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 143
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 143
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 143
Cyrus is an imap (it also supports pop and kpop) server aimed at ‘closed’ environments. That is to say that the users will not have any access to the mail server other than via the imap or pop protocols. This allows Cyrus to store the mail in a much more secure manner and allows for easier management of larger installations, and I highly recommend it. Cyrus is not GNU licensed but is relatively “free”, and available from: There is also a set of add-on tools for Cyrus available from:
Courier-IMAP is a lightweight IMAP server specifically for use with Maildir style mailboxes (not /var/spool/mail). You can get it from:

Scanning email for viruses

While Linux is not terribly susceptible to viruses, Windows clients are.
AMaViS uses third party scanning software (such as McAfee) to scan incoming email for viruses. You can get AMaViS at: Make sure you get the latest version; previous versions have a root compromise. As of July 19 the latest is:
Using AMaViS with Sendmail is relatively simple: it has a program called “scanmails” that acts as a replacement for procmail (typically the program that handles local delivery of email). When an email comes in, instead of using procmail to deliver it, Sendmail calls scanmails, which decompresses and decodes any attachments/etc. and then uses a virus scanner (of your choice) to scan the attachments. If no virus is found mail delivery goes ahead as usual. If a virus is found, however, an email is sent to the sender informing them that they have sent a virus, and an email is sent to the intended recipient informing them about the person that sent them a virus. The instructions for this are at:
Since Postfix can make use of procmail to do local mail delivery it should work in theory without any trouble. In practice it takes a few minor tweaks to work correctly. To enable it replace the line:
mailbox_command = /usr/bin/procmail
with the line:
mailbox_command = /usr/sbin/scanmails
and restart Postfix. For the local warning to work (a warning is sent to the intended recipient of the message) the hostname of the machine (sundog, mailserver01, etc.) must be listed in the “mydestination” directive, otherwise the warning does not get delivered. You should (and most sites generally do) redirect root’s email to a user account using the aliases file, otherwise warnings will not be delivered to root properly. By default, mail to “virusalert” is also directed to root; you should redirect this mail to a normal user account as well.
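The alias redirections just described can be sketched as follows. This is a hedged illustration, not a drop-in config: the account name "alan" is an example, the file is written to /tmp rather than /etc/aliases, and on a real system you would edit /etc/aliases and run "newaliases" afterwards.

```shell
# Demo copy of the aliases entries described above (real file: /etc/aliases).
ALIASES=/tmp/aliases.demo
cat > "$ALIASES" <<'EOF'
# deliver root's mail (and therefore AMaViS virus warnings) to a real user
root:       alan
virusalert: root
EOF
# show what we wrote
grep -v '^#' "$ALIASES"
```

After editing the real /etc/aliases, remember to run newaliases so the alias database is rebuilt.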

Enhancing E-Mail Security With Procmail

procmail (the default local delivery agent typically) has a wide variety of features that can be used to help "sanitize" email. More information on this is available at:

SSL wrapping POP and IMAP servers

simap stream tcp nowait root /usr/sbin/stunnel imapd -l imapd
RANDFILE = stunnel.rnd
[ req ]
default_bits = 1024
encrypt_key = no
distinguished_name = req_dn
x509_extensions = cert_type
[ req_dn ]
countryName = Country Name (2 letter code)
organizationName = Organization Name (eg, company)
0.commonName = Common Name (FQDN of your server)
[ cert_type ]
nsCertType = server
openssl req -new -x509 -days 365 -config /etc/stunnel.cnf -out /etc/stunnel.pem -keyout /etc/stunnel.pem
openssl x509 -subject -dates -fingerprint -noout -in /etc/stunnel.pem
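Since the generated pem file contains the private key as well as the certificate, it should be readable by root only. A minimal sketch, demonstrated on a dummy file in /tmp rather than the real /etc/stunnel.pem:

```shell
# Protect the stunnel pem file: it holds the private key, so only root
# (or the owner) should be able to read it. Demo path in /tmp.
PEM=/tmp/stunnel-demo.pem
touch "$PEM"
chmod 600 "$PEM"
ls -l "$PEM"
```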

Non-commercial mailing list software


File / print servers


There are many ways to share resources over a LAN. Your main concern will be what the client side is running, since most server operating systems (especially Linux) support almost all types of clients (UNIX, Windows, Mac, Novell, etc.). You will also need to take into account the type of files you are sharing: are they simply data files, work documents, source code, network boot files, or something else entirely?

Network booting

tftp is used by everything from X terminals to Cisco routers to retrieve their initial boot files and configuration data when booting.

Network services - Tftp

tftp (Trivial File Transfer Protocol) is used for devices that require information from a network server, typically at boot time. It is an extremely simple form of ftp, with most of the security and advanced commands stripped off; it basically allows a device to retrieve (and upload) files from a server in a very simple manner. tftp is almost exclusively used for diskless workstations, router configuration data, and any device that boots up and requires information it cannot store permanently. As such it presents a rather large security hole; just imagine if someone were to connect to your tftp server and grab the boot file for your main Cisco router.
The stock tftp can be locked down: it accepts a directory name that it is essentially limited to (very similar to chroot), and TCP_WRAPPERS can be used to limit access to certain hosts only, but if you want access control to files you will need to run utftpd. By default tftp (at least for Red Hat) defaults to giving access only to the /tftpboot directory (which usually doesn't exist, so create it if you need it). It is a very good idea to keep the tftp directory as separate from the system as possible. This is done by specifying the directory or directories you want tftp to have access to after the tftp command in inetd.conf. The following example starts tftp normally and grants it access to the /tftpboot directory and the /kickstart directory.
tftp dgram udp wait root /usr/sbin/tcpd in.tftpd /tftpboot /kickstart
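Since in.tftpd is started via tcpd above, TCP_WRAPPERS can restrict which hosts may reach it at all. A hedged sketch, written to demo files in /tmp (the real files are /etc/hosts.allow and /etc/hosts.deny, and the 10.0.0.0/255.0.0.0 network is an example value):

```shell
# Demo TCP_WRAPPERS entries restricting in.tftpd to one internal network.
cat > /tmp/hosts.allow.demo <<'EOF'
in.tftpd: 10.0.0.0/255.0.0.0
EOF
cat > /tmp/hosts.deny.demo <<'EOF'
in.tftpd: ALL
EOF
cat /tmp/hosts.allow.demo /tmp/hosts.deny.demo
```

hosts.allow is consulted first, so the internal network gets in and everything else falls through to the deny rule.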
Also remember tftp uses UDP, so a 'ps xau' won't necessarily show who is logged in or what they are doing (as opposed to ftp which shows up) unless they are currently downloading a file (since most tftp applications revolve around small files it is unlikely you will catch someone in the act). The best place to monitor tftp is from syslog, but even then tftp doesn't log IP addresses or anything truly useful. The following is some ps output, and some syslog output of an active tftp session.
nobody 744 0.0 0.6 780 412 ? R 14:31 0:00 in.tftpd /tftpboot
Apr 21 14:31:15 hostname tftpd[744]: tftpd: trying to get file: testfile 
Apr 21 14:31:15 hostname tftpd[744]: tftpd: serving file from /tftpboot 
TFTP can be easily restricted using TCP_WRAPPERS and firewalling; tftp runs on port 69, UDP, so simply restrict access to that needed by your various diskless workstations, routers and the like. It is also a good idea to block all tftp traffic at your network borders, as there is no need for a machine to remote boot using tftp across the Internet/etc. Also, tftp runs as the user nobody, but since no authentication is done, and all devices accessing the tftp server are doing so as 'nobody', file level security is pretty well useless. All in all, a very insecure server. TFTP runs on port 69, udp.
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 69
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 69
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 69
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 69
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 69
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 69
utftpd is a secure replacement for the stock tftpd; it provides much finer access control and support for some other interesting features (such as revision control). You can also base access on the client's IP address, meaning your router configurations and diskless workstation configurations can be kept separate and discrete from each other. utftpd is GPL licensed and available at:

UNIX file sharing

NFS is the most universal method of file sharing supported by UNIX in general. Almost every UNIX OS (Linux, *BSD, Sun, etc.) supports NFS. There are also commercial NFS clients and servers for Windows. NFS is ideal for sharing out user home directories and other "real time" filesystems.

Network services - NFS

NFS stands for Network File System and is just that: it is a good way to distribute filesystems, read only and read/write, while maintaining a degree of security and control, assuming your network is enclosed and secure. NFS is primarily meant for use in a high bandwidth environment (i.e., a LAN) where security risks are not high, or the information being shared is not sensitive (i.e., a small trusted LAN behind a firewall exchanging CAD/CAM diagrams, or a large university lab using nfs to mount /usr). If you need a high level of security, such as encrypting data between hosts, NFS is not the best choice. I personally use it across my internal LAN (this machine has 2 interfaces, guess which one is heavily firewalled) to share file systems containing rpm's, this website, etc. Safer alternatives include SAMBA (free) and now IBM is porting AFS to Linux (costly, but AFS is a sweet piece of code).
NFS has a few rudimentary security controls. The first one would be firewalling; using NFS across a large, slow, public network like the Internet just isn't a good idea in any case, so firewall off port 2049, UDP. Since NFS runs as a set of daemons, TCP_WRAPPERS are of no use unless NFS is compiled to support them. The config file for NFS actually has quite a few directives, the bulk of which deal with user id and group id settings (map everyone to nobody, perhaps map all the engineering clients to 'engineer', etc., etc.) but no real mechanisms for authentication (your client can claim to be UID 0, this is why root's id is squashed by default to nobody). NFS read-only exports are pretty safe, you only have to worry about the wrong people getting a look at your info (if it is sensitive) and/or creating a denial of service attack (say you have a directory world readable/etc for sharing kernel source, and some gomer starts sucking down data like crazy...).
Writeable exports are a whole other ball game, and should be used with extreme caution, since the only 'authentication' is based on IP/hostname (both easily spoofable), and UID (you too can run Linux and be UID 0). Bounce a client down with a DoS attack, grab their IP, mount the writeable share and go to town. You say "but they'd have to know the IP and UID"; packet sniffing is not rocket science folks, nor is 'showmount'.
So, how do we go about securing NFS? The first step is to firewall it, especially if the machine is multi-homed, with an interface connected to a publicly accessible network (the Internet, the student lab, etc.). If you plan to run NFS over a publicly accessible network it had better be read only, and you will be far better off with a different product than NFS.
The second and most interesting part is the /etc/exports file. This controls what you allow clients to do, and how they do it.
A sample exports file:
# Allow a workstation to edit web content
/www 10.0.0.11(rw,root_squash)
# Another share to allow a user to edit a web site
/www/company.com 10.0.0.12(rw,root_squash)
# Public ftp directory
/home/ftp *(ro,all_squash)
The structure of the exports file is pretty simple: the directory you wish to export, the client (always use IP's, hostnames can easily be faked), and any options. The client can be a single IP (10.0.0.1), a hostname (host.example.org), a subnet (10.0.0.0/255.255.255.0), or a wildcard (*.example.org). Some of the more interesting (and useful) directives for the exports file are:
secure - the nfs session must originate from a privileged port, i.e. root HAS to be the one trying to mount the dir. This is useful if the server you are exporting to is secured well.
ro - a good one, Read Only, enough said.
noaccess - used to cut off access, i.e. export /home/ but do a noaccess on /home/root
root_squash - squashes root's UID to the anonymous user UID/GID (usually 'nobody'), very useful if you are exporting dirs to servers with admins you do not 100% trust (root can almost always read any file.... HINT)
no_root_squash - useful if you want to go mucking about in exported dirs as root to fix things (like permissions on your www site)
squash_uids and squash_gids - squash certain UID(s) or GID(s) to the anonymous user, in Red Hat a good example would be 500-10000 (by default Red Hat starts adding users and groups at 500), allowing any users with lower UID's (i.e. special accounts) to access special things.
all_squash - a good one, all privileges are revoked basically and everyone is a guest.
anonuid and anongid - specifically set the UID / GID of the anonymous user (you might want something special like 'anonnfs').
The man exports page is actually quite good.
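To tie the squash directives together, here is a hedged sketch of an exports entry using them. The path, addresses and UID/GID values (404 for a dedicated anonymous account) are assumptions for illustration; the real file is /etc/exports, and "exportfs -ra" (run as root) applies changes:

```shell
# Demo /etc/exports entry combining the squash directives described above.
cat > /tmp/exports.demo <<'EOF'
# squash root AND all "normal" users (UIDs 500-10000) to the anonymous
# user, and make the anonymous user a dedicated account instead of nobody
/shared 10.0.0.0/255.255.255.0(rw,root_squash,squash_uids=500-10000,anonuid=404,anongid=404)
EOF
grep -v '^#' /tmp/exports.demo
```

On the real server you would follow the edit with "exportfs -ra" so the new options take effect.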
Beyond this there isn't much you can do to secure NFS apart from ripping it out and putting some other product in (like AFS, Coda, etc). NFS is relatively robust, almost every flavor of UNIX supports it, and it is usually easy to set up, work with and maintain. It's also 'old faithful', it's been around a long time. Just check "Practical Unix and Internet Security"; they also state in bold not to use NFS if security is a real issue.
NFS should be restricted from the outside world, it runs on port 2049, udp, as well as using RPC which runs on port 111, udp/tcp, and makes use of mountd which runs on port 635, udp. Replace the 2049 with 111, and 635 udp and tcp to secure those services (again the best idea is a blanket rule to deny ports 1 to 1024, or better yet a default policy of denial).
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 2049
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 2049
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 2049
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 2049
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 2049
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 2049

rsync is the ideal method for synchronizing large amounts of data that isn't time critical (i.e. for ftp site mirroring). It uses an extremely efficient algorithm to find files that are newer (or gone), and then retrieves them, it also has several nice security features.

Network services - rsync

rsync is an extremely efficient method for mirroring files, be it source code files of a CVS tree, a web site, or even this document. rsync preserves file permissions, links, file times and more. In addition to this, it supports an anonymous mode (which, incidentally, I use for the mirroring of this document) that makes life very easy for all concerned. The rsync program itself can act as the client (run from a command line or script) and as the server (typically run from inetd.conf). The program itself is quite secure: it does not require root privileges to run as a client nor as the server (although it can if you really want it to) and can chroot itself to the root directory of whatever is being mirrored (this however requires root privileges and can be more dangerous than it is worth). You can also map the user id and group id it will access the system as (the default is nobody for most precompiled rsync packages and is probably the best choice). In non-anonymous mode rsync supports usernames and passwords that are encrypted quite strongly using 128 bit MD4. The "man rsyncd.conf" page quite clearly covers setting up rsync as a server and making it relatively safe. The default configuration file is /etc/rsyncd.conf. It has a global section and module sections (basically each shared out directory is a module).
rsyncd.conf example:
motd file = /etc/rsync.motd # specifies a file to be displayed, legal disclaimer, etc.
max connections = 5 # maximum number of connections so you don't get flooded
[pub-ftp]
        comment = public ftp area # simple comment
        path = /home/ftp/pub # path to the directory being exported
        read only = yes # make it read only, great for exported directories
        use chroot = yes # chroot to /home/ftp/pub
        uid = nobody # explicitly set the UID
        gid = nobody # explicitly set the GID
[secret-stuff]
        comment = my secret stuff
        path = /home/user/secret # path to my stuff
        list = no # hide this module when asked for a list
        secrets file = /etc/rsync.users # password file
        auth users = me, bob, santa # list of users I trust to see my secret stuff
        hosts allow = 10.0.0.6, 10.0.0.7 # list of hosts to allow
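The "secrets file" referenced above holds plain "user:password" pairs and must not be world-readable, or rsync will refuse to use it. A hedged sketch using a demo file in /tmp (the real file would be /etc/rsync.users, and the usernames/passwords are examples):

```shell
# Demo rsync secrets file: one user:password pair per line, mode 600.
SECRETS=/tmp/rsync.users.demo
cat > "$SECRETS" <<'EOF'
me:mypassword
bob:hispassword
EOF
chmod 600 "$SECRETS"
# a client would then pull an authenticated module with something like:
#   rsync -av me@server::modulename /local/copy/
ls -l "$SECRETS"
```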
As you can see rsync is quite configurable, and generally quite secure, the exception being the actual file transfers which are not encrypted in any way. If you need security I suggest you use SSH to tunnel a connection, or some VPN solution like FreeS/WAN. Also make sure you are running rsync 2.3.x or higher as a potential root compromise was found in 2.2.x. Rsync is available at: Rsync runs on port 873, tcp.
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 873
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 873
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 873
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 873
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 873
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 873

Printing under Linux

There are a variety of print daemons for Linux but they generally emulate lpd (the original).
lpd is the age-old line printer daemon (from when all you ever printed was text) which allows for the usage and sharing of printers.

Network services - Printing

Print servers for Linux

lpd is the UNIX facility for printing (Line Printer Daemon). It allows you to submit print jobs, run them through filters, manage the print queues, and so on. lpd can accept print jobs locally or over the network, and can access various parts of the system (printers, logging daemons, etc.), making it a potential security hole. Historically lpd has been the source of several nasty root hacks. Although these bugs seem to have been mostly ironed out, there are still many potential denial of service attacks due to its function (something as simple as submitting huge print jobs and running the printer out of paper). Fortunately, lpd is slowly being phased out with the advent of network aware printers; however, there is still a huge amount of printing done via lpd. lpd access is controlled via /etc/hosts.equiv and /etc/hosts.lpd. You should also firewall lpd from the outside world. And if you need to send print jobs across public networks, remember anyone can read them, so a VPN solution is a good idea. lpd runs on port 515 using tcp. The hosts.lpd file should contain a list of hosts, one per line, that are allowed to use the lpd services on the server; you might as well also use ipfwadm/ipchains.
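A minimal sketch of the hosts.lpd format just described, written to a demo file in /tmp (the real file is /etc/hosts.lpd, and the hostnames and address are example values):

```shell
# Demo hosts.lpd: machines allowed to use lpd on this server, one per line.
cat > /tmp/hosts.lpd.demo <<'EOF'
ws1.example.org
ws2.example.org
10.0.0.12
EOF
wc -l < /tmp/hosts.lpd.demo
```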
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 515
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 515
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 515
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 515
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 515
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 515
An alternative to the stock lpd is “LPRng” (LPR Next Generation); it provides new enhancements and also supports a higher level of security. LPRng supports Kerberos and PGP-based authentication, as well as a permissions file, /etc/lpd.perms, which allows you to control access based on user, group, authentication, IP, and so on, allowing for extremely flexible and secure configurations. LPRng has excellent documentation and is available at:
pdq is another LPD replacement, no real emphasis on enhanced security but it does seem to offer some management improvements and performance gains over the stock LPD. You can get pdq from:
Common UNIX Printing System (CUPS), is GPL licensed and version 1.0 just came out. CUPS is available from:

LPR next generation, an alternative to the stock LPR.

Windows file and print sharing

SMB (Server Message Block) is the current Windows file sharing protocol. Samba does an incredible job of providing all the services required to properly share Windows files (such as Primary and Backup Domain Controller services). You can also provide Windows access to printers through Samba, and using smbclient access Windows printers.

Network services - SMB


SAMBA is one of the best things since sliced bread, that is if you want to share files and printers between Windows and *NIX. It is also somewhat misunderstood, and suffers heavily from interaction with various (sometimes broken) Windows clients. SAMBA has a great many kludges that attempt to make it somewhat sane, but can lead to what looks like broken behavior sometimes. SAMBA simply gives access to the filesystem via SMB (Server Message Block), the protocol Windows uses to share files and printers. It verifies the username and password given (if required) and then gives access to the files according to the file permissions and so forth that are set. I'm only going to cover Samba 2.x, Samba 1.x is pretty old and obsolete.
Samba 2.x is controlled via smb.conf, typically in /etc (man smb.conf). In /etc/smb.conf you have 4 main areas of configuration switches: [globals], [printers], [homes], and each [sharename] has its own configuration (be it a printer or drive share). There are a hundred or so switches; the smb.conf man page covers them exhaustively. Some of the important (for security) ones are:
security = xxxx where xxxx is share, server or domain. Share security is per share, with a password that everyone uses to get at it; server means the samba server itself authenticates users, either via /etc/passwd or smbpasswd. If you set it to domain, samba authenticates the user via an NT domain controller, thus integrating nicely into your existing NT network (if you have one).
guest account = xxxx where xxxx is the username of the account you want the guest user to map to. If a share is defined as public then all requests to it are handled as this user.
hosts allow = xxxx where xxxx is a space separated list of hosts / IP blocks allowed to connect to the server.
hosts deny = xxxx where xxxx is a space separated list of hosts / IP blocks not allowed to connect to the server.
interfaces = xxxx where xxxx is a space separated list of IP blocks that samba will bind to
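Pulled together, the switches above might look like the following [global] section. This is a hedged sketch, not a recommended config: the addresses, the password server, and the /tmp demo path are assumptions, and on a real system you would edit /etc/smb.conf and syntax-check it with testparm.

```shell
# Demo [global] section combining the security switches described above.
cat > /tmp/smb.conf.demo <<'EOF'
[global]
   security = server
   password server = 10.0.0.2
   guest account = nobody
   hosts allow = 10.0.0.0/255.255.255.0 127.0.0.1
   hosts deny = ALL
   interfaces = 10.0.0.1/255.255.255.0
EOF
# on a machine with samba installed: testparm /tmp/smb.conf.demo
grep 'hosts' /tmp/smb.conf.demo
```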
SMB uses a variety of ports, mostly relying on ports 137, 138 and 139, udp and tcp.
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 137:139
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 137:139
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 137:139
ipfwadm -I -a accept -P udp -S 10.0.0.0/8 -D 0.0.0.0/0 137:139
ipfwadm -I -a accept -P udp -S some.trusted.host -D 0.0.0.0/0 137:139
ipfwadm -I -a deny -P udp -S 0.0.0.0/0 -D 0.0.0.0/0 137:139
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 137:139
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 137:139
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 137:139
ipchains -A input -p udp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 137:139
ipchains -A input -p udp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 137:139
ipchains -A input -p udp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 137:139
I would also highly recommend installing and using SWAT (Samba Web Administration Tool) as it will cut down on the mistakes/etc. that you are liable to make. Samba and SWAT are available at: and ship with almost every distribution.
SWAT is a very nice administration tool for setting up your smb.conf. The main problem is that it requires you to use the root account and password to log in, and it runs as a separate process out of inetd.conf, so there is no easy way to encrypt it, and as far as I can tell no way to grant other users administrative access to SWAT. Having said that, however, it is a good tool for cutting down on mistakes made while editing smb.conf. You can also run SWAT with the -a switch, meaning no password will be required, and use TCP_WRAPPERS to restrict access to certain workstations (although you’d still be open to IP spoofing). Essentially SWAT was not meant as a secure administrative tool, but it is useful. SWAT comes with samba (usually) and is available at:, a demo of SWAT is online at:


CIFS allows a Linux client to mount a Windows fileshare, modify the file ACLs (under NT) and otherwise access it fully. You can get CIFS for Linux at:

General file sharing

There are also a number of generic file sharing methods that support multiple types of clients and servers.
An advanced network filesystem, not very fun to implement.
An https-based system for sharing files among machines securely. You can get it from:
A high end, commercial file sharing protocol suitable for large installations with high security and performance requirements.

Network file sharing - AFS

A high end, commercial file sharing protocol suitable for large installations with high security and performance requirements. The FAQ is available at: A free AFS client implementation for a variety of unices (including Linux of course) is available from:

Source code sharing

CVS is used to centrally maintain source code in a repository, and to allow people to make modifications, with an emphasis on the ability to roll back changes, get an old "snapshot" and so on. It is very popular for large software projects.

Network services - CVS

CVS allows multiple developers to work together on large source code projects and maintain a large code base in a somewhat sane manner. CVS's internal security mechanisms are rather simple on their own; in fact some would say weak, and I would have to agree. CVS authentication is typically done over the network using pserver: usernames are sent in clear text, and passwords are trivially hashed (providing no real security).
To get around this you have several good options. In a Unix environment probably the simplest method is to use SSH to tunnel connections between the client machines and the server. "Tim TimeWaster" (Tim Hemel, one of the Final Scratch guys) has written an excellent page covering this at: A somewhat more complicated approach (but better in the long run for large installations) is to kerberize the CVS server and clients. 
Typically large networks (especially in university environments) already have an established Kerberos infrastructure. Details on kerberizing CVS are available at: Apart from that I would strongly urge firewalling CVS unless you are using it for some public purpose (such as an open source project across the Internet). 
Another tool for securing CVS that just appeared is “cvsd”, a wrapper for pserver that chroot’s and/or suid’s the pserver to a harmless user. cvsd is available at: in rpm format and a source tarball.
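The SSH tunnelling option mentioned above can be sketched as follows. The host, repository path and module name are hypothetical; the point is simply that setting CVS_RSH to ssh makes CVS run all :ext: connections over SSH, so credentials never cross the wire in clear text.

```shell
# Demo: configure a CVS client to reach the repository over SSH
# instead of the insecure pserver protocol. All names are examples.
export CVS_RSH=ssh
CVSROOT=":ext:user@cvs.example.org:/home/cvsroot"
# a checkout would then look like the command below (written to a demo
# file here rather than executed, since the server is hypothetical)
echo "cvs -d $CVSROOT checkout mymodule" > /tmp/cvs-ssh-demo.cmd
cat /tmp/cvs-ssh-demo.cmd
```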
There are other less obvious concerns you should be aware of; when dealing with source code you should be very careful to ensure no Trojan horses or backdoors are allowed into the code. In an open source project this is relatively simple: review the code people submit, especially if it is a publicly accessible effort, such as the Mozilla project. Another concern is destruction of the source code, so make sure you have backups. CVS uses port 2401, tcp.
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 2401
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 2401
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 2401
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 2401
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 2401
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 2401
Network services - FTP
FTP used to be the most used protocol on the Internet by sheer data traffic until it was surpassed by HTTP a few years ago (yes, there was a WWW-free Internet once upon a time). FTP does one thing, and it does it well: transferring files between systems. The protocol itself is insecure; passwords, data, etc. are transferred in cleartext and can easily be sniffed, however most ftp usage is 'anonymous', so this isn't a huge problem. One of the main problems typically encountered with ftp sites is improper permissions on directories that allow people to use the site to distribute their own data (typically copyrighted material, etc.). As with telnet, you should use an account for ftping that is not used for administrative work, since the password will be flying around the network in clear text.
Problems with ftp in general include:
- Clear text authentication, username and password
- Clear text transmission of all commands
- Password guessing attacks
- Improper server setup and consequent abuse of servers
- Several nasty denial of service attacks still exist in various ftp servers
- Older versions of WU-FTPD and derivatives have root hacks
Securing FTP isn't too bad; between firewalling and TCP_WRAPPERS you can restrict access based on IP address / hostname quite well. In addition most ftp servers run chroot'ed by default for anonymous access, or for an account defined as guest. With some amount of work you can set all users that are ftping in to be chroot'ed to their home directory or wherever appropriate. You can also run ftp servers that encrypt the data (using such things as SSL/etc.), however this means your ftp clients must speak the encryption protocol, and this isn't always practical. Also make very sure you have no publicly accessible directories on your ftp server that are both readable and writeable, otherwise people will exploit it to distribute their own software (typically warez or porn).
An example of firewalling rules:
ipfwadm -I -a accept -P tcp -S 10.0.0.0/8 -D 0.0.0.0/0 21
ipfwadm -I -a accept -P tcp -S some.trusted.host -D 0.0.0.0/0 21
ipfwadm -I -a deny -P tcp -S 0.0.0.0/0 -D 0.0.0.0/0 21
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 21
ipchains -A input -p tcp -j ACCEPT -s some.trusted.host -d 0.0.0.0/0 21
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 21
An example of the same using TCP_WRAPPERS in /etc/hosts.allow:
in.ftpd: 10.0.0.0/255.0.0.0, some.trusted.host
And in /etc/hosts.deny:
in.ftpd: ALL
There are several encrypted alternatives to ftp, as mentioned before: SSLeay FTPD, and other third party utils. Since most ftp accounts are not used as admin accounts (cleartext passwords, you have been warned), and hopefully run chroot'ed, the security risk is minimized. Now that we have hopefully covered all the network based parts of ftp, let's go over securing the user accounts and environment.
FTP servers
There are numerous ftp server software packages available for Linux. The popular ones (WU-FTPD and ProFTPD) have had a number of severe problems, so make sure your version is up to date.
ProFTPD is a GPL licensed ftp server that can run on a variety of UNIX platforms. It supports newer features such as virtual ftp, per directory configuration (using .ftpaccess files similar to Apache’s .htaccess files), support for expired accounts and more. It also supports really useful features such as limiting downloads, and much tighter security controls than WU-FTPD. I highly recommend it over any other freely available FTP server for UNIX.
ProFTPD’s main configuration file is /etc/proftpd.conf, it has a rather Apache-esque configuration style which I like a lot. ProFTPD can be run from inetd (and make use of TCP_WRAPPERS) or it can be run as a stand-alone server. It also supports per directory config files to limit access and so forth. ProFTPD supports virtual ftp as well (although unlike virtual www serving, extra IP addresses are required) and each site can be configured differently (different anonymous access, if any, and more things along those lines). The general proftpd.conf typically has a section covering global settings (inetd or standalone, maximum number of processes to run, who to run as, and so on), followed by a default config, followed by specific site (virtual sites) configuration. On a server doing virtual hosting it is probably a good idea to turn “DefaultServer” off, so any clients ftping in aimlessly are denied instead of being dumped into a default site.
Sample configuration for a ProFTPD server being run from inetd with no anonymous access:
ServerName "ProFTPD Default Installation"
ServerType inetd
DefaultServer on
Port 21
Umask 022
MaxInstances 30
User nobody
Group nobody

<Directory /*>
AllowOverwrite on
</Directory>
Let’s say, like me, that you are paranoid and want to control access to the ftp server by IP addresses, hostnames and domain names (although I would recommend only relying on IP’s). You could accomplish this via firewall rules, but that tends to slow the machine down (especially if you are adding lots of rules, as would be prone to happen). You could use TCP_WRAPPERS, but you wouldn’t be able to selectively limit access to virtual sites or anonymous sites, just the server itself. Or you could do it in the proftpd.conf file using the “<Limit LOGIN>” directive.
The following example will limit access to 10.1.*.*; all other machines will be denied access.

<Limit LOGIN>
Order Allow,Deny
Allow from 10.1.
Deny from all
</Limit>
If you place this within a “<VirtualHost>” or “<Anonymous>” directive it applies only to that virtual site or anonymous setup; if placed in a “<Global>” directive it will apply to all the “<VirtualHost>” and “<Anonymous>” sections; and if placed in the server config (i.e. with the “ServerName” and related items) it will behave like TCP_WRAPPERS would: anyone not from 10.1.*.* immediately gets bumped when they try to connect to port 21, as opposed to simply being denied login if it’s in a “<Global>”, “<VirtualHost>” or “<Anonymous>” section.
If you want to add anonymous access simply append:

<Anonymous ~ftp>
User ftp
Group ftp
RequireValidShell off
UserAlias anonymous ftp
MaxClients 10
DisplayLogin welcome.msg
DisplayFirstChdir .message
<Directory *>
<Limit WRITE>
DenyAll
</Limit>
</Directory>
</Anonymous>
This would assign the “ftp” user’s home directory (assuming a normal setup, “~ftp” would probably be /home/ftp) as the anonymous root directory; ProFTPD would run as the user “ftp” and group “ftp” when people log in anonymously (as opposed to logging in as a normal user), and anonymous logins would be limited to 10. As well, the file /home/ftp/welcome.msg would be displayed when anonymous users ftp in, and any directory with a .message file would have its contents displayed when they changed into it. The “Directory” section covers /home/ftp/*, and then denies write access for all, meaning no one can upload any files. If you wanted to add an incoming directory, simply add the following after those directives:
<Directory incoming>
<Limit WRITE>
AllowAll
</Limit>
<Limit READ>
DenyAll
</Limit>
</Directory>
This would allow people to write files to /home/ftp/incoming/, but not read (i.e. download) them. As you can see ProFTPD is very flexible; this results in ProFTPD requiring more horsepower than WU-FTPD, but it is definitely worth it for the added control. You can get ProFTPD and the documentation from:
proftpd-ldap allows you to do password look ups using an LDAP directory, you can download it from:
I would not recommend the use of WU-FTPD; it has many security problems, and quite a few Linux vendors do not use WU-FTPD on their own ftp servers. I would highly recommend ProFTPD, which is freely available and covered in the previous section.
One of the main security mechanisms in WU-FTPD is the use of chroot. For example, by default all people logging in as anonymous have /home/ftp/ set as their “root” directory. They cannot get out of this to, say, look at the contents of /home/ or /etc/. The same can be applied to groups of users and/or individuals; for example, you could set all users to be chroot'ed to /home/ when they ftp in, or in extreme cases of user privacy (say, on a www server hosting multiple domains) set each user chroot'ed to within their own home directory. This is accomplished through the use of /etc/ftpaccess and /etc/passwd (man ftpaccess has all the info). I will give a few examples of what needs to be done to accomplish this since it can be quite confusing at first. ftpd also checks /etc/ftpusers, and if the user attempting to login is listed in that file (like root should be) it will not let the user login via ftp.
To chroot users as they login into the ftp server is rather simple, but poorly documented. The ftp server checks /etc/ftpaccess for “guestgroup” entries, which are simply "guestgroup some-group-on-the-system", i.e. "guestgroup users". The group name needs to be defined in /etc/group and have members added. You need to edit their passwd file line so that the ftp server knows where to dump them. And since they are now chroot'ed into that directory on the system, they do not have access to /lib, etc., so you must copy certain files into their directory for things like “ls” to work properly (always a nice touch).
Setting up a user (billybob) so that he can ftp in, and ends up chroot'ed in his home directory (because he keeps threatening to take the sysadmin possum hunting). In addition to this billybob can telnet in and change his password, but nothing else because he keeps trying to run ircbots on the system. The system he is on uses shadowed passwords, so that's why there is an 'x' in billybob's password field.
First off billybob needs a properly setup user account in /etc/passwd:
billybob:x:500:500:Billy Bob:/home/billybob/./:/usr/bin/passwd
this means that the ftp server will chroot billybob into /home/billybob/ and chdir him into what is now / (/home/billybob to the rest of us). The ftpaccess man page covers this bit OK, and of course /usr/bin/passwd needs to be listed in /etc/shells.
Secondly, for the ftp server to know that he is being chroot'ed he needs to be a member of a group (badusers, ftppeople, etc) that is defined in /etc/group. And then that group must be listed in /etc/ftpaccess.
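As a sketch of the two pieces involved (the group name “ftppeople” is hypothetical):

```
# /etc/group - define the group and add billybob to it
ftppeople:x:501:billybob

# /etc/ftpaccess - mark members of that group as chroot'ed guests
guestgroup ftppeople
```

With both in place, the ftp server treats any member of “ftppeople” as a guest and chroots them according to their passwd entry.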
Now you need to copy some libraries and binaries into the chroot “jail”, otherwise “billybob” won't be able to do a whole lot once he has ftp'ed in. The files needed are available as packages (usually called “anonftp”); once this is installed the files will be copied to /home/ftp/. You will notice there is an /etc/passwd; this is simply used to map UIDs to usernames. If you want billybob to see his username and not his UID, add a line for him (i.e., copy his line from the real /etc/passwd to this one). The same applies to the group file as well.
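If you are not using such a package, a rough sketch of doing it by hand follows; the jail path is hypothetical, and ldd is used to discover which shared libraries the binary needs (their paths must be preserved inside the jail):

```shell
#!/bin/sh
# Populate a chroot jail with /bin/ls and the shared libraries it needs.
# JAIL is a hypothetical path; point it at the user's chroot directory.
JAIL="${JAIL:-/tmp/ftp-jail-demo}"

mkdir -p "$JAIL/bin"
cp /bin/ls "$JAIL/bin/"

# ldd prints the shared libraries a binary is linked against; copy each
# one into the jail, preserving its directory (e.g. /lib, /lib64).
for lib in $(ldd /bin/ls | awk '{for (i=1;i<=NF;i++) if ($i ~ /^\//) print $i}'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$(dirname "$lib")/"
done
```

Repeat for any other binaries the jailed user should be able to run.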
without "billybob:*:500:500:::" in /home/billybob/etc/passwd:
drwxr-xr-x 2 500 500 1024 Jul 14 20:46 billybob
and with the line added to /home/billybob/etc/passwd:
drwxr-xr-x 2 billybob 500 1024 Jul 14 20:46 billybob
and with a line for billybob's group added to /home/billybob/etc/group:
drwxr-xr-x 2 billybob billybob 1024 Jul 14 20:46 billybob
Billybob can now ftp into the system, upload and download files from /home/billybob to his heart's content, change his password all on his own, and do no damage to the system, nor download the password file or other nasty things.
FTP is also a rather special protocol in that clients connect to port 21 (typically) on the ftp server, and then port 20 of the ftp server connects back to the client; that is the connection over which the actual data is sent. This means that port 20 has to make outgoing connections. Keep this in mind when setting up a firewall, either to protect ftp servers or clients using ftp. As well there is 'passive' ftp, usually used by www browsers/etc., which involves incoming connections to the ftp server on high port numbers (instead of using 20 they agree on something else). If you intend to have a public ftp server, put up a machine that JUST does the ftp serving, and nothing else, preferably outside of your internal LAN (see Practical Unix and Internet Security for discussions of this 'DMZ' concept). You can get WU-FTPD from:
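As a sketch, assuming a hypothetical internal network of 10.0.0.0/8, ipchains rules restricting the ftp control port would look much like the telnet and NNTP examples elsewhere in this guide:

```
ipchains -A input -p tcp -j ACCEPT -s 10.0.0.0/8 -d 0.0.0.0/0 21
ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 0.0.0.0/0 21
```

Remember that active-mode data connections come back from port 20 of the server, and passive mode uses negotiated high ports, so a simple filter like this only covers the control channel.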
NcFTPD is a high volume ftp server, however it is only free for personal or .edu usage. You can get it from:
BSD ftpd
The BSD ftp server (ftpd) has also been ported over to Linux, so if you have the urge to run it you can. Download it at:
Muddleftpd is a small ftp server. You can get it at:
Troll ftpd
Troll ftpd is an extremely small and relatively secure ftp server. It cannot execute external programs, and is quite easy to configure. You can get it at:
BetaFTPD is a single threaded, small ftp server. You can get it at:
Another GPL licensed FTP server, available from:
Also a drop in replacement for your favorite ftpd (probably WU-FTPD), also available as a set of patches for WU-FTPD. This is highly appropriate as most servers have many users that require ftp access. The tarball is available at:, and as RPM packages at
SRP can also be used to encrypt the username/password login portion of your ftp session, or the entire session. You can get SRP at and it is covered in the LASG here.
sftp runs over ssh, which makes for relatively secure ftp sessions. You can get it from:
Linux LDAP servers
The Lightweight Directory Access Protocol seems to be the future of storing user information (passwords, home directories, phone numbers, etc.). Many products (ADS, NDS, etc.) support LDAP interfaces, making it important for Linux to support LDAP, since LDAP support will be required to tie Linux into future enterprise networks.
LDAP servers
OpenLDAP is a completely opensource (note it is not GPL) package that provides an LDAP server, replication server and utilities. You can get it from:
LDAP authentication
The NSS LDAP Module allows you to do user authentication via LDAP. You can get it from:
LDAP tools is a Python program that runs as a cgi and provides a www interface to an LDAP directory. You can get it at:
A KDE based LDAP browsing tool with the ability to edit objects (basically an LDAP admin tool). You can get it at:
A GTK based LDAP client that can modify settings/etc. Available from:
A www based admin tool for LDAP. Available from:
Perl/Java/C SDK's for LDAP
A variety of Software Development Kits for LDAP, available from:

Network services - NNTP


NNTP (network news transfer protocol) is useful for sharing large amounts of information among many servers. It is also useful for holding discussions and forums on topics like cryptography.

NNTP server software

The usenet server INN has had a long and varied history; for a long period there were no official releases and it seemed to be in a state of limbo. However, it is back for good now, it would seem. The server software is responsible for handling a potentially enormous load: if you take a full newsfeed the server must process several hundred articles per second, some several kilobytes in size. It must index these articles, write them to disk, and hand them out to clients that request them. INN itself is relatively secure, since it handles data within a directory and generally doesn't have access outside of that; however, as with any messaging system, if you use it for private/confidential material you must be careful. INN is currently maintained by ISC and is available at:
One of the main security threats with INN is resource starvation on the server. If someone decides to flood your server with bogus articles, or there is a sudden surge of activity, you might be in trouble if capacity is lacking. INN has had several bad security holes in the past, but with today's environment the programmers seem to have chased down and eliminated all of them (none have surfaced recently). It is highly recommended (for more than security reasons alone) that you place the news spool on a separate disk system, let alone partition. You might also wish to use ulimit to restrict the amount of memory available so that it cannot bring the server to its knees.
As for access, you should definitely not allow public access. Any news server that is publicly accessible will be quickly hammered by people using it to read news, send spam and the like. Restrict reading of news to your clients/internal network, and if you are really worried force people to login. Client access to INN is controlled via the nnrp.access file. You can specify IP address(es), domain names and domains (such as *, as well as their access levels (read and post) and the newsgroups they do or don't have access to; you can also specify a username and password. However, because the password is linked to the host/domain it gets somewhat messy.
example of nnrp.access:
*:: -no- : -no- :!*
# denies access from all sites, for all actions (post and read), to all groups.
* Post:::*
# hosts in have full access to all groups
**, !me.*
# hosts in have read access to everything but the me hierarchy
* Post:myname:mypassword:*
# give me access from my AOL account using a username and password
If you are going to run a news server I highly recommend the O'Reilly book "Managing Usenet". Usenet is similar to Sendmail, a total beast to get running smoothly and keep happy.
News should be firewalled, as most servers typically serve an internal group and accept connections from one or two upstream feeds:
ipfwadm -I -a accept -P tcp -S -D 119
ipfwadm -I -a accept -P tcp -S -D 119
ipfwadm -I -a deny -P tcp -S -D 119
ipchains -A input -p tcp -j ACCEPT -s -d 119
ipchains -A input -p tcp -j ACCEPT -s -d 119
ipchains -A input -p tcp -j DENY -s -d 119
Diablo is free software aimed at backbone news transport, that is to say accepting articles from other NNTP servers and feeding them on to other servers, it is not aimed at use by end users for reading or posting. You can get Diablo at:
A commercial NNTP server for various platforms. Available from:
Cyclone is a commercial NNTP server aimed at backbone news transport, that is to say accepting articles from other NNTP servers and feeding them on to other servers, it is not aimed at use by end users for reading or posting. You can get Cyclone at:
Typhoon is a commercial NNTP server aimed at end user news access, that is to say allowing users to post and read articles. You can get Typhoon at:

Proxy software

There are a variety of proxy software packages for Linux. Some are application level (such as SQUID) and others are at the session level (such as SOCKS).
Application proxy server software
SQUID is a powerful and fast object cache server. It proxies FTP and WWW sessions, basically giving it many of the properties of an FTP and a WWW server, but it only reads and writes files within its cache directory (or so we hope), making it relatively safe. Squid would be very hard to use to actually compromise the system, and it runs as a non-root user (typically 'nobody'), so generally it's not much to worry about. Your main worry with Squid should be improper configuration. For example, if Squid is hooked up to your internal network (as is usually the case) and the Internet (again, very common), it could actually be used to reach internal hosts (even if they are using non-routed IP addresses). Hence proper configuration of Squid is very important.
The simplest way to make sure this doesn't happen is to use Squid's internal configuration and only bind it to the internal interface(s), not letting the outside world attempt to use it as a proxy to get at your internal LAN. In addition to this, firewalling it is a good idea. Squid can also be used as an HTTP accelerator (also known as a reverse proxy), perhaps you have an NT WWW Server on the internal network that you want to share with the world, in this case things get a bit harder to configure but it is possible to do relatively securely. Fortunately Squid has very good ACL's (Access Control Lists) built into the squid.conf file, allowing you to lock down access by names, IP’s, networks, time of day, actual day (perhaps you allow unlimited browsing on the weekends for people that actually come in to the office). Remember however that the more complicated an ACL is, the slower Squid will be to respond to requests.
Most network administrators will want to configure Squid so that an internal network can access www sites on the Internet. In this example is the internal network, is the external IP address of the Squid server, and is a www server we want to see.
Squid should be configured so that it only listens for requests on its internal interface; if it were listening on all interfaces I could go to port 3128 and request, or any internal machine for that matter, and view www content on your internal network. You want something like this in your squid.conf file:
This will prevent anyone from using Squid to probe your internal network.
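A minimal sketch of what that binding might look like in squid.conf, using a hypothetical internal interface address (the exact directive depends on your Squid version; older releases use tcp_incoming_address, while later ones accept an address in http_port):

```
# bind Squid only to the internal interface (hypothetical address)
tcp_incoming_address 10.0.0.1
```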
On the opposite side of the coin we have people that use Squid to make internal www servers accessible to the Internet in a controlled manner. For example you may want to have an IIS 4.0 www server you want to put on the Internet, but are afraid to connect it directly. Using Squid you can grant access to it in a very controlled manner. In this example is a random machine on the Internet, is the external IP address of the Squid server, is it’s internal IP address, and is a www server on the internal network running IIS 4.0.
To set Squid up to run as an accelerator simply set the “http_port” to 80 in squid.conf:
http_port 80
And then set the IP addresses differently:
And finally you have to define the machine you are accelerating for:
httpd_accel_port 80
This is covered extensively in the Squid FAQ at: (section 20).
The ACL's work by defining rules, and then applying those rules, for example:
acl internalnet
http_access allow internalnet
http_access deny all
Which defines "internalnet" as being anything with a source of, allowing it access to the http caching port, and denying everything else. Remember that rules are read in the order given, just like ipfwadm, allowing you to get very complex (and make mistakes if you are not careful). Always start with the specific rules followed by more general rules, and remember to put blanket denials after specific allow rules, otherwise things might make it through. It's better to accidentally deny something than to let it through, as you'll find out about denials (usually from annoyed users) faster than about things that get through (when annoyed users notice accounting files from the internal www server appearing on the Internet). The Squid configuration file (squid.conf) is well commented (to the point of overkill) and also has a decent man page.
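Spelled out fully with a hypothetical internal network, the example above would read:

```
# "internalnet" is a hypothetical 10.0.0.0/8 internal network
acl internalnet src 10.0.0.0/8
http_access allow internalnet
http_access deny all
```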
Another useful example is blocking ads, so to block them you can add the following to squid.conf:
acl ads dstdomain
http_access deny ads
The acl declaration is simply a pattern, be it a destination domain name, source domain name, regex and so on; the http_access directive actually specifies what to do with it (deny, allow, etc.). Properly set up this is an extremely powerful tool to restrict access to the WWW. Unfortunately it does have one Achilles heel: it doesn't support user based authentication and control (not that many UNIX based proxy servers do). Remember that, like any set of rules, they are read from top to bottom, so put your specific denials and allow rules first, and then the more general rules. The squid.conf file should be well commented and self explanatory; the Squid FAQ is at:
One important security issue most people overlook with Squid is the log files it keeps. By default Squid may or may not log each request it handles (depending on the config file), from “” to “”. You definitely want to disable the access logs unless you want to keep a close eye on what people view on the Internet (legally this is questionable; check with your lawyers). The directive is “cache_access_log”, which logs ALL accesses and ICP queries (inter-cache communications); to disable it set it to “/dev/null”. The next big one is “cache_store_log”, which is actually semi-useful for generating statistics on how effective your www cache is. It doesn’t log who made the request, simply what the status of objects in the cache is, so in this case you would see the pictures on a pornographic site being repeatedly served; to disable it set it to “none”. The “cache_log” should probably be left on; it contains basic debugging info such as when the server was started and stopped; to disable it set it to “/dev/null”. Another, not very well documented log file is “cache_swap_log”, which keeps a record of what is going on with the cache, and will also show you the URLs people are visiting (but not who/etc.). Setting this to “/dev/null” doesn’t work (in fact Squid pukes out severely) and setting it to “none” simply changes the filename from “log” to “none”. The only way to stop it is to link the file (by default “log” in the root of the www cache directory) to “/dev/null”, and also to link the “log-last-clean” file to “/dev/null” (although in my quick tests it doesn’t appear to store anything, you can’t be sure otherwise). So to summarize:
in squid.conf:
cache_access_log /dev/null
cache_store_log none
cache_log /dev/null
and link:
/var/spool/squid/log to /dev/null
/var/spool/squid/log-last-clean to /dev/null
or whichever directory holds the root of your www cache (the 00 through 0F directories).
Another important issue that gets forgotten is the ICP (Internet Cache Protocol) component of Squid. The only time you will use ICP is if you create arrays or chains of proxy servers. If you’re like me, you have only the one proxy server, and you should definitely disable ICP. This is easily done by setting the ICP port in squid.conf from the default “3130” to “0”. You should also firewall port 3128 (the default Squid port that clients bind to) from the Internet:
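In squid.conf that is simply:

```
# no cache peers, so disable ICP entirely
icp_port 0
```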
ipfwadm -I -a accept -P tcp -S -D 3128
ipfwadm -I -a accept -P tcp -S -D 3128
ipfwadm -I -a deny -P tcp -S -D 3128
or in ipchains:
ipchains -A input -p all -j ACCEPT -s -d 3128
ipchains -A input -p all -j ACCEPT -s -d 3128
ipchains -A input -p all -j DENY -s -d 3128
squidGuard allows you to put in access control lists, filter lists, and redirect requests, easily and efficiently. It is ideal for controlling access to the WWW, and for more specific tasks such as blocking pornographic content (a valid concern for many people). It cannot make decisions based upon content however, it simply looks at the URL’s being processed, so it cannot be used to block active content and so on. squidGuard is available from:
LDAP auth module for SQUID
This allows you to authenticate users via an LDAP server, however passwords/etc are transmitted in the clear, so use some form of VPN to secure it. You can get it from:
Cut the crap
Cut the crap (CTC) is aimed at blocking banner ads and reducing bandwidth usage while surfing. You can get it from:
WWWOFFLE is a rather nice looking proxy for UNIX systems that handles HTTP and FTP. You can get it at:

Circuit level proxy software

SOCKS is a circuit level proxy, typically loaded on firewalls because it has good access controls. Applications must be SOCKS'ified, most popular web browsers, ftp clients and so on have support by default. You can get it from:
Dante is a free implementation of the popular SOCKS server. It is available from:
DeleGate is a multi-protocol proxy with support for HTTP, NNTP, FTP, SSL proxying and more. It has some serious security issues however. You can get it from:
Proxy Gallery
A variety of proxy packages written by for various requirements. These are UDP, TCP, HTTP, hand-off (for playing Ultima Online) and tunneling packages. They are available at:

Shell servers


Telnet was one of the first services on what is now the Internet; it allows you to login to a remote machine interactively, issue commands and see their results. It is still the primary default tool for remote administration in most environments, and has nearly universal support (even NT has a telnet daemon and client). It is also one of the most insecure protocols, susceptible to sniffing, hijacking, etc. If you have clients using telnet to come into the server you should definitely chroot their accounts if possible, as well as restricting telnet to the hosts they use with TCP_WRAPPERS. The best solution for securing telnet is to disable it and use SSL'ified telnet or ssh.
Problems with telnet include:
  • Clear text authentication, username and password. 
  • Clear text of all commands. 
  • Password guessing attacks (minimal, will end up in the log files) 
The best solution is to turn telnet off and use ssh. This is, however, not practical in all situations. If you must use telnet then I strongly suggest firewalling it: have rules to allow certain hosts/networks access to port 23, and then a general rule denying access to port 23, as well as using TCP_WRAPPERS (which is more efficient, because the system only checks each telnet connection and not every packet against the firewall rules). Note, however, that using TCP_WRAPPERS will allow people to establish the fact that you are running telnet: it allows them to connect, evaluates the connection, and then closes it if they are not listed as being allowed in.
An example of firewalling rules:
ipfwadm -I -a accept -P tcp -S -D 23
ipfwadm -I -a accept -P tcp -S -D 23
ipfwadm -I -a deny -P tcp -S -D 23
or in ipchains:
ipchains -A input -p all -j ACCEPT -s -d 23
ipchains -A input -p all -j ACCEPT -s -d 23
ipchains -A input -p all -j DENY -s -d 23
An example of the same using TCP_WRAPPERS, in /etc/hosts.allow:
And in /etc/hosts.deny:
in.telnetd: ALL
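For example, to allow telnet only from a hypothetical internal network of 10.0.0.0, the /etc/hosts.allow entry would be:

```
in.telnetd: 10.0.0.0/255.0.0.0
```

Combined with the "in.telnetd: ALL" in /etc/hosts.deny, everyone outside that network is refused.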
There are several encrypted alternatives to telnet as mentioned before, ssh, SSLeay Telnet, and other third party utils, I personally feel that the 'best' alternative if you are going to go to the bother of ripping telnet out and replacing it with something better is to use ssh.
To secure user accounts with respect to telnet there are several things you can do. Number one would be not letting root login via telnet; this is controlled by /etc/securetty, and by default in most distributions root is restricted to logging on from the console (a good thing). For a user to successfully login their shell has to be valid (this is determined by the list of shells in /etc/shells), so setting up user accounts that are allowed to login is simply a matter of setting their shell to something listed in /etc/shells, and keeping users out is as simple as setting their shell to /bin/false (or something else not listed in /etc/shells). Now for some practical examples of what you can accomplish by setting the user shell to things other than shells.
For an ISP that wishes to allow customers to change their password easily, but not allow them access to the system (my ISP uses Ultrasparcs and refuses to give out user accounts for some reason, I wonder why).
in /etc/shells list:
and set the users shell to /usr/bin/passwd so you end up with something like:
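For a hypothetical user “tester” (with shadowed passwords, hence the 'x'), the resulting /etc/passwd line might look like:

```
tester:x:501:501:Test User:/home/tester:/usr/bin/passwd
```

Remember /usr/bin/passwd must also be listed in /etc/shells for the login to be accepted.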
and voila. The user telnets to the server, is prompted for their username and password, and is then prompted to change their password. If they change it successfully, passwd exits and they are disconnected; if they are unsuccessful, passwd likewise exits and they are disconnected. The following is a transcript of such a setup when a user telnets in:
Connected to localhost.
Escape character is '^]'.

Red Hat Linux release 5.2 (Apollo)
Kernel 2.2.5 on an i586
login: tester
Changing password for tester
(current) UNIX password: 
New UNIX password: 
Retype new UNIX password: 
passwd: all authentication tokens updated successfully
Connection closed by foreign host.
Telnet also displays a banner by default when someone connects. This banner typically contains system information like the name, OS, release and sometimes other detailed information such as the kernel version. Historically this was useful if you had to work on multiple OSes, however in today's hostile Internet it is generally more harmful than useful. Telnetd displays the contents of the file /etc/ (typically it is identical to /etc/issue, which is displayed on terminals and so forth); this file is usually recreated at boot time in most Linux distributions, from the rc.local startup file. Simply edit the rc.local file, either modifying what it puts into /etc/issue and /etc/, or comment out the lines that create those files, then edit the files with some static information.
Typical Linux rc.local contents pertaining to /etc/issue and /etc/
# This will overwrite /etc/issue at every boot. So, make any changes you
# want to make to /etc/issue here or you will lose them when you reboot.
echo "" > /etc/issue
echo "$R" >> /etc/issue
echo "Kernel $(uname -r) on $a $(uname -m)" >> /etc/issue
cp -f /etc/issue /etc/
echo >> /etc/issue
simply comment out the lines or remove the uname commands. If you absolutely must have telnet enabled for user logins make sure you have a disclaimer printed:
This system is for authorized users only. Trespassers will be prosecuted.
or something like the above. Legally you are in a stronger position if someone cracks into the system or otherwise abuses your telnet daemon.

Telnet - SSL

SSLtelnet and MZtelnet
A drop in replacement for telnet, SSLtelnet and MZtelnet provide a much higher level of security than plain old telnet. Although SSLtelnet and MZtelnet are not as flexible as SSH, they are perfectly free (i.e., GNU licensed), which SSH is not (although OpenSSH is *BSD licensed). The server and client packages are available as tarballs at:, and as RPM packages at
Slush is based on OpenSSL and currently supports X.509 certificates, which for a large organization is a much better (and saner) bet than trying to remember several dozen passwords on various servers. Slush is GPL, but not finished yet (it implements most of the required functionality to be useful, but has limits). On the other hand it is based completely on open source software, making the possibility of backdoors/etc. remote. Ultimately it could replace SSH with something much nicer. You can get it from:

SSH - server and client software

SSH is a secure protocol and set of tools to replace some common (insecure) ones. It was designed from the beginning to offer a maximum of security and allows remote access to servers in a secure manner. SSH can be used to secure any network based traffic by setting it up as a 'pipe' (i.e. binding it to a certain port at both ends). This is quite kludgy, but good for such things as using X across the Internet. In addition to this the server component runs on most UNIX systems, and NT, and the client component runs on pretty much anything. Unfortunately SSH is no longer free; however, there is a project to create a free implementation of the SSH protocol. There aren't any problems with SSH per se like there are with telnet; all session traffic is encrypted and the key exchange is done relatively securely (alternatively you can preload keys at either end to prevent them from being transmitted and becoming vulnerable to a man in the middle attack).
SSH typically runs as a daemon, and can easily be locked down by using the sshd_config file. You can also run sshd out of inetd, and thus use TCP_WRAPPERS; by default the ssh rpm's from have the TCP_WRAPPERS check option compiled into them. Thus using TCP_WRAPPERS you can easily restrict access to ssh. Please note that earlier versions of ssh do contain bugs, and several sites have been hacked (typically with man in the middle attacks or problems with buffer overflows in the ssh code), but later versions of ssh address these problems. The main issue with ssh is its license; it is only free for non-commercial use, however you can download source code from a variety of sites. If you want to easily install ssh there is a script called “install-ssh” that will download, compile and install ssh painlessly; it is available from:
The firewalling rules for ssh are pretty much identical to telnet. There is of course TCP_WRAPPERS, the problem with TCP_WRAPPERS being that an attacker connects to the port but doesn't get a daemon; HOWEVER, they know that there is something on that port, whereas with firewalling they don't even get a connection to the port. The following is an example of allowing people to ssh from internal machines, and from a certain C class on the Internet (say the C class your ISP uses for its dial-up pool of modems).
ipfwadm -I -a accept -P tcp -S -D 22
ipfwadm -I -a accept -P tcp -S isp.dial.up.pool/24 -D 22
ipfwadm -I -a deny -P tcp -S -D 22
ipchains -A input -p tcp -j ACCEPT -s -d 22
ipchains -A input -p tcp -j ACCEPT -s isp.dial.up.pool/24 -d 22
ipchains -A input -p tcp -j DENY -s -d 22
Or via TCP_WRAPPERS, hosts.allow:
sshd:, isp.dial.up.pool/
In addition to this, ssh has a wonderful configuration file, /etc/ssh/sshd_config by default in most installations. You can easily restrict who is allowed to login, which hosts are allowed, and what type of authentication they are allowed to use. The default configuration file is relatively safe, but the following is a more secure one with explanations. Please note that all this info can be obtained from “man sshd”, which is one of the few well written man pages out there. The following is a typical sshd_config file:
Port 22
# runs on port 22, the standard
# listens to all interfaces, you might only want to bind a firewall
# internally, etc
HostKey /etc/ssh/ssh_host_key
# where the host key is
RandomSeed /etc/ssh/ssh_random_seed
# where the random seed is
ServerKeyBits 768
# how long the server key is
LoginGraceTime 300
# how long they get to punch their credentials in
KeyRegenerationInterval 3600
# how often the server key gets regenerated 
PermitRootLogin no
# permit root to login? no
IgnoreRhosts yes
# ignore .rhosts files in users dir? yes
StrictModes yes
# ensures users don't do silly things
QuietMode no
# if yes it doesn't log anything. yikes. we want to log logins/etc.
X11Forwarding no
# forward X11? shouldn't have to on a server
FascistLogging no
# maybe we don't want to log too much.
PrintMotd yes
# print the message of the day? always nice
KeepAlive yes
# ensures sessions will be properly disconnected
SyslogFacility DAEMON
# who's doing the logging?
RhostsAuthentication no
# allow rhosts to be used for authentication? the default is no
# but nice to say it anyways
RhostsRSAAuthentication no
# is authentication using rhosts or /etc/hosts.equiv sufficient?
# not in my mind. the default is yes so let's turn it off. 
RSAAuthentication yes
# allow pure RSA authentication? this one is pretty safe
PasswordAuthentication yes
# allow users to use their normal login/passwd? why not.
PermitEmptyPasswords no
# permit accounts with empty password to log in? no
Other useful sshd_config directives include:
AllowGroups - explicitly allow groups (from /etc/group) to log in using ssh
DenyGroups - explicitly disallow groups (from /etc/group) from logging in
AllowUsers - explicitly allow users to log in using ssh
DenyUsers - explicitly block users from logging in
AllowHosts - allow certain hosts; the rest will be denied
DenyHosts - block certain hosts; the rest will be allowed
IdleTimeout time - time in minutes/hours/days/etc, forces a logout by SIGHUP'ing the process.
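As a sketch of how the access-control directives above fit together (the user and group names here are hypothetical, and the exact directive set varies between ssh versions, so verify against "man sshd"), a restrictive sshd_config fragment might read:

```
# only members of these /etc/group entries may log in at all
AllowGroups wheel sshusers
# additionally block a specific troublesome account
DenyUsers baduser
```

Directives like these are evaluated by sshd itself, so they apply no matter which host the connection comes from; combine them with the firewall rules above for defense in depth.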
OpenSSH is a project initiated by the OpenBSD project to get a fully functional version 1 SSH client and server that is freely licensed (i.e. BSD and GPL). They have cleaned up the code, fixed more than a few bugs, and introduced better PAM support than the "official" SSH client and server. This is going to replace traditional SSH completely. It's available at: I personally switched my machines over to it and have had 0 problems.
LSH is a free implementation of the SSH protocol (both client and server). LSH is GNU licensed and is starting to look like the alternative (commercially speaking) to SSH (which is no longer free). You can download it from: (please note it is under development).
I couldn't find a whole lot of information on this, but it appears to be a version of SSH that is independently maintained, with some enhancements (like SecurID support). You can get it from:

SSH - client software: 

Fresh Free FiSSH
Most of us still have to sit in front of windows workstations, and ssh clients for windows are a pain to find. Fresh Free FiSSH is a free ssh client for Windows 95/NT 4.0. Although not yet completed, I would recommend keeping your eye on it if you are like me and have many Windows workstations. The URL is:
Tera Term
Tera Term is a free Telnet client for Windows, and has an add-on DLL to enable ssh support. Tera Term is available from: The add-on DLL for SSH support is available from:
PuTTY is a Windows SSH client, pretty good, completely free, and also small (184k currently). You can download it from:
MindTerm is a free Java ssh client, you can get it at:
The Java Telnet Application
The Java Telnet Application supports ssh, and is free, you can get it at:
Secure CRT
A commercial Telnet / SSH client from Vandyke software. You can download / purchase it at:
Fsh stands for "Fast remote command execution" and is similar in concept to rsh/rcp. It avoids the expense of constantly creating encrypted sessions by bringing up an encrypted tunnel using SSH or LSH and running all the commands over it. You can get it from:
SSH Win32 ports
Ports of SSH to Win32 available at:


SRP is a relative newcomer; however, it has several advantages over some of the older programs. SRP is free and does not use encryption per se to secure the data, so exporting it outside of the US isn't as much of a problem (there is a version that encrypts, is available within the US and Canada, and interoperates with the non-encrypting version of SRP). SRP uses pretty nifty math and is explained in detail here: The disadvantage is that SRP only encrypts the login (username and password), so any data transferred (such as the telnet session or ftp transfers) is vulnerable. You can get SRP from: SRP currently has Telnet and FTP support (for windows as well), although SRP-enabling other protocols is relatively straightforward. A windows client with SRP capabilities is available at:


NSH is a commercial product with all the bells and whistles (and I do mean all). It's got built-in support for encryption, so it's relatively safe to use (I cannot verify this completely, however, as it isn't open source). Ease of use is high: you cd //computername and that 'logs' you into that computer; you can then easily copy/modify/etc. files, run ps and get the process listing for that computer, and so on. NSH also has a Perl module available, making scripting of commands pretty simple, and it is ideal for administering many similar systems (such as workstations). In addition, NSH is available on multiple platforms (Linux, BSD, Irix, etc.) with RPMs available for Red Hat systems. NSH is available from:, and 30 day evaluation versions are easily downloaded. 

R services

R services such as rsh, rcp, rexec and so forth are very insecure. There is simply no other way to state it: DO NOT USE THEM. Their security is based on the hostname/IP address of the machine connecting, which can easily be spoofed or, using techniques such as DNS poisoning, otherwise compromised. By default not all of them are disabled; please disable them immediately. Edit /etc/inetd.conf and look for rexec, rsh and so on, and comment them out, followed by a "killall -1 inetd" to restart inetd.
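The edit described above can be sketched as follows; the demonstration works on a scratch copy of inetd.conf (on a real system you would edit /etc/inetd.conf itself and then run "killall -1 inetd" to make inetd re-read it), and the sample entries are typical but your paths may differ:

```shell
# Build a small sample inetd.conf to work on
cat > /tmp/inetd.conf.demo <<'EOF'
shell  stream  tcp  nowait  root  /usr/sbin/tcpd  in.rshd
login  stream  tcp  nowait  root  /usr/sbin/tcpd  in.rlogind
exec   stream  tcp  nowait  root  /usr/sbin/tcpd  in.rexecd
ftp    stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
EOF
# rsh, rlogin and rexec appear as "shell", "login" and "exec";
# prefix those lines with '#' to disable them, leave ftp alone
sed 's/^shell/#shell/; s/^login/#login/; s/^exec/#exec/' \
    /tmp/inetd.conf.demo > /tmp/inetd.conf.new
grep -c '^#' /tmp/inetd.conf.new   # 3 services commented out
```

After making the same change to the real /etc/inetd.conf, "killall -1 inetd" sends SIGHUP so inetd stops listening on the r-service ports.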
If you absolutely must run these services, use TCP_WRAPPERS to restrict access; it's not much, but it will help. Also make sure you firewall them, as TCP_WRAPPERS will allow an attacker to see that they are running, which might invite a spoofing attack, something TCP_WRAPPERS cannot defend against if done properly. Access to the various r services is controlled via rhosts files; usually each user has their own .rhosts file, which unfortunately is susceptible to packet spoofing. A further problem with r services is that once there is a minor security breach that can be used to modify files, editing a user's (such as root's) .rhosts file makes it very easy to crack a system wide open.
If you need remote administration tools that are easy to use and similar to rsh/etc, I would recommend NSH (Network SHell) or SSH; they both support encryption and a much higher level of security. Alternatively, using VPN software will reduce some of the risk, as you can deny packet spoofers the chance to compromise your system(s) (part of IPSec is authentication of sender and source, which in some cases is almost more important than encrypting the data).

SNA connectivity

SNA is a very common network protocol that hails back to the days of IBM and "heavy iron". 
SNA software

Network services - SNMP


SNMP (Simple Network Management Protocol) was designed to let heterogeneous systems and equipment talk to each other, report data, and allow modifications to their settings over a TCP/IP network. For example, an SNMP-enabled device (such as a Cisco router) can be monitored/configured from an SNMP client, and you can easily write scripts to, say, alert you if denied packets per second rises above 20. Unfortunately, SNMP has no security built into it. SNMPv1 was originally proposed in RFC 1157 (May 1990), and section 8 (Security Considerations) reads thusly: "Security issues are not discussed in this memo." I think that about sums it up. In 1992/1993 SNMPv2 was released and did contain security considerations; however, these security considerations were dropped later on when they were shown to be completely broken. Thus we end up today with SNMPv2 and no security. 
Currently the only way to protect your SNMP devices consists of setting the community name to something hard to guess (though it is very easy to sniff the wire and find the name) and firewalling/filtering SNMP so that only the hosts that need to talk to each other can (which still leaves you open to spoofing). Brute force community name attacks are easy to do and usually effective, and there are several tools specifically for monitoring SNMP transmissions and cracking open an SNMP community; it is a pretty dangerous world out there. 
These risks are slightly mitigated by the usefulness of SNMP; properly supported and implemented, it can make network administration significantly easier. In almost every SNMP implementation the default community name is "public" (this goes for Linux, NT, etc.); you must change this to something obscure (your company name is a bad idea). Once a person has your community name they can conduct an "snmpwalk" and take over your network. SNMP runs over UDP on ports 161 and 162; block these at all entrances to your network (the backbone, the dialup pool, etc.). If a segment of network does not have SNMP-enabled devices or an SNMP console, you should block SNMP to and from that network. This is your only real line of defense with SNMP. 
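To illustrate the community-name exposure and its fix (the hostname and community string below are placeholders, and the directive is that of a ucd-snmp 4.x style snmpd.conf, so verify against your own version's documentation):

```
# An attacker only needs the community name to walk your whole MIB tree:
#   snmpwalk target.example.com public
#
# So in snmpd.conf, replace the default "public" with something obscure,
# and make it read-only if you don't need remote configuration:
rocommunity gT7-partly-cloudy
```

Remember that the community name still crosses the wire in the clear with SNMPv1/v2, so this only raises the bar against guessing, not sniffing.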
Additionally, the use of IPSec (or other VPN software) can greatly reduce the risk from sniffing. The RFCs for SNMPv3, however, go extensively into security (especially RFC 2274, Jan 1998), so there is hope for the future. If you are purchasing new SNMP aware/enabled products, make sure they support SNMPv3, as you then have a chance at real security.
There are no specific problems with ucd-snmpd per se, apart from the general SNMP problems I have covered. The ucd-snmp tools and utilities only support SNMPv1 and SNMPv2, so remember to be careful when using them on or across untrusted networks, as your main line of security (the community name) will be out in the open for anyone to see.
ipfwadm -I -a accept -P udp -S -D 161:162
ipfwadm -I -a accept -P udp -S -D 161:162
ipfwadm -I -a deny -P udp -S -D 161:162
ipchains -A input -p udp -j ACCEPT -s -d 161:162
ipchains -A input -p udp -j ACCEPT -s -d 161:162
ipchains -A input -p udp -j DENY -s -d 161:162

SNMP server software

Network services - NTP

NTP (Network Time Protocol) is rather simple in its mission: it keeps computers' clocks in synchronization. So what? Try comparing log files from 3 separate servers if their clocks are out of synch by a few minutes. NTP works simply: a client connects to a time server, works out the delay between them (on a local LAN it might be only 1-2 ms; across the Internet it might be several hundred ms), and then asks for the time and sets its own clock. Additionally, servers can be 'clustered' to keep themselves synchronized; the chance of 3 or more servers losing track of what time it is (also called 'drift') is relatively low. 
The time signal is typically generated by an atomic clock or GPS signal, measured by a computer; these are 'stratum 1' time servers. Below them are stratum 2 time servers that are typically publicly accessible; a company might maintain its own stratum 3 time servers if it has sufficient need, and so on. 
The data NTP exchanges is of course not terribly sensitive; it's a time signal. However, if an attacker were able to tamper with it, all sorts of nastiness could result: log files might be rendered unusable, accounts might be expired early, cron jobs that back up your server might run in prime time causing delays, etc. Thus it is a good idea to run your own time server(s) and set the maximum adjustment they will make to only a few seconds (they shouldn't drift very much in any case). If you are really paranoid, or have a great number of clients, you should consider buying a GPS time unit. 
They come in all shapes and sizes, from a 1U rack-mount unit that plugs directly into your LAN to ISA and PCI cards that plug into a server and have an antenna. It is a good idea to firewall off your time server, as a denial of service attack on it would be detrimental to your network. In addition, if possible you should use the encryption available in ntpd; based on DES, it is generally sufficient to thwart most attackers. NTP runs on port 123 using udp (and when you connect to servers they will come from their port 123 to your port 123), so firewalling it is relatively simple:
ipfwadm -I -a accept -P udp -S -D 123
ipfwadm -I -a accept -P udp -S -D 123
ipfwadm -I -a deny -P udp -S -D 123
ipchains -A input -p udp -j ACCEPT -s -d 123
ipchains -A input -p udp -j ACCEPT -s -d 123
ipchains -A input -p udp -j DENY -s -d 123
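Beyond the packet filters above, ntp.conf has its own access controls that let a time server refuse everything it doesn't explicitly expect. A minimal sketch follows (the server names and LAN range are placeholders, and the restrict syntax is that of xntpd/ntpd, so check your own version's documentation):

```
# sync against two upstream stratum 2 servers (placeholder names)
server ntp0.upstream.example
server ntp1.upstream.example
# by default, ignore all NTP traffic...
restrict default ignore
# ...but accept time from the upstream servers (no remote reconfiguration)
restrict ntp0.upstream.example nomodify notrap
restrict ntp1.upstream.example nomodify notrap
# and let our own LAN query us for time only
restrict 10.0.0.0 mask 255.0.0.0 nomodify notrap
```

The "default ignore" plus explicit exceptions approach mirrors the deny-by-default firewall rules and limits what a spoofed packet can do.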

NTP server software

XNTP is available from: There usually are no man pages with ntpd or xntpd (wonderful, huh?), but documentation can be found in /usr/doc/ntp-xxxx/, or at:

NTP client software

ntpdate ships with most distributions.
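If you don't want a full ntpd on every client, a common approach is simply to run ntpdate from cron against your own time server. A sketch (the server name is a placeholder, and the flags are those of the stock ntpdate, so check "man ntpdate"):

```
# /etc/crontab: resync once an hour against the internal time server;
# -s logs results via syslog, -u uses an unprivileged source port
# (which makes it easier to pass through some firewalls)
0 * * * *   root   /usr/sbin/ntpdate -s -u ntp1.internal.example
```

This keeps clients from drifting without exposing an NTP listener on each of them.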

User information


There are a variety of services that can provide information about local users to other local users, and other machines. These can be useful if you want to find out which user connected to a machine, or see when they last logged in. Of course these are great services for attackers since they can glean a lot of information from them.

Ident server software

The ident service is used to map users/processes to ports in use. For example most IRC servers attempt to find out who is connecting to them by doing an ident lookup, which basically consists of asking the ident server on the client computer what information it has about a port number, and the response can range from nothing (if no-one is using that particular port) to a username, groupname, process id, and other interesting information. The default setting in most distributions is that identd is on (it is polite to run it, irc servers and newer versions of sendmail check identd responses), and will only hand out the username. The primary use of identd is to allow remote systems some means of tracking down users that are connecting to their servers, irc, telnet, mail, or other, for authentication purposes (not a good idea since it is very easy to fake). The local university here in Edmonton requires you to run identd if you want to telnet into any of the main shell servers, primarily so they can track down compromised accounts quickly. 
Running identd on your machine will help other administrators when tracking down problems, as they can not only get the IP address and time of a problem, but using identd can look up the user name. In this way it is a double-edged sword: while it gives out information useful for tracking down malicious users (definitely people you want to boot off of your servers), it can also be used to gain information about users on your system, leading to their accounts being compromised. Running identd on servers only makes sense if they are hosting shell accounts and the like.
Identd runs on port 113 using tcp, and typically you will only need it if you want to IRC (many irc networks require an identd response), or want to be nice to systems running daemons (such as tcp_wrapped telnet, or sendmail) that do identd lookups on connections.
ipfwadm -I -a accept -P tcp -S -D 113
ipfwadm -I -a accept -P tcp -S -D 113
ipfwadm -I -a deny -P tcp -S -D 113
ipchains -A input -p tcp -j ACCEPT -s -d 113
ipchains -A input -p tcp -j ACCEPT -s -d 113
ipchains -A input -p tcp -j DENY -s -d 113
Identd supports quite a few features and can easily be set to run as a non-root user. Depending on your security policy you may not want to give out very much information, or you might want to give out as much as possible. Simply tack the options on in inetd.conf, after in.identd (the defaults are -l -e -o).
-p port
-a address
Can be used to specify which port and address it binds to (in the case of a machine with aliased IPs or multiple interfaces); this is generally only useful if you want internal machines to connect, since external machines will probably not be able to figure out what port you changed it to.
-u uid
-g gid
Are used to set the user and group that identd will drop its privileges to after binding to the port; this makes it far less susceptible to compromising system security. As for handling the amount of information it gives out:
-o
Specifies that identd will not return the operating system type and will simply say "UNKNOWN", a very good option.
-n
Will have identd return user numbers (i.e. the UID) and not the username, which still gives them enough information to tell you and allow you to track the user down easily, without giving valuable hints to would-be attackers.
-N
Allows users to make a ~/.noident file, which will force identd to return "HIDDEN-USER" instead of information. This gives users the option of a degree of privacy, but a malicious user will use it to evade identification.
-F format
Enables you to specify far more information than is standard: everything from user name and number to the actual PID, command name, and the arguments that were given! This I would recommend only for internal use, as it is a lot of information that attackers would find useful.
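Putting the options above together, a fairly tight identd entry in inetd.conf might look like the following sketch (the path and the unprivileged user vary per distribution, and flag behavior differs between identd implementations, so verify against your identd's man page):

```
# run identd as the unprivileged "nobody" user, hide the OS type (-o)
# and return numeric UIDs instead of usernames (-n)
auth  stream  tcp  nowait  nobody  /usr/sbin/in.identd  in.identd -l -e -o -n
```

This still lets remote admins correlate a connection to an account (via the UID you can resolve locally) without handing usernames to anyone who asks.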
In general I would advise running identd on servers with user shell accounts, and otherwise disabling it, primarily due to the number of denial of service attacks it is susceptible to. Running identd will make life a lot easier for other administrators when tracking down attacks originating from your site, which will ultimately make your life easier. 
Other Identd daemons
There are also other versions of identd available, some with security enhancements (I do not endorse these as I have yet to test them):

Finger server software

Finger is one of those things most admins just disable and ignore. It is a useful tool on occasion, but if you want to allow other admins to figure out which of your users is currently trying to crack their machines, use identd. Finger lets out way too much info and is a favorite tool for initial probes and data gathering on targets. There have also been several nasty DoS attacks released, basically consisting of sending hundreds of finger requests and, in certain configurations, just watching the server croak. Please don't run finger. Many distributions ship with it enabled, but to quote inetd.conf from Red Hat:
# Finger, systat and netstat give out user information which may be
# valuable to potential "system crackers." Many sites choose to disable 
# some or all of these services to improve security.
If you still feel you absolutely must run it, use the -u option to deny finger @host requests, which are only ever used to gather information for future attacks. Disable finger, really. Fingerd has also been the cause of a few recent and very bad denial of service attacks, especially if you run NIS with large maps; DO NOT, repeat, do NOT run fingerd. Finger runs on port 79, and cfingerd runs on port 2003; both use tcp.
ipfwadm -I -a accept -P tcp -S -D 79
ipfwadm -I -a accept -P tcp -S -D 79
ipfwadm -I -a deny -P tcp -S -D 79
ipchains -A input -p tcp -j ACCEPT -s -d 79
ipchains -A input -p tcp -j ACCEPT -s -d 79
ipchains -A input -p tcp -j DENY -s -d 79
Cfingerd (configurable fingerd) is a great replacement for the stock fingerd, it was built with security in mind, runs as a non-root user typically, and users can easily configure it so they aren’t fingerable. Cfingerd is available from:
PFinger is similar to Cfingerd in that it is a secure replacement for the stock fingerd. You can get PFinger from:
The Finger Server
The Finger Server is a nice web based finger server that gives users the ability to update their finger information themselves. You can get it at:

Network services - HTTP / HTTPS


WWW traffic is one of the largest components of Internet usage today. There are a variety of popular WWW servers for Linux, the most popular of course being Apache (with over 50% of the market). Most modern WWW servers also have the capability to use SSL to secure sessions (for e-commerce and so on). This section is very Apache-centric, but since this is the default www server for almost all Linux (and *BSD) distributions it makes sense. I'm also writing for the 1.3.9 version of Apache, which no longer uses access.conf or srm.conf, but instead has rolled everything into httpd.conf.
HTTP runs on port 80, tcp, and if it is for internal use only (an Intranet, or www based control mechanism for a firewall server say) you should definitely firewall it.
ipfwadm -I -a accept -P tcp -S -D 80
ipfwadm -I -a accept -P tcp -S -D 80
ipfwadm -I -a deny -P tcp -S -D 80
or in ipchains:
ipchains -A input -p tcp -j ACCEPT -s -d 80
ipchains -A input -p tcp -j ACCEPT -s -d 80
ipchains -A input -p tcp -j DENY -s -d 80
HTTPS runs on port 443, tcp, and if it is for internal use only (an Intranet, or www based control mechanism for a firewall server say) you should definitely firewall it.
ipfwadm -I -a accept -P tcp -S -D 443
ipfwadm -I -a accept -P tcp -S -D 443
ipfwadm -I -a deny -P tcp -S -D 443
or in ipchains:
ipchains -A input -p tcp -j ACCEPT -s -d 443
ipchains -A input -p tcp -j ACCEPT -s -d 443
ipchains -A input -p tcp -j DENY -s -d 443

WWW server software

What can I say about securing Apache? Not much actually. By default Apache runs as the user 'nobody', giving it very little access to the system, and by and large the Apache team has done an excellent job of avoiding buffer overflows/etc. In general, most www servers simply retrieve data off of the system and send it out; most of the danger comes not from Apache but from sloppy programs that are executed via Apache (CGIs, server-side includes, etc.).
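That said, a few httpd.conf directives tighten the defaults further. The following is a hardening sketch for Apache 1.3.x (the directives are standard Apache 1.3, but review them against your own configuration and version before use):

```
# run as an unprivileged user and group
User nobody
Group nobody
# lock down the filesystem root; open up only what you explicitly serve
<Directory />
    Options None
    AllowOverride None
</Directory>
# don't advertise the exact server version and modules
ServerTokens Prod
```

Denying everything under "/" and then granting access per-DocumentRoot means a misconfigured alias or symlink can't expose arbitrary files by accident.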