Tuesday, July 27, 2010


The Transmission Control Protocol and Internet Protocol (TCP/IP) is a standard set of protocols developed in the late 1970s by the Defense Advanced Research Projects Agency (DARPA) as a means of communication between different types of computers and computer networks. TCP/IP is the driving force of the Internet, and thus it is the most popular set of network protocols on Earth.

TCP/IP Introduction

The two protocol components of TCP/IP deal with different aspects of computer networking. Internet Protocol, the "IP" of TCP/IP, is a connectionless protocol which deals only with network packet routing using the IP Datagram as the basic unit of networking information. The IP Datagram consists of a header followed by a message. The Transmission Control Protocol is the "TCP" of TCP/IP and enables network hosts to establish connections which may be used to exchange data streams. TCP also guarantees that the data exchanged over a connection is delivered and that it arrives at one network host in the same order as it was sent from another network host.

TCP/IP Configuration

The TCP/IP protocol configuration consists of several elements which must be set by editing the appropriate configuration files, or by deploying solutions such as a Dynamic Host Configuration Protocol (DHCP) server which, in turn, can be configured to provide the proper TCP/IP configuration settings to network clients automatically. These configuration values must be set correctly in order to facilitate the proper network operation of your Ubuntu system.
The common configuration elements of TCP/IP and their purposes are as follows:
  • IP address The IP address is a unique identifying string expressed as four decimal numbers ranging from zero (0) to two-hundred and fifty-five (255), separated by periods, with each of the four numbers representing eight (8) bits of the address for a total length of thirty-two (32) bits for the whole address. This format is called dotted quad notation.
  • Netmask The Subnet Mask (or simply, netmask) is a local bit mask, or set of flags which separate the portions of an IP address significant to the network from the bits significant to the subnetwork. For example, in a Class C network, the standard netmask is 255.255.255.0, which masks the first three bytes of the IP address and allows the last byte of the IP address to remain available for specifying hosts on the subnetwork.
  • Network Address The Network Address represents the bytes comprising the network portion of an IP address. For example, a host in a Class A network whose IP address begins with twelve (12) would use 12.0.0.0 as the network address: twelve (12) represents the first byte of the IP address (the network part), and zeroes (0) fill the remaining three bytes to represent the potential host values. A network host on the private Class C 192.168.1 network would in turn use a Network Address of 192.168.1.0, which specifies the first three bytes of the network and a zero (0) for all the possible hosts on the network.
  • Broadcast Address The Broadcast Address is an IP address which allows network data to be sent simultaneously to all hosts on a given subnetwork rather than specifying a particular host. The standard general broadcast address for IP networks is 255.255.255.255, but this broadcast address cannot be used to send a broadcast message to every host on the Internet because routers block it. A more appropriate broadcast address is set to match a specific subnetwork. For example, on the private Class C IP network 192.168.1.0, the broadcast address is 192.168.1.255. Broadcast messages are typically produced by network protocols such as the Address Resolution Protocol (ARP) and the Routing Information Protocol (RIP).
  • Gateway Address A Gateway Address is the IP address through which a particular network, or host on a network, may be reached. If one network host wishes to communicate with another network host, and that host is not located on the same network, then a gateway must be used. In many cases, the Gateway Address will be that of a router on the same network, which will in turn pass traffic on to other networks or hosts, such as Internet hosts. The value of the Gateway Address setting must be correct, or your system will not be able to reach any hosts beyond those on the same network.
  • Nameserver Address Nameserver Addresses represent the IP addresses of Domain Name Service (DNS) systems, which resolve network hostnames into IP addresses. There are three levels of Nameserver Addresses, which may be specified in order of precedence: the Primary Nameserver, the Secondary Nameserver, and the Tertiary Nameserver. In order for your system to be able to resolve network hostnames into their corresponding IP addresses, you must specify valid Nameserver Addresses which you are authorized to use in your system's TCP/IP configuration. In many cases these addresses can and will be provided by your network service provider, but many free and publicly accessible nameservers are available for use, such as the Level3 (Verizon) servers with IP addresses from 4.2.2.1 to 4.2.2.6.
    The IP address, Netmask, Network Address, Broadcast Address, and Gateway Address are typically specified via the appropriate directives in the file /etc/network/interfaces. The Nameserver Addresses are typically specified via nameserver directives in the file /etc/resolv.conf. For more information, view the system manual page for interfaces or resolv.conf respectively, with the following commands typed at a terminal prompt:
    Access the system manual page for interfaces with the following command:

    man interfaces

    Access the system manual page for resolv.conf with the following command:

    man resolv.conf
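The relationship between the IP address, netmask, network address, and broadcast address described above can be verified with a little shell arithmetic. This is only a sketch; the 192.168.1.100 host and 255.255.255.0 netmask are invented example values:

```shell
# Derive the Network and Broadcast Address from an IP address and netmask
# using the bit operations described above. Example values are invented.
to_int() {  # dotted quad -> 32-bit integer
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
    echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 ))
}
to_quad() {  # 32-bit integer -> dotted quad
    echo "$(( ($1>>24)&255 )).$(( ($1>>16)&255 )).$(( ($1>>8)&255 )).$(( $1&255 ))"
}

ip=$(to_int 192.168.1.100)
mask=$(to_int 255.255.255.0)
network=$(to_quad $(( ip & mask )))                          # bits masked ON
broadcast=$(to_quad $(( (ip & mask) | (~mask & 4294967295) )))  # host bits all 1
echo "network=$network broadcast=$broadcast"
```

Running this prints the 192.168.1.0 network and 192.168.1.255 broadcast address, matching the Class C examples above.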

IP Routing

IP routing is a means of specifying and discovering paths in a TCP/IP network along which network data may be sent. Routing uses a set of routing tables to direct the forwarding of network data packets from their source to the destination, often via many intermediary network nodes known as routers. There are two primary forms of IP routing: Static Routing and Dynamic Routing.
Static routing involves manually adding IP routes to the system's routing table, and this is usually done by manipulating the routing table with the route command. Static routing enjoys many advantages over dynamic routing, such as simplicity of implementation on smaller networks, predictability (the routing table is always computed in advance, and thus the route is precisely the same each time it is used), and low overhead on other routers and network links due to the lack of a dynamic routing protocol. However, static routing does present some disadvantages as well. For example, static routing is limited to small networks and does not scale well. Static routing also fails completely to adapt to network outages and failures along the route due to the fixed nature of the route.
Dynamic routing depends on large networks with multiple possible IP routes from a source to a destination and makes use of special routing protocols, such as the Routing Information Protocol (RIP), which handle the automatic adjustments in routing tables that make dynamic routing possible. Dynamic routing has several advantages over static routing, such as superior scalability and the ability to adapt to failures and outages along network routes. Additionally, there is less manual configuration of the routing tables, since routers learn from one another about their existence and available routes. This trait also reduces the chance of introducing mistakes into the routing tables via human error. Dynamic routing is not perfect, however, and presents disadvantages such as heightened complexity and additional network overhead from router communications, which does not immediately benefit the end users, but still consumes network bandwidth.
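To make the routing-table idea concrete, here is a small sketch of the longest-prefix-match rule a router applies when choosing among overlapping routes. The table entries and gateway addresses are invented for illustration:

```shell
# Longest-prefix match over a tiny, invented routing table (plain sh).
to_int() {  # dotted quad -> 32-bit integer
    old_ifs=$IFS; IFS=.; set -- $1; IFS=$old_ifs
    echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 ))
}

lookup() {  # print the gateway chosen for destination $1
    dst=$(to_int "$1"); best_len=-1; best_gw=none
    # entries are "network/prefixlen=gateway" -- example values only
    for entry in "0.0.0.0/0=192.168.1.1" \
                 "10.0.0.0/8=192.168.1.254" \
                 "10.1.0.0/16=192.168.1.253"; do
        net=${entry%%/*}
        rest=${entry#*/}; len=${rest%%=*}; gw=${entry#*=}
        if [ "$len" -eq 0 ]; then mask=0
        else mask=$(( (4294967295 << (32 - len)) & 4294967295 )); fi
        # a route matches when the masked destination equals its network;
        # among matches, the most specific (longest prefix) wins
        if [ $(( dst & mask )) -eq "$(to_int "$net")" ] && [ "$len" -gt "$best_len" ]; then
            best_len=$len; best_gw=$gw
        fi
    done
    echo "$best_gw"
}

lookup 10.1.2.3    # most specific match: the /16 route
lookup 8.8.8.8     # only the default route matches
```

A real kernel routing table works the same way, just with an optimized data structure instead of a linear scan.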


TCP is a connection-based protocol, offering error correction and guaranteed delivery of data via what is known as flow control. Flow control determines when the flow of a data stream needs to be stopped and when previously sent data packets need to be re-sent due to problems such as collisions, thus ensuring complete and accurate delivery of the data. TCP is typically used in the exchange of important information such as database transactions.
The User Datagram Protocol (UDP), on the other hand, is a connectionless protocol which seldom deals with the transmission of important data because it lacks flow control or any other method to ensure reliable delivery of the data. UDP is commonly used in such applications as audio and video streaming, where it is considerably faster than TCP due to the lack of error correction and flow control, and where the loss of a few packets is not generally catastrophic.


The Internet Control Message Protocol (ICMP) is an extension to the Internet Protocol (IP), as defined in Request For Comments (RFC) 792, and supports network packets containing control, error, and informational messages. ICMP is used by such network applications as the ping utility, which can determine the availability of a network host or device. Examples of some error messages returned by ICMP which are useful to both network hosts and devices such as routers include Destination Unreachable and Time Exceeded.


Daemons are special system applications which typically execute continuously in the background and await requests for the functions they provide from other applications. Many daemons are network-centric; that is, a large number of daemons executing in the background on an Ubuntu system may provide network-related functionality. Some examples of such network daemons include the Hypertext Transfer Protocol Daemon (httpd), which provides web server functionality; the Secure Shell Daemon (sshd), which provides secure remote login shell and file transfer capabilities; and the Internet Message Access Protocol Daemon (imapd), which provides e-mail services.

Configuring hands-free installation using Kickstart

1. Preparing the installation server (HTTP, NFS, FTP)

#mkdir -p /ks
#cp /root/anaconda-ks.cfg /ks/ks.cfg
#chmod 644 /ks/ks.cfg
#vi /etc/httpd/conf.d/install.conf
Alias /ks "/ks"
<Directory "/ks">
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
#service httpd restart
#cp /ks/ks.cfg /var/ftp/pub/ks.cfg
#vi /etc/exports
/ks      *(ro,no_root_squash)
#service nfs restart
2. Preparing the ks.cfg file
#vi ks.cfg
url --url           # HTTP
url --url           # FTP (use an ftp:// URL)
nfs --server= --dir=/install   # NFS
Note: choose one of the three methods
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw --iscrypted $1$6sZN40io$cwGN9pScqhCBn8AeRX4910
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --disabled
timezone Asia/Shanghai
bootloader --location=mbr --driveorder=sda --append="rhgb quiet"
clearpart --all --drives=sda
part /boot --fstype ext3 --size=100 --ondisk=sda
part pv.2 --size=0 --grow --ondisk=sda
volgroup VolGroup00 --pesize=32768 pv.2
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=256 --grow --maxsize=512
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
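Before pointing an install at a hand-edited ks.cfg, it is worth a quick sanity check that the required directives are present. The sketch below writes a throwaway example file so it is self-contained; point ks at your real /ks/ks.cfg instead:

```shell
# Quick sanity check that a kickstart file contains the directives every
# install needs. A throwaway example file is written first so this sketch
# can run anywhere; on the real server, set ks=/ks/ks.cfg.
ks=$(mktemp)
cat > "$ks" <<'EOF'
lang en_US.UTF-8
keyboard us
rootpw --iscrypted $1$6sZN40io$cwGN9pScqhCBn8AeRX4910
timezone Asia/Shanghai
EOF

missing=0
for directive in lang keyboard rootpw timezone; do
    grep -q "^$directive" "$ks" || { echo "missing: $directive"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "ks file looks complete"
rm -f "$ks"
```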

3. Installing the CentOS5 client with kickstart
Set BIOS to boot from CD-ROM
Boot from boot.iso or the CentOS5 DVD
boot:linux ks=            # HTTP
boot:linux ks=              # FTP
boot:linux ks=nfs:              # NFS
Note: you can choose one of the three methods to start the installation.

Performing network OS installation (HTTP, NFS, FTP)

1. HTTP
A. Preparing the installation server
#yum install httpd
#mkdir -p /var/www/html/install
Method1 #mount -o loop /path/to/Centos5.iso /var/www/html/install (for ISO image)
Method2 #mount -o loop /dev/hdc /var/www/html/install (for CD-ROM)
#vi /etc/httpd/conf.d/install.conf
Alias /install "/var/www/html/install"
<Directory "/var/www/html/install">
    Options Indexes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
#service httpd restart
Next, make sure that the /install directory is shared via HTTP, and verify
that a client can access it.
B. Installing the CentOS5 client using HTTP
Set BIOS to boot from CD-ROM
Boot from boot.iso or the CentOS5 DVD
boot:linux askmethod                             # text mode by default
Choose a Language --> English --> OK
Keyboard Type --> us --> OK
Installation Method --> HTTP --> OK
Configure TCP/IP --> Enable IPv4 support --> DHCP --> OK
HTTP Setup --> Web site name: --> CentOS directory: /install --> OK
Next, the "Welcome" dialog appears. Follow the steps in the Redhat installation document.
2. NFS
A. Preparing the installation server
#yum install nfs-utils
#mkdir /install
Method1 #cp /path/to/Centos5.iso /install/Centos5.iso
Method2 #mount -o loop /dev/hdc /install (for CD-ROM)
#vi /etc/exports
/install   *(ro,no_root_squash)
#service portmap restart
#service nfs restart
Be sure to test the NFS share by the following command:
#showmount -e localhost
#exportfs -v
B. Installing the CentOS5 client using NFS
Set BIOS to boot from CD-ROM
Boot from boot.iso or the CentOS5 DVD
boot:linux askmethod               #GUI mode by default; you can specify the text mode by typing
boot:linux askmethod text
Choose a Language --> English --> OK
Keyboard Type --> us --> OK
Installation Method --> NFS image --> OK
Configure TCP/IP --> Enable IPv4 support --> DHCP --> OK
NFS Setup --> NFS server name: --> CentOS directory: /install --> OK
Next, the "Welcome" dialog appears. Follow the steps in the Redhat installation document.
3. FTP
A. Preparing the installation server
#yum install vsftpd
#mkdir -p /var/ftp/pub/install
Method1. #mount -o loop /path/to/Centos5.iso /var/ftp/pub/install (for ISO image)
Method2. #mount -o loop /dev/hdc /var/ftp/pub/install (for CD-ROM)
#service vsftpd restart
Next, make sure that the /var/ftp/pub/install directory is shared via FTP, and verify
that a client can access it.
B. Installing the CentOS5 client using FTP
Set BIOS to boot from CD-ROM
Boot from boot.iso or the CentOS5 DVD
boot:linux askmethod                              # text mode by default
Choose a Language --> English --> OK
Keyboard Type --> us --> OK
Installation Method --> FTP --> OK
Configure TCP/IP --> Enable IPv4 support --> DHCP --> OK
FTP Setup --> FTP site name: --> CentOS directory: /pub/install --> OK
Next, the "Welcome" dialog appears. Follow the steps in the Redhat installation document.
Note: the boot.iso can be found in the images folder of the CentOS ISO.


In a previous post we looked at the install and setup of a kickstart server. One of the last steps that had to be taken on the client was to use an "append" at the boot prompt to assign the client a static ip address. This time we are going to look at setting up PXE services for clients to create a truly "hands-off" approach to installing desktops and servers with kickstart. I will be using the HTTP protocol again for my kickstart, and I must say resources out there for the PXE/Kickstart/HTTP combination are really limited. It took a lot of trial and error to get this working; the FTP and NFS methods are much easier to implement.

You should already have a working kickstart server in place before setting up anything else in this post. For those that don't as a quick refresh you should have the following directory structure:

|-- CentOS
|-- images
|   `-- pxeboot
|-- isolinux
|   `-- isolinux.cfg
|-- kickstart
`-- repodata
In the pxeboot folder should be the vmlinuz and initrd.img files, and the kickstart folder should contain your kickstart file (test.cfg in our case). You can also refer to the earlier post to set this up. Next, you will need to set up a DHCP server.
# yum -y install dhcp
# cp /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample /etc/dhcpd.conf
# vi /etc/dhcpd.conf

## /etc/dhcpd.conf file ##
ddns-update-style interim;
ignore client-updates;
allow booting;
allow bootp;

subnet netmask {
   # default gateway
   option routers;
   option subnet-mask;
   option domain-name   "mydomain.org";
   option domain-name-servers;
   # EST Time Zone
   option time-offset   -18000; 
   # Client IP range
   range dynamic-bootp;
   default-lease-time 21600;
   max-lease-time 43200;
   # PXE Server IP
   filename "pxelinux.0";
}

## END FILE ##
Now you will need to save the file and set the service to start on boot.
# chkconfig dhcpd on
# service dhcpd restart

Now your DHCP server should be set up and working properly. You can test this if you'd like by allowing a client to lease an ip address from the server to verify that it is working (run the dhclient command on any linux box). Next we will need to set up a TFTP server to serve up the PXE file to clients. We will need to install the server and configure it to run with the xinetd service. Essentially all you need to do is change the "disable" option to "no".
# yum -y install tftp-server
# vi /etc/xinetd.d/tftp

## /etc/xinetd.d/tftp file ##

service tftp
{
        socket_type           = dgram
        protocol              = udp
        wait                  = yes
        user                  = root
        server                = /usr/sbin/in.tftpd
        server_args           = -s /tftpboot
        disable               = no
        per_source            = 11
        cps                   = 100 2
        flags                 = IPv4
}

## END FILE ## 
Save the file and restart the service for it to take effect:
# service xinetd restart

Next is going to be the install of syslinux, which is required to allow the clients to actually PXE boot.
# yum -y install syslinux

Simple enough. Next we will need to create the TFTP directory layout for the clients to PXE boot from.
# cd /
# mkdir tftpboot
# cd tftpboot
# mkdir images
# mkdir pxelinux.cfg
# cp /usr/share/syslinux/menu.c32 .
# cp /usr/share/syslinux/pxelinux.0 .

* Some will have to use /usr/lib/syslinux
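The directory creation above can be collapsed into a single mkdir -p call. The sketch below builds the same layout under a scratch directory so it is harmless to try; on the real server you would use / as the root:

```shell
# Build the /tftpboot layout in one step (a temp dir is used here for
# safety; substitute / on the actual PXE server).
root=$(mktemp -d)
mkdir -p "$root/tftpboot/images" "$root/tftpboot/pxelinux.cfg"
ls "$root/tftpboot"
```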

Now your directory structure should be in place with the required files. Lastly, we will copy over the kernel and initial ramdisk for the clients to use when booting.
# cd images
# cp /var/www/pub/images/pxeboot/vmlinuz .
# cp /var/www/pub/images/pxeboot/initrd.img .

Finally we just need to make the PXE file that directs the clients where to boot from.
# cd /tftpboot/pxelinux.cfg
# vi default

## /tftpboot/pxelinux.cfg/default ##

default menu.c32
prompt 0
timeout 10


LABEL centos54-x32
MENU LABEL CentOS 5.4 x32
KERNEL images/vmlinuz
append initrd=images/initrd.img ks=

## END FILE ##

Once you save and close this file you are done with the setup! There is one small change I forgot to mention: you will need to adjust your firewall settings for these new services.
# vi /etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 67 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 68 -j ACCEPT
-A RH-Firewall-1-INPUT -m udp -p udp --dport 69 -j ACCEPT
# service iptables restart

That should do it. Now, as many of you have probably guessed by now, I use the following addresses on my "lab" network to perform these test installs:

DHCP Server:
DNS Server:
PXE Server:
Clients: -

Most of this should be obvious from following this tutorial. Now try PXE booting your client and it should pick up all that it needs from the PXE server, boot the linux kernel into RAM, and begin executing your kickstart file for installation. I will note, for those of you that are not using the HTTP protocol (NFS or FTP), there are very few changes that need to be made to this tutorial to make PXE booting work for you. In particular you will have a different directory layout when starting, and the /tftpboot/pxelinux.cfg/default file will need to have the last line changed to the format of the protocol you are using.

CentOS DHCP Server Setup

One of the basic elements found on all networks is a DHCP server, making it an important part of any network.  DHCP makes network administration easy because you can make changes at a single point (the DHCP server) on your network and let them filter down to the rest of the network.  To begin setting up a DHCP server we are going to first need to configure our machine with a static ip address.  As the root user you will need to open the following file, /etc/sysconfig/network-scripts/ifcfg-eth0 (assuming you want the eth0 interface to distribute ip addresses to the network).  Configure the following:
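The original settings were lost from this post; as a placeholder, a static configuration for eth0 might look like the sketch below. ifcfg files are shell-style variable assignments, and every address shown is a hypothetical example, not a value from this post:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- every value below is a
# hypothetical example; substitute addresses for your own network.
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
```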


Once finished you will need to restart the networking service: service network restart

Now that you have a static ip address setup you will need to install the dhcp package which contains the DHCP server software.  After this package is installed there are two important files which we will need to work with.  The first is /etc/dhcpd.conf which is the configuration file for the DHCP server.  This file may not exist, in which case you will need to create it.  You can find a sample to work off of (recommended) at /usr/share/doc/dhcp-<version>/dhcpd.conf.sample.  Copy this over to the main configuration file and then edit the main configuration file to your specifications.  This is the easiest and fastest way to set up the DHCP server.  The second file to take note of is /var/lib/dhcpd/dhcpd.leases which stores all the client leases for the DHCP server.

$ yum install dhcp.i386

$ cp /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample /etc/dhcpd.conf
$ nano /etc/dhcpd.conf

# Sample DHCP Config File
ddns-update-style interim;

subnet netmask {

     # Parameters for the local subnet
     option routers                      ;
     option subnet-mask                  ;

     option domain-name                            "testbed.edu";
     option domain-name-servers          ;

     default-lease-time       21600;
     max-lease-time       43200;

     range dynamic-bootp;
}

$ service dhcpd restart

As we can see looking through the configuration file above, there is only one subnet for this network.  The gateway has been defined by the "option routers" parameter, the DNS information by the "option domain-name" parameters, and the leases for the clients by the "range" parameter.  Restarting the DHCP service will allow the configuration file to be loaded into the server and it will begin to lease ip addresses to clients.  One other configuration parameter that you should know is how to set up reserved ip addresses.

# In the configuration file
host client01 {

 option host-name "client01.example.com";
 hardware ethernet 02:B4:7C:43:DD:FF;
}

This basically reserves this ip address for the client01 host with the specified MAC address.  This can be useful for printers or particular addresses that you wish to reserve.  You can now watch as clients begin leasing their ip addresses from the server as they connect to the network.  Some other ideas you might want to consider implementing with a DHCP server are a failover server, relay servers, and backing up the configuration file and/or the lease database.  As a tip, instead of editing the dhcpd.conf file and then restarting the server to make changes, you can use the omshell command which will allow you to connect to, query, and change the configuration while the server is running.
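The host block above is missing the address it actually hands out (the original value was lost from this post). A complete reservation might look like the fragment below, where the 192.168.0.25 address is a hypothetical example:

```
# dhcpd.conf fragment -- the fixed-address value is a hypothetical example
host client01 {
    option host-name "client01.example.com";
    hardware ethernet 02:B4:7C:43:DD:FF;
    fixed-address 192.168.0.25;
}
```

The fixed-address statement is what ties the MAC address to a specific ip, turning the block into a true reservation.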

Building a Kickstart Install Server (CentOS/Redhat)

In a recent project I needed to build a kickstart server which would be used to automate the deployment of some new servers that were being setup.  We will build a simple kickstart server offering installs over HTTP to the clients.  I will also post a sample kickstart script that I used to accomplish the installs of the servers.  This little project actually turned out to be quite easy, and the most difficult part was writing the custom scripts to execute after the installation of the server completed.  First we are going to install/setup the kickstart server itself.  I will be using VirtualBox as my test environment to demonstrate here.  Using the CentOS install DVD, walk through the install instructions to get the system up and running.  This should be fairly simple although I will make one note; I only install the base packages and the Gnome desktop manager to keep the install quick and easy.  If you'd like to add other packages to your install just be aware that it can raise the amount of time that the install takes (my install time was about 10 minutes).  After the installation of the server is complete you will be brought to the desktop for the first time.  When performing network installations with kickstart you can actually offer up the install files via NFS, HTTP, or FTP.  I chose to use HTTP because it was the quickest and slightly easier than the other two methods.  It also requires less configuration for those attempting this for the first time.

While you don't need Gnome in order for this server to work properly, it is easier to use and saves time in the configuration aspects.  Once you have a desktop go to System -> Administration -> Security Level & Firewall.  Here you can make changes to SELinux and the Firewall.  First we will need to check off the boxes for NFS, SSH, and HTTP (don't disable the firewall unless you are in a totally isolated environment).  In the second tab change the SELinux setting to permissive or disabled (I chose disabled because I have no need for it on this server).  Confirm all the changes and allow the settings to take effect.  Next we will need to install Apache which will serve up the installation files.  Open a shell, change to the root user, and install Apache with: yum install httpd.  Once the install has completed verify that the service is running with: service httpd status.  The last part of the configuration will be to create the directory structure we will use to serve the installation files and populate them.  You will need to make sure that the install DVD for CentOS is in the CD-ROM drive.  In the same shell use the following commands to create directories and copy over the files for installation.

cd /var/www
mkdir pub
cd pub
mkdir kickstart
cp -vr /media/CentOS_5.4_Final/ /var/www/pub/

You should now have a pub directory that is filled with the install files from the CentOS 5.4 install DVD, which will be used to install clients via kickstart.  For the last steps we will need to build a kickstart file and copy it into /var/www/pub/kickstart where the clients will pull it from during install.  Below I will paste a basic kickstart file (with comments) that you can copy & paste into a file called test.cfg, which will need to be moved to /var/www/pub/kickstart.  Kickstart files can get very complex with scripts and custom settings which is why we are going to use this basic template.

# Kickstart file for a basic install.

url --url
lang en_US.UTF-8
keyboard us

# Assign the client a static IP upon first boot & set the hostname
network --device eth0 --bootproto static --ip= --netmask= --gateway= --nameserver= --hostname RHEL01 --noipv6

# Set the root password
rootpw --iscrypted

# Enable the firewall and open port 22 for SSH remote administration
firewall --enabled --port=22:tcp

# Setup security and SELinux levels
authconfig --enableshadow --enablemd5
selinux --permissive

# Set the timezone
timezone --utc America/New_York

# Create the bootloader in the MBR with drive sda being the drive to install it on
bootloader --location=mbr --driveorder=hda

# Wipe all partitions and build them with the info below
# ***hda may be different on your machine depending on the type of drives you use***
clearpart --drives=hda --all --initlabel
part /boot --fstype ext3 --size=100
part / --fstype ext3 --size=5000
part swap --size=2000
part /home --fstype ext3 --size=100 --grow

# Install the Base and Core software package groups for a minimal install, plus OpenSSH server & client
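The package section itself was lost from this post; a minimal %packages section matching the comment above might look like the fragment below. The group names are assumed from the standard CentOS 5 package groups:

```
%packages
@base
@core
openssh-server
openssh-clients
```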

Now everything is in place.  The kickstart server has been built, the kickstart file is in place, and you are ready to boot up your client to start testing a kickstart installation.  For our test we will kick off another virtual machine and boot from the netinstall.iso (available from the CentOS downloads page).  This will boot off the CD and give us a prompt for parameters to be passed to the kernel during boot up.  We will use the following command:

boot: linux text ks= ip= netmask=

This command tells the client to boot the kernel, look for the server (our kickstart server), retrieve the test.cfg file from /pub/kickstart/, and assign the client a static address.  There are two things to note here: one, in order to not use a static address you will need a functional DHCP server with specific settings configured (this will be detailed in another post); two, the static ip assignment can actually take place in the kickstart file, however there is a bug in CentOS currently which prevents this from happening, which is why we must specify a static ip via kernel boot parameters.  If you typed the command correctly and the server is set up properly you will see the client begin to install the system automatically.  When finished you will be prompted to reboot and your system will be ready for use!  While the install is happening you can view log files in the background by switching virtual terminals: Alt+F2 will give you a shell, Alt+F3 will show command-line logs, and Alt+F4 shows the kernel logs.  This process to automatically install servers and clients via kickstart is extremely helpful in rolling out new systems and fairly easy to accomplish.  Hopefully you will take this further and work on customizing your installations and post-install scripts.  For a reference on kickstart files see the documentation:


For a more automated approach to kickstart check out my other posting for PXE booting, hands free install:

RHEL5 DHCP Configuration file

ddns-update-style interim;
ignore client-updates;

subnet netmask {

# --- default gateway
option routers;
option subnet-mask;

option nis-domain "domain.org";
option domain-name "prasanna.org";
option domain-name-servers;

option time-offset -18000; # Eastern Standard Time
# option ntp-servers;
# option netbios-name-servers;
# --- Selects point-to-point node (default is hybrid). Don't change this unless
# -- you understand Netbios very well
# option netbios-node-type 2;

range dynamic-bootp;
default-lease-time 21600;
max-lease-time 43200;

# we want the nameserver to appear at a fixed address
host ns {
    next-server marvin.redhat.com;
    hardware ethernet 12:34:56:78:AB:CD;
}
}

DHCP & NFS Configuration in RHEL5


DHCP Server Configuration

Step 1 :           # rpm -q dhcp*
                        # rpm -ivh dhcp*
Step 2 :           # cd /usr/share/doc/dhcp-3.0.1
                        # ls
                        dhcpd.conf.sample
                        # cp dhcpd.conf.sample /etc/dhcpd.conf
Step 3 :           # vi /etc/dhcpd.conf
                        (set the option domain-name and option domain-name-servers entries to your DNS server)
Step 4 :           # vi /etc/sysconfig/network-scripts/ifcfg-eth0
Step 5 :           # service dhcpd restart
                        # chkconfig dhcpd on

Client Configuration

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

NFS Server Configuration

Step 1 :           # rpm -ivh nfs*
Step 2 :           # vi /etc/exports
                        /test *(rw,sync)
Step 3 :           # exportfs -av
Step 4 :           # service portmap restart
                        # chkconfig portmap on
Step 5 :           # service nfs restart
                        # chkconfig nfs on

Client Configuration

Step1  :           # mount /mnt
                        # cd /mnt          
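The mount command above is missing its source argument (the server address was lost from this post); with a hypothetical server of 192.168.0.254 it would read mount 192.168.0.254:/test /mnt. To make the mount persist across reboots, an /etc/fstab entry can be used instead; again, the address is only an example:

```
# /etc/fstab fragment -- 192.168.0.254 is a hypothetical example address
192.168.0.254:/test   /mnt   nfs   ro   0 0
```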

Building a Two-Node Linux Cluster with Heartbeat

The term "cluster" is actually not very well defined and could mean different things to different people. According to Webopedia, cluster refers to a group of disk sectors. Most Windows users are probably familiar with lost clusters--something that can be rectified by running the defrag utility.
However, at a more advanced level in the computer industry, cluster usually refers to a group of computers connected together so that more computer power, e.g., more MIPS (millions of instructions per second), can be achieved or higher availability (HA) can be obtained.
Beowulf, Super Computer for the "Poor" Approach
Most supercomputers in the world are built on the concept of parallel processing--high-speed computing power is achieved by pooling the power of individual computers. Made by IBM, "Deep Blue", the supercomputer that played chess with world champion Garry Kasparov, was a computer cluster consisting of several hundred RS/6000s. In fact, many big-time Hollywood animation companies, such as Pixar and Industrial Light and Magic, use computer clusters extensively for rendering (a process that translates all the information, such as color, movement and physical properties, into a single frame of picture).
In the past, a supercomputer was an expensive deluxe item that only a few universities or research centers could afford. Started at NASA, Beowulf is a project for building clusters from "off-the-shelf" hardware (e.g., Pentium PCs) running Linux at a very low cost.
In the last several years, many universities world-wide have set up Beowulf clusters for the purpose of scientific research or simply for exploration of the frontier of super computer building.
High Availability (HA) Cluster
Clusters in this category use various technologies to gain an extra level of reliability for a service. Companies such as Red Hat, TurboLinux and PolyServe have cluster products that would allow a group of computers to monitor each other; when a master server (e.g., a web server) goes down, a secondary server will take over the services, similar to "disk mirroring" among servers.
Simple Theory
Because I do not have access to more than one real (or public) IP address, I set up my two-node cluster in a private network environment with some Linux servers and some Win9x workstations.
If you have access to three or more real/public IP addresses, you can certainly set up the Linux cluster with real IP addresses.
In the above network diagram (fig1.gif), the Linux router is the gateway to the Internet, and it has two IP addresses. The real IP,, is attached to a network card (eth1) in the Linux router and should be connected to either an ADSL modem or a cable modem for Internet access.
The two-node Linux router consists of node1 ( and node2 ( Depending on your setup, either node1 or node2 can be your primary server, and the other will be your backup server. In this example, I will choose node1 as my primary and node2 as my backup. Once the cluster is set up, with IP aliasing (see the IP Aliasing Linux Mini HOWTO for more detail), the primary server will be running with an extra IP address (. As long as the primary server is up and running, services (e.g., DHCP, DNS, HTTP, FTP, etc.) on node1 can be accessed by either or In fact, IP aliasing is the key concept for setting up this two-node Linux cluster.
When node1 (the primary server) goes down, node2 will take over all services from node1 by starting the same IP alias ( and all subsequent services. In fact, some services can coexist between node1 and node2 (e.g., FTP, HTTP, Samba, etc.); however, a service such as DHCP can have only one single running copy on the same physical segment. Likewise, we can never have two identical IP addresses running on two different nodes in the same network.
In fact, the underlying principle of a two-node, high-availability cluster is quite simple, and people with some basic shell programming techniques could probably write a shell script to build the cluster. We can set up an infinite loop within which the backup server (node2) simply keeps pinging the primary server; if the ping is unsuccessful, it starts the floating IP ( as well as the necessary dæmons (programs running in the background).
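The loop just described can be sketched in a few lines of shell; everything here (the primary's hostname, the floating IP, the service list, the two-second interval) is an illustrative placeholder, not code from the article:

```shell
#!/bin/sh
# Failover watchdog sketch, run on the backup node.
# PRIMARY, FLOAT_IP and SERVICES are hypothetical values.

PRIMARY=node1                 # host the backup keeps pinging
FLOAT_IP=192.168.1.100        # floating (alias) IP to bring up on takeover
SERVICES="httpd smb dhcpd"    # dæmons to start on takeover

# Return 0 (success) if the primary answers a single ping.
primary_alive() {
    ping -c 1 -w 2 "$PRIMARY" >/dev/null 2>&1
}

# Bring up the IP alias and start the services.
take_over() {
    ifconfig eth0:0 "$FLOAT_IP" up
    for svc in $SERVICES; do
        /etc/rc.d/init.d/"$svc" start
    done
}

# The monitoring loop itself (commented out so the sketch is safe to source):
# while true; do
#     primary_alive || { take_over; break; }
#     sleep 2
# done
```

Heartbeat does essentially this, plus serial-link heartbeats, authentication and clean resource handover, which is why it is preferable to a home-grown script.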
A Two-Node Linux Cluster HOWTO with "Heartbeat"
You need two Pentium-class PCs with a minimum specification of a 100MHz CPU, 32MB RAM, one NIC (network interface card) and a 1GB hard drive. The two PCs need not be identical. In my experiment, I used an AMD K6 350MHz and a Pentium 200 MMX. I chose the AMD as my primary server as it can complete a reboot (you need to do a few reboots for testing) faster than the Pentium 200. With the great support of CFSL (Computers for Schools and Libraries) in Winnipeg, I got some 4GB SCSI hard drives as well as some Adaptec 2940 PCI SCSI controllers. The old and almost obsolete equipment is in good working condition and is perfect for this experiment.
Primary server:
  • AMD K6 350MHz CPU
  • 4GB SCSI hard drive (you certainly can use an IDE hard drive)
  • 128MB RAM
  • 1.44MB floppy drive
  • 24x CD-ROM (not needed after installation)
  • 3COM 905 NIC

Backup server:
  • Pentium 200 MMX
  • 4GB SCSI hard drive
  • 96MB RAM
  • 1.44MB floppy drive
  • 24x CD-ROM
  • 3COM 905 NIC

The Necessary Software
Both node1 and node2 must have Linux installed. I chose Red Hat and installed Red Hat 7.2 on node1 and Red Hat 6.2 on node2 (I simply wanted to find out if we could build a cluster with different versions of Linux installed on different nodes). Make sure you have installed all dæmons that you want to support. Here are my installation details:
Hard disk partitions: 128MB for swap and the rest mounted for "/" (so that you don't need to worry about whether there is too much or not enough for a certain subdirectory).
Installed Packages:
  • Apache
  • FTP
  • Samba
  • DNS
  • dhcpd (server)
  • Squid
Heartbeat is a part of Ultra Monkey (The Linux HA Project), and the RPM can be downloaded from www.UltraMonkey.org.
The download is small, and the RPM installation is smooth and simple. However, the documentation for configuration is hard to find and confusing. In fact, that is the reason I decided to write this HOWTO--so that hopefully you can get your cluster set up with fewer problems.
Setting Up the Primary Server (node1) and the Backup Server (node2)
It is not the purpose of this article to show you how to install Red Hat; a lot of excellent documentation can be found at either www.linuxdoc.org or www.redhat.com. I will simply include some of the most important configuration files for your reference:
/etc/hosts

    localhost
    router
    node1
    node2
This file should be the same on both node1 and node2; you may add any other nodes as you see fit.
Check HOSTNAME (cat /etc/HOSTNAME) and make sure it returns either node1 or node2. If not, you can use this command (uname -n > /etc/HOSTNAME) to fix the hostname problem.
ifconfig for node1
eth0      Link encap:Ethernet  HWaddr 00:60:97:9C:52:28  
          inet addr:  Bcast:  Mask:
          RX packets:18617 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14682 errors:0 dropped:0 overruns:0 carrier:0
          collisions:3 txqueuelen:100 
          Interrupt:10 Base address:0x6800 
eth0:0    Link encap:Ethernet  HWaddr 00:60:97:9C:52:28  
          inet addr:  Bcast:  Mask:
          Interrupt:10 Base address:0x6800 
lo        Link encap:Local Loopback  
          inet addr:  Mask:
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
Please notice that eth0:0 shows the IP aliasing with IP
ifconfig for node2
eth0      Link encap:Ethernet  HWaddr 00:60:08:26:B2:A4  
          inet addr:  Bcast:  Mask:
          RX packets:15673 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17550 errors:0 dropped:0 overruns:0 carrier:0
          collisions:2 txqueuelen:100 
          Interrupt:10 Base address:0x6700 
lo        Link encap:Local Loopback  
          inet addr:  Mask:
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
Install the Heartbeat RPM
If you are using Internet Explorer on Windows, you might have problems accessing FTP (Netscape works much better). I suggest you either use a command-line FTP or an FTP Windows/X Window System client (e.g., wu_ftp) to access the FTP site of Ultra Monkey (ftp.UltraMonkey.org).
Once you log in to the FTP server of Ultra Monkey, go to pub, then UltraMonkey and then the latest version 1.0.2 (not the beta). The only package is heartbeat-0.4.9-1.um.1.i386.rpm; save heartbeat-0.4.9-1.um.1.i386.rpm on your Linux box, log in as root and install it with
rpm -ivh heartbeat-0.4.9-1.um.1.i386.rpm

Null Modem Cable, Crossover Cable, Second NIC
According to the accompanying documentation, you need to install a second NIC on both nodes and connect them with a crossover cable. Besides the second NIC, a null modem cable connecting the serial (COM) ports of each node is mandatory (according to the documentation). I followed the instructions and installed everything. However, as I did more tests on the cluster, I found that the null modem cable, crossover cable and second NIC are optional; they are nice to have but definitely not mandatory.
Configuring Heartbeat is the most important part of the whole installation, and it must be done correctly to get your cluster working. Moreover, the configuration should be identical on both nodes. There are three configuration files, all stored under /etc/ha.d: ha.cf, haresources and authkeys.
My /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
#       File to write other messages to
logfile /var/log/ha-log
#       Facility to use for syslog()/logger 
logfacility     local0
#       keepalive: how many seconds between heartbeats
keepalive 2
#       deadtime: seconds-to-declare-host-dead
deadtime 10
udpport 694
#       What interfaces to heartbeat over?
udp     eth0
node    atm1
node    cluster1
# ------> end of ha.cf
Whatever is not shown above, you can simply leave as it was (all commented out by the #). The last three options are the most important:
udp     eth0
node    atm1
node    cluster1
Unless you have a crossover cable, you should use your eth0 (your only NIC) for udp; the two node entries at the end of the file must be the same as returned by uname -n on each node.
My /etc/ha.d/haresources
atm1 IPaddr:: httpd smb dhcpd
This is the only line you need; in the above example, I included httpd, smb and dhcpd. You may add as many dæmons as you want, provided their names exactly match the scripts under /etc/rc.d/init.d.
My /etc/ha.d/authkeys
You don't need to add anything to this file, but you have to issue the command
chmod 600 /etc/ha.d/authkeys
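If your version of Heartbeat refuses to start without authentication configured, a minimal authkeys using CRC (no real authentication--acceptable only on a trusted LAN) looks like this; this is general Heartbeat authkeys syntax, not a file from the original setup:

```
auth 1
1 crc
```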
Start the Heartbeat Daemon
You may start the dæmon with either

service heartbeat start

or

/etc/rc.d/init.d/heartbeat start
Once heartbeat is started on both nodes, you will find that the ifconfig from the primary server will return something like:
ifconfig for node1
eth0      Link encap:Ethernet  HWaddr 00:60:97:9C:52:28  
          inet addr:  Bcast:  Mask:
          RX packets:18617 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14682 errors:0 dropped:0 overruns:0 carrier:0
          collisions:3 txqueuelen:100 
          Interrupt:10 Base address:0x6800 
eth0:0    Link encap:Ethernet  HWaddr 00:60:97:9C:52:28  
          inet addr:  Bcast:  Mask:
          Interrupt:10 Base address:0x6800 
lo        Link encap:Local Loopback  
          inet addr:  Mask:
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
When you see the line eth0:0, heartbeat is working, and you can try to access the server by using and check the log files /var/log/ha-log. Also, check the log file on node2 ( and try
ps -A | grep dhcpd
and you should find no running dhcpd on node2.
Now, the real HA test. Reboot, and then shut down the primary server (node1). Don't just power down the server; make sure you issue reboot or press CTRL-ALT-DEL and wait until everything is shut down properly before you turn off your PC.
Within ten seconds, go to node2 and try ifconfig. If you can get the IP aliasing eth0:0, you are in business and have a working HA two-node cluster.
eth0      Link encap:Ethernet  HWaddr 00:60:08:26:B2:A4  
          inet addr:  Bcast:  Mask:
          RX packets:15673 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17550 errors:0 dropped:0 overruns:0 carrier:0
          collisions:2 txqueuelen:100 
          Interrupt:10 Base address:0x6700 
eth0:0    Link encap:Ethernet  HWaddr 00:60:08:26:B2:A4  
          inet addr:  Bcast:  Mask:
          RX packets:15673 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17550 errors:0 dropped:0 overruns:0 carrier:0
          collisions:2 txqueuelen:100 
          Interrupt:10 Base address:0x6700 
You can try
ps -A | grep dhcpd
or you can try to release and renew the IP info on your Win9x workstation, and you should see the new address for the dhcpd server.

Commercial Products
Commercial products from Red Hat, TurboLinux and PolyServe use the same concept of IP aliasing. When the primary server goes down, the backup server will pick up the same aliasing IP so that high availability can be achieved.
The cluster product from PolyServe is very sophisticated. It has support for SANs (storage area networks) and is capable of more than two nodes. It is very easy to install and configure; I successfully configured the trial version, without reading any documentation, through a Windows monitoring client. However, sophistication comes with a price tag, and the software costs more than a thousand dollars for a two-node cluster. The 30-day trial version cluster will stop after two hours, which is not much fun for testing.
The cluster product from TurboLinux needs some fine-tuning. The installation documentation is confusing (or maybe they simply don't want people to do it themselves). The web configuration tool is unstable; the CGI script crashes whenever the user clicks the reload or refresh button. And of course, as a commercial product, it comes with a high price tag.
Linux is very stable and reliable, and it is quite common to have servers up and running for a few hundred days at a time. Heartbeat worked fine in my tests, and if you are looking to provide higher availability for a small business or educational institution, Heartbeat is definitely an excellent option.

Network services in RHEL-resolv.conf,network,hosts

1: setting DNS for your server:

# vim /etc/resolv.conf
search example.com
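The nameserver entry that normally accompanies the search line is missing above; a complete resolv.conf typically looks like this (the address is a hypothetical placeholder):

```
search example.com
nameserver 192.168.1.2
```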

2: setting a local DNS using /etc/hosts

# vim /etc/hosts
IP Address        Hostname             Alias
                  localhost            Localhost
                  Gate.openarch.com    gate.openarch.com Gate

3: setting the hostname for the server

# vim /etc/sysconfig/network
# service network restart
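Step 3 edits /etc/sysconfig/network; a typical file pairs the NETWORKING switch with the hostname (the hostname below reuses the one from the hosts example and is illustrative):

```shell
# /etc/sysconfig/network -- sketch; the hostname is illustrative
NETWORKING=yes
HOSTNAME=gate.openarch.com
```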

4: setting DHCP or static IP

# ifconfig
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp   (or BOOTPROTO=static)
# ifdown eth0
# ifup eth0
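For the static case, BOOTPROTO=static must be accompanied by explicit address settings; a sketch with hypothetical values:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static sketch (all addresses hypothetical)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
# apply the change with: ifdown eth0 && ifup eth0
```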