Sunday, August 14, 2011

To Secure the Apache Web Server

The Apache web server is an extremely stable and secure piece of software. With Apache powering close to 70 percent of the web sites on the Internet today, it has been well tested. It has become clear over the last decade that no software is 100% secure. Fortunately, there are several simple steps you can take to make your Apache installation more secure.

Keep Current

The single biggest cause of security breaches is out-of-date software. As bugs and exploits are found in the Apache web server, patches are released to correct them. The single biggest step you can take toward securing your Apache server is to install those patches or upgrade to the latest release of Apache.

Security By Obscurity

The default Apache installation options cause the server to add a signature that shows what version of Apache you are running, what operating system it is running on and even what modules you are using in your Apache configuration. Providing this information makes it easier to exploit your system since hackers will have a great deal of information about the types and versions of your software and can easily search for vulnerabilities. While security by obscurity is not enough by itself, it is a good way to improve the security of your server. To disable Apache’s signature and reduce the information included in the HTTP header, add the following options to your default httpd.conf file:

ServerSignature Off
ServerTokens Prod

Run Under the Right User and Group

The default installation of Apache configures the web server to run as the user nobody and the group nobody. While this is definitely better than some older configurations that ran the server as root, it can still be problematic, because on some systems the nobody user and group are shared by several subsystems. If one of those other subsystems is compromised, the attackers would also have access to your Apache server and files. Likewise, if Apache were compromised, the attackers could do added damage to the other subsystems. Using a separate user and group for Apache is recommended. You can set these in httpd.conf using the following:

User apache
Group apache

Control Directory and File Access

Apache has access controls that can be used to tighten your security. In particular, you want to block access to any files outside of your web root. This prevents users from downloading system files or reading configuration files for your web application if your server were to be misconfigured. Accomplishing this takes two steps. The first is to add the following to your default httpd.conf file:

Order Deny,Allow
Deny from All
Options None

This configuration effectively blocks access to all files on your file system. The next step is to selectively enable access to the files in your web root directory. If you are running multiple virtual hosts, you will need to include this in each virtual host configuration. For this example, let's say that your web root is /home/user/web. To enable access to the files in the web root, add this to your configuration:

Order Allow,Deny
Allow from All
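
Putting the two steps together: these directives take effect inside Directory blocks, so a sketch of the relevant httpd.conf fragment (using the hypothetical web root from above) looks like:

```apache
# Deny everything by default, starting at the filesystem root
<Directory />
    Order Deny,Allow
    Deny from All
    Options None
    AllowOverride None
</Directory>

# Then re-enable access only for the web root
<Directory /home/user/web>
    Order Allow,Deny
    Allow from All
</Directory>
```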

Turn Off Unneeded Modules

This especially applies when it comes to Apache modules. You should disable any modules that you do not need and are not specifically using. There is always a risk that the default configuration of an unused module allows something that you did not intend, and the easiest solution is to disable the module. If you are using DSO modules, simply remove or comment out the LoadModule line in httpd.conf for any module you are not using. You can list the modules compiled into Apache with the command:

httpd -l
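
For DSO modules, disabling is just a matter of commenting out the corresponding line in httpd.conf; a sketch (the module names here are only examples):

```apache
# Comment out LoadModule lines for modules you do not use, e.g.:
#LoadModule status_module modules/mod_status.so
#LoadModule autoindex_module modules/mod_autoindex.so
```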

Protect .htaccess

.htaccess files allow per-directory configuration changes. However, .htaccess can also create security problems: depending on what options are enabled in Apache, .htaccess can override a number of Apache’s configuration settings.

You need to set this within a directory block. For example if your web root was /home/user/web, you would use the following in your Apache configuration:

<Directory /home/user/web>
    AllowOverride None
</Directory>

Control Permissions on Configuration Files

Your Apache configuration files should be owned by root and writable only by root, so that unprivileged users cannot alter the server's behavior.
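
A sketch of the corresponding commands, demonstrated on a scratch file (your real configuration lives at a path like /usr/local/apache/conf/httpd.conf, and the chown must be run as root):

```shell
# Demonstrate locking down a config file on a scratch copy
touch /tmp/httpd.conf.demo
# On the real file, as root, you would also run: chown root:root <file>
chmod 600 /tmp/httpd.conf.demo        # read/write for owner only
stat -c '%a' /tmp/httpd.conf.demo     # prints 600
```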

Don’t Allow Writing in Executable Directories 

Do not grant write permission on directories that hold executable content; if an attacker manages to write a file into such a directory, the server could execute it. It also helps to make the configuration files themselves read-only (mode 444).

Disable FollowSymLinks

Symbolic links can expose files and directories on your file system that you did not intend to expose. Apache supports FollowSymLinks as a setting for Options. When this option is set, Apache will allow a user to follow a symbolic link to a file that is outside of the web root. You can stop this behavior by using:

Options None

within a Directory block. Or if you are enabling other options you can use:

Options -FollowSymLinks
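
In context, a sketch for the same example web root used earlier:

```apache
<Directory /home/user/web>
    Options -FollowSymLinks
</Directory>
```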

Ten Linux Commands You Might Not Use Often

1.Quickly Find a PID with pgrep

pgrep looks through the currently running processes and lists the process IDs that match the selection criteria.

pgrep ssh

This will list all PIDs associated with the ssh process.

2.Execute The Last Executed Command

!!

This will execute the last command you used on the command line.

3.Execute The Last Command Starting With s

If you want to execute a command from history starting with the letter s, you can use the following:

!s

This will execute the last command used on the command line that started with s.

4.Run a Command Repeatedly and Display the Output

watch runs a command repeatedly, allowing you to watch the program's output change over time. By default, the command is run every 2 seconds.

watch -d ls -l

This will watch the current directory for any file changes and highlight the change when it occurs.

5.Save Quickly in VI/VIM

If you’re in a hurry, you can save and quit the file you’re editing in vi by exiting insert mode, holding shift, and hitting z twice.

6.Quickly Log Out of a Terminal

You can quickly log out of a terminal session by using: CTRL+D

7.Navigate to the Last Directory You Were In

cd - will take you to the last directory you were in.

8.Make Parent Directories the Smart Way

mkdir -p /home/adam/make/all/of/these/directories/ will create all directories as needed even if they do not exist.

9.Delete the Entire Line

Delete the entire line in the terminal by using: CTRL+U.

10.Set the Time stamp of a File

touch -c -t 0801010800 filename will set the file's timestamp to 2008-01-01 08:00; the -c flag avoids creating the file if it does not exist. The format is (YYMMDDhhmm).
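
A quick check of the behavior on a scratch file (the path is arbitrary):

```shell
# Create a scratch file, then back-date it to 2008-01-01 08:00
touch /tmp/stamp.demo
touch -c -t 0801010800 /tmp/stamp.demo
stat -c '%y' /tmp/stamp.demo   # modification time shows 2008-01-01 08:00:00
```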


Screen command tips

Screen is a full-screen window manager for terminal sessions. It is best known for multiplexing a single terminal across several processes. Using it, you can run any number of commands within a single terminal.

1) First open a terminal and type :

$ screen

2) Screen starts and creates a new single window with a shell.

New windows can be created within the same terminal using the screen command.

3) Now that screen is running in a terminal, suppose you want to run the 'top' command to check the system load and, at the same time, ping an IP address.
First start 'top'; it will run in the current window.
Now open a new window in screen by pressing '[Ctrl + a] c', which I will write as 'C-a c' from here on. This creates a new window in the same terminal, where you can start the ping (or any other command).
In screen, each window is given a unique identifier: the first window is numbered 0, the next is 1, and so on. To switch between your 'top' and the ping, use 'C-a 0' and 'C-a 1' respectively.

You can also log out from the machine and log back in later. Then start any terminal session and type 'screen -r' to reattach and continue from where you left off.
If more than one screen session is running on the machine, screen lists them so you can choose which one to resume.
For example, say I have two screen sessions. When I type the 'screen -r' command, it gives the following message:

$ screen -r
There are several suitable screens on:
2999.pts-6.localhost (Detached)
1920.PTS-6.localhost (Detached)
Type "screen [-d] -r [pid]" to resume one of them.

To Check Your Linux System's Temperature

To find out more about your system's temperature, install acpi and run:

acpi -V


Battery 0: Full, 100%
Battery 0: design capacity 7800 mAh, last full capacity 4988 mAh = 63%
Adapter 0: on-line
Thermal 0: ok, 63.5 degrees C
Thermal 0: trip point 0 switches to mode critical at temperature 126.0 degrees C
Cooling 0: Processor 0 of 10

Taking Screenshots from the Terminal in Linux

import screenshot.jpg

This will allow you to select a rectangle using your mouse. The moment you let go of your left mouse button, a screenshot with the contents of that rectangle will be saved in the current directory.
And then there’s scrot.

scrot -d 4 screenshot.png

This will take a screenshot of your entire desktop, with a delay of 4 seconds between launching the command and saving the screenshot.png file. Use

scrot -c -d 4 screenshot.png

to also display a countdown in the console. Use

scrot -q 80 -c -d 4 screenshot.jpg

to additionally set the JPEG quality (here 80). The following also works, using xwd together with ImageMagick's convert:

xwd | convert xwd:- screenshot_$(date +%Y%m%d_%H%M%S).png

To Take a Screenshot of a Remote Desktop

If you want to take a screenshot of a remote Linux desktop:

DISPLAY=:0.0 import -window root screenshot.png

Linux Command for Checking Your Gmail Inbox

You can access your Gmail inbox from the command prompt:

#curl -u username:password --silent "" | perl -ne 'print "\t" if //; print "$2\n" if /<(title|name)>(.*)<\/\1>/;'

To find IP address & To watch Linux Memory usage

Find out your router’s external IP address using the Linux command line

Using the following commands, we can see our router's external IP address.

without curl

#wget -O - -q

with curl


#curl -s '' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'

To see Linux memory usage in real-time

If you want to display your memory usage in real-time, do a

#watch -d "free -mt"

It will display used and free memory every two seconds.

To Watch SSH Users' Actions

To view your SSH users' activities:

#cat /dev/vcs1

This will show you what happens on the first console. To check the other consoles, use /dev/vcs2, /dev/vcs3, and so on.

Man Command Utilities

Man pages (short for manual pages) are the documentation that comes preinstalled with almost all Unix and Unix-like operating systems. The Linux command used to display them is man. Each page is a self-contained document.

The following are the section numbers of the manual, followed by the types of pages they contain.

1   Executable programs or shell commands (or) User-level commands
2   System calls (functions provided by the kernel)
3   Library calls (functions within program libraries)
4   Special files (or) Devices and device drivers
5   File formats and conventions, e.g. /etc/passwd
6   Games
7   Miscellaneous (macro packages and conventions), e.g. man(7)
8   System administration commands (or) System maintenance and operation commands(usually only for root)
9   Kernel routines [Non standard]

To Create a PDF document for man page

Syntax:

man -t (command) | ps2pdf - (command).pdf
man -t mkdir | ps2pdf - mkdir.pdf

To Enable Query Cache in Mysql

Query caching is a way to increase MySQL performance by caching the results of database queries.
To enable it, you only need to edit one file; on Red Hat systems it is /etc/my.cnf.

Add the following lines in the mysqld section

query_cache_limit = 16M
query_cache_size = 256M
query_cache_type = 1
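
In context, the relevant /etc/my.cnf fragment looks like this:

```ini
[mysqld]
query_cache_limit = 16M
query_cache_size = 256M
query_cache_type = 1
```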

restart the mysql daemon

# /etc/init.d/mysql restart

To verify the cache is enabled

# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 36074287
Server version: 5.0.92-community-log MySQL Community Edition (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW VARIABLES LIKE '%query_cache%';

+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| have_query_cache             | YES       |
| query_cache_limit            | 16777216  |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 268435456 |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF      |
+------------------------------+-----------+
6 rows in set (0.00 sec)

Strange Linux Commands Stands For

awk = "Aho Weinberger and Kernighan"
            This language was named by its authors, Al Aho, Peter Weinberger and Brian Kernighan.

cat = "CATenate"
The cat command is a standard Unix program used to concatenate and display files. The name is from catenate, a synonym of concatenate.

grep = "Global Regular Expression Print"
    grep comes from the ed command to print all lines matching a certain pattern g/re/p where re is a regular expression.

    fgrep = "Fixed GREP"
    fgrep searches for fixed strings only. The "f" does not stand for "fast" - in fact, "fgrep foobar *.c" is usually slower than "egrep foobar *.c"
    egrep = "Extended GREP"
nroff = "New ROFF"
troff = "Typesetter new ROFF"
    These are descendants of "roff", which was a re-implementation of the Multics "runoff" program (a program that you'd use to "run off" a good copy of a document)

tee = T
    From plumbing terminology for a T-shaped pipe splitter.

Perl = "Practical Extraction and Report Language"
Perl = "Pathologically Eclectic Rubbish Lister"
    The Perl language is Larry Wall's highly popular freely-available completely portable text, process, and file manipulation tool that bridges the gap between shell and C programming.

Command to check UUID

In your /etc/fstab file, you may have seen an entry that looks like UUID=c81355eb-96d2-458a-8ce0-3fa12a04cb8e instead of a more familiar disk drive designation, such as /dev/hda1. Such entries are called universally unique identifiers (UUIDs). You can use these 128-bit numbers to make hard disk management easier.

The following command prints the UUID for a device. This may be used with UUID= in /etc/fstab to name devices in a way that keeps working even as disks are added and removed; Red Hat uses this in its /etc/fstab file.

Print the UUID of a selected partition, /dev/sda1:

#blkid -o value -s UUID /dev/sda1

Print all UUIDs

#blkid -o value -s UUID
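
For example, a /etc/fstab entry keyed by UUID (reusing the sample UUID from above; the mount options are illustrative) looks like:

```
UUID=c81355eb-96d2-458a-8ce0-3fa12a04cb8e  /  ext3  defaults  1 1
```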

Few Basic Monitoring tools

The following monitoring tools can be used to get information about system activities and to track down performance problems. Some basic monitoring commands:

1.top - Process Activity

When you need to see the running processes on your Linux system in real time, top is your tool for that. Besides the running processes, top also displays other information, such as free physical and swap memory, and it refreshes the list every few seconds.

2.vmstat - Report virtual memory statistics

The command vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.

3.w - Who Is Logged on And What They Are Doing

w displays information about the users currently on the machine and their processes: the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.

4.uptime - How Long The System is Running

The uptime command can be used to check how long the server has been running.

5.ps - Process Status

The ps command displays the currently running processes on the system.

6.free - Information About Free and Used Memory on the System

The command free displays the total amount of free and used physical and swap memory in the system, as well as the buffers used by the kernel.

7.iostat - Average CPU Load, Disk Activity

The iostat command reports CPU statistics and input/output statistics for devices, partitions, and network filesystems (NFS).

8.sar - System Activity Reporter

The sar command is used to collect, report, and save system activity information; by default its cron job samples activity every 10 minutes.

9.mpstat - Multiprocessor Usage

The mpstat command displays activities for each available processor, reporting global and per-processor statistics.

10.pmap - Process Memory Usage

pmap displays the memory map of the process with the specified PID.

11.netstat - network statistics

The command netstat displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.

12.ss - Socket Statistics

The ss command is used to dump socket statistics. It shows information similar to netstat.

13.iptraf - Real-time Network Statistics

The iptraf command is an interactive, colorful IP LAN monitor. It is an ncurses-based tool that generates various network statistics, including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others.

14.tcpdump - Detailed Network Traffic Analysis

tcpdump is a simple command that dumps traffic on a network. However, you need a good understanding of TCP/IP to make use of this tool, e.g. to display traffic information about DNS.

15.strace - trace system calls and signals

Trace system calls and signals. This is useful for debugging webserver and other server problems.

16./proc File System - Various Kernel Statistics

The /proc file system provides detailed information about various hardware devices and other Linux kernel internals.
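
A few quick reads from /proc to illustrate (these files are standard on Linux):

```shell
head -n 3 /proc/meminfo   # total / free memory counters
cat /proc/loadavg         # load averages, runnable tasks, last PID
cat /proc/uptime          # seconds up, seconds idle
```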

Installing CHKROOTKIT on a Linux Server

chkrootkit (Check Rootkit) is a common Unix-based program intended to help system administrators check their system for known rootkits. It is a shell script that uses common UNIX/Linux tools such as strings and grep.

Environments for chkrootkit:
chkrootkit is tested on: Linux 2.0.x, 2.2.x, 2.4.x and 2.6.x,
FreeBSD 2.2.x, 3.x, 4.x and 5.x, OpenBSD 2.x, 3.x and 4.x., NetBSD
1.6.x, Solaris 2.5.1, 2.6, 8.0 and 9.0, HP-UX 11, Tru64, BSDI and Mac OS X.

1. Login to your server as root. (SSH)

2. Download chkrootkit.
Type: wget

3. Unpack the chkrootkit you just downloaded.
Type: tar xvzf chkrootkit.tar.gz

4. Change to new directory
Type: cd chkrootkit*

5. Compile chkrootkit
Type: make sense

6. Run chkrootkit
Type: ./chkrootkit

What chkrootkit does

1. It checks for signs of rootkits - chkrootkit, ifpromisc.c, chklastlog.c, chkwtmp.c, check_wtmpx.c, chkproc.c, chkdirs.c, strings.c, chkutmp.c; chkrootkit is the main module which controls all other modules.

2.chkrootkit checks system binaries for modifications (e.g. find, grep, cron, crontab, echo, env, su, ifconfig, init, sendmail, ...).

3.Next, it finds default files and directories of many rootkits (sniffer's logs, HiDrootkit's default dir, tOrn's default files and dirs...).

4.After that, it continues to look for default files and directories of known rootkits.

If it says "Checking `bindshell'... INFECTED (PORTS: 465)" on a mail server, this is normally a false positive triggered by the SMTPS service listening on port 465, not a real infection.

The following tests are made:

aliens asp bindshell lkm rexedcs sniffer wted w55808 scalper slapper z2 amd basename biff chfn chsh cron date du dirname echo egrep env find fingerd gpm grep hdparm su ifconfig inetd inetdconf init identd killall ldsopreload login ls lsof mail mingetty netstat named passwd pidof pop2 pop3 ps pstree rpcinfo rlogind rshd slogin sendmail sshd syslogd tar tcpd tcpdump top telnetd timed traceroute vdir w write.

ClamAV Installation and Use on Linux

Clam AntiVirus (ClamAV) is a free, cross-platform antivirus toolkit able to detect many types of malicious software, including viruses. It is often said that there are no viruses on the Linux platform, which is largely true. But a mail attachment sent from a Windows machine may well carry a virus. Such a virus will not affect our Linux server, but it will affect the Windows users who use our websites.

Download ClamAV from

# tar zxvf clamav-0.95.1.tar.gz

# cd clamav-0.95.1

# ./configure

# make all

# make install

Once installation is complete, you need to modify two configuration files to get ClamAV running and receiving definition updates:

1. vim /etc/clamd.conf
   Comment out the "Example" line (line 8).
2. vim /etc/freshclam.conf
   Comment out the "Example" line (line 8).

ClamAV Installation in cPanel

#Main >> cPanel >> Manage Plugins

#Name: clamavconnector
      Author: cPanel Inc.
      Select the "Install and keep updated" tick box

    and finally Save.

    Then wait for the install to complete in WHM.

You can also install it from the back end. Follow these steps:
#Go terminal window
#login as root

#For 32 bit installations:
    cd /usr/local/cpanel/modules-install/clamavconnector-Linux-i686

#For 64 bit:
    cd /usr/local/cpanel/modules-install/clamavconnector-Linux-x86_64

#Run ./install (preferably inside a screen session)

update your virus definitions:

freshclam

check files in your home directory:

clamscan

check files in the entire home directory:

clamscan -r /home

check files on the entire drive (displaying everything):

clamscan -r /

check files on the entire drive but only display infected files and ring a bell when found:

clamscan -r --bell --mbox -i /

scan and mail report

clamscan --remove -r --bell -i /home/example/mail/ |  mail -s 'clam'

Example output from a scan:

/home/example/mail/new/,S=42794: Trojan.Spy.Zbot-464 FOUND
/home/example/mail/new/,S=42794: Removed.
/home/example/mail/new/,S=10619: Trojan.Downloader.Agent-1452 FOUND
/home/example/mail/new/,S=10619: Removed.

File Fragmentation checking on Linux

To find file fragmentation information for a specific file, we can use the filefrag command.
filefrag reports on how badly fragmented a particular file is. It makes allowances for indirect blocks on ext2 and ext3 filesystems, but can be used on files from any filesystem.


filefrag -v (filename)

filefrag -v /home/example/example.txt

-v   => verbose when checking for file fragmentation

Output (for example):

Checking example.txt
Filesystem type is: ef53
Filesystem cylinder groups is approximately 606
Blocksize of file example.txt is 4096
File size of example.txt is 1194 (1 blocks)
First block: 7006588
Last block: 7006588
example.txt: 1 extent found

To Clear Linux Memory Cache

To free pagecache:

# sync; echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

# sync; echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

# sync; echo 3 > /proc/sys/vm/drop_caches

==> sync - flush file system buffers

Linux Ext2, Ext3, Ext4 File Systems

A Linux file system is a collection of files and directories. Each file system is stored on a separate partition or whole disk. Below we look at the common file system types.

The ext2 (second extended file system) is a file system for the Linux kernel. It was initially designed by Remy Card as a replacement for the extended file system (ext) and was introduced with the 1.0 kernel in 1993. Ext2 is flexible, can handle file systems up to 4 TB, and supports long file names up to 255 characters. It has the sparse superblocks feature, which increases file system performance. In case any user process fills up a file system, ext2 normally reserves about 5% of the disk blocks for exclusive use by root, so that root can easily recover from the situation.

The ext3 (third extended file system) is a journaled file system commonly used by the Linux kernel, and the default file system for many popular Linux distributions. Stephen Tweedie developed ext3. It provides all the features of ext2, and also adds journaling and backward compatibility with ext2. The backward compatibility lets you still run kernels that are only ext2-aware with ext3 partitions, use all of the ext2 tuning, repair, and recovery tools with ext3, and upgrade an ext2 file system to ext3 without losing any of your data.
Ext3's journaling feature speeds up recovery time. On ext2, when a file system is uncleanly unmounted, the whole file system must be checked, which takes a long time on large file systems. On an ext3 system, the system keeps a record of uncommitted file transactions and applies only those transactions when the system is brought back up, so a complete file system check is not required and the system comes back up much faster.

The ext4 (fourth extended file system) is a journaling file system for Linux, part of the kernel since 2.6.28, and the evolution of the most-used Linux file system, ext3. In many ways, ext4 is a deeper improvement over ext3 than ext3 was over ext2: ext3 was mostly about adding journaling to ext2, but ext4 modifies important data structures of the file system, such as the ones destined to store the file data. The result is a file system with an improved design, better performance, reliability, and features. It was developed by Mingming Cao, Andreas Dilger, Alex Zhuravlev, Dave Kleikamp, Theodore Ts'o, Eric Sandeen, Sam Naghshineh, and others.
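
The ext2-to-ext3 upgrade mentioned above works by adding a journal with tune2fs. A sketch on a scratch loopback image, so no real disk is touched (assumes the e2fsprogs tools are installed):

```shell
# Build a small ext2 file system inside a regular file
dd if=/dev/zero of=/tmp/fs.img bs=1M count=16 2>/dev/null
mkfs.ext2 -F -q /tmp/fs.img
# Add a journal, turning it into an ext3-compatible file system
tune2fs -j /tmp/fs.img
tune2fs -l /tmp/fs.img | grep features   # feature list now includes has_journal
```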

Runtime linux file change monitor

The following command monitors file changes in real time, much as top does for processes and CPU usage:

watch -d -n 2 'df -Th; ls -FlAt'

(Note the quotes: without them, watch would re-run only df -Th.)

d  => (differences) Highlight changes between iterations.
n  => (interval) run the command every n seconds (2 here).
df => report file system disk space usage.
T  => print file system type.
h  => print sizes in human readable format.
ls => list directory contents.
F => (classify)append indicator (one of */=>@|) to entries.
l   => a long listing format.
A => (almost-all) do not list implied . and ..
t  => sort by modification time.

It will update as files are written to the file system. Changes are highlighted when a file is modified, so you will know exactly what the users you have granted SSH access are modifying.

Drop all ping packets

You can set a kernel variable to drop all ping packets.

# echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all

This instructs the kernel to simply ignore all ping requests (ICMP echo requests, type 8 messages).

To enable ping request type the command:

# echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all


You can make the change permanent by adding the following line to the /etc/sysctl.conf file:

net.ipv4.icmp_echo_ignore_all = 1

Save and close the file, then run sysctl -p to apply the setting.

Osi Layers

Open Systems Interconnection - created by the ISO (International Organization for Standardization), with work starting in the late 1970s.

* The OSI model helps network devices from two different vendors communicate using a common set of rules.
* The full set of rules is difficult for vendors to handle at once, so the rules are separated into seven groups, each group called a layer. This is known as the layered approach model.
* The seven layers: Application, Presentation, Session, Transport, Network, Data Link, Physical.
* The top three layers (Application, Presentation and Session) manage the application process (host layers), and the bottom four layers (Transport, Network, Data Link, Physical) manage the communication process (media layers).

Application Layer:

* The application layer provides the interface between the user and the particular application.
* The application layer is responsible for establishing and identifying the intended communication partner (user).
* The application layer chooses the actual application (by port number) and hands the data to the next layer in the protocol stack.
* The application layer serves network-related applications as well as desktop applications.

* An example: on my system I removed the NIC card, the TCP/IP settings, and the network drivers. Opening an HTML page in Internet Explorer still produced an HTTP request, and eventually an error message. The user's request created the application layer PDU (protocol data unit), which was passed down the stack via the HTTP port (80); the presentation and session layer PDUs were added, and the packet finally reached the network layer, which looks up the source and destination IP addresses. Because the system had no NIC, encapsulation failed and the data was lost, so the user got an error message back. The point: the user connected to the application and still received a reply.

Presentation Layer:

* The presentation layer is responsible for presenting data: translation and code formatting for the application layer entities (application layer PDUs).
* The presentation layer is also responsible for data compression/decompression and data encryption/decryption.
* The presentation layer translates data to a general format such as ASCII: the sender's data is translated to the common format and then re-translated to the native format at the destination system.

Session Layer: 

* The session layer is responsible for setting up (connecting), managing, and tearing down (disconnecting) sessions between presentation layer entities.
* The session layer provides dialog control (end-to-end connection control) between two devices or nodes, and organizes server-to-client or node-to-node communication using different modes:
Simplex - one-way communication (the sender transmits and the receiver receives, but the receiver cannot reply to the sender).
Half duplex - the sender sends data, and only after the data is received can the receiver reply (sender and receiver cannot transmit at the same time).
Full duplex - sender and receiver can communicate at the same time.
* The session layer also keeps different applications' data separate, so each stream reaches the correct application at the destination.

Transport Layer: 

* The transport layer is responsible for creating segments (breaking big data into smaller segments) from session layer data, and for reassembling segments back into data for the session layer.
* The transport layer provides end-to-end transport services and creates a logical connection between the sending host and the receiving host.
* The transport layer works with the TCP and UDP protocols.
TCP (Transmission Control Protocol)

rsync command with examples

rsync: Remote Sync.

Description: The rsync utility is used to synchronize files and directories from one location to another in an efficient way. The backup location can be on the local server or on a remote server.

(*).One of the main features of rsync is that it transfers only the changed blocks to the destination, instead of sending the whole file.

rsync options source destination (or)
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST

-a - Archive mode, same as -rptgolD [enough to copy an entire folder of files without changing file permissions]
-r - Recursive [copy the entire directory tree]
-p, --perms preserve(Unalter) permissions
-t, --times preserve times
-o, --owner preserve owner (super-user only)
-g, --group preserve group
-l, --links copy symlinks as symlinks
-D same as --devices --specials
--devices preserve device files (super-user only)
--specials preserve special files

-z, --compress - Compress file data during the transfer.

-v - Verbose, explain what is being done.

-u - Do not overwrite modified files at the destination [if a file at the destination has been modified and you use the -u option, the modified file will not be overwritten].

--rsh=COMMAND - Specify the remote shell to use, e.g. --rsh='ssh -p1055' to log in to the remote system.

--progress - Show a progress meter during the transfer.

--stats - give some file-transfer stats

--delete - This tells rsync to delete extraneous files from the receiving side (ones that aren’t on the sending side), but only for the directories that are being synchronized.

--exclude - This option is used to exclude certain files from the transfer.

--max-size=SIZE - This tells rsync to avoid transferring any file that is larger than the specified SIZE

--min-size=SIZE - This tells rsync to avoid transferring any file that is smaller than the specified SIZE
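Putting a few of these options together: before a large sync it is worth previewing what rsync would do. A minimal sketch, assuming rsync is installed locally (the src/ and dst/ directory names are placeholders):

```shell
# Create a small source tree to sync (placeholder paths).
mkdir -p src
echo "hello" > src/a.txt

# -n (--dry-run) lists what would be transferred without copying anything.
rsync -avzn src/ dst/

# Run the real transfer, then verify the file arrived.
rsync -avz src/ dst/
ls dst/
```

The trailing slash on src/ means "the contents of src"; without it, the directory src itself would be created inside dst/.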


1.rsync command to copy only the files inside a folder (the trailing slash on doc/ means "the contents of doc")

#rsync -avz doc/ Desktop/welcome/
#ls Desktop/welcome/

===>taglist.txt is the file present in the folder doc

2.rsync command to copy the entire folder to the destination (no trailing slash, so doc itself is copied)

#rsync -avz doc Desktop/welcome/
#ls Desktop/welcome/

===>doc is the folder we copied.

3.rsync command with --progress option

#rsync -avz --progress doc Desktop/welcome/
building file list ...
2 files to consider
69366 100% 11.63MB/s 0:00:00 (xfer#1, to-check=0/2)

sent 18509 bytes received 48 bytes 37114.00 bytes/sec
total size is 69366 speedup is 3.74

===> The --progress option gives you a progress meter of the data sent to the destination.

4.rsync command with --progress and --stats options

#rsync -avz --progress --stats doc Desktop/welcome/
building file list ...
2 files to consider
69366 100% 11.63MB/s 0:00:00 (xfer#1, to-check=0/2)

Number of files: 2
Number of files transferred: 1
Total file size: 69366 bytes
Total transferred file size: 69366 bytes
Literal data: 69366 bytes
Matched data: 0 bytes
File list size: 83
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 18509
Total bytes received: 48

sent 18509 bytes received 48 bytes 37114.00 bytes/sec
total size is 69366 speedup is 3.74

===> The --stats option gives status information about the file transfer.

5.rsync command with --delete option

#cd Desktop/welcome/
#cat > hi
#rsync -avz --delete doc Desktop/welcome/
building file list ... done
deleting doc/hi

sent 105 bytes received 26 bytes 262.00 bytes/sec
total size is 69366 speedup is 529.51

===> The --delete option is used to delete files on the destination if the source doesn't have them.

#cd Desktop/welcome/

6.rsync command with --exclude option.

#cd doc
#cat hello
this is hello file

#rsync -avz --exclude doc/hi doc Desktop/welcome/
building file list ... done

sent 105 bytes received 26 bytes 262.00 bytes/sec
total size is 69366 speedup is 529.51

#cd Desktop/welcome/

7.rsync command to transfer files from the source system to a destination system.

#rsync -avz --progress --stats --delete --rsh='ssh -Xp36985' doc rajm@

===> The above command is used to copy the doc folder from the source system to the destination system over ssh.

8.rsync command to transfer files from the destination system back to the source system.

#rsync -avz --progress --stats --rsh='ssh -Xp36985' rajm@ .

===> The above command is used to copy files from the remote system to the current directory (.) on the local system.

Sed command with examples

Sed stands for stream editor. A stream editor performs text transformations on an input stream; the input may be a file or input from a pipeline.

cat editing
hi
how are you
hope you are fine

. . .Sed command to print lines. . .

1.#sed '1p' editing
hi
hi
how are you
hope you are fine

==> 1p tells sed to print the first line in addition to showing the remaining lines, so we can see from the output that "hi" is printed twice.

2.#sed -n '1p' editing
hi

===> The -n option is nothing but "no print": it prints only the first line and suppresses the other lines.

3.#sed -n '2,4p' editing
how are you
hope you are fine

===> The above command is used to print only a certain range of lines (here, lines 2 to 4).

4.#sed -n '$p' editing
hope you are fine

===> The above command displays only the last line. The '$' symbol indicates the last line; '-n' (no print) combined with the 'p' option displays only that line.
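The print addresses above can also be combined in one invocation; for example, printing just the first and last lines of a file (the sample file below is made up for illustration):

```shell
# Build a small sample file (hypothetical content).
printf 'first line\nmiddle line\nlast line\n' > sample.txt

# -n suppresses automatic printing; the two -e expressions print
# only the first ('1p') and last ('$p') lines.
sed -n -e '1p' -e '$p' sample.txt
# → first line
#   last line
```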

. . .Sed Command to delete lines. . .

1. #sed '1d' editing
how are you
hope you are fine

===> The above command is used to delete only the first line and display the remaining lines.

2. #sed '2,4d' editing
hi

===> The above command is used to delete a range of lines (here, lines 2 to 4).

. . .Sed Command for search and replace. . .

1.#sed 's/hi/changed/' editing
changed
how are you
hope you are fine

==> The above command searches for hi, replaces it with changed, and prints all lines, including the unchanged ones.

2.#sed -n 's/hi/changed/p' editing
changed

===> The above command searches for hi and replaces it with changed, but with -n and the p flag it prints only the lines where a substitution was made.

3.#sed '1,2s/are/is/' editing
hi
how is you
hope you are fine

===> The above command is used to do search and replace only within a range of lines (here lines 1 to 2, so the 'are' on line 3 is left untouched).
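sed can also reuse the matched text inside the replacement. A small sketch using the '&' backreference (the sample file is hypothetical):

```shell
# Build a sample file (made-up content).
printf 'how are you\n' > greet.txt

# '&' in the replacement stands for whatever the pattern matched,
# so this wraps every match of 'are' in brackets.
sed 's/are/[&]/' greet.txt
# → how [are] you
```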

. . .How to delete empty lines using sed command. . .

#cat new_file

The above line is empty

1.#cat new_file | sed -e '/^$/d'
The above line is empty

2.#sed -e '/^$/d' new_file
The above line is empty

===> The above two commands produce the same result: they search for empty lines and delete them. '^' matches the start of a line and '$' the end, so '^$' matches a line with nothing between its start and end, i.e. an empty line, and 'd' deletes it.
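Note that a line containing only spaces or tabs looks empty but does not match '^$'. A variant that deletes those lines too (the sample file is made up):

```shell
# A file with a truly empty line and a spaces-only line (sample data).
printf 'one\n\n   \ntwo\n' > blank.txt

# [[:space:]]* matches any run of whitespace, so both kinds of
# "blank" lines are deleted.
sed '/^[[:space:]]*$/d' blank.txt
# → one
#   two
```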

. . .How to remove space from a word. . .

#cat remove
s p a c e.

#cat remove | sed 's/ //g'
space.
# sed 's/ //g' remove
space.

===> It is nothing but search and replace: the above commands search for a space and replace it with nothing, so 's p a c e.' becomes 'space.'.

. . .How to remove a lines permanently from a file. . .

#cat new_file

The above line is empty

#sed -i '2d' new_file

===> The above command deletes line number 2 permanently from the file new_file (-i edits the file in place).
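Because -i changes the file permanently, it is safer to keep a backup; giving -i a suffix makes sed save the original first (the file name here is a stand-in):

```shell
# Build a sample file (made-up content).
printf 'line1\nline2\nline3\n' > notes.txt

# -i.bak edits in place but first copies the original to notes.txt.bak
sed -i.bak '2d' notes.txt

cat notes.txt      # line2 is gone
cat notes.txt.bak  # the original is preserved
```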

. . .How to assign numbers for lines using sed command. . .
#cat editing
hi
how are you
hope you are fine

#sed '=' editing
1
hi
2
how are you
3
hope you are fine

===> In the above command the '=' symbol is used to assign a number to each line. It works much like the 'nl' command, except that sed prints each number on its own line.
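If you want the number and the text on the same line, as nl produces, the output can be piped through a second sed that joins each pair of lines (the sample file is made up):

```shell
# Build a sample file (made-up content).
printf 'how are you\nhope you are fine\n' > editing

# '=' emits the line number on a separate line; the second sed pulls
# in the next line with N and replaces the embedded newline with a tab.
sed '=' editing | sed 'N;s/\n/\t/'
# → 1	how are you
#   2	hope you are fine
```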

. . .How to use Word Boundaries using sed command. . .

Word Boundaries - \< \>

#cat first_text
i am rajkumar

#sed -e 's/raj/kumar/g' first_text
i am kumarkumar

===> Normally, if you use search and replace, it will replace any word that contains raj with kumar when 'g' is specified.

#sed -e 's/\<raj\>/kumar/g' first_text
i am rajkumar

===> If you use word boundaries, only the exact word is searched for and replaced, so "rajkumar" is left alone. (It is the same idea as the 'grep -w' command.)

. . .How to include files in sed command. . .

#cat commands

#vim lol

#sed -f lol commands
===> The -f option tells sed to read its editing commands from a file (here, the file lol) instead of from the command line.
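A self-contained sketch of the same idea, with made-up file names and commands (cmds.sed plays the role of lol above):

```shell
# Build a sed script file holding two commands (illustrative contents).
printf 's/foo/bar/\n2d\n' > cmds.sed

# Build a sample input file.
printf 'foo one\nfoo two\nfoo three\n' > input.txt

# -f runs every command in cmds.sed against the input:
# the substitution applies to each line, then line 2 is deleted.
sed -f cmds.sed input.txt
# → bar one
#   bar three
```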

Special Characters with uses

Ctl-A       Moves cursor to beginning of line of text (on the command-line).
Ctl-B       Backspace (nondestructive).
Ctl-C       Break. Terminate a foreground job.
Ctl-D       Log out from a shell (similar to exit).
Ctl-E       Moves cursor to end of line of text (on the command-line).
Ctl-F       Moves cursor forward one character position (on the command-line).
Ctl-G       BEL. On some old-time teletype terminals, this would actually ring a bell. In an xterm it might beep.
Ctl-H       Rubout (destructive backspace). Erases characters the cursor backs over while backspacing.
Ctl-I        Horizontal tab.
Ctl-J        Newline (line feed). In a script, may also be expressed in octal notation -- '\012' or in   hexadecimal -- '\x0a'.
Ctl-K       Vertical tab. When typing text on the console or in an xterm window, Ctl-K erases from the character under the cursor to the end of the line. Within a script, Ctl-K may behave differently.
Ctl-L       Formfeed (clear the terminal screen). In a terminal, this has the same effect as the  clear  command. When sent to a printer, a Ctl-L causes an advance to end of the paper sheet.
Ctl-N       Erases a line of text recalled from the history buffer (on the command-line).
Ctl-O       Issues a newline (on the command-line).
Ctl-P       Recalls last command from history buffer (on the command-line).
Ctl-Q       Resume (XON). This resumes stdin in a terminal.
Ctl-R       Backwards search for text in history buffer (on the command-line).
Ctl-S       Suspend (XOFF). This freezes stdin in a terminal. (Use Ctl-Q to restore input.)
Ctl-T       Reverses the position of the character the cursor is on with the previous character (on the command-line).
Ctl-U       Erase a line of input, from the cursor backward to the beginning of the line. In some settings, Ctl-U erases the entire line of input, regardless of cursor position.
Ctl-X       In certain word processing programs, Cuts highlighted text and copies to clipboard.
Ctl-Y       Pastes back text previously erased (with Ctl-U or Ctl-W).
Ctl-Z       Pauses a foreground job.

How to Encrypt a file in Linux

root@user:~# vim test.txt (write something here)

This is a Test file

=====Now see the Content of the file using cat command=====
root@user:~# cat test.txt
This is a Test file

=====Now we are going to Encrypt the file with gpg======
root@user:~# gpg -c test.txt

Enter Pass-phrase :

Repeat Pass-phrase :

====You can see one more file created.=====
root@user:~# ls -l test*

-rw-r--r-- 1 root root 59 2011-03-02 17:20 test.txt
-rw-r--r-- 1 root root 97 2011-03-02 17:23 test.txt.gpg

=====Lets try to see encrypt file with cat command=====
root@user:~# cat test.txt.gpg
(unreadable binary data)

=====Delete original File=====
root@user:~# rm test.txt
=====Now we are going to decrypt the encrypted file=====
root@user:~# gpg test.txt.gpg

Enter Pass-phrase :
=====See decrypted file content=====
root@user:~# cat test.txt
This is a Test file
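gpg asks for the pass-phrase interactively, which does not suit scripts. With GnuPG 2.1 or later, the prompt can be bypassed with --batch and --pinentry-mode loopback; a sketch, assuming a throwaway pass-phrase "secret" and a disposable test file:

```shell
echo "This is a Test file" > test.txt

# Encrypt non-interactively (GnuPG >= 2.1 syntax); creates test.txt.gpg
gpg --batch --yes --pinentry-mode loopback --passphrase "secret" -c test.txt

# Decrypt back to a new name without prompting.
gpg --batch --yes --pinentry-mode loopback --passphrase "secret" \
    -o decrypted.txt -d test.txt.gpg
cat decrypted.txt
```

Keep in mind the pass-phrase becomes visible in the process list and shell history with this approach, so it is only suitable for test data or when fed from a protected file.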

Adding User and their respective Password through Shell Script

. . .List of Users to be Added . . .
#cat list
ravi
deepak

. . .Script to automate User and their Password Adding . . .
#!/bin/bash
# Generate one 9-character random password per user (two users here).
cat /dev/urandom | tr -cd "a-zA-Z0-9" | fold -w 9 | head -n 2 > pass
cp pass check
u=`cat list`
for j in $u
do
        useradd $j
        echo "User $j Added"
        echo "=================================="
done
for i in $u
do
        echo "User Name is :$i"
        # Take the first password from the file, then delete that line
        # so the next user gets the next one.
        p=`head -n 1 pass`
        echo "$p" | passwd --stdin "$i"
        sed -i '1d' pass
        echo "User $i's password changed!"
        echo "=============================="
done

. . .Give permission and run the script. . .
#chmod +x
User ravi Added
User deepak Added
User Name is :ravi
Changing password for user ravi.
passwd: all authentication tokens updated successfully.
User ravi's password changed!
User Name is :deepak
Changing password for user deepak.
passwd: all authentication tokens updated successfully.
User deepak's password changed!

. . .The file check is used to verify the passwords. . .
#cat check

Some Scripting Tricks

. . .Uses of some Built-in Variables. . .

1. echo $LOGNAME and echo $USER -----> shows who the logged-in user is
2. echo $HOSTNAME  -----> shows the hostname of the Linux box
3. echo $PPID      -----> shows the process ID of the shell's parent process (if you kill it, the logged-in terminal will be closed)
4. echo $PWD       -----> shows the present working directory
5. echo $UID       -----> shows the user ID of the currently logged-in user
6. echo $MAIL      -----> shows the mail path
7. echo $HISTFILE  -----> shows the file which stores the history details
8. echo $HOME      -----> shows the user's HOME directory
9. echo $PATH      -----> a colon-separated list of directories in which the shell looks for commands
10.echo $BASH      -----> the full pathname used to execute the current instance of Bash
11.echo $HISTSIZE  -----> shows the history size
12. $?  -----> expands to the exit status of the most recently executed foreground pipeline
13. $$  -----> shows the process ID of the shell; if used inside a script, it shows the process ID of the script
14. $!  -----> expands to the process ID of the most recently executed background command (&)
15. $0  -----> expands to the name of the shell or shell script
16. $_  -----> shows the last argument used [like Esc-. or Alt-.]
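The status variables above (12 to 14) are easiest to see in a quick session; a small sketch:

```shell
# $? holds the exit status of the last command: 0 on success.
true;  echo $?   # → 0
false; echo $?   # → 1

# $$ is the PID of the current shell; $! is the PID of the most
# recent background job.
sleep 1 &
echo "shell: $$, background sleep: $!"
wait
```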

How to set a value for a variable in a single command?

#echo ${raj:=rajkumar}
rajkumar
#echo $raj
rajkumar

===> The := operator assigns the value rajkumar to raj if raj is unset or empty, and substitutes it in the same step.

How to assign the output of a command to a variable?

1. $(command)
2. `command`

#echo `date`
Sun May 1 13:12:31 IST 2011
#echo $(date)
Sun May 1 13:12:40 IST 2011
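Both forms can be assigned directly to variables; $( ) is generally preferred because it nests without escaping (the variable names here are illustrative):

```shell
# Assign command output to variables with both syntaxes.
kernel=`uname -s`
count=$(ls /etc | wc -l)

echo "Kernel: $kernel"
echo "/etc holds $count entries"

# $( ) nests cleanly, unlike backquotes.
echo "Today is $(date +%A), day $(date +%j) of the year"
```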


a=`du -sh /home/rajm | awk '{print $1}'`
echo "I am User $USER and I am running this script"
echo "The UserID of $USER is $UID"
echo "The name of the Script is $0"
echo "The processID of the script $0 is: $$"
echo "The Script is running from $PWD directory"
echo "Size of User $LOGNAME Home directory($HOME) is :$a"
echo "Status of Previously executed command is: $?"
echo "The script $0 contains $LINENO Lines"

I am User rajm and I am running this script
The UserID of rajm is 516
The name of the Script is
The processID of the script is: 5096
The Script is running from /home/rajm/script/blog directory
Size of User rajm Home directory(/home/rajm) is :696M
Status of Previously executed command is: 0
The script contains 10 Lines

How Linux Boots

When we install Linux we usually create several partitions. When allocating disk space for the partitions, the first sector (one sector = 512 bytes), or data unit, of each partition is always reserved for programmable code used in booting. The very first sector of the hard disk is reserved for booting purposes and is called the Master Boot Record (MBR).

Step:1 POST (Power On Self Test)

Step:2 BIOS (Basic Input Output System)

Step:3 MBR (Master Boot Record): the boot loader code in the MBR is executed. The MBR then determines which partitions on the disk have boot loader code specific to their operating systems in their boot sectors and attempts to boot one of them.

Step:4 Selects the particular boot partition (basically /boot) and then the boot loader. There are two types of boot loaders in Linux: LILO (LInux LOader, /etc/lilo.conf) and GRUB (GRand Unified Bootloader). LILO is not in use nowadays.

Step:5 Then the data in /boot/grub/grub.conf is read, which lists all the available operating systems and their booting parameters.

Step:6 When Linux begins to boot with its kernel, it first runs the /sbin/init program, which does some system checks.

Step:7 Then the /etc/inittab file is read; this tells init which runlevel should be used.
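For example, the default runlevel comes from the initdefault line in /etc/inittab; booting into runlevel 5 (graphical login) would look like this excerpt:

```
# /etc/inittab (excerpt)
id:5:initdefault:
```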

Step:8 Based on the selected runlevel, the init process then executes startup scripts located in subdirectories of /etc/rc.d/. If runlevel 5 is chosen, the scripts in /etc/rc.d/rc5.d are executed.

Step:9 cd /etc/rc.d/rc5.d. The files inside this directory start with one of two letters, "S" and "K". The scripts starting with "S" are executed when the system starts, and the scripts starting with "K" are executed when the system shuts down. The number that follows the S or K specifies the order in which the scripts are run, in ascending order.

Step:10 Finally the commands in /etc/rc.d/rc.local are executed, if you manually added any.