Sunday, August 14, 2011

How to Install CHKROOTKIT on a Linux Server

CHKROOTKIT
chkrootkit (Check Rootkit) is a common Unix-based program intended to help system administrators check their system for known rootkits. It is a shell script that uses common UNIX/Linux tools such as strings and grep.

Environments for chkrootkit:
chkrootkit is tested on: Linux 2.0.x, 2.2.x, 2.4.x and 2.6.x,
FreeBSD 2.2.x, 3.x, 4.x and 5.x, OpenBSD 2.x, 3.x and 4.x., NetBSD
1.6.x, Solaris 2.5.1, 2.6, 8.0 and 9.0, HP-UX 11, Tru64, BSDI and Mac
OS X.


1. Login to your server as root. (SSH)

2. Download chkrootkit.
Type: wget ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz

3. Unpack the chkrootkit you just downloaded.
Type: tar xvzf chkrootkit.tar.gz

4. Change to the new directory
Type: cd chkrootkit*

5. Compile chkrootkit
Type: make sense

6. Run chkrootkit
Type: ./chkrootkit

What chkrootkit does

1. It checks for signs of rootkits - chkrootkit, ifpromisc.c, chklastlog.c, chkwtmp.c, check_wtmpx.c, chkproc.c, chkdirs.c, strings.c, chkutmp.c; chkrootkit is the main module which controls all other modules.

2. chkrootkit checks system binaries for modification (e.g. find, grep, cron, crontab, echo, env, su, ifconfig, init, sendmail, ...).

3. Next, it looks for the default files and directories of many rootkits (sniffers' logs, HiDrootkit's default directory, tOrn's default files and directories, ...).

4. After that, it checks for network interfaces in promiscuous mode and for deletions in lastlog and wtmp, using the chklastlog and chkwtmp modules listed above.


If it says "Checking `bindshell'... INFECTED (PORTS: 465)"
this is normally a false positive rather than a real infection: a legitimate service (such as SMTPS) listening on port 465 will trigger it.

The following tests are made:

aliens asp bindshell lkm rexedcs sniffer wted w55808 scalper slapper z2 amd basename biff chfn chsh cron date du dirname echo egrep env find fingerd gpm grep hdparm su ifconfig inetd inetdconf init identd killall ldsopreload login ls lsof mail mingetty netstat named passwd pidof pop2 pop3 ps pstree rpcinfo rlogind rshd slogin sendmail sshd syslogd tar tcpd tcpdump top telnetd timed traceroute vdir w write.
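Once chkrootkit works when run by hand, it is worth scheduling it. A minimal cron sketch, assuming hypothetical paths (adjust the chkrootkit directory and the admin address to your system):

```shell
# /etc/cron.d/chkrootkit -- run nightly at 3 a.m. and mail the report
# (paths and address are examples, not from a real install)
0 3 * * * root (cd /root/chkrootkit && ./chkrootkit) 2>&1 | mail -s "chkrootkit report" admin@example.com
```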

ClamAV Installation and Usage on Linux

Clam AntiVirus (ClamAV) is a free, cross-platform antivirus toolkit able to detect many types of malicious software, including viruses. It is commonly said that there are no viruses on the Linux platform, which is largely true. But a mail attachment coming from a Windows machine may well carry a virus. That virus does not affect our Linux server, but it will affect the Windows users who use our websites.

Download ClamAV from http://sourceforge.net/projects/clamav/files/clamav/0.97/clamav-0.97.tar.gz/download

Extract
# tar zxvf clamav-0.97.tar.gz

# cd clamav-0.97

# ./configure

# make all

# make install

After installation you need to modify two configuration files to get ClamAV running and receiving definition updates.

1. vim /etc/clamd.conf
   Comment out the "Example" line (around line 8).
2. vim /etc/freshclam.conf
   Comment out the "Example" line (around line 8) here as well.
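The edit is the same in both files: the stock configs ship with a marker line that just reads "Example", which must be commented out before the daemons will start. A quick sketch with sed, demonstrated on a sample copy (run the same substitution against /etc/clamd.conf and /etc/freshclam.conf on a real install):

```shell
# Build a sample config to demonstrate on (the real file is /etc/clamd.conf)
printf 'Example\nLogFile /var/log/clamd.log\n' > /tmp/clamd.conf.sample

# Comment out the "Example" marker line -- same effect as editing it in vim
sed -i 's/^Example$/#Example/' /tmp/clamd.conf.sample

cat /tmp/clamd.conf.sample
```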


ClamAV installation in Cpanel


#Main >> cPanel >> Manage Plugins

#Name: clamavconnector
      Author: cPanel Inc.
      Tick the "Install and keep updated" box

    and finally click Save.

    This completes the installation from WHM.

You can also install it from the back end. Follow these steps:
    
#Open a terminal window

#Log in as root

#For 32 bit installations:
    cd /usr/local/cpanel/modules-install/clamavconnector-Linux-i686

#For 64 bit:
    cd /usr/local/cpanel/modules-install/clamavconnector-Linux-x86_64

#Run ./install (preferably inside a screen session)


Update your virus definitions:

freshclam

Check files in the current directory:

clamscan

Check files recursively under the entire /home directory:

clamscan -r /home

check files on the entire drive (displaying everything):

clamscan -r /

Check files on the entire drive, scanning mailboxes as well, but display only infected files and ring a bell when one is found:

clamscan -r --bell --mbox -i /

Scan a mail directory, remove infected files, and mail a report:

clamscan --remove -r --bell -i /home/example/mail/ |  mail -s 'clam' 123@example.com

Example output from a scan:

/home/example/mail/new/1301578754.H708604P328.server.test.com,S=42794: Trojan.Spy.Zbot-464 FOUND
/home/example/mail/new/1301578754.H708604P328.server.test.com,S=42794: Removed.
/home/example/mail/new/1301455585.H960996P15497.server.test.com,S=10619: Trojan.Downloader.Agent-1452 FOUND
/home/example/mail/new/1301455585.H960996P15497.server.test.com,S=10619: Removed.

File Fragmentation checking on Linux

To find fragmentation information for a specific file, we can use the filefrag command.
filefrag reports how badly fragmented a particular file is. It makes allowances for indirect blocks on ext2 and ext3 filesystems, but can be used on files from any filesystem.

syntax:

filefrag -v (filename)

filefrag -v /home/example/example.txt

-v   => verbose when checking for file fragmentation

Output (for example):

Checking example.txt
Filesystem type is: ef53
Filesystem cylinder groups is approximately 606
Blocksize of file example.txt is 4096
File size of example.txt is 1194 (1 blocks)
First block: 7006588
Last block: 7006588
example.txt: 1 extent found

To Clear Linux Memory Cache

To free pagecache:

# sync; echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

# sync; echo 2 > /proc/sys/vm/drop_caches

To free pagecache, dentries and inodes:

# sync; echo 3 > /proc/sys/vm/drop_caches

==> sync - flush file system buffers
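To see how much memory the page cache and buffers are actually holding before and after dropping them, you can read /proc/meminfo (writing to drop_caches requires root, but reading meminfo does not):

```shell
# Show current buffer and page-cache usage from /proc/meminfo
grep -E '^(Buffers|Cached)' /proc/meminfo
```

Run it once before and once after the echo into drop_caches to see the effect.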

Linux Ext2,Ext3,Ext4 File systems

A Linux file system is a collection of files and directories. Each file system is stored in its own disk partition. Here we look at the main file system types.

The ext2 (second extended file system) is a file system for the Linux kernel. It was initially designed by Remy Card as a replacement for the extended file system (ext) and was introduced with the 1.0 kernel in 1993. Ext2 is flexible, can handle file systems up to 4 TB, and supports file names up to 255 characters. Its sparse superblocks feature increases file system performance. In case user processes fill up a file system, ext2 normally reserves about 5% of the disk blocks for exclusive use by root, so that root can easily recover from that situation.

The ext3 (third extended file system) is a journaled file system commonly used by the Linux kernel, and the default file system for many popular Linux distributions. Stephen Tweedie developed ext3. It provides all the features of ext2, plus journaling and backward compatibility with ext2. The backward compatibility lets you still run kernels that are only ext2-aware with ext3 partitions, use all of the ext2 tuning, repair and recovery tools with ext3, and upgrade an ext2 file system to ext3 without losing any of your data.
Ext3's journaling also speeds up recovery. With ext2, when a file system is uncleanly unmounted the whole file system must be checked, which takes a long time on large file systems. An ext3 system keeps a record of uncommitted file transactions and applies only those transactions when the system is brought back up, so a complete file system check is not required and the system comes back up much faster.

The ext4 (fourth extended file system) is a journaling file system for Linux, included since the 2.6.28 kernel. Ext4 is the evolution of Ext3, the most used Linux file system. In many ways, Ext4 is a deeper improvement over Ext3 than Ext3 was over Ext2: Ext3 was mostly about adding journaling to Ext2, while Ext4 modifies important on-disk data structures, such as the ones used to store file data. The result is a file system with an improved design, better performance, reliability and features. It was developed by Mingming Cao, Andreas Dilger, Alex Zhuravlev, Dave Kleikamp, Theodore Ts'o, Eric Sandeen, Sam Naghshineh and others.

Runtime linux file change monitor

The following command monitors file changes in real time, much as the top command monitors processes and CPU usage.


watch -d -n 2 'df -Th; ls -FlAt'

-d => (differences) highlight changes between iterations.
-n => (interval) run the command every N seconds.
df => report file system disk space usage.
-T => print the file system type.
-h => print sizes in human-readable format.
ls => list directory contents.
-F => (classify) append an indicator (one of */=>@|) to entries.
-l => use a long listing format.
-A => list almost all entries (do not list the implied . and ..).
-t => sort by modification time.

It continuously shows which files are being written on the file system, highlights files as they are modified, and lets you see exactly what the users you have granted SSH access are changing.

Drop all ping packets

You can set a kernel variable to drop all ping packets.

# echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all

This instructs the kernel to simply ignore all ping requests (ICMP echo requests, type 8 messages).

To re-enable ping requests, type the command:

# echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all

[or]

You can make the change persistent by adding the following line to the /etc/sysctl.conf file:

net.ipv4.icmp_echo_ignore_all = 1

Save and close the file.
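You can verify the current setting at any time by reading the same /proc entry (0 means ping replies are enabled, 1 means pings are ignored); reading does not require root:

```shell
# Read the current value of the kernel variable
cat /proc/sys/net/ipv4/icmp_echo_ignore_all
```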

OSI Layers


Open Systems Interconnection (OSI), created by the ISO (International Organization for Standardization) in the late 1970s.

* The OSI model helps network devices from two different vendors communicate using a common set of rules.
* Those rules as a whole are very difficult for vendors to work with, so they are separated into seven groups, each group called a layer. This method is called the layered approach model.
* The seven layers are: Application, Presentation, Session, Transport, Network, Data Link, Physical.
* The top three layers (Application, Presentation and Session) manage the application process (host layers), and the bottom four layers (Transport, Network, Data Link, Physical) manage the communication process (media layers).



Application Layer:

* The Application layer provides the interface between the user and a particular application.
* The Application layer is responsible for establishing the connection and identifying the intended communication partner (user).
* The Application layer selects the actual application (by port number) and hands the data to the next layer down the protocol stack.
* The Application layer serves network-related applications as well as desktop applications.


Example:
                   * Suppose I remove the NIC card, TCP/IP settings and network drivers from my system, then open Internet Explorer and request an HTML web page. The browser uses the HTTP protocol, creates an Application layer PDU (Protocol Data Unit) and passes it down via the HTTP port (port 80). The Presentation and Session layer PDUs are added, and the data finally reaches the Network layer, which must fill in the source and destination IP addresses. Because the system has no NIC card, encapsulation fails, the data is lost, and the user gets an error message. The point is that the user still connected to the application and received a reply, even though the lower layers failed.

Presentation Layer:

* The Presentation layer is responsible for presenting data: translation and code formatting for the Application layer entities (Application layer PDUs).
* The Presentation layer is also responsible for data compression/decompression and data encryption/decryption.
* The Presentation layer converts data to a general format such as ASCII: the sender's data is translated to a common format, then translated back to the native format on the destination system.

Session Layer: 

* The Session layer is responsible for setting up (connecting), managing and tearing down (disconnecting) sessions between Presentation layer entities.
* The Session layer provides dialog control (end-to-end connection control) between two devices or end nodes, and organizes communication between server and client, or between two nodes, using different modes:
Simplex - one-way communication (the sender transmits and the receiver receives, but the receiver cannot reply to the sender).
Half duplex - the sender sends data to the receiver, and only after the data is received can the receiver reply (sender and receiver cannot transmit at the same time).
Full duplex - sender and receiver can communicate at the same time.
* The Session layer's other main job is to keep the data of different applications separate and deliver it to exactly the right application on the destination.

Transport Layer: 

* The Transport layer is responsible for segmentation: it breaks large data from the Session layer into small segments, and on the receiving side reassembles the segments into data and passes it back up to the Session layer.
* The Transport layer provides end-to-end transport services and creates a logical connection between the sending host and the receiving host.
* The Transport layer works with the TCP and UDP protocols.
 
TCP (Transmission Control Protocol)

rsync command with examples

rsync: Remote Sync.

Description: The rsync utility synchronizes files and directories from one location to another in an efficient way. The backup location can be on the local server or on a remote server.

Features:
(*). One of the main features of rsync is that it transfers only the changed blocks to the destination, instead of sending the whole file.

Syntax:
rsync options source destination (or)
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST (or)
rsync [OPTION]... [USER@]HOST:SRC [DEST]

-a - Archive mode; same as -rptgolD [enough to copy an entire folder of files without changing file permissions]
-r - Recursive [copy directories recursively]
-p, --perms preserve(Unalter) permissions
-t, --times preserve times
-o, --owner preserve owner (super-user only)
-g, --group preserve group
-l, --links copy symlinks as symlinks
-D same as --devices --specials
--devices preserve device files (super-user only)
--specials preserve special files

-z, --compress - Compress file data during the transfer.

-v - verbose; explain what is being done

-u - Do not overwrite modified files at the destination [if a file at the destination has been modified and you use the -u option, the modified file won't be overwritten].

--rsh=COMMAND -- specify the remote shell to use, e.g. --rsh='ssh -p1055' to log in to the remote system on port 1055.

--progress - View the rsync Progress during Transfer

--stats - give some file-transfer stats

--delete - This tells rsync to delete extraneous files from the receiving side (ones that aren’t on the sending side), but only for the directories that are being synchronized.

--exclude - This option is used to exclude certain files from the transfer

--max-size=SIZE - This tells rsync to avoid transferring any file that is larger than the specified SIZE

--min-size=SIZE - This tells rsync to avoid transferring any file that is smaller than the specified SIZE


Examples:

1.rsync Command to copy only Files from a folder

#rsync -avz doc/ Desktop/welcome/
#ls Desktop/welcome/
taglist.txt

===>taglist.txt is the file present in the folder doc

2.rsync command to copy entire folder to the destination

#rsync -avz doc Desktop/welcome/
#ls Desktop/welcome/
doc

===>doc is the folder we copied.

3.rsync command with --progress option

#rsync -avz --progress doc Desktop/welcome/
building file list ...
2 files to consider
doc/
doc/taglist.txt
69366 100% 11.63MB/s 0:00:00 (xfer#1, to-check=0/2)

sent 18509 bytes received 48 bytes 37114.00 bytes/sec
total size is 69366 speedup is 3.74

===> --progress option gives you a progress meter of data send to the destination.

4.rsync command with --progress and --stats options

#rsync -avz --progress --stats doc Desktop/welcome/
building file list ...
2 files to consider
doc/
doc/taglist.txt
69366 100% 11.63MB/s 0:00:00 (xfer#1, to-check=0/2)

Number of files: 2
Number of files transferred: 1
Total file size: 69366 bytes
Total transferred file size: 69366 bytes
Literal data: 69366 bytes
Matched data: 0 bytes
File list size: 83
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 18509
Total bytes received: 48

sent 18509 bytes received 48 bytes 37114.00 bytes/sec
total size is 69366 speedup is 3.74

===> The --stats option gives you status information about the transferred files.

5.rsync command with --delete option

#cd Desktop/welcome/
#cat > hi
#hello
#rsync -avz --delete doc Desktop/welcome/
building file list ... done
deleting doc/hi
doc/

sent 105 bytes received 26 bytes 262.00 bytes/sec
total size is 69366 speedup is 529.51

===> The --delete option is used to delete files at the destination if the source doesn't have them.

#cd Desktop/welcome/
#ls
taglist.txt

6.rsync command with --exclude option.

#cd doc
#cat hello
this is hello file

#rsync -avz --exclude 'hello' doc Desktop/welcome/
building file list ... done
doc/

sent 105 bytes received 26 bytes 262.00 bytes/sec
total size is 69366 speedup is 529.51

#cd Desktop/welcome/
#ls
taglist.txt

7.rsync command to transfer file from Source system to Destination system.

#rsync -avz --progress --stats --delete --rsh='ssh -Xp36985' doc rajm@192.168.1.5:Desktop/

===> The above command copies the doc folder from the source system to the Desktop of the destination system.

8.rsync command to transfer file from destination system to source system.


#rsync -avz --progress --stats --rsh='ssh -Xp36985' rajm@192.168.1.5:Desktop/doc .

===> The above command copies the doc folder from the remote system's Desktop to the current directory on the local system.

Sed command with examples

Sed stands for stream editor. A stream editor performs text transformations on an input stream; the input may be a file or input from a pipeline.

cat editing
hi
hello
welcome
how are you
hope you are fine

. . .Sed command to print lines. . .

1.#sed '1p' editing
hi
hi
hello
welcome
how are you
hope you are fine

==> '1p' tells sed to print the first line in addition to showing every line as usual, so in the output 'hi' is printed twice.

2.#sed -n '1p' editing
hi

===> The -n option means no print: combined with 'p' it prints only the first line and suppresses the others.

3.#sed -n '2,4p' editing
hello
welcome
how are you

===> The above command is used to print only a certain range of lines.

4.#sed -n '$p' editing
hope you are fine

===> The above command displays only the last line. The '$' symbol indicates the last line, and '-n' (no print) combined with the 'p' option displays only that line.

. . .Sed Command to delete lines. . .

1. #sed '1d' editing
hello
welcome
how are you
hope you are fine

===> The above command deletes only the first line and displays the remaining lines.

2. #sed '2,4d' editing
hi

===> The above command is used to delete a range of lines.

. . .Sed Command for search and replace. . .

1.#sed 's/hi/changed/' editing
changed
hello
welcome
how are you
hope you are fine

==> The above command searches for 'hi', replaces it with 'changed' and prints all the lines.

2.#sed -n 's/hi/changes/p' editing
changes

===> The above command searches for 'hi', replaces it with 'changes', and with '-n ... p' prints only the line where the substitution happened.

3.#sed '1,4s/are/is/' editing
hi
hello
welcome
how is you
hope you are fine

===> The above command used to do search and replace only for a range of lines.

. . .How to delete empty lines using sed command. . .

#cat new_file
hi

The above line is empty

1.#cat new_file | sed -e '/^$/d'
hi
The above line is empty

2.#sed -e '/^$/d' new_file
hi
The above line is empty

===> The above two commands produce the same result: they search for empty lines and delete them. '^' matches the start of a line and '$' the end, so '^$' matches a line that is empty from start to end.

. . .How to remove space from a word. . .

#cat remove
s p a c e.

#cat remove | sed 's/ //g'
space
# sed 's/ //g' remove
space

===> This is nothing but search and replace: the command searches for spaces and replaces them with nothing, so the spaced-out word becomes a single word.

. . .How to delete a line permanently from a file. . .

#cat new_file
hi

The above line is empty

#sed -i '2d' new_file

===> This above command will delete the line number 2 permanently from the file new_file.
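If you are nervous about editing in place, GNU sed can keep a backup: giving -i a suffix writes the original file to a backup copy before modifying it. A small sketch on a temporary copy of the sample file:

```shell
# Recreate the sample file in /tmp
printf 'hi\n\nThe above line is empty\n' > /tmp/new_file

# Delete line 2 in place, keeping the original as /tmp/new_file.bak
sed -i.bak '2d' /tmp/new_file

cat /tmp/new_file       # 2 lines left
cat /tmp/new_file.bak   # original 3 lines
```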

. . .How to assign numbers for lines using sed command. . .
#cat editing
hi
hello
welcome

#sed '=' editing
1
hi
2
hello
3
welcome


===> In the above command the '=' symbol prints a line number before each line, similar to the 'nl' command (though sed puts the number on its own line).

. . .How to use Word Boundaries using sed command. . .

Word Boundaries - \<\>

#cat first_text
raj
rajkumar
i am rajkumar

#sed -e 's/raj/kumar/g' first_text

kumar
kumarkumar
i am kumarkumar

===> Normally, if you use plain search and replace, any word which contains raj is replaced with kumar (every occurrence, if 'g' is specified).
#sed -e 's/\<raj\>/kumar/g' first_text
kumar
rajkumar
i am rajkumar

===> If you use word boundaries, only the exact word is searched for and replaced (the same idea as the 'grep -w' command).

. . .How to include files in sed command. . .

#cat commands
raj
linux

#vim lol
{
s/raj/kumar/g
s/linux/linuzzzzzzzz/g
}

#sed -f lol commands
kumar
linuzzzzzzzz
===> The '-f' option is used to read sed commands from a file.

Special Characters and Their Uses

Ctl-A       Moves cursor to beginning of line of text (on the command-line).
Ctl-B       Backspace (nondestructive).
Ctl-C       Break. Terminate a foreground job.
Ctl-D       Log out from a shell (similar to exit).
Ctl-E       Moves cursor to end of line of text (on the command-line).
Ctl-F       Moves cursor forward one character position (on the command-line).
Ctl-G       BEL. On some old-time teletype terminals, this would actually ring a bell. In an xterm it might beep.
Ctl-H       Rubout (destructive backspace). Erases characters the cursor backs over while  backspacing.
Ctl-I        Horizontal tab.
Ctl-J        Newline (line feed). In a script, may also be expressed in octal notation -- '\012' or in   hexadecimal -- '\x0a'.
Ctl-K       Vertical tab. When typing text on the console or in an xterm window, Ctl-K erases from the character under the cursor to the end of the line. Within a script, Ctl-K may behave differently.
Ctl-L       Formfeed (clear the terminal screen). In a terminal, this has the same effect as the  clear  command. When sent to a printer, a Ctl-L causes an advance to end of the paper sheet.
Ctl-N       Erases a line of text recalled from the history buffer (on the command-line).
Ctl-O       Issues a newline (on the command-line).
Ctl-P       Recalls last command from history buffer (on the command-line).
Ctl-Q       Resume (XON). This resumes stdin in a terminal.
Ctl-R       Backwards search for text in history buffer (on the command-line).
Ctl-S       Suspend (XOFF). This freezes stdin in a terminal. (Use Ctl-Q to restore input.)
Ctl-T       Reverses the position of the character the cursor is on with the previous character (on the command-line).
Ctl-U       Erase a line of input, from the cursor backward to the beginning of the line. In some settings, Ctl-U erases the entire line of input, regardless of cursor position.
Ctl-X       In certain word processing programs, Cuts highlighted text and copies to clipboard.
Ctl-Y       Pastes back text previously erased (with Ctl-U or Ctl-W).
Ctl-Z       Pauses a foreground job.

How to Encrypt a file in Linux

root@user:~# vim test.txt (write something here)

This is a Test file

:wq
=====Now see the Content of the file using cat command=====
root@user:~# cat test.txt
This is a Test file

=====Now we are going to Encrypt the file with gpg======
root@user:~# gpg -c test.txt

Enter Pass-phrase :

Repeat Pass-phrase :

=====You can see that one more file has been created.=====
root@user:~# ls -l test*

-rw-r--r-- 1 root root 59 2011-03-02 17:20 test.txt
-rw-r--r-- 1 root root 97 2011-03-02 17:23 test.txt.gpg

=====Let's try to view the encrypted file with the cat command=====
root@user:~# cat test.txt.gpg

i+`P$@CoEkW%>o
8*zbB`EA9{7
IW

=====Delete original File=====
root@user:~# rm test.txt
=====Now we are going to decrypt the encrypted file=====
root@user:~# gpg test.txt.gpg

Enter Pass-phrase :
=====See decrypted file content=====
root@user:~# cat test.txt
This is a Test file
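For scripts, gpg can take the passphrase non-interactively with --batch and --passphrase (with GnuPG 2.1+ this also needs --pinentry-mode loopback). Note that this exposes the passphrase to anyone who can read the script or the process list, so treat the sketch below as a throwaway example only:

```shell
# Create a file, encrypt it without a prompt, then decrypt it back
echo "This is a Test file" > /tmp/test.txt
gpg --batch --yes --pinentry-mode loopback --passphrase secret123 -c /tmp/test.txt
rm /tmp/test.txt
gpg --batch --yes --pinentry-mode loopback --passphrase secret123 -o /tmp/test.txt -d /tmp/test.txt.gpg
cat /tmp/test.txt
```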

Adding Users and Their Passwords through a Shell Script

. . .List of Users to be Added . . .
#cat list
ravi
deepak

. . .Script to automate User and their Password Adding . . .
#vim auto.sh
#!/bin/bash
# generate one random 9-character password per listed user
tr -cd 'a-zA-Z0-9' < /dev/urandom | fold -w 9 | head -n 2 > pass
cp pass check    # keep a copy of the passwords for reference
u=`cat list`
for j in $u
do
useradd $j
echo "User $j Added"
echo "=================================="
done
for i in $u
do
echo "User Name is :$i"
# take the first password from the file, then remove it
p=`head -n 1 pass`
echo "$p" | passwd --stdin "$i"
sed -i '1d' pass
echo "User $i's password changed!"
echo "=============================="
done


. . .Give permission and run the script. . .
#chmod +x auto.sh
sh auto.sh
User ravi Added
==================================
User deepak Added
==================================
User Name is :ravi
Changing password for user ravi.
passwd: all authentication tokens updated successfully.
User ravi's password changed!
==============================
User Name is :deepak
Changing password for user deepak.
passwd: all authentication tokens updated successfully.
User deepak's password changed!
==============================

. . .The file check keeps a copy of the generated passwords for reference. . .
#cat check
5
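The password-generation step of the script can be sketched on its own. The snippet below produces one random 9-character password per user in a sample list and prints user:password pairs; as root, that output could be piped straight to chpasswd instead of looping over passwd --stdin. File names are hypothetical, taken from the script above:

```shell
# Sample user list (the script reads this from a file called "list")
printf 'ravi\ndeepak\n' > /tmp/list

# One random 9-character alphanumeric password per user, as user:password pairs
while read -r u; do
    p=$(tr -cd 'a-zA-Z0-9' < /dev/urandom | fold -w 9 | head -n 1)
    echo "$u:$p"
done < /tmp/list
# As root, the output could be piped to chpasswd:  ... | chpasswd
```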

Some Scripting Tricks

. . .Uses of Some Built-in Variables. . .

1. echo $LOGNAME and echo $USER -----> shows who the logged-in user is
2. echo $HOSTNAME               -----> shows the hostname of the Linux box
3. echo $PPID  -----> shows the process ID of the shell's parent process (killing it will close your login session)
4. echo $PWD            -----> shows the present working directory
5. echo $UID              -----> shows the user ID of the currently logged-in user
6. echo $MAIL          -----> shows the mail path
7. echo $HISTFILE   -----> shows the file which stores the history details
8. echo $HOME        -----> shows the user's HOME directory
9. echo $PATH         -----> a colon-separated list of directories in which the shell looks for commands
10.echo $BASH        -----> the full pathname used to execute the current instance of Bash
11.echo $HISTSIZE  -----> shows the history size
12. $?   -----> expands to the exit status of the most recently executed foreground pipeline
13. $$   -----> expands to the process ID of the shell; used inside a script, it gives the process ID of the script
14. $!   -----> expands to the process ID of the most recently executed background command (&)
15. $0  -----> expands to the name of the shell or shell script
16. $_  -----> expands to the last argument of the previous command [like Esc-. or Alt-.]

How to set a value for a variable in a single command?

#echo ${raj:=rajkumar}
#echo $raj
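${var:=value} assigns the value only when the variable is unset or empty; the related ${var:-value} substitutes a default without assigning anything. A quick sketch:

```shell
unset raj
echo "${raj:=rajkumar}"   # assigns and prints rajkumar
echo "$raj"               # rajkumar -- the assignment stuck

unset name
echo "${name:-guest}"     # prints guest
echo "x${name}x"          # xx -- name is still unset
```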

How to assign the output of a command to a variable?
Syntax:

1. $(command)
2. `command`

#echo `date`
Sun May 1 13:12:31 IST 2011
#echo $(date)
Sun May 1 13:12:40 IST 2011

#cat first.sh

#!/bin/bash
a=`du -sh /home/rajm | awk '{print $1}'`
echo "I am User $USER and I am running this script"
echo "The UserID of $USER is $UID"
echo "The name of the Script is $0"
echo "The processID of the script $0 is: $$"
echo "The Script is running from $PWD directory"
echo "Size of User $LOGNAME Home directory($HOME) is :$a"
echo "Status of Previously executed command is: $?"
echo "The script $0 contains $LINENO Lines"

#sh first.sh
I am User rajm and I am running this script
The UserID of rajm is 516
The name of the Script is first.sh
The processID of the script first.sh is: 5096
The Script is running from /home/rajm/script/blog directory
Size of User rajm Home directory(/home/rajm) is :696M
Status of Previously executed command is: 0
The script first.sh contains 10 Lines

How Linux Boots

When we install Linux we usually create several partitions. When disk space is allocated for the partitions, the first sector (one sector = 512 bytes), or data unit, of each partition is always reserved for programmable code used in booting. The very first sector of the hard disk is reserved for booting purposes and is called the Master Boot Record (MBR).

Step:1 POST (Power On Self Test)

Step:2 BIOS (Basic Input Output System)

Step:3 MBR (Master Boot Record). The boot loader code in the MBR is executed. The MBR then determines which partitions on the disk have operating-system-specific boot loader code in their boot sectors, and attempts to boot one of them.

Step:4 Selects the boot partition (basically /boot), then the boot loader. There are two common boot loaders on Linux: LILO (LInux LOader, /etc/lilo.conf) and GRUB (GRand Unified Boot loader). LILO is rarely used nowadays.

Step:5 Then the data in /boot/grub/grub.conf is read; it lists all the available operating systems and their boot parameters.

Step:6 When Linux begins to boot with its kernel, it first runs the /sbin/init program, which does some system checks.

Step:7 Then the /etc/inittab file is read; it tells init which runlevel should be used.

Step:8 Based on the selected runlevel, the init process then executes the start-up scripts located in subdirectories of /etc/rc.d/. If runlevel 5 is chosen, the scripts in /etc/rc.d/rc5.d are executed.

Step:9 cd /etc/rc.d/rc5.d. The file names in this directory start with either "S" or "K". Scripts starting with "S" are executed when the system starts, and scripts starting with "K" are executed when the system shuts down. The number that follows the S or K specifies the position in which the scripts are run, in ascending order.

Step:10 Finally, the commands in /etc/rc.d/rc.local are executed, if you manually added any.

Friday, August 12, 2011

Managing links in Linux

Hard and symbolic links are the two types of files in a Unix operating system that point to another file.
Hard links: a hard link is a pointer that is exactly the same as the file it points to, even if it has a different name; any modification made through the link is also made to the target file. Hard links can only point to files, not directories (though directories are a special kind of file). The other main difference from symbolic links is that hard links MUST reside on the same filesystem as the file they point to, because they share the same inode number.
Symbolic links: a symbolic link is a pointer to another file that contains the name of the file it points to. It can span filesystems (it has its own inode number), and it can point to files or directories.
Creating Hard and Symbolic Links:
ln [-fs] TARGET [LINK_NAME]

By default (without arguments) ln creates hard links.
-f, --force : Remove existing destination files.
-s, --symbolic : Make symbolic links instead of hard links.
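A short demonstration of the difference (file names invented for the example): the hard link shares the target's inode number, while the symlink gets its own:

```shell
cd /tmp
echo "data" > target.txt
ln -f target.txt hard.txt        # hard link (the default)
ln -sf target.txt soft.txt       # symbolic link

ls -li target.txt hard.txt soft.txt   # target.txt and hard.txt show the same inode

# Modifying through the hard link modifies the target too
echo "more" >> hard.txt
cat target.txt
```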

Analysing logs in Linux

Part of the security and sysadmin tasks is log analysis and decision making. There is plenty of information at http://www.linux.org/apps/all/Administration/Log_Analyzers.html.

The tool I recommend is called "Lire". It permits the creation of several reporting formats, including HTML, PDF and XML, among others. It can also analyze many log file formats, including MySQL, Iptables, BIND, Apache, Qmail, Postfix, Syslog and more. Lire is GPL'ed Free Software (and Open Source), built around the idea of extensibility.

This tool is available from http://www.logreport.org/lire. It has been developed in Perl, and I recommend installing all the dependency modules with CPAN (type "perl -MCPAN -e shell" on the command line as root).

Introduction to Iptables usage in Linux

I am going to explain the generic matches, the ones that apply to all IP packets. In general, the source match looks like "-s (--src, --source) <address>".

For example:
iptables -A INPUT -s 10.10.10.5 -j DROP

The IP address could also be a hostname; in that case it is resolved to an IP address before being added to the chain. The address field can also be a range of addresses specified with a netmask. This instruction is applied in the INPUT chain, but it could also be used in the OUTPUT chain if the machine has more than one IP address.

iptables -A INPUT -s 10.10.10.0/24 -j DROP

This instruction matches the first 24 bits of the address; that is, it matches addresses between 10.10.10.0 and 10.10.10.255.

iptables -A INPUT -s ! 10.10.10.5 -j DROP

The exclamation mark negates the IP address: this matches packets whose source IP is not 10.10.10.5.

The "-d (--dst, --destination) [address]" option matches the destination address of the packet and is generally used in the OUTPUT chain. The same rules as for the -s option apply: the address can be a hostname, a single IP address or a range, and it can be negated. For example:

iptables -A OUTPUT -d ! 10.10.10.4 -j REJECT

The "-i (--in-interface) [interface]" option specifies on which network interface the rule takes effect, for example "eth0". This option can be used in the FORWARD, INPUT and PREROUTING chains. The interface name also accepts a trailing "+" as a wildcard; for example, if you want to filter all the traffic from a private address such as 10.10.10.2 on all Ethernet interfaces:

iptables -A INPUT -s 10.10.10.2 -i eth+ -j DROP

This would drop all packets arriving on eth0, eth1, eth2, etc.

Finally, the "-p" option lets you work with a specific protocol. For example, if you want to drop all UDP packets:

iptables -A INPUT -p udp -j DROP

The protocols that can be used are:
TCP, UDP, ICMP, ALL (this is for all the protocols).

So far I have explained general packet matching with iptables; now I am going to cover TCP packet matching. The options shown below all match specific values in the TCP packet headers, for example the source and destination ports, TCP options, and TCP flags (SYN, FIN and so on).

You use the --protocol argument ("-p tcp") to match TCP packets. Optionally, the source port of the packet can be specified with "--sport (--source-port) [port]". The source port can be a numeric value or a port name, which must match an entry in the /etc/services file.

Here are two examples:
iptables -A INPUT -p tcp --sport 23 -j REJECT
iptables -A INPUT -p tcp --sport telnet -j REJECT

These rules do the same thing: they reject inbound traffic from TCP port 23 of the remote host. Using port names instead of numbers costs a little extra CPU time for the lookup, which can amount to a speed penalty in large rulesets.

It is also possible to specify a range of ports in a rule, lower and upper port separated by a colon. Here I filter all the ports between 10 and 999:
iptables -A OUTPUT -p tcp --sport 10:999 -j REJECT

All TCP source ports except 80 are accepted with this rule:
iptables -A INPUT -p tcp --sport ! 80 -j ACCEPT

The same can be done with a range of ports:
iptables -A INPUT -p tcp --sport ! 1024:40000 -j LOG

If the first port is numerically higher than the second, iptables swaps both numbers around automatically.

To specify a TCP destination port, "--dport (--destination-port) [port]" is used; the same rules as for TCP source port matching apply.

For example, to stop users in your private network from connecting to IRC, assuming IRC uses ports 6667 to 6670, you may want to add this rule:
iptables -A OUTPUT -p tcp --dport 6667:6670 -j REJECT

To match specific flags in the TCP header, you use "--tcp-flags [mask] [flags]".

The [mask] argument is a comma-separated list of the flags to examine.
The [flags] argument lists the flags that must be set; any flag that appears in [mask] but not in [flags] must be unset.

The possible flags are: SYN, ACK, FIN, RST, PSH and URG. ALL and NONE are also possible.

In this example, the SYN, ACK and FIN flags form the mask, and SYN is the flag that has to be set:
iptables -A FORWARD -p tcp --tcp-flags SYN,ACK,FIN SYN -j ACCEPT

The match can also be inverted: this rule accepts any packet except those with SYN set and ACK and FIN unset:
iptables -A FORWARD -p tcp --tcp-flags ! SYN,ACK,FIN SYN -j ACCEPT

Matching the start of a connection:
This is accomplished with the "--syn" option. It is useful because a packet with the SYN flag set (and ACK and RST cleared) is the first step of the TCP connection setup, also known as the "3-way handshake":

iptables -A FORWARD -p tcp --syn -j ACCEPT
iptables -A FORWARD -p tcp ! --syn -j ACCEPT



Now I am going to explain some iptables features related to UDP. UDP (User Datagram Protocol) has the characteristic of being connectionless.

Unlike TCP, the UDP header has no flags; apart from a length and a checksum, it carries only the source and destination ports.

In iptables, UDP is specified with the "-p udp" argument. Rules similar to TCP matching apply: negation and port ranges are allowed:

--sport (--source-port) [port]
--dport (--destination-port) [port]

This rule matches any UDP packet with a source port of 161 (SNMP):
iptables -A INPUT -p udp --sport 161 -j ACCEPT

This rule logs all packets with a destination port in the range 161 to 180:
iptables -A INPUT -p udp --dport 161:180 -j LOG

To learn more about UDP, see RFC 768.

10 most important Unix Security issues

1. Web server. One of the first places an intruder will look is for vulnerabilities in your Apache version and in your CGI scripts.

2. Remote Procedure Calls. RPC services should be disabled if they are not required; they allow a remote user to execute instructions on your computer, and intruders usually gain root access this way.
3. SNMP (Simple Network Management Protocol). This protocol is known to have had vulnerabilities; its community strings (passwords) can be easily cracked, and even more easily captured from the network.
4. SSH (Secure Shell). SSH has been exploited before. If you do not need it, turn it off, or filter the source IP addresses with TCP Wrappers.

5. Remote services (trusted hosts). This setup trusted other machines based on their IP address and granted access without asking for a password. The binaries are "rsh", "rcp", "rlogin" and "rexec". They still exist and can be used today, and an attacker can have a field day with your machine using a technique known as "IP spoofing".

6. FTP (File Transfer Protocol). Many vulnerabilities have been found in FTP, both exploits and protocol weaknesses such as clear-text password transfer (resolved in SFTP).

7. LPD (Line Printer Daemon). This daemon is also remotely exploitable through a buffer overflow and shellcode, yielding root access if the server is running as root.

8. BIND/DNS (Domain Name System). DNS flooding, exploits and other attacks are out there. If you are going to set up a DNS server, use a firewall to filter any port you do not want exposed.

9. Sendmail. This mail transfer agent is known for its buffer overflows and remote exploits; although past issues have been resolved, something new always appears. It is recommended to use qmail instead.

10. Weak passwords / no passwords on the system. I do not need to explain this one.

Many people who talk about security mention the false sense of security one can have in cyberspace. I do not totally agree with them: I very often see a false sense of insecurity being created as well. The items listed above create some sense of insecurity and alert, but do not worry; if you are going to run one of these critical services, just keep in mind:

* Use a well-configured firewall (pay more attention to "well configured" than to "firewall")
* Set up an Intrusion Detection and Prevention System correctly.
* Ask a security professional for help.
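As an illustration of "well configured", here is a minimal default-deny sketch using the iptables options covered earlier. It has to run as root, and the open ports (22 and 80) are example choices for the services discussed above, not a recommendation for your network:

```shell
# Flush the INPUT chain, then deny everything inbound by default.
iptables -F INPUT
iptables -P INPUT DROP

# Allow loopback traffic and replies to connections this host initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Explicitly open only the services you really run (here: ssh and http).
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```

The point is the order of thinking: start from "drop everything", then justify each ACCEPT rule individually.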

How to create a secure tunnel with ssh in Linux

You may know that ssh is a secure way to connect. Remember the old days when telnet was used and passwords just flew across the network, where any person with a sniffer could capture them?

With ssh you can create a secure connection from one point to another, going through a middle point.
The tunnel is an encrypted connection from A to B, while from B to C the connection is not encrypted (at least not by the ssh tunnel we are using). B acts as a gateway to C.
In A you would write:
$ ssh -g -L [port in A]:[C address]:[port in C] [B address]
Example of tunnelling to a web page:
$ ssh -g -L 8000:www.gmail.com:80 sureshkumarpakalapati.in

You would then point your browser at port 8000 on A (for example http://localhost:8000 from A itself; the -g flag also lets other hosts on A's network use the tunnel).
This creates a tunnel from A to B, with B forwarding on to gmail. This way nobody on A's network will be able to sniff the gmail traffic; only on B's network would that be possible.

Limit user access to Linux to a time range

In cases where you want to limit access to a Linux system to a certain time range, you can use pam_time.so. pam_time was written by Andrew G. Morgan.

Take a look at /etc/security/time.conf. The module also has to be enabled in the PAM file of each service you want to restrict; for ssh, add the line "account required pam_time.so" to /etc/pam.d/sshd.

To deny, for example, ssh access between 23:00 and 08:00:
sshd;*;*;!Al2300-0800

The format of the file is:
services;ttys;users;times

"Al" means "all days"; the leading "!" negates the time range, so the rule reads "any time except every day from 23:00 to 08:00".

If you would like to permit login only from 4 PM to 8 PM every day, for everyone except root:
login;*;!root;Al1600-2000

Further reading: man time.conf

Disable users from logging into the server, except the administrator

In cases where you have to disable login for all users except root, for example when you have to do a backup, you can use pam_nologin.so (man nologin).

1) Edit the PAM file for the service you want to control. In this example I modify the ssh PAM control file, located at /etc/pam.d/sshd.

Add this line
account required pam_nologin.so


2) Create the /etc/nologin file: just run "touch /etc/nologin"

This disables login via ssh. If you want to disable login from the terminal, modify the /etc/pam.d/login file.

3) To re-enable login, just remove /etc/nologin

How to track which files have been deleted and by whom

This is a hack you can use to track file deletion and know exactly who deleted a file.

The trick is to add this shell function to the /etc/profile file:

rm () { echo "`id` deleted the file $1 at `date`" >> /tmp/.log; /bin/rm "$1"; }

The log file will show you this:

uid=500(walter) gid=500(walter) groups=500(walter) deleted the file test at Mon Nov 26 10:31:16 ART 2007 
To also log the host the deletion came from:
rm () { i=`tty | cut -d / -f 3,4`; host=`w | grep "$i" | awk '{print $3}'`; echo -e "`id` deleted the file $1 at `date` coming from $host\n" >> /tmp/.log; /bin/rm "$@"; }
The output would be:
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),503(devel) deleted the file at Tue Nov 27 15:09:14 ART 2007
The problem with this solution is that a curious user could discover the shell function and:

* Unset the function ("unset -f rm")
* Call the binary directly by its full path (/bin/rm)

So, if you need something more robust, you will have to write a small C program that replaces the original "rm" binary: rename the original to "rm.orig", and have the new "rm" log the deletion and then execute "rm.orig", ideally keeping the process name as "rm" so the user does not suspect anything.
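The wrapper idea can be sketched in shell as well. This sketch stages everything in a temporary directory instead of /bin, and the log path /tmp/.rm_audit.log is an arbitrary example; for real deployment the C program the author describes is the better choice, since a shell wrapper is easier to inspect and bypass:

```shell
# Stage the wrapper in a scratch directory instead of replacing /bin/rm.
workdir=$(mktemp -d)
cp /bin/rm "$workdir/rm.orig"          # keep the real binary around as rm.orig

cat > "$workdir/rm" <<'EOF'
#!/bin/sh
# Log who deleted what, then hand off to the original binary.
echo "$(id) deleted: $* at $(date)" >> /tmp/.rm_audit.log
exec "$(dirname "$0")/rm.orig" "$@"
EOF
chmod +x "$workdir/rm"

# Try it out: the file is removed and the deletion is logged.
touch "$workdir/victim"
"$workdir/rm" "$workdir/victim"
```

A real installation would place the wrapper at /bin/rm (first in everyone's PATH) and make /tmp/.rm_audit.log append-only, e.g. with "chattr +a", so users cannot clean up after themselves.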

Advantages of Linux over its commercial competitors

Linux is free.
You can install a complete Unix system at no expense other than the hardware.

Linux is fully customizable in all its components.
Thanks to the General Public License (GPL), you are allowed to freely read and modify the source code of the kernel and of all system programs.

Linux runs on low-end, cheap hardware platforms.
You can even build a network server using an old Intel 80386 system with 4 MB of RAM.

Linux is powerful.
Linux systems are very fast, since they fully exploit the features of the hardware components. The main Linux goal is efficiency, and indeed many design choices of commercial variants, like the STREAMS I/O subsystem, have been rejected by Linus because of their implied performance penalty.

Linux has a high standard for source code quality.
Linux systems are usually very stable; they have a very low failure rate and system maintenance time.

The Linux kernel can be very small and compact.
It is possible to fit both a kernel image and full root filesystem, including all fundamental system programs, on just one 1.4 MB floppy disk. As far as we know, none of the commercial Unix variants is able to boot from a single floppy disk.

Linux is highly compatible with many common operating systems.
It lets you directly mount filesystems for all versions of MS-DOS and MS Windows, SVR4, OS/2, Mac OS, Solaris, SunOS, NeXTSTEP, many BSD variants, and so on. Linux is also able to operate with many network layers, such as Ethernet (as well as Fast Ethernet and Gigabit Ethernet), Fiber Distributed Data Interface (FDDI), High Performance Parallel Interface (HIPPI), IBM's Token Ring, AT&T WaveLAN, and DEC RoamAbout DS. By using suitable libraries, Linux systems are even able to directly run programs written for other operating systems. For example, Linux is able to execute applications written for MS-DOS, MS Windows, SVR3 and R4, 4.4BSD, SCO Unix, XENIX, and others on the 80 x 86 platform.

Linux is well supported.
Believe it or not, it may be a lot easier to get patches and updates for Linux than for any other proprietary operating system. The answer to a problem often comes back within a few hours after sending a message to some newsgroup or mailing list. Moreover, drivers for Linux are usually available a few weeks after new hardware products have been introduced on the market. By contrast, hardware manufacturers release device drivers for only a few commercial operating systems — usually Microsoft's. Therefore, all commercial Unix variants run on a restricted subset of hardware components.

With an estimated installed base of several tens of millions, people who are used to certain features that are standard under other operating systems are starting to expect the same from Linux. In that regard, the demand on Linux developers is also increasing. Luckily, though, Linux has evolved under the close direction of Linus to accommodate the needs of the masses.