Wednesday, August 17, 2011

iptables hashlimit

To limit the number of incoming connections we can use the hashlimit module of iptables. Initially it looks difficult to implement, but it is quite self-explanatory. I am sharing a simple example of implementing hashlimit here.
The idea is to rate-limit incoming ssh connections to 2 per second on the basis of source IP (--hashlimit-mode srcip), dropping anything above that rate. Hashlimit keeps track of connections in a table created under /proc/net/ipt_hashlimit; the name of the table in this example is dropssh (--hashlimit-name dropssh).
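A minimal sketch of such a rule (the 2/sec rate, port and table name are taken from the description above; it assumes an iptables recent enough to support the --hashlimit-above form of the match):

root# iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m hashlimit --hashlimit-above 2/sec --hashlimit-mode srcip --hashlimit-name dropssh -j DROP

New ssh connections from a single source IP arriving faster than 2 per second are dropped; the per-IP counters can be inspected in /proc/net/ipt_hashlimit/dropssh.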

Routing table in linux

Linux finds the route for a particular host or network by checking the routing table. Whenever we enter a route entry, by default it goes into the main routing table.


root# ip route show

root# ip route add 10.60.0.1 via 10.20.0.1

Both of the above commands apply to the main routing table.

We can create custom routing tables and set rules to forward certain traffic to the newly created routing table.

To create a new routing table, edit the file /etc/iproute2/rt_tables and add an entry for the new table:

root# vi /etc/iproute2/rt_tables

and add following line

100 newrtable

100 is the id and newrtable is the name of the routing table.

To check the current entries in the newly created routing table:

root# ip route show table newrtable

To add a route entry to the routing table:

root# ip route add default via 10.46.0.1 table newrtable

Here the default gateway for this routing table is 10.46.0.1. Check the table again:

root# ip route show table newrtable

To forward traffic to this newly created routing table, the iptables command can be used along with the ip rule command:

root# iptables -t mangle -A OUTPUT -p tcp --dport 22 -j MARK --set-mark 100

Here ssh traffic is marked with label 100,

and then

root# ip rule add fwmark 100 lookup newrtable

Here the traffic marked with label 100 is routed via table newrtable.
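As a quick sanity check that the policy routing is in place, the rule list can be printed and a marked route lookup simulated (ip route get accepts a mark keyword for this):

root# ip rule show
root# ip route get 10.60.0.1 mark 100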

Get Size of IP Routing cache

To get the size of the IP routing cache we can use the following command:


root# dmesg | grep -i 'IP route cache'

IP route cache hash table ....

To change the routing cache hash table size we can use a kernel boot parameter:

rhash_entries=N (where N is the number of hash table entries)
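This parameter is given on the kernel command line at boot time; for example, in a GRUB entry (the kernel image and root device here are purely illustrative):

kernel /vmlinuz-2.6.18 ro root=/dev/sda1 rhash_entries=262144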

SSH Tunneling

SSH Tunneling is also known as SSH Port Forwarding. Using SSH tunneling you can forward traffic through a system on which you have ssh access. For example, suppose you are inside a network in which port 25 (smtp) is not allowed, and you need to send mail using smtp. The solution is SSH tunneling: open a port on your local system, say port 3000, and tunnel it so that all traffic arriving on port 3000 is forwarded to a remote system on which you have ssh access and which allows smtp. For this you need to execute the following command on your system:


root# ssh -L 3000:202.125.250.x:25 sshuser@remotehostname -N

In the example above, -L 3000:202.125.250.x:25 means that 202.125.250.x is the IP of the remote system on which you have ssh access and where smtp is allowed. The username for remote ssh access is given as sshuser@remotehostname.

Now you need to set the smtp server to localhost:3000 in your mail client.
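Before pointing the mail client at it, the tunnel can be verified by connecting to the local end; a 220 SMTP greeting from the remote server shows the forwarding works:

root# telnet localhost 3000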

nginx configuration

To enhance the performance and security of my web server, a few days back I decided to use a reverse proxy. My idea was to put a lightweight web server in front of my Apache web server, so that all static content is served by the lightweight server and dynamic content by the main backend Apache server. In my study I found that Apache itself can act as a reverse proxy, but I decided to go for a lightweight web server. In my learning process I got some good feedback for lighttpd, but nginx is more popular (based on what I found by googling).


I decided to go with nginx for testing and deployment.

After installing nginx, I initially put the following configuration in the default nginx site configuration file, in my case /etc/nginx/sites-available/default:

server {

listen 80;

server_name test.com;

access_log /var/log/nginx/access.log;

location / {

proxy_pass http://127.0.0.1:8080;

}

location /images {

alias /data/web/images;

autoindex on;

}

}

In my example my Apache server is running on port 8080. The reverse proxy concept is used here to forward all home page requests (http://test.com) to port 8080 on localhost (in our case Apache is running on 8080), while all requests to http://test.com/images are served directly by nginx (alias maps the /images location straight to the /data/web/images directory).

Reverse SSH Tunnel

A reverse SSH tunnel can be used to connect to your office computer from your home computer. Suppose you issue the following command on your office computer:


root# ssh -R 2048:localhost:22 sshuser@homecomputer

In the statement above (homecomputer being the address of your home machine and sshuser your account on it), port 2048 on the home computer is opened and forwards back to port 22 of the office computer.

Now issue the following command on the home computer:

root#ssh -p 2048 localhost

This will connect you to the remote (office) computer through ssh.

Recover LVM from a corrupted physical volume

I had a volume group /dev/vg1 consisting of two physical volumes, /dev/sdb1 and /dev/sdc1. One of the physical volumes, /dev/sdc1, got corrupted due to a disk problem, and the challenge was to recover the LVM. I decided to use the pvremove command, in the following way:


root#pvremove /dev/sdc1

The above command displayed the error: couldn't find device with uuid 'xxxxxxx'.

Then I tried forcefully:

root#pvremove -ff /dev/sdc1

After some warnings, it removed that physical volume.

Then I issued the pvdisplay command:

root#pvdisplay

This displayed the message 'Couldn't find device with uuid xxxxxx'.

Now what you need to do is create a physical volume on the new disk with the missing uuid.

For that the following command can be used:

root# pvcreate --uuid xxxxxx --restorefile /etc/lvm/archive/vg1_0.vg /dev/sdd1

where /dev/sdd1 is the new hard disk replacing /dev/sdc1.

Then restore the VG metadata with the following command:

root# vgcfgrestore -f /etc/lvm/archive/vg1_0.vg vg1

Note: check the archives of the VG under /etc/lvm/archive.

Passwordless ssh access using PuTTY

As we know, ssh allows authentication using private/public keys. Today I was accessing my Linux system from Windows by giving a username and password, and I decided to switch to key-based access. I followed these steps. PuTTY is the most popular Windows ssh client, so first of all I downloaded the PuTTYgen and PuTTY software from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html. Then I ran PuTTYgen to generate private and public keys. I saved my public key file as vishesh.pub and my private key file as vishesh.ppk.


Now the public key file has to be transferred to the Linux system. After the transfer I ran the following on the Linux system to import the public key into the authorized keys file:

root# ssh-keygen -i -f vishesh.pub >> /root/.ssh/authorized_keys

Now my ssh server is ready to accept the private key for authentication.
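If the server still asks for a password after this, it is usually a permissions problem, since sshd ignores the authorized_keys file when it or ~/.ssh is group/world writable. A typical fix:

root# chmod 700 /root/.ssh
root# chmod 600 /root/.ssh/authorized_keys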

Finally I executed PuTTY and specified the following parameters:

IP address of linux box

default username as 'root'

and location of private key file.

And the Linux box got connected without asking for a password.

Oracle-10g Installation on RHEL-5

Step-1: Create the following groups


# groupadd oinstall

# groupadd dba

# groupadd oper

Step-2: Create user

# useradd -g oinstall -G dba oracle

Step-3: Create the following directories, then change their mode and ownership

# mkdir -p /u01/app/oracle/product/10.2.0

# chmod -R 777 /u01

# chown -R oracle:oinstall /u01

Step-4:

# vi /etc/sysctl.conf

kernel.shmall = 2097152

kernel.shmmax = 536870912

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

fs.file-max = 65536

net.ipv4.ip_local_port_range = 1024 65000

net.core.rmem_default=262144

net.core.wmem_default=262144

net.core.rmem_max=262144

net.core.wmem_max=262144

Step-5:

# sysctl -p

Step-6:

# vi /etc/security/limits.conf

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

Step-7:

# vi /etc/pam.d/login

#%PAM-1.0

auth required pam_securetty.so

auth required pam_stack.so service=system-auth

auth required pam_nologin.so

account required pam_stack.so service=system-auth

password required pam_stack.so service=system-auth

# pam_selinux.so close should be the first session rule

session required pam_selinux.so close

session required pam_stack.so service=system-auth

session optional pam_console.so

# pam_selinux.so open should be the last session rule

session required pam_selinux.so open

session required pam_limits.so


Step-8:

# vi /home/oracle/.bash_profile

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME

ORACLE_SID=PRODLAP; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

PATH=$ORACLE_HOME/OPatch:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

export PATH

if [ $USER = "oracle" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

fi

Step-9:

# passwd oracle

Enter the new password ('oracle') twice, until you get the # prompt back.

Step-10:

# vi /etc/redhat-release

Delete whatever line is in this file and replace it with a Red Hat 4 release string (for example: Red Hat Enterprise Linux AS release 4), since the Oracle 10g installer's OS check does not recognize RHEL 5.

Step-11:

Log out from the root user and log in as the oracle user.

Step-12:

Open a new terminal and, at the oracle user's $ prompt, execute runInstaller from the Oracle setup media.

Advanced usage of grep command

Guess the output of the following command:

root# grep -2 vishesh /etc/passwd

The above command prints not only the line containing the string 'vishesh' but also the 2 lines before it and the 2 lines after it.

The same result can also be obtained with the following command:
root# grep -C 2 vishesh /etc/passwd

But the following command will print only the 2 lines after the match, not the previous ones:
root#grep -A 2 vishesh /etc/passwd
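Similarly, -B prints only the lines before each match:

root# grep -B 2 vishesh /etc/passwd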

Bad Sector on Hard Disk

Modern disks remap bad sectors to good sectors to avoid errors due to bad sectors. If a large number of sectors have been remapped to other sectors, it means the disk is going to fail.


The smartctl command is very helpful for diagnosing disk-related problems.

To start a self-test, the -t option is used; the results can then be read back from the self-test log:

root# smartctl -t short /dev/sda

root# smartctl -l selftest /dev/sda

To get the attribute list you can use the following command:

root#smartctl -A /dev/sda

The output of the above command is self-explanatory and helps you recognize whether there is any problem with the hard disk.
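For a quick overall verdict, the drive's own health self-assessment can also be queried:

root# smartctl -H /dev/sda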

Create Public key using private key

Suppose you lost or misplaced your public key file, so now you only have your private key file, and you need the public RSA or DSA key to continue with ssh. Yes, ssh provides an option to recreate the public key from the available private key:


root# ssh-keygen -f ~/.ssh/id_rsa -y > ~/.ssh/id_rsa.pub

In the above command, id_rsa is the private key from which the public key id_rsa.pub is generated.

LVM Snapshots

First of all, some important facts about LVM snapshots:


A snapshot volume does not have to be as large as the source volume

LVM snapshots use copy-on-write (CoW) technology

The snapshot volume only needs to be large enough to accommodate the maximum amount of change that might occur on the source volume during the life of the snapshot

A snapshot is really just a copy of the inode tree at the point in time you created it

Let us understand this with an example. Suppose you have a 20GB logical volume where you store your data, and in normal circumstances approximately 500MB of data changes daily. In this case a 500MB snapshot is enough for your 20GB LV. With copy-on-write, before a block on the origin is modified its old content is first copied to the snapshot area. So when reading from the snapshot, any changed blocks are read from the snapshot area, but any blocks that haven't changed since the snapshot was taken are read from the original.

LVM snapshots can be used for taking data backups of a live system. You can keep a snapshot for a given period of time, after which you can take a backup of the snapshot for further use. If you need to recover data from a snapshot, you need to mount it separately and copy the data out of it.

Command to create snapshot:

root#lvcreate -L 500M -s -n databackup /dev/vg1/lv1

Here databackup is the name of the snapshot and its size will be 500MB. It is a snapshot of the logical volume lv1.
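One caution: if the CoW area fills up completely the snapshot becomes invalid, so its usage should be watched. The Data% column of lvs shows how full the snapshot is:

root# lvs /dev/vg1/databackup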

To recover data from the snapshot, mount it as follows and then perform the recovery:

root#mount -t ext3 /dev/vg1/databackup /mnt

To take a backup of the LVM snapshot for further reference, you may use the following command:

root#tar -cf /mybackup/lv1backup /mnt

After taking the backup it is advisable to remove the snapshot; for that you can use the following commands:

root#umount /mnt

root#lvremove /dev/vg1/databackup

Watch, a command to learn

Sometimes there is a need to watch a directory for a certain file, i.e. you want to keep a watch on the directory. The following command can serve your purpose:


root#watch -d ls -l

The above command will update the screen every 2 seconds (the default) and highlight the differences (-d option).

If you want to monitor network connection updates using the watch command, you can use it in the following way:

root#watch -d 'netstat -an'

The watch command allows you to watch a program's output change over time; by default the interval is 2 seconds, but you can change it using the -n option. Use -d if you want to highlight the changes between successive updates, but if you want to see cumulative changes use -d with the cumulative argument (--differences=cumulative).
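For example, to check a mail queue directory every 5 seconds with differences highlighted (the path here is illustrative):

root# watch -n 5 -d 'ls -l /var/spool/mqueue'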

KeepAlive Feature in Apache

KeepAlive is the feature that allows reusing an existing connection for new requests. In the case of a web server, Keep-Alive HTTP persistent connections are used. We can understand this with the following example:


root#curl -v http://www.google.com http://www.google.com

Two requests are made for google.com but only one connection serves the purpose: the connection is created when the first request is made, and the next request travels over the existing connection that was created to serve the first request.

Now let us discuss the Keep-Alive feature from the Apache web server's perspective. The benefit of having Keep-Alive on is that the client is able to request more than one entity from the web server without having to create another TCP connection. The problem is that if you have a connection limit in Apache set to 300 and there are 300 active connections, all other clients have to wait until the first 300 are either done or reach the keep-alive timeout.

Disabling Keep-Alive forces clients to create one connection per request. On low-memory servers it is advisable to turn the Keep-Alive feature off.

The biggest argument against Keep-Alive in Apache is that it blocks Apache processes: a client using Keep-Alive prevents its Apache process from serving other clients until the connection is either closed or times out, while in the same span of time more clients could be served by that same Apache process. One strong recommendation is to use two instances of Apache, one for dynamic content with mod_php (if the dynamic pages are coded in PHP) and one for static content. Keep Keep-Alive off for the instance serving dynamic pages and turn it on for the instance serving static pages.
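The relevant directives live in the Apache configuration; a typical tuning block looks like the following (the values are illustrative, not a recommendation):

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5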

Background Flushing

Background flushing in Linux happens when either of the following conditions is reached:


Too much written data is pending (/proc/sys/vm/dirty_background_ratio)

The timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs)

In the above circumstances more written data may still be cached, until another threshold (/proc/sys/vm/dirty_ratio) is reached; beyond that point further writes block.
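The current values of these three thresholds can be inspected (and tuned) through sysctl:

root# sysctl vm.dirty_background_ratio vm.dirty_expire_centisecs vm.dirty_ratio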

In theory this creates a background process without disturbing other processes, but in practice it does disturb processes that are doing uncached reads or synchronized writes. The reason may be that the background process writes at 100% of the device speed, as a result of which other processes stall. I found that clearing the dirty caches may relieve this issue.

Clear Cache from memory

root# echo 3 > /proc/sys/vm/drop_caches

And it is recommended to run the 'sync' command before clearing the cache, so that as much dirty data as possible is written out first.

VirtualHosts and SSL

Due to a limitation of the SSL protocol, it is impossible to host more than one SSL virtual host on the same IP address and port. The limitation is this: Apache needs to know the name of the host in order to select the correct certificate, but the host name is carried inside the HTTP request header, which is encrypted. Since the host name is not revealed until the encrypted channel is established, multiple SSL virtual hosts cannot be created on one IP and port. (The TLS SNI extension addresses exactly this, but client support for it was far from universal at the time of writing.)


Apache does allow you to configure name-based virtual hosts, but it always uses the SSL settings of the first configured virtual host for all the rest. Logically you could say SSL creates an encryption layer between client and server over which traffic for all the virtual hosts can move; in reality it does not create a separate encrypted channel per virtual host. All the virtual hosts of the configured Apache server will use the same SSL certificate for encryption, which may be acceptable in many circumstances. But the need for an independent SSL certificate per virtual host is also very common.

Nmap vs Nessus

Nmap and Nessus are both network security scanners. The history of vulnerability scanners is quite exciting. In the early days Telnet was used to find the state of open ports. Over time, sets of scripts were developed to make vulnerability scanning simpler; one such script set was SATAN (Security Administrator Tool for Analyzing Networks). After SATAN, one of the popular commercial tools was ISS (Internet Security Systems).


As the open source movement became popular in the network security field, Nmap was released in 1997 and Nessus in 1998, both open source. Nessus became proprietary in 2005, although the product is still free for personal use.

As far as Nmap's use is concerned, it is very helpful for the following (see the example scan after this list):

Finding the status of a host (up or down)

Finding the open ports on a particular host

Detecting the OS and its version on a host (Windows XP or Linux?)

Detecting the presence of a firewall

Listing the network services running on a host
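A typical scan covering several of these points might look like this (the target address is illustrative; -sV probes service versions, -O attempts OS detection):

root# nmap -sV -O 192.168.1.10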

Nessus can do almost everything Nmap can; beyond that, Nessus can find CVEs (Common Vulnerabilities and Exposures) using its plugins. Nessus should be used if you have the following security needs:

Security audit

Vulnerability Scanning and analysis

Sensitive data discovery

Open port scanner (like Nmap)

Asset & Process profiling

One point to be noted: Nmap can work even more effectively if we use its Scripting Engine (NSE) feature.
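For instance, the scripts in the vuln category make Nmap behave a little more like a vulnerability scanner (again, the target address is illustrative):

root# nmap --script vuln 192.168.1.10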

Applying filters in rsync

As we know, the rsync command is very helpful for taking backups. Sometimes we need to back up a folder while including some of its sub-folders and excluding others. When synchronizing with a remote system we can specify which folders are to be included and which excluded as command line options. Let us understand this with an example: suppose one needs to take a backup of / excluding /home but including /home/user2.


The following command helps out:

root# rsync -av --filter='+ /home/user2' --filter='- /home/*' / user1@remotehost:/folder

Note that the include rule must come before the exclude rule, since rsync uses the first matching rule. A related trick concerns per-directory merge files: without the delete option, per-directory rules are only relevant on the sending side, and the 'e' modifier excludes the merge files themselves from the transfer without otherwise affecting it. For example (assuming the per-directory filter files are named .rsync-filter):

root# rsync -av --filter=':e .rsync-filter' host:src/dir /dst

rsync in Batch mode

Batch mode can be used to apply the same set of changes to many identical systems. Suppose the same filesystem tree has to be replicated on a number of hosts, and updates to one tree have to be propagated to the others. To achieve this, rsync can be used in batch mode. The --write-batch option tells the rsync client to store in a batch file all the information needed to repeat the operation against another host.


To apply the recorded changes, rsync is run with the --read-batch option specifying the same batch file. One thing to keep in mind is that a small shell script (here it would be lists.sh) also gets created in this process, which can be used to re-run the read-batch side. Let us put this into example commands:

root#rsync --write-batch=lists -a host:/source/dir /testdir

root#rsync --read-batch=lists -a /datadir

What is archive mode in rsync?

As you know, when I specify the -a option with rsync it means I want to run rsync in archive mode. Archive mode is considered most suitable for taking backups. Putting -a is basically a shortcut for a bunch of switches, which includes the following:

-r recurse into directories

-l copy symlinks as symlinks

-p preserve permissions

-t preserve modification times

-g preserve group

-o preserve owner

-D preserve device and special files

And which excludes the following:

-H preserve hard links

-A preserve ACLs

-X preserve extended attributes

So it is concise and smart to use -a (equivalent to -rlptgoD) instead of spelling out the whole bunch of switches given above.

Incremental Backup using tar

GNU tar currently provides the following options to handle incremental backups:


--listed-incremental=snapshot-file (-g snapshot-file)

--incremental (-G)

Examples

root# tar --create --file=archive1.tar --listed-incremental=/var/log/usr.snar /usr

The above command will create archive1.tar as an incremental backup of the /usr filesystem, with additional metadata stored in the file /var/log/usr.snar. If /var/log/usr.snar does not exist it will be created, and the resulting archive will then be a level 0 backup.

Now suppose, for the same example, /var/log/usr.snar already exists. Then the command will check which files have been modified and store only those files in the archive, producing a level 1 backup.

So the best option is to take the level 0 (full) backup first, as follows:

root# tar --create --file archive1.tar --listed-incremental=/var/log/usr.snar /usr

Copy the snapshot file:

root#cp /var/log/usr.snar /var/log/usr.snar-1

Then take the incremental backup as follows:

root# tar --create --file archive2.tar --listed-incremental=/var/log/usr.snar /usr

To extract content from the backups we have to follow the same order that was followed at backup time. In our example the procedure is as follows:

root# tar --extract --listed-incremental=/dev/null --file=archive1.tar

Followed by

root#tar --extract --listed-incremental=/dev/null --file=archive2.tar

Bandwidth testing from the command line

Sometimes we need to test the bandwidth between two Linux systems. In this post I show by example how to test bandwidth without any network monitoring tool.


Suppose you need to test the bandwidth between two Linux hosts, Host1 and Host2. Execute the following command on Host1:

root@Host1# nc -u -l 7654 >/dev/null

On Host2 run the following commands:

root@Host2# dd if=/dev/zero bs=1MB | nc -u Host1 7654 &

root@Host2# pid=$(pidof dd)

root@Host2# while ((1)); do kill -USR1 $pid; sleep 2; done

Now analyze the output: each SIGUSR1 makes dd report how much data has been copied in how many seconds, from which you can calculate your network bandwidth.

Remove usb device from command line

As we use the Eject option for usb devices in graphical mode, what if we need to do the same thing from the command line? The following command can do this:


root# echo 1 > /sys/bus/usb/devices/usb1/remove

To spin down (power off) the usb disk, the hdparm command can also be used in the following way:

root# hdparm -y /dev/sda

If the usb disk has to be put into sleep mode, the following command can be used:

root# hdparm -Y /dev/sda

In the above examples /dev/sda refers to the plugged-in usb disk; note that hdparm acts on the whole device, not a partition.

Restrict telnet login by user

As we know, both ssh and telnet can be used for remote login. It is very simple to disable the telnet service so that no one can log in via telnet. But what if we want some users to be able to log in via telnet and others not? For example, suppose we want to allow the user vishesh to log in via telnet while login via telnet is completely denied to all other users.


The pam_succeed_if PAM module should be configured in the /etc/pam.d/telnet file to achieve this. Put the following entry in /etc/pam.d/telnet:

auth required pam_succeed_if.so quiet user = vishesh

My advice is to create a group named telnet and add users to that group to allow telnet login; in this scenario put the following as the 2nd line in /etc/pam.d/telnet:

auth required pam_succeed_if.so quiet user ingroup telnet

What is LUN?

Suppose we have a large storage array, and the requirement is that no single server may use all the storage space, so it needs to be divided into logical units, each addressed by a LUN (Logical Unit Number). LUNs allow us to slice a storage array into usable storage chunks and present them to servers. A LUN basically refers either to an entire physical volume or to a subset of a larger physical disk or volume. A LUN represents a logical abstraction, you could say a virtualization layer, between the physical disk and the application. The LUN is a SCSI concept.

Authentication in single user mode

After a long time I am here, with a security trick. On many occasions we need to move into single user mode, which is very simple. But this can also be a security risk; to overcome it we can either set a password in the GRUB boot loader or go with the following simple solution.


Open the /etc/inittab file and put the following line in it:

~~:S:wait:/sbin/sulogin

This makes init run sulogin, which asks for the root password before giving out a single-user shell.

Sendmail Spamming Prevention

Sendmail is infamous for security holes, but it is also a fact that sendmail is one of the most popular MTAs. It is not possible to completely avoid spamming through sendmail, but sendmail can be configured in a better way to limit spamming through it.


I am discussing here some of the options which can be useful in this context, although how useful these options are in different scenarios is a subjective discussion.

confMAX_DAEMON_CHILDREN is one such option in sendmail.mc; it limits the number of the daemon's child processes. Note that these child processes handle both incoming and outgoing traffic.

The confMAX_QUEUE_RUN_SIZE option specifies how many queued messages to process each time the queue is run.

The confQUEUE_LA and confREFUSE_LA options are also very useful: confQUEUE_LA sets the system load at which mail will be queued for later processing, and confREFUSE_LA the load at which sendmail will reject mail outright (it will not even queue it for later processing).
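In sendmail.mc these are set with m4 define statements; a sketch with purely illustrative values (tune them to your own load, and regenerate sendmail.cf with m4 afterwards):

define(`confMAX_DAEMON_CHILDREN', `40')dnl
define(`confMAX_QUEUE_RUN_SIZE', `500')dnl
define(`confQUEUE_LA', `8')dnl
define(`confREFUSE_LA', `12')dnl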

http traffic load

On some occasions we need to measure the http traffic of our system. Although there are a number of tools and utilities for this, I avoid installing and using them in a small set-up.


I decided to use the tcpdump command, a very useful command that helps to decode network traffic. For the requirement above I applied tcpdump in the following format:

root# tcpdump -s 1500 -Svni eth0 'tcp and port 80'

The above command will display http packets (tcp packets on port 80). In the output we get the total length, which equals IP header length + TCP header length + payload length. With typical TCP options the IP and TCP headers together come to 52 bytes (20 + 32), so the payload length can be calculated as total length - 52.

Prevent DDOS attack on web site

There can be a number of ideas for preventing a Distributed Denial of Service attack on a website. Here I am sharing a basic trick that I use nowadays to mitigate DDoS from malicious web crawlers. As I noticed, when a malicious crawler accesses the web site we find log entries from a particular User-Agent in the access log. What I do is rewrite requests from such a User-Agent to a Forbidden response using the rewrite module:


RewriteCond %{HTTP_USER_AGENT} Malicious_User_Agent [NC]

RewriteRule .* - [F,L]

Caching Configuration in Apache

The goal of caching in Apache could be:
-Reduce the number of requests in many cases
-Eliminate the need to send a full response in many cases

For the former we use an expiration mechanism and for the latter a validation mechanism.

In Apache, the mod_expires and mod_headers modules handle cache control through http headers.

The mod_expires module controls the setting of the Expires and Cache-Control http headers in server responses.

For example, to set a default expiry plus a longer one for html content, we can use the following syntax:
ExpiresActive On
ExpiresDefault A300
ExpiresByType text/html A86400

In the above example, the default caching is for 5 minutes (A300) but for html files caching is 1 day (A86400).

Now suppose we want to disable caching for the dynamic pages located in the folder /var/www/phpfiles; mod_headers can set the header inside a Directory block:

<Directory /var/www/phpfiles>
Header set Cache-Control "max-age=0, no-store"
</Directory>

Redirect http to https

Suppose you have a requirement to divert your http traffic to https. There are many ways to achieve this, but as an Apache web administrator I prefer to apply the following rewrite rule for my site:


RewriteEngine On
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://abc.com/$1 [R,L]

In the above example the site is abc.com.