Friday, August 19, 2011

Editing initrd

Today I got frustrated when I found that installing Windows XP had erased Fedora 9's boot loader from my laptop. But I recovered my GRUB because I had previously taken a backup of the MBR using the following command.


dd if=/dev/sda of=/root/mbr bs=446 count=1

I recovered GRUB by writing /root/mbr back onto /dev/sda:

dd if=/root/mbr of=/dev/sda

But now the real pain started: my system was unable to mount root because it searched for the UUID specified in initrd-2.6.27.7.img, so I decided to edit my initrd.

First, I uncompressed and extracted the gzipped cpio archive that makes up the initrd:

cd /root/vk

gunzip -c /boot/initrd-2.6.27.7.img | cpio -idmv

After that I had the contents of the initrd in the /root/vk folder.
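
Before editing init it helps to see both the UUID the initrd is searching for and the UUID the root partition actually has now. A small hedged aside (standard commands, not part of the original steps):

blkid                         # list every block device and its current UUID
grep -i uuid /root/vk/init    # see which UUID the extracted init script expects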

Then I edited the /root/vk/init file and re-created the gzipped cpio archive:

cd /root/vk

find . | cpio --create --format 'newc' > /tmp/vkinit

gzip /tmp/vkinit

cp /tmp/vkinit.gz /boot/

Now I am able to mount my root filesystem.



Using usbmon

Using usbmon to monitor USB traffic.


The steps for using usbmon are as follows.

1. Prepare

Mount debugfs (it has to be enabled in your kernel configuration), and load the usbmon module (if built as a module). The second step is skipped if usbmon is built into the kernel.

# mount -t debugfs none_debugs /sys/kernel/debug

# modprobe usbmon

#
Verify that bus sockets are present.

# ls /sys/kernel/debug/usbmon

0s 0u 1s 1t 1u 2s 2t 2u 3s 3t 3u 4s 4t 4u

#

Now you can choose to either use the socket '0u' (to capture packets on all buses), and skip to step #3, or find the bus used by your device with step #2. This allows you to filter away annoying devices that talk continuously.

2. Find which bus connects to the desired device

Run "cat /proc/bus/usb/devices", and find the T-line which corresponds tothe device. Usually you do it by looking for the vendor string. If you have

many similar devices, unplug one and compare two /proc/bus/usb/devices outputs.

The T-line will have a bus number. Example:

T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 0

D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1

P: Vendor=0557 ProdID=2004 Rev= 1.00

S: Manufacturer=ATEN

S: Product=UC100KM V2.00

Bus=03 means it's bus 3.

3. Start 'cat'

# cat /sys/kernel/debug/usbmon/3u > /tmp/1.mon.out

to listen on a single bus, otherwise, to listen on all buses, type:

# cat /sys/kernel/debug/usbmon/0u > /tmp/1.mon.out

This process will be reading until killed. Naturally, the output can be redirected to a desirable location. This is preferred, because it is going to be quite long.
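
The bus lookup and the capture can also be combined in a small helper. A hedged sketch (the vendor string is taken from the step 2 example and must be changed to match your device):

# Derive the bus number from /proc/bus/usb/devices by vendor string,
# then start a usbmon capture on that bus.
VENDOR=ATEN
BUS=$(grep -B4 "Manufacturer=$VENDOR" /proc/bus/usb/devices | sed -n 's/^T:.*Bus=\([0-9]*\).*/\1/p')
BUS=$((10#$BUS))    # "03" -> 3
cat /sys/kernel/debug/usbmon/${BUS}u > /tmp/1.mon.out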

Use BackTrack for hacking

Dear friends


If you are really interested in hacking, use BackTrack. It's an operating system that provides lots of tools for different kinds of hacking. For more information you can visit the following site.


Use BackTrack 4; download it using the following link: http://www.remote-exploit.org/backtrack.html

udev is a Device manager

udev is a generic kernel device manager. It runs as a daemon on a Linux system and listens to uevents the kernel sends out (via a netlink socket) when a new device is initialized or a device is removed from the system. The system provides a set of rules that match against exported values of the event and properties of the discovered device. A matching rule will possibly name and create a device node and run configured programs to set up and configure the device.
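
A matching rule file gives a feel for how this works. The following is only an illustrative sketch (the file name, vendor ID, symlink name and helper script are made up, not from the original text) and would live in /etc/udev/rules.d/99-example.rules:

# When a partition appears on a disk whose USB parent has this (hypothetical)
# vendor ID, create a stable symlink and run a helper script.
SUBSYSTEM=="block", KERNEL=="sd?1", ATTRS{idVendor}=="abcd", SYMLINK+="backup_disk", RUN+="/usr/local/bin/on-backup-disk.sh"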

Different network interfaces for different programs

For example, let's say you're a student living on campus; the university provides you with broadband Internet access via Wi-Fi, which is great, except for the fact that you cannot trust it (yes, even when you're careful to use HTTPS and so on; I'll cover that in subsequent blog posts). A general solution to that problem would be getting your very own private Internet access, but being a student, you would prefer not to spend too much money on it, so you'll most likely take the cheapest subscription. So now you have two routes to the Internet: a fast but insecure one, and another that is private but slow. How do you use both on the same computer? As bandwidth-intensive applications are often also the ones that don't really require privacy, one could imagine categorizing programs so as to watch Internet TV over the Wi-Fi network while corresponding over the cable.


Here's how to do it with Linux, assuming that the default route is your private connection and that your Wi-Fi interface is named ath0, has IP address 10.1.2.3 and gateway 10.0.0.1:

Create a "wifi" user

adduser wifi

Mark packets coming from the wifi user

iptables -t mangle -A OUTPUT -m owner --uid-owner wifi -j MARK --set-mark 42

Apply the Wi-Fi IP address on them

iptables -t nat -A POSTROUTING -o ath0 -m mark --mark 42 -j SNAT --to-source 10.1.2.3

Route marked packets via Wi-Fi

ip rule add fwmark 42 table 42

ip route add default via 10.0.0.1 dev ath0 table 42

Launch programs as the wifi user

sudo -u wifi vlc

Step 1 is of course required only once; steps 2, 3 and 4 are better put together in a shell script (a sketch follows below). Regarding step 5, it is much more practical to edit your KDE menu entries, for example, and specify there that the program has to be run as the wifi user.
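
A minimal sketch of such a script, reusing the interface, addresses and mark from the steps above (everything else is an assumption, not from the original post):

#!/bin/bash
# Route all traffic generated by the "wifi" user out via ath0 (steps 2-4 above).
IF=ath0
ADDR=10.1.2.3
GW=10.0.0.1
MARK=42

iptables -t mangle -A OUTPUT -m owner --uid-owner wifi -j MARK --set-mark $MARK
iptables -t nat -A POSTROUTING -o $IF -m mark --mark $MARK -j SNAT --to-source $ADDR
ip rule add fwmark $MARK table $MARK
ip route add default via $GW dev $IF table $MARK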

Linux as QOS Machine

The tc command allows administrators to build different QoS policies in their networks using Linux instead of very expensive dedicated QoS machines. Using Linux, you can implement QoS in all the ways any dedicated QoS machine can, and even more. Also, a good PC running Linux can be set up as a bridge and turned into a very powerful and very cheap dedicated QoS machine.


Queueing determines the way data is sent; controlling upload and download rates by setting certain criteria is possible in Linux. Although UDP doesn't have a flow-control feature, TCP does.

Queueing disciplines can be categorized as classless and classful.

Classless disciplines are the simplest; they can be used to delay, reschedule, drop and accept data. These disciplines can shape a whole interface. There are several qdisc implementations in Linux, such as FIFO (pfifo and bfifo), pfifo_fast, tbf, sfq and esfq. By default, pfifo_fast is used in Linux.
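
As a small, hedged example of a classless discipline (the interface name and rate are assumptions, not from the original text), tbf can shape a whole interface in one line:

# Limit everything leaving eth0 to roughly 1 Mbit/s with a token bucket filter.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
tc -s qdisc show dev eth0      # check what is installed
tc qdisc del dev eth0 root     # remove it again when done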

Why do top and ps show different priorities?

There is some discrepancy in ps output caused by the fact that each system may use different values to represent the process priority and that the values have changed with the introduction of RT priorities.


The kernel stores the priority value in /proc/<pid>/stat (let's call it p->prio) and ps reads the value and displays it in various ways to the user:

$ ps -A -o pri,opri,intpri,priority,pri_foo,pri_bar,pri_baz,pri_api,pid,comm
PRI PRI PRI  PRI  FOO  BAR BAZ API PID COMMAND
 19  80  80   20    0   21 120 -21   1 init
 24  75  75   15   -5   16 115 -16   2 kthreadd
139 -40 -40 -100 -120  -99   0  99   3 migration/0
 24  75  75   15   -5   16 115 -16   4 ksoftirqd/0
139 -40 -40 -100 -120  -99   0  99   5 watchdog/0
139 -40 -40 -100 -120  -99   0  99   6 migration/1
 24  75  75   15   -5   16 115 -16   7 ksoftirqd/1
139 -40 -40 -100 -120  -99   0  99   8 watchdog/1
 24  75  75   15   -5   16 115 -16   9 events/0

Yes, there are 8 undocumented values for the process priority that can be passed to the -o option:

Option     Computed as
priority   p->prio
intpri     60 + p->prio
opri       60 + p->prio
pri_foo    p->prio - 20
pri_bar    p->prio + 1
pri_baz    p->prio + 100
pri        39 - p->prio
pri_api    -1 - p->prio

They were introduced to fit the values into certain intervals and to provide compatibility with POSIX and other systems.

qemu vs kvm

The QEMU package provides a processor and system emulator which enables users to launch guest virtual machines not only under the same hardware platform as the host machine, but also under dramatically different hardware platforms. For example, QEMU can be used to run a PPC guest on an x86 host. QEMU dynamically translates the machine code of the guest architecture into the machine code of the host architecture.


QEMU does full hardware virtualization; in other words, it would allow you to run a MIPS guest OS inside an x86 host. This is useful, but slower than the alternatives...

KVM is a hypervisor that leverages QEMU for device emulation. I believe (going from memory) this device emulation is basically limited to VGA, disk controllers, etc.
So... unless you have a compelling reason (guest platform different from host), it would be best to use KVM and QEMU together.
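
The practical difference shows up on the command line. A hedged sketch (the disk image name is made up and exact binary names vary by distribution):

# Pure emulation: runs anywhere, but every guest instruction is translated.
qemu-system-x86_64 -m 1024 -hda guest.img
# Same guest accelerated by KVM: needs an x86 host with VT-x/AMD-V.
qemu-system-x86_64 -enable-kvm -m 1024 -hda guest.img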

Using dump

As we know, if you are using dump as a backup system, then a modified Tower of Hanoi algorithm for the dump levels is suitable. But why?


The idea is to make the numbers rise and fall to minimise the number of backups needed to do a full restore. Write yourself some sequences and figure out for yourself which ones you would need for a full backup. Try to figure out for each backup whether the same files will be dumped by a later backup. They will, if a later backup number is lower. The algorithm you're aiming to create is: start with a level 0 and ignore everything before it. From the end of the list, find the lowest number before you reach the starting dump; you'll need this backup, so make it the new start of the list. Then, again from the end of the list, find the lowest number before you reach the new starting dump, and so on. E.g. given

0 3 2 5 4 7 6 9

to restore everything you need the 0, 2, 4 and 6, i.e. every second dump. You'll see that wherever you stop in that sequence, no more than 3 backups are required to recover everything.

Nice. Using the algorithm above I get the following:

Sequence            Dumps needed
0 3                 0 3
0 3 2               0 2
0 3 2 5             0 2 5
0 3 2 5 4           0 2 4
0 3 2 5 4 7         0 2 4 7
0 3 2 5 4 7 6       0 2 4 6
0 3 2 5 4 7 6 9     0 2 4 6 9

Every time a dump of level N is taken, earlier tapes of level N become obsolete and are free to go. In this case, that happens every other time.
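
The "walk back from the end, keeping every level lower than anything seen so far" rule is easy to script. A hedged sketch (not from the original post) that reproduces the "Dumps needed" column of the table above:

#!/bin/bash
# Given a sequence of dump levels, print the dumps needed for a full restore.
levels=(0 3 2 5 4 7 6 9)
needed=()
lowest=9999
for (( i=${#levels[@]}-1; i>=0; i-- )); do
    if (( levels[i] < lowest )); then
        needed=("${levels[i]}" "${needed[@]}")
        lowest=${levels[i]}
    fi
done
echo "Dumps needed: ${needed[*]}"   # prints: Dumps needed: 0 2 4 6 9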

lvm vs RAID1

So, what is the better approach: using LVM's own mirroring capabilities, or putting LVM on an mdadm RAID 1?


LVM requires an extra log partition for mirroring. Although that log partition doesn't require much space, it is still an extra pain.

LVM on mdadm RAID1 is more stable than LVM's own mirroring at the moment.

You don't need to partition the disk and create a new PV just for the LVM mirror log. You can use "--alloc anywhere" while creating the mirror volume and LVM will gladly allocate the mirror log on one of your mirror legs.
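
For example (a hedged sketch; the volume group name and size are made up), a mirrored LV without a dedicated log disk could be created like this:

# Two-legged 10G mirror; "--alloc anywhere" lets LVM put the mirror log
# on one of the mirror legs instead of a separate PV.
lvcreate -L 10G -m 1 --alloc anywhere -n mirrorlv vg0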

What Is A Snapshot in lvm?

A snapshot is an operation in which we "freeze" the data on a logical volume, while still enabling writing new data.


This, of course, is an oxymoron.

It is solved by splitting the data to old (written before taking the snapshot) and new (written after taking the snapshot).

The old data resides on the original logical volume.

The new data is written to a different disk.

When an application reads from the device, the underlying kernel code finds where the fresh copy of the data lies, and returns that to the application.

Meanwhile, we may mount the original (frozen) content on a different directory, and access it in read-only mode.

E.g. to back up the data.
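
A hedged example of the idea (volume names, sizes and paths below are assumptions, not from the original text):

# Freeze a point-in-time view of /dev/vg0/data in a 1G snapshot volume.
lvcreate -L 1G -s -n data_snap /dev/vg0/data
# Mount the frozen view read-only elsewhere and back it up.
mkdir -p /mnt/snap
mount -o ro /dev/vg0/data_snap /mnt/snap
tar czf /root/data-backup.tar.gz -C /mnt/snap .
# Clean up when done.
umount /mnt/snap
lvremove -f /dev/vg0/data_snap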

Clustering in Linux

When more than one computer works together to perform a task, it's known as clustering.


There are four types of clusters:

• Storage

• High availability

• Load balancing

• High performance

The above classification is basically based on the objective of the clustering.

Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to simultaneously read and write to a single shared file system. A storage cluster simplifies storage administration by limiting the installation and patching of applications to one file system. Red Hat Cluster Suite provides storage clustering through Red Hat GFS.

High-availability clusters provide continuous availability of services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat Cluster Suite provides high-availability clustering through its High-availability Service Management component.

Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load among the cluster nodes. Node failures in a load-balancing cluster are not visible from clients outside the cluster. Red Hat Cluster Suite provides load-balancing through LVS (Linux Virtual Server).

High-performance clusters use cluster nodes to perform concurrent calculations. A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the applications. (High-performance clusters are also referred to as computational clusters or grid computing.)

User-wise bandwidth control

Suppose you want to limit the download speed of a user named test to 1 Mbit/s. Linux provides the iptables and tc commands to help you in this scenario. The HTB algorithm can be implemented on a network interface to control that.


Mark packets originated by user test with mark 6:

iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner test -j MARK --set-mark 6

The following script can help in this situation:

#!/bin/bash

TC=/sbin/tc

IF=eth0

DNLD=1mbit

start() {

$TC qdisc add dev $IF root handle 1: htb default 30

$TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD

$TC filter add dev $IF protocol ip parent 1:0 prio 1 handle 6 fw flowid 1:1

}

stop() {

$TC qdisc del dev $IF root

}

restart() {

stop

sleep 1

start

}

show() {

$TC -s qdisc ls dev $IF

}

case "$1" in

start)

echo -n "Starting bandwidth shaping: "

start

echo "done"

;;

stop)

echo -n "Stopping bandwidth shaping: "

stop

echo "done"

;;

restart)

echo -n "Restarting bandwidth shaping: "

restart

echo "done"

;;

show)

echo "Bandwidth shaping status for $IF:"

show

echo ""

;;

*)

echo "Usage: tc.bash {start|stop|restart|show}"

;;

esac

exit 0

/etc/resolv.conf directives

There are five main configuration directives that one can use in /etc/resolv.conf:


domain (domain name of the host, also set using the hostname command)

search (which domains to search)

nameserver (DNS server IP)

sortlist (to specify preferred subnets) and

options (to specify options such as timeout and retry)

--------------------------------------------

sample /etc/resolv.conf entry

-------------------

domain test.edu

nameserver 0.0.0.0

nameserver 10.11.0.200

nameserver 10.11.0.101

options timeout:2

samba : NTFS full control can be applied on a file, why not on directories?

With Samba 3.3.x, we moved to using the returned Windows permissions (as mapped from POSIX ACLs) to control all file access. This gets us closer to Windows behavior, but there's one catch. "Full Control" includes the ability to delete a file, but in POSIX the ability to delete a file belongs to the containing directory, not the file itself.


So when we return the Windows permissions for a file ACL with "rwx" set, by default we'd like to map to "Full Control" (see the default setting of the parameter acl map full control), but we must remove the DELETE_ACCESS flag from the mapping, as that is not a permission that is granted. Thus the ACL editor doesn't see "DELETE_ACCESS" in the returned ACE entry, and so doesn't believe it's "Full Control".

If we don't remove the DELETE_ACCESS bit, the client will open a file for delete, and successfully get a file handle back, but the delete will fail when the set file info (delete this file) call is made. Windows clients only check the error return on the open for delete call, not the actual set file info that allows the delete; if you fail that call, Windows Explorer silently ignores the error, tells you you have deleted the file, but the file is still there and will reappear on the next directory refresh, thus confusing users.
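
The parameter mentioned above lives in smb.conf. A hedged illustration (the share name and path are made up):

[data]
    path = /srv/data
    read only = no
    # Map rwx in the POSIX ACL to Windows "Full Control" (minus DELETE_ACCESS,
    # for the reason explained above). This is the default in Samba 3.3.x.
    acl map full control = yes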

Implementing cluster

For the last few days, I was looking into implementing a cluster on my single laptop. The idea was to become more familiar with clustering. The LinuxQuestions.org guys helped me in this direction, and I decided to use VMware Server 2.1 to implement clustering among guest nodes. My laptop has Ubuntu 9.1 installed, and I installed RHEL 5.1 as a guest using VMware Server. Going through docs on clustering using VMware, I concluded that I had to add a virtual SCSI disk with a different bus allocation to my virtual guests. I added scsi1 and then changed some configuration in the guest .vmx files, as shown below. Earlier, my major concern was how to implement a disk that could be shared among the guests.


The following modifications were made in the .vmx file:

disk.locking = false

scsi1.present = true

scsi1.sharedbus = true

scsi1.virtualdev= "lsilogic"

scsi1:0.present = true

scsi1:0.filename = "d:virtualshareddisk"

scsi1:0.mode = "independent-persistent"

scsi1:0.devicetype = "disk"

After that I restarted my guests. I jumped with joy when I found that the 'fdisk -l' command listed my new disk.

So now I have a disk that can be shared among my individual guests.

Since I had already decided to use OCFS2 as the cluster file system, I installed ocfs2-'uname -r' and ocfs2-tools. For managing it graphically I also installed ocfs2console (remember to replace 'uname -r' with your kernel version).

After installation, I noticed that two new script files were created inside /etc/init.d: o2cb and ocfs2. Now it was time to configure OCFS2, so I executed the script:

root# cd /etc/init.d

root# ./o2cb configure

The above command generated an error that cluster.conf was not found, so I created /etc/ocfs2/cluster.conf with the following details:

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.11.90
        number = 1
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.11.100
        number = 2
        name = node2
        cluster = ocfs2

Now execute:

root# cd /etc/init.d

root# ./o2cb configure

This time it completed successfully.

After that, create a new partition on /dev/sdb and then execute:

root# mkfs.ocfs2 -b 4k -C 32k -N4 -L shareddata /dev/sdb1 --fs-feature-level=max-compat

A clustered file system has now been created on /dev/sdb1; mount it on the guests:

#mount -t ocfs2 /dev/sdb1 /mnt/shared

Great, it all worked.

What is KVM?

On September 2, 2009, Red Hat announced the availability of the fourth update to its Red Hat Enterprise Linux 5 platform. With this update Red Hat offers a new sort of virtualization known as KVM.


So now you don't have to bother about new commands to test the performance of a guest machine. KVM makes possible uniform support for the complete Linux environment, with no different treatment for host and guest.

In December 2006, Linus Torvalds announced that new versions of the Linux kernel would include the virtualization tool known as KVM (Kernel-based Virtual Machine).

KVM merges the hypervisor with the kernel, thus reducing redundancy and speeding up execution time. A KVM driver communicates with the kernel and acts as an interface for userspace virtual machines. Memory management and process scheduling are done by the kernel itself.
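
A quick, hedged way to check whether a machine can actually use KVM (standard commands, not from the original post):

egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU has hardware virtualization extensions
lsmod | grep kvm                      # is the kvm driver (and kvm_intel/kvm_amd) loaded?
ls -l /dev/kvm                        # the device node that userspace (qemu-kvm) talks to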

Multi-Level Security in SELINUX

Having information of different security levels on the same computer systems poses a real threat. It is not a straightforward matter to isolate different information security levels, even though different users log in using different accounts, with different permissions and different access controls.


One solution is to purchase dedicated systems for each security level, but this is very expensive. Another, inexpensive, solution is to use the MLS feature of SELinux.

The term multi-level arises from the defense community's security classifications: Confidential, Secret, and Top Secret.

The Bell-LaPadula (BLP) model is used in SELinux to protect multi-level data.

Under such a system, users, computers, and networks use labels to indicate security levels. Data can flow between like levels, for example between "Secret" and "Secret", or from a lower level to a higher level. This means that users at level "Secret" can share data with one another, and can also retrieve information from Confidential-level (i.e., lower-level), users. However, data cannot flow from a higher level to a lower level. This prevents processes at the "Secret" level from viewing information classified as "Top Secret". It also prevents processes at a higher level from accidentally writing information to a lower level. This is referred to as the "no read up, no write down" model.

Linux From Scratch

If you are thinking about creating your own Linux distro, Linux From Scratch is the right platform to start with. The documentation available on http://www.linuxfromscratch.org/ is very helpful in creating a custom distro. But there are also some points you should consider before starting a new distro. Remember:


"Understand what reason to develop new distro"

Replication method changed in openldap

After struggling a little, I managed to set up replication between two LDAP servers in a master-slave (provider-consumer) fashion.

This is how I achieved LDAP replication on RHEL 5.2 with slapd version 2.3.43.

===Provider ldap server =====

database bdb

suffix "dc=abc,dc=del"

rootdn "uid=root,ou=People,dc=abc,dc=del"

rootpw {SSHA}ifvOmrnBD6xEbsgTbY7n/EikFnKTbbhm

directory /var/lib/ldap/abc.del

index objectClass,entryCSN,entryUUID eq

index uidNumber,gidNumber,loginShell eq,pres

#replication

overlay syncprov

syncprov-checkpoint 1 5

syncprov-sessionlog 100

#monitoring ldap

database monitor

access to *

by dn.exact="uid=root,ou=People,dc=abc,dc=del" read

===Consumer LDAP Server =====

database bdb

suffix "dc=abp,dc=del"

directory /var/lib/ldap/abc.del

rootdn uid=root,ou=People,dc=abc,dc=del

syncrepl rid=000

provider=ldap://10.11.0.105

type=refreshOnly

interval=00:00:20:00

retry="60 +"

searchbase="dc=abc,dc=del"

attrs="*,+"

bindmethod=simple

binddn="uid=root,ou=People,dc=abc,dc=del"

=============================================================
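
Once both servers are running, replication can be verified with a plain search against the consumer. A hedged example (the consumer host name is an assumption):

# Add or modify an entry on the provider, wait for the refresh interval,
# then confirm that it shows up on the consumer.
ldapsearch -x -H ldap://consumer.abc.del -b "dc=abc,dc=del" "(uid=root)"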

Best Practices deploying LVM

1. Use multiple volume groups to define classes of storage.


2. Use full disk physical volumes over partitions (see the sketch after this list).

3. I like unique naming of volume groups, just so that if a drive lands elsewhere and presents itself to the system, it will not collide with existing volume group names. (But there is a good reason for not doing this... it's really only a factor if you have a tendency to throw drives into different computers all of the time.)
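
To illustrate point 2, here is a hedged sketch (the device and volume group names are made up) of using a whole disk, rather than a partition, as a physical volume:

pvcreate /dev/sdc             # use the whole disk as a PV - no partition table needed
vgcreate backup_vg /dev/sdc
# The partition-based alternative needs an extra step first:
#   fdisk /dev/sdc    (create a partition and set its type to 8e)
#   pvcreate /dev/sdc1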

How to set the maximum segment size for a TCP connection?

Using iptables we can do this:


root# iptables -I FORWARD -o eth2 -p tcp --syn -j TCPMSS --set-mss 1440

This clamps the maximum segment size (MSS) of forwarded TCP connections to 1440 bytes.

You can test it using:

root# tshark -n -i eth2 tcp port 80

Accidentally removed an LVM volume, can I restore it?

The following command can be helpful in this scenario:

vgcfgrestore

Also look at your /etc/lvm/archive/ for all archived metadata.

You should be able to use the '--list' option.
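
A hedged example of the recovery steps (the volume group name and archive file name are made up):

vgcfgrestore --list vg0                                        # see which archived metadata versions exist
vgcfgrestore -f /etc/lvm/archive/vg0_00042-1234567890.vg vg0   # restore the pre-removal metadata
vgchange -ay vg0                                               # reactivate so the recovered LV reappears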

Which overlays are present in my OpenLDAP server?

Use the monitor backend and then search with the following command:


ldapsearch -b cn=overlays,cn=monitor -s sub monitoredInfo

Setting up the monitor backend is also not a big deal.

Enter the following entries in /etc/openldap/slapd.conf:

=================

database monitor

access to *

by dn.exact="cn=Manager,dc=example,dc=com" read

by * none

=====================

Move LVM Volume Group to another computer

To move a volume group from one system to another, perform the following steps:


Make sure that no users are accessing files on the active volumes in the volume group, then unmount the logical volumes.

Use the -a n argument of the vgchange command to mark the volume group as inactive, which prevents any further activity on the volume group.

Use the vgexport command to export the volume group. This prevents it from being accessed by the system from which you are removing it.

After you export the volume group, the physical volume will show up as being in an exported volume group when you execute the pvscan command.

When the disks are plugged into the new system, use the vgimport command to import the volume group, making it accessible to the new system.

Activate the volume group with the -a y argument of the vgchange command.

Mount the file system to make it available for use.
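
Put together as commands, the procedure might look like this (a hedged sketch; the volume group, logical volume and mount point names are assumptions):

# On the old system:
umount /mnt/data
vgchange -a n myvg
vgexport myvg

# Move the disks, then on the new system:
pvscan
vgimport myvg
vgchange -a y myvg
mount /dev/myvg/datalv /mnt/data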