Tuesday, August 30, 2011

Why is there a difference between du and df output?

# df -h /apps
/dev/mapper/datavg-lv01 70G 60G 10G /apps
# du -sh /apps
50G /apps

If files are deleted (with the rm command) while they are still open or in use by a Linux program/process, the notorious "open file descriptor" problem arises: the kernel does not free the disk blocks until the last process holding the file open closes it. Since df asks the filesystem for block usage while du merely walks the directory tree, the two report different figures for used and free disk space.
In order to resolve this fake "disk space full" problem, i.e. to reclaim the "used" disk space, you need to kill or terminate the processes that still hold the deleted files open.
Once these processes are terminated, the "open file descriptor" problem is resolved, and both the du and df commands will agree on the real used and free disk space!
How do you find and terminate the processes holding deleted files open, in order to resolve the difference in used disk space reported by du and df?
For this particular scenario, the lsof command (list open files) sheds light:
lsof | grep "deleted" or
lsof | grep "/apps" (rather long and messy)
and look for the Linux process ID in the second column of the lsof output. The seventh column shows the size of the file that was "deleted" but is still held open.
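If killing the offending process is not an option (a long-running daemon, say), on Linux you can often reclaim the space by truncating the deleted file through /proc. A minimal sketch, assuming lsof reported PID 1234 holding the deleted file on file descriptor 4 (both numbers hypothetical):

# > /proc/1234/fd/4

This truncates the deleted-but-open file to zero bytes and releases its blocks without terminating the process.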

How can you recreate the "open file descriptor" problem that makes df and du report different used disk space?
  1. Create a 500MB file in my /home file system:
dd if=/dev/zero of=/home/lokams bs=1024 count=500000
  2. Run an md5 checksum against the 500MB file with the md5sum command:
md5sum /home/lokams
  3. Now, open another session and remove /home/lokams while md5sum is still computing its checksum:
rm /home/lokams
  4. Now both the df and du commands will report different used/free disk space, caused by the "open file descriptor" problem:
df -h; du -h --max-depth=1 /home

How to make a LiveCD detect and mount LVM partitions?


Reestablish Volume Group

To tap into the volume group you wish to work with, make sure the filters in /etc/lvm/lvm.conf allow the /dev/md? devices to be seen, and execute the following:
[tempsrv] # vgscan
Reading all physical volumes. This may take a while...
Found volume group "rootvg" using metadata type lvm2


which should display the volume group (here it is "rootvg") associated with the md device you enabled. Then, to make the logical volumes (LVs) available for mounting, execute the following:
[tempsrv] # vgchange -ay rootvg

Now Mount the Logical Volumes:

Now all you need to do is mount the reestablished LVs. I find this an excellent time to make use of Bash:
[tempsrv] # for i in `ls /dev/rootvg`; do mount /dev/rootvg/$i /mnt/$i; done;
Of course, you need to create the destination mount points before running the script.
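If the mount points do not exist yet, a slightly more defensive version of the same loop (a sketch; it assumes the LVs appear under /dev/rootvg) creates them on the fly:

[tempsrv] # for lv in /dev/rootvg/*; do d=/mnt/$(basename $lv); mkdir -p $d; mount $lv $d; done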

Search and replace recursively on a directory in Linux

Here is a small bash shell script to make life simpler. It recursively searches a directory's regular files for a string and replaces it with a new string.

--------------------------------------------------------------------------------
#!/bin/bash
# This script will search all regular files for a string
# supplied by the user and replace it with another string.
function usage {
echo ""
echo "Search/replace script"
echo "Usage: $0 searchstring replacestring"
echo "Remember to escape any special characters in the searchstring or the replacestring"
echo ""
}

# check for the two required parameters
if [ ${#1} -gt 0 ] && [ ${#2} -gt 0 ];
then

for f in `find . -type f`;
do
grep -q "$1" "$f"
if [ $? = 0 ]; then
cp "$f" "$f.1bak"
echo "The string $1 will be replaced with $2 in $f"
sed "s/$1/$2/g" < "$f.1bak" > "$f"
rm "$f.1bak"
fi
done

else
# print usage information
usage
fi
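On Linux systems with GNU sed, the same job can be done without a helper script. A one-liner sketch (sed -i edits files in place, so try it on a copy of the directory first):

find . -type f -exec sed -i 's/searchstring/replacestring/g' {} +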

tar and gzip/bzip2 in a single command on AIX

These commands are helpful when your tar does not have the "z" option. On Linux you can specify "z" directly in the tar command, but on AIX you cannot.
To compress:

"tar cvf - abc | gzip > abc.tar.gz"
"tar cvf - abc | bzip2 > abc.tar.bz2"
To uncompress:

"gunzip < abc.tar.gz | tar xvf -"

"bzip2 < abc.tar.bz2 | tar xvf -"

List folders / directories by size in Linux / AIX / Windows

To list the directory sizes in kilobytes, largest at the top (the first form uses the older sort syntax, the second the modern equivalent):
du -sk * | sort +0nr
du -sk * | sort -nr

To list the directory sizes in megabytes, largest at the top:
du -sm * | sort +0nr
du -sm * | sort -nr

To list the directory sizes in kilobytes, largest at the bottom:
du -sk * | sort +0n
du -sk * | sort -n

To list the directory sizes in megabytes, largest at the bottom:
du -sm * | sort +0n
du -sm * | sort -n

To list the directory sizes in human-readable format (a mix of kilobytes, megabytes and gigabytes), largest at the bottom:
du -s *|sort -n|cut -f 2-|while read a;do du -hs $a;done
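With newer GNU coreutils (where sort supports the -h flag), the same result comes much more cheaply, since du runs only once:

du -sh * | sort -h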


To list the size of hidden directories

du -sk .[a-z]* | sort +0nr

To list the size of all the files and directories, including hidden ones:
du -sk .[a-z]* * | sort +0n

Windows explorer Folder size extension

http://foldersize.sourceforge.net/

Download the package from the above URL and install it.

  • After the installation, the Folder Size column is available to Explorer, but Explorer isn't displaying it yet. Open an Explorer window in Details view.
  • Right-click on the column headers to see a list of columns you can add. Choose Folder Size.
  • Now we can replace the existing Size column with the new Folder Size column. Right-click on the column headers and uncheck the Size column. Drag the Folder Size column header to where Size used to be.
  • Make this the default view for all folders. Go to Folder Options from the Tools menu. In the View tab, click Apply to All Folders.

How to forcefully unmount a Linux / AIX / Solaris disk partition?

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or holding a file open), but the most important one is preventing data loss.

To find out which processes are active on the partition:

[root@tempsrv ~]# lsof | grep "/mnt"
ssh 22883 lokams cwd DIR 253,1 4096 193537 /mnt
vi 22909 root cwd DIR 253,1 4096 193537 /mnt

/** or **/

[root@tempsrv ~]# fuser -mu /mnt
/mnt: 22883c(lokams) 22909c(root)
[root@tempsrv ~]#

The above output tells us that users "lokams" and "root" have ssh and vi processes running that are using /mnt. All you have to do is stop those processes and run umount again. As soon as those programs terminate their tasks, the device will no longer be busy and you can unmount it with the following command:

umount /mnt

To unmount /mnt forcefully without checking which processes are currently active:
fuser -km /mnt

-k : Kill processes accessing the file.
-m : The name specifies a file on a mounted file system or a block device that is mounted. In the above example we are using /mnt.

You can also try the umount command with the -l option:
umount -l /mnt

-l : Also known as a lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is no longer busy. This option requires kernel version 2.4.11 or later.

To unmount a NFS mount point:

umount -f /mnt

-f: Force unmount in case of an unreachable NFS system

The above can be accomplished with the below command in AIX:
fuser -kxuc /mnt
The above can be accomplished with the below command in Solaris:
fuser -ck /mnt

To list the process numbers and user login names of processes using the /etc/passwd file in AIX / Solaris, enter:
fuser -cu /etc/passwd

RPM Packages installation and usage

Note:

  • Normal querying does not require a root login, but to install or uninstall a package you need to be logged in as root.
  • We can also use regular expressions or wildcards with the rpm command.

RPM PACKAGE INSTALLATION/ UNINSTALLATION:

# installing an rpm package with hash printing and in verbose mode

rpm -ivh foobar-1.0-i386.rpm

# to install a package ignoring any dependencies

rpm -ivh --nodeps foobar-1.0-i386.rpm

# upgrading a package with hash printing and in verbose mode

rpm -Uvh foobar-1.1-i386.rpm

# Upgrade only those which are already installed from an RPM repository

rpm -Fvh *.rpm

# uninstall a package

rpm -e foobar

# uninstall ignoring the dependencies

rpm -e --nodeps foobar

# to force install/uninstall

rpm -ivh --force foobar-1.0-i386.rpm

RPM PACKAGE QUERY

# find all those packages which are installed on your system

rpm -qa | sort | less

rpm -qa | sort > rpmlist

# find out all the files installed by an rpm package

rpm -ql foobar

rpm -qpl foobar-1.0-i386.rpm

# search for an installed package

rpm -qa | grep foobar

# search for a specific file in a rpm repository

for i in *.rpm ; do rpm -qpl $i | grep filename && echo $i ; done

# find out which package a directory/file (say /etc/skel) belongs to

rpm -qf /etc/skel

rpm -q --whatprovides /etc/skel

# to see what config files are installed by a package

rpm -qc foobar
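# verify an installed package against the rpm database (reports files whose
# size, checksum, permissions or ownership have changed since installation)

rpm -V foobar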

MISC

# to do a test walk-through of a package installation, use

rpm -ivh --test foobar-1.1-i386.rpm

Similarly, to uninstall a package without considering dependencies, use

# rpm -e --nodeps foobar

To force install a package (same as using "--replacefiles" and "--replacepkgs" together), use the following.

It is like installing a package with no questions asked :) Use it with caution; this option can make some of your existing software unusable or unstable.

# rpm -i --force foobar-1.0-i386.rpm

To exclude the documentation of a package while installing, useful in case of a minimal, stripped-down installation

# rpm -i --excludedocs foobar-1.0-i386.rpm

To include documentation while installing (by default this option is enabled); this option is useful only if documentation has been set to be excluded in "/etc/rpmrc", "~/.rpmrc" or "/usr/lib/rpm/rpmrc"

# rpm -ivh --includedocs foobar-1.0-i386.rpm

To display debug info while installing, use the command below.

When using this option it is not necessary to specify the "-v" verbose option, as the debug information provided by the rpm command is verbose by default.

# rpm -ih --test -vv foobar-1.0-i386.rpm

As already discussed, the combined "-ih" option tells rpm to do the installation with hash printing, "--test" tells rpm to only do a walkthrough rather than the actual installation, and "-vv" asks rpm to also print the debug information.

To upgrade a package (i.e. uninstall the previous version and install a newer version), use

# rpm -U -v -h foobar-1.1-i386.rpm

To permit an "upgrade" to an older package version (i.e. a downgrade), use

# rpm -U -v -h --oldpackage foobar-1.0-i386.rpm

To list all the rpms installed on your system, use

$ rpm -qa

One can pipe the output of the above command to another shell command, e.g.

$ rpm -qa | less

$ rpm -qa | grep "foobar"

$ rpm -qa > installed_rpm.lst

  • Use your imagination for more combinations; you may even use wildcards.

About umask


The umask command sets the default file creation permissions.

When a file is created, its permissions are set by default depending on the umask setting. This value is usually set for all users in /etc/profile and can be obtained by typing:

# umask

The default umask value is usually 022. It is an octal number which indicates what rights will be removed by default from all new files. For instance, 022 indicates that write permission will not be given to group and other.

With a umask of 000, files get mode 666 and directories get mode 777. As a result, with the default umask value of 022, newly created files get mode 644 (666 - 022 = 644) and directories get mode 755 (777 - 022 = 755). (Strictly speaking the umask bits are masked out rather than subtracted, but for the usual values the subtraction gives the same result.)

In order to change the umask value, simply give the umask command an octal number. For instance, if you want all new directories to get permissions rwxr-x--- and files to get permissions rw-r----- by default (modes 750 and 640), you need a umask value which removes all rights from other and write permission from the group: 027. The command to use is:

# umask 027
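A quick illustration of the effect (the output below is what you would typically see; timestamps and ownership will differ on your system):

# umask 027
# touch newfile ; mkdir newdir
# ls -ld newfile newdir
-rw-r----- 1 root root    0 Mar 10 12:50 newfile
drwxr-x--- 2 root root 4096 Mar 10 12:50 newdir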

About SUID, SGID and Sticky bit


Set user ID, set group ID, sticky bit

In addition to the basic permissions discussed above, there are also three bits of information defined for files in Linux:

SUID or setuid: change user ID on execution. If the setuid bit is set, when the file is executed by a user the process runs with the rights of the file's owner.
SGID or setgid: change group ID on execution. Same as above, but the process inherits the rights of the group of the file's owner on execution. For directories it also may mean that a new file created in the directory will inherit the group of the directory (and not that of the user who created the file).
Sticky bit: It was originally used to make a process "stick" in memory after it finished; that usage is now obsolete. Currently its use is system dependent, and it is mostly used to prevent deletion of files belonging to other users in a folder where you have "write" access.

Numeric representation

Octal digit   Binary value   Meaning
0             000            setuid, setgid, sticky bits are cleared
1             001            sticky bit is set
2             010            setgid bit is set
3             011            setgid and sticky bits are set
4             100            setuid bit is set
5             101            setuid and sticky bits are set
6             110            setuid and setgid bits are set
7             111            setuid, setgid, sticky bits are set
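This digit is simply placed in front of the usual three-digit octal mode. For example:

# chmod 4755 myscript.sh   (setuid: -rwsr-xr-x)
# chmod 2755 myscript.sh   (setgid: -rwxr-sr-x)
# chmod 1777 /tmp          (sticky bit: drwxrwxrwt)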

Textual representation

SUID: if set, it replaces the "x" in the owner permissions with "s" if the owner has execute permission, or with "S" otherwise.

Examples:
-rws------ both owner execute and SUID are set
-r-S------ SUID is set, but owner execute is not set

SGID: if set, it replaces the "x" in the group permissions with "s" if the group has execute permission, or with "S" otherwise.

Examples:
-rwxrws--- both group execute and SGID are set
-rwxr-S--- SGID is set, but group execute is not set

Sticky: if set, it replaces the "x" in the others permissions with "t" if others have execute permission, or with "T" otherwise.

Examples:
-rwxrwxrwt both others execute and sticky bit are set
-rwxrwxr-T sticky bit is set, but others execute is not set

Setting the sticky bit on a directory : chmod +t

If you have a look at the /tmp permissions, in most GNU/Linux distributions, you'll see the following:

lokams@tempsrv# ls -l | grep tmp
drwxrwxrwt 10 root root 4096 2006-03-10 12:40 tmp

The "t" in the end of the permissions is called the "sticky bit". It replaces the "x" and indicates that in this directory, files can only be deleted by their owners, the owner of the directory or the root superuser. This way, it is not enough for a user to have write permission on /tmp, he also needs to be the owner of the file to be able to delete it.

In order to set or to remove the sticky bit, use the following commands:

# chmod +t tmp
# chmod -t tmp

Setting the SGID attribute on a directory : chmod g+s

If the SGID (Set Group Identification) attribute is set on a directory, files created in that directory inherit its group ownership. If the SGID is not set the file's group ownership corresponds to the user's default group.

In order to set the SGID on a directory or to remove it, use the following commands:

# chmod g+s directory
# chmod g-s directory

When set, the SGID attribute is represented by the letter "s" which replaces the "x" in the group permissions:

# ls -l directory
drwxrwsr-x 10 george administrators 4096 2006-03-10 12:50 directory

Setting SUID and SGID attributes on executable files : chmod u+s, chmod g+s

By default, when a user executes a file, the resulting process has the same permissions as the user: it inherits the user's identification and default group.

If you set the SUID attribute on an executable file, the resulting process doesn't use the executing user's identification but that of the file owner.

For instance, consider the script myscript.sh, which tries to write into mylog.log:

# ls -l
-rwxrwxrwx 10 george administrators 4096 2006-03-10 12:50 myscript.sh
-rwxrwx--- 10 george administrators 4096 2006-03-10 12:50 mylog.log

As you can see in this example, George gave full permissions to everybody on myscript.sh but he forgot to do so on mylog.log. When Robert executes myscript.sh, the process runs using Robert's user identification and Robert's default group (robert:senioradmin). As a consequence, myscript fails and reports that it can't write in mylog.log.

In order to fix this problem George could simply give full permissions to everybody on mylog.log. But this would make it possible for anybody to write in mylog.log, and George only wants this file to be updated by his myscript.sh program. For this he sets the SUID bit on myscript.sh:

# chmod u+s myscript.sh

As a consequence, when a user executes the script the resulting process uses George's user identification rather than the user's. If set on an executable file, the SUID makes the process inherit the owner's user identification rather than the one of the user who executed it. This fixes the problem, and even though nobody but George can write directly in mylog.log, anybody can execute myscript.sh which updates the file content.

Similarly, it is possible to set the SGID attribute on an executable file. This makes the process use the owner's default group instead of the user's. This is done by:

# chmod g+s myscript.sh

By setting SUID and SGID attributes the owner makes it possible for other users to execute the file as if they were him or members of his default group.

The SUID and SGID are represented by an "s" which replaces the "x" character in the user and group permissions respectively:

# chmod u+s myscript.sh
# ls -l
-rwsrwxrwx 10 george administrators 4096 2006-03-10 12:50 myscript.sh
# chmod u-s myscript.sh
# chmod g+s myscript.sh
# ls -l
-rwxrwsrwx 10 george administrators 4096 2006-03-10 12:50 myscript.sh

Ethernet bonding in Linux

Bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. This helps with high availability and improves performance.

Steps for bonding in Fedora Core and Redhat Linux

Step 1.

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding config file.

cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.10.100
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

Step 2.

Modify the eth0, eth1 and eth2 configurations as shown below. Comment out or remove the IP address, netmask, gateway and hardware address from each of these files, since these settings should only come from the ifcfg-bond0 file above.

cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Step 3.

Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf:

# bonding commands
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Note: Here we configured the bonding mode as "balance-alb". All the available modes are described at the end; choose the mode appropriate to your requirements.
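On newer distributions (RHEL 6 and later, for instance) /etc/modprobe.conf no longer exists; the equivalent lines would instead go into a file under /etc/modprobe.d/, for example /etc/modprobe.d/bonding.conf:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100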

Step 4.

Load the bonding driver module from the command prompt:

$ modprobe bonding

Step 5.

Restart the network, or restart the computer.

$ service network restart # Or restart computer

When the machine boots up check the proc settings.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:13:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done!

RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.

* Mode 0 (balance-rr)
This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets are sent out of the bonded interface, the first is transmitted on the first slave and the second frame on the second slave. The third packet is sent on the first again, and so on. This provides load balancing and fault tolerance.

* Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and only makes it active if the link is lost by the active interface. Only one slave in the bond is active at any instant of time. A different slave becomes active only when the active slave fails. This mode provides fault tolerance.

* Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo slave count. This selects the same slave for each destination MAC address, and provides load balancing and fault tolerance.

* Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. It is the least used mode (only for specific purposes) and provides only fault tolerance.

* Mode 4 (802.3ad)
This mode is known as dynamic link aggregation mode. It creates aggregation groups that share the same speed and duplex settings, and requires a switch that supports IEEE 802.3ad dynamic link aggregation.

* Mode 5 (balance-tlb)
This is called adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.

* Mode 6 (balance-alb)
This is adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. The receive load balancing is achieved by ARP negotiation: the bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.

Hardware and System information tools in Linux

Hardware Lister (lshw) - http://ezix.org/project/wiki/HardwareLiSter

lshw (Hardware Lister) is a small tool to provide detailed information on the hardware configuration of the machine. It can report exact memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, bus speed, etc.


Usage:

lshw [format] [options...], where format can be:

-X to launch the GUI (if available)
-html to activate HTML mode
-xml to activate XML mode
-short to print hardware paths
-businfo to print bus information
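For example, to restrict the short listing to a single device class (the class names are the ones lshw itself prints in its output):

# lshw -short -class disk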

dmidecode:

Dmidecode reports information about your system's hardware as described in your system BIOS. This command gives you vendor name, model name, serial number, BIOS version, asset tag as well as a lot of other details. This will include usage status for the CPU sockets, expansion slots (e.g. AGP, PCI, ISA) and memory module slots, and the list of I/O ports (e.g. serial, parallel, USB). A pretty useful command for sysadmins to prepare their system inventory.

cfg2html:

Cfg2html generates an HTML or plain ASCII report of your Linux/AIX/HP-UX system. It includes configuration information about the kernel, filesystems, security, etc., and can be useful for system documentation.

AIX version of cfg2html can be found here:

http://sourceforge.net/projects/cfg2html/

Linux and HP-UX version of cfg2html can be found here:

http://www.cfg2html.com/

sosreport (son of sysreport):

https://fedorahosted.org/sos/

The command sosreport is a tool that collects information about a system, such as what kernel is running, what drivers are loaded, and various configuration files for common services. It also does some simple diagnostics against known problematic patterns.

Using Hamachi on Linux

LogMeIn Hamachi is a VPN service that easily sets up in 10 minutes, and enables secure remote access to your business network, anywhere there’s an Internet connection.
It works with your existing firewall, and requires no additional configuration. Hamachi is the first networking application to deliver an unprecedented level of direct peer-to-peer connectivity. It is simple, secure, and cost-effective.
Hamachi is a simple way of making a VPN between different computers. It can be used for anything from printing to your office printer from home, to sharing heavily encrypted files with your friends over the Internet.
Installing Hamachi
The first thing you need to do is download Hamachi from https://secure.logmein.com/products/hamachi/list.asp
Extract the archive and run make as root:
# tar zxvf hamachi-0.9.9.9-20-lnx.tar.gz
# cd hamachi-0.9.9.9-20-lnx
# make install
Now you need to start the tuncfg daemon as root
# /sbin/tuncfg
Configuring
To begin using Hamachi you must first initialize it as your own user:

$ hamachi-init

When you have completed the initialization you can start Hamachi by simply typing
$ hamachi start
Since you are starting hamachi for the first time you need to tell the client to go online
$ hamachi login
You may want to change your nickname with
$ hamachi change-nick
Once logged in you can create a network with
$ hamachi create networkname
You can then give the network name and password to your friends and ask them to join you. Once you have friends in your network you can list their IP-addresses with
$ hamachi list
You can then use the listed IP addresses to connect to your friends just as if they were on your LAN.
For more commands and options check the Hamachi Readme.
Automatically starting Hamachi
To get Hamachi to automatically start when you start your computer you need to create a startup script for it.

#!/bin/bash
############################################
### This is a startup script for hamachi ###
############################################

USER=testuser
case "$1" in
start)
/sbin/tuncfg
/bin/su - $USER -c "hamachi start"
;;
stop)
/bin/su - $USER -c "hamachi stop"
;;
restart|force-reload)
/bin/su - $USER -c "hamachi stop"
/bin/su - $USER -c "hamachi start"
;;
*)
exit 1
;;
esac

exit 0

################ End of Hamachi Startup Script ##################

Change the USER variable to your username in the script. Then make the script executable and move it to /etc/init.d/
# chmod +x hamachi
# mv hamachi /etc/init.d
You then need to link the script to the appropriate runlevel
# ln -s /etc/init.d/hamachi /etc/rc3.d/S99hamachi
# ln -s /etc/init.d/hamachi /etc/rc3.d/K99hamachi
Here 3 is the runlevel at which you want Hamachi to start.

Simple Port Forwarding using IPTABLES

# IP Forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward

# Policy
/sbin/iptables -P INPUT ACCEPT
/sbin/iptables -P OUTPUT ACCEPT
/sbin/iptables -P FORWARD ACCEPT

# IP Masquerade
/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward
/sbin/iptables -A FORWARD -i eth0 -j ACCEPT

# Portforwarding from 10.144.2.21:8888 to 10.144.65.230:80
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8888 -j DNAT --to 10.144.65.230:80
/sbin/iptables -A FORWARD -p tcp -i eth0 -d 10.144.65.230 --dport 80 -j ACCEPT
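To verify that the rules are in place and watch their hit counters:

/sbin/iptables -t nat -L PREROUTING -n -v
/sbin/iptables -L FORWARD -n -v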

find command real-world examples

$ find . -name '*.gif' -exec ls {} \;

The -exec parameter holds the real power. When a file is found that matches the search criteria, the -exec parameter defines what to do with the file. This example tells the computer to:

1. Search from the current directory on down, using the dot (.) just after find.
2. Locate all files that have a name ending in .gif (graphic files).
3. List all found files, using the ls command.

The -exec parameter requires further scrutiny. When a filename is found that matches the search criteria, the find command executes the ls {} string, substituting the filename and path for the {} text. If saturn.gif was found in the search, find would execute this command:

$ ls ./gif_files/space/solar_system/saturn.gif

An important alternative to the -exec parameter is -ok; it behaves the same as -exec, but it prompts you to see if you want to run the command on that file. Suppose you want to remove most of the .txt files in your home directory, but you wish to do it on a file-by-file basis. Delete operations like the UNIX rm command are dangerous, because it's possible to inadvertently delete files that are important when they're found by an automated process like find; you might want to scrutinize all the files the system finds before removing them.

The following command lists all the .txt files in your home directory. To delete the files, you must enter Y or y when the find command prompts you for action by listing the filename:

$ find $HOME/. -name '*.txt' -ok rm {} \;

Each file found is listed, and the system pauses for you to enter Y or y. If you press the Enter key, the system won't delete the file.

If too many files are involved for you to spend time with the -ok parameter, a good rule of thumb is to run the find command with -exec to list the files that would be deleted; then, after examining the list to be sure no important files will be deleted, run the command again, replacing ls with rm.

Both -exec and -ok are useful, and you must decide which works best for you in your current situation. Remember, safety first!
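One more note: find can also terminate -exec with a + instead of \;. In that form the matched filenames are batched into as few command invocations as possible (much like xargs), which is considerably faster on large trees:

$ find . -name '*.txt' -exec ls -l {} +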

Examples:

  • To remove all temp, swap and core files in the current directory.
$ find . \( -name '*.tmp' -o -name '*.swp' -o -name 'core' \) -exec rm {} \;

  • To copy the entire contents of a directory while preserving the permissions, times, and ownership of every file and subdirectory
$ cd /path/to/source/dir
$ find . | cpio -vdump /path/to/destination/dir


  • To list the first line in every text file in your home directory
$ find $HOME/. -name '*.txt' -exec head -n 1 -v {} \; > report.txt
$ less report.txt
  • To maintain LOG and TMP file storage space for applications that generate a lot of these files, you can put the following commands into a cron job that runs daily:
  • The first command compresses (compress -r {}) all the directories (-type d) found in the $LOGDIR directory whose data was last modified more than 24 hours ago (-mtime +0), to save disk space. The second command deletes them (rm -f {}) if they are more than a work-week old (-mtime +5), to increase the free space on the disk. In this way, the cron job automatically keeps the directories for a window of time that you specify.
$ find $LOGDIR -type d -mtime +0 -exec compress -r {} \;
$ find $LOGDIR -type d -mtime +5 -exec rm -f {} \;
  • To find links that point to nothing
$ find / -type l -print | perl -nle '-e || print';
  • To list zero-length files
$ find . -empty -exec ls {} \;
  • To delete all *.tmp files in the home directory
$ find ~/. -name "*.tmp" | xargs rm
(or)
$ find ~/. -name "*.tmp" -exec rm {} \;
  • To see what hidden files in your home directory changed in the last 5 days:
$ find ~ -mtime -5 -name \.\*
  • If you know something has changed much more recently than that, say in the last 14 minutes, and want to know what it was there's the mmin argument:
$ find ~ -mmin -14 -name \.\*
  • To locate files that have been modified since some arbitrary date use this little trick:
$ touch -d "13 may 2001 17:54:19" date_marker
$ find . -newer date_marker 
  • To find files created before that date, use the cnewer and negation conditions:
$ find . \! -cnewer date_marker
  • To find files containing between 600 to 700 characters, inclusive.
$ find . -size +599c -and -size -701c 
Thus we can use find to list files of a certain size:

$ find /usr/bin -size 48k
  • To find empty files
$ find . -size 0c
  • Using the -empty argument is more efficient. To delete empty files in the current directory:
$ find . -maxdepth 1 -empty -exec rm {} \;
  • To locate files belonging to a certain user:
# find /etc -type f \! -user root -exec ls -l {} \;
  • To search for files by the numerical group ID use the -gid argument:
$ find -gid 100
  • To find directories with '_of_' in their name we'd use:
$ find . -type d -name '*_of_*'
  • To redirect the error messages to /dev/null
$ find / -name foo 2>/dev/null
  • To remove all files named core from your system.
# find / -name core | xargs /bin/rm -f
# find / -name core -exec /bin/rm -f '{}' \; # same thing
# find / -name core -delete # same if using Gnu find
  • To find files modified less than 10 minutes ago. I use this right after using some system administration tool, to learn which files got changed by that tool:

# find / -mmin -10
  • When specifying time with find options such as -mmin (minutes) or -mtime (24-hour periods, starting from now), you can specify a number n to mean exactly n, -n to mean less than n, and +n to mean more than n. For example:

# find . -mtime 0 # find files modified within the past 24 hours
# find . -mtime -1 # find files modified within the past 24 hours
# find . -mtime 1 # find files modified between 24 and 48 hours ago
# find . -mtime +1 # find files modified more than 48 hours ago
# find . -mmin +5 -mmin -10 # find files modified between 6 and 9 minutes ago
  • To find all files containing “house” in the name that are newer than two days and are larger than 10K, try this:

# find . -name "*house*" -size +10240c -mtime -2
  • The -xdev prevents the file “scan” from going to another disk volume (refusing to cross mount points, for example). Thus, you can look for all regular directories on the current disk from a starting point like this:

# find /var/tmp -xdev -type d -print
  • To find world writables in your system:

# find / -perm 777 | xargs ls -ld | grep -v ^l | grep -v ^s
# find / -perm 666 | xargs ls -ld | grep -v ^l | grep -v ^s

# find . -perm +o=w -exec ls -ld {} \; | grep -v ^l | grep -v ^s | grep -v ^c
or
# find . -perm +o=w | xargs ls -ld | grep -v ^l | grep -v ^s | grep -v ^c
  • To find the orphan files and directories in your system:

# find / -nouser -nogroup | xargs ls -ld
  • To find the files changed in the last 5 minutes and move them to a different folder:

# find /tmp -mmin -5 -type f -exec mv {} /home/lokams \;
(or)
# mv `find . -mmin -5 -type f` /tmp/
(or)
# find . -mmin -10 -type f | xargs -t -I {} mv {} /tmp/
  • To search on multiple directories

$ find /var /etc /usr -type f -user root -perm 644 -name '*ssh*'
  • To find and list all regular files set SUID to root (or anyone else with UID 0 ;)
# find / -type f -user 0 -perm -4000
  • To find all regular files that are world-writable and removes world-writability:
# find / -type f -perm -2 -exec chmod o-w {} \;
  • To find all files owned by no one in particular and give them to root:
# find / -nouser -exec chown root {} \;
  • To find all files without group ownership and give them to the system group:
# find / -nogroup -exec chgrp system {} \;
  • To find and gzip regular files in current directory that do not end in .gz
$ gzip `find . -type f \! -name '*.gz' -print`
  • To find all empty files in my home directory and delete after being prompted:
$ find $HOME -size 0 -ok rm -f {} \;
  • To find all files or symlinks in /usr not named fred:
$ find /usr \( -type f -o -type l \) \! -name fred
If you have a file with spaces, control characters, a leading hyphen or other nastiness in its name and you want to delete it, here's how find can help. Forget the filename; use the inode number instead. Say we have a file named "-rf .", spaces and all. We wouldn't dare attempt removing it the normal way for fear of the shell treating the name as 'rm -rf .' at the command line.

$ echo jhjhg > "-rf ."
$ ls -la
total 4
-rw-r----- 1 mongoose staff 6 Nov 07 15:57 -rf .
drwxr-x--- 2 mongoose staff 512 Nov 07 15:57 .
drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..

Find the inode number of the file using the -i flag of ls:

$ ls -lai .
total 4
18731 -rw-r----- 1 mongoose staff 6 Nov 07 15:57 -rf .
18730 drwxr-x--- 2 mongoose staff 512 Nov 07 15:57 .
1135 drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..
^^^^^
There it is: inode 18731. Now plug it into find and make find delete it:

$ find . -inum 18731 -ok rm {} \;
rm ./-rf . (?) y
$ ls -la
total 3
drwxr-x--- 2 mongoose staff 512 Nov 07 16:03 .
drwxr-xr-x 3 mongoose bin 1024 Nov 07 15:53 ..

Simple usage of "tcpdump"

Tcpdump is a really great tool for network security analysts; you can dump packets that flow within your network into a file for further analysis. With some filters you can capture only the packets of interest, which reduces the size of the saved dump and in turn the loading and processing time of packet analysis.

Let's start with capturing packets based on network interface, ports and protocols. Assume I want to capture tcp packets that flow over eth1, port 6881. The dump will be saved to the file test.pcap.

tcpdump -w test.pcap -i eth1 tcp port 6881
Simple, right? What if at the same time I am also interested in packets on udp ports 33210 and 33220?

tcpdump -w test.pcap -i eth1 tcp port 6881 or udp \( 33210 or 33220 \)

'\' is an escape symbol for '(' and ')'. A logical OR here means "plus": in plain language, I want to capture tcp packets flowing over port 6881 plus udp ports 33210 and 33220. Be careful with 'and' in tcpdump filter expressions; it means intersection, which is why I put 'or' rather than 'and' between udp ports 33210 and 33220. The usage of 'and' in tcpdump is illustrated later.

Ok, how about reading pcap that I saved previously?

tcpdump -nnr test.pcap

The -nn tells tcpdump not to resolve IPs and ports to names, and -r reads from the saved file.

Adding -tttt makes the timestamps appear in a more readable format:

tcpdump -ttttnnr test.pcap
How about capturing based on IP? You need to tell tcpdump which IP you are interested in: destination IP or source IP? Let's say I want to sniff on destination IP 10.168.28.22, tcp port 22. How should that be written?

tcpdump -w test.pcap dst 10.168.28.22 and tcp port 22

So the 'and' takes the intersection of destination IP and port.

By default tcpdump captures only the first 96 bytes of each packet (the snapshot length); you can override that size with -s:

tcpdump -w test.pcap -s 1550 dst 10.168.28.22 and tcp port 22
Some versions of tcpdump allow you to define a port range. You can do as below to capture packets on a range of tcp ports.

tcpdump tcp portrange 20-24
Bear in mind that in the line above I didn't specify -w, so it won't write to a file but will just print the captured packets on the screen.
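A filter expression can also be applied while reading back a saved capture, which is handy for slicing a large dump; for example, to show only the SSH traffic from the capture above:

tcpdump -nnr test.pcap tcp port 22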

Bash Shortcuts and Tips

Repeating an argument

You can repeat the last argument of the previous command in multiple ways. Have a look at this example:

$ mkdir /path/to/dir 
$ cd !$
The second command might look a little strange, but it will just cd to /path/to/dir.

Some keyboard shortcuts for editing

There are some pretty useful keyboard shortcuts for editing in bash. They might appear familiar to Emacs users:

• Ctrl + a => Return to the start of the command you're typing
• Ctrl + e => Go to the end of the command you're typing
• Ctrl + u => Cut everything before the cursor to a special clipboard
• Ctrl + k => Cut everything after the cursor to a special clipboard
• Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
• Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
• Ctrl + w => Delete the word / argument left of the cursor
• Ctrl + l => Clear the screen

Redirecting both Standard Output and Standard Error:
# ls -ltR > /tmp/temp.txt 2>&1
(The order matters: redirect stdout to the file first, then duplicate it for stderr.)
The history settings below can go in your .bashrc.
Make Bash append rather than overwrite the history on disk:

# shopt -s histappend
Whenever displaying the prompt, write the previous line to disk:

# export PROMPT_COMMAND='history -a'
To erase duplicate entries in history:

# export HISTCONTROL=erasedups
(or)
# export HISTCONTROL=ignoreboth
To see the history with timestamps

# export HISTTIMEFORMAT="%d/%m/%Y-%H:%M:%S "
To set the size of the history. HISTSIZE is the number of commands to remember in the command history; the default value is 500.

# export HISTSIZE=500
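Putting it together, the history settings above could live in ~/.bashrc as one block:

# ~/.bashrc history settings (combining the options above)
shopt -s histappend
export PROMPT_COMMAND='history -a'
export HISTCONTROL=erasedups
export HISTTIMEFORMAT="%d/%m/%Y-%H:%M:%S "
export HISTSIZE=500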

Searching the Past

  • Ctrl+R searches previous lines
This puts bash in history-search mode, allowing you to type part of the command you're looking for. As you type, it shows the most recent command containing that string. If the match is too recent, press Ctrl + r again and again to go further back in history. Once you've found the command you were looking for, press Enter to run it.

  • !tcp will execute the previous command which starts with "tcp"

Zettabyte File System (ZFS) in Solaris 10

ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.

ZFS is a new file system in Solaris 10 OS which provides excellent data integrity and performance compared to other file systems (considering the enterprise storage scenario). Unlike previous file systems, it's a 128-bit file system, which means it can scale up to accommodate very large data. It is perhaps the world's first 128-bit file system. But why do we need so much scalability? The reason is simple. In an enterprise, data is continuously stored on servers and it keeps on increasing. Enterprises want to keep as much of this data live as possible, so that it can be quickly retrieved when required.

ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems:

  • Provable integrity - it checksums all data (and metadata), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables, etc...)
  • Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots or power failures
  • Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks
  • Built-in (optional) compression
  • Highly scalable
  • Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc...)
  • Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
  • Many others (variable sector sizes, adaptive endianness, ...)

In traditional file systems, data is stored on a single disk or on a large volume consisting of multiple disks. In ZFS, a pooled storage model is used: every storage device is part of a single expandable storage pool, irrespective of where the data is being written. Each storage device which resides inside the pool can have different file systems, which helps administrators scale the system easily and efficiently; you no longer need to take care of the file system, just add a storage device to the pool. With this architecture, each file system that resides under the pool can share the same amount of space and I/O resources as the pool itself.

ZFS also detects and, where possible, corrects data corruption. For example, when you do an I/O operation the disk may return an error message, say 'Can't read the specified block.' The second case is silent data corruption, wherein you do an I/O operation and the system returns corrupted results. ZFS identifies and, if possible, even corrects these data corruptions, something existing file systems can't do.

Managing existing file systems is also difficult. For example, you upgrade your system, after which you find that the file system doesn't support the machine and you have to copy all the data. This would consume a lot of time, but ZFS helps alleviate it. Moreover, existing file systems have limitations in terms of volumes, file size, etc.
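As a concrete illustration of the pooled model, here is a sketch using the standard Solaris 10 ZFS commands (the pool and device names are made up):

# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/home
# zfs set quota=10g tank/home
# zfs set compression=on tank/home
# zfs snapshot tank/home@monday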

ZFS definitely looks like a great engineering achievement and its makers have all rights to be proud of it. In their own words, they've blown away 20 years of obsolete assumptions and now they refer to ZFS as the last word in filesystems.

When ZFS was first announced, I'm sure many Linux hackers had a thought how it would be a great idea to port such a great filesystem to Linux. Unfortunately, ZFS source is distributed under Sun's CDDL license which is (some say deliberatly) incompatible with the GPL license that Linux kernel uses. So, it looks like there will be no native port of ZFS for Linux in the foreseeable future. What a pity