Wednesday, October 12, 2011

Trying to umount: device is busy

When you try to unmount a file system and umount reports that the device is busy, check who is using it.

Syntax: fuser [options] <mount point | device>
Options:
-c Checks the mounted file system
-k Kills processes using the file system
-m Shows all processes using the file system
-u Displays user IDs
-v Provides verbose output

Try to unmount the file system to see whether it is busy:
# umount /opt/backup
umount: /opt/backup: device is busy.

Check to see what users are currently using the file system:
# fuser -cu /dev/hdc1
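For illustration, the output looks something like this (the PIDs and user names here are hypothetical); the letter after each PID indicates how the process uses the file system, e.g. 'c' for current directory:

# fuser -cu /dev/hdc1
/dev/hdc1:            3347c(root)  4568c(oracle)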

You can also use the lsof command for more details. View all open files:
# lsof /dev/hdc1
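For example, lsof output looks like the following (the processes shown are hypothetical):

# lsof /dev/hdc1
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bash    3347   root  cwd    DIR   22,1     4096    2 /opt/backup
vim     4568 oracle    3u   REG   22,1    10240   17 /opt/backup/notes.txt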
Note: You can either contact the users or terminate any open connections yourself.

To kill the open connections, you can use the fuser command again:
# fuser -ck /opt/backup

Now you should be able to unmount the file system:
# umount /opt/backup
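If killing the processes is not acceptable (say, a critical service holds the mount), a lazy unmount is an alternative: it detaches the file system from the hierarchy immediately and finishes the cleanup once the file system stops being busy. This uses the -l option of umount, available on Linux 2.4.11 and later:

# umount -l /opt/backup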

Did You Benefit from My Articles? Here’s a Quick Way to Thank Me....


Many of you asked me this question: I read your articles regularly and learned a lot from them. I would like to thank Suresh. How can I help?
The answer is simple: help me grow the blog by personally recommending it to your friends and colleagues. Send them this Linux Administration URL and request them to subscribe to the blog.

Some of you have been following this blog for a very long time. Some of you subscribed to the blog recently. Irrespective of how long you’ve been following the blog, you already know that my focus is to publish high quality tutorials on Linux that will educate you, and help you learn and explore Linux and open source technologies.
Send this URL to others: http://www.sureshkumarpakalapati.in/
  • Send an email to your friends and colleagues with the above welcome URL and your personal recommendation of the Suresh blog. Request them to subscribe to the blog so they can learn and explore Linux and open source technologies with us on an ongoing basis.
  • If your company has an internal mailing list (or newsletter, or forum), post a message in it with your personal recommendation of the blog.
  • If you are a student, post a message to your university mailing list informing other students about the blog, and request them to subscribe to it.
  • When you recommend the blog to others, you look smart for pointing them to a high quality Linux blog, they benefit by learning and exploring Linux through our articles, and you help Suresh in the process.
PS: I spend tons of time creating high quality articles for the blog to help you. You can thank me by spending less than 60 seconds sending an email to your friends and colleagues asking them to subscribe to the blog.

Journey of a C Program to Linux Executable in 4 Stages


You write a C program, use gcc to compile it, and you get an executable. It is pretty simple. Right?
Have you ever wondered what happens during the compilation process and how the C program gets converted to an executable?
There are four main stages through which a source code passes in order to finally become an executable.

The four stages for a C program to become an executable are the following:
  1. Pre-processing
  2. Compilation
  3. Assembly
  4. Linking
In Part-I of this article series, we will discuss the steps that the gcc compiler goes through when a C program source code is compiled into an executable.
Before going any further, let's take a quick look at how to compile and run a C program using gcc, with a simple hello world example.
$ vi print.c
#include <stdio.h>
#define STRING "Hello World"
int main(void)
{
	/* Using a macro to print 'Hello World' */
	printf(STRING);
	return 0;
}
Now, let's run the gcc compiler over this source code to create the executable.
$ gcc -Wall print.c -o print
In the above command:
  • gcc – Invokes the GNU C compiler
  • -Wall – gcc flag that enables all warnings. -W stands for warning, and we are passing “all” to -W.
  • print.c – Input C program
  • -o print – Instructs the compiler to name the executable print. If you don't specify -o, the compiler creates the executable with the default name a.out
Finally, execute print which will execute the C program and display hello world.
$ ./print
Hello World
Note: When you are working on a big project that contains several C programs, use the make utility to manage the compilation, as we discussed earlier.
Now that we have a basic idea about how gcc is used to convert a source code into binary, we’ll review the 4 stages a C program has to go through to become an executable.

1. PRE-PROCESSING

This is the very first stage through which a source code passes. In this stage the following tasks are done:
  1. Macro substitution
  2. Stripping of comments
  3. Expansion of the included files
To understand preprocessing better, you can compile the above 'print.c' program using the -E flag, which prints the preprocessed output to stdout.
$ gcc -Wall -E print.c
Even better, you can use the '-save-temps' flag as shown below. The '-save-temps' flag instructs gcc to keep the temporary intermediate files generated during compilation in the current directory.
$ gcc -Wall -save-temps print.c -o print
So when we compile print.c with the -save-temps flag, we get the following intermediate files in the current directory (along with the print executable):
$ ls
print.i
print.s
print.o
The preprocessed output is stored in the temporary file with the extension .i (i.e., 'print.i' in this example).
Now, let's open the print.i file and view its content.
$ vi print.i
......
......
......
......
# 846 "/usr/include/stdio.h" 3 4
extern FILE *popen (__const char *__command, __const char *__modes) ;
extern int pclose (FILE *__stream);
extern char *ctermid (char *__s) __attribute__ ((__nothrow__));

# 886 "/usr/include/stdio.h" 3 4
extern void flockfile (FILE *__stream) __attribute__ ((__nothrow__));
extern int ftrylockfile (FILE *__stream) __attribute__ ((__nothrow__)) ;
extern void funlockfile (FILE *__stream) __attribute__ ((__nothrow__));

# 916 "/usr/include/stdio.h" 3 4
# 2 "print.c" 2

int main(void)
{
printf("Hello World");
return 0;
}
In the above output, you can see that the source file is now filled with lots and lots of information, but at the end of it we can still see the lines of code written by us. Let's analyze these lines first.
  1. The first observation is that the argument to printf() now directly contains the string "Hello World" rather than the macro; in fact, the macro definition and usage have completely disappeared. This confirms the first task: all macros are expanded in the preprocessing stage.
  2. The second observation is that the comment we wrote in our original code is gone. This confirms that all comments are stripped off.
  3. The third observation is that the line '#include <stdio.h>' is missing, and in its place we see a whole lot of code. So it is safe to conclude that stdio.h has been expanded and literally included in our source file. This is how the compiler is able to see the declaration of the printf() function.
When I searched the print.i file, I found that the function printf is declared as:
extern int printf (__const char *__restrict __format, ...);
The keyword ‘extern’ tells us that the function printf() is not defined here; it is external to this file. We will see later how gcc gets to the definition of printf().
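You can reproduce this finding yourself by searching the preprocessed file with grep:

$ grep 'extern int printf' print.i
extern int printf (__const char *__restrict __format, ...);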
Now that we have a decent understanding of what happens during the preprocessing stage, let us move on to the next stage.

2. COMPILING

Once the compiler is done with the pre-processing stage, the next step is to take print.i as input, compile it, and produce an intermediate compiled output. The output file for this stage is 'print.s', which contains assembly-level instructions.
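As a side note, if you want only the assembly output without keeping the other temporaries, gcc's -S flag stops the pipeline right after this compilation stage:

$ gcc -S print.c     # writes print.s and stops before assembling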
Open the print.s file in an editor and view the content.
$ vi print.s
	.file	"print.c"
	.section	.rodata
.LC0:
	.string	"Hello World"
	.text
.globl main
	.type	main, @function
main:
.LFB0:
	.cfi_startproc
	pushq	%rbp
	.cfi_def_cfa_offset 16
	movq	%rsp, %rbp
	.cfi_offset 6, -16
	.cfi_def_cfa_register 6
	movl	$.LC0, %eax
	movq	%rax, %rdi
	movl	$0, %eax
	call	printf
	movl	$0, %eax
	leave
	ret
	.cfi_endproc
.LFE0:
	.size	main, .-main
	.ident	"GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
	.section	.note.GNU-stack,"",@progbits
Though I am not much into assembly-level programming, a quick look suggests that this assembly-level output consists of instructions which the assembler can understand and convert into machine language.

3. ASSEMBLY

At this stage the print.s file is taken as input and an intermediate file, print.o, is produced. This file is also known as the object file.
It is produced by the assembler, which understands and converts a '.s' file with assembly instructions into a '.o' object file containing machine-level instructions. At this stage only the existing code is converted into machine language; function calls like printf() are not resolved.
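A quick way to verify this, assuming the standard GNU binutils are installed: produce just the object file with gcc's -c flag and list its symbols with nm. printf shows up with a 'U' (undefined), i.e., still unresolved (the exact addresses may differ on your system):

$ gcc -c print.c
$ nm print.o
0000000000000000 T main
                 U printf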
Since the output of this stage is a machine-level file (print.o), we cannot view its content directly. If you still try to open print.o and view it, you'll see something that is totally unreadable.
$ vi print.o
^?ELF^B^A^A^@^@^@^@^@^@^@^@^@^A^@>^@^A^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@0^
^@UH<89>å¸^@^@^@^@H<89>ǸHello World^@^@GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3^@^
T^@^@^@^@^@^@^@^AzR^@^Ax^P^A^[^L^G^H<90>^A^@^@^\^@^@]^@^@^@^@A^N^PC<86>^B^M^F
^@^@^@^@^@^@^@^@.symtab^@.strtab^@.shstrtab^@.rela.text^@.data^@.bss^@.rodata
^@.comment^@.note.GNU-stack^@.rela.eh_frame^@^@^@^@^@^@^@^@^@^@^@^
...
...
…
The only thing we can readily make out by looking at the print.o file is the string ELF.
ELF stands for Executable and Linkable Format.
This is a relatively new format for machine-level object files and executables produced by gcc. Prior to this, a format known as a.out was used. ELF is said to be a more sophisticated format than a.out (we might dig deeper into the ELF format in a future article).
Note: If you compile your code without specifying the name of the output file, the output file produced is still named 'a.out', but its format has now changed to ELF. It is just that the default executable file name remains the same.
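If you want a readable view of print.o instead of raw bytes, the binutils tools readelf and objdump can decode the ELF structure for you:

$ readelf -h print.o     # shows the ELF header (class, type, machine, etc.)
$ objdump -d print.o     # disassembles the machine code back into assembly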

4. LINKING

This is the final stage, at which all the linking of function calls with their definitions is done. As discussed earlier, up to this stage gcc doesn't know about the definitions of functions like printf(). Until the compiler knows exactly where all of these functions are implemented, it simply uses a placeholder for each function call. It is at this stage that the definition of printf() is resolved and the actual address of the function is plugged in.
The linker comes into action at this stage and does this task.
The linker also does some extra work: it combines some extra code with our program that is required when the program starts and when it ends. For example, there is standard code for setting up the running environment, such as passing command line arguments and environment variables to every program; similarly, some standard code is required to return the program's return value to the system.
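You can also run the link step on its own by handing the object file to gcc, which invokes the linker (ld) behind the scenes:

$ gcc print.o -o print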
The above task of the linker can be verified by a small experiment. Since we already know that the linker converts the .o file (print.o) into an executable file (print), comparing the file sizes of print.o and print will show the difference.
$ size print.o
   text    data     bss     dec     hex filename
     97       0       0      97      61 print.o 

$ size print
   text    data     bss     dec     hex filename
   1181     520      16    1717     6b5 print
Through the size command we get a rough idea of how the size of the output grows from an object file to an executable file. This is all because of the extra standard code that the linker combines with our program.
Now you know what happens to a C program before it becomes an executable: the Preprocessing, Compiling, Assembly, and Linking stages. There is a lot more to the linking stage, which we will cover in the next article in this series.

Analyzing past System performance of a Linux server


Assumption: 
Today's date is 13th Aug, 2011. You are asked to check the system performance of a Linux server on 7th Aug, 2011 between 3 AM and 5 AM.

Solution: 
Run the 'sar' command on the 'sa' (System Activity) file created for 7th Aug, 2011, specifying the start and end times.
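The general form of the command is shown below, where saDD is the file for day DD of the month:

# sar [option] -f /var/log/sa/saDD -s HH:MM:SS -e HH:MM:SS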

Illustration:
Go to /var/log/sa
[root@hostxyz sa]# ls -ltr sa??


-rw-r--r-- 1 root root 481776 Aug 5 23:50 sa05
-rw-r--r-- 1 root root 481776 Aug 6 23:50 sa06
-rw-r--r-- 1 root root 481776 Aug 7 23:50 sa07       # File that belongs to 7th Aug,2011
-rw-r--r-- 1 root root 481776 Aug 8 23:50 sa08
-rw-r--r-- 1 root root 481776 Aug 9 23:50 sa09
-rw-r--r-- 1 root root 481776 Aug 10 23:50 sa10
-rw-r--r-- 1 root root 481776 Aug 11 23:50 sa11
-rw-r--r-- 1 root root 481776 Aug 12 23:50 sa12
-rw-r--r-- 1 root root 287824 Aug 13 14:10 sa13
[root@hostxyz sa]#
[root@hostxyz sa]# sar -u -f /var/log/sa/sa07 -s 03:00:01 -e 05:00:01   # To check CPU utilization
Linux 2.6.18-92.el5 (hostxyz) 08/07/2011
03:00:01 AM CPU %user %nice %system %iowait %steal %idle
03:10:01 AM all 24.57 0.00 5.16 6.04 0.00 64.23
03:20:01 AM all 24.57 0.10 5.06 6.28 0.00 63.98
03:30:01 AM all 24.33 0.00 4.88 5.64 0.00 65.14
03:40:01 AM all 15.75 0.00 3.93 10.52 0.00 69.80
03:50:01 AM all 12.70 0.00 3.09 19.04 0.00 65.17
04:00:01 AM all 16.80 0.00 3.90 9.40 0.00 69.90
04:10:01 AM all 9.18 0.02 2.26 14.43 0.00 74.11
04:20:01 AM all 8.84 0.10 2.20 9.65 0.00 79.22
04:30:01 AM all 11.42 0.00 3.24 10.50 0.00 74.84
04:40:01 AM all 11.84 0.00 2.43 20.64 0.00 65.09
04:50:01 AM all 17.80 0.00 3.78 17.00 0.00 61.42
05:00:01 AM all 6.46 0.00 1.53 21.80 0.00 70.22
Average: all 15.35 0.02 3.46 12.58 0.00 68.59
[root@hostxyz sa]#
[root@hostxyz sa]#  sar -r -f /var/log/sa/sa07 -s 03:00:01 -e 05:00:01    # To check Memory status


[Output not shown]
.
[root@hostxyz sa]#  sar -q -f /var/log/sa/sa07 -s 03:00:01 -e 05:00:01    # To check Load average


[Output not shown]
.
[root@hostxyz sa]#  sar -b -f /var/log/sa/sa07 -s 03:00:01 -e 05:00:01     # To check I/O status
[Output not shown]


.
[root@hostxyz sa]#  sar -n DEV -f /var/log/sa/sa07 -s 03:00:01 -e 05:00:01    # To check Network status


[Output not shown]
.
[root@hostxyz sa]# 


Notes: In Linux, the system activity report is collected every 10 minutes by the "sysstat" cron job located under /etc/cron.d, and at the end of the day a summary report is generated and saved in the /var/log/sa/saXX file, which we can use for later analysis.

[root@hostxyz cron.d]# cat sysstat

# run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
[root@hostxyz cron.d]#
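For example, if you wanted samples every 5 minutes instead of every 10, you would change the first entry as below (a sketch; the sa1 path can differ between distributions and architectures):

*/5 * * * * root /usr/lib64/sa/sa1 1 1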

Converting a file from uppercase to lowercase and vice-versa


Syntax to convert Upper to lowercase:
# dd if=[file with uppercase] of=[output filename] conv=lcase
[or]
# cat [file with uppercase] | tr '[:upper:]' '[:lower:]'  > output_file

Syntax to convert Lower to uppercase:
# dd if=[file with lowercase] of=[output filename] conv=ucase
[or]
# cat [file with lowercase] | tr '[:lower:]' '[:upper:]'  > output_file
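A quick illustration (sample.txt is just a hypothetical file name):

# cat sample.txt
Hello World
# tr '[:upper:]' '[:lower:]' < sample.txt > output_file
# cat output_file
hello world

Note that tr reads standard input, so the redirection form above works just as well as piping through cat.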

Perl script to find Broken Symbolic links

Symbolic links across filesystems are very handy, but at the same time they can be a real pain when they get broken. At my work I have often seen developers wasting their time fixing application issues which were fundamentally caused by broken symbolic links. So I came up with this script.

Upon execution, the script will prompt you to enter the filesystem paths as parameters. Once done, it will report all the broken symbolic links along with a count.

I used the Perl module File::Find (which comes by default with any Perl distribution) to traverse all the filenames in the specified directories and report the broken links. For each file it finds, it calls the &wanted subroutine, which in turn uses the stat function to identify symbolic links whose targets are missing. To be honest, I grabbed this logic from an online book on Perl programming.

Supported platforms: Any Unix platform with Perl version 5.x installed.

EXAMPLE
[root@hostxyz opt]# perl check_broken_link.pl

Enter the filesystem path (like /etc /opt /var) : /var /etc /usr /home
Disconnected Link => /var/lib/jbossas/server/production/lib/jboss-remoting.jar -> /usr/share/java/jboss-remoting.jar
Disconnected Link => /var/lib/jbossas/server/default/lib/jboss-remoting.jar -> /usr/share/java/jboss-remoting.jar
Disconnected Link => /etc/alternatives/jaxws_api -> /usr/share/java/glassfish-jaxws.jar
Disconnected Link => /etc/alternatives/jaxws_2_1_api -> /usr/share/java/glassfish-jaxws.jar
Disconnected Link => /etc/alternatives/jaxb_2_1_api -> /usr/share/java/glassfish-jaxb.jar
Disconnected Link => /etc/alternatives/jaxb_api -> /usr/share/java/glassfish-jaxb.jar
Disconnected Link => /usr/share/java/jaxws_api.jar -> /etc/alternatives/jaxws_api
Disconnected Link => /usr/share/java/jaxb_api.jar -> /etc/alternatives/jaxb_api
Disconnected Link => /usr/share/java/jaxws_2_1_api.jar -> /etc/alternatives/jaxws_2_1_api
Disconnected Link => /usr/share/jbossas/client/jboss-remoting.jar -> /usr/share/java/jboss-remoting.jar

Total number of Disconnected links: 10
[root@hostxyz opt]# 

SCRIPT

#!/usr/bin/perl
use File::Find ();
use vars qw/*name *dir *prune/;

my ($cnt, $i, $cnt_sub) = (0, 0, 0);
print "\n";

*name  = *File::Find::name;
*dir   = *File::Find::dir;
*prune = *File::Find::prune;

print "Enter the filesystem path (like /etc /opt /var) : ";
my $arr = <>;
chomp($arr);
print "\n";
my @inpts = split(/ /, $arr);

foreach (@inpts) {
    # Call the wanted subroutine, which uses stat to match broken links
    File::Find::find({wanted => \&wanted}, $inpts[$i]);
    $cnt = $cnt_sub + $cnt;
    $i++;
    $cnt_sub = 0;
}
print "Total number of Disconnected links: $cnt \n\n";

sub wanted {
    if (-l $_) {                # the current entry is a symbolic link
        @stat = stat($_);       # stat() follows the link to its target ...
        if ($#stat == -1) {     # ... and returns an empty list if the target is gone
            # Recover the "link -> target" text from ls -l for reporting
            $flname = `ls -l $name`;
            ($flperm, $numlnk, $flown1, $flown2, $dt, $mnth, $tm1, $tm2, $cfnm, $ar, $dsfl) = split /\s+/, $flname;
            print "Disconnected Link => $cfnm $ar $dsfl\n\n";
            $cnt_sub++;
        }
    }
}
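As an aside, on Linux systems with GNU findutils the same check can be done with a one-liner, since -xtype l matches symbolic links whose targets are missing:

# find /var /etc /usr /home -xtype l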

Listing all Linux servers which are up in a network


Situation:
Suppose you want to find all the servers which are up in a network or in a range of IPs. You may need this information for troubleshooting purposes, like fixing IP conflicts, or to get an idea of how many servers are online at a given point in time.


Solution:
# nmap -v -sP <network or IP range>
The network info can be given as a whole network (say 10.10.22.0/24) or as a range (say 10.10.22.1-40).


Example:
[root@gtxash01 ~]# nmap -v -sP 10.10.22.1-40      # Scans servers in the IP range of 10.10.22.1 to 10.10.22.40
Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2010-10-14 06:09 CDT
DNS resolution of 25 IPs took 5.50s.
Host 10.10.22.1 appears to be up.
Host 10.10.22.2 appears to be up.
Host 10.10.22.3 appears to be down.
Host 10.10.22.4 appears to be down.
Host 10.10.22.5 appears to be down.
Host 10.10.22.6 appears to be down.
Host 10.10.22.7 appears to be down.
Host 10.10.22.8 appears to be down.
Host 10.10.22.9 appears to be down.
Host 10.10.22.10 appears to be down.
Host 10.10.22.11 appears to be down.
Host rwbcat01.tcprod.local (10.10.22.12) appears to be up.
Host 10.10.22.13 appears to be down.
Host rsarash01.tcprod.local (10.10.22.14) appears to be up.
Host rdbash01.tcprod.local (10.10.22.15) appears to be up.
Host 10.10.22.16 appears to be down.
Host 10.10.22.17 appears to be down.
Host 10.10.22.18 appears to be down.
Host 10.10.22.19 appears to be down.
Host xenlashb1.tcprod.net (10.10.22.20) appears to be up.
Host webash04.tcprod.net (10.10.22.21) appears to be up.
Host webash23.tcprod.net (10.10.22.22) appears to be up.
Host xenc1bx-ih.tcprod.net (10.10.22.23) appears to be up.
.
< Output truncated >
.
Host gdsash02.tcprod.local (10.10.22.39) appears to be up.
Host 10.10.22.40 appears to be down.
Nmap finished: 40 IP addresses (25 hosts up) scanned in 6.178 seconds
               Raw packets sent: 110 (3740B) | Rcvd: 50 (2300B)


[root@gtxash01 ~]# nmap -v -sP 10.10.22.0/24   # Scans servers in the entire 10.10.22.0 network

Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2010-10-14 06:10 CDT
DNS resolution of 53 IPs took 5.50s.
Host 10.10.22.0 seems to be a subnet broadcast address (returned 1 extra pings).
Host 10.10.22.1 appears to be up.
Host 10.10.22.2 appears to be up.
.
< Output truncated >
.
Host 10.10.22.243 appears to be up.
Host 10.10.22.244 appears to be up.
Host l2ash.tcprod.local (10.10.22.250) appears to be up.
Host l22ash.tcprod.local (10.10.22.251) appears to be up.
Host 10.10.22.255 seems to be a subnet broadcast address (returned 1 extra pings).
Nmap finished: 256 IP addresses (53 hosts up) scanned in 7.755 seconds
               Raw packets sent: 914 (31.076KB) | Rcvd: 108 (4968B)
[root@gtxash01 ~]#
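If you want to list only the live hosts, filter the output with grep; as the runs above show, this version of nmap reports live hosts with the phrase "appears to be up":

[root@gtxash01 ~]# nmap -v -sP 10.10.22.0/24 | grep "appears to be up"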

Capturing everything that scrolls on your Linux terminal


In many circumstances, you might want to capture all the messages that scroll on your terminal. You may need this to review the details of installations or deployments, or to analyze some problem on the server. It can even be submitted as proof to management.

Here is an example.

“script” is the command used for this purpose, and the syntax goes like this:

script <filename>   or just   script

Whatever filename you specify after the script command will get created, and it will capture everything. If you just execute the command “script”, all the messages will be captured in the default file “typescript”, as shown below:

[adevaraju@sys01 ~]$ script
Script started, file is typescript
[adevaraju@sys01 ~]$


[root@sys01 rmanbackp]# script capture_my_work        ← All the messages that appear on the screen will be captured in the file “capture_my_work”
Script started, file is capture_my_work
[root@sys01 rmanbackp]#
.
.
.
.
[root@sys01 ~]# set | grep SHLVL                                     
SHLVL=2                                                                   ← Notice that executing the ‘script’ command takes you one shell level up.
[root@sys01 ~]#
[root@sys01 rmanbackp]# exit
exit
Script done, file is capture_my_work
[root@sys01 rmanbackp]#
[root@sys01 ~]# set | grep SHLVL
SHLVL=1                                                                ← Shell level becomes 1 again (the base level)
[root@sys01 ~]#


Type “exit” when you want to stop capturing. Typing “exit” once will not throw you out of the shell prompt, since you are at shell level 2.
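One more handy detail: the util-linux version of script supports an -a flag, which appends to an existing log file instead of overwriting it, useful when you capture several sessions into one file:

[root@sys01 rmanbackp]# script -a capture_my_work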

Copying only missing files on destination folder


Let’s say you have a requirement to copy the contents of one folder to another, but only the files which AREN’T present in the destination folder. Use the cp command with the ‘-aru’ options. You need to notice certain things while doing this; they are explained with the example below, which is self-explanatory.

In this example, just notice the timestamp of each file created under the folders /dir1 & /dir2.

[root@host01 ~]# mkdir /dir1 /dir2
[root@host01 ~]# cd /dir1
[root@host01 dir1]#
[root@host01 dir1]# touch a b c d e f; mkdir d1 d2        
[root@host01 dir1]# touch d1/file1 d1/file2
[root@host01 dir1]# touch d2/take1 d2/take2
[root@host01 dir1]# ls -lR /dir1                                     ← Using the -lR option to list the sub-folder contents of /dir1
/dir1:
total 8
-rw-r--r-- 1 root root    0 Dec  9 12:18 a
-rw-r--r-- 1 root root    0 Dec  9 12:18 b
-rw-r--r-- 1 root root    0 Dec  9 12:18 c
-rw-r--r-- 1 root root    0 Dec  9 12:18 d
drwxr-xr-x 2 root root 4096 Dec  9 12:18 d1
drwxr-xr-x 2 root root 4096 Dec  9 12:19 d2
-rw-r--r-- 1 root root    0 Dec  9 12:18 e
-rw-r--r-- 1 root root    0 Dec  9 12:18 f

/dir1/d1:
total 0
-rw-r--r-- 1 root root 0 Dec  9 12:18 file1
-rw-r--r-- 1 root root 0 Dec  9 12:18 file2

/dir1/d2:
total 0
-rw-r--r-- 1 root root 0 Dec  9 12:19 take1
-rw-r--r-- 1 root root 0 Dec  9 12:19 take2

[root@host01 dir1]#
[root@host01 dir1]# cd /dir2
[root@host01 dir2]# touch b d f Z; mkdir d1
[root@host01 dir2]# touch d1/key1 d1/key2
[root@host01 dir2]# ls -lR /dir2
/dir2:
total 4
-rw-r--r-- 1 root root    0 Dec  9 12:20 b
-rw-r--r-- 1 root root    0 Dec  9 12:20 d
drwxr-xr-x 2 root root 4096 Dec  9 12:20 d1
-rw-r--r-- 1 root root    0 Dec  9 12:20 f
-rw-r--r-- 1 root root    0 Dec  9 12:20 Z

/dir2/d1:
total 0
-rw-r--r-- 1 root root 0 Dec  9 12:20 key1
-rw-r--r-- 1 root root 0 Dec  9 12:20 key2
[root@host01 dir2]# cp -aru /dir1/* /dir2
[root@host01 dir2]# ls -lR /dir2
/dir2:
total 8
-rw-r--r-- 1 root root    0 Dec  9 12:18 a
-rw-r--r-- 1 root root    0 Dec  9 12:20 b
-rw-r--r-- 1 root root    0 Dec  9 12:18 c
-rw-r--r-- 1 root root    0 Dec  9 12:20 d
drwxr-xr-x 2 root root 4096 Dec  9 12:18 d1
drwxr-xr-x 2 root root 4096 Dec  9 12:19 d2
-rw-r--r-- 1 root root    0 Dec  9 12:18 e
-rw-r--r-- 1 root root    0 Dec  9 12:20 f
-rw-r--r-- 1 root root    0 Dec  9 12:20 Z


/dir2/d1:
total 0
-rw-r--r-- 1 root root 0 Dec  9 12:18 file1
-rw-r--r-- 1 root root 0 Dec  9 12:18 file2
-rw-r--r-- 1 root root 0 Dec  9 12:20 key1
-rw-r--r-- 1 root root 0 Dec  9 12:20 key2

/dir2/d2:
total 0
-rw-r--r-- 1 root root 0 Dec  9 12:19 take1
-rw-r--r-- 1 root root 0 Dec  9 12:19 take2
[root@host01 dir2]#

Conclusions on executing the cp command with the -aru options:
1. Files with the same names (b, d & f) were left untouched. You can confirm it by the timestamps of those files (12:20).
2. It copied all the missing files (a, c & e) and the missing folder (d2) on to the destination folder /dir2.
3. The file ‘Z’, which is present only in /dir2, remains the same. It didn’t get deleted.
4. The folder “d1”, which is present in both the source and destination, was not replaced; its existing files (key1 & key2) were retained, and the files from /dir1/d1 (file1 & file2) were copied in alongside them.

Usage: Use these options when a copy you initiated earlier got interrupted for some reason. This way you don’t need to copy everything over again or keep typing “yes” to overwrite the files which were already copied.
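For reference, here is what each flag does in GNU cp (note that -a already implies -r, so -r is technically redundant here):

# -a : archive mode; preserves permissions, ownership and timestamps, and copies recursively
# -r : copy directories recursively
# -u : update; copy only when the source file is newer than the destination file, or when the destination file is missing
cp -au /dir1/* /dir2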