Sunday, June 12, 2011

Software Raid

### Configure raid0 of 700mb and mount it on /common2
        # fdisk  -l   <<--- verify existing partitions
        # fdisk /dev/sda
            create 2 raid partitions (type fd) of 350mb each, say /dev/sda11 & /dev/sda12
        # partprobe  -s
        # fdisk  -l   <<---- confirm
        # mdadm     << just check the command exists using tab-completion; if not, install it from the yum server
        # yum  -y  install mdadm*
        # mdadm  --create  /dev/md0  --level=0  --raid-devices=2  /dev/sda11  /dev/sda12
        # mkfs.ext3   /dev/md0    <<--- format md0
        # echo "DEVICE   /dev/sda11  /dev/sda12"  > /etc/mdadm.conf
        # mdadm   --examine   --scan  --config=/etc/mdadm.conf   >> /etc/mdadm.conf
                    OR
        # mdadm --detail --scan >> /etc/mdadm.conf
        # cat /proc/mdstat     <<--- verify
        # vi /etc/fstab
            /dev/md0       /common2    ext3    defaults    0   0
        # mount -a
        # mount    or   # df  -h     <<--- verify /common2 is mounted with roughly 700mb
         If the same question is asked for RAID1, just change a single command:
        # mdadm  --create  /dev/md0  --level=1  --raid-devices=2  /dev/sda11  /dev/sda12
        everything else is the same as above
RAID-1:
      # fdisk /dev/sdb
            ## create 2 raid partitions (type fd) of 350mb each, say /dev/sdb8 & /dev/sdb9
      # partprobe  -s
      # fdisk  -l   <<---- confirm
      ## Create RAID-1
      # mdadm  --create  /dev/md1  --level=1  --raid-devices=2  /dev/sdb8  /dev/sdb9
      # mkfs.ext3   /dev/md1    <<--- format md1
      # mdadm   --examine   --scan  --config=/etc/mdadm.conf   >> /etc/mdadm.conf
                    OR
      # mdadm --detail --scan >> /etc/mdadm.conf
      # cat /proc/mdstat     <<--- verify
      ## Mount it on /EXTRA/RAID-1
      # mount /dev/md1  /EXTRA/RAID-1
      ### Recovery -- suppose RAID member /dev/sdb8 has crashed (or the array was created on 2 different disks and the 1st disk has crashed)
      ## So how do we recover the data? Since we created a mirrored RAID, in our case /dev/sdb9 contains a mirror copy of the data
      Recovery on RAID 1:
      # umount    /dev/md1
      # mdadm --stop  /dev/md1
      # mount  -t  ext3   /dev/sdb9   /EXTRA/TEST   <-- have fun :)
      ## Remount the RAID
      # mdadm --assemble /dev/md1
      # mount /dev/md1 /EXTRA/RAID-1

#Simulation
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdb8[0] sdb9[1]
      200704 blocks [2/2] [UU]

md0 : active raid0 sdb7[1] sdb6[0]
      305024 blocks 64k chunks

unused devices:

# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Wed Apr 13 09:58:15 2011
     Raid Level : raid1
     Array Size : 200704 (196.03 MiB 205.52 MB)
  Used Dev Size : 200704 (196.03 MiB 205.52 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed Apr 13 10:18:37 2011
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 44493b73:98b654f0:a313390c:1000baf6
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       24        0      active sync   /dev/sdb8
       1       8       25        1      active sync   /dev/sdb9

## Add the new replacement partition to the RAID; it will be resynchronized with the original partition
# mdadm --add /dev/md0 /dev/sda15
# mdadm --detail /dev/md0
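The verify step can be scripted: the sketch below parses mdstat-style output with awk and flags degraded arrays. A sample of the output shown above is embedded so it runs standalone; on a live system you would read /proc/mdstat directly.

```shell
#!/bin/sh
# Report member status for each md array found in mdstat-style output.
# [UU] means both mirrors are in sync; [U_] would mean one member failed.
# (The raid0 stanza has no [..] status flags, so only raid1 is reported.)
mdstat_sample='md1 : active raid1 sdb8[0] sdb9[1]
      200704 blocks [2/2] [UU]
md0 : active raid0 sdb7[1] sdb6[0]
      305024 blocks 64k chunks'

printf '%s\n' "$mdstat_sample" | awk '
/^md/ { array = $1 }               # remember which array this stanza is for
/\[U+\]|\[.*_.*\]/ {               # status flags like [UU] or [U_]
    status = $NF
    ok = (status !~ /_/) ? "healthy" : "DEGRADED"
    print array, status, ok
}'
# prints: md1 [UU] healthy
```

The same awk program works unchanged on a degraded array, where the flags read something like [U_].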

How to monitor server load on GNU/Linux


Gkrellm
==========
Gkrellm is the choice of the “g33k” types. It’s a graphical program that monitors all

sorts of statistics and displays them as numbers and charts. You can see examples of it

in use on nearly every GNU/Linux screenshot website. It is very flexible and capable,

and can monitor useful as well as ridiculous things via plugins. It can monitor the

status of a remote system, since it’s a client/server system.


“Task Manager” clones
=====================
gnome-system-monitor is a graphical program installed as part of the base Gnome system.

It is somewhat similar to the Task Manager in Microsoft Windows. It isn’t very

full-featured, with only three tabs (Processes, Resources, Devices). The Devices tab

just shows devices, Resources shows the history of CPU, memory, swap and network usage,

and the Processes tab shows the processes. The Processes tab is the only one that really

lets the user “do” anything, such as killing or re-nicing processes, or showing their

memory maps.

Of course, this tool is only available on systems with Gnome installed, and requires an

X server to be running. This makes it impractical for use on a server.

vmstat and related tools
=========================
vmstat is part of the base installation on most GNU/Linux systems. By default, it

displays information about virtual memory, CPU usage, I/O, processes, and swap, and can

print information about disks and more. It runs in a console. I find the command vmstat

-n 5 very helpful for printing a running status display in a tabular format.

It’s great for figuring out how heavily loaded a system truly is, and what the problem

(if any) is. For example, when I see a high number in the rightmost column (percent of

CPU time spent waiting for I/O) on a database server, I know the system is I/O-bound.
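Since I watch that rightmost I/O-wait column so often, a small awk program can pull it out by name. A sketch with sample vmstat output embedded (the column layout is a typical vmstat header, assumed for illustration; on a live system pipe `vmstat -n 5` into the awk program instead):

```shell
#!/bin/sh
# Extract the I/O-wait (wa) column from vmstat-style output by locating
# it in the header row, so the script survives column reordering.
vmstat_sample='procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 814512  96044 431208    0    0     5    11   30   45  2  1 96  1  0'

printf '%s\n' "$vmstat_sample" | awk '
NR == 2 {                        # header row: find which field is "wa"
    for (i = 1; i <= NF; i++) if ($i == "wa") col = i
}
NR > 2 { print "iowait%:", $col }'
# prints: iowait%: 1
```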

iostat
======
iostat is part of the sysstat package on Gentoo, as are mpstat and sar. iostat prints

similar statistics as vmstat, but gives more detail on specific devices and is geared

toward understanding I/O usage in more detail than vmstat is. mpstat is a similar tool

that prints processor statistics, and is multi-processor aware. sar collects, reports,

and saves system activity information (for example, for later analysis).

sysreport : gives detailed info about your system hardware, setup, etc. (takes a minute or so to complete);
it will create a bzip2-compressed file with all current details about your system

=========

All of these tools are very flexible and customizable. The user can choose what

information to see and what format to see it in. These tools are not usually installed

by default, except for vmstat.

top
======
top is the classic tool for monitoring any UNIX-like system. It runs in a terminal and

refreshes at intervals, displaying a list of processes in a tabular format. Each column

is something like virtual memory size, processor usage, and so forth. It is highly

customizable and has some interactive features, such as re-nicing or killing processes.

Since it’s the most widely known of the tools in this article, I won’t go into much

detail, other than to say there’s a lot to know about it — read the man page.

top is one of the programs in the procps package, along with:

ps, vmstat, w, kill, free, slabtop, and skill.

All these tools are in a default installation on most distributions.

htop
=====
htop is similar to top, except it is mouse-aware, has a color display, and displays little

charts to help see statistics at a glance. It also has some features top doesn’t have.

mytop: a handy monitor for MySQL servers
======

tload
=========
tload runs in a terminal and displays a text-only “graph” of current system load

averages, garnered from /proc/loadavg. It is part of the base installation on most

GNU/Linux systems. I find it extremely useful for watching a system’s performance over

SSH, often within a GNU Screen session.

My favorite technique is to start a terminal, connect over SSH, resize the terminal to

150×80 or so, then start tload and shrink the window by CTRL-right-clicking and

selecting "Unreadable" as the font size. The result is a dense, screen-filling load graph.

watch
=========
watch isn’t really a load-monitoring tool, but it’s beastly handy because it takes any

command as input and monitors the result of running that command. For example, if I

wanted to monitor when the “foozle” program is executing, I could run


watch --interval=5 "ps aux | grep foozle | grep -v xaprb"

=========
In short: I run tload over SSH to monitor systems, and use vmstat, iostat and
friends to troubleshoot specific problems.
========

lsof
=====
lsof lists open files. Don't be fooled by how simple that sounds! It's tremendously

powerful.

uptime
=======
The system load average is the average number of processes that are either in a runnable
or uninterruptible state. A process in a runnable state
is either using the CPU or waiting to use the CPU.
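The load averages that uptime prints come straight from /proc/loadavg, which is easy to pick apart in the shell. A minimal sketch using a sample line (the numbers are illustrative; on a live system read /proc/loadavg itself):

```shell
#!/bin/sh
# /proc/loadavg fields: 1-, 5-, and 15-minute load averages,
# runnable/total process counts, and the most recently created PID.
sample='0.42 0.35 0.30 1/234 5678'

set -- $sample    # unquoted on purpose: split the line into $1..$5
echo "1min=$1 5min=$2 15min=$3 procs=$4 lastpid=$5"
# prints: 1min=0.42 5min=0.35 15min=0.30 procs=1/234 lastpid=5678
```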

How to create system report:
======================
# sysreport
<-- press Enter
Please enter your first initial and last name [server]: shirish
Please enter the case number that you are generating this report for: 1
<-- press Enter
Now wait a few minutes; it will create a bzip2-compressed file at /tmp/sysreport-shirish.1-3-----.bz2
Copy it and send it wherever you require; this file contains all your system info captured from /proc, the kernel, etc.

---> Note: on some versions it has been replaced by the command # sosreport, but it works almost the same way

# sosreport
==========
Display memory status:
# free       <-- memory status on the system
# free -t    <-- total amount of memory available in the system
# free -m    <-- display used and free memory in MB
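The free output is easy to post-process too. A sketch that computes the percentage of memory in use from `free -m` style output (a sample Mem: line is embedded and the numbers are illustrative; on a live system pipe `free -m` into the awk program):

```shell
#!/bin/sh
# Compute percent of memory in use from free -m style output.
# Column 2 is total MB, column 3 is used MB on the Mem: line.
free_sample='             total       used       free     shared    buffers     cached
Mem:          2012       1023        989          0        120        400
Swap:         2047          0       2047'

printf '%s\n' "$free_sample" | awk '
/^Mem:/ { printf "mem used: %d%%\n", $3 * 100 / $2 }'
# prints: mem used: 50%
```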

Display hardware information:
# dmidecode --type bios        <-- retrieve BIOS info
# dmidecode --type system      <-- system hw info
# dmidecode --type processor   <-- system processor info
# dmidecode --type memory      <-- system memory info
# dmidecode --type cache       <-- system cache info
# dmidecode --type connector   <-- system connector info
# dmidecode --type slot        <-- system slot info

Sys Admin L1, L2, and L3 ?

What is the definition of L1, L2 and L3 UNIX / Linux / IT support?
Generally, L1, L2, and L3 support apply to any form of technical support, such as for mobile phones, electronic devices, computers, servers, and networking devices. The levels have different meanings and differ slightly from company to company and between IT support groups. Basically, each person working at a given level must have more experience and education in the field of support than those at the previous level.
L1 is Level 1 support, provided by a call-center support person or engineer. An L1 tech usually follows a fixed set of steps to solve the problem. In other words, L1 will ask you various questions, and some sort of software maps your answers to further questions. L1 support takes your requests by telephone, email, or chat session. These support engineers are trained on the product but have limited experience. They should be able to resolve 50%-60% of all problems. For example, restarting a failed httpd service can be handled by L1.
If L1 support fails to solve your problem, it is escalated to an L2 (Level 2) support engineer. L2 support will try to find the exact cause of the problem. Almost all L2 engineers are subject-matter experts with 3-5 years of rock-solid experience. For example, if httpd cannot be started after a server reboot, an L2 tech who is an httpd and UNIX subject-matter expert can try to resolve the problem using various debugging methods.
If L2 support fails to resolve your problem, it is escalated to an L3 (Level 3) support professional. Usually, L3 support works closely with the product engineering team, or with the source code itself using various debugging tools. L3 support handles only the most difficult cases.
Please note that some companies offer certain levels of support such as L3 only on a fee basis.

Imp Port NumberS


Question: What Is a Port Number?
Answer: In computer networking, a port number is part of the addressing information used to identify the senders and receivers of messages. Port numbers are most commonly used with TCP/IP connections. Home network routers and computer software work with ports and sometimes allow you to configure port number settings. These port numbers allow different applications on the same computer to share network resources simultaneously.
 
How Port Numbers Work:
Port numbers are associated with network addresses. For example, in TCP/IP networking, both TCP and UDP utilize their own set of ports that work together with IP addresses.
Port numbers work like telephone extensions. Just as a business telephone switchboard can use a main phone number and assign each employee an extension number (like x100, x101, etc.), so a computer has a main address and a set of port numbers to handle incoming and outgoing connections.
In both TCP and UDP, port numbers start at 0 and go up to 65535. Numbers in the lower ranges are dedicated to common Internet protocols (like 21 for FTP, 80 for HTTP, etc.).
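On a Linux box those well-known mappings live in /etc/services. A sketch of a port-to-name lookup over a few sample lines in that file's format (embedded here so the example is self-contained; on a real system you would read /etc/services or use `getent services`):

```shell
#!/bin/sh
# Look up the service name for a TCP port, /etc/services style.
# Each line: service-name  port/protocol
services_sample='ftp      21/tcp
ssh      22/tcp
telnet   23/tcp
smtp     25/tcp
http     80/tcp'

lookup_port() {
    printf '%s\n' "$services_sample" | awk -v p="$1/tcp" '$2 == p { print $1 }'
}

lookup_port 80    # prints: http
lookup_port 22    # prints: ssh
```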

Q: Why do we use port numbers?
A: Ports are used to identify the intended service amid the rest of the traffic.

==================================
Some important port numbers

There are a huge number of reserved ports, but the ones mentioned below are the most important.
IMPORTANT PORTS:
=============================
Important Linux Port Numbers
15    => Netstat
20    => FTP data
21    => FTP
22    => SSH
23    => Telnet
25    => SMTP (mail transfer)
37    => Time
42    => WINS
43    => WHOIS service
53    => name server (DNS)
67    => DHCP server
68    => DHCP client
69    => TFTP
80    => HTTP (web server)
88    => Kerberos
101   => HOSTNAME
109   => POP2
110   => POP3 (for email)
111   => rpcbind
123   => NTP (Network Time Protocol)
137   => NetBIOS
143   => IMAP (for email)
161   => SNMP
220   => IMAP3
443   => HTTPS (HTTP protocol over TLS/SSL)
500   => Internet Key Exchange, IKE (IPSec) (UDP 500)
546   => DHCPv6 client
547   => DHCPv6 server
953   => rndc
993   => IMAP Secure
995   => POP over SSL/TLS
2082  => cPanel
2083  => cPanel - secure/SSL
2086  => cPanel WHM
2087  => cPanel WHM - secure/SSL
2095  => cPanel webmail
2096  => cPanel webmail - secure/SSL
2222  => DirectAdmin control panel
3306  => MySQL server
4643  => Virtuozzo Power Panel
8443  => Plesk control panel
9999  => Urchin
10000 => Webmin control panel

FAQs
1. How to find which ports are open?
You can find the open ports on your Linux server with the nmap command, or with:
netstat -nap --tcp
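That netstat output can be filtered down to just the listening ports with awk. A sketch over sample `netstat -nap --tcp` style output (the addresses and PIDs are made up for illustration; on a live system pipe the real netstat output in):

```shell
#!/bin/sh
# Reduce netstat -nap --tcp style output to "port  owning-process" pairs.
netstat_sample='tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1234/sshd
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2345/mysqld
tcp        0      0 10.0.0.5:22             10.0.0.9:51515          ESTABLISHED 1234/sshd'

printf '%s\n' "$netstat_sample" | awk '
$6 == "LISTEN" {                 # keep listening sockets only
    split($4, a, ":")            # local address looks like ip:port
    print a[2], $7               # port, then PID/program name
}'
# prints: 22 1234/sshd
#         3306 2345/mysqld
```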

2. How to investigate a port and kill suspicious process?
A good tutorial is here

3. Where do I find a complete list of Linux ports for reference?
You can find the ports list: here

4. Which firewall is best for linux servers?
I would recommend installing the APF firewall. You can find a good tutorial here: http://www.mysql-apache-php.com/apf-firewall.htm
Warning: Make sure that you don't block important ports with the firewall.

A port is a communication point through which computers in a network communicate with each other via a program or piece of software.

Difference TCP vs UDP Protocol 


TCP Protocol:
It is a connection-oriented protocol.
It has flow control and error correction.
It is slower than UDP and is primarily used for data transmission.
Most common services that require confirmation of delivery (HTTP, SSH, SMTP, FTP, mail, etc.) use TCP ports.
Services over TCP typically ask for authentication, such as a user name and password.

UDP Protocol:
It is a connectionless protocol, which means it can send packets without first establishing a connection with the receiver.
It is error-prone during transmission.
It is fast and is mostly used for audio and video streaming.
UDP ports are commonly used by services or programs that don't require confirmation of delivery of packets. The most common example is DNS queries over UDP port 53.

Properties of Linux


  • Linux is free:
    As in free beer, they say. If you want to spend absolutely nothing, you don't even have to pay the price of a CD. Linux can be downloaded in its entirety from the Internet completely for free. No registration fees, no costs per user, free updates, and freely available source code in case you want to change the behavior of your system.
    Most of all, Linux is free as in free speech:
    The license commonly used is the GNU Public License (GPL). The license says that anybody who may want to do so, has the right to change Linux and eventually to redistribute a changed version, on the one condition that the code is still available after redistribution. In practice, you are free to grab a kernel image, for instance to add support for teletransportation machines or time travel and sell your new code, as long as your customers can still have a copy of that code. 
  • Linux is portable to any hardware platform:
    A vendor who wants to sell a new type of computer and who doesn't know what kind of OS his new machine will run (say the CPU in your car or washing machine), can take a Linux kernel and make it work on his hardware, because documentation related to this activity is freely available. 
  • Linux was made to keep on running:
    As with UNIX, a Linux system expects to run without rebooting all the time. That is why a lot of tasks are being executed at night or scheduled automatically for other calm moments, resulting in higher availability during busier periods and a more balanced use of the hardware. This property allows for Linux to be applicable also in environments where people don't have the time or the possibility to control their systems night and day. 
  • Linux is secure and versatile:
    The security model used in Linux is based on the UNIX idea of security, which is known to be robust and of proven quality. But Linux is not only fit for use as a fort against enemy attacks from the Internet: it will adapt equally to other situations, utilizing the same high standards for security. Your development machine or control station will be as secure as your firewall. 
  • Linux is scalable:
    From a Palmtop with 2 MB of memory to a petabyte storage cluster with hundreds of nodes: add or remove the appropriate packages and Linux fits all. You don't need a supercomputer anymore, because you can use Linux to do big things using the building blocks provided with the system. If you want to do little things, such as making an operating system for an embedded processor or just recycling your old 486, Linux will do that as well. 
  • The Linux OS and most Linux applications have very short debug-times:
    Because Linux has been developed and tested by thousands of people, both errors and people to fix them are usually found rather quickly. It sometimes happens that there are only a couple of hours between discovery and fixing of a bug.

What is Linux ?


Linux is a registered trademark of Linus Torvalds.
History:
In order to understand the popularity of Linux, we need to travel back in time, about 30 years ago...
Imagine computers as big as houses, even stadiums. While the sizes of those computers posed substantial problems, there was one thing that made this even worse: every computer had a different operating system. Software was always customized to serve a specific purpose, and software for one given system didn't run on another system. Being able to work with one system didn't automatically mean that you could work with another. It was difficult, both for the users and the system administrators.
Computers were extremely expensive then, and sacrifices had to be made even after the original purchase just to get the users to understand how they worked. The total cost per unit of computing power was enormous.
Technologically the world was not quite that advanced, so they had to live with the size for another decade. In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was
  1. Simple and elegant. 
  2. Written in the C programming language instead of in assembly code. 
  3. Able to recycle code.
The Bell Labs developers named their project "UNIX."
The code recycling features were very important. Until then, all commercially available computer systems were written in a code specifically developed for one system. UNIX on the other hand needed only a small piece of that special code, which is now commonly named the kernel. This kernel is the only piece of code that needs to be adapted for every specific system and forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher programming language, C. This language was especially developed for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware.
The software vendors were quick to adapt, since they could sell ten times more software almost effortlessly. Weird new situations came into existence: imagine for instance computers from different vendors communicating in the same network, or users working on different systems without the need for extra education to use another computer. UNIX did a great deal to help users become compatible with different systems.
Throughout the next couple of decades the development of UNIX continued. More things became possible to do and more hardware and software vendors added support for UNIX to their products.
UNIX was initially found only in very large environments with mainframes and minicomputers (note that a PC is a "micro" computer). You had to work at a university, for the government or for large financial corporations in order to get your hands on a UNIX system.
But smaller computers were being developed, and by the end of the 80's, many people had home computers. By that time, there were several versions of UNIX available for the PC architecture, but none of them were truly free and, more importantly, they were all terribly slow, so most people ran MS DOS or Windows 3.1 on their home PCs.


Linus and Linux:


By the beginning of the 90s home PCs were finally powerful enough to run a full blown UNIX. Linus Torvalds, a young man studying computer science at the university of Helsinki, thought it would be a good idea to have some sort of freely available academic version of UNIX, and promptly started to code.
He started to ask questions, looking for answers and solutions that would help him get UNIX on his PC. Below is one of his first posts in comp.os.minix, dating from 1991:

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.FI>
Date: 3 Jul 91 10:00:50 GMT
Hello netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix
standard definition. Could somebody please point me to a (preferably)
machine-readable format of the latest posix rules? Ftp-sites would be
nice.
From the start, it was Linus' goal to have a free system that was completely compliant with the original UNIX. That is why he asked for POSIX standards, POSIX still being the standard for UNIX.
In those days plug-and-play wasn't invented yet, but so many people were interested in having a UNIX system of their own, that this was only a small obstacle. New drivers became available for all kinds of new hardware, at a continuously rising speed. Almost as soon as a new piece of hardware became available, someone bought it and submitted it to the Linux test, as the system was gradually being called, releasing more free code for an ever wider range of hardware. These coders didn't stop at their PC's; every piece of hardware they could find was useful for Linux.
Back then, those people were called "nerds" or "freaks", but it didn't matter to them, as long as the supported hardware list grew longer and longer. Thanks to these people, Linux is now not only ideal to run on new PC's, but is also the system of choice for old and exotic hardware that would be useless if Linux didn't exist.
Two years after Linus' post, there were 12,000 Linux users. The project, popular with hobbyists, grew steadily, all the while staying within the bounds of the POSIX standard. All the features of UNIX were added over the next couple of years, resulting in the mature operating system Linux has become today. Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers. Today, a lot of the important players in the hardware and software market each have their own team of Linux developers; at your local dealer's you can even buy pre-installed Linux systems with official support - even though there is still a lot of hardware and software that is not supported, too.


Current application of Linux systems :
Today Linux has joined the desktop market. Linux developers concentrated on networking and services in the beginning, and office applications have been the last barrier to be taken down. We don't like to admit that Microsoft is ruling this market, so plenty of alternatives have been started over the last couple of years to make Linux an acceptable choice as a workstation, providing an easy user interface and MS compatible office applications like word processors, spreadsheets, presentations and the like.
On the server side, Linux is well-known as a stable and reliable platform, providing database and trading services for companies like Amazon, the well-known online bookshop, the US Post Office, the German army and many others. Internet providers and Internet service providers especially have grown fond of Linux as a firewall, proxy and web server, and you will find a Linux box within reach of every UNIX system administrator who appreciates a comfortable management station. Clusters of Linux machines are used in the creation of movies such as "Titanic", "Shrek" and others. In post offices, they are the nerve centers that route mail, and in large search engines, clusters are used to perform Internet searches. These are only a few of the thousands of heavy-duty jobs that Linux is performing day-to-day across the world.
It is also worth noting that modern Linux not only runs on workstations and mid- and high-end servers, but also on "gadgets" like PDAs, mobile phones, a shipload of embedded applications and even on experimental wristwatches. This makes Linux the only operating system in the world covering such a wide range of hardware.



Does Linux have a future?


OPEN SOURCE

The idea behind Open Source software is rather simple: when programmers can read, distribute and change code, the code will mature. People can adapt it, fix it, debug it, and they can do it at a speed that dwarfs the performance of software developers at conventional companies. This software will be more flexible and of a better quality than software that has been developed using the conventional channels, because more people have tested it in more different conditions than the closed software developer ever can.
The Open Source initiative started to make this clear to the commercial world, and very slowly, commercial vendors are starting to see the point. While lots of academics and technical people have already been convinced for 20 years now that this is the way to go, commercial vendors needed applications like the Internet to make them realize they can profit from Open Source. Now Linux has grown past the stage where it was almost exclusively an academic system, useful only to a handful of people with a technical background. Now Linux provides more than the operating system: there is an entire infrastructure supporting the chain of effort of creating an operating system, of making and testing programs for it, of bringing everything to the users, of supplying maintenance, updates and support and customizations, etcetera. Today, Linux is ready to accept the challenge of a fast-changing world.

Types of Processes in linux

xxxxxxxxxxxxxxxxxx Types of Process in Linux xxxxxxxxxxxxxxxxxxxxxxxx

Every application, whether it is system-specific or an application daemon, has to be started as a process,
in the background or foreground as designed.

Below are a few common process states:

Runnable: The process has started and is in the active run queue; it is either using the CPU or waiting for CPU time.

Stopped: The process was started and then stopped in between, but it is not fully killed and can run again if resumed.

Sleeping/Waiting: The process has started but is waiting, either because too many other processes are requesting the CPU or because it is waiting for another process to complete.

Zombie: The process has finished execution, but its parent has not yet read its exit status, so its entry hangs around in the process table; such a process is known as a zombie process.

File Permission Linux

xxxxxxxxxxxxxxxxxxxxxx File Permission xxxxxxxxxxxxxxxxxx

Set user ID, set group ID, sticky bit

- SUID or setuid: Change user ID on execution. If setuid bit is set, when the file will be executed by a user,   the process will have the same rights as the owner of the file being executed.
- SGID or setgid: Change group ID on execution. Same as above, but inherits rights of the group of the owner of the file on execution. For directories it also may mean that when a new file is created in the directory it will inherit the group of the directory (and not of the user who created the file).
- Sticky bit:  It was originally used to make a process "stick" in memory after it finished; that usage is now obsolete. Currently its use is system dependent, and it is mostly used to prevent deletion of files that belong to other users in a folder where you have "write" access.
               -->> If the sticky bit is set for a directory, only the owner of that directory or the owner of a file can delete or rename a file within that directory.

The SUID bit is set on executable files. (Note that on Linux the kernel ignores the setuid bit on interpreted scripts, so in practice it applies to binaries.)
The SUID permission makes an executable run as the user who owns the file, rather than the user who started it.

With SGID, it runs with the privileges of the file's group owner, instead of the privileges of the person running the program.

-----------------------------------------------------------------------------------
0755 -> setuid, setgid, sticky bits are cleared     000
1755 -> sticky bit is set                           001
2755 -> setgid bit is set                           010
3755 -> setgid and sticky bits are set              011
4755 -> setuid bit is set                           100
5755 -> setuid and sticky bits are set              101
6755 -> setuid and setgid bits are set              110
7755 -> setuid, setgid, sticky bits are set         111
-----------------------------------------------------------------------------------
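You can watch that leading octal digit change on a scratch file. A small sketch using GNU coreutils' `stat -c` (the flags differ on BSD stat); in the symbolic column, setuid/setgid replace the owner/group `x` with `s`, and the sticky bit replaces the last `x` with `t`:

```shell
#!/bin/sh
# Cycle a scratch file through the special-bit modes from the table above.
d=$(mktemp -d)
f="$d/demo"
touch "$f"

chmod 0755 "$f"; stat -c '%a %A' "$f"   # 755  -rwxr-xr-x
chmod 4755 "$f"; stat -c '%a %A' "$f"   # 4755 -rwsr-xr-x  (setuid)
chmod 2755 "$f"; stat -c '%a %A' "$f"   # 2755 -rwxr-sr-x  (setgid)
chmod 1755 "$f"; stat -c '%a %A' "$f"   # 1755 -rwxr-xr-t  (sticky)

rm -rf "$d"
```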

Diff ext2 ext3 ext4


xxxxxxxxxxxxxxxxxxxxx Diff ext2 ext3 ext4 xxxxxxxxxxxxxxxxxxxxxxxxx
extN stands for the "Nth extended file system".

ext2 :
- Introduced with kernel 1.0 in 1993
- Flexible; can handle filesystems up to 4TB
- Supports file names up to 255 characters
- The superblock feature increases file system performance
- ext2 reserves 5% of disk space for root
- ext2 is popular on USB sticks and other solid-state devices.
  This is because it does not have a journaling function,
  so it generally makes fewer reads and writes to the drive,
  effectively extending the life of the device.
- No journaling

ext3 :
- Provides all the features of ext2, plus journaling and backward compatibility.
- You can upgrade ext2 to ext3 without loss of data.
- The journaling feature speeds up recovery after a power failure
  or an improper mount/unmount.
- Example: with ext2, after an improper unmount or a sudden power-off,
  recovery has to check the whole file system.
  But ext3 keeps a record of uncommitted file transactions and only
  checks those, so the system comes back up faster.

ext4: 
- Introduced with kernel 2.6.28
- ext4 is a deeper improvement over ext3
- Supports larger filesystems, faster checking, nanosecond timestamps,
  and verification of the journal through checksums.
- It is backward compatible with ext2 and ext3, so we can
  mount an ext2 or ext3 filesystem as ext4.
- The main benefits that ext4 has over ext3 are:
  - faster time-stamping
  - faster file system checking
  - journaling check-sums
  - extents (basically automatic space allocation to avoid fragmentation)

What is journaling in a Linux file system?
A journaling file system is a file system that keeps track of the changes that will be made in
a journal (usually a circular log in a dedicated area of the file system) before committing them to
the main file system. In the event of a system crash or power failure, such file systems are quicker
to bring back online and less likely to become corrupted.
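To check whether an existing filesystem has a journal, look for the has_journal feature flag. A sketch over sample `tune2fs -l` output (embedded here so it runs standalone; on a real system you would run `tune2fs -l /dev/sdXN` as root and the feature list would be longer):

```shell
#!/bin/sh
# Decide ext2 vs ext3/ext4 by looking for the has_journal feature flag
# in tune2fs -l style output.
tune2fs_sample='Filesystem volume name:   <none>
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype
Filesystem state:         clean'

if printf '%s\n' "$tune2fs_sample" | grep -q 'has_journal'; then
    echo "journaling filesystem (ext3/ext4)"
else
    echo "no journal (ext2)"
fi
# prints: journaling filesystem (ext3/ext4)
```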