Thursday, August 4, 2011

How to Set Up GFS2 or GFS on CentOS Linux

It has been a nightmare for me setting up GFS2 with my 3 shared hosting servers and 1 SAN storage unit. I have been reading all over the internet, and the solutions I found were either outdated or contained bugs that stopped my SAN storage from working. Finally, I managed to set up GFS2 on my Dell MD3200i with 10TB of disk space.


GFS2/GFS Test Environment


Here is the test environment equipment that I used for this setup:
3 CentOS web servers
1 Dell MD3200i SAN storage
1 switch to connect all of this equipment together


Assumption


I will assume you have already set up all 3 of your CentOS servers to communicate with your iSCSI SAN storage. This means that all 3 CentOS servers can see your newly created LUN using iscsiadm, and that you have switched off iptables and SELinux. If your iSCSI storage hasn't been configured yet, you can follow the guide at cyberciti.
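If you still need to attach the LUN, a minimal iscsiadm sketch looks like this (the portal IP 192.168.0.50 is just a placeholder for your MD3200i iSCSI port, and the device name may differ on your servers):

iscsiadm -m discovery -t sendtargets -p 192.168.0.50   # discover targets on the SAN portal
iscsiadm -m node --login                               # log in to the discovered targets
fdisk -l                                               # the new LUN should appear as a block device, e.g. /dev/sdb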


Setup GFS2/GFS packages


On all 3 of your CentOS servers, you must install the following packages:


cman
gfs-utils
kmod-gfs
kmod-dlm
modcluster
ricci
luci
cluster-snmp
iscsi-initiator-utils
openais
oddjob
rgmanager


Or you can simply run the following yum command on all 3 CentOS machines:

yum install -y cman gfs-utils kmod-gfs kmod-dlm modcluster ricci luci cluster-snmp iscsi-initiator-utils openais oddjob rgmanager


Or even simpler, you can just install the cluster groups via the following commands:

yum groupinstall -y Clustering
yum groupinstall -y "Storage Cluster"


Oh, and remember to update your CentOS machines before doing any of the above.

yum -y check-update
yum -y update


After you have done all of the above, you should have all the packages needed to set up GFS2/GFS on all 3 CentOS machines.


Configuring the GFS2/GFS Cluster on CentOS


Once you have the required CentOS packages installed, you will need to configure each machine. First, set up the hosts file on all 3 servers with all 3 machine names. I appended all 3 machine names on each server, so every machine has the following additional lines in its /etc/hosts file:

111.111.111.1 gfs1.hungred.com
111.111.111.2 gfs2.hungred.com
111.111.111.3 gfs3.hungred.com

where *.hungred.com is each machine's host name and the IP beside it is that machine's IP address, which allows them to communicate with each other using the IPs stated there.

Next, we need to set up the cluster configuration. On each machine, execute the following instructions to create a proper cluster configuration:
ccs_tool create HungredCluster
ccs_tool addfence -C node1_ipmi fence_ipmilan ipaddr=111.111.111.1 login=root passwd=machine_1_password
ccs_tool addfence -C node2_ipmi fence_ipmilan ipaddr=111.111.111.2 login=root passwd=machine_2_password
ccs_tool addfence -C node3_ipmi fence_ipmilan ipaddr=111.111.111.3 login=root passwd=machine_3_password
ccs_tool addnode -C gfs1.hungred.com -n 1 -v 1 -f node1_ipmi
ccs_tool addnode -C gfs2.hungred.com -n 2 -v 1 -f node2_ipmi
ccs_tool addnode -C gfs3.hungred.com -n 3 -v 1 -f node3_ipmi


Next, you will need to start cman and rgmanager:

service cman start
service rgmanager start
cman should start without any errors. If you get any error while starting cman, your GFS2/GFS will not work. If everything works fine, you should see something like the following when you run the command shown below:



[root@localhost ]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     16   2011-1-06 02:30:27   gfs1.hungred.com
   2   M     20   2011-1-06 02:30:02   gfs2.hungred.com
   3   M     24   2011-1-06 02:36:01   gfs3.hungred.com

If the above shows up, it means you have properly set up your GFS2 cluster. Next, we need to set up GFS2 itself!


Setting up GFS2/GFS on CentOS


You will need to start the following services.


service gfs start
service gfs2 start

Once these two have been started, all you need to do is format your SAN storage LUN. If you want to use GFS2, format it as gfs2:


/sbin/mkfs.gfs2 -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb


Likewise, if you would like to use GFS, just use mkfs.gfs instead of mkfs.gfs2:


/sbin/mkfs.gfs -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb


A little explanation here. HungredCluster is the cluster we created while setting up the GFS2 cluster. /dev/sdb is the SAN storage LUN which was discovered using iscsiadm. -j 10 is the number of journals; each machine within the cluster requires one journal, so it is good to determine beforehand how many machines you will place in this cluster. -p lock_dlm is the lock type we will be using; there are two other lock protocols besides lock_dlm which you can look up online.
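If you later add more machines than you created journals for, you can grow the journal count on a mounted GFS2 file system with gfs2_jadd (shown here against the /home mount point used further down; adjust the count and mount point to your setup):

gfs2_jadd -j 2 /home   # add 2 more journals to the mounted GFS2 file system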


P.S: All of the servers that will belong to the GFS cluster will need to be located in the same VLAN. Contact support if you need assistance regarding this.


If you are only configuring two servers in the cluster, you will need to manually edit the /etc/cluster/cluster.conf file on each server and tell cman that this is a two-node cluster, as shown below.


If you do not make this change, the servers will not be able to establish a quorum and will refuse to cluster by design.
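The commonly used setting for a two-node cluster is to add the following line inside the <cluster> element of cluster.conf (verify the attribute names against your cman version):

<cman two_node="1" expected_votes="1"/>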


Set up GFS2/GFS to run on startup


Run the following to ensure that GFS2/GFS starts every time the system reboots.


chkconfig gfs on
chkconfig gfs2 on
chkconfig clvmd on   # if you are using LVM
chkconfig cman on
chkconfig iscsi on
chkconfig acpid off
chkconfig rgmanager on
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
mount /dev/sdb


Once this is done, your GFS2/GFS file system will be mounted on your system at /home. You can check whether it works using the following command:



[root@localhost ~]# df -h


You should now be able to create files on one of the nodes in the cluster, and have the files appear right away on all the other nodes in the cluster.
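For example (gfs-test.txt is just a throwaway file name, and /home is the mount point from the fstab line above):

[root@gfs1 ~]# touch /home/gfs-test.txt     # create a file on node 1
[root@gfs2 ~]# ls -l /home/gfs-test.txt     # it should be visible immediately on node 2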


Optimize clvmd


We can optimize clvmd by controlling the type of locking LVM uses.



vi /etc/lvm/lvm.conf
# find the variables below and change them to the values shown
locking_type = 3
fallback_to_local_locking = 0
service clvmd restart

Credit goes to http://pbraun.nethence.com/doc/filesystems/gfs2.html


Optimize GFS2/GFS
There are a few ways to optimize your gfs file system. Here are some of them.
Set your plock rate to unlimited and plock ownership to 1 in /etc/cluster/cluster.conf.
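A commonly cited form of this setting (an assumption on my part, in the spirit of the linuxdynasty guide credited below; adjust for your cman version) is:

<gfs_controld plock_rate_limit="0" plock_ownership="1"/>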
Set noatime and nodiratime in your fstab.
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
Lastly, we can tune GFS2 directly by decreasing how often it demotes its locks, via this method:
1 echo "
2 gfs2_tool settune /GFS glock_purge 50
3 gfs2_tool settune /GFS scand_secs 5
4 gfs2_tool settune /GFS demote_secs 20
5 gfs2_tool settune /GFS quota_account 0
6 gfs2_tool settune /GFS statfs_fast 1
7 gfs2_tool settune /GFS statfs_slots 128
8 " >> /etc/rc.local


Credit goes to linuxdynasty.

iptables and GFS2/GFS ports

If you wish to keep iptables active, you will need to open up the following ports.


-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 5404,5405 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 8084 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 21064 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 41966,41967,41968,41969 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50006,50008,50009 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT
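On CentOS you can persist and reload these rules with the stock iptables init script, assuming you keep your rules in /etc/sysconfig/iptables:

service iptables save      # writes the running rules to /etc/sysconfig/iptables
service iptables restart   # reloads the saved rules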


Once these ports are open in your iptables, cman should be able to restart properly without getting stuck at either the fencing or the cman startup stage. Good luck!


Troubleshooting


You might face some problems setting up GFS2 or GFS. Here are a few notes which might be of some help.


CMAN fencing failed
You get something like the following when you start cman:
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... failed


One possibility is that your GFS2 file system has already been mounted on a drive; hence, fencing failed. Try to unmount it and start cman again.


mount.gfs2 error
If you are getting the following error:

mount.gfs2: can't connect to gfs_controld: Connection refused

you need to start the cman service first.

Clearing the kernel cache in Linux

This is about the drop_caches tunable. It's available in kernel 2.6.16 and above, and lives in /proc/sys/vm.

If you echo various values to it, various kernel cache data structures are dropped. This is a non-destructive operation, so if memory still appears to be in use afterwards, it is likely dirty cache. Anyhow, on to the values:

1 - drop the pagecache

2 - drop the dentry and inode caches

3 - drop both the dentry and inode caches, as well as the pagecache.

echo 3 > /proc/sys/vm/drop_caches as a root or admin user
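It is common practice to flush dirty pages to disk first, so that more of the cache can actually be reclaimed:

sync
echo 3 > /proc/sys/vm/drop_caches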
How to find the gateway IP address on Windows, Linux and Mac

A gateway is a node that allows traffic to pass into a network and back out of it. On the Internet, the gateway is the node that routes traffic between your network and your ISP (Internet Service Provider). In most homes, the gateway is the device provided by the ISP that connects users to the internet. We can find the gateway IP address on Windows, Linux and Mac as shown below.

On Windows :

Click Start -> Run -> Type cmd to launch command prompt.

In command prompt type :

route PRINT

The output will have a list of gateway addresses for the particular system.

On Linux :

Open a terminal and run:

$ route -n

This will display the routing table as below

Kernel IP routing table

Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
192.168.31.0    192.168.1.1     255.255.255.0   UG     0       0    0    eth0
127.0.0.0       0.0.0.0         255.0.0.0       U      0       0    0    lo

In the above output, the default gateway for the system is 192.168.1.1, as the flag is set to G (gateway) for that route.
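On newer distributions you can also query the default route directly with the iproute2 tools; the output shown is illustrative for the gateway in this example:

$ ip route show default
default via 192.168.1.1 dev eth0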

On Mac :

Open Terminal and type:

$ netstat -rn

The output will look like the following:

Routing tables

Internet:

Destination Gateway Flags Refs Use Netif Expire

default 192.168.1.1 UGSc 7 0 en0

127 localhost UCS 0 0 lo0

localhost localhost UH 1 7373 lo0

The output shows the default gateway for the Mac as 192.168.1.1, as the flag for the default network connection is set to "G".

How to erase the hard disk completely using Linux?

Erasing the entire hard disk using Linux:

Run the following command to fill your entire hard disk with zeros

$ dd if=/dev/zero of=/dev/sda bs=1M

The above command will fill the whole hard disk with zeros. Note: this will take a long time to complete.

Erasing/filling the hard Disk with random data:

We can fill the hard disk with random data if there is a security requirement to prevent data recovery. Run the command below:

$ dd if=/dev/random of=/dev/sda bs=1M
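Note that /dev/random can block for long stretches while it waits for entropy; for wiping a whole disk, /dev/urandom is the usual choice:

$ dd if=/dev/urandom of=/dev/sda bs=1M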

Erasing the MBR(Master Boot Record) of the Hard Disk.

To Erase only code area in your MBR:

Run

$ dd if=/dev/zero of=/dev/sda bs=446 count=1

To Erase entire MBR :

Run

$ dd if=/dev/zero of=/dev/sda bs=512 count=1
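To verify that the first sector really has been zeroed, you can dump it (assuming hexdump is available on your system):

$ dd if=/dev/sda bs=512 count=1 | hexdump -C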

Note : If Linux is not installed, boot into a Linux image from a live CD to perform the above operations on the hard disk.




How to setup a Red Hat cluster and GFS2 on Red Hat Enterprise Linux 6 on VMware ESXi

1) Installing RHEL6 on VMware ESXi with clustering packages.


a) Creating a RedHat Enterprise Linux 6.0 Virtual image.

i) Open vSphere Client by connecting to a Vmware ESXi Server.

ii) Login into your vSphere Client

iii) Goto File -> New -> Virtual Machine (VM).

iv) Select Custom option in Create New Virtual Machine Window and click Next

v) Give a name to the virtual machine(VM) ( In my case name of my virtual machine is – RHEL6-ClusterNode1) and click next.

vi) Select a resource pool where you want your VM to reside ( In my case , I have created a resource pool named RHEL6-Cluster.) and click Next.

vii) Select a datastore to store your VM files and Click Next.

viii) Select VM version which is suitable for your environment.( In my case VM version is 7) and click Next.

ix) Specify the guest operating system type as Linux and select version as RedHat Enterprise Linux 6.0 -32 bit. Click Next.

x) Select number of CPU for the VM ( you can assign multiple CPU if your processor is multicore.) (in my case : I had assigned 1 cpu) and Click Next.

xi) Configure the memory for your VM (assign the memory wisely, so that VM performance is not degraded when multiple VM’s run in parallel). Click Next.

xii) Create Network Connection for your VM ( generally do not change the default connection ) . Click Next.

xiii) Select SCSI controller as LSI Logic Parallel , Click Next.

xiv) Select “Create New Virtual Disk” and Click Next.

xv) Allocate virtual disk capacity for the VM as needed (in my case, the virtual disk size was 10GB). Select “Support clustering features such as Fault Tolerance”, select “Specify a datastore”, and assign a datastore to store the VM. Click Next.

xvi) Under Advanced options, Let the Virtual Device Node be SCSI(0:0). Click Next.

xvii) On the “Ready to Complete” window, select “Edit the virtual machine settings before completion” and click Continue.

xviii) On the “ RHEL6-Cluster1 – VM properties window”, select New SCSI controller and change the SCSI bus sharing type from None to “Virtual” so that virtual disks can be shared between VM’s

xix) Similarly for “New CD/DVD” supply either client device or host device or Operating system installer ISO file located on the datastore to start the installation of the operating system. Note: do not forget to enable “Connect at power on “ option for Host Device or Datastore ISO device option.

xx) Now click Finish. You are now ready to start the installation of the RHEL6 operating system on the virtual machine.

2) Installing RedHat Enterprise for Linux 6.0 on the Virtual Machine.

a) File System Partitioning for the RHEL6.0 VM.

i) Start the RHEL Installation.

ii) Select custom partitioning for disk.

iii) Create a /boot partition of 512MB

iv) Create physical LVM Volume from remaining free space on the virtual disk.

v) Create logical volume group and create a logical volume for swap and “/” on the available LVM disk space.

vi) Apply the above changes to create the partition structure.

b) Selecting the packages required for clustering

i) Select the packages to be installed on the disk by choosing custom package selection (enable the additional repositories High Availability and Resilient Storage).

ii) Select all packages under High Availability, Resilient storage. Click next to start installation of the operating system.

Note : At the end of the installation cman, luci, ricci, rgmanager, clvmd, modclusterd, gfs2-tools packages will get installed onto the system.

iii) After the operating system is installed, restart the VM, boot into it to perform the post-installation tasks, and then shut down the guest RHEL6.0 VM.


3) Cloning the RHEL6.0 VM image into two copies named as RHEL6-Cluster2 and RHEL6-Cluster3.

i) Open the datastore of your VMware ESXi by right clicking and selecting “Browse Datastore” on the datastore in the summary page of the ESXi console.

ii) Create two directories RHEL6-Cluster2 and RHEL6-Cluster3

iii) Copy the VM image files from RHEL6-Cluster1 directory to above two directories i.e., RHEL6-Cluster2 and RHEL6-Cluster3.

iv) Once you have copied all the files to their respective directories, browse to the RHEL6-Cluster2 directory under the datastore and locate the “RHEL6-Cluster1.vmx” file, right click on it and select “Add to Inventory”.

v) In the “Add to Inventory” window add the VM as RHEL6-Cluster2 and finish the process

vi) Similarly perform previous step to add RHEL6-cluster3 to the inventory.

4) Adding a shared harddisk to all the 3 VM’s

a) Adding a hard disk for clustering to RHEL6-Cluster1 VM/node.

i) In vSphere Client select RHEL6-Cluster1 VM , Open Virtual Machine Properties window by right clicking and selecting “Edit Settings”.

ii) Click on “Add” in Virtual Machine Properties window , Add hardware window pops up.

iii) Select Hard Disk as device type, Click Next.

iv) Select “ Create a new virtual disk” and click Next.

v) Specify the required disk size, select Disk Provisioning as “Support clustering features such as fault tolerance”, and set Location to “Store with the virtual machine”. Click Next.

vi) In the Advanced Options window, Select the Virtual Device Node as : SCSI (1:0). Click Next. Complete the “Add hardware “ process.

vii) On the “ RHEL6-Cluster1 – VM properties window”, select SCSI controller 1 and change the SCSI bus sharing type from None to “Virtual” so that virtual disks can be shared between VM’s.

b) Sharing the RHEL6-Cluster1 node’s additional hard disk with other two VM/cluster nodes.

i) In vSphere Client select RHEL6-Cluster2 VM , Open Virtual Machine Properties window by right clicking and selecting “Edit Settings”.

ii) Click on “Add” in Virtual Machine Properties window , Add hardware window pops up.

iii) Select Hard Disk as device type, Click Next.

iv) Select “Use an existing virtual disk” and click Next.

v) Browse the datastore, locate the RHEL6-Cluster1 directory and select RHEL6-Cluster1_1.vmdk to add it as the second hard disk to the VM (Note: the additional hard disk will be named VMname_1, _2 or _3.vmdk; do not select RHEL6-Cluster1.vmdk, as that is your VM image file). Click Next.

vi) In the Advanced Options window, Select the Virtual Device Node as : SCSI (1:0). Click Next. Complete the “Add hardware “ process.

vii) On the “ RHEL6-Cluster2 – VM properties window”, select SCSI controller 1 and change the SCSI bus sharing type from None to “Virtual” so that virtual disks can be shared between VM’s.

c) Similarly perform the above steps described under section (b) for the 3rd node.

5) Configuring the static IP address, hostname and /etc/hosts file on all three nodes.

Assign the static IP addresses to all the three VM as below

Ex :

192.168.1.10 RHEL6-Cluster1

192.168.1.11 RHEL6-Cluster2

192.168.1.12 RHEL6-Cluster3

Gateway in this case is :192.168.1.1

DNS in this case is : 192.168.1.1

DOMAIN in this case is: linuxlabs.com

i) To assign the above IPs and hostnames, start all three VMs.

ii) Note : When you start a VM, the NetworkManager daemon/service on RHEL6 will have brought up the network by getting an IP address from DHCP and assigning it to eth0 or eth1. Note down the hardware address of your active Ethernet interface by running the ifconfig command (the HWaddr will look like 00:0C:29:86:D3:E6; it needs to be added to “/etc/sysconfig/network-scripts/ifcfg-eth0”, depending on which Ethernet port is active on your image).

iii) Disable and stop the NetworkManager daemon, as the other cluster-related daemons require it to be off.

To stop the network manager daemon, run:

/etc/init.d/NetworkManager stop

To disable the network manager daemon service , run:

chkconfig --level 345 NetworkManager off

iv) Add the following details to “/etc/sysconfig/network-scripts/ifcfg-eth0” file

DEVICE="eth0"

NM_CONTROLLED="no"

ONBOOT="yes"

HWADDR=00:0C:29:96:D3:E6

TYPE=Ethernet

BOOTPROTO=none

IPADDR=192.168.1.10

PREFIX=24

GATEWAY=192.168.1.1

DNS1=192.168.1.1

DOMAIN=linuxlabs.com

DEFROUTE=yes

IPV4_FAILURE_FATAL=yes

IPV6INIT=no

NAME="System eth0"

UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03


Note : HWADDR and DEVICE may change from VM to VM.
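After editing the file, restart the network service and confirm that the static address has been applied:

service network restart
ifconfig eth0   # should now show the static address you assigned, e.g. 192.168.1.10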

v) Now add hostnames as RHEL6-cluster1 or RHEL6-Cluster2 or RHEL6-Cluster3 to “/etc/sysconfig/network” file inside each VM.

After adding the hostname the “/etc/sysconfig/network” file will look like as below:

NETWORKING=yes

HOSTNAME=RHEL6-Cluster1

vi) Now add hostname resolution information to /etc/hosts file. As below.


#192.168.1.232 RHEL6-Cluster1 # Added by NetworkManager

192.168.1.10 RHEL6-Cluster1.linuxlabs.com

#127.0.0.1 localhost.localdomain localhost

#::1 RHEL6-Cluster1 localhost6.localdomain6 localhost6

192.168.1.11 RHEL6-Cluster2.linuxlabs.com

192.168.1.12 RHEL6-Cluster3.linuxlabs.com

Note : Similarly perform the above steps on the other two VM’s .

vii) After configuring all 3 VMs, restart them and verify the network connection by pinging each VM from the others to confirm the network configuration is correct and working.


6) Configuring the cluster on RHEL6.0 with High Availability Management web UI.

i) Start the luci service on all 3 nodes by running the following command in a terminal:
service luci start

ii) Start the ricci service on all 3 nodes by running the following command in a terminal. The ricci daemon listens on port 11111.

service ricci start

iii) Open a browser and go to https://rhel6-cluster1.linuxlabs.com:8084/ to reach the High Availability Management console.

iv) Login into the console with your root user credentials.

v) Create a cluster as “mycluster”

vi) Add All the 3 client nodes to the cluster as below:

Node Host name Root Password Ricci Port

RHEL6-Cluster1.linuxlabs.com ********* 11111

RHEL6-Cluster2.linuxlabs.com ********* 11111

RHEL6-Cluster3.linuxlabs.com ********* 11111

Click on “Create Cluster” to create and Add all the nodes to the cluster.

By performing the above action, all 3 nodes are now part of the cluster “mycluster”. The cluster.conf file under “/etc/cluster/cluster.conf” on all three nodes gets populated with something like the output of the command below.

[root@RHEL6-Cluster1 ~]# cat /etc/cluster/cluster.conf
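An illustrative skeleton of the generated file looks roughly like the following; your actual file will differ (for example in config_version, fence devices and any attributes luci adds):

<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="RHEL6-Cluster1.linuxlabs.com" nodeid="1"/>
    <clusternode name="RHEL6-Cluster2.linuxlabs.com" nodeid="2"/>
    <clusternode name="RHEL6-Cluster3.linuxlabs.com" nodeid="3"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm/>
</cluster>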


7) Creating GFS2 file system with clustering.

a) Once you have created a cluster and added all 3 nodes as cluster members, run the following command on all three nodes to verify the cluster node status:

ccs_tool lsnode

the output will be

[root@RHEL6-Cluster1 ~]# ccs_tool lsnode

Cluster name: mycluster, config_version: 1

Nodename Votes Nodeid Fencetype

RHEL6-Cluster1.linuxlabs.com 1 1

RHEL6-Cluster2.linuxlabs.com 1 2

RHEL6-Cluster3.linuxlabs.com 1 3


b) Now start the cman and rgmanager service on all 3 nodes by running command

service cman start

service rgmanager start

c) now check the status of your cluster by running the commands below.

clustat

cman_tool status

The output of the clustat command would be something like:


[root@RHEL6-Cluster1 ~]# clustat

Cluster Status for mycluster @ Wed Jul 6 16:27:36 2011

Member Status: Quorate

Member Name ID Status

------ ---- ---- ------

RHEL6-Cluster1.linuxlabs.com 1 Online, Local

RHEL6-Cluster2.linuxlabs.com 2 Online

RHEL6-Cluster3.linuxlabs.com 3 Online

The output of the cman_tool status command would be something like:


[root@RHEL6-Cluster1 ~]# cman_tool status

Version: 6.2.0

Config Version: 1

Cluster Name: mycluster

Cluster Id: 65461

Cluster Member: Yes

Cluster Generation: 48

Membership state: Cluster-Member

Nodes: 3

Expected votes: 3

Total votes: 3

Node votes: 1

Quorum: 2

Active subsystems: 9

Flags:

Ports Bound: 0 11 177

Node name: RHEL6-Cluster1.linuxlabs.com

Node ID: 1

Multicast addresses: 239.192.255.181

Node addresses: 192.168.1.10

d) Now we need to enable clustering on LVM2 by running the command below:

lvmconf --enable-cluster

e) Now we need to create the LVM2 volumes on the additional hard disk attached to the VM. Follow the commands below exactly.

pvcreate /dev/sdb

vgcreate -c y mygfstest_gfs2 /dev/sdb

lvcreate -n mytestGFS2 -L 7G mygfstest_gfs2

Note : By executing the above commands in order, we have created the LVM physical volume, the volume group and the logical volume.
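You can confirm that the physical volume, volume group and logical volume were created with the standard LVM reporting commands:

pvs   # lists physical volumes, should show /dev/sdb
vgs   # lists volume groups, should show mygfstest_gfs2
lvs   # lists logical volumes, should show mytestGFS2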

f) Now start clvmd service on all 3 nodes by running:

service clvmd start

g) Now we have to create a GFS2 file system on the above LVM volume. To create the GFS2 file system , run the command as below:

The format of the command is:

mkfs -t gfs2 -p <LockProtocol> -t <ClusterName>:<FSName> -j <NumberOfJournals> <BlockDevice>

mkfs -t gfs2 -p lock_dlm -t mycluster:mygfs2 -j 4 /dev/mapper/mygfstest_gfs2-mytestGFS2

this will format the LVM device and create a GFS2 file system .

h) Now we have to mount the GFS2 file system on all the 3 nodes by running the command as below:

mount /dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2

where /GFS2 is mount point. You might need to create /GFS2 directory to create a mount point for the GFS2 file system.

i) Congrats, your GFS2 file system setup with cluster is ready for use.

Run the commands below to see the size and mount details of the file system:

mount

df -kh

8) Now that we have a fully functional cluster and a mountable GFS2 file system, we need to make sure all the necessary daemons start up with the cluster whenever VM are restarted.

chkconfig --level 345 luci on

chkconfig --level 345 ricci on

chkconfig --level 345 rgmanager on

chkconfig --level 345 clvmd on

chkconfig --level 345 cman on

chkconfig --level 345 modclusterd on

chkconfig --level 345 gfs2 on

a) If you want the GFS2 file system to be mounted at startup, you can add the file system and mount point details to the /etc/fstab file:

echo "/dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2 gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab