Thursday, August 4, 2011

How to set up a Red Hat cluster and GFS2 on Red Hat Enterprise Linux 6 on VMware ESXi

1) Installing RHEL6 on VMware ESXi with clustering packages.


a) Creating a Red Hat Enterprise Linux 6.0 virtual machine image.

i) Open the vSphere Client and connect to your VMware ESXi server.

ii) Log in to the vSphere Client.

iii) Go to File -> New -> Virtual Machine (VM).

iv) Select the Custom option in the Create New Virtual Machine window and click Next.

v) Give the virtual machine (VM) a name (in my case the name is RHEL6-Cluster1) and click Next.

vi) Select the resource pool where you want the VM to reside (in my case I created a resource pool named RHEL6-Cluster) and click Next.

vii) Select a datastore to store the VM files and click Next.

viii) Select the VM version suitable for your environment (in my case, VM version 7) and click Next.

ix) Specify the guest operating system type as Linux and the version as Red Hat Enterprise Linux 6 (32-bit). Click Next.

x) Select the number of CPUs for the VM (you can assign multiple CPUs if your processor is multi-core; in my case I assigned 1 CPU) and click Next.

xi) Configure the memory for the VM (assign memory wisely so that performance does not degrade when multiple VMs run in parallel). Click Next.

xii) Create the network connection for the VM (generally the default connection can be left unchanged). Click Next.

xiii) Select LSI Logic Parallel as the SCSI controller and click Next.

xiv) Select "Create a new virtual disk" and click Next.

xv) Allocate the virtual disk capacity as needed (in my case, 10 GB). Select "Support clustering features such as Fault Tolerance", select "Specify a datastore", assign a datastore to store the disk, and click Next.

xvi) Under Advanced Options, leave the Virtual Device Node as SCSI (0:0). Click Next.

xvii) On the "Ready to Complete" window, select "Edit the virtual machine settings before completion" and click Continue.

xviii) On the "RHEL6-Cluster1 - Virtual Machine Properties" window, select the New SCSI Controller and change the SCSI Bus Sharing type from None to "Virtual" so that virtual disks can be shared between VMs.

xix) Similarly, for "New CD/DVD" select Client Device, Host Device, or a datastore ISO file pointing to the operating system installer ISO, so the operating system installation can start. Note: do not forget to enable the "Connect at power on" option for the Host Device or Datastore ISO options.

xx) Now click Finish. You are ready to start installing the RHEL6 operating system on the virtual machine.

2) Installing Red Hat Enterprise Linux 6.0 on the virtual machine.

a) File system partitioning for the RHEL6.0 VM.

i) Start the RHEL installation.

ii) Select custom partitioning for the disk.

iii) Create a /boot partition of 512 MB.

iv) Create an LVM physical volume from the remaining free space on the virtual disk.

v) Create a volume group, then create logical volumes for swap and "/" in the available LVM space (an example layout is sketched after this list).

vi) Apply the above changes to create the partition structure.
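For illustration only, the finished layout on the 10 GB virtual disk might look roughly like this (the volume group and logical volume names vg_rhel6, lv_swap, and lv_root are hypothetical, and the sizes are approximate):

/dev/sda1              512 MB     /boot
/dev/sda2              ~9.5 GB    LVM physical volume
  vg_rhel6                        volume group on /dev/sda2
    lv_swap            1 GB       swap
    lv_root            ~8.5 GB    /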

b) Selecting the packages required for clustering

i) Choose custom package selection and enable the additional repositories High Availability and Resilient Storage.

ii) Select all packages under High Availability and Resilient Storage. Click Next to start the installation of the operating system.

Note: at the end of the installation, the cman, luci, ricci, rgmanager, clvmd, modclusterd, and gfs2-tools packages will be installed on the system.

iii) After the operating system is installed, restart the VM, boot into it, perform any post-installation tasks, and then shut down the guest RHEL6.0 VM.
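If the package groups were missed during installation, they can also be installed afterwards with yum, assuming the High Availability and Resilient Storage channels are available to the system:

yum groupinstall "High Availability" "Resilient Storage"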


3) Cloning the RHEL6.0 VM image into two copies named RHEL6-Cluster2 and RHEL6-Cluster3.

i) Open the datastore browser by right-clicking the datastore on the Summary page of the ESXi console and selecting "Browse Datastore".

ii) Create two directories, RHEL6-Cluster2 and RHEL6-Cluster3.

iii) Copy the VM image files from the RHEL6-Cluster1 directory to the two new directories, i.e., RHEL6-Cluster2 and RHEL6-Cluster3.

iv) Once all files are copied to their respective directories, browse to the RHEL6-Cluster2 directory in the datastore, locate the "RHEL6-Cluster1.vmx" file, right-click it, and select "Add to Inventory".

v) In the "Add to Inventory" window, add the VM as RHEL6-Cluster2 and finish the process.

vi) Perform the previous step similarly to add RHEL6-Cluster3 to the inventory.
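As an aside, if you have shell access to the ESXi host, a virtual disk can also be cloned from the command line with vmkfstools; the datastore paths below are illustrative:

vmkfstools -i /vmfs/volumes/datastore1/RHEL6-Cluster1/RHEL6-Cluster1.vmdk /vmfs/volumes/datastore1/RHEL6-Cluster2/RHEL6-Cluster2.vmdk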

4) Adding a shared hard disk to all 3 VMs

a) Adding a hard disk for clustering to the RHEL6-Cluster1 VM/node.

i) In the vSphere Client, select the RHEL6-Cluster1 VM and open the Virtual Machine Properties window by right-clicking and selecting "Edit Settings".

ii) Click "Add" in the Virtual Machine Properties window; the Add Hardware window pops up.

iii) Select Hard Disk as the device type and click Next.

iv) Select "Create a new virtual disk" and click Next.

v) Specify the required disk size, select "Support clustering features such as Fault Tolerance" for Disk Provisioning, and select "Store with the virtual machine" for Location. Click Next.

vi) In the Advanced Options window, select SCSI (1:0) as the Virtual Device Node. Click Next and complete the Add Hardware process.

vii) On the "RHEL6-Cluster1 - Virtual Machine Properties" window, select SCSI controller 1 and change the SCSI Bus Sharing type from None to "Virtual" so that the virtual disk can be shared between VMs.

b) Sharing the RHEL6-Cluster1 node's additional hard disk with the other two VMs/cluster nodes.

i) In the vSphere Client, select the RHEL6-Cluster2 VM and open the Virtual Machine Properties window by right-clicking and selecting "Edit Settings".

ii) Click "Add" in the Virtual Machine Properties window; the Add Hardware window pops up.

iii) Select Hard Disk as the device type and click Next.

iv) Select "Use an existing virtual disk" and click Next.

v) Browse the datastore, open the RHEL6-Cluster1 directory, and select RHEL6-Cluster1_1.vmdk to add it as the second hard disk of the VM. Click Next. (Note: an additional hard disk is named VMname_1.vmdk, _2.vmdk, and so on. Do not select RHEL6-Cluster1.vmdk, as that is the VM's system disk.)

vi) In the Advanced Options window, select SCSI (1:0) as the Virtual Device Node. Click Next and complete the Add Hardware process.

vii) On the "RHEL6-Cluster2 - Virtual Machine Properties" window, select SCSI controller 1 and change the SCSI Bus Sharing type from None to "Virtual" so that the virtual disk can be shared between VMs.

c) Similarly, perform the steps described under section (b) for the third node.

5) Configuring the static IP address, hostname, and /etc/hosts file on all three nodes.

Assign static IP addresses to all three VMs, for example:

192.168.1.10 RHEL6-Cluster1

192.168.1.11 RHEL6-Cluster2

192.168.1.12 RHEL6-Cluster3

Gateway in this case: 192.168.1.1

DNS in this case: 192.168.1.1

Domain in this case: linuxlabs.com

i) To assign the above IPs and hostnames, start all three VMs.

ii) Note: when a VM starts, the NetworkManager daemon on RHEL6 will already have brought up the network by obtaining an IP address from DHCP and assigning it to eth0 or eth1. Note down the hardware (MAC) address of the active Ethernet interface by running the ifconfig command; the HWaddr looks like 00:0C:29:86:D3:E6. This address needs to be added to "/etc/sysconfig/network-scripts/ifcfg-eth0" (or the ifcfg file for whichever Ethernet port is active on your image).
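For example, to list the MAC addresses of all interfaces:

ifconfig -a | grep HWaddr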

iii) Disable and stop the NetworkManager daemon, as the cluster-related daemons require it to be off.

To stop the network manager daemon, run:

/etc/init.d/NetworkManager stop

To disable the NetworkManager service, run:

chkconfig --level 345 NetworkManager off

iv) Add the following details to the "/etc/sysconfig/network-scripts/ifcfg-eth0" file:

DEVICE="eth0"

NM_CONTROLLED="no"

ONBOOT="yes"

HWADDR=00:0C:29:96:D3:E6

TYPE=Ethernet

BOOTPROTO=none

IPADDR=192.168.1.10

PREFIX=24

GATEWAY=192.168.1.1

DNS1=192.168.1.1

DOMAIN=linuxlabs.com

DEFROUTE=yes

IPV4_FAILURE_FATAL=yes

IPV6INIT=no

NAME="System eth0"

UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03


Note: HWADDR, UUID, and DEVICE may differ from VM to VM.
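After saving the file, the new settings can be applied by restarting the network service (the reboot in step vii below also works):

service network restart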

v) Now set the hostname (RHEL6-Cluster1, RHEL6-Cluster2, or RHEL6-Cluster3, respectively) in the "/etc/sysconfig/network" file inside each VM.

After adding the hostname, the "/etc/sysconfig/network" file will look like this:

NETWORKING=yes

HOSTNAME=RHEL6-Cluster1
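To make the new hostname take effect without a reboot, it can also be set at runtime (use the matching name on each node):

hostname RHEL6-Cluster1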

vi) Now add the hostname resolution information to the /etc/hosts file, as below.


#192.168.1.232 RHEL6-Cluster1 # Added by NetworkManager

192.168.1.10 RHEL6-Cluster1.linuxlabs.com

#127.0.0.1 localhost.localdomain localhost

#::1 RHEL6-Cluster1 localhost6.localdomain6 localhost6

192.168.1.11 RHEL6-Cluster2.linuxlabs.com

192.168.1.12 RHEL6-Cluster3.linuxlabs.com

Note: perform the above steps similarly on the other two VMs.

vii) After configuring all 3 VMs, restart them and verify network connectivity by pinging each VM from the others to confirm the configuration is correct and working.
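For example, from RHEL6-Cluster1:

ping -c 3 RHEL6-Cluster2.linuxlabs.com

ping -c 3 RHEL6-Cluster3.linuxlabs.com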


6) Configuring the cluster on RHEL6.0 with the High Availability Management web UI.

i) Start the luci service on all 3 nodes by running the following command in a terminal:

service luci start

ii) Start the ricci service on all 3 nodes by running the following command in a terminal. The ricci daemon listens on port 11111.

service ricci start

iii) Open a browser and go to https://rhel6-cluster1.linuxlabs.com:8084/ to reach the High Availability Management console.

iv) Log in to the console with your root user credentials.

v) Create a cluster named "mycluster".

vi) Add all 3 nodes to the cluster as below:

Node Hostname                     Root Password    Ricci Port

RHEL6-Cluster1.linuxlabs.com      *********        11111

RHEL6-Cluster2.linuxlabs.com      *********        11111

RHEL6-Cluster3.linuxlabs.com      *********        11111

Click "Create Cluster" to create the cluster and add all the nodes to it.

After this action, all 3 nodes are part of the cluster "mycluster", and the file "/etc/cluster/cluster.conf" on all three nodes is populated with something like the output shown below.

[root@RHEL6-Cluster1 ~]# cat /etc/cluster/cluster.conf
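The exact contents depend on your configuration, but for this three-node cluster (with no fencing configured yet) the file should look roughly like this:

<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="RHEL6-Cluster1.linuxlabs.com" nodeid="1"/>
    <clusternode name="RHEL6-Cluster2.linuxlabs.com" nodeid="2"/>
    <clusternode name="RHEL6-Cluster3.linuxlabs.com" nodeid="3"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm/>
</cluster>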


7) Creating a GFS2 file system with clustering.

a) Once you have created the cluster and added all 3 nodes as members, run the following command on all three nodes to verify the cluster node status:

ccs_tool lsnode

The output will look like this:

[root@RHEL6-Cluster1 ~]# ccs_tool lsnode

Cluster name: mycluster, config_version: 1

Nodename                        Votes  Nodeid  Fencetype

RHEL6-Cluster1.linuxlabs.com    1      1

RHEL6-Cluster2.linuxlabs.com    1      2

RHEL6-Cluster3.linuxlabs.com    1      3


b) Now start the cman and rgmanager services on all 3 nodes by running:

service cman start

service rgmanager start

c) Now check the status of your cluster by running the commands below.

clustat

cman_tool status

The output of the clustat command would be something like:


[root@RHEL6-Cluster1 ~]# clustat

Cluster Status for mycluster @ Wed Jul 6 16:27:36 2011

Member Status: Quorate

Member Name                      ID   Status

------ ----                      ---- ------

RHEL6-Cluster1.linuxlabs.com     1    Online, Local

RHEL6-Cluster2.linuxlabs.com     2    Online

RHEL6-Cluster3.linuxlabs.com     3    Online

The output of the cman_tool status command would be something like:


[root@RHEL6-Cluster1 ~]# cman_tool status

Version: 6.2.0

Config Version: 1

Cluster Name: mycluster

Cluster Id: 65461

Cluster Member: Yes

Cluster Generation: 48

Membership state: Cluster-Member

Nodes: 3

Expected votes: 3

Total votes: 3

Node votes: 1

Quorum: 2

Active subsystems: 9

Flags:

Ports Bound: 0 11 177

Node name: RHEL6-Cluster1.linuxlabs.com

Node ID: 1

Multicast addresses: 239.192.255.181

Node addresses: 192.168.1.10

d) Now we need to enable clustering in LVM2 by running:

lvmconf --enable-cluster

e) Now we need to create the LVM2 volumes on the additional hard disk attached to the VM (here it appears as /dev/sdb). Run the following commands in order:

pvcreate /dev/sdb

vgcreate -c y mygfstest_gfs2 /dev/sdb

lvcreate -n mytestGFS2 -L 7G mygfstest_gfs2

Note: by running the above commands in order, we created an LVM physical volume, a clustered volume group, and a logical volume.
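You can verify each step with the standard LVM reporting commands:

pvs

vgs

lvs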

f) Now start the clvmd service on all 3 nodes by running:

service clvmd start

g) Now we have to create a GFS2 file system on the LVM volume created above. To create the GFS2 file system, run the command shown below.

The format of the command is:

mkfs -t <file system type> -p <lock protocol> -t <cluster name>:<file system name> -j <number of journals> <block device>

mkfs -t gfs2 -p lock_dlm -t mycluster:mygfs2 -j 4 /dev/mapper/mygfstest_gfs2-mytestGFS2

This formats the LVM device and creates a GFS2 file system. Here "-j 4" creates four journals: every node that mounts the file system needs its own journal, so this covers the three nodes and leaves one spare.

h) Now mount the GFS2 file system on all 3 nodes by running:

mount /dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2

where /GFS2 is the mount point. You might need to create the /GFS2 directory first (on each node) to serve as the mount point for the GFS2 file system.

i) Congratulations, your clustered GFS2 file system is now ready for use.

To see the size and mount details of the file system, run:

mount

df -kh

8) Now that we have a fully functional cluster and a mountable GFS2 file system, we need to make sure all the necessary daemons start with the cluster whenever the VMs are restarted:

chkconfig --level 345 luci on

chkconfig --level 345 ricci on

chkconfig --level 345 rgmanager on

chkconfig --level 345 clvmd on

chkconfig --level 345 cman on

chkconfig --level 345 modclusterd on

chkconfig --level 345 gfs2 on

a) If you want the GFS2 file system to be mounted at startup, you can add the file system and mount point details to the /etc/fstab file:

echo "/dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2 gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
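To test the new fstab entry without rebooting, unmount the file system and then let mount pick it up from /etc/fstab:

umount /GFS2

mount -a -t gfs2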