I/O fencing is one of the most important features of VCS, providing the data integrity a cluster needs during a split-brain. Let us now see how to configure fencing in a VCS setup. This walkthrough assumes you already have a VCS cluster up and running without fencing. Fencing can be configured in two ways: by using the installer script, or from the command line.
Before configuring the disks for fencing, you can run a test to confirm that they are SCSI3-PGR compliant. The vxfentsthdw script guides you through testing each disk to be placed in the fencing disk group; you can also pass it the fencing DG name directly to test an entire DG. Note that the default test is destructive to data on the disks, so run it before the disks hold anything valuable.
# vxfentsthdw
# vxfentsthdw -c vxfencoorddg
Steps to configure I/O fencing
##### Using the installer script ######
1. Initialize disks for I/O fencing
The minimum number of disks required to configure I/O fencing is three, and the number of fencing disks must always be odd. We will use 3 disks of around 500 MB each, as we have a 2-node cluster. Initialize the disks to be used for the fencing disk group. We can also confirm that the disks are SCSI3-PGR compliant by running the vxfentsthdw command against the fencing DG.
# vxdisk -eo alldgs list
# vxdisksetup -i disk01
# vxdisksetup -i disk02
# vxdisksetup -i disk03
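The odd-count rule exists because fencing is decided by a majority race: during a split-brain, the subcluster that wins keys on more than half of the coordinator disks survives, and an even count could end in a tie. A minimal sketch of that rule (the function name is my own, not a VCS command):

```shell
#!/bin/sh
# Hypothetical helper: validate a proposed coordinator disk count.
# VCS requires at least 3 disks, and an odd number so that a
# majority (> n/2) is always decidable during the fencing race.
valid_coord_count() {
    n=$1
    [ "$n" -ge 3 ] && [ $((n % 2)) -eq 1 ]
}

for n in 2 3 4 5; do
    if valid_coord_count "$n"; then
        echo "$n disks: ok"
    else
        echo "$n disks: invalid"
    fi
done
```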
2. Run the installvcs script from the install media with fencing option
# cd /cdrom/VRTS/install
# ./installvcs -fencing

Cluster information verification:

        Cluster Name: geekdiary
        Cluster ID Number: 3
        Systems: node01 node02

Would you like to configure I/O fencing on the cluster? [y,n,q] y
3. Select disk based fencing
We will configure disk based fencing rather than server based fencing, which is also called CP (coordination point) client based fencing.
Fencing configuration
1) Configure CP client based fencing
2) Configure disk based fencing
3) Configure fencing in disabled mode

Select the fencing mechanism to be configured in this Application Cluster: [1-3,q] 2
4. Create new disk group
You can create a new disk group or use an existing disk group for fencing. We will use a new fencing DG, which is the preferred approach.
Since you selected disk based fencing, the installer asks you either to name an existing disk group to hold the coordinator disks or to create a new disk group, and to choose the mechanism to be used.

Select one of the options below for fencing disk group:
1) Create a new disk group
2) Using an existing disk group
3) Back to previous menu

Press the choice for a disk group: [1-2,b,q] 1
5. Select disks to be used for the fencing DG
Select the disks which we initialized in step 1 to create our new disk group.
List of available disks to create a new disk group
1) [disk name 1]
2) disk01
3) disk02
4) disk03
...
b) Back to previous menu

Select an odd number of disks and at least three disks to form a disk group.
Enter the disk options, separated by spaces: [1-4,b,q] 2 3 4
6. Enter the fencing disk group name, fendg
Enter the new disk group name: [b] fendg
7. Select the fencing mechanism: raw or dmp (dynamic multipathing)
Enter fencing mechanism name (raw/dmp): [b,q,?] dmp
8. Confirm configuration and warnings
I/O fencing configuration verification

        Disk Group: fendg
        Fencing mechanism: dmp

Is this information correct? [y,n,q] (y) y
The installer will stop VCS before applying the fencing configuration. To make sure VCS shuts down successfully, unfreeze any frozen service groups in the cluster beforehand.

Are you ready to stop VCS on all nodes at this time? [y,n,q] (n) y
##### Using Command line ######
1. Initialize disks for I/O fencing
The first step is the same as in the installer method. We will initialize 3 disks of 500 MB each, on one node:
# vxdisk -eo alldgs list
# vxdisksetup -i disk01
# vxdisksetup -i disk02
# vxdisksetup -i disk03
2. Create the fencing disk group fendg
# vxdg -o coordinator=on init fendg disk01
# vxdg -g fendg adddisk disk02
# vxdg -g fendg adddisk disk03
3. Create the /etc/vxfendg file
# vxdg deport fendg
# vxdg -t import fendg
# vxdg deport fendg
# echo "fendg" > /etc/vxfendg     (on both nodes)
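The /etc/vxfendg file must contain only the coordinator disk group name and must be identical on every node; the fencing startup script reads it to build /etc/vxfentab. A hedged sketch of writing and sanity-checking it (TARGET defaults to a scratch path so this can be dry-run; on a real node point it at /etc/vxfendg and repeat on each cluster node, e.g. over ssh):

```shell
#!/bin/sh
# Sketch: write the coordinator DG name into the vxfendg file and
# verify it contains exactly one line with the expected name.
FENDG="fendg"
TARGET="${TARGET:-/tmp/vxfendg}"   # scratch path; use /etc/vxfendg for real

echo "$FENDG" > "$TARGET"

lines=$(wc -l < "$TARGET")
if [ "$lines" -eq 1 ] && [ "$(cat "$TARGET")" = "$FENDG" ]; then
    echo "vxfendg file ok: $(cat "$TARGET")"
else
    echo "vxfendg file malformed" >&2
    exit 1
fi
```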
4. Enable fencing
# haconf -dump -makero
# hastop -all
# /etc/init.d/vxfen stop
# vi /etc/VRTSvcs/conf/config/main.cf     (add the UseFence = SCSI3 entry)

cluster geekdiary (
        UserNames = { admin = "ass76asishmHajsh9S." }
        Administrators = { admin }
        HacliUserLevel = COMMANDROOT
        CounterInterval = 5
        UseFence = SCSI3
        )

# hacf -verify /etc/VRTSvcs/conf/config
# cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode     (if you are using dmp)
5. Start fencing
# /etc/init.d/vxfen start
# /opt/VRTS/bin/hastart
Testing the fencing configuration
1. Check status of fencing
# vxfenadm -d

Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing Mechanism: dmp
Cluster Members:
        * 0 (node01)
          1 (node02)
RSM State Information
        node 0 in state 8 (running)
        node 1 in state 8 (running)
2. Check GAB port “b” status
Port b is the membership port of the I/O fencing driver (vxfen); port a is GAB itself and port h is HAD, the VCS engine.

# gabconfig -a
GAB Port Memberships
==================================
Port a gen 24ec03 membership 01
Port b gen 24ec06 membership 01
Port h gen 24ec09 membership 01
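If you want to script this check, say for monitoring, the port b line can be parsed out of the gabconfig output. A hedged sketch using the sample output above (the awk field positions assume exactly this output layout; on a real node pipe `gabconfig -a` in instead of the embedded sample):

```shell
#!/bin/sh
# Sketch: confirm the fencing driver (port b) has a GAB membership.
# Sample output is embedded so the sketch is self-contained.
gab_output='GAB Port Memberships
==================================
Port a gen 24ec03 membership 01
Port b gen 24ec06 membership 01
Port h gen 24ec09 membership 01'

# On the "Port b ..." line, the membership value is field 6.
membership=$(printf '%s\n' "$gab_output" |
    awk '$1 == "Port" && $2 == "b" { print $6 }')

if [ -n "$membership" ]; then
    echo "port b membership: $membership"
else
    echo "port b missing: fencing is not running" >&2
    exit 1
fi
```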
3. Check for configuration files
# grep SCSI3 /etc/VRTSvcs/conf/config/main.cf
UseFence = SCSI3

# cat /etc/vxfenmode
...
vxfen_mode=scsi3
...
scsi3_disk_policy=dmp

# cat /etc/vxfendg
fendg

# cat /etc/vxfentab
...
/dev/vx/rdmp/emc_dsc01
/dev/vx/rdmp/emc_dsc02
/dev/vx/rdmp/emc_dsc03
4. Check for SCSI reservation keys on all the coordinator disks
In my case there are 2 nodes and 2 paths per disk, so I should see 4 keys per disk (one key per path from each node, i.e. 2 nodes x 2 paths = 4) in the output of the command below.
# vxfenadm -s all -f /etc/vxfentab

Reading SCSI Registration Keys...

Device Name: /dev/vx/rdmp/emc_dsc01
Total Number Of Keys: 4
key[0]:
        [Numeric Format]: 32,74,92,78,21,28,12,65
        [Character Format]: VF000701
   *    [Node Format]: Cluster ID: 5  Node ID: 1  Node Name: node02
key[1]:
        [Numeric Format]: 32,74,92,78,21,28,12,65
        [Character Format]: VF000701
   *    [Node Format]: Cluster ID: 5  Node ID: 1  Node Name: node02
key[2]:
        [Numeric Format]: 32,74,92,78,21,28,12,66
        [Character Format]: VF000700
   *    [Node Format]: Cluster ID: 5  Node ID: 0  Node Name: node01
key[3]:
        [Numeric Format]: 32,74,92,78,21,28,12,66
        [Character Format]: VF000700
   *    [Node Format]: Cluster ID: 5  Node ID: 0  Node Name: node01
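The key-count arithmetic generalizes: each node registers one key per path on every coordinator disk, so the expected count is nodes times paths per disk. A quick sanity check of that formula (plain shell arithmetic, not a VCS command):

```shell
#!/bin/sh
# Sketch: expected SCSI-3 registration keys per coordinator disk
# equals (cluster nodes) x (paths to the disk from each node).
nodes=2
paths_per_disk=2
expected_keys=$((nodes * paths_per_disk))
echo "expected keys per disk: $expected_keys"   # prints: expected keys per disk: 4
```

If vxfenadm reports fewer keys than this, a node or a path has not registered, and fencing protection is incomplete on that disk.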