Sun Fire 3800, 4800, 4810, 6800, E4900, and E6900 System Controller (SC) software contains multiple shells. Each has specific functionality for administering the platform. The shells include:
1. Domain Console: The domain’s console device connection (/dev/console). When connected, the user will have one of the following prompts:
a. OpenBoot Prom (OBP)
b. console login
2. Domain Shell: A shell on the system controller with the ability to administer a specific domain on the platform
3. Platform Shell: A shell on the system controller with the ability to administer the entire platform
Sometimes someone is connected to the console of an active domain to which console access is needed. To gain access to the console, we must disconnect the user who is already connected to it. Here, we attempt to gain access to the domain A console shell on an E6900 from the master system controller, but someone else is already connected.
System Controller 'e6900-sca11-a-sc0':

Type 0 for Platform Shell
Type 1 for domain A console
Type 2 for domain B console
Type 3 for domain C console
Type 4 for domain D console

Input: 1
Connection refused, console busy
Connection closed.
We then try to connect to the domain A console shell from the platform shell of the master SC but have the same result.
System Controller 'e6900-sca11-a-sc0':

Type 0 for Platform Shell
Type 1 for domain A console
Type 2 for domain B console
Type 3 for domain C console
Type 4 for domain D console

Input: 0

Platform Shell

e6900-sca11-a-sc0:SC> console a
Connection refused, console busy
Connection closed.
e6900-sca11-a-sc0:SC>
The first thing to do is identify what is currently connected to the domain A console shell. Here we see that the console shells for all four domains are in use by user1.example.com.
e6900-sca11-a-sc0:SC> connections
 ID   Hostname                        Idle Time   Connected On    Connected To
---   -----------------------------   ---------   ------------    ------------
  3   user1.example.com               -           Feb 10 21:31    Domain B
  4   user1.example.com               -           Feb 10 21:31    Domain C
  5   user1.example.com               -           Feb 10 21:31    Domain A
  8   user1.example.com               -           Feb 15 13:51    Domain D
 10   Localhost                       -           Feb 15 13:52    Platform
e6900-sca11-a-sc0:SC>
Then, from the platform shell of the master SC, we forcibly disconnect the domain A console shell from user1.example.com and again attempt to connect to the domain A console shell from the platform shell of the master SC. This is usually all that is needed.
e6900-sca11-a-sc0:SC> disconnect 5
e6900-sca11-a-sc0:SC> console a

Connected to Domain A

geeklab #
What are the configuration files used in SVM ?
1. The file /etc/lvm/md.tab is empty by default. It is only used when the metainit command is issued by the administrator, and it is configured manually.
2. It can be populated by appending the output of metastat -p, for example : # metastat -p >> /etc/lvm/md.tab
3. It can be used to recreate all the metadevices in one go, which makes it best suited for recovery of SVM configurations (see the sample entries below).
# metainit -a       (create all the metadevices mentioned in the md.tab file)
# metainit dxx      (create metadevice dxx only)
4. DO NOT use it for the root file system though.
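As an illustration (with hypothetical device names), md.tab entries captured from metastat -p for a two-way mirror might look like this :

d11 1 1 c0t0d0s0
d12 1 1 c0t1d0s0
d10 -m d11 d12 1

Running # metainit -a would then recreate d11, d12 and the mirror d10 in one go.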
SVM uses the configuration file /etc/lvm/mddb.cf to store the locations of the state database replicas. Do not edit this file manually.
The configuration file /etc/lvm/md.cf contains the automatically generated configuration information for the default (unspecified or local) disk set.
This file can also be used to recover the SVM configuration if your system loses the information maintained in the state database. Again, do not edit this file manually.
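A minimal sketch of such a recovery, assuming the state database replicas have already been recreated with metadb (review the copied entries before running metainit) :

# cp /etc/lvm/md.cf /etc/lvm/md.tab
# metainit -a -n     (dry run : only check the syntax of the md.tab entries)
# metainit -a        (recreate the metadevices)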
The configuration file /kernel/drv/md.conf contains fields like nmd (i.e. the number of volumes (metadevices) that the configuration supports). The file can be edited to change the default values for such parameters.
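For example, to raise the number of supported volumes from the default of 128 to 256 (Solaris 9 style md.conf shown; a reconfiguration reboot is required for the change to take effect), edit the md line in /kernel/drv/md.conf :

name="md" parent="pseudo" nmd=256 md_nsets=4;

# reboot -- -r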
The SVM RC script configures and starts SVM at boot and can be used to start/stop the daemons. At boot it checks the SVM configuration, starts resyncs of mirrors if necessary and starts the active monitoring daemon (mdmonitord).
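On Solaris 9 these are typically /etc/rcS.d/S35svm.init and /etc/rc2.d/S95svm.sync (links to the scripts under /etc/init.d); assuming that layout, the sync script can be run manually :

# /etc/init.d/svm.sync start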
If one of the root disks in an SVM mirror fails and you have to reboot the system, will the system reboot without any error ?
This is one of the most common questions asked about SVM. When state database replicas (metadbs) are lost, SVM determines which replicas are valid by using a majority consensus algorithm. The algorithm requires at least (half + 1) of the replicas to be available at boot time for any of them to be considered valid. So in our case, if we had 6 metadbs in total (3 on each disk) and one disk fails, we would need at least 4 metadbs to boot the system successfully, which we do not have. Hence we cannot boot the system.
To avoid this, we need to add one entry to the /etc/system file to bypass the majority consensus algorithm. This enables us to boot from a single disk, which may be required in many production scenarios, like patching the system. The entry is :
set md:mirrored_root_flag = 1
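A hedged sketch of the full sequence on a system that cannot boot after losing one root mirror (the failed slice c0t1d0s7 is illustrative) : boot into single user mode, add the flag and clear the stale replicas :

# echo "set md:mirrored_root_flag = 1" >> /etc/system
# metadb -d c0t1d0s7     (delete the replicas on the failed disk)
# metadb -i              (verify the state of the remaining replicas)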
How to create different RAID layouts in SVM ?
RAID 0 (stripe and concatenation)
1. Creating a concatenation from slice S2 of 3 disks :
# metainit d1 3 1 c0t1d0s2 1 c1t1d0s2 1 c2t1d0s2
d1 - the metadevice
3 - the number of components to concatenate together
1 - the number of devices for each component
2. Creating a stripe from slice S2 of 3 disks :
# metainit d2 1 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
d2 - the metadevice
1 - the number of components to concatenate
3 - the number of devices in each stripe
-i 16k - the stripe segment size
3. Creating three 2-disk stripes and concatenating them together :
# metainit d3 3 2 c0t1d0s2 c1t1d0s2 -i 16k 2 c3t1d0s2 c4t1d0s2 -i 16k 2 c6t1d0s2 c7t1d0s2 -i 16k
d3 - the metadevice
3 - the number of stripes
2 - the number of disks (slices) in each stripe
-i 16k - the stripe segment size
How to create a mirrored (RAID 1) layout in SVM ?
In SVM, mirroring is a 2-step procedure : first create the 2 submirrors (d11 and d12), then associate them with the mirror (d10).
# metainit -f d11 1 1 c0t3d0s7
# metainit -f d12 1 1 c0t4d0s7
# metainit d10 -m d11
# metattach d10 d12
Here d10 is the device to mount, and d11 and d12 hold the 2 copies of the data.
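metattach starts a background resync of the newly attached submirror; its progress can be watched in the metastat output for the mirror (d10 as above) :

# metastat d10 | grep -i resync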
How to create a RAID 5 layout in SVM ?
To set up a RAID 5 volume using 3 disks :
# metainit d1 -r c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
How to remove a metadevice ?
A metadevice can be removed if it is not open (i.e. not mounted):
# metaclear d3
To delete all the metadevices (use it carefully, as it blows away the entire SVM configuration):
# metaclear -a -f
How to view the SVM configuration and status of metadevices ?
To view the status of all the metadevices :

# metastat

To view the entire SVM configuration in md.tab format :

# metastat -p
To check the configuration and status of a particular device :
# metastat d3
Another command to view the SVM configuration, in compact one-line-per-metadevice form, is :
# metastat -c
How to extend a metadevice ?
To grow a metadevice, we need to attach a slice to the end and then grow the underlying file system:
# metattach d1 c3t1d0s2
If the metadevice is not mounted :
# growfs /dev/md/rdsk/d1
If the metadevice is mounted :
# growfs -M /export/home /dev/md/rdsk/d1
How to create metasets in SVM ?
The example below has 2 nodes (node01 and node02) with 2 shared disks assigned to both.
Create local state database replicas on both nodes :

# metadb -afc 3 c0d0s7

Create the disk set and add both hosts to it. On node01 :

# metaset -s [disk_set] -a -h node01 node02
Take ownership of the disk set
# metaset -s [disk_set] -t -f

-t --> for taking ownership
-f --> forcefully
Add disks to the disk set :
# metaset -s [disk_set] -a /dev/did/rdsk/c15t0d0 /dev/did/rdsk/c15t1d0
Create the volumes in the diskset :
# metainit -s [disk_set] d11 1 1 c15t0d0
[disk_set]/d11: Concat/Stripe is setup
Verify the configuration and status of the disk set :

# metastat -s [disk_set]
How to replace a failed root disk under SVM ?
In a typical production environment you'll find the OS root disk mirrored to avoid any single point of failure. It is important to know how to find the primary and alternate boot devices when troubleshooting. You can identify the boot device either at the OK prompt (in the case of a SPARC machine) or from the booted OS.
Identifying boot device at OK prompt
To identify the boot device at OK prompt :
ok> printenv boot-device
boot-device = rootdisk mirrordisk
Here the primary boot device is set to a device alias, "rootdisk". The alternate boot device is specified using the devalias mirrordisk. So the system first looks for the disk with the alias rootdisk, and if it is not bootable it falls back to the 2nd disk, i.e. the mirrordisk. Use the devalias command to see the physical device path behind each alias:
ok> devalias
....(output truncated for brevity)
rootdisk     <physical device path>    -- the physical path to the 'rootdisk' alias
mirrordisk   <physical device path>    -- the physical path to the 'mirrordisk' alias
....
Note : the devalias output gives the physical device path behind the rootdisk alias. In case the boot-device variable is set wrong, we can either boot from CDROM or in single user mode to identify the correct boot device from the OS command line.
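If boot-device is indeed set wrong, it can be corrected from the ok prompt, or from the booted OS with eeprom, using the alias names shown above :

ok> setenv boot-device rootdisk mirrordisk
# eeprom boot-device="rootdisk mirrordisk"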
Identifying boot device using OS command line
Using the eeprom command
We can also find out the boot device using the operating system commands. Use the eeprom command to find the boot device :
# eeprom | grep boot-device
boot-device=rootdisk mirrordisk
Here again, the primary boot-device is the devalias "rootdisk". To find out the physical path of the primary boot disk, dump the PROM device tree with prtconf -vp and look for the alias definition :
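A minimal sketch (the grep pattern is just the alias name from the eeprom output) :

# prtconf -vp | grep rootdisk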
To find the logical device name of the rootdisk, we can use the format command :
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <ST320414A cyl 39533 alt 2 hd 16 sec 63>
          <physical device path>    -- the logical device c0t0d0 is the same physical device as the 'rootdisk' alias from prtconf -vp
       1. c0t2d0 <DEFAULT cyl 39533 alt 2 hd 16 sec 63>
          <physical device path>
Similarly you can find the mirrordisk physical path.
x86 / x64
x86 machines use a BIOS instead of the OBP, so to duplicate the functionality of eeprom on SPARC machines, x86 systems use the file /boot/solaris/bootenv.rc at boot time. To find out the primary and alternate boot disks from eeprom :
# eeprom | grep boot
bootpath=<physical device path>:a       -- primary boot disk
altbootpath=<physical device path>:a    -- alternate boot disk
Also, when the system boots up, you can see the 2 different boot options to select from.

[Screenshot : boot menu showing both disks of a mirrored boot disk under Veritas Volume Manager]
Solaris Volume Manager (SVM) : Growing a concat metadevice
In the example shown below, the concat metadevice d10 is configured using the slice c1t3d0s0, of size 1 GB. The high level steps to grow this metadevice are :
1. Unmount the file system on the metadevice, if any.
2. Increase the size of the disk partition being used by the metadevice.
3. Recreate the metadevice.
4. Grow the file system.
# metastat d10
d10: Concat/Stripe
    Size: 2104515 blocks (1.0 GB)    === 1 GB in size
    Stripe 0:
        Device      Start Block   Dbase   Reloc
        c1t3d0s0              0   No      Yes
Increase size of disk partition
We now increase the size of partition 0 on disk c1t3d0 to around 1.5 GB (e.g. using the format utility). Check the prtvtoc command output for the increased space :
# prtvtoc /dev/rdsk/c1t3d0s0
* /dev/rdsk/c1t3d0s0 partition map
* ....(output truncated for brevity)
*
*                            First      Sector     Last
* Partition  Tag  Flags      Sector     Count      Sector    Mount Directory
       0      0    00        417690    3148740    3566430    === Size increased to 1.5 GB (3148740 sectors)
.....
Interestingly, the metastat command output would still show the size of metadevice d10 as before (1 GB).
Recreate the metadevice
Now, to reflect the change in size for metadevice d10, we have to recreate it. Make sure the file system using this metadevice is unmounted before recreating the metadevice.
# metaclear -r d10
d10: Concat/Stripe is cleared
# metainit d10 1 1 c1t3d0s0
d10: Concat/Stripe is setup
Verify the change in size :
# metastat -c d10
d10    s    1.5GB    c1t3d0s0    === 1.5 GB in size
Growing the UFS file system
The final step is to grow the UFS file system :
# growfs -M /data /dev/md/rdsk/d10
/dev/md/rdsk/d10:    3148740 sectors in 209 cylinders of 240 tracks, 63 sectors
        1537.5MB in 35 cyl groups (6 c/g, 44.30MB/g, 10688 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 90816, 181600, 272384, 363168, 453952, 544736, 635520, 726304, 817088,
 2269632, 2360416, 2451200, 2541984, 2632768, 2723552, 2814336, 2905120, 2995904, 3086688
Mount the file system and verify the new size of the file system :
# df -h /data
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10        1.5G    18M   1.4G     2%    /data