This post is intended for Linux system administrators seeking to initially configure, or further optimise, existing LVM-configured systems. It discusses:
- the need to tailor the LVM filter string to the particular storage type(s) in use
- sample LVM filter strings for a variety of common storage devices
LVM Configuration – filter parameter
The primary LVM configuration file is /etc/lvm/lvm.conf. The file consists of a number of sections, each containing various parameters and values. This article specifically focuses on the filter parameter within the devices section.
Following is a sample lvm.conf file:
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    obtain_device_list_from_udev = 1
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""
    write_cache_state = 1
    sysfs_scan = 1
    multipath_component_detection = 1
    md_component_detection = 1
    md_chunk_alignment = 1
    default_data_alignment = 0
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
    ignore_suspended_devices = 0
    disable_after_error_count = 0
    require_restorefile_with_uuid = 1
    pv_min_size = 2048
    issue_discards = 0
}
log {
    verbose = 0
    syslog = 1
    overwrite = 0
    level = 0
    indent = 1
    command_names = 0
    prefix = " "
}
backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10
    retain_days = 30
}
shell {
    history_size = 100
}
global {
    umask = 077
    test = 0
    units = "h"
    si_unit_consistency = 0
    activation = 1
    proc = "/proc"
    locking_type = 1
    wait_for_locks = 1
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 1
    locking_dir = "/var/lock/lvm"
    prioritise_write_locks = 1
    abort_on_internal_errors = 0
    detect_internal_vg_cache_corruption = 0
    metadata_read_only = 0
}
activation {
    checks = 0
    udev_sync = 1
    udev_rules = 1
    verify_udev_operations = 0
    missing_stripe_filler = "error"
    reserved_stack = 256
    reserved_memory = 8192
    process_priority = -18
    mirror_region_size = 512
    readahead = "auto"
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    use_mlockall = 0
    monitoring = 1
    polling_interval = 15
}
dmeventd {
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}
By default, upon system boot, LVM scans the devices defined by the filter parameter to discover LVM devices. Using the default filter string above (filter = [ "a/.*/" ]), LVM scans all available devices on the system. As PVs are discovered, VGs are assembled and LVs activated, then any filesystems they contain are subsequently mounted.
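The same discovery and activation sequence can be exercised manually after boot, for example (a sketch; the VG, LV and mount point names here are hypothetical):

# pvscan                          # discover physical volumes
# vgscan                          # discover/assemble volume groups
# vgchange -ay                    # activate the logical volumes in all VGs
# mount /dev/myvg/mylv /data      # mount a filesystem on an LV (names are hypothetical)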
For systems with a considerable number of storage devices (LUNs) attached, it may not be desirable or necessary for LVM to scan every available device. In that case, the LVM filter string may be modified (optimised) so that LVM scans only a user-specified set of devices.
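For example, on a system where all LVM physical volumes are known to reside on partitions of /dev/sda (an assumption made purely for illustration), a filter such as the following limits scanning to just those partitions:

filter = [ "a|^/dev/sda[1-9]$|", "r|^/dev/*|" ]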
LVM and Multipathing
Other than local storage, users commonly create LVM devices on SAN storage. Furthermore, access to SAN storage is often multipathed i.e. multiple paths to the same SAN LUN exist on the system. In the case of device-mapper-multipath, the native Oracle Linux multipath solution, the following devices might exist that all reference the same SAN LUN:
/dev/mapper/mpath1
/dev/dm-1
/dev/sda
/dev/sdb
Multipath implementations differ – in the case of EMC PowerPath, the following devices might exist that all reference the same SAN LUN:
/dev/emcpowera
/dev/sda
/dev/sdb
As stated earlier, the default lvm.conf filter string instructs LVM to scan all attached/available devices. Unfortunately, this may be problematic when using LVM in conjunction with multipathing. Depending on device (path) discovery order, LVM may ultimately use singlepath devices e.g. /dev/sda or /dev/sdb to construct VGs instead of the intended multipath device e.g. /dev/mapper/mpath1. If this occurs, the LVM device is not afforded the benefits of multipathing i.e. path loss redundancy, high availability, etc. The same issue applies to systems configured to boot from SAN.
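To check which singlepath devices underlie a given multipath map on a device-mapper-multipath system, the map can be inspected directly (a sketch; the map name mpath1 is taken from the example above and will differ per system):

# multipath -ll mpath1              # list the map and its underlying sd paths
# dmsetup deps /dev/mapper/mpath1   # show the devices the map depends on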
Messages such as the following are typically observed when LVM systems using multipath are not optimally configured to exclude singlepath devices:
# pvs
  Found duplicate PV Yvq85ssLqAXeBvZpVtAqBIbm44KU8cd5: using /dev/dm-1 not /dev/sda
  Found duplicate PV Yvq85ssLqAXeBvZpVtAqBIbm44KU8cd5: using /dev/mapper/mpath1 not /dev/dm-1
  Found duplicate PV Yvq85ssLqAXeBvZpVtAqBIbm44KU8cd5: using /dev/sdb not /dev/mapper/mpath1
  PV                VG         Fmt  Attr PSize  PFree
  /dev/sdb          VolGroup01 lvm2 a--   1.00G 1.00G
  /dev/cciss/c0d0p2 VolGroup00 lvm2 a--  48.81G    0
Above, LVM erroneously uses the singlepath device /dev/sdb instead of the multipath device /dev/mapper/mpath1. To ensure that LVM uses the intended storage devices/paths, customise the LVM filter string to explicitly accept the wanted devices and/or reject the unwanted ones. Due to the range and variety of local and SAN storage available, no single LVM filter configuration will suit every possible deployment; the LVM filter string must therefore be customised for each individual system/storage combination.
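For the example above, one possible remedy (assuming the local storage is the HP Smart Array /dev/cciss device shown in the pvs output and all SAN storage is accessed via device-mapper-multipath) would be a filter along these lines:

filter = [ "a|^/dev/cciss/*|", "a|^/dev/mapper/*|", "r|^/dev/*|" ]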
Sample LVM filter strings
This section offers a range of sample LVM filter string values; it is not exhaustive. Note that LVM accepts various combinations of regular expression syntax for filter string values. The following samples denote one such variation; other variations/combinations are also accepted. LVM will, however, readily complain in the presence of major syntax errors.
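For instance, the following two entries both accept every device; the character immediately following a (or r) merely delimits the regular expression, so either form should behave the same (worth verifying on your own system):

filter = [ "a/.*/" ]
filter = [ "a|.*|" ]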
Accept(a) Filters
Filter | Meaning |
---|---|
filter = [ "a/.*/" ] | All devices |
filter = [ "a|^/dev/sd*|" ] | All SCSI devices only |
filter = [ "a|^/dev/sda|" ] | SCSI device /dev/sda |
filter = [ "a|^/dev/sda[1-9]$|" ] | All partitions on SCSI device /dev/sda only |
filter = [ "a|^/dev/cciss/*|" ] | HP SmartArray controlled devices (cciss) only |
filter = [ "a|^/dev/loop*|" ] | All loop devices – /dev/loop* |
filter = [ "a|^/dev/loop1[0-2]$|" ] | Loop devices 10, 11, 12 only – /dev/loop1[0-2] |
filter = [ "a|^/dev/hda1$|" ] | Partition 1 on IDE device /dev/hda |
filter = [ "a|^/dev/mapper/*|" ] | Device mapper multipath devices |
filter = [ "a|^/dev/emcpower*|" ] | All EMC PowerPath devices |
filter = [ "a|^/dev/vpath[a-z]*|" ] | All IBM Subsystem Device Driver (SDD) devices |
filter = [ "a|^/dev/sddlm*|" ] | All Hitachi Dynamic Link Manager (HDLM) devices |
Reject(r) Filters
Filter | Meaning |
---|---|
filter = [ "r|^/dev/*|" ] | All devices |
filter = [ "r|^/dev/cdrom|" ] | CD/DVD device /dev/cdrom |
filter = [ "r|^/dev/hdc|" ] | IDE device /dev/hdc only |
LVM filter strings may specify a single value or combine multiple values as required. To avoid ambiguity or unintended device scanning/usage, accept (a) entries for all intended devices should be listed first, immediately followed by an explicit reject (r) entry to prevent any other devices from being scanned or used.
Working examples of LVM filter strings
A system with LVM devices on local SCSI storage and device-mapper-multipath SAN storage might define:
filter = [ "a|^/dev/sda[1-9]$|", "a|^/dev/mapper/*|", "r|^/dev/*|" ]
An HP system with LVM devices on local Smart Array storage and remote EMC PowerPath SAN storage might define:
filter = [ "a|^/dev/cciss/*|", "a|^/dev/emcpower*|", "r|^/dev/*|" ]
A system with LVM devices on local SCSI storage and IBM Subsystem Device Driver SAN storage might define:
filter = [ "a|^/dev/sda[1-9]$|", "a|^/dev/vpath[a-z]*|", "r|^/dev/*|" ]
Validation of candidate LVM filter strings
When devising and testing LVM filter strings, ensure that LVM discovers and uses all intended devices, and only those, and that other, unintended devices are not scanned/used. The validation process should include steps such as the following (a combined command sketch appears after this list):
- back up the original /etc/lvm/lvm.conf file
- optionally take an lvmdump to back up the entire LVM configuration
- customise the LVM filter string as required i.e. /etc/lvm/lvm.conf: filter = […]
- remove the LVM cache file e.g. # /bin/rm /etc/lvm/cache/.cache
- re-scan for LVM devices e.g. # /sbin/pvscan -vv
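A minimal end-to-end sketch of these steps might look like this (the backup file name is illustrative):

# cp -p /etc/lvm/lvm.conf /etc/lvm/lvm.conf.orig   # back up the original configuration
# lvmdump                                          # optional: archive the full LVM configuration
# vi /etc/lvm/lvm.conf                             # adjust the filter = [ ... ] entry
# /bin/rm /etc/lvm/cache/.cache                    # remove the LVM cache file
# /sbin/pvscan -vv                                 # re-scan and review which devices were walked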
Devices listed in the ‘Walking through all physical volumes’ section of the pvscan output denote which devices were scanned by LVM. The trailing section of the pvscan output lists any/all PV devices discovered (a further cross-check is sketched after the list below). Note that an incorrect or suboptimally configured LVM filter string may result in:
- use of unintended devices e.g. singlepath instead of multipath
- unnecessary LVM device scanning, resulting in prolonged system boot
- failure to discover intended LVM devices, resulting in device/filesystem unavailability
- failure to boot the system i.e. kernel panic etc.
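As an additional cross-check (a sketch; output varies per system, and this assumes the filter being tested is already active in /etc/lvm/lvm.conf), the following commands list the block devices LVM considers and the devices each PV currently resolves to:

# /sbin/lvmdiskscan               # block devices visible to LVM under the active filter
# /sbin/pvs -o pv_name,vg_name    # device each PV currently resolves to, per VG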
The following system console output denotes typical boot time messages when a system is unable to find the LVM device containing the root filesystem:
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
kernel /vmlinuz-2.6.18-348.el5 ro root=/dev/VolGroup00/root 3 crashkernel=128@16M elevator=deadline
   [Linux-bzImage, setup=0x1e00, size=0x1fd6fc]
initrd /initrd-2.6.18-348.el5.img
   [Linux-initrd @ 0x37a7c000, 0x57396d bytes]
Warning: pci_mmcfg_init marking 256MB space uncacheable.
Red Hat nash version 5.1.19.6 starting.
lpfc 0000:06:00.0 0:1303 Link Up Event x1 received Data : x1 xf7 x10 x9 x0 x0 0
lpfc 0000:06:00.1 1:1303 Link Up Event x1 received Data : x1 xf7 x10 x9 x0 x0 0
Unable to access resume device (/dev/VolGroup00/swap)
mount: could not find filesystem '/dev/root'
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: No such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
Implementing LVM configuration changes
A copy of the lvm.conf file is stored within the system initial ram disk (initrd) file that is used during system boot. Therefore, LVM configuration changes warrant a rebuild of the initrd for the changes to be effective at boot time. Once an appropriate LVM filter has been defined and validated, perform the following actions:
1. Remove the LVM cache file e.g.
# rm /etc/lvm/cache/.cache
2. Rebuild the initial ramdisk (initrd) as follows
Note that rebuilding the initrd file with an incorrectly configured LVM filter may result in complete system boot failure. Accordingly, the following alternative approaches are provided to help protect against such failure.
Option 1 (recommended)
This option involves defining a new GRUB kernel boot entry in order to test LVM changes without overwriting the current initrd.
# cd /boot
# mkinitrd -v -f /boot/initrd-`uname -r`.LVM.img `uname -r`
Creating initramfs
...
# ls -lart
...
-rw------- 1 root root 3805700 Nov  1 16:40 initrd-2.6.18-348.el5.LVM.img
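Before touching GRUB, it can be worth confirming that the rebuilt image actually carries the updated lvm.conf (a sketch assuming a gzip-compressed cpio initrd, as used on Oracle Linux 5; the path etc/lvm/lvm.conf should appear in the listing):

# zcat /boot/initrd-`uname -r`.LVM.img | cpio -it | grep lvm.conf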
Next, review the GRUB configuration file /boot/grub/grub.conf. GRUB kernel boot entries, beginning with title, are listed one after the other. The value of the default parameter defines the current default boot kernel. GRUB boot entry numbering starts from zero (0), therefore:
– default=0 refers to the first listed GRUB kernel boot entry.
– default=3 refers to the fourth listed GRUB kernel boot entry.
Copy all of the lines of the default kernel boot entry beneath itself. Modify the initrd line of the new kernel boot entry to reflect the name of the newly created initrd file. Modify the value of the default parameter to reflect the newly created GRUB kernel boot entry. If the original default parameter value was 0 and the new GRUB entry was created immediately beneath it, modify the default parameter value to 1, for example:
# cat /boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#         all kernel and initrd paths are relative to /boot/, eg.
#         root (hd0,0)
#         kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#         initrd /initrd-version.img
#boot=/dev/sda
#default=0
default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server (2.6.18-348.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-348.el5 ro root=/dev/VolGroup00/LogVol00 crashkernel=128M@32 numa=off
        initrd /initrd-2.6.18-348.el5.img
title Oracle Linux Server (2.6.18-348.el5) LVM
        root (hd0,0)
        kernel /vmlinuz-2.6.18-348.el5 ro root=/dev/VolGroup00/LogVol00 crashkernel=128M@32 numa=off
        initrd /initrd-2.6.18-348.el5.LVM.img
...
Upon reboot, the system will boot using the newly created GRUB boot entry, including the newly created initrd. In the event of any issues, reboot the system, interrupt the boot process to access the GRUB menu, and select the original boot entry to boot the system.
Option 2 (expert)
This option involves overwriting the existing default GRUB kernel boot entry and overwriting the current initrd.
# cd /boot
# mv initrd-`uname -r`.img initrd-`uname -r`.img.orig
# mkinitrd -v -f /boot/initrd-`uname -r`.img `uname -r`
Creating initramfs
...
Upon reboot, the system will boot using the existing GRUB boot entry but with the newly rebuilt initrd.
3. After reboot, verify that all LVM devices (PVs, VGs, LVs) exist and are using the intended physical and/or multipath devices, as sketched below. Repeat the above actions for any further LVM filter optimisation or whenever LVM/storage changes or reorganisation warrant reconfiguration.
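A quick verification sketch (device and VG names will differ per system; the multipath check only applies where device-mapper-multipath is in use):

# pvs -o pv_name,vg_name            # PVs should reference the intended (e.g. multipath) devices
# lvs -o lv_name,vg_name,devices    # LVs should map onto those PVs
# multipath -ll                     # confirm the multipath maps and their underlying paths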