How to create a ZFS volume?
The command to create a 1 GB ZFS volume is:
# zfs create -V 1g geekpool/volume01
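By default this reserves the full 1 GB in the pool. A sparse (thin-provisioned) volume, which skips that implicit reservation, can be created with the -s option:
# zfs create -s -V 1g geekpool/volume01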
How to do a dry run before actually creating a ZFS pool?
We can simulate a ZFS operation using the -n option, without actually writing to the disk devices. For example, a dry run of a zpool creation:
# zpool create -n geekpool mirror c1t0d0 c1t1d0
would create 'geekpool' with the following layout:

        geekpool
          mirror
            c1t0d0
            c1t1d0
How to destroy a ZFS pool?
To destroy a ZFS pool:
# zpool destroy geekpool
To destroy a damaged ZFS pool forcefully:
# zpool destroy -f geekpool
How to resize a ZFS volume?
We only need to set the volume size (either higher or lower than the original size) using the volsize property. Note that shrinking a volume below the size of the data it holds can destroy data:
# zfs set volsize=2g fort/geekvol
How to resize a ZFS mount-point?
To resize a ZFS mount-point we need to set the reservation property:
# zfs set reservation=10g tank/geek
Remember that the quota property does not resize the ZFS mount-point. It limits the space that can be consumed by the ZFS file system, but does not reserve it.
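For comparison, a quota only caps how much space the file system can consume:
# zfs set quota=10g tank/geek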
How to list all the ZFS pools?
The command to list all the ZFS pools on a system is:
# zpool list
How to add a ZFS volume as swap?
First create a ZFS volume (here, 1 GB in size) and then add it as swap:
# zfs create -V 1g rpool/swapvol
# swap -a /dev/zvol/dsk/rpool/swapvol
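The newly added swap device can be verified with:
# swap -l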
How would you create different RAID levels in ZFS? Give examples.
Below are a few examples of creating ZFS pools with different RAID levels.
Using Whole disks
# zpool create geekpool c1t1d0
Using disk slices
# zpool create geekpool c1t1d0s0
Using files
# mkfile 100m /file1
# zpool create geekpool /file1
Using Dynamic striping
# zpool create geekpool c1t1d0 c1t2d0
Mirrored ZFS pool
# zpool create geekpool mirror c1t1d0 c1t2d0
3 way mirror
# zpool create geekpool mirror c1t1d0 c1t2d0 c1t3d0
RAIDZ pool (single parity)
# zpool create geekpool raidz c1t1d0 c1t2d0 c1t3d0
RAIDZ2 pool (double parity)
# zpool create geekpool raidz2 c1t1d0 c1t2d0 c1t3d0
RAIDZ3 pool (triple parity)
# zpool create geekpool raidz3 c1t1d0 c1t2d0 c1t3d0 c1t4d0
What is the difference between quota and reservation?
– A quota limits the amount of space a dataset and all its children can consume.
– When you set a quota on a parent dataset, all the child datasets inherit it from the parent. But you can set a different quota on the children if you want.
– Setting a quota on a child dataset does not affect the quota of the parent dataset.
– Quotas cannot be set on ZFS volumes, as the volsize property acts as an implicit quota.
– A reservation sets the minimum amount of space that is guaranteed to a dataset and all its child datasets.
– Similar to a quota, when you set a reservation on a parent dataset, all the child datasets inherit it from the parent.
– Setting a reservation on a child dataset does not affect the reservation of the parent.
– Reservations cannot be set on ZFS volumes.
Example: Consider a ZFS pool (datapool) of size 10 GB. Setting a reservation of 5 GB on the ZFS file system fs1 reserves 5 GB for fs1 in the pool, and no other dataset can use that space. But fs1 can use more than 5 GB if there is space left in the pool.
# zfs set reservation=5g datapool/fs1
Similarly, when we set a quota of 5 GB on fs1, it cannot use more than 5 GB of space from the pool. But that space is not reserved for fs1: other datasets are free to consume the remaining pool space, so fs1 may end up with less than its 5 GB quota actually available.
# zfs set quota=5g datapool/fs1
Setting both properties on the dataset limits fs1 to 5 GB from the pool, and no other dataset can use the 5 GB reserved for fs1.
How to import and export a ZFS pool?
Exporting a pool writes all unwritten data to the pool and removes all information about the pool from the source system.
# zpool export geekpool
In case some file systems are still mounted, you can force the export:
# zpool export -f geekpool
To check the pools that can be imported:
# zpool import
To import an exported pool (geekpool):
# zpool import geekpool
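If the pool name clashes with an existing pool on the destination system, it can be imported under a new name (newpool is just an example here):
# zpool import geekpool newpool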
What is a ZFS snapshot and how would you create one?
A ZFS snapshot is a read-only copy of a ZFS file system or volume. Snapshots initially consume no extra space in the pool and can be created almost instantly. They can be used to save the state of a file system at a particular point in time, and the file system can later be rolled back to exactly that state. You can also extract individual files from a snapshot instead of doing a complete rollback.
The command to create a snapshot of the file system "geekpool/fs1":
# zfs snapshot geekpool/fs1@oct2013
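Existing snapshots can be listed with:
# zfs list -t snapshot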
How to roll back a ZFS snapshot?
We can completely roll back to an older snapshot, which gives us the point-in-time copy from the time the snapshot was taken:
# zfs rollback geekpool/fs1@oct2013
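Note that zfs rollback only rolls back to the most recent snapshot by default. To roll back to an earlier snapshot, the -r option destroys any snapshots taken after it:
# zfs rollback -r geekpool/fs1@oct2013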
How to take a recursive snapshot of all file systems?
By default, when you take a snapshot of a file system or dataset, only that parent dataset is snapshotted, not its child datasets. To take a recursive snapshot of the parent as well as all child datasets:
# zfs snapshot -r geekpool/fs1@oct2013 (takes a snapshot of all file systems under fs1)
How to move ZFS snapshots to another system?
ZFS has an option to back up or move snapshots to another system. The zfs send and zfs receive commands can be used to send a snapshot to another system.
To take a backup of a snapshot on the same system:
# zfs send geekpool/fs1@oct2013 > /geekpool/fs1/oct2013.bak
# zfs receive anotherpool/fs1 < /geekpool/fs1/oct2013.bak
We can also combine both commands into one:
# zfs send geekpool/fs1@oct2013 | zfs receive anotherpool/fs1
To move the snapshot to a remote system (node02):
node02 # zfs create testpool/testfs      (create a test file system on the remote system)
node01 # zfs send geekpool/fs1@oct2013 | ssh node02 "zfs receive testpool/testfs"
To send only the incremental data between an older snapshot and a newer one (a later snapshot fs1@nov2013 is assumed here):
node01 # zfs send -i geekpool/fs1@oct2013 geekpool/fs1@nov2013 | ssh node02 zfs recv testpool/testfs
What is a ZFS clone and how would you create one?
ZFS clones, in contrast to ZFS snapshots, are writable copies of a file system, with initial contents identical to the file system the snapshot was taken from. Clones can only be created from snapshots, and a snapshot cannot be deleted until the clones created from it are deleted.
Command to create a clone from a snapshot:
# zfs clone geekpool/fs1@oct2013 geekpool/fs1/clone01
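If the clone needs to outlive the snapshot it was created from, the clone can be promoted, which reverses the dependency between the clone and its origin snapshot:
# zfs promote geekpool/fs1/clone01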
How to change the mount-point of a ZFS file system online?
The command to change the mount-point of a ZFS file system is:
# zfs set mountpoint=/mountpoint_name dataset_name
What happens if the mountpoint property of a ZFS dataset is set to legacy?
By default, the mountpoint property of a ZFS dataset is set either to the mount-point name you specified while creating it or to the name inherited from the parent dataset. This lets the ZFS file system mount automatically at boot, without requiring an entry in /etc/vfstab.
When you set the mountpoint property to legacy, the ZFS dataset will not mount automatically at boot, nor can it be mounted later with the zfs mount command. To mount/unmount the file system, we have to use the legacy mount/umount commands. For example:
# zfs set mountpoint=legacy tank/home/geek
# mount -F zfs tank/home/geek /geek
Also, we need to add an entry to /etc/vfstab for the file system to mount automatically at boot:
#device          device    mount   FS     fsck   mount     mount
#to mount        to fsck   point   type   pass   at boot   options
#
tank/home/eric   -         /mnt    zfs    -      yes       -
What is the command to list only the currently mounted ZFS file systems?
The command to see all the currently mounted ZFS file systems is:
# zfs mount
What's the equivalent command of "mount", "mount -a" and "umount" for ZFS file systems?
The legacy mount and umount commands do not work on ZFS file systems unless the mountpoint property is set to legacy. The equivalent ZFS commands are:
# zfs mount tank/home/geek     [ give only the dataset name here ]
# zfs mount -a                 [ mounts all the ZFS file systems ]
# zfs unmount users/home/geek  [ note: it is unmount, not umount ]
How would you share a ZFS dataset over NFS in Solaris 10?
Traditional UFS way
ZFS allows you to use the traditional way of sharing a file system, as shown below:
# share -F nfs /tank/home/geek
# cat /etc/dfs/dfstab
share -F nfs -d "Geek Home" /tank/home/geek
The ZFS way
ZFS makes it very easy to share a file system as NFS. There is no need to edit the /etc/dfs/dfstab.
# zfs set sharenfs=on tank/home/geek
By default, when you set the sharenfs property to on, the file system is shared read-write for all users.
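The sharenfs property also accepts NFS share options; for example, to share the file system read-only:
# zfs set sharenfs=ro tank/home/geek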
How to share all the ZFS file systems on the system?
To share all the ZFS file systems on the system in one go:
# zfs share -a
How to un-share a ZFS file system?
The command to un-share a specific ZFS file system is:
# zfs unshare tank/home/geek
To un-share all the shared ZFS file systems:
# zfs unshare -a
How to check the health of ZFS pools?
To check the health of the ZFS pools on the system:
# zpool status -x
all pools are healthy
To check the errors specific to a pool :
# zpool status -v geekpool
How to check the integrity of a ZFS pool?
The command to check the integrity of a ZFS pool is:
# zpool scrub geekpool
To check the status of the scrubbing:
# zpool status -v geekpool
  pool: geekpool
 state: ONLINE
 scrub: scrub completed after 0h4m with 0 errors on Wed Dec  2 11:39:00 2013
config:

        NAME        STATE     READ WRITE CKSUM
        geekpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
To stop the scrubbing:
# zpool scrub -s geekpool