Question : How to set a specific netmask for a non-global zone ?
To define a specific netmask for a non-global zone, use the zonecfg command. The network address is one of:
- a valid IPv4 address, optionally followed by “/” and a prefix length.
- a valid IPv6 address, which must be followed by “/” and a prefix length.
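The prefix length maps directly to a dotted-decimal netmask (e.g. /24 is 255.255.255.0). A minimal sketch of that conversion in shell; the `prefix_to_netmask` helper is hypothetical, not a Solaris command:

```shell
#!/bin/sh
# Hypothetical helper: convert an IPv4 prefix length (as used in
# "set address=192.168.0.10/24") to a dotted-decimal netmask.
prefix_to_netmask() {
    p=$1                         # prefix length, 0..32
    mask=""
    for i in 1 2 3 4; do
        if [ "$p" -ge 8 ]; then
            octet=255            # fully covered octet
        elif [ "$p" -le 0 ]; then
            octet=0              # no remaining mask bits
        else
            octet=$(( 256 - (1 << (8 - p)) ))  # partial octet
        fi
        p=$(( p - 8 ))
        mask="${mask}${mask:+.}${octet}"
    done
    echo "$mask"
}

prefix_to_netmask 24   # → 255.255.255.0
```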
Steps to follow
1. On the global zone, enter the zone configuration mode :
global_zone # zonecfg -z zone01
zonecfg:zone01> add net
zonecfg:zone01:net> set address=192.168.0.10/24
zonecfg:zone01:net> set physical=eri0
zonecfg:zone01:net> end
zonecfg:zone01> exit
2. Verify the zone configuration :
global_zone # zonecfg -z zone01 info
3. To check and verify the settings you can use ifconfig within the non-global zone :
zone01 # ifconfig -a
Solaris 11.2 provides a supported way to add new resources to a running zone online. Prior to this version, a reboot of the non-global zone was required to make such changes effective. This post describes an example of adding a raw disk device online to a non-global zone.
1. Check the Solaris OS version.
# uname -a
SunOS test 5.11 11.2 sun4v sparc SUNW,T5240
2. List the zones available on the system. In our example we will be adding a disk resource to the zone zone01.
# zoneadm list -cv
  ID NAME     STATUS   PATH           BRAND    IP
   0 global   running  /              solaris  shared
   1 zone01   running  /zones/zone01  solaris  shared
3. Verify the availability of the disk being assigned to the zone.
# ls -l /dev/dsk/c1d2s0
lrwxrwxrwx 1 root root 62 Jan 4 17:14 /dev/dsk/c1d2s0 -> ../../devices/...
4. Start modifying the zone configuration. Assign the disk to the zone – zone01.
# zonecfg -z zone01
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/dsk/c1d2s0
zonecfg:zone01:device> end
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/rdsk/c1d2s0
zonecfg:zone01:device> end
zonecfg:zone01> commit
zonecfg:zone01> exit
5. If you check the /dev/rdsk path in the zone now, you will not find any disk available. This is because we have not yet applied the changes to the non-global zone.
# zlogin zone01 ls -l /dev/rdsk
total 0
6. Apply the changes to the zone.
# zoneadm -z zone01 apply
zone 'zone01': Checking: Removing net physical=net0
zone 'zone01': Checking: Adding net physical=net0
zone 'zone01': Checking: Adding device match=/dev/rdsk/c1d2s0
zone 'zone01': Applying the changes
7. Verify the new disk under /dev/rdsk directory :
# zlogin zone01 ls -l /dev/rdsk
total 0
crw------- 1 root sys 264, 16 Jan 4 18:29 c1d2s0
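A script driving these steps would typically confirm both the block and raw device entries ended up in the zone configuration. A minimal sketch: the `has_device_match` helper is hypothetical, and the sample `zonecfg info device` output is inlined for illustration (on a live system you would capture it with `zonecfg -z zone01 info device`):

```shell
#!/bin/sh
# Hypothetical check: given `zonecfg -z <zone> info device` output,
# confirm a given device path was added as a match.
has_device_match() {
    # $1 = zonecfg info output, $2 = device path to look for
    printf '%s\n' "$1" | grep -q "match: $2"
}

# Sample output, assumed format for illustration
info='device:
	match: /dev/dsk/c1d2s0
device:
	match: /dev/rdsk/c1d2s0'

has_device_match "$info" /dev/dsk/c1d2s0  && echo "block device configured"
has_device_match "$info" /dev/rdsk/c1d2s0 && echo "raw device configured"
```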
A system with non-global zones shares one kernel across all zones (the global zone as well as all configured non-global zones). As a result, there is only one date/time for the entire setup, and this time is usually controlled by the global zone only. By default, the privilege to change the date and time is not available inside a non-global zone, and therefore the NTP service will fail to adjust the time.
The default configuration for non-global zones assumes that the time synchronization is done in the global zone and that there is no need to adjust the system time from inside a non-global zone. If the administrator of a non-global zone is able to change the system time then these changes will affect all running zones (including the global zone) and this may be considered a security risk.
The time synchronization can be delegated to a non-global zone if required. Please keep in mind that multiple time adjustments from different sources will likely cause problems and that only one zone should run the NTP service. If you want to delegate the NTP synchronization to a non-global zone then it is recommended to disable the NTP service in the global zone.
As mentioned above, the ability to adjust the time is controlled by a Solaris privilege. The privilege name for this is called sys_time and the information for this privilege can be viewed by using the ppriv command:
# ppriv -lv sys_time
sys_time
        Allows a process to manipulate system time using any of the
        appropriate system calls: stime, adjtime, ntp_adjtime and the
        IA specific RTC calls.
If you are unsure whether the sys_time privilege is currently available to you, run the following command (as root) to check:
# ppriv -v $$ | grep sys_time
By default the command will only show output in the global zone but not in any non-global zone. By default the sys_time privilege is not assigned to a non-global zone. Starting with Solaris 10 Update 3 (11/06) the available privileges of a non-global zone can be changed by using the limitpriv option of the zonecfg command. In the default configuration the limitpriv setting would be empty:
global-zone# zonecfg -z zonename info limitpriv
limitpriv:
If you want to add the sys_time privilege to a zone then you can use the zonecfg command to modify the property and reboot the zone to activate the change:
global-zone# zonecfg -z zonename set limitpriv="default,sys_time"
global-zone# zoneadm -z zonename reboot
Once the sys_time privilege is available in the non-global zone you can continue to set up NTP as usual, i.e. configure the /etc/ntp.conf file and enable the NTP service.
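A startup script in the zone could branch on whether sys_time is present before trying to enable NTP. A minimal sketch: the `has_sys_time` helper is hypothetical, and the privilege-set line fed to it is an assumed sample of `ppriv $$` output, inlined so the example is self-contained:

```shell
#!/bin/sh
# Hypothetical helper: scan ppriv output for the sys_time privilege.
has_sys_time() {
    printf '%s\n' "$1" | grep -q 'sys_time'
}

# Sample effective-set line (assumed format); on a live system capture
# this with:  priv_output=$(ppriv -v $$)
priv_output='E: basic,sys_time'

if has_sys_time "$priv_output"; then
    echo "sys_time present - NTP can adjust the clock"
else
    echo "sys_time missing - add it via the zonecfg limitpriv property"
fi
```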
Question : I am unable to share nfs from a non-global zone.
Consider the following example.
– A sysadmin creates a non-global zone and wants to share a ZFS mount point from the non-global zone.
– While setting the ZFS property “sharenfs”, he encounters an error :
cannot set property for 'findisk/faktudt_users': 'sharenfs' cannot be set on dataset in a non-global zone
– Another case is when a ufs file system is shared from the non-global zone using the /etc/dfs/dfstab file or on the command line. The shareall command fails :
/usr/sbin/shareall
share_nfs: /var/opt/sun: Operation not supported
share_nfs: /var/js: Operation not supported
share_nfs: /opt/SUNWjet: Operation not supported
– The sysadmin is not able to bring the NFS server service up, and the /var/svc/log/network-nfs-server log in the non-global zone shows errors :
[ Jan 2 16:12:41 Executing start method ("/lib/svc/method/nfs-server start") ]
The NFS server is not supported in a local zone
[ Jan 2 16:12:43 Method "start" exited with status 0 ]
[ Jan 2 16:12:43 Stopping because all processes in service exited. ]
[ Jan 2 16:12:43 Executing stop method ("/lib/svc/method/nfs-server stop 2274998") ]
[ Jan 2 16:12:43 Method "stop" exited with status 0 ]
This is expected behavior.
Although this is a limitation of Solaris 10 non-global zones, there are three alternatives you can use to overcome it.
alternative 1 : share resources from the global zone via LOFS:
On the global zone :
1. Be superuser, or have the required rights profile.
2. Use the zonecfg command to edit non-global zone configuration
global# zonecfg -z my-zone
3. Add a file system to the configuration.
zonecfg:my-zone> add fs
4. Set the mount point for the file system to /datafiles in my-zone.
zonecfg:my-zone:fs> set dir=/datafiles
5. Specify that /export/datafiles in the global zone is to be mounted as /datafiles in my-zone.
zonecfg:my-zone:fs> set special=/export/datafiles
6. Set the file system type.
zonecfg:my-zone:fs> set type=lofs
7. End the specification.
zonecfg:my-zone:fs> end
8. Verify and commit the configuration.
zonecfg:my-zone> verify
zonecfg:my-zone> commit
On the client :
# mount -F lofs (global_zone_mount_point) (local_zone_mount_point)
where (global_zone_mount_point) ----> /export/datafiles and (local_zone_mount_point) ----> /datafiles
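The interactive zonecfg session above can also be driven non-interactively, since zonecfg accepts a command file via -f. A minimal sketch: the `lofs_fs_cmds` generator is hypothetical, while the paths mirror the example above:

```shell
#!/bin/sh
# Hypothetical generator: emit the zonecfg subcommands that add an
# lofs file system, suitable for feeding to `zonecfg -z <zone> -f <file>`.
lofs_fs_cmds() {
    # $1 = mount point inside the zone, $2 = path in the global zone
    printf 'add fs\nset dir=%s\nset special=%s\nset type=lofs\nend\ncommit\n' "$1" "$2"
}

lofs_fs_cmds /datafiles /export/datafiles
# On a real system:
#   lofs_fs_cmds /datafiles /export/datafiles > /tmp/fs.cfg
#   zonecfg -z my-zone -f /tmp/fs.cfg
```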
alternative 2 : use third-party software, but there would be no technical support from Oracle.
alternative 3 : use Solaris 11 native zones, which can be NFS servers.
Question : Can I configure additional IP addresses on the same interface in non-global zone? If yes, how?
Yes, you can add additional IP addresses to the existing interface configured in the non-global zone; the procedure is the same as for the first address. To add a secondary address to a configured shared-IP zone, use the zonecfg command. Below is an example of adding a second address in a zone named zone01 that will persist across reboots. In this example, the interface in the global zone is bge0; the additional logical interface for the zone will be bge0:2.
1. Login into the global zone as root and proceed as follows:
global_zone # zonecfg -z zone01
zonecfg:zone01> info        <---- info shows how the zone is configured now
zonename: zone01
zonepath: /zones/zone01
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 10.10.2.3/24
        physical: bge0
        defrouter not specified
Next we add the second interface:
zonecfg:zone01> add net
zonecfg:zone01:net> set physical=bge0
zonecfg:zone01:net> set address=10.10.2.4
zonecfg:zone01:net> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> info        <------- info again will show you the new interface you just added
zonename: zone01
zonepath: /zones/zone01
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 10.10.2.3/24
        physical: bge0
        defrouter not specified
net:
        address: 10.10.2.4
        physical: bge0
        defrouter not specified
zonecfg:zone01> exit
Then we reboot the zone zone01 :
global_zone # zoneadm -z zone01 reboot
Now login into the zone after the reboot and run ifconfig to see the new interface:
# ifconfig -a
lo0:1: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0:1: flags=1000843 mtu 1500 index 2
        inet 10.10.2.3 netmask ffffff00 broadcast 10.10.2.255
bge0:2: flags=1000843 mtu 1500 index 2
        inet 10.10.2.4 netmask ffffff00 broadcast 10.10.2.255
In the global zone we see the new interface added to the bge interface as bge0:2
global_zone # ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
        zone zone01
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843 mtu 1500 index 2
        inet 10.152.24.44 netmask ffffff00 broadcast 10.152.24.255
        ether 0:3:ba:e4:7f:b0
bge0:1: flags=1000843 mtu 1500 index 2
        zone zone01
        inet 10.10.2.3 netmask ffffff00 broadcast 10.10.2.255
bge0:2: flags=1000843 mtu 1500 index 2
        zone zone01
        inet 10.10.2.4 netmask ffffff00 broadcast 10.10.2.255
This is all you need to do to add the address; nothing further is required in the global zone, and the configuration will survive any reboot.
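Before adding a secondary address, a script would typically check that the address is not already configured somewhere on the interface. A minimal sketch: the `addr_in_use` helper is hypothetical, and the `ifconfig -a` fragment is inlined from the example above so the code is self-contained:

```shell
#!/bin/sh
# Hypothetical sanity check: scan ifconfig output for an address
# to avoid configuring a duplicate.
addr_in_use() {
    # $1 = ifconfig -a output, $2 = IPv4 address
    # trailing space keeps 10.10.2.3 from matching 10.10.2.30
    printf '%s\n' "$1" | grep -q "inet $2 "
}

# Sample output fragment; on a live system:  ifout=$(ifconfig -a)
ifout='bge0:1: flags=1000843 mtu 1500 index 2
	inet 10.10.2.3 netmask ffffff00 broadcast 10.10.2.255'

addr_in_use "$ifout" 10.10.2.3 && echo "10.10.2.3 already configured"
addr_in_use "$ifout" 10.10.2.4 || echo "10.10.2.4 is free to add"
```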
It is important that the zone is in the appropriate installed state before one can boot it up. The configured zone has to be installed with the operating environment. That is, the zone configuration has to be instantiated and packages have to be installed under the zone’s root path. The zone status should be marked as installed.
To check the current status of the zone, run the zoneadm command with the subcommand list together with the -cv switches. The following zoneadm output confirms that the zone002 has been installed.
# zoneadm list -cv
  ID NAME     STATUS      PATH            BRAND   IP
   0 global   running     /               native  shared
   - zone002  installed   /zones/zone002  native  shared
If the zone is not yet installed, the status will be indicated as configured.
# zoneadm list -cv
  ID NAME     STATUS      PATH            BRAND   IP
   0 global   running     /               native  shared
   - zone002  configured  /zones/zone002  native  shared
In this case, the zone is either detached or will have to be installed by running the zoneadm -z [zone_name] install command.
To check whether the zone is in the detached state, check the content of the zonepath. If you find the directories dev and root and the file SUNWdetached.xml, then you can try to attach the zone to the system :
# ls -l /zones/zone002
total 12
drwxr-xr-x  12 root sys       50 Aug 18 14:40 dev
drwxr-xr-x   2 root root       2 Feb  4  2014 lu
drwxr-xr-x  19 root root      21 Aug 14 15:30 root
-rw-r--r--   1 root root 4511578 Aug 18 14:41 SUNWdetached.xml
# zoneadm -z zone002 attach -U
Getting the list of files to remove
Removing 4 files
Remove 9 of 9 packages
Installing 6103 files
Add 10 of 10 packages
Updating editable files
The file /var/sadm/system/logs/update_log within the zone contains a log of the zone update. If the zonepath is empty, then the zone needs to be installed using zoneadm -z [zone_name] install
# zoneadm -z zone002 install
Preparing to install zone .
Creating list of files to copy from the global zone.
Copying files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize packages on the zone.
Initialized packages on zone.
Zone is initialized.
The file contains a log of the zone installation.
Once the zone is installed, the zoneadm list -cv output will indicate the zone as installed.
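A script that needs to branch on the zone state can extract the STATUS column from that listing. A minimal sketch: the `zone_status` helper is hypothetical, and a sample `zoneadm list -cv` listing is inlined so the example runs standalone:

```shell
#!/bin/sh
# Hypothetical helper: extract a zone's STATUS column from
# `zoneadm list -cv` output (configured / installed / running).
zone_status() {
    # $1 = zoneadm list -cv output, $2 = zone name
    printf '%s\n' "$1" | awk -v z="$2" '$2 == z { print $3 }'
}

# Sample listing; on a live system:  listing=$(zoneadm list -cv)
listing='  ID NAME     STATUS      PATH            BRAND   IP
   0 global   running     /               native  shared
   - zone002  installed   /zones/zone002  native  shared'

zone_status "$listing" zone002   # → installed
```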
There are times when it would be useful for a zone to access its host system's CDROM device. This can be done using the loopback file system (lofs).
Procedure for Solaris 10
# zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/cdrom
zonecfg:zone01:fs> set special=/cdrom
zonecfg:zone01:fs> set type=lofs
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> exit
Solaris 10 uses the vold SMF service to mount a CDROM/DVD. The above set of commands adds a loopback file system to mount the global /cdrom file system on the /cdrom directory of the non-global zone. The commit command writes the zone configuration from memory to disk. Next, reboot the non-global zone to make the configuration changes effective.
# zoneadm -z zone01 reboot
Place the media into the system’s CDROM drive. After it is mounted in the global zone, it will be available to the non-global zone, zone01 in this example. Once you are finished using the CDROM, type eject from the global zone to retrieve the CD media.
Procedure for Solaris 11
The value of the attribute "special" in the zone's configuration should be '/media' for Solaris 11; '/media' is the standard path in the global zone where CDROM media gets mounted by the rmmount command. Ensure the package media-volume-manager is installed :
global_zone # pkg list media-volume-manager
NAME (PUBLISHER)                         VERSION                  IFO
system/storage/media-volume-manager      0.5.11-0.175.3.3.0.3.0   i--
Ensure the SMF service filesystem/rmvolmgr is 'online'. Solaris 11 uses the rmvolmgr SMF service to mount a CDROM/DVD :
global_zone # svcs rmvolmgr
STATE          STIME    FMRI
online         12:14:20 svc:/system/filesystem/rmvolmgr:default
Check the available devices, aliases, and the default CDROM device in the global zone. In this case the currently mounted Solaris 10 DVD has the title 'SOL_10_SPARC'. Look for the DVD device in the global zone :
global_zone # ls -l /dev/removable-media/dsk/*s2
lrwxrwxrwx 1 root root 18 Aug 27 2012 /dev/removable-media/dsk/c3t0d0s2 -> ../../dsk/c3t0d0s2
Mount the DVD in global zone :
global_zone # rmmount /dev/dsk/c3t0d0s2
/dev/dsk/c3t0d0s2 mounted at /media/SOL_10_SPARC
List aliases :
global_zone # rmmount -l
/dev/dsk/c3t0d0s2    cdrom,cdrom0,cd,cd0,sr,sr0,SOL_10_SPARC,/media/SOL_10_SPARC
List default device :
global_zone # rmmount -d
Default device is: cdrom
Add the device to a non-global-zone using zonecfg :
global_zone # zonecfg -z zone01
zonecfg:zone01> add fs
zonecfg:zone01:fs> set dir=/cdrom
zonecfg:zone01:fs> set special=/media
zonecfg:zone01:fs> set type=lofs
zonecfg:zone01:fs> add options [ro,nodevices]
zonecfg:zone01:fs> end
zonecfg:zone01> commit
zonecfg:zone01> exit
In case the system is running Solaris 11.2 or higher, the modifications made by zonecfg can be applied online (without rebooting the non-global zone) by running :
# zoneadm -z zone01 apply
For Solaris 11 versions older than 11.2 you must reboot the non-global zone to activate the zonecfg changes. Confirm that the content of the DVD is visible in non-global zone :
global_zone # zlogin zone01 ls -l /cdrom/
total 4
drwxr-xr-x 7 root other 2048 Dec 4 2002 SOL_10_SPARC
global_zone # zlogin zone01 ls -l /cdrom/SOL_10_SPARC
Resource pools in Solaris 10/11 provide a mechanism to assign a processor set and scheduling class to a non-global zone. Dynamic resource pools are extremely useful when you have a variable load on zones and want to change the resource allocation to those zones dynamically. Unlike capped CPU allocation, dynamic resource pool allocation to zones can be changed online at any time.
For this example, we will first configure a zone for dynamic resource pool usage and then demonstrate how to change the number of CPUs of a zone from 1 to 2, then back to 1, then to 3.
Configuring zone for dynamic resource pool usage
Follow these steps to create a processor set, a resource pool, and bind it to a zone.
1. Start the /system/pools/dynamic service :
# svcadm enable /system/pools/dynamic
2. Create a file /var/tmp/pool.cfg (could be any file name) with the following contents:
# cat /var/tmp/pool.cfg
create pset pset_1 (uint pset.min = 1; uint pset.max = 1)
create pool pool_1
associate pool pool_1 (pset pset_1)
3. Create a pool from the above configuration:
# poolcfg -f /var/tmp/pool.cfg
4. Change the zone configuration to bind the pool to the zone (this is to make the pool assignment permanent across server and zone reboots):
# zonecfg -z zone01
zonecfg:zone01> set pool=pool_1
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
5. Bind pool pool_1 to zone zone01 (this is to make the pool assignment effective immediately):
# poolbind -p pool_1 -i zoneid zone01
6. Verify (list) the whole pool configuration:
# pooladm -n ...
7. Write the pool configuration to pool config file /etc/pooladm.conf:
# pooladm -s
8. Activate the pool configuration:
# pooladm -c
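The configuration file from step 2 can also be generated programmatically, so the same pool layout can be rebuilt for other zones or CPU counts. A minimal sketch, assuming the pset_1/pool_1 naming used above; the `gen_pool_cfg` helper is hypothetical:

```shell
#!/bin/sh
# Hypothetical generator for the pool.cfg shown in step 2,
# parameterized by pset name, pool name, and CPU count.
gen_pool_cfg() {
    # $1 = pset name, $2 = pool name, $3 = number of CPUs
    printf 'create pset %s (uint pset.min = %s; uint pset.max = %s)\n' "$1" "$3" "$3"
    printf 'create pool %s\n' "$2"
    printf 'associate pool %s (pset %s)\n' "$2" "$1"
}

gen_pool_cfg pset_1 pool_1 1 > /tmp/pool.cfg
cat /tmp/pool.cfg
# On a real system:  poolcfg -f /tmp/pool.cfg
```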
Changing the CPU count
The steps below demonstrate how to change the number of CPUs of zone zone01 to 2, then to 1, then to 3.
1. Increase the number of CPUs assigned to pset_1 (= pool_1 = zone01) from 1 to 2:
# poolcfg -dc 'modify pset pset_1 (uint pset.min = 2; uint pset.max = 2)'
2. Write this new pool configuration to pool config file /etc/pooladm.conf:
# pooladm -s
3. Display the new pool configuration:
# poolcfg -c 'info pool pool_1' /etc/pooladm.conf
...
        uint    pset.min 2
        uint    pset.max 2
...
4. Activate the pool configuration:
# pooladm -c
5. In the zone, verify that only 2 CPUs are assigned to the zone, using mpstat 2 2 :
# zlogin zone01 mpstat 2 2
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0  207   203    1    6    0    0    0    0     3    0   0   0 100
  1    0   0    3    10    0    6    0    0    0    0     1    0   1   0  99
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0  203   203    1    1    0    0    0    0     8    0   0   0 100
  1    5   0    0     2    0    1    0    0    0    0     5    0   0   0 100
6. Decrease the number of CPUs assigned to pset_1 (= pool_1 = zone01) from 2 to 1:
# poolcfg -dc 'modify pset pset_1 (uint pset.min = 1; uint pset.max = 1)'
7. Write this new pool configuration to pool config file /etc/pooladm.conf:
# pooladm -s
8. Display the new pool configuration:
# poolcfg -c 'info pool pool_1' /etc/pooladm.conf
...
        uint    pset.min 1
        uint    pset.max 1
...
9. Activate the pool configuration:
# pooladm -c
10. In the zone, verify that only 1 CPU is assigned to the zone, using mpstat 2 2 :
# zlogin zone01 mpstat 2 2
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  1    0   0    3    10    0    6    0    0    0    0     1    0   1   0  99
  1    4   0    0     3    0    3    0    0    0    0    13    0   0   0 100
11. Increase the number of CPUs assigned to pset_1 (= pool_1 = zone01) from 1 to 3:
# poolcfg -dc 'modify pset pset_1 (uint pset.min = 3; uint pset.max = 3)'
12. Write that new pool configuration to pool config file /etc/pooladm.conf:
# pooladm -s
13. Display the new pool configuration:
# poolcfg -c 'info pool pool_1' /etc/pooladm.conf
...
        uint    pset.min 3
        uint    pset.max 3
...
14. Activate the pool configuration:
# pooladm -c
15. In the zone, verify that now 3 CPUs are assigned to the zone, using mpstat 2 2 :
# zlogin zone01 mpstat 2 2
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0  207   203    1    6    0    0    0    0     3    0   0   0 100
  1    0   0    3    10    0    6    0    0    0    0     1    0   1   0  99
  2    0   0    3    10    0    5    0    0    0    0     0    0   0   0 100
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0  202   201    1    0    0    0    0    0     0    0   0   0 100
  1    5   0    0     2    0    1    0    0    0    0     7    0   0   0 100
  2    0   0    0     1    0    0    0    0    0    0     0    0   0   0 100
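The resize sequence above repeats the same poolcfg command with different numbers, which invites a small wrapper. A minimal sketch: the `resize_cmd` helper is hypothetical and only builds the command string (printed here for inspection); on a live system you would execute it and follow with pooladm -s and pooladm -c:

```shell
#!/bin/sh
# Hypothetical wrapper: build the `poolcfg -dc` modify command for a
# given pset and CPU count, mirroring steps 1/6/11 above.
resize_cmd() {
    # $1 = pset name, $2 = CPU count
    printf "poolcfg -dc 'modify pset %s (uint pset.min = %s; uint pset.max = %s)'\n" "$1" "$2" "$2"
}

resize_cmd pset_1 2
# On a real system:  eval "$(resize_cmd pset_1 2)" && pooladm -s && pooladm -c
```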
Question : How to set a unique hostid for non-global zone ?
By default, a non-global zone in Solaris gets the same hostid as the global zone. Starting with Solaris 10 update 9, you can set a unique hostid for a non-global zone using the zonecfg command.
Procedure to set unique hostid
1. Shut down the non-global zone (in our case zone01).
# zlogin zone01 shutdown
2. Set the hostid of your choice using zonecfg command.
# zonecfg -z zone01
zonecfg:zone01> set hostid=80f0c086    (HOSTID of your choice)
zonecfg:zone01> exit
3. Boot the zone for the changes to take effect.
# zoneadm -z zone01 boot
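Before handing a value to `set hostid=`, a script can sanity-check its format. A minimal sketch, assuming a hostid is a string of at most 8 hexadecimal digits (the form used in the example above); the `valid_hostid` helper is hypothetical:

```shell
#!/bin/sh
# Hypothetical validation: accept only 1-8 hex digits before passing
# the value to `zonecfg ... set hostid=`.
valid_hostid() {
    printf '%s' "$1" | grep -Eq '^[0-9a-fA-F]{1,8}$'
}

valid_hostid 80f0c086 && echo "ok"
valid_hostid 80f0c08z || echo "invalid"
```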
Below are some troubleshooting tips for when the command # zoneadm -z [zone_name] install fails to install the non-global zone.
Verify the configuration before install
Make sure the configuration of the zone is correct using the verify sub-command.
global # zonecfg -z zone01
zonecfg:zone01> verify
zonecfg:zone01>            ---- No error should be reported here.
Verify proper privileges
Check that the user is superuser or has the "Zone Management" profile.
global # id
uid=20123(user) gid=1(other)
global # profiles | grep "Zone Management"
Zone Management
Confirm zone install path is accessible by appropriate user
Make sure the zone install path has the correct permissions :
global # zonecfg -z zone01 info | grep zonepath
zonepath: /zones/zone01
global # ls -ld /zones/zone01
drwx------   4 root root 512 Mar 24 14:48 /zones/zone01/    (should be 700)
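The permission check can be automated by parsing the mode string from `ls -ld`. A minimal sketch: the `path_mode_ok` helper is hypothetical, and a temporary directory stands in for the zonepath so the example is self-contained:

```shell
#!/bin/sh
# Hypothetical pre-install check: confirm a zonepath's mode is 700,
# i.e. `ls -ld` shows "drwx------" in its first ten characters.
path_mode_ok() {
    [ "$(ls -ld "$1" | cut -c1-10)" = "drwx------" ]
}

d=$(mktemp -d)           # stand-in for the real zonepath
chmod 700 "$d"
path_mode_ok "$d" && echo "zonepath permissions ok"
chmod 755 "$d"
path_mode_ok "$d" || echo "zonepath permissions wrong (should be 700)"
rmdir "$d"
```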
Confirm sufficient disk space
The global zone administrator must ensure that sufficient disk space is available for the zone installation. Depending on the non-global zone type (whole-root vs. sparse-root), the space taken by the zone's root file system may vary. The nature of the packages installed in the global zone also affects the space requirements of the non-global zones that are created. Provision the disk space for the non-global zone based on these factors.
Check for installation logs
We can also check the installation logs under [zonepath]/root/var/sadm/system/logs. For zonepath = /zones/zone01, you can find the log at /zones/zone01/root/var/sadm/system/logs; the file name is install_log.
Zoneadm install in debug/verbose mode
“zoneadm install” calls a Live Upgrade component to actually populate the zone's root file system. Add the following lines to /etc/default/lu to enable debug/verbose mode :
LU_DEBUG_STATE=zon
export LU_DEBUG_STATE
Contact Oracle support
Finally, if you cannot determine the cause of the failure, you can contact Oracle support with the necessary debug data :
Collect the truss output for zoneadm and ptree output.
# ptree [pid of zoneadm install command]
# truss -alef -v all -o truss_zoneadm.out zoneadm -z [zone_name] install
Also provide the install logs :
[zonepath]/root/var/sadm/system/logs/install_log ---> (for zone01 : /zones/zone01/root/var/sadm/system/logs/install_log)