The Hardware Management Console (HMC) is used to control System p servers connected to it over a private network. Control functions such as power on/off, logical partitioning (LPAR), dynamic logical partitioning (DLPAR), and Capacity Upgrade on Demand (CUoD) are handled by the HMC. In an InfiniBand environment the HMC has an additional role: it runs the IBM Network Manager (IBM NM), which manages the InfiniBand network.
HMC and Managed System
Partitions and Profiles
1. To list all managed systems configured in an HMC:
# lssyscfg -r sys
2. To list all LPARs (partitions) in a managed system:
# lssyscfg -r lpar -m Managed_System
3. To activate/start an LPAR:
# chsysstate -r lpar -m Managed_System -o on -n LPAR_Name -f Profile_Name
4. To deactivate/shutdown an LPAR:
# chsysstate -r lpar -m Managed_System -o shutdown --immed -n LPAR_Name
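A gentler option is to ask the operating system in the partition to shut itself down (a sketch, assuming your HMC level supports the osshutdown operation):
# chsysstate -r lpar -m Managed_System -o osshutdown -n LPAR_Name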
5. To open the console of a partition:
# mkvterm -m Managed_System -p LPAR_Name
6. To close the console of a partition:
# rmvterm -m Managed_System -p LPAR_Name
7. To list the profile of a partition:
# lssyscfg -r prof -m Managed_System --filter "lpar_names=LPAR_Name,profile_names=Profile_Name"
8. To change the min/desired/maximum memory settings of a partition profile:
# chsyscfg -r prof -m Managed_System -i "name=Profile_Name,lpar_name=LPAR_Name,min_mem=512,desired_mem=19456,max_mem=20480"
9. To change the min/desired/maximum processor units of a partition profile:
# chsyscfg -r prof -m Managed_System -i "name=Profile_Name,lpar_name=LPAR_Name,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0"
10. To change the min/desired/maximum virtual processor of a partition profile:
# chsyscfg -r prof -m Managed_System -i "name=Profile_Name,lpar_name=LPAR_Name,min_procs=1,desired_procs=2,max_procs=6"
11. To change capped/uncapped setting in a partition profile:
# chsyscfg -r prof -m Managed_System -i "name=Profile_Name,lpar_name=LPAR_Name,sharing_mode=uncap,uncap_weight=128"
Possible values for sharing_mode are cap and uncap. Possible values for uncap_weight range from 0 to 255 (128 is the default).
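To verify the change, the profile can be listed with only those fields (a sketch; it assumes sharing_mode and uncap_weight are also valid -F output field names):
# lssyscfg -r prof -m Managed_System --filter "lpar_names=LPAR_Name" -F name,sharing_mode,uncap_weight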
12. To change the name of a partition profile:
# chsyscfg -r prof -m Managed_System -i "name=Profile_Name,lpar_name=LPAR_Name,new_name=New_Profile_Name"
13. To change the name of a partition:
# chsyscfg -r lpar -m Managed_System -i "name=LPAR_Name,new_name=New_LPAR_Name"
14. To change the default profile of a partition:
# chsyscfg -r lpar -m Managed_System -i "name=LPAR_Name,default_profile=Partition_Profile_Name"
15. To set "power off the machine after all partitions are shut down" for a Power machine:
# chsyscfg -r sys -m Managed_System -i "power_off_policy=0"
Possible values are
- 0 -> Power off after all partitions are shut down
- 1 -> Do not power off after all partitions are shut down
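To check the current setting on a managed system (a sketch; it assumes power_off_policy is a valid -F output field of lssyscfg -r sys):
# lssyscfg -r sys -m Managed_System -F power_off_policy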
16. To rename a system profile:
# chsyscfg -r sysprof -m Managed_System -i "name=Sys_Prof_Name,new_name=New_Sys_Prof_Name"
17. To add 2 more partition profiles to a system profile:
# chsyscfg -r sysprof -m Managed_System -i 'name=Sys_Prof_Name,"lpar_names+=partition3,partition4","profile_names+=profile3,profile4"'
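To list the system profiles defined on a managed system (a sketch; sysprof is the same resource type used in the two commands above):
# lssyscfg -r sysprof -m Managed_System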
User Management
1. To list all users in an HMC:
# lshmcusr
2. To list only user names and managed resource roles for all HMC users:
# lshmcusr -F name:resourcerole
3. To create a user:
# mkhmcusr -u User_Id -a ROLE -d DESCRIPTION --passwd PASSWORD -M PASSWD_EXPIRATION_DAYS
4. To remove a user:
# rmhmcusr -u USER_NAME
5. To change an HMC user’s password:
# chhmcusr -u User_Name -t passwd -v New_Password
6. To change the task role for the user “user1” to hmcoperator:
# chhmcusr -u user1 -t taskrole -v hmcoperator
Available task roles are hmcsuperadmin, hmcoperator, hmcviewer, hmcpe, hmcservicerep, or a user-defined task role.
7. To list all managed resource objects:
# lsaccfg -t resource
8. To list all managed resource roles:
# lsaccfg -t resourcerole
9. To create a managed resource role using a config file:
# mkaccfg -t resourcerole -f /tmp/fil1
10. To create a task role:
# mkaccfg -t taskrole -i 'name=tr1,parent=hmcsuperadmin,"resources=cec:chcod+lscod+lshwres,lpar:chsyscfg+lssyscfg+mksyscfg"'
11. To change a task role:
# chaccfg -t taskrole -i 'name=tr1,"resources=cec:chhwres+chsysstate,lpar:chsyscfg+chled+chhwres"'
12. To remove a task role:
# rmaccfg -t taskrole -n tr1
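To review the task roles now defined on the HMC (a sketch; it assumes taskrole is accepted by lsaccfg -t, alongside resource and resourcerole shown above):
# lsaccfg -t taskrole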
HMC Backup
1. To back up HMC data to DVD:
# bkconsdata -r dvd
2. To back up HMC data to an FTP server:
# bkconsdata -r ftp -h ftp_server_name -u ftp_username --passwd ftp_password
3. To back up HMC data to an NFS-mounted file system:
# bkconsdata -r nfs -n nfs_server_name -l Nfs_mount_point
4. To list storage media devices:
# lsmediadev
5. To back up profile data for a managed system:
# bkprofdata -m Managed-System -f File_name
Profile data files are kept under /var/hsc/profiles/Managed-Machine-Serial-Number
6. To restore managed system profile data:
# rstprofdata -m Managed-System -l restore_type -f File-Name
Valid restore types are
- 1 – Full restore from the backup file.
- 2 – Merge the current profile data and backup profile data, with priority to backup.
- 3 – Merge the current profile data and backup profile data, with priority to current data.
- 4 – Initialize the profile data. All partition profiles and system profiles will be deleted.
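For example, a full restore (restore type 1) from a backup file created with bkprofdata, using a hypothetical file name myProfBackup:
# rstprofdata -m Managed-System -l 1 -f myProfBackup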
DLPAR Operations
1. To list memory at the system level:
# lshwres -r mem -m Managed-System --level sys
2. To list memory at the LPAR level:
# lshwres -r mem -m Managed-System --level lpar
3. To list processors / processing units at the system level:
# lshwres -r proc -m Managed-System --level sys
4. To list processors / processing units at the LPAR level:
# lshwres -r proc -m Managed-System --level lpar
5. To list processors / processing units at the shared pool level:
# lshwres -r proc -m Managed-System --level pool
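To pull out just a few fields per partition (a sketch; it assumes lpar_name, curr_mem and curr_proc_units are valid -F attribute names for these listings):
# lshwres -r mem -m Managed-System --level lpar -F lpar_name,curr_mem
# lshwres -r proc -m Managed-System --level lpar -F lpar_name,curr_proc_units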
6. To add 1GB of memory to an lpar dynamically:
# chhwres -r mem -m Managed-System -o a -p Lpar_name -q 1024
7. To remove 1GB of memory from an lpar dynamically:
# chhwres -r mem -m Managed-System -o r -p Lpar_name -q 1024
8. To move 1GB of memory from lpar_a to lpar_b dynamically:
# chhwres -r mem -m Managed-System -o m -p Lpar_a_name -t Lpar_b_name -q 1024
9. To add 1 dedicated cpu to an lpar dynamically:
# chhwres -r proc -m Managed-System -o a -p Lpar_name --procs 1
10. To remove 1 dedicated cpu from an lpar dynamically:
# chhwres -r proc -m Managed-System -o r -p Lpar_name --procs 1
11. To move 1 dedicated cpu from lpar_a to lpar_b dynamically:
# chhwres -r proc -m Managed-System -o m -p Lpar_a_name -t Lpar_b_name --procs 1
12. To add 0.5 processing units to an lpar dynamically:
# chhwres -r proc -m Managed-System -o a -p Lpar_name --procunits 0.5
13. To remove 0.5 processing units from an lpar dynamically:
# chhwres -r proc -m Managed-System -o r -p Lpar_name --procunits 0.5
14. To move 0.5 processing units from lpar_a to lpar_b dynamically:
# chhwres -r proc -m Managed-System -o m -p Lpar_a_name -t Lpar_b_name --procunits 0.5
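DLPAR operations can stall when RMC connectivity to the partition is poor; a timeout can be supplied (a sketch; it assumes your HMC level supports the -w option, which takes the timeout in minutes):
# chhwres -r mem -m Managed-System -o a -p Lpar_name -q 1024 -w 10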
15. To restore memory resources on an lpar based on its profile:
# rsthwres -r mem -m Managed-System -p Lpar_name
16. To restore memory resources for all partitions in a managed system:
# rsthwres -r mem -m Managed-System
17. To restore processing resources on an lpar based on its profile:
# rsthwres -r proc -m Managed-System -p Lpar_name
18. To restore processing resources for all partitions in a managed system:
# rsthwres -r proc -m Managed-System
19. To restore physical I/O slots on an lpar based on its profile:
# rsthwres -r io -m Managed-System -p Lpar_name
20. To restore physical I/O slots for all partitions in a managed system:
# rsthwres -r io -m Managed-System
Reference Code
1. To list the current reference code for the managed system:
# lsrefcode -r sys -m Managed-System
2. To list the last 10 reference codes for the managed system:
# lsrefcode -r sys -m Managed-System -n 10
3. To list the reference code (called the LED on older pSeries servers) for each partition in the managed system:
# lsrefcode -r lpar -m Managed-System -F lpar_name,time_stamp,refcode
4. To list the last 25 reference codes (LEDs) for partitions lpar-a and lpar-b:
# lsrefcode -r lpar -m Managed-System -n 25 --filter '"lpar_names=lpar-a,lpar-b"'
HMC General Terms
1. What is the maximum number of servers managed by an HMC?
– Maximum of 48 non-590-595 servers
– Maximum of 32 590/595 servers
2. What is the maximum number of LPARs supported by an HMC?
– Maximum of 254 LPARs
3. How many HMCs can manage a server at one time?
– You can have a maximum of 2 HMCs manage a server at one time
4. What are the different types of dynamic operations you can do with CPU, memory and I/O adapters on an LPAR?
– Add
– Remove
– Move
5. How do we connect the HMC to Power machines?
– For POWER4 machines, the HMC connects using serial cables. For POWER5 machines, the HMC connects to the service processors over SSL-encrypted Ethernet, replacing the serial cables.
6. Do we have a firewall configured on the HMC?
– Yes. Each network card has an integrated firewall.
7. Do we need to configure DHCP on the HMC?
– The HMC may act as a DHCP server for entry-level and mid-range servers, but for high-end servers such as the p595 the HMC must be the DHCP server.
8. Can the same HMC manage POWER4 and POWER5 machines?
– POWER5 HMCs cannot manage POWER4 servers, and vice versa.
9. Can existing POWER4 HMCs be upgraded to support POWER5 machines?
– Yes, we can. This involves a complete overwrite of the disk and the loss of all previous configuration, including user profiles.
10. What to do in case of a disk failure in the HMC?
– We can restore the HMC using the recovery CD, then restore the latest Critical Console Data backup, which brings back the profiles, user IDs, passwords, etc.
11. What is the default user ID and password for the HMC?
– When the HMC is powered on for the first time, log in as hscroot with the password ‘abc123’.
12. Can we manage a Power machine without an HMC?
– Yes. We can run a server in manufacturing default mode, with all resources but no logical partitioning, CoD, Service Focal Point, etc. For entry-level servers, we can use the Integrated Virtualization Manager (IVM).
13. What is the network requirement for a dual HMC connection?
– Dual HMCs require two different private networks.
14. What are the default service processor IP addresses on POWER5 machines?
Eth0 – HMC1 – 192.168.2.147 / 255.255.255.0
Eth1 – HMC2 – 192.168.3.147 / 255.255.255.0
15. What is the default user ID and password for accessing the service processor?
User id – admin
Password – admin
16. Do we need an HMC for p5 model servers?
– One HMC is mandatory for the 590, 595 or 575. Dual HMCs are recommended.
17. Do we need a private network for HMC connectivity on the p5-595?
– One private network is mandatory for the p5 590, 595 or 575.
18. Can IVM support multiple servers?
– No. One IVM is allowed per server, and it manages partitions only on that server.
19. What does the FSP (Flexible Service Processor) have?
The FSP has:
a. Operating System
b. UserIds / Passwords
c. Filesystem
d. Networking
e. Firewall
f. Webserver
g. ASMI
h. Firmware
20. What to do if you forget the admin password for the FSP?
– If you do not know the admin password, place a hardware call to get ‘celogin’.
21. What to do if you forget the HMC hostname/IP address for a long-running LPAR?
– You can always get the HMC IP address from an LPAR that has performed its RMC “handshake” with the HMC.
Issue the command below to get the HMC IP address:
# lsrsrc IBM.ManagementServer
Resource Persistent Attributes for IBM.ManagementServer
resource 1:
        Name             = "169.121.54.48"
        Hostname         = "169.121.54.48"
        ManagerType      = "HMC"
        LocalHostname    = "169.121.54.59"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"SAP-PRodServer"}
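To print only the HMC address (a sketch; lsrsrc accepts attribute names after the resource class, so this assumes the Hostname attribute shown above is the one you want):
# lsrsrc IBM.ManagementServer Hostname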
22. The HMC should be within 8 metres of the managed server.
23. Each FSP Ethernet port should be connected to only one HMC.
DLPAR Requirements
1. What are the version requirements for DLPAR operations?
a. A POWER4 processor-based pSeries system or later
b. October 2002 or later system microcode update
c. An HMC at version R3V1.0 or later
d. AIX 5L Version 5.2 or later
2. What are the AIX filesets required for DLPAR?
# lslpp -l rsct.core*
# lslpp -l csm.client
3. What are the daemons required for DLPAR?
# lssrc -a | grep rsct
 ctrmc            rsct       21044  active
 IBM.CSMAgentRM   rsct_rm    21045  active
 IBM.ServiceRM    rsct_rm    11836  active
 IBM.DRM          rsct_rm    20011  active
 IBM.HostRM       rsct_rm    20012  active
 IBM.DMSRM        rsct_rm      906  active
 IBM.LparCmdRM    rsct_rm      901  active
4. On the HMC, how to list the partitions recognized for DLPAR?
# lspartition -dlpar
If all active AIX 5.2 partitions are listed as Active<1>, …, DCaps:<0xf>, your system has been set up properly for DLPAR. If some active partitions are missing, or some partitions are reported as Active<0>, your system probably still has a network/hostname setup problem.
5. How to resolve name resolution issues between LPARs and HMC?
Step I:
# vi /etc/resolv.conf
1. Use the same DNS server for the LPARs and the HMC.
2. Remove any duplicate entries.
Step II:
Check that the ct_node_id is unique for each node in the environment:
# cat /var/ct/cfg/ct_node_id
If duplicate ct_node_id values are found, run recfgct on the problem node(s) to generate a new, unique ct_node_id.
# /usr/sbin/rsct/install/bin/recfgct
(This command starts/restarts the ctcas and ctrmc subsystems and generates a new ID in /var/ct/cfg/ct_node_id.)
Step III:
Ping the HMC from AIX to verify network connectivity and name resolution.
Step IV:
Run the following commands on the LPAR(s) to refresh the RMC subsystem:
# /usr/sbin/rsct/bin/rmcctrl -z    -> Stops the RMC subsystem and all resource managers
# /usr/sbin/rsct/bin/rmcctrl -A    -> Adds and starts the RMC subsystem
# /usr/sbin/rsct/bin/rmcctrl -p    -> Enables remote client connections
Step V:
Ensure the /var filesystem is not 100% full.
After expanding /var, execute the following commands:
# /usr/sbin/rsct/bin/rmcctrl -z
# rm /var/ct/cfg/ct_has.thl
# rm /var/ct/cfg/ctrmc.acls
# /usr/sbin/rsct/bin/rmcctrl -A
Step VI:
If the problem still persists, run the command below to collect the DLPAR logs under /tmp/ctsupt:
# /usr/sbin/rsct/bin/ctsnap
6. How to find the parent device of a device (for example, the CD-ROM cd0) in AIX?
# lsdev -Cl cd0 -F parent
HMC – System Plan
1. How to make a system plan from a running machine?
# mksysplan -f marc.sysplan -m Machine-Name -v
where marc.sysplan is the file name.
2. How to list a system plan?
# lssysplan
3. How to delete a particular system plan?
# rmsysplan
4. How to deploy a system plan on a managed server?
# deploysysplan
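For example (a sketch; it assumes deploysysplan accepts the same -m and -f arguments as mksysplan above):
# deploysysplan -m Machine-Name -f marc.sysplan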
5. How to copy a system plan from/into the HMC?
# cpsysplan