
alias eth0 tg3
alias eth1 tg3
alias eth2 e1000
alias eth3 e1000
3. Save your changes and exit the text editor.
4. Use the text editor of your choice to edit the /etc/sysconfig/network-scripts/
ifcfg-eth[0,1,2,3] files, and remove the HWADDR line from each file if it is present
(a scripted alternative is sketched after this procedure).
5. If you made changes, save them and exit each file.
6. Reload the modules:
# modprobe tg3
# modprobe e1000
7. Follow the instructions in the HP XC System Software Installation Guide to complete the cluster
configuration process (beginning with the cluster_prep command).
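If many files require the edit in step 4, the change can also be scripted. The following command is a sketch, not part of the documented procedure; it uses sed to delete any HWADDR lines from the four ifcfg-eth files and keeps a .orig backup of each. Verify the edited files before reloading the modules:
# sed -i.orig '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth[0-3]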
5.2 Notes that Apply to the Discovery Process
The notes in this section apply to the discover command.
5.2.1 Discovery of HP ProLiant DL145 G3 Nodes Fails When Graphics Cards Are
Present
When an HP ProLiant DL145 G3 node contains a graphics card, the node often fails to PXE boot.
Even when the BIOS boot settings are configured to include a PXE boot, these settings are often
reset to the factory defaults when the BIOS restarts after the changes are saved. This behavior
causes the discovery and imaging processes to fail.
Follow this procedure to work around the discovery failure:
1. Begin the discovery process as usual by issuing the appropriate discover command.
2. When the discovery process turns on power to the nodes of the cluster, manually turn off
the DL145 G3 servers that contain graphics cards.
3. Manually turn on power to each DL145 G3 server, one at a time, and use the cluster's console
to force each node to PXE boot by pressing the F12 key at the appropriate time during the
BIOS startup (a remote-management alternative is sketched after this procedure).
After you complete this task for each DL145 G3 server containing a graphics card, the discovery
process continues and completes successfully.
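If the management processors in these servers are reachable over IPMI, the power cycling and one-time PXE boot override can also be performed remotely with a tool such as ipmitool. This is an alternative sketch, not a procedure from the release notes; the BMC host name and credentials shown here are hypothetical, and the commands must be repeated for each affected server:
# ipmitool -I lanplus -H dl145-bmc1 -U admin -P password chassis power off
# ipmitool -I lanplus -H dl145-bmc1 -U admin -P password chassis bootdev pxe
# ipmitool -I lanplus -H dl145-bmc1 -U admin -P password chassis power on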
The workaround for the imaging failure on these servers is described in “HP ProLiant DL145
G3 Node Imaging Fails When Graphics Cards Are Present” (page 22), which also identifies the
appropriate point at which to perform that task.
5.3 Notes that Apply Before Running the cluster_config Utility
Read the notes in this section before you invoke the cluster_config utility.
5.3.1 Availability Sets That Do Not Contain the Head Node Must Use a Quorum
Server for Quorum
When Serviceguard package files are created by the cluster_config utility, the name of the
quorum server or the lock LUN is placed in the Serviceguard cluster configuration files. However,
if the device names are reordered, the specification of the lock LUN in the Serviceguard cluster
configuration file might be incorrect. If a Serviceguard cluster includes the head node, a rule is
automatically created to provide a unique lock LUN name that will exist regardless of device
name reordering.
This capability is not possible for Serviceguard clusters that do not contain the head node because
the disk-specific information is not available during cluster_config processing.
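To illustrate the distinction, a Serviceguard cluster configuration file identifies its quorum mechanism with parameters along the following lines. This excerpt is a sketch only; the quorum server host name and the device path are hypothetical, and the exact parameter set depends on your Serviceguard release:
# Quorum server form: quorum is identified by a host name, which remains
# valid even if local device names are reordered.
QS_HOST                 qserver1
QS_POLLING_INTERVAL     300000000
# Lock LUN form: quorum is identified by a device path, which can become
# incorrect if device names are reordered.
CLUSTER_LOCK_LUN        /dev/sdc1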