
1.5 Changes to Internal Node Numbering When Double Density Server Blades Are Present
Double density server blades, such as the HP ProLiant BL2x220c, contain two separate nodes
in each server blade. As a result, an HP BladeSystem c7000 enclosure can hold a maximum of
32 nodes, compared to a maximum of 16 nodes when it is populated with single density server
blades. Likewise, a c3000 enclosure can hold a maximum of 16 nodes, compared to a maximum
of 8 nodes with single density server blades.
Internal node numbers are calculated based on enclosure bay location as follows:
slot number + (node number in the slot * maximum number of bays) = bay number
For example, when a c7000 enclosure is populated with double density server blades, node
numbers are calculated as follows:
1A (node 1, blade 1) = 1 + (1 * 16) = 17 (Node number=1, Host name={node_prefix}1)
1B (node 2, blade 1) = 1 + (2 * 16) = 33 (Node number=2, Host name={node_prefix}2)
2A (node 1, blade 2) = 2 + (1 * 16) = 18 (Node number=3, Host name={node_prefix}3)
2B (node 2, blade 2) = 2 + (2 * 16) = 34 (Node number=4, Host name={node_prefix}4)
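To make the arithmetic concrete, the following shell sketch computes the bay number from a
slot number and node number using the formula above. The bay_number function and its
arguments are illustrative only and are not part of the HP XC software:

    # Sketch only: compute the internal bay number for a double density
    # node, following the formula above. Not an HP XC utility.
    bay_number() {
        local slot=$1      # enclosure slot number (1-16 in a c7000)
        local node=$2      # node within the slot: 1 = A, 2 = B
        local max_bays=$3  # maximum bays: 16 for a c7000, 8 for a c3000
        echo $(( slot + node * max_bays ))
    }
    bay_number 1 1 16    # node 1A -> 17
    bay_number 1 2 16    # node 1B -> 33
    bay_number 2 1 16    # node 2A -> 18
    bay_number 2 2 16    # node 2B -> 34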
1.6 New Parameters to the discover Command Control Node Naming for Double Density Server Blades
The following new feature has been delivered through a patch. If you require this functionality,
you must install the latest patch for this release. See Section 2.2 (page 13) for more information
about downloading patches.
Two parameters have been added to the discover --enclosurebased --nodesonly
--extended command that enable you to specify the node numbering scheme used by the
discovery process for double density server blades:
discover --enclosurebased --nodesonly --extended contiguous
discover --enclosurebased --nodesonly --extended non-contiguous
The contiguous parameter results in sequential node naming. For example, nodes 1A, 1B, 2A,
2B, 3A, 3B...16A, 16B are named n1, n2, n3, n4, n5, n6...n31, n32, respectively.
The non-contiguous parameter names the nodes in the order in which they are connected to
the interconnect switches on the enclosure backplane. For example, nodes 1A, 1B, 2A, 2B,
3A, 3B...16A, 16B are named n1, n17, n2, n18, n3, n19...n16, n32, respectively.
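The difference between the two schemes can be seen by generating both mappings for a fully
populated c7000 enclosure. The following shell loop is a sketch for illustration only, not
output from the discover command:

    # Sketch only: print the node name assigned to each enclosure bay
    # position under both numbering schemes.
    for slot in $(seq 1 16); do
        echo "${slot}A -> n$(( slot * 2 - 1 )) (contiguous), n${slot} (non-contiguous)"
        echo "${slot}B -> n$(( slot * 2 )) (contiguous), n$(( slot + 16 )) (non-contiguous)"
    done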
1.7 Enhancements to Improved Availability
The following services have been added to the list of services that can be configured for improved
availability with HP Serviceguard and the use of availability sets:
hptc_cluster_fs service (/hptc_cluster file system)
This service provides access to head node services if the head node goes down. In addition
to HP Scalable File Share (SFS), shared Fibre Channel storage managed by Serviceguard for
Linux can be used to enable improved availability of the hptc_cluster_fs service.
Using Serviceguard to enable improved availability of the /hptc_cluster file system
requires shared Fibre Channel storage connecting the head node and the other node in the
availability set. The shared storage must contain an LVM volume with an ext3 file system
so that /hptc_cluster can be mounted locally on one of the nodes and served to the rest
of the nodes in the HP XC configuration (see the sketch after this list).
kdump service (for kernel crash dumps)
Using Serviceguard to enable improved availability of the kdump service requires shared
Fibre Channel storage connecting the head node and the other node in the availability set.
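For the /hptc_cluster file system, the following commands sketch one way to prepare an LVM
volume with an ext3 file system on the shared storage. The device name /dev/sdc and the
volume and volume group names are placeholders, not values required by HP XC; consult the
HP XC documentation for the supported procedure.

    # Sketch only: create an ext3 file system on an LVM volume on the
    # shared Fibre Channel storage (placeholder device and names).
    pvcreate /dev/sdc
    vgcreate vg_shared /dev/sdc
    lvcreate -L 20G -n lv_hptc_cluster vg_shared
    mkfs.ext3 /dev/vg_shared/lv_hptc_cluster
    # Mounted locally on one node in the availability set and served
    # to the rest of the nodes in the HP XC configuration:
    mount /dev/vg_shared/lv_hptc_cluster /hptc_cluster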