
If you expect LSF to be accessible from outside the HP XC system, all nodes with the
resource_management role must also be configured with the external role and have
the appropriate hardware and wiring to directly access the external network.
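After the configuration is in place, you can verify external LSF access with the standard LSF client commands. The following sketch assumes that LSF client binaries are installed on the external host and that the profile path shown, which is only an assumption, points at this cluster's LSF installation:
    # Source the LSF environment (path is an assumption; adjust to your installation)
    . /opt/hptc/lsf/top/conf/profile.lsf
    # Print the LSF version, cluster name, and current master host
    lsid
    # List the hosts LSF knows about, confirming end-to-end connectivity
    lshosts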
You must assign the disk_io role to any node that is exporting SAN storage.
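If the node exports that storage over NFS, which is an assumption about your configuration, you can list the active exports on the node to confirm the setup:
    # Show the file systems this node currently exports and their export options
    exportfs -v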
By default, no nodes are configured with the login role. Assigning a login role to a node
enables the LVS director service. You must assign a login role to each node on which you
expect users to be able to log in and use the system.
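To confirm that the LVS director service is active on a login node, one hedged check is to inspect the kernel's LVS table with the standard Linux Virtual Server administration tool; run it as root, and expect the output to reflect your own virtual IP configuration:
    # List the LVS virtual service table in numeric form
    ipvsadm -L -n
    # An active director shows the cluster alias (virtual IP) with the login nodes
    # listed as real servers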
By default, no nodes are configured with the nis_server role. Assigning a nis_server
role to a node establishes the node as a NIS slave server. Any node assigned the
nis_server role must also have an external network connection defined.
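Once the nis_server role takes effect, the standard NIS client utilities provide a quick, generic check from any client node; this is ordinary NIS administration, not an HP XC-specific procedure:
    # Show which NIS server this client is currently bound to
    ypwhich
    # List the NIS maps and the master server for each
    ypwhich -m
    # Spot-check that a standard map is being served
    ypcat passwd.byname | head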
You must assign an external role to any node that has an external network connection
configured.
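A simple way to see whether a node has an external connection configured is to list its network interfaces; the interface naming depends on your hardware and is not assumed here:
    # List all interfaces and their addresses; an external connection appears as an
    # interface carrying a site (non-cluster) address
    ip addr show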
By default, no nodes are assigned the cisco_hsm or voltaire_hsm roles, which are
required only if your InfiniBand hardware configuration does not contain a functioning
subnet manager.
You can use a host-based subnet manager from Cisco or Voltaire in certain InfiniBand switch
configurations where there is no managed InfiniBand switch running a subnet manager. A
functioning subnet manager is required for a functioning InfiniBand fabric.
A typical use for a host-based subnet manager is in small server blade configurations that
contain only blade InfiniBand switches, which are unmanaged. In this situation, a host-based
subnet manager can eliminate the cost of an extra external switch; however, additional license
costs for the host-based subnet manager apply, reducing the cost advantage.
Another instance where a host-based subnet manager is required is a hardware configuration
with only large Cisco InfiniBand switches (more than 24 ports) that do not run subnet
managers.
See Section F.3.3 (page 218) and Section F.3.16 (page 221) for more information about the
cisco_hsm and voltaire_hsm roles.
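Regardless of whether the subnet manager runs on a switch or on a host, you can confirm that one is active from any node with an InfiniBand port by using the standard infiniband-diags utilities; these commands are generic OFED diagnostics, not part of the Cisco or Voltaire host-based subnet manager packages:
    # Report the master subnet manager currently active on the fabric
    sminfo
    # Confirm that the local port state is Active, which requires a running subnet manager
    ibstat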
F.2.2 Special Considerations for Hardware Configurations with 63 or Fewer Nodes
Before deciding whether you want to accept the default configuration for hardware configurations
with 63 or fewer nodes, consider that a compute role is assigned to the head node by default.
Therefore, when users submit jobs, those jobs can run on the head node. In that situation,
performance is less than optimal if interactive users are also working on the head node.
Consider removing the compute role from the head node to prevent it from being configured
as a compute node.
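One hedged way to check whether the head node currently accepts jobs is to query the resource manager. Assuming SLURM is managing the nodes, which is the usual arrangement on HP XC, the following listing shows each node and the partition it belongs to; if the head node appears in a partition that accepts jobs, work can be scheduled on it:
    # List every node, its state, and its partition membership
    sinfo -N -l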
F.2.3 Special Considerations for Hardware Configurations with 64 or More Nodes
Before deciding whether you want to accept the default configuration provided for hardware
configurations with 64 or more nodes, consider that the cluster is optimized for computation by
default; that is, compute nodes have no additional services on them. Therefore, consider whether
you want more compute nodes overall at the expense of impacting any other services running on those nodes.
F.2.4 Special Considerations for Improved Availability
Special considerations for assigning roles for improved availability of services are documented
in Table 1-2 (page 31).
F.3 Role Definitions
A node role is defined by the services the node provides. The role is an abstraction that
combines one or more services into a group. Roles provide a convenient way of installing services
on a node. Node roles, listed alphabetically, are characterized as follows:
“Availability Role” (page 217)
“Avail_node_management Role” (page 217)
“Cisco_hsm Role” (page 218)