AB291A Fabric Clustering System Support Guide (12-port Switch), April 2004

Chapter 3: Installation Planning
Planning the Cluster
The HP Fabric Clustering System product does not balance the load across all available
resources in the cluster, including nodes, adapter cards, links, and multiple links
between switches.
Configuration Parameters
This section discusses the maximum limits for Fabric configurations. There are numerous variables that can
impact the performance of any particular Fabric configuration. For more information on specific Fabric
configurations for applications, see “Fabric Supported Configurations” on page 22.
HP Fabric Clustering System is supported only on rx1600, rx2600, rx4640, rx5670, rx7620, rx8620, and
Integrity Superdome servers running 64-bit HP-UX 11i v2.
Maximum Supported Nodes and Adapter Cards
HP recommends creating switched Fabric cluster configurations with a maximum of 64 nodes.
In point-to-point configurations running HP Fabric Clustering System applications, a cluster may comprise
only two servers. More than one adapter card may be used per server, however.
NOTE HP-MPI is constrained to support only a single port per node in a point-to-point
configuration. Use of more than one port will cause the MPI application to abort.
A maximum of 8 fabric adapter cards is supported per instance of the HP-UX operating system. The
actual number of adapter cards a particular node can accommodate also depends on slot availability
and system resources. See the node-specific documentation for details.
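As a worked illustration, the node and adapter limits above can be expressed as a small planning check. This is a hypothetical sketch for capacity-planning purposes only, not an HP-supplied tool; the function and parameter names are invented for this example.

```python
# Limits documented in "Configuration Parameters" (illustrative constants).
MAX_SWITCHED_NODES = 64      # HP-recommended maximum nodes in a switched fabric
MAX_ADAPTERS_PER_OS = 8      # fabric adapter cards per HP-UX instance
POINT_TO_POINT_NODES = 2     # point-to-point clusters comprise exactly two servers

def check_cluster(nodes, adapters_per_node, point_to_point=False):
    """Return a list of documented-limit violations for a proposed cluster.

    Hypothetical helper: slot availability and system resources on each
    node may lower the adapter limit further and are not modeled here.
    """
    problems = []
    if point_to_point and nodes != POINT_TO_POINT_NODES:
        problems.append("point-to-point clusters must comprise exactly two servers")
    if not point_to_point and nodes > MAX_SWITCHED_NODES:
        problems.append(
            f"{nodes} nodes exceeds the recommended maximum of {MAX_SWITCHED_NODES}")
    if adapters_per_node > MAX_ADAPTERS_PER_OS:
        problems.append(
            f"{adapters_per_node} adapters exceeds the limit of "
            f"{MAX_ADAPTERS_PER_OS} per HP-UX instance")
    return problems

# A 70-node switched cluster with 9 adapters per node violates two limits.
print(check_cluster(nodes=70, adapters_per_node=9))
```

Remember that HP-MPI adds its own constraint on top of these limits: in a point-to-point configuration it uses only a single port per node.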
Maximum Number of Switches
You can interconnect (mesh) the 12-port copper switches in a single Fabric cluster. HP recommends
meshing a maximum of three 12-port copper switches, although no software constraints are imposed on using
more. If additional ports are needed, HP recommends using a high-port-count switch.
Trunking Between Switches (multiple connections)
Trunking between switches can be used to increase bandwidth and cluster throughput. Trunking is also a
way to eliminate a possible single point of failure. The number of trunked cables between switches is
limited only by port availability. To assess the effects of trunking on the performance of any particular
Fabric configuration, consult the whitepapers available on the HP documentation website.
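The port arithmetic behind meshing and trunking can be sketched as follows: each inter-switch link consumes one port on each of the two switches it joins, and trunking multiplies that cost by the trunk width. This is an illustrative calculation only, not an HP tool, and the function name is invented for this example.

```python
def node_ports_available(num_switches, ports_per_switch=12, trunk_width=1):
    """Ports left for server connections after full-mesh inter-switch cabling.

    Illustrative sketch: assumes every pair of switches is joined by
    trunk_width cables, so each switch spends (num_switches - 1) * trunk_width
    ports on the mesh.
    """
    mesh_ports_per_switch = (num_switches - 1) * trunk_width
    if mesh_ports_per_switch >= ports_per_switch:
        raise ValueError("mesh cabling alone exhausts the switch ports")
    return num_switches * (ports_per_switch - mesh_ports_per_switch)

# Three 12-port switches, single links: 3 * (12 - 2) = 30 ports for nodes.
print(node_ports_available(3))   # 30
# Trunking each inter-switch link with two cables: 3 * (12 - 4) = 24.
print(node_ports_available(3, trunk_width=2))   # 24
```

The trade-off is visible in the numbers: wider trunks add inter-switch bandwidth and redundancy but leave fewer ports for server connections, which is one reason HP suggests a high-port-count switch when more ports are needed.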
Maximum Cable Lengths
The longest supported cable is 10 meters. This constrains the maximum distance between servers and
switches or between servers in node-to-node configurations.
Fabric Supported Configurations
Multiple Fabric configurations are supported to match the performance, cost, and scaling requirements of each
installation.
The section “Configuration Parameters” on page 22 outlined the maximum limits for Fabric hardware
configurations. This section discusses the Fabric configurations that HP supports. These
recommended configurations offer an optimal mix of performance and availability for a variety of operating
environments.
There are many variables that can impact the HP Fabric Clustering System performance. If you are
considering a configuration that is beyond the scope of the following HP supported configurations, contact
your HP representative.