HyperFabric Configuration Guidelines

resource is removed from a cluster. The difference between DRU and
OLAR is that OLAR applies only to the addition or replacement of
adapter cards in nodes.
Load Balancing: Supported
When an HP 9000 HyperFabric cluster is running TCP/UDP/IP
applications, the HyperFabric driver balances the load across all
available resources in the cluster, including nodes, adapter cards,
links, and multiple links between switches.
Switch Management: Not Supported
Switch management is not supported and will not operate properly if
you enable it on a HyperFabric cluster.
For more information on HyperFabric switch management, see
Installing and Administering HyperFabric (Part Number
B6257-90030, Edition E0601) and the HyperFabric Administrator's
Guide (Part Number B6257-90039).
Diagnostics: Supported
You can run diagnostics to obtain information on many of the
HyperFabric components via the clic_diag, clic_probe, and
clic_stat commands, as well as the Support Tools Manager (STM).
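As a brief illustration, the diagnostics commands named above can be run from an HP-UX shell on a cluster node. The installation path shown is an assumption (the default HyperFabric software location); verify it on your system, and consult the manuals cited below for supported options.

```shell
# Sketch only: assumes the HyperFabric software is installed under
# /opt/clic (default location; verify on your system). Run as root
# on a cluster node.

# Display status and statistics for the local HyperFabric resources:
/opt/clic/bin/clic_stat

# Probe the fabric to discover reachable nodes and switches:
/opt/clic/bin/clic_probe

# Run diagnostics on the local HyperFabric adapter hardware:
/opt/clic/bin/clic_diag
```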
For more information on HyperFabric diagnostics, see Installing and
Administering HyperFabric (Part Number B6257-90030, Edition
E0601) and the HyperFabric Administrator's Guide (Part Number
B6257-90043).
Configuration Parameters
This section describes the maximum limits for TCP/UDP/IP HyperFabric
configurations. Numerous variables can impact the performance of any
particular HyperFabric configuration. For guidance on specific
HyperFabric configurations for TCP/UDP/IP applications, see
"TCP/UDP/IP Supported Configurations" on page 17.
HyperFabric is supported only on the HP 9000 series UNIX servers
and workstations.
TCP/UDP/IP is supported for all HyperFabric hardware and
software.
Maximum Supported Nodes and Adapter Cards: