HyperFabric Configuration Guidelines
TCP/UDP/IP
In point-to-point configurations, the complexity and performance
limitations that come with a large number of nodes in a cluster make
it necessary to include switching in the fabric. Typically,
point-to-point configurations consist of only 2 or 3 nodes.
In switched configurations, HyperFabric supports a maximum of 64
interconnected adapter cards.
A maximum of 8 HyperFabric adapter cards is supported per
instance of the HP-UX operating system. The actual number of
adapter cards a particular node can accommodate also depends on
slot availability and system resources; see the node-specific
documentation for details.
The HyperFabric subsystem supports a maximum of 8 configured IP
addresses per instance of the HP-UX operating system.
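The adapter and IP address limits above can be expressed as a short
validation check. The following Python sketch is illustrative only: the
node names, counts, and the check_cluster helper are hypothetical and are
not part of any HP-supplied tool. It assumes a switched configuration, so
the 64-adapter cluster limit is also applied.

# Hypothetical sketch: validate a planned HyperFabric layout against the
# documented limits (8 adapter cards and 8 configured IP addresses per
# HP-UX instance, 64 interconnected adapter cards in a switched fabric).
MAX_ADAPTERS_PER_NODE = 8      # adapter cards per instance of HP-UX
MAX_IP_ADDRS_PER_NODE = 8      # configured IP addresses per instance of HP-UX
MAX_ADAPTERS_SWITCHED = 64     # interconnected adapter cards in a switched fabric

def check_cluster(nodes):
    """nodes: mapping of node name -> (adapter_count, ip_address_count)."""
    problems = []
    for name, (adapters, ip_addrs) in nodes.items():
        if adapters > MAX_ADAPTERS_PER_NODE:
            problems.append(f"{name}: {adapters} adapter cards exceeds {MAX_ADAPTERS_PER_NODE}")
        if ip_addrs > MAX_IP_ADDRS_PER_NODE:
            problems.append(f"{name}: {ip_addrs} IP addresses exceeds {MAX_IP_ADDRS_PER_NODE}")
    total = sum(adapters for adapters, _ in nodes.values())
    if total > MAX_ADAPTERS_SWITCHED:
        problems.append(f"cluster: {total} adapter cards exceeds {MAX_ADAPTERS_SWITCHED}")
    return problems

# Example: two nodes, two adapter cards each, one IP address per adapter.
print(check_cluster({"nodeA": (2, 2), "nodeB": (2, 2)}))   # -> []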
• Maximum Number of Switches:
You can interconnect (mesh) up to 4 switches (16-port copper, 16-port
fiber, or mixed with 8 fiber ports and 4 copper ports) in a single
HyperFabric cluster.
• Trunking Between Switches (multiple connections):
You can use trunking between switches to increase bandwidth and
cluster throughput. Trunking is also a way to eliminate a possible
single point of failure (a sketch at the end of these guidelines
illustrates this redundancy check). The number of trunked cables
between nodes is limited only by port availability. To assess the
effects of trunking on the performance of any particular HyperFabric
configuration, contact your HP representative.
• Maximum Cable Lengths:
HF1 (copper): The maximum distance between two nodes or between
a node and a switch is 60 ft. (Two standard cable lengths are sold and
supported: 35 ft. and 60 ft.)
TCP/UDP/IP supports up to four HF1 switches connected in series
with a maximum cable length of 60 ft. between the switches and 60
ft. between switches and nodes.
HF2 (fiber): The maximum distance between two nodes or between a
node and a switch is 200 m. (Four standard cable lengths are sold and
supported: 2 m, 16 m, 50 m, and 200 m.)
TCP/UDP/IP supports up to four HF2 switches connected in series
with a maximum cable length of 200 m between the switches and
200 m between switches and nodes.
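The cable-length and switch-count rules above for TCP/UDP/IP can be
checked mechanically. The Python sketch below is a minimal illustration
under assumed inputs: the check_path helper, the segment list, and the
use of HF1/HF2 as dictionary keys are all hypothetical. It applies the
60 ft. copper and 200 m fiber per-segment limits and the
four-switches-in-series limit.

# Hypothetical sketch of the TCP/UDP/IP cabling rules: every HF1 (copper)
# segment must be 60 ft. or less, every HF2 (fiber) segment 200 m or less,
# and at most four switches may be connected in series.
MAX_SEGMENT_LENGTH = {"HF1": 60.0, "HF2": 200.0}   # ft. for copper, m for fiber
MAX_SWITCHES_IN_SERIES = 4

def check_path(segments, switches_in_series):
    """segments: list of (media, length) pairs along one node-to-node path."""
    errors = []
    for media, length in segments:
        if length > MAX_SEGMENT_LENGTH[media]:
            errors.append(f"{media} segment of {length} exceeds {MAX_SEGMENT_LENGTH[media]}")
    if switches_in_series > MAX_SWITCHES_IN_SERIES:
        errors.append(f"{switches_in_series} switches in series exceeds {MAX_SWITCHES_IN_SERIES}")
    return errors

# Example: node -> switch (50 m fiber) -> switch (200 m fiber) -> node (16 m fiber).
print(check_path([("HF2", 50), ("HF2", 200), ("HF2", 16)], switches_in_series=2))  # -> []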
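Returning to the trunking guideline above, the following Python sketch
illustrates only the redundancy idea: it flags switch pairs joined by a
single cable as possible single points of failure. The link list and
switch names are hypothetical and do not come from any HyperFabric
utility.

from collections import Counter

# Hypothetical cabling plan: each tuple is one physical cable between two switches.
links = [("sw1", "sw2"), ("sw1", "sw2"),   # two trunked cables: redundant link
         ("sw2", "sw3")]                   # single cable: possible single point of failure

pair_counts = Counter(tuple(sorted(link)) for link in links)
for (a, b), count in pair_counts.items():
    if count < 2:
        print(f"warning: only {count} cable between {a} and {b} "
              "(no trunking; possible single point of failure)")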