AB291A Fabric Clustering System Support Guide (12-port Switch), April 2004
Table Of Contents
- About This Document
- 1 Introduction to Technology
- 2 Hardware Overview
- 3 Installation Planning
- 4 Installing HP Fabric Clustering System
- 5 Administration and Management
- HP-UX Host Administration and Management
- Switch Administration and Management
- CLI Overview
- Using the CLI
- Advanced Switch Setup
- Configuration, Image, and Log Files
- Configuration, Image, and Log File Overview
- File Management
- Listing Configuration, Image, and Log Files
- Viewing Configuration Files
- Viewing Log Files
- Saving Configuration Files
- Saving for System Reboot
- Saving the Backup Configuration
- Specifying the Configuration to Use at
- Saving and Copying Files
- Downloading Files to the System
- Deleting Configuration, Image, and Log Files
- Managing Log Files
- Understanding the Log Format
- Uploading Log Files
- Administering the System
- 6 Monitoring and Troubleshooting
- A Specifications
- B HP 12-Port 4X Fabric Copper Switch Commands
- Show Commands
- show arp ethernet
- show arp ib
- show authentication
- show backplane
- show boot-config
- show card
- show card-inventory
- show clock
- show config
- show fan
- show host
- show ib
- show ib sm configuration
- show ib sm multicast
- show ib sm neighbor
- show ib sm node subnet-prefix
- show ib sm partition
- show ib sm port
- show ib sm service
- show ib sm switch
- show ib-agent channel-adapter
- show ib-agent summary
- show ib-agent switch
- show ib-agent switch linear-frd-info
- show ib-agent switch all mcast-info lid
- show ib-agent switch all node-info
- show ib-agent switch all pkey-info
- show ib-agent switch port-info
- show ib-agent switch sl-vl-map
- show ib-agent switch switch-info
- show interface ib
- show interface ib sm
- show interface ib sm statistics
- show interface mgmt-ethernet
- show interface mgmt-ib
- show interface mgmt-serial
- show ip
- show location
- show logging
- show ntp
- show power-supply
- show running-status
- show sensor
- show snmp
- show system-services
- show terminal
- show trace
- show user
- show version
- IP Commands
- HP Fabric Clustering System Commands
- Administrative Commands
- action
- boot-config
- broadcast
- card
- clock
- configure
- copy
- delete
- dir
- disable
- enable
- exec
- exit
- ftp-server enable
- gateway
- help
- history
- hostname
- install
- interface
- interface mgmt-ethernet
- interface mgmt-ib
- ip
- location
- login
- logging
- logout
- more
- ntp
- ping
- radius-server
- reload
- shutdown
- snmp-server
- telnet
- terminal length
- terminal time-out
- trace
- type
- username
- who
- write
- Show Commands
- C How to Use Windows HyperTerminal
- Glossary

Installation Planning
Planning the Cluster
The HP Fabric Clustering System product does not provide the capability to balance load
across all available resources in the cluster, including nodes, adapter cards, links, and
multiple links between switches.
Configuration Parameters
This section discusses the maximum limits for Fabric configurations. There are numerous variables that can
impact the performance of any particular Fabric configuration. For more information on specific Fabric
configurations for applications, see “Fabric Supported Configurations” on page 22.
• HP Fabric Clustering System is supported only on rx1600, rx2600, rx4640, rx5670, rx7620, rx8620, and
Integrity Superdome servers running 64-bit HP-UX 11i v2.
• Maximum Supported Nodes and Adapter Cards
HP recommends creating switched Fabric cluster configurations with a maximum of 64 nodes.
In point-to-point configurations running HP Fabric Clustering System applications, only two servers may
comprise a cluster. More than one adapter card may be used per server, however.
NOTE HP-MPI is constrained to support only a single port per node in a point-to-point
configuration. Use of more than one port will cause the MPI application to abort.
A maximum of eight fabric adapter cards is supported per instance of the HP-UX operating system. The
actual number of adapter cards a particular node can accommodate also depends on slot availability
and system resources. See the node-specific documentation for details.
• Maximum Number of Switches
You can interconnect (mesh) the 12-port copper switches in a single Fabric cluster. HP recommends
meshing a maximum of three 12-port copper switches, although no software constraint is imposed on using
more. If additional ports are needed, HP recommends using a high-port-count switch.
• Trunking Between Switches (multiple connections)
Trunking between switches can be used to increase bandwidth and cluster throughput. Trunking is also a
way to eliminate a possible single point of failure. The number of trunked cables between switches is
limited only by port availability; see the port-budget sketch following this list. To assess the effects of
trunking on the performance of any particular Fabric configuration, consult the whitepapers available on
the HP documentation website.
• Maximum Cable Lengths
The longest supported cable is 10 meters. This constrains the maximum distance between servers and
switches or between servers in node-to-node configurations.
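To make the port-availability constraint concrete, the following sketch works through the port budget for a
mesh of 12-port switches. It is illustrative only and not part of the HP documentation: the three-switch mesh
and the 12 ports per switch come from the limits above, while the trunk width (cables per switch-to-switch
link) is a free parameter you would choose.

# Illustrative port-budget arithmetic for a full mesh of 12-port switches.
# Assumption (not prescribed by this guide): every switch-to-switch link
# uses the same number of trunked cables (the "trunk width").

PORTS_PER_SWITCH = 12  # HP 12-Port 4X Fabric Copper Switch

def node_ports_available(num_switches: int, trunk_width: int) -> int:
    """Return the ports left for server connections across the whole mesh.

    In a full mesh, each switch spends (num_switches - 1) * trunk_width
    ports on inter-switch links; whatever remains can face nodes.
    """
    mesh_ports_per_switch = (num_switches - 1) * trunk_width
    remaining = PORTS_PER_SWITCH - mesh_ports_per_switch
    if remaining < 0:
        raise ValueError("trunk width exceeds the ports available on each switch")
    return num_switches * remaining

# Example: the recommended three-switch mesh.
for width in (1, 2, 3):
    print(f"trunk width {width}: {node_ports_available(3, width)} node-facing ports")
# trunk width 1: 30 node-facing ports
# trunk width 2: 24 node-facing ports
# trunk width 3: 18 node-facing ports

One consequence of this arithmetic is that the recommended 64-node maximum cannot be reached with three
meshed 12-port switches alone (at most 30 node-facing ports, even with single-cable trunks), which is why the
guide recommends a high-port-count switch when additional ports are needed.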
Fabric Supported Configurations
Multiple Fabric configurations are supported to match the performance, cost and scaling requirements of each
installation.
The section “Configuration Parameters” on page 22 outlines the maximum limits for Fabric hardware
configurations. This section discusses the Fabric configurations that HP supports. These recommended
configurations offer an optimal mix of performance and availability for a variety of operating
environments.
Many variables can impact HP Fabric Clustering System performance. If you are considering a
configuration that is beyond the scope of the following HP-supported configurations, contact your HP
representative.