HP Fabric Clustering System for InfiniBand™ Interconnect Performance on HP-UX 11iv2
Confidential Page 7 1/28/2005
[Figure: Bandwidth Scaling — Point-to-Point configuration (rx4640 / 3 CPUs / 2 GB RAM). The chart plots uni-directional bandwidth (MB/s), bi-directional bandwidth (MB/s), and the corresponding CPU utilization (%) for each, against the number of HCAs (1 and 2). Bandwidth axis: 0–1600 MB/s; CPU utilization axis: 0–100%.]
The HP-UX Fabric Clustering System interconnect solution scales without adversely affecting the service demand placed on the server. Because the rx4640 lacks dual-rope slots, HCA scaling was tested only up to two HCAs.
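The scaling claim above can be made concrete with a small worked example. Service demand here is the CPU cost of each megabyte moved; if adding a second HCA doubles bandwidth while CPU utilization also doubles, the per-MB cost is unchanged. This is a minimal sketch; the numbers below are hypothetical illustrations, not values read from the measured charts.

```python
# Sketch: CPU service demand per MB transferred.
# demand (CPU-seconds per MB) = busy CPU-seconds per second / MB per second

def service_demand(cpu_util_pct: float, num_cpus: int, bandwidth_mb_s: float) -> float:
    """Return CPU-seconds consumed per MB transferred."""
    cpu_seconds_per_second = (cpu_util_pct / 100.0) * num_cpus
    return cpu_seconds_per_second / bandwidth_mb_s

# Hypothetical data points for a 3-CPU rx4640 (not measured values):
one_hca = service_demand(20.0, 3, 700.0)    # one HCA
two_hca = service_demand(40.0, 3, 1400.0)   # two HCAs: 2x bandwidth, 2x CPU

# Equal per-MB demand means the interconnect scales without raising
# the CPU cost of each MB moved.
print(abs(one_hca - two_hca) < 1e-12)
```

When the two values match, added HCAs buy proportionally more throughput rather than extra per-byte overhead, which is what "scales without adversely affecting the service demand" asserts.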
HP Fabric Clustering System – Switch Fabric
The HP-UX Fabric Clustering System interconnect solution can be used to build multi-node clusters using InfiniBand switches. The AB291A 12-port 4X switch was used to test the switch configurations. The following switch configurations were used for testing the performance characteristics:
• 2-node switch configuration
[Figure: 2-node switch configuration — two rx2600 servers, each with an InfiniBand HCA in a PCI-X 133 slot, connected by IB 4X cables through an AB291A 12-port IB switch.]
• 4-node (3 clients & 1 server) switch configuration