process per node and two processes per node. The key representative tests from the PMB suite we
have examined are as follows:
• Single Transfer Benchmarks
o PingPong (see the sketch after this list)
o PingPing
• Parallel Transfer Benchmarks
o SendRecv
o Exchange
• Collective Benchmarks
o Reduce (N-to-1)
o Alltoall (N-to-N)
o Bcast (1-to-N)
o Barrier
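To make the single-transfer category concrete, the following sketch shows the essence of a PingPong measurement: rank 0 sends a message to rank 1, which echoes it back, and the round-trip time divided by two approximates the one-way latency. This is a minimal illustration rather than the actual PMB source; the message size, repetition count, and output format here are assumed for the example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int reps = 1000;        /* assumed repetition count */
    const int size = 1024;        /* assumed message size in bytes */
    char buf[1024] = {0};
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {
            /* send to rank 1, then wait for the echo */
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* echo the message back to rank 0 */
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * reps));

    MPI_Finalize();
    return 0;
}

Run with two MPI processes (for instance, one per node across the InfiniBand fabric), this yields the kind of one-way latency figure that the PMB PingPong test characterizes across a range of message sizes.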
Our intent has been to provide recommendations for cluster size (i.e., number of compute nodes) and switch configuration, specifically when using AB291A 12-port switches as building blocks.
While large node-count switch fabrics can be constructed by cabling together many small port-count switches, doing so adds cabling complexity compared with a single large port-count switch. To reduce that complexity for customers, HP has limited the number of small port-count switches it recommends for constructing clusters.
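As an illustrative calculation (an assumed topology, not an HP recommendation): an 18-node fabric built from two 12-port switches could attach nine nodes to each switch and use the remaining three ports on each as inter-switch links. Nine hosts per switch would then share three uplinks, a 3:1 oversubscription, and the fabric would require three inter-switch cables on top of the 18 host cables, whereas a single large port-count switch requires only the host cables.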
To achieve our desired goal, we have tested the following configurations:
• 12-node cluster
• 18-node cluster
For each configuration, we have used the 96-port TS170 switch as a reference baseline and then configured one or two AB291A 12-port switches to yield the desired number of ports.
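As a hypothetical illustration of how such a run might be launched (the PMB binary name, benchmark selection arguments, and mpirun flags below are assumptions; consult the PMB documentation for the exact invocation):

mpirun -np 12 ./PMB-MPI1 PingPong PingPing SendRecv Exchange Reduce Alltoall Bcast Barrier

Here -np 12 would place one process on each node of the 12-node cluster, and -np 24 would place two.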