Migrating from HyperFabric to other interconnect solutions such as InfiniBand and Gigabit/10-Gigabit Ethernet
Audience
This document is intended for customers whose cluster interconnect solutions are based on
HyperFabric/HyperFabric2 and who are looking to migrate from HyperFabric to other cluster interconnect
options such as InfiniBand and Gigabit or 10-Gigabit Ethernet.
HyperFabric/HyperFabric2
HyperFabric is an HP high-speed, packet-based interconnect for node-to-node communication.
Instead of using a traditional bus-based technology, HyperFabric is built around a switched fabric
architecture, providing the bandwidth necessary for high-speed data transfer. HyperFabric hardware
consists of host-based interface adapter cards, interconnect cables, and optional switches.
HyperFabric software resides in Application Specific Integrated Circuits (ASICs) and firmware on the
adapter cards, and includes user-space components and HP-UX drivers.
The following are the use-case scenarios in which this clustering solution delivers the required
performance, scalability, and high availability:
• Parallel Database Clusters
o Oracle 9i Real Application Clusters (RAC)
o Oracle 8i Parallel Server (OPS)
• Parallel Computing Clusters
• Client/Server Architecture Interconnects (for example, SAP)
• Multi-Server Batch Applications (for example, SAS Systems)
• Enterprise Resource Planning (ERP)
• Technical Computing Clusters
• HP Message Passing Interface (MPI) based applications (see the MPI sketch after this list)
• HP OpenView Data Protector (earlier known as OmniBack)
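
Several of the use cases above, notably HP MPI based applications, are written against the portable MPI
interface rather than against HyperFabric itself, which is what makes them candidates for migration with
little or no source change. The following is only a minimal sketch using standard MPI calls (not HP-specific
code): a two-rank ping-pong such as this compiles and runs unchanged whether the MPI library underneath is
configured for HyperFabric, InfiniBand, or Ethernet.

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        char buf[16];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                strcpy(buf, "ping");
                /* The interconnect (HyperFabric, InfiniBand, Ethernet) is selected by
                   the MPI library/runtime configuration, not by application code. */
                MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 0 received \"%s\"\n", buf);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                strcpy(buf, "pong");
                MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        MPI_Finalize();
        return 0;
    }

For applications written at this level, migration is typically a matter of relinking or reconfiguring the
MPI runtime for the new interconnect rather than changing application source code.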
HyperFabric Features for TCP/UDP/IP and HMP Applications
The following table summarizes the features that HyperFabric offers to applications running over these
protocol stacks; a short sockets sketch follows the table. Detailed information on these features is
available in the product documentation.
Features                        TCP/UDP/IP        HMP
Online Addition & Replacement   Supported         Not Supported
Event Monitoring Service        Supported         Supported
ServiceGuard                    Supported         Supported
High Availability               Supported         Supported
Dynamic Resource Utilization    Supported         Partially Supported
Load Balancing                  Supported         Supported
Switch Management               Not Supported     Not Supported
Diagnostics                     Supported         Supported
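
For the TCP/UDP/IP column above, applications generally reach HyperFabric through the standard sockets
interface, so the same code path applies over Gigabit/10-Gigabit Ethernet or IP over InfiniBand once the
IP configuration has been moved. The sketch below illustrates this; the host name db-node1 and port 5000
are placeholders used only for illustration.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        int fd, rc;
        const char msg[] = "ping";

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;        /* IPv4; the fabric underneath is invisible here */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        /* "db-node1" and "5000" are hypothetical; only the address the name resolves
           to changes when the cluster moves to another IP-capable interconnect. */
        rc = getaddrinfo("db-node1", "5000", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }

        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            perror("socket/connect");
            freeaddrinfo(res);
            return 1;
        }

        write(fd, msg, sizeof(msg));      /* same sockets call over any IP-capable fabric */

        close(fd);
        freeaddrinfo(res);
        return 0;
    }

HMP applications, by contrast, use the proprietary RDMA interface and are the ones most affected by a
migration, as discussed under the limitations below.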
Limitations with HyperFabric
• This cluster interconnect offers a 2.56 Gb/s link rate and latencies of more than 20 µs.
• Competing technologies provide higher bandwidth.
• The RDMA interconnect technology used (HMP) is a proprietary solution. To keep the HP-UX feature set
competitive, the RDMA interconnect needs to be an industry standard.