6.2 HP IBRIX 9000 Storage Network Best Practices Guide (TA768-96069, December 2012)

1 Overview of HP IBRIX 9000 Series networking
The IBRIX solution uses network attached components and associated software to implement a
fault-tolerant distributed file system. This network-centric overview describes the components and
networking concepts used to implement the IBRIX networking solution. Specific attention is given
to the fault-tolerant aspects of the implementation, as these make it more complex than a
typical network attached system.
IBRIX components
The IBRIX solution defines the following network attached components, which collaborate across
the network to implement the distributed file system functionality:
File Clients
File Clients are network connected devices that make requests for files from the IBRIX distributed
file system. File clients can make requests using either the dedicated IBRIX client software
or a standard SMB or NFS client. File Clients are deployed on existing or new
customer infrastructure and are sold separately from the IBRIX NAS solution.
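For clients using the standard protocols, access looks like any ordinary NFS or SMB mount against an FSN-served export. The commands below are a generic sketch: the hostname, export path, and mount points are hypothetical placeholders, not values from this guide, so substitute the names configured in your cluster.

```shell
# Mount an FSN-served export with a standard NFS client.
# fsn1.example.com and /exports/data are hypothetical placeholders.
sudo mkdir -p /mnt/data
sudo mount -t nfs fsn1.example.com:/exports/data /mnt/data

# Or mount the same share with a standard SMB/CIFS client
# (credentials and options reduced to a minimum for illustration).
sudo mount -t cifs //fsn1.example.com/data /mnt/data -o username=user1
```

Clients using the dedicated IBRIX client software instead install that software on the client host and mount the file system through it rather than through NFS or SMB.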
File serving nodes (FSN)
Clients request files from the file serving nodes, or FSNs. The FSNs are servers that provide
the bridge between the physical storage medium and the requesting file clients. To accomplish
this, FSNs are connected to both the customer network and the physical storage. In the X97xx
platforms, the FSNs map to the c7000 enclosure’s server blades. In the X93xx platform, the
FSNs are rack-mounted servers.
Fusion Manager (FM)
The Fusion Manager maintains the cluster configuration and provides graphical and
command-line user interfaces for managing and monitoring the cluster. The Fusion Manager
software is installed on all FSNs when the cluster is installed. The Fusion Manager is active
on one node and passive on the other nodes. When failover occurs, the active FM role moves
to one of the previously passive nodes.
Networks
The networks described in this section refer to logical groupings of related task-specific network
traffic. In early IBRIX implementations, these streams of data were implemented in physically
separate networks with dedicated hardware for maximum redundancy and performance. Modern
networking hardware and topologies can make physically separating these networks unnecessary,
or even undesirable, but the term "network" has been retained to refer to the logical
separation of these traffic types.
To achieve maximum performance, the physical networking implementation must provide sufficient
bandwidth to support all file data movement to and from file clients, as well as FSN-to-FSN
file data movement within the cluster.
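As a rough illustration of this sizing concern, the back-of-the-envelope calculation below adds up client file traffic and intra-cluster FSN-to-FSN traffic to estimate the aggregate bandwidth the physical network must sustain. All numbers (client count, per-client throughput, intra-cluster movement) are hypothetical examples, not IBRIX specifications.

```python
# Back-of-the-envelope bandwidth sizing sketch.
# All figures are hypothetical examples, not IBRIX specifications.

def required_bandwidth_gbps(n_clients, per_client_mbps, intra_cluster_gbps):
    """Aggregate bandwidth the physical network must sustain:
    client file traffic plus FSN-to-FSN (intra-cluster) data movement."""
    client_gbps = n_clients * per_client_mbps / 1000.0
    return client_gbps + intra_cluster_gbps

# Example: 200 clients at 50 Mb/s each, plus 4 Gb/s of intra-cluster movement.
total = required_bandwidth_gbps(200, 50, 4.0)
print(f"{total:.1f} Gb/s aggregate")  # 200 * 50 / 1000 + 4 = 14.0 Gb/s
```

The point of the exercise is that the network must be sized for the sum of both traffic types, not for client traffic alone.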
Cluster network
This network manages the systems that constitute a unified cluster. Cluster membership, health
monitoring, strategic data movement, and private back-channel communications occur over
this network. The cluster network must be operational at all times for the system to be online
and functional.