6.2 HP IBRIX 9000 Storage Network Best Practices Guide (TA768-96069, December 2012)

For 1GbE configurations, HP strongly recommends that the cluster network be configured as
a private network that is separate from the user data-serving network. For 10GbE configurations,
HP recommends that the cluster network and user network be collapsed into a single network.
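As a purely hypothetical illustration (the subnets and interface names below are assumptions, not values from the HP guide), the two recommendations might look like this on an FSN:

    # Illustrative layout only; all subnets and device names are assumptions.
    #
    # 1GbE configuration - cluster network kept private and separate:
    #   bond0   10.10.10.0/24    cluster network (FSN-to-FSN only, not routed)
    #   bond1   192.168.50.0/24  user network (NFS/SMB/FTP/HTTP clients)
    #
    # 10GbE configuration - cluster and user networks collapsed:
    #   bond0   192.168.50.0/24  single network carrying both cluster and user traffic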
User network
This network provides user client systems access to the file system through supported file access
protocols such as NFS, SMB, FTP, and HTTP.
Management network
This network is used for all intra-rack configuration, control, and continuous health monitoring
of the enclosure and storage components. The management network must be accessible at all
times from the FSNs and the network-attached storage system components to ensure maximum
redundancy and fault protection for the storage system.
Many of the components attached to the management network are embedded systems with limited
processing power, and they can be overwhelmed by the volume of traffic found on a busy data
network. Take care to segregate the management network so that it carries as little
non-management traffic as possible. One approach is to place all management components on a
separate subnet that is routable from the main FSN data network.
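As a purely illustrative sketch (the subnet, gateway, and interface name below are assumptions, not values from the HP guide), placing the management components on their own routed subnet means the FSN reaches them through a gateway while the embedded controllers never see the bulk traffic on the data network:

    # Hypothetical addressing: management subnet 10.10.20.0/24, reached from
    # the FSN's bonded data interface through a router at 192.168.50.1.
    # Only traffic destined for management addresses crosses the router;
    # broadcast and bulk data traffic on the data network stays off the
    # management subnet.
    ip route add 10.10.20.0/24 via 192.168.50.1 dev bond0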
The cluster and user networks are configured as follows:
•   All file serving nodes in the IBRIX cluster must be connected to the cluster network.
•   All file serving nodes in the IBRIX cluster and all file-requesting clients must have access
    to the user network.
•   File requests from NFS, SMB, FTP, or HTTP clients traverse the user network.
•   The IBRIX Clients (WIC and LIC) are configured to traverse the cluster network by default,
    but can optionally be configured to traverse one of the user networks.
•   Background data services such as remote replication default to using the cluster network,
    but can optionally be configured to traverse one of the user networks.
FSN physical networking
Each file serving node is equipped with multiple physical connections to the customer network.
This section describes how an FSN uses these connections to provide a fault-tolerant connection
to the customer network.
Physical interface
Network interface devices that have an associated physical hardware component in the server are
referred to as physical interfaces in this document. In the IBRIX networking implementation, the
physical interfaces of an FSN are aggregated to provide multipath redundancy. The aggregation is
accomplished using a bond, and the result is a new bonded interface. Linux networking tools such
as ifconfig display the physical interfaces using an eth# device label. For example, the first
Ethernet device in a server is labeled eth0.
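For illustration only (the device names mentioned in the comments are examples, not output from an actual FSN), the interfaces present on a node can be listed with standard Linux tools:

    # Show every interface, whether or not it is currently up; on a node with
    # two physical ports and one bond this typically includes eth0, eth1, and bond0.
    ifconfig -a

    # The same information is available from the ip utility.
    ip -o link show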
Bond
Linux bonding is a mechanism to create a virtual network interface by aggregating multiple physical
network interfaces. The Linux bonding driver is responsible for directing network traffic between
the virtual bond interface and the underlying physical interfaces. Because it is composed of
multiple physical interfaces, the bonded interface can continue to function when one of the
physical pathways fails. IBRIX uses this capability to give the FSNs a degree of redundancy
against network failures.
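As a minimal sketch, assuming Red Hat-style ifcfg files, an active-backup bonding mode, and hypothetical interface names and addresses (none of which are taken from the HP guide), a bond over two physical interfaces could be described as follows:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.50.10
    NETMASK=255.255.255.0
    # mode=1 (active-backup): one slave carries traffic, the other takes over
    # if the active physical pathway fails; miimon polls link state every 100 ms.
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (and likewise for eth1)
    DEVICE=eth0
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes

    # The live state of the bond, including which slave is currently active,
    # can be inspected at runtime:
    cat /proc/net/bonding/bond0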