
problems if the cluster interface fails.) Monitoring is set up in the same way for the cluster
interface as for a user network interface: in both cases, you configure the file serving nodes to
monitor each other over that interface.
Sample scenario
The following diagram illustrates a monitoring and failover scenario in which a 1:1 standby
relationship is configured. Each standby pair is also a network interface monitoring pair. When
SS1 loses its connection to the user network interface (eth1), as shown by the red X, SS2 can no
longer contact SS1 (A). SS2 notifies the management console, which then tests its own connection
to SS1 over eth1 (B). Because the management console also cannot reach SS1 on eth1, it initiates
failover of SS1’s segments (C) and of the user network interface (D).
Identifying standbys
To protect a network interface, you must identify a standby for it on each file serving node that
connects to the interface. The following restrictions apply when identifying a standby network
interface:
• The standby network interface must be unconfigured and connected to the same switch (network)
as the primary interface (a verification sketch follows this list).
• The file serving node that supports the standby network interface must have access to the file
system that the clients on that interface will mount.
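For example, before assigning an interface as a standby, you can verify on the node that it carries
no IP configuration and that its link to the switch is up. A minimal sketch using standard Linux
tools (the interface name eth2 is assumed; these are not X9720-specific commands):
ifconfig eth2
# No "inet addr:" line in the output indicates the interface is unconfigured.
ethtool eth2 | grep "Link detected"
# "Link detected: yes" confirms the interface is cabled to the network.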
Virtual interfaces are highly recommended for handling user network interface failovers. If a VIF
user network interface is teamed/bonded, failover occurs only if all of the teamed network
interfaces fail; if only some of them fail, traffic switches to the surviving teamed interfaces and
no failover takes place.
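For reference, teamed/bonded interfaces of this kind are defined at the operating-system level
rather than through the management console. The following is a minimal RHEL-style sketch, not an
X9720 requirement; the file paths, device names, and bonding options are assumptions. It teams
eth2 and eth3 into bond0 in active-backup mode:
/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
# mode=1 is active-backup; miimon=100 polls link state every 100 ms
BONDING_OPTS="mode=1 miimon=100"
ONBOOT=yes
BOOTPROTO=none
/etc/sysconfig/network-scripts/ifcfg-eth2 (and likewise ifcfg-eth3, with DEVICE=eth3):
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
With this configuration, the loss of one slave only shifts traffic within the bond; failover of
the VIF occurs only when all slaves fail, as described above.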
To identify standbys for a network interface, execute the following command once for each file
serving node. IFNAME1 is the network interface that you want to protect and IFNAME2 is the
standby interface.
<installdirectory>/bin/ibrix_nic -b -H HOSTNAME1/IFNAME1,HOSTNAME2/IFNAME2
The following command identifies virtual interface eth2:2 on file serving node s2.hp.com as the
standby interface for interface eth2 on file serving node s1.hp.com:
<installdirectory>/bin/ibrix_nic -b -H s1.hp.com/eth2,s2.hp.com/eth2:2
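Because standbys are identified per file serving node, a 1:1 standby pair such as the one in the
sample scenario would also be configured in the reverse direction. For example (assuming a
matching unconfigured virtual interface eth2:2 exists on s1.hp.com):
<installdirectory>/bin/ibrix_nic -b -H s2.hp.com/eth2,s1.hp.com/eth2:2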