Using NFS as a file system type with HP Serviceguard A.11.20 on HP-UX and Linux

The NFS server itself should be highly available. The specific configuration will depend on the server platform being
used. It is possible to have the package that uses the NFS file system as shared storage and the package that provides
the NFS file system running in the same cluster. In such a configuration, the NFS client package must not depend on
the NFS server package. If the NFS server package fails over, the client package need not be failed over. The NFS client
package can continue to run and can reconnect to the NFS server once the NFS server package is up.
Any NFS file systems used by the Serviceguard packages should be configured to restrict access to the nodes in the
Serviceguard cluster.
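As a sketch of such a restriction, on a Linux NFS server the export could be limited to the cluster nodes in /etc/exports (the directory path and the node names node1 and node2 are hypothetical; on HP-UX 11i v3 the equivalent is a share entry in /etc/dfs/dfstab):

```shell
# Hypothetical /etc/exports entry on a Linux NFS server:
# export the shared directory read-write, and only to the
# two Serviceguard cluster nodes.
/export/pkgdata  node1(rw,sync) node2(rw,sync)
```

Replace the placeholder names with the actual hostnames of your cluster nodes so that no host outside the cluster can mount the file system.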
Serviceguard supports exclusive activation for volume groups (for example, LVM volume groups). When you use NFS
client-side locks, which HP recommends, exclusive activation is not available, so you must make sure you follow these
recommendations:
• Only the cluster nodes can have access to the file system.
• The file system is used by only one Serviceguard failover package.
• If the package fails, never manually restart it without first ensuring that the file system has been
properly unmounted.
Limitations
• An NFS file system must be used by only one Serviceguard failover package.
• NFS file systems used by Serviceguard packages must be mounted with the -o llock mount option on HP-UX and
-o local_lock=all on Linux to enforce local locking semantics on the NFS client.
• Mount NFS-imported file systems used by Serviceguard packages only as part of starting the package.
A cluster node must not mount NFS shares that are configured as part of any Serviceguard package as part of the
boot-up process; otherwise the package may fail when it starts or fails over.
• So that Serviceguard can verify that all I/O from a node on which a package has failed is flushed before the package
starts on an adoptive node, all the switches and routers between the NFS server and client must support a worst-case
timeout, after which packets and frames are dropped. This timeout is known as the Maximum Bridge Transit Delay
(MBTD). Switches and routers that do not support MBTD must not be used in a Serviceguard configuration, because
they might deliver delayed packets, which could lead to data corruption.
• Networking among the Serviceguard nodes must be configured in such a way that a single failure in the network does
not cause a package failure.
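As an illustrative sketch of the two mount-option requirements above (the server name nfssrv, export path, and mount point are placeholders), the mount commands that a package start operation would issue look like the following; no corresponding /etc/fstab entry should exist, so the share is never mounted during boot:

```shell
# HP-UX: mount the NFS share with local locking (llock), as required
mount -F nfs -o llock nfssrv:/export/pkgdata /pkgdata

# Linux: mount the NFS share with local locking semantics
mount -t nfs -o local_lock=all nfssrv:/export/pkgdata /pkgdata
```

In normal operation you do not run these commands by hand; Serviceguard performs the mount when the package starts, using the options configured in the package file.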
Setting up the NFS server
See the latest Serviceguard NFS Toolkit Administrator's Guide, available at hp.com/go/hpux-serviceguard-docs
under HP Serviceguard NFS Toolkit, for instructions on configuring the NFS server and shares.
Configuring the NFS package and cluster parameters
Configuring the NFS package parameters
In the modular package configuration file, the new parameter fs_server specifies the name of the NFS server.
The value of this parameter can be either the hostname of the NFS server or its IP address (both IPv4 and IPv6
addresses are supported). The NFS server can be configured on a different subnet or in a different domain than the
Serviceguard cluster.
fs_type specifies the file system type. Set this to “NFS” to use this feature.
fs_mount_opt specifies the mount options. On HP-UX this must include -o llock in addition to any other options
you specify; -o llock specifies local locking for the NFS file system. On Linux this must include -o local_lock=all.
fs_fsck_opt should not be used. If any option is found in fs_fsck_opt for an NFS-imported file system, a
warning will be logged and the value will be ignored.
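Putting these parameters together, a minimal sketch of the file-system section of a modular package configuration file on HP-UX might look like the following. The server name, export path, and mount point are hypothetical, and the exact semantics of each fs_* parameter are described in the Serviceguard documentation:

```shell
# Hypothetical file-system parameters for an NFS-imported file system
fs_name         /export/pkgdata      # directory exported by the NFS server
fs_server       nfssrv               # NFS server hostname or IP address
fs_directory    /pkgdata             # local mount point on the cluster node
fs_type         "NFS"                # enables NFS-imported file system support
fs_mount_opt    "-o llock"           # HP-UX; use "-o local_lock=all" on Linux
# fs_fsck_opt is intentionally omitted: it is ignored for NFS file systems
```

Note that fs_fsck_opt is left unset, consistent with the warning above.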