Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.03 Administrator's Guide
are not currently present on the NFS server node, the node cannot boot properly. This
happens if the server is an adoptive node for a file system, and the file system is available
on the server only after failover of the primary node.
3. If your NFS servers must serve PC clients, set the PCNFS_SERVER variable to 1 in the /etc/
rc.config.d/nfsconf file on the primary node and each adoptive node.
PCNFS_SERVER=1
If you run the NFS monitor script with the PCNFS_SERVER variable set to 1, the monitor
script monitors the pcnfsd daemon, and your NFS package fails over to an adoptive node
if pcnfsd fails. If you do not want to monitor pcnfsd, either do not run the NFS monitor
script, or set the PCNFS_SERVER variable to 0 and run pcnfsd manually from the command
line.
4. If your NFS servers will also be NFS clients, set the START_MOUNTD variable to 1 in the
/etc/rc.config.d/nfsconf file on the primary node and each adoptive node.
START_MOUNTD=1
If you configure rpc.mountd in the /etc/inetd.conf file, set the START_MOUNTD variable
to 0. When START_MOUNTD is set to 0, the NFS monitor script does not monitor the
rpc.mountd process. When START_MOUNTD is set to 1 and you run the NFS monitor script,
your NFS package fails over to an adoptive node if rpc.mountd fails.
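The monitoring behavior described in steps 3 and 4 can be sketched as a small shell function. This is an illustrative sketch only, not the actual toolkit monitor script; the base daemon list (nfsd, rpc.statd, rpc.lockd) and the function name are assumptions for illustration.

```shell
#!/bin/sh
# Illustrative sketch only -- NOT the actual Serviceguard NFS monitor script.
# Builds the list of daemons a monitor would watch from the nfsconf variables
# described above. The base list (nfsd rpc.statd rpc.lockd) is an assumption.
monitored_daemons() {
    pcnfs_server=$1     # value of PCNFS_SERVER (0 or 1)
    start_mountd=$2     # value of START_MOUNTD (0 or 1)
    daemons="nfsd rpc.statd rpc.lockd"
    # Step 4: rpc.mountd is monitored only when START_MOUNTD=1
    [ "$start_mountd" = "1" ] && daemons="$daemons rpc.mountd"
    # Step 3: pcnfsd is monitored only when PCNFS_SERVER=1
    [ "$pcnfs_server" = "1" ] && daemons="$daemons pcnfsd"
    echo "$daemons"
}

monitored_daemons 1 1   # -> nfsd rpc.statd rpc.lockd rpc.mountd pcnfsd
monitored_daemons 0 0   # -> nfsd rpc.statd rpc.lockd
```

With both variables set to 0, only the base daemons would be watched; the package then fails over only when one of those processes dies.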
5. On the primary node and all adoptive nodes for the NFS package, set the NUM_NFSD variable
in the /etc/rc.config.d/nfsconf file to the number of nfsd daemons required to
support all the NFS packages that could run on that node at once. It is better to run too many
nfsd processes than too few. In general, you should configure a minimum of four nfsd
processes and at least two nfsd processes for each exported file system. For example, if
a node is the primary node for a package containing two exported file systems, and it is
an adoptive node for another package containing three exported file systems, it must
support five exported file systems in all, so you should configure it to run at least 10
nfsd processes.
NUM_NFSD=10
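The sizing rule above (two nfsd processes per exported file system across every package the node can run, with a floor of four) can be expressed as a small helper. The function name is ours, for illustration; it is not part of the toolkit.

```shell
#!/bin/sh
# Sizing rule from step 5: at least 2 nfsd processes per exported file
# system across all packages that could run on the node, minimum 4.
# (Helper name is illustrative, not part of the toolkit.)
recommended_num_nfsd() {
    total_exported_fs=$1
    n=$((total_exported_fs * 2))
    [ "$n" -lt 4 ] && n=4
    echo "$n"
}

# Example from the text: primary for 2 file systems, adoptive for 3 -> 5 total
recommended_num_nfsd 5   # -> 10
recommended_num_nfsd 1   # -> 4  (the floor of four applies)
```

The result is the value to place in NUM_NFSD on that node; erring high is safe, since it is better to run too many nfsd processes than too few.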
6. Issue the following command on the primary node and all adoptive nodes to start the NFS
server processes.
/sbin/init.d/nfs.server start
7. Configure the disk hardware for high availability. Disks must be protected using HP's
MirrorDisk/UX product or an HP High Availability Disk Array with PV links. Data disks
associated with Serviceguard NFS must be external disks. All the nodes that support the
Serviceguard NFS package must have access to the external disks. For most disks, this means
that the disks must be attached to a shared bus that is connected to all nodes that support
the package. For information on configuring disks, see the Managing Serviceguard manual.
8. Use SAM or LVM commands to set up volume groups, logical volumes, and file systems as
needed for the data that will be exported to clients.
The names of the volume groups must be unique within the cluster, and the major and minor
numbers associated with the volume groups must be the same on all nodes. In addition, the
mounting points and exported file system names must be the same on all nodes.
The preceding requirements exist because NFS uses the major number, minor number, inode
number, and exported directory as part of a file handle to uniquely identify each NFS file.
If differences exist between the primary and adoptive nodes, the client's file handle would
no longer point to the correct file location after movement of the package to a different node.
It is recommended that file systems used for NFS be created as journaled file systems (FStype
vxfs). This ensures the fastest recovery time in the event of a package switch to another node.
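One common way to satisfy the matching major/minor number requirement with HP-UX LVM is to create the volume group device file with an explicit minor number and reuse that minor number on every adoptive node. The following is a sketch only: the volume group name (vgnfs), disk device names, sizes, and the minor number 0x010000 are examples, and these commands run only on HP-UX. See the Managing Serviceguard manual for the authoritative procedure.

```shell
# --- On the primary node ---
# Create the volume group device file with an explicit minor number
# (major 64 is the HP-UX LVM group device major number).
mkdir /dev/vgnfs
mknod /dev/vgnfs/group c 64 0x010000
pvcreate /dev/rdsk/c0t1d0            # example disk device
vgcreate /dev/vgnfs /dev/dsk/c0t1d0
lvcreate -L 1024 /dev/vgnfs          # example 1024 MB logical volume

# Create a journaled (VxFS) file system, as recommended above.
newfs -F vxfs /dev/vgnfs/rlvol1

# Write a map file describing the volume group (preview mode, -p).
vgexport -p -s -m /tmp/vgnfs.map /dev/vgnfs

# --- On each adoptive node ---
# Re-create the group file with the SAME minor number, then import,
# so the major and minor numbers match across the cluster.
mkdir /dev/vgnfs
mknod /dev/vgnfs/group c 64 0x010000
vgimport -s -m /tmp/vgnfs.map /dev/vgnfs
```

Use the same mount point and exported directory name on every node as well, so that client file handles remain valid after a failover.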
9. Make sure the user IDs and group IDs of those who access the Serviceguard NFS file system
are the same on all nodes that can run the package. Make sure the /etc/passwd and /etc/
Before Creating a Serviceguard NFS Package 25