Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.03 Administrator's Guide

filesystems that moves between servers as part of the package. This holding directory is a
configurable parameter and must be dedicated to holding Status Monitor (SM) entries only.
A new script, nfs.flm, periodically copies SM entries from the /var/statmon/sm directory
into the package holding directory. The default interval is five seconds; you can change it
by modifying the PROPAGATE_INTERVAL parameter in the nfs.flm script. To edit the
nfs.flm script, see “Editing the File Lock Migration Script (nfs.flm)” (page 31).
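The propagation pass described above can be sketched as follows. This is a simplified illustration, not the shipped nfs.flm script; SM_DIR and HOLDING_DIR stand in for /var/statmon/sm and the configured package holding directory, and the holding-directory path is hypothetical.

```shell
# Simplified sketch of the nfs.flm propagation pass (not the shipped script).
# SM_DIR and HOLDING_DIR stand in for /var/statmon/sm and the package
# holding directory; the holding-directory path below is a hypothetical example.
SM_DIR=${SM_DIR:-/var/statmon/sm}
HOLDING_DIR=${HOLDING_DIR:-/exports/pkg1/sm}
PROPAGATE_INTERVAL=${PROPAGATE_INTERVAL:-5}    # seconds between passes

propagate_sm_entries() {
    # each file in SM_DIR names one client holding locks; copy them all,
    # preserving timestamps
    cp -p "$SM_DIR"/* "$HOLDING_DIR"/ 2>/dev/null
}

# nfs.flm effectively repeats this pass forever:
#   while :; do propagate_sm_entries; sleep "$PROPAGATE_INTERVAL"; done
```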
Upon package failover, the holding directory transitions from the primary node to the
adoptive node, because it resides in one of the filesystems configured as part of the HA/NFS
package.
Once the holding directory is on the adoptive node, the SM entries residing in the holding
directory are copied to the /var/statmon/sm directory on the adoptive node. This populates
the new server's SM directory with the entries from the primary server.
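The restore step on the adoptive node can be sketched the same way; again, a simplified illustration with stand-in paths, not the toolkit's actual code.

```shell
# Simplified sketch of the failover restore step (stand-in paths, not
# toolkit code): repopulate the adoptive node's SM directory from the
# holding directory that arrived with the package filesystems.
HOLDING_DIR=${HOLDING_DIR:-/exports/pkg1/sm}   # hypothetical holding directory
SM_DIR=${SM_DIR:-/var/statmon/sm}

restore_sm_entries() {
    # the holding directory now carries the primary node's SM entries
    cp -p "$HOLDING_DIR"/* "$SM_DIR"/ 2>/dev/null
}
```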
After failover, the HA/NFS package IP address is configured on the adoptive node, and
rpc.statd and rpc.lockd are killed and restarted. This killing and restarting of the
daemons triggers a crash recovery notification event, whereby rpc.statd sends crash
notification messages to all the clients listed in the /var/statmon/sm directory.
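The kill-and-restart step can be sketched with a generic helper. The daemon paths and the way the PIDs are located here are assumptions for illustration, not the toolkit's actual control-script code.

```shell
# Hypothetical sketch of the kill-and-restart step (not the toolkit's
# control script). restart_daemon kills every process whose command name
# matches $1, then starts $2 in the background.
restart_daemon() {
    pids=$(ps -e | awk -v n="$1" '$NF == n { print $1 }')
    [ -n "$pids" ] && kill $pids 2>/dev/null   # unquoted: may be several PIDs
    $2 &
}

# On the adoptive node the toolkit would do something along these lines
# (paths assumed):
#   restart_daemon rpc.statd /usr/sbin/rpc.statd
#   restart_daemon rpc.lockd /usr/sbin/rpc.lockd
# Restarting rpc.statd is what triggers the crash recovery notifications.
```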
These crash recovery notification messages contain the relocatable hostname of the HA/NFS
package that was previously running on the primary node and is currently running on the
adoptive node.
Any client that holds NFS file locks on files residing in the HA/NFS package filesystems
(which have transitioned between servers) sends reclaim requests to the adoptive node, where
the exported filesystems currently reside, and reclaims its locks.
After rpc.statd sends the crash recovery notification messages, the SM entries in the
package holding directory are removed, and the nfs.flm script is started on the adoptive
node. The script once again copies each file in the /var/statmon/sm directory on the
HA/NFS server into the holding directory at the configured interval (five seconds by
default). Each file residing in the /var/statmon/sm directory on the adoptive node
following the package migration represents a client that either reclaimed its locks or
established new locks after failover.
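The cleanup on the adoptive node amounts to emptying the holding directory before the propagation loop resumes; a sketch with a stand-in path:

```shell
# Hypothetical sketch (stand-in path): after rpc.statd has sent its crash
# recovery notifications, clear the stale SM entries from the holding
# directory so the restarted nfs.flm loop repopulates it from scratch.
HOLDING_DIR=${HOLDING_DIR:-/exports/pkg1/sm}

clear_holding_dir() {
    rm -f "$HOLDING_DIR"/*
}

# nfs.flm is then restarted and resumes copying /var/statmon/sm into the
# (now empty) holding directory every PROPAGATE_INTERVAL seconds.
```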
NOTE: To enable the File Lock Migration feature, you need Serviceguard version A.11.15 or
above.
To ensure that the File Lock Migration feature functions properly on HP-UX 11i v1, install
the NFS General Release and Performance patch PHNE_26388 (or a superseding patch). On HP-UX
11i v2 and HP-UX 11i v3, the feature functions properly without a patch.
Overview of NFSv4 File Lock Migration Feature
Serviceguard NFS introduces the “NFSv4 File Lock Migration” feature beginning with version
A.11.31.03. This feature extends the existing File Lock Migration feature, which can be
enabled only for NFSv2 and NFSv3 servers. Its operation, and the way NFSv4 support is added
to the existing file lock migration scheme, is described as follows:
In addition to the holding directory that each HA/NFS package designates for NFSv2 and
NFSv3, a unique holding directory for NFSv4 must also be specified. This NFSv4 holding
directory is a configurable parameter and must be dedicated to holding the v4_state entries
only. Both holding directories should be located in the same filesystem.
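Such a configuration might look like the following fragment. The parameter names and paths here are illustrative stand-ins, not the toolkit's actual variable names.

```shell
# Hypothetical configuration fragment -- parameter names and paths are
# illustrative stand-ins, not the toolkit's actual variables.
NFS_FLM_HOLDING_DIR="/exports/pkg1/sm"            # NFSv2/NFSv3 SM entries
NFS_V4_FLM_HOLDING_DIR="/exports/pkg1/v4_state"   # NFSv4 state entries
# Both directories live in /exports/pkg1, a filesystem that fails over
# with the package, so the lock state travels with it.
```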
When it copies SM entries from the /var/statmon/sm directory into the NFSv2 and NFSv3
holding directory, the nfs.flm script now also copies v4_state entries from
/var/nfs4/v4_state into the NFSv4 holding directory.
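With NFSv4 enabled, the propagation pass effectively copies both state stores; a simplified sketch, where the four directory variables are stand-ins for /var/statmon/sm, /var/nfs4/v4_state, and the two configured holding directories:

```shell
# Simplified sketch of the combined propagation pass (not the shipped
# nfs.flm). The four directory variables are stand-ins for the real paths.
propagate_lock_state() {
    cp -p "$SM_DIR"/* "$SM_HOLDING_DIR"/ 2>/dev/null        # NFSv2/v3 SM entries
    cp -p "$V4_STATE_DIR"/* "$V4_HOLDING_DIR"/ 2>/dev/null  # NFSv4 state entries
}
```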
Upon package failover, the v4_state entries in the NFSv4 holding directory are also copied
to the /var/nfs4/v4_state directory on the adoptive node when the SM entries in the NFSv2
and NFSv3 holding directory are copied to the /var/statmon/sm directory. This populates