HP Serviceguard Toolkit for NFS on Linux User Guide Version A.12.00.00

5 File lock migration
HP Serviceguard Toolkit for NFS on Linux provides a file lock migration feature for the NFS directories
that are exported to clients. You must provide a unique holding directory, located on a shared file
system, as part of the NFS package. That is, an empty directory is created on a shared filesystem
that moves between servers as part of the package failover. The NFS_FLM_HOLDING_DIR parameter
in the hanfs.conf file is a user-configurable parameter that specifies this directory, which holds
the status monitor entries.
In a Metrocluster configuration, the directory specified for the NFS_FLM_HOLDING_DIR parameter
must be created on disks that are part of the Metrocluster replication group. This ensures that the
status monitor entries are also available on the remote site.
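For reference, a hanfs.conf fragment setting the holding directory might look like the following. The mount point and directory name are examples only; the directory must be empty and must reside on a filesystem that fails over with the package:

```shell
# Example only: /nfs_share must be a shared filesystem configured in the
# NFS package, and flm_holding_dir an empty directory created on it.
NFS_FLM_HOLDING_DIR="/nfs_share/flm_holding_dir"
```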
For the NFSv2 or NFSv3 protocol, the nfs.flm script periodically copies the status monitor entries
from the /var/lib/nfs/statd/sm directory (on Red Hat) to the package holding directory. For
the NFSv4 protocol, the nfs.flm script periodically copies entries from the /var/lib/nfs/v4recovery
directory (on Red Hat). By default, the PROPAGATE_INTERVAL parameter is commented out with no
value. If lock_migration is set to Yes, you must configure the PROPAGATE_INTERVAL
parameter.
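The periodic copy can be pictured with a small shell sketch. This is not the toolkit's nfs.flm script, only an illustration of the idea; the directory paths are placeholders, and the real script is driven by the PROPAGATE_INTERVAL value from hanfs.conf:

```shell
#!/bin/sh
# Sketch of the NFSv2/v3 propagation step (illustrative, not nfs.flm).
propagate_sm_entries() {
    # Copy status monitor entries into the shared holding directory so
    # that they survive a package failover.
    # Arguments: <sm_dir> <holding_dir>
    sm_dir=$1; holding_dir=$2
    cp -p "$sm_dir"/* "$holding_dir"/ 2>/dev/null || true
}

# The toolkit script repeats this every PROPAGATE_INTERVAL seconds,
# conceptually:
#   while :; do
#       propagate_sm_entries /var/lib/nfs/statd/sm "$NFS_FLM_HOLDING_DIR"
#       sleep "$PROPAGATE_INTERVAL"
#   done
```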
During a package failover, the holding directory transitions from the primary node to the
adoptive node because it resides on a shared filesystem that is configured as part of the NFS
package. Once the holding directory is available on the adoptive node, the status monitor entries
residing in the holding directory are copied to the status monitor directory on the adoptive node
(/var/lib/nfs/statd/sm on Red Hat). This sequence of actions syncs the status monitor directory
of the adoptive server with that of the primary server. When lock migration is enabled, you cannot
run two NFS toolkit packages on the same node.
For an NFS package configured for the NFSv2 or NFSv3 protocol with lock migration enabled,
after failover the package is started on the adoptive node and rpc.statd is restarted there using
the package IP. Restarting this daemon triggers a crash recovery notification event, whereby
rpc.statd sends crash notification messages to the client nodes listed in the status monitor
directory. The NFSv4 protocol handles crash event notification internally, provided the NFSv4
recovery directory contents are available.
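The NFSv2/v3 recovery sequence on the adoptive node can be sketched as follows. This is an illustration of the behavior described above, not the toolkit's actual code; the package hostname is a placeholder, the use of the rpc.statd -n option to set the statd identity is an assumption about the mechanism, and the restart is guarded so the sketch is harmless outside a real NFS server:

```shell
#!/bin/sh
# Illustrative sketch of NFSv2/v3 lock recovery on the adoptive node.
restore_and_notify() {
    # Arguments: <holding_dir> <sm_dir> <pkg_name>
    holding_dir=$1; sm_dir=$2; pkg_name=$3

    # 1. Restore the status monitor entries carried over on the shared
    #    filesystem into the local status monitor directory.
    cp -p "$holding_dir"/* "$sm_dir"/ 2>/dev/null || true

    # 2. Restart rpc.statd under the package identity so that crash
    #    recovery notifications (SM_NOTIFY) reach the clients listed in
    #    the status monitor directory, prompting them to reclaim locks.
    #    Only attempted as root on a node with rpc.statd installed.
    if [ "$(id -u)" -eq 0 ] && command -v rpc.statd >/dev/null 2>&1; then
        kill "$(pidof rpc.statd)" 2>/dev/null || true
        rpc.statd -n "$pkg_name"
    fi
}
```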
Any client that holds NFS file locks against files exported by the NFS package sends reclaim
requests to the adoptive node (where the exported file systems currently reside) and reclaims its
locks.
For the NFSv4 protocol, crash event notification is handled by the protocol itself, so only the
contents of the NFSv4 recovery directory are copied to the package holding directory.
When file lock migration is enabled, HP recommends that you do not use the NFS server as an
NFS client. When you halt the package, a SIGKILL signal is sent to the lockd kernel thread to release
file locks so that the filesystem can be unmounted successfully. If the server is also an NFS client,
it loses the NFS file locks obtained by client-side processes when the SIGKILL signal is sent to the
lockd kernel thread to release the server-side locks. So, if the client applications use NFS file
locking, HP recommends that you do not use the clustered nodes configured for the NFS package
as an NFS client for any server.
In addition, HP recommends that you set the SERVICE_FAIL_FAST_ENABLED option to yes for
the NFS monitoring service in the pkg.conf file for the lock migration feature to work consistently.
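An excerpt from the package configuration might look like the following. The service name is an example, and the exact attribute syntax depends on the Serviceguard release (legacy versus modular packages):

```shell
# Example only: with fail-fast enabled, a failure of the NFS monitoring
# service halts the node immediately, forcing a clean failover of the
# package, which the lock migration feature relies on.
SERVICE_NAME                nfs.monitor
SERVICE_FAIL_FAST_ENABLED   yes
```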
NOTE:
If lock migration is enabled, the toolkit does not support multiple NFS packages.