Technical Considerations for a Serviceguard Cluster that Spans Multiple IP Subnets, July 2009

Most applications do not tolerate having an active file system forcibly removed from the system, even
a file system that is not responding to requests. HP therefore strongly recommends that any
applications using the hung NFS file system be completely shut down prior to issuing the “umount -f”
command.
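On an HP-UX client, the shutdown-then-force-unmount sequence might look like the following sketch (the mount point /nfs/data is a hypothetical example):

```shell
# List and, if appropriate, kill any remaining processes with files
# open on the hung mount point (fuser -c treats the argument as a
# mount point; -k sends SIGKILL to the processes it finds).
fuser -ck /nfs/data

# Forcibly unmount the unresponsive NFS file system.
umount -f /nfs/data
```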
Once the applications have been stopped and the NFS file systems forcibly un-mounted, the file
systems may be re-mounted using the new virtual IP address of the package running in the adoptive
subnet. Since HP recommends mounting Serviceguard NFS file systems using the Fully Qualified
Domain Name (FQDN) associated with the NFS package, name resolution on the client system must be
updated by the means discussed in the “General application integration considerations” section.
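Once name resolution reflects the package’s relocatable IP address in the adoptive subnet, the re-mount might look like this sketch (the FQDN, export path, and mount point are hypothetical):

```shell
# Mount the export using the FQDN of the NFS package; the client's
# resolver now returns the package's virtual IP address in the
# adoptive subnet.
mount -F nfs nfspkg.example.com:/export/data /nfs/data
```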
Workaround 2: Automatically unmount idle file systems via AutoFS
When NFS file systems are mounted manually via the mount command or automatically during boot
time via entries in the /etc/fstab file, those file systems remain mounted until they are explicitly un-
mounted. As described earlier, this can lead to issues when an NFS server package migrates to a
node in a different IP subnet. If, however, the NFS file system is managed by AutoFS, there is a
possibility that the file system will not be mounted at the time of the Serviceguard package failover.
This could allow the client to mount the file system successfully from the NFS server node in the
adoptive subnet.
AutoFS is an NFS client-side service that automatically and transparently mounts file systems as they
are needed and unmounts file systems once they have been idle for a configurable period of time (10
minutes by default). This automated un-mounting of idle file systems is potentially beneficial to cross-
subnet configurations because if AutoFS has un-mounted a file system prior to the NFS server package
failover event, the NFS client system may be able to re-mount the file system from the new NFS server
in the adoptive subnet via AutoFS, provided that the NFS client’s hostname resolution mechanism (e.g.,
DNS, LDAP, NIS, or /etc/hosts) is updated to reflect the new virtual IP address associated with the
NFS package running in the adoptive subnet.
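As a sketch, an AutoFS indirect map that references the NFS package by FQDN might look like the following (the map name, mount point, key, and server name are hypothetical):

```shell
# /etc/auto_master (hypothetical entry): file systems under /nfs are
# mounted on demand through the indirect map /etc/auto.nfs.
/nfs    /etc/auto.nfs

# /etc/auto.nfs: the key "data" mounts the export from the NFS
# package's FQDN. Each fresh automount resolves the name again, so
# once the hostname database is updated, new mounts are directed to
# the package's virtual IP address in the adoptive subnet.
data    nfspkg.example.com:/export/data
```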
For example, if the NFS client uses DNS for hostname resolution, the DNS database would need to
be updated with the IP address of the NFS package in the adoptive subnet when the failover
occurs, so that any new attempt by AutoFS to mount file systems using the FQDN is directed to the
relocatable IP address of the NFS package in the adoptive subnet.
This AutoFS workaround only addresses the case of idle NFS mount points and does not help with
currently mounted NFS file systems that are active at the time of a Serviceguard package failover to
the adoptive subnet. However, Workaround 1 described above, involving the forcible umount
command, could also be used to unmount hung AutoFS-managed NFS file systems. As with the
manual mount case, once the hung file systems are un-mounted, AutoFS may be able to re-mount
them from the new NFS server, provided the hostname resolution service has been updated to
reflect the new package virtual IP address.
Workaround 3: NFS Client-side Failover
HP introduced many NFS enhancements in HP-UX 11i v3, including a new feature called client-side
failover. Client-side failover allows the systems administrator to configure multiple NFS servers for a
given file system. An NFS client configured to use client-side failover can switch to a different server if
the original server supporting a replicated file system becomes unavailable. The failover is usually
transparent to applications. A failover can occur at any time without disrupting the processes running
on the client.
Client-side failover is only supported for read-only file systems, and the shared file systems must be
kept synchronized among the listed NFS servers.
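On an HP-UX 11i v3 client, a replicated read-only mount lists multiple server:path pairs separated by commas; the following is a sketch with hypothetical server names and paths:

```shell
# Read-only replicated mount naming both FQDNs of the NFS package
# (one per subnet); the client uses the first reachable server and
# can fail over to the other if it becomes unavailable.
mount -F nfs -o ro \
    nfspkg-a.example.com:/export/docs,nfspkg-b.example.com:/export/docs \
    /nfs/docs
```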
In a cross-subnet cluster environment, the system administrator could configure the NFS client mount
points to reference both FQDNs associated with the NFS package when it runs in each subnet. The
client would then mount the file system using the first listed FQDN associated with the NFS package in