
LOCAL mount
There are several reasons a LOCAL mount point is required in an SAP configuration:
The first reason is that some files used in an SAP configuration are not cluster aware. As an example,
consider the SAP file /usr/sap/tmp/coll.put. This file contains system performance data collected
by the SAP performance collector. If the file system /usr/sap/tmp were shared with other cluster
nodes, the SAP performance collectors running on those nodes would write into the same file system
and the same file coll.put, causing this shared file to become corrupted. Therefore this file system has
to be mounted with a LOCAL mount point.
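For illustration, a LOCAL mount point such as /usr/sap/tmp would simply appear as a regular local file
system in /etc/fstab on every cluster node. The device path and file system type in this sketch are
examples only:

    # Example /etc/fstab entry for a LOCAL mount point, present on every
    # cluster node (device path and file system type are illustrative):
    /dev/vglocal/lvsaptmp   /usr/sap/tmp   ext3   defaults   0 2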
A second reason is performance. A locally mounted file system achieves higher I/O throughput than an
NFS-mounted file system.
A third reason is the desire to have the SAP binaries available locally. In case of a network (and
therefore NFS) failure the binaries can still be executed. On the other hand, if there is an NFS failure,
other SAP components will probably fail as well.
As mentioned above, local copies of the file system contents require administration overhead to keep them
synchronized. Serviceguard provides the tool cmcp(1m), which allows easy replication of a single file to
all cluster nodes.
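For illustration only, the sketch below shows a manual equivalent of such a replication using scp; the
node names and the file chosen are examples, and cmcp(1m) automates this step for the configured cluster
nodes (see the cmcp(1m) manual page for the exact syntax):

    # Hypothetical manual replication of a locally maintained file (here the
    # SAP-relevant /etc/services) to the other cluster nodes; node1 and node2
    # are example node names:
    for node in node1 node2; do
        scp /etc/services ${node}:/etc/services
    done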
NOTE: In prior SGeSAP/LX documentation this category (LOCAL) was called "Environment specific".
SHARED NFS or SHARED EXCLUSIVE
SHARED NFS
In this category, file system data is shared among the SAP instances via NFS and is exported to all
cluster nodes. An example is the /sapmnt/<SID>/profile directory, which contains the profiles for
all SAP instances in the cluster. Each cluster node mounts this directory as an NFS client.
One of the cluster nodes will also be the NFS server of these directories. This node will have both the NFS
client and the NFS server mounts active at the same time; this is sometimes called an NFS loopback mount.
Should this cluster node fail, the NFS-exported directories will be relocated to another cluster node and
will be served via NFS from that node.
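To make this concrete, the hedged sketch below shows how the serving node might export the underlying
directory and how every node, including the server itself, mounts it back as an NFS client through a
relocatable virtual hostname. The paths, the SAP system ID C11, and the hostname nfsreloc are examples
only:

    # /etc/exports entry on the node currently acting as NFS server
    # (path and export options are illustrative):
    /export/sapmnt/C11   *(rw,no_root_squash,sync)

    # NFS client mount performed on every cluster node, including the serving
    # node itself (the loopback mount); nfsreloc is a relocatable virtual
    # hostname that moves with the Serviceguard package:
    mount -t nfs nfsreloc:/export/sapmnt/C11 /sapmnt/C11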
An advantage of an NFS shared mount is that any changes or updates to the file system contents are
available instantly to all cluster nodes. In contrast to a LOCAL mount point, where local copies of the
file systems have to be kept in sync cluster-wide, this option has no such administration overhead.
A limiting factor could be that NFS-mounted file systems will not provide the same level of I/O performance
as a locally mounted file system. Many of the SAP file systems listed above, however, are fairly static in
their I/O behavior and should therefore not cause an I/O limitation. If high I/O throughput is required,
the SHARED EXCLUSIVE mount should be used.
NOTE:
In prior SGeSAP/LX documentation this category was called "System specific."
NFS file systems can be mounted using either TCP or the default UDP protocol. If mounted using
TCP and a failover occurs, the TCP connection will be held for approximately 15 minutes, thereby
preventing access to the NFS directory on the first node. The timeout value can be significantly reduced
by setting net.ipv4.tcp_retries2 = 3 in /etc/sysctl.conf; note that this also changes the timeout
system-wide.
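For example, the parameter can be set persistently in /etc/sysctl.conf and activated without a reboot;
this is a minimal sketch of the change described above:

    # /etc/sysctl.conf entry to shorten the TCP retransmission timeout
    # (note: this affects all TCP connections system-wide, not only NFS):
    net.ipv4.tcp_retries2 = 3

    # Apply the settings from /etc/sysctl.conf without rebooting:
    sysctl -p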
SHARED EXCLUSIVE
This type of storage mount relocates the storage volume together with the application and the Serviceguard
package from one cluster node to another. The file system is exclusively mounted on the node the
application is running on; it is not mounted or accessed by any of the other cluster nodes. Compared
to the SHARED NFS category, this solution provides the highest I/O performance.
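To make the sequence concrete, the hedged sketch below outlines the steps that a Serviceguard package
performs during such a relocation; the volume group vgsap, the logical volume lvsapC11, and the mount
point are illustrative, and the actual package uses its configured volume group activation mode rather
than these plain LVM calls:

    # On the node giving up the package (after the application is stopped):
    umount /usr/sap/C11
    vgchange -a n vgsap                       # deactivate the volume group

    # On the adoptive node:
    vgchange -a y vgsap                       # activate the volume group
    mount /dev/vgsap/lvsapC11 /usr/sap/C11    # mount exclusively on this node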
However, this approach carries the risk of a failure during the relocation of the file systems to another
cluster node. Any failure in the sequence of stopping the application, unmounting the file systems, deactivating