
5 Serviceguard Extension for SAP on Linux Cluster Administration
An SAP application within a Serviceguard Extension for SAP on Linux cluster is no longer treated as though
it runs on a dedicated host. It is wrapped inside one or more Serviceguard packages, and these packages
can be moved to any host that is a node of the Serviceguard cluster. The Serviceguard packages provide
an SAP adaptive computing layer that keeps the application independent of specific server hardware. This
adaptive computing layer is transparent in many respects, but in some areas special considerations apply,
and this affects the way the system is administered. Administration topics presented in this chapter include:
Change Management
Mixed Clusters
Switching Serviceguard Extension for SAP on Linux On and Off
Change Management
Serviceguard has to store information about the cluster configuration. In particular, it needs to know the
relocatable IP addresses and their subnets, the storage volume groups, and the logical volumes and their mount points.
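For illustration, an excerpt of a modular Serviceguard package configuration file carrying this kind of
information might look like the following; all addresses, volume group, and mount point names are
placeholder examples, not values from this guide:

    ip_subnet       192.168.10.0
    ip_address      192.168.10.42
    vg              vgC11db
    fs_name         /dev/vgC11db/lvol1
    fs_directory    /oracle/C11
    fs_type         ext3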
System Level Changes
Serviceguard Extension for SAP on Linux provides some flexibility for hardware change management. For
example, if you have to maintain the cluster node on which an SAP SCS or ASCS instance is running, the
instance can temporarily be moved to the cluster node that runs its Replicated Enqueue without interrupting
ongoing work. Some users might experience a short delay in the response time of their current transaction,
but no downtime is required for the maintenance action.
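Such a move can be performed with the standard Serviceguard package commands while the cluster stays
up; the package name ascsC11 and node name node2 below are examples:

    # halt the package where it currently runs
    cmhaltpkg ascsC11
    # restart it on the node hosting the Replicated Enqueue
    cmrunpkg -n node2 ascsC11
    # re-enable automatic package switching
    cmmodpkg -e ascsC11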
If you add new hardware and SAP software needs access to it to work properly, make sure to allow this
access from any host of the cluster by planning the hardware connectivity appropriately. For example, it is
possible to increase database disk space by adding a new shared LUN from a SAN device as a physical
volume to the shared volume groups on the primary host on which the database runs. The changed volume
group configuration has to be redistributed to all cluster nodes afterwards via vgexport(8) and vgimport(8).
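Assuming the new LUN appears as /dev/sdd and the database volume group is called vgC11db (both
placeholder names), the extension on the primary node could be sketched as follows; on Linux LVM2, a
vgscan on the adoptive nodes is typically enough to pick up the changed metadata:

    # on the primary node: turn the new LUN into a physical volume
    # and add it to the shared volume group
    pvcreate /dev/sdd
    vgextend vgC11db /dev/sdd

    # on each adoptive node: re-read the volume group metadata
    vgscan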
It is good practice to keep a list of all directories that were identified in Chapter Two as common
directories that are kept local on each node. As a rule of thumb, files that change in these directories
need to be copied manually to all other cluster nodes afterwards. There are exceptions. For example,
/home/<SID>adm does not need to be the same on all of the hosts. In clusters that do not use CFS, it is
possible to locally install additional Dialog Instances on hosts of the cluster, although they will not be part
of any package. The SAP startup scripts in the home directory are then only needed on that dedicated host
and do not need to be distributed to the other hosts.
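Distribution of a changed file can be scripted with a simple loop; the node names below are examples:

    # copy a changed local file to all other cluster nodes
    for node in node2 node3; do
        scp /etc/services ${node}:/etc/services
    done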
If remote shell access is used, never delete the .rhosts entries of the root user and <sid>adm on any of
the nodes. If a secure shell setup is used instead of remote shell access, do not delete that setup either.
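For reference, the .rhosts entries in question typically look like the following; the node names and the
<sid>adm user (here for a system ID C11) are examples:

    node1 root
    node2 root
    node1 c11adm
    node2 c11adm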
Entries in /etc/hosts, /etc/services, /etc/passwd, and /etc/group should be kept identical across
all cluster nodes.
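A quick way to verify this is to compare the local files against a second node; the remote node name is an
example:

    # report files that differ from the copies on node2
    for f in /etc/hosts /etc/services /etc/passwd /etc/group; do
        ssh node2 cat $f | diff - $f > /dev/null || echo "$f differs"
    done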
If you use an ORACLE database, be aware that the listener configuration file of SQL*Net V2 is kept as a
local copy in /etc/listener.ora by default.
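Because each node keeps its own copy, the file content must match on all nodes, and the listener should
reference the relocatable address of the database package so that it binds correctly wherever the package
runs. A minimal entry could look as follows; the host name relocdb and the port are examples:

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = relocdb)(PORT = 1527))
        )
      )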
Files in the following directories and all subdirectories are typically shared:
/usr/sap/<SID>/DVEBMGS<INR>
/export/usr/sap/trans (except for stand-alone J2EE)
/export/sapmnt/<SID>
Chapter Two can be referenced for a full list of cluster shared directories. These directories are only available
on a cluster node if the package they belong to is running on it; they are empty on all other nodes. A
Serviceguard package and the directories belonging to it fail over to another cluster node together.
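To check on which node a package, and with it these directories, is currently active, the standard
Serviceguard status command can be used; the package name dbC11 is an example:

    # show the current node and state of the database package
    cmviewcl -v -p dbC11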
All directories below /export have an equivalent directory whose fully qualified path comes without this
prefix. These directories are managed by the automounter as NFS file systems and get mounted automatically
when they are accessed.
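In a direct automounter map this convention could be expressed as follows; the system ID C11 and the
relocatable NFS server name relocnfs are examples:

    # /etc/auto.direct, referenced from /etc/auto.master via: /- /etc/auto.direct
    /sapmnt/C11     -fstype=nfs  relocnfs:/export/sapmnt/C11
    /usr/sap/trans  -fstype=nfs  relocnfs:/export/usr/sap/trans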