Managing Serviceguard Extension for SAP Version A.06.
Legal Notices © Copyright 2012 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. A valid license from HP is required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S.
Contents
1 Overview ... 5
  About this Manual ... 5
  Related documentation ... 5
2 SAP cluster concepts ...
  Prerequisites ... 47
  Node preparation and synchronization ... 47
  Intermediate synchronization and verification of virtual hosts ... 48
  Intermediate synchronization and verification of mount points ...
1 Overview About this Manual This document describes how to plan, configure and administer highly available SAP Netweaver systems on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) systems using HP Serviceguard high availability cluster technology in combination with the HP Serviceguard Extension for SAP (HP SGeSAP).
• HP Serviceguard A.11.20.10 for Linux Release Notes • HP Serviceguard Toolkit for NFS version A.03.03.
2 SAP cluster concepts This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP for Linux (HP SGeSAP/LX) and explains several naming conventions. The following sections provide recommendations and examples for typical cluster layouts that can be implemented for SAP environments. SAP-specific cluster modules HP SGeSAP extends HP Serviceguard's failover cluster capabilities to SAP application environments.
Instance-type specific handling is provided by the sgesap/sapinstance module for SAP ABAP Central Service Instances, SAP JAVA Central Service Instances, SAP ABAP Application Server Instances, SAP JAVA Application Server Instances, SAP Central Instances, SAP Enqueue Replication Server Instances, SAP Gateway Instances, and SAP Web Dispatcher Instances. The sgesap/mdminstance module extends the coverage to the SAP Master Data Management Instance types. The module to cluster SAP liveCache instances is called sgesap/livecache.
Configuration restrictions • A single SGeSAP package cannot contain two database instances. • A single SGeSAP package cannot contain both a Central Service Instance ([A]SCS) and its Replication Instance (ERS). • Diagnostic Agent instances are not mandatory for SAP line-of-business processes, but they are installed on the relocatable IP address of the corresponding instance and must move with that relocatable IP address.
Figure 1 One-package failover cluster concept
(The figure shows a two-node cluster with shared disks; in the event of a failure, the combined dbciC11 package moves to the second node and the required resources are freed up there.)
Maintaining an expensive idle standby is not required. SGeSAP allows the secondary node(s) to be utilized with different instances during normal operation.
Figure 2 Visualization of a one-package cluster concept in Serviceguard Manager
If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without any manual intervention needed. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can stay up or can be restarted (triggered by a failover).
A cluster can be configured in a way that two nodes back up each other. The basic layout is depicted in Figure 3 (page 12).
Figure 3 Two-package failover with mutual backup scenario
(The figure shows separate dbC11 and ciC11 packages on two nodes with shared disks; the DB and CI packages can fail over and recover independently.)
It is a best practice to base the package naming on the SAP instance naming conventions whenever possible.
failure of the Enqueue Service, the table of all granted locks is lost. After package failover and restart of the Enqueue Service, all Dialog Instances need to be notified that the lock table content was lost. As a reaction, they cancel ongoing transactions that still hold granted locks. These transactions need to be restarted. Enqueue Replication provides a concept that prevents a failure of the Enqueue Service from impacting the Dialog Instances.
Resource) for each Replicated Enqueue. An EnqOR resource is referred to by the system as sgesap.enqor_<SID>_ERS. Setting up the enqor MNP implements a protected follow-and-push behavior for the two packages that include the enqueue and its replication. As a result, an automatism makes sure that the Enqueue and its Enqueue Replication Server are never initially started on the same node.
1. SAP self-controlled, using High Availability polling, with Replication Instances on each cluster node (active/passive).
2. Completely High Availability failover solution, controlled with one virtualized Replication Instance per Enqueue.
SGeSAP implements the second concept and avoids costly polling and complex data exchange between SAP and the High Availability cluster software. There are several SAP profile parameters that are related to the self-controlled approach.
Figure 7 Dedicated failover server
(The figure shows a dedicated failover server with Replicated Enqueue: packages such as jdbscsC11 and dbascsC12, together with the replication packages ers00C11 and ers10C12, have failover paths from all primary partitions to the dedicated backup server.)
Figure 7 (page 16) shows an example configuration. The dedicated failover host can serve many purposes during normal operation.
A dedicated SAPNFS package is specialized to provide access to shared filesystems that are needed by more than one mySAP component. Typical filesystems served by SAPNFS would be the common SAP transport directory or the global MaxDB executable directory of MaxDB 7.7. The MaxDB client libraries are part of the global MaxDB executable directory and access to these files is needed by APO and liveCache at the same time. Beginning with MaxDB 7.
To better understand the concept, consider that all of these operations for non-clustered instances are inherently non-critical. If they fail, the failure has no impact on the ongoing package operation. A best-effort attempt is made, but there is no guarantee that the operation succeeds. If such operations need to succeed, package dependencies in combination with SGeSAP Dialog Instance packages need to be used. Dialog Instances can be marked to be of minor importance.
3 SAP cluster administration In SGeSAP environments, SAP application instances are no longer considered to run on dedicated (physical) servers. They are wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard packages provide a server virtualization layer. The virtualization is transparent in most aspects, but in some areas special considerations apply. This affects the way a system gets administered.
NOTE: Enabling package maintenance allows you to temporarily disable the cluster functionality for the SAP instances of any SGeSAP package. The configured SGeSAP monitoring services tolerate any internal SAP instance service state while maintenance mode is activated. SAP support personnel might request or perform maintenance mode activation as part of reactive support actions. Similarly, you can use the Serviceguard Live Application Detach (LAD) mechanism to temporarily disable the cluster for the whole node.
Figure 10 sgesap/sapinstance module configuration overview for a replication instance To monitor a SGeSAP toolkit package: • Check the badges next to the SGeSAP package icons in the main view. Badges are tiny icons that are displayed to the right of the package icon. Any Serviceguard Failover Package can have Status, Alert, and HA Alert badges associated with it. In addition to the standard Serviceguard alerts, SGeSAP packages report SAP application-specific information via this mechanism.
system administrators, that is, sidadm users that are logged in to the Linux operating system, and it includes remote SAP basis administration access via the SAP Management Console (SAP MC) or SAP’s plugin for the Microsoft Management Console (SAP MMC). The SAP Netweaver 7.x startup framework is made up of a host control agent (hostctrl) software process that runs on each node of the cluster and one sapstart service agent (sapstartsrv) process per SAP instance.
During startup of the instance startup framework, a SAP instance with the SGeSAP HA library configured prints the following messages in the sapstartsrv.log file located in the instance work directory:
SAP HA Trace: HP SGeSAP (SG) cluster-awareness
SAP HA Trace: Cluster is up and stable
SAP HA Trace: Node is up and running
SAP HA Trace: SAP_HA_Init returns: SAP_HA_OK ...
Other methods provided by SAP's sapcontrol command for instance shutdown work in a similar way. The service monitoring for a halted instance is automatically suspended until you restart the instance. HP Serviceguard Manager displays a package alert (see Figure 11 (page 21)) that lists the manually halted instances of a package.
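For illustration, the following sapcontrol calls halt and later restart an instance; the instance number 10 is only an example, and the commands are typically run as the <sid>adm user:

sapcontrol -nr 10 -function Stop     # halt the SAP instance; SGeSAP suspends its monitoring
sapcontrol -nr 10 -function Start    # restart the instance; monitoring resumes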
Triggering a package halt is possible whether instances of the package are currently halted or not. The operation causes the cluster to lose information about all the instances that were manually halted during the package run. NOTE: Activating package maintenance mode is a way to pause all SGeSAP service monitors of a package immediately, but it can only be triggered with Serviceguard commands directly. While package maintenance mode is active, failover of the package is disabled.
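A minimal command sketch, assuming the Serviceguard A.11.20 command-line syntax; the package name dbciC11 and node name node1 are illustrative, and the exact options should be checked against the cmmodpkg(1m) and cmhaltnode(1m) manpages of your release:

cmmodpkg -m on dbciC11     # enable package maintenance mode; SGeSAP monitors pause
cmmodpkg -m off dbciC11    # leave maintenance mode; monitoring and failover resume
cmhaltnode -d node1        # Live Application Detach: halt the cluster on the node, leave applications running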
The debug mode allows package start up to the level of SAP-specific steps. All instance startup attempts are skipped. Service monitors are started, but they do not report failures as long as the debug mode is turned on. In this mode it is possible to attempt manual startups of the database and/or the SAP software. All rules of manual troubleshooting of SAP instances now apply. For example, it is possible to access the application work directories of the SAP instance to look at the trace files.
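As a sketch, the debug mode is toggled by creating or removing a flag file in the Serviceguard run directory; the package name dbciC11 and the Red Hat path shown are illustrative:

touch /usr/local/cmcluster/run/debug_dbciC11   # enable debug mode for package dbciC11
rm /usr/local/cmcluster/run/debug_dbciC11      # disable debug mode again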
Print requests to other spool servers stay in the system after failure until the host is available again and the spool server has been restarted. These requests can be moved manually to other spool servers if the failed server is unavailable for a longer period of time. Batch jobs can be scheduled to run on a particular instance. Generally speaking, it is better not to specify a destination host at all.
existence of a file ${SGRUN}/debug_sapverify_<package_name> skips verification only for a single package on that cluster node. Generic and SGeSAP clustering-specific check routines that are not related to SAP requirements for the local operating environment configuration are not deactivated; they are executed as part of both the cmcheckconf(1) and cmapplyconf(1) commands. The deploysappkgs(1) command is used during initial cluster creation.
Table 2 Summary of methods that allow SAP instance stop operations during package uptime (continued)

Method | Granularity | How achieved? | Effect
SGeSAP debug flag | SGeSAP package | Create the debug flag file (touch debug_<package_name>) in the SG run directory, which is /usr/local/cmcluster/run on Red Hat and /opt/cmcluster/run on SUSE | All SGeSAP service monitoring is temporarily suspended; SGeSAP modules are skipped during package start; for non-production cluster troubleshooting
Live Application Detach (LAD)
4 SAP cluster storage layout planning Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. The standard volume manager on Linux is the Logical Volume Manager (LVM). The following steps describe two standard setups for the LVM volume manager.
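A minimal LVM sketch for one instance volume group; the device name, size, volume group name, and file system type are assumptions to be replaced by your own standards:

pvcreate /dev/sdc1                   # initialize the shared LUN partition as a physical volume
vgcreate vgC11scs /dev/sdc1          # volume group for the SCS instance of SID C11
lvcreate -L 10G -n lvscs vgC11scs    # logical volume for /usr/sap/C11/SCS10
mkfs.ext3 /dev/vgC11scs/lvscs        # create the file system (type per your standards)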
NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools. Depending on what SAP changed, this might introduce additional dependencies on shared libraries that weren't required before the patch. Depending on the shared library path settings (LD_LIBRARY_PATH) of the root user, it may not be possible for SGeSAP to execute the SAP tools after applying the patch. The introduced additional libraries are not found.
To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally. Directories that Reside on Shared Disks Volume groups on SAN shared storage are configured as part of the SGeSAP packages.
Table 5 Instance specific volume groups for exclusive activation with a package

Mount point | Access point | Recommended package setups
/usr/sap/<SID>/SCS<INR> (for example, /usr/sap/C11/SCS10) | Shared disk | SAP instance specific; combined SAP instances; database plus SAP instances
/usr/sap/<SID>/ASCS<INR> (for example, /usr/sap/C11/ASCS11) | Shared disk | SAP instance specific; combined SAP instances; database plus SAP instances
/usr/sap/<SID>/DVEBMGS<INR> (for example, /usr/sap/C11/DVEBMGS12) | Shared disk | SAP instance specific; combined SAP instances; database plus SAP instances
/usr/sap/<SID>/D<INR> | Shared disk | SAP instance specific; combined SAP instances
/usr/sap/<SID>/J<INR> (for example, /u...) | Shared disk | SAP instance specific; combined SAP instances
If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory must not be added to any package. This ensures that it is independent from any SAP Netweaver system and you can mount it on any host by hand if needed. All file systems mounted below /export are part of NFS cross-mounting via the automount program. The automount program uses virtual IP addresses to access the NFS directories via the path that comes without the /export prefix.
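A sketch of a matching autofs direct map; the virtual hostname sapnfs and the SID C11 are illustrative:

# /etc/auto.master
/-   /etc/auto.direct

# /etc/auto.direct: mount without the /export prefix, via the package's virtual IP
/sapmnt/C11      -fstype=nfs,rw   sapnfs:/export/sapmnt/C11
/usr/sap/trans   -fstype=nfs,rw   sapnfs:/export/usr/sap/trans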
In clustered SAP environments prior to 7.x releases, install local executables. Local executables help to prevent several causes of package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of the executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is good practice to create local copies of all files in the central executable directory.
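The sapcpe mechanism described earlier performs this copy automatically at each instance start; a manual invocation can be sketched as follows, with an illustrative instance profile path:

sapcpe pf=/usr/sap/C11/SYS/profile/C11_DVEBMGS12_node1   # match local executables against the central directory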
Table 7 Availability of SGeSAP storage layout options for different Database RDBMS

DB Technology | SGeSAP Storage Layout Options | Cluster Software Bundles
Oracle Single-Instance, SAP MaxDB, Sybase ASE | SGeSAP NFS clusters | 1. Serviceguard 2. SGeSAP 3. Serviceguard NFS toolkit
Oracle Single-Instance, SAP MaxDB, Sybase ASE | Idle standby | 1. Serviceguard 2. SGeSAP 3. Serviceguard NFS toolkit (optional)

Oracle single instance RDBMS
Single-instance Oracle databases can be used with both SGeSAP storage layout options.
The discussion of NLS files has no impact on the treatment of other parts of the ORACLE client files. The following directories need to exist locally on all hosts where an Application Server might run. The directories cannot be relocated to different paths. The content needs to be identical to the content of the corresponding directories that are shared as part of the database SGeSAP package.
• MaxDB 7.8 does not create SAP_DBTech.ini anymore. The [Globals] section is defined in /etc/opt/sdb. With the concept of isolated installations, a DB installation contains its own set of (version-specific) executables (/sapdb/<DBSID>/db/bin), its own data directory (/sapdb/<DBSID>/data), and a specific client directory (/sapdb/clients/<DBSID>). At runtime, there is a database-specific set of x_server related processes.
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
The links need to exist on every possible failover node for the MaxDB or liveCache instance to run. • /sapdb/clients (MaxDB 7.8): Contains the client files in subdirectories for each database installation. • /var/lib/sql: Certain patch levels of MaxDB 7.6 and 7.7 (see SAP Note 1041650) use this directory for shared memory files. It needs to be local on each node. NOTE: In HA scenarios, valid for SAPDB/MaxDB versions up to 7.
directories are shared between the instances and must not be part of an instance package. Otherwise, the halt of one instance would prevent the other from being started or from running. Sybase ASE storage considerations SGeSAP supports failover of Sybase ASE databases as part of SGeSAP NFS cluster option 1. It is possible to consolidate SAP instances in SGeSAP ASE environments.
way to deal with this is to make the client libraries available throughout the cluster via AUTOFS cross-mounts from a dedicated NFS package.

Table 13 File system layout for liveCache in a non-MaxDB environment (Option 2)

Mount point | Storage type | Owning packages
/sapdb/data | Shared disk | Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/sapdata | Shared disk | Dedicated liveCache package (lc<LCSID>)
/sapdb/<LCSID>/saplog | Shared disk | Dedicated liveCache package (lc<LCSID>)
/var/spool/sql | Autofs shared | sapnfs¹
/sapdb/programs | Autofs shared | sapnfs¹
/sapdb/clients | Autofs shared | sapnfs¹

1 This can be any standard, standalone NFS package.
5 Clustering SAP using SGeSAP packages Overview This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). Each task is described with examples. A prerequisite for clustering SAP using SGeSAP is that the Serviceguard cluster software installation is complete and the cluster is set up and running.
There can also be the requirement to convert an existing SAP instance or database for usage in a Serviceguard cluster environment. For more information on how to convert an existing SAP instance or database, see “Converting an existing SAP instance” (page 78). SGeSAP modules and services The following components are important for the configuration of a Serviceguard package with SGeSAP: • Modules and the scripts that are used by these modules. • Service monitors.
Table 15 SGeSAP monitors (continued) Monitor Description sapwebdisp.mon To monitor a SAP Web Dispatcher that is included either as a part of (W-type) instance installation into a dedicated SID or by unpacking and bootstrapping into an existing SAP Netweaver SID. sapgw.mon To monitor a SAP Gateway (G-type instance). sapdatab.mon To monitor MaxDB, Oracle, and Sybase ASE database instances. Additionally, it monitors the xserver processes for MaxDB and listener processes for Oracle. saplc.
2. An easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID.
3. A guided installation using the Serviceguard Manager GUI: a web-based graphical interface, with plugins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances.
SGeSAP easy deployment This section describes the installation and configuration of packages using easy deployment (via the deploysappkgs command, which is part of the SGeSAP product). This script allows easy deployment of the packages that are necessary to protect the critical SAP components.
Infrastructure setup, pre-installation preparation (Phase 1) This section describes the infrastructure that is provided with the setup of a NFS toolkit package and a base package for the upcoming SAP Netweaver installation. It also describes the prerequisites and some selected verification steps. There is a one-to-one or one-to-many relationship between a Serviceguard package and SAP instances, and a one-to-one relationship between a Serviceguard package and a SAP database.
Intermediate synchronization and verification of virtual hosts To synchronize virtual hosts: 1. Ensure that all the virtual hosts that are used later in the SAP installation and the NFS toolkit package setup are added to the /etc/hosts. If a name resolver is used instead of /etc/hosts, then ensure that all the virtual hosts resolve correctly. 2. Verify the order and entries for the host name lookups in the /etc/nsswitch.conf.
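As a sketch, with assumed addresses and virtual hostnames:

# /etc/hosts
10.17.1.50   dbcivhost    # virtual host for the DB/CI package
10.17.1.51   nfsvhost     # virtual host for the NFS toolkit package

# /etc/nsswitch.conf: look up hosts locally before DNS
hosts: files dns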
3. Select NFS toolkit and click Next >>. The Package Selection screen appears. Figure 12 Toolkit selection page 4. In the Package Name box, enter a package name that is unique for the cluster. NOTE: The name can contain a maximum of 39 alphanumeric characters, dots, dashes, or underscores. The Failover package type is pre-selected and Multi-Node is disabled. NFS does not support Multi-Node. 5. Click Next >>. The Modules Selection screen appears.
Figure 13 Configuration summary page - sapnfs package
Figure 14 Configuration summary page - sapnfs package (continued)
Creating NFS toolkit package using Serviceguard CLI NOTE: To create a package you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the “Creating NFS Toolkit package using Serviceguard Manager” (page 48) section.
1. Run the cmmakepkg -n sapnfs -m tkit/nfs/nfs sapnfs.config command to create the NFS server package configuration file using the CLI.
2. Edit the sapnfs.config configuration file.
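The edited sapnfs.config might contain entries like the following; the node names, subnet, volume group, and export path are assumptions, and the NFS-specific export attributes of the tkit/nfs/nfs module must be taken from the NFS toolkit documentation:

package_name      sapnfs
node_name         node1
node_name         node2
ip_subnet         10.17.1.0
ip_address        10.17.1.51
vg                vgsapnfs
fs_name           /dev/vgsapnfs/lvsapmnt
fs_directory      /export/sapmnt/C11
fs_type           ext3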
Solution Manager diagnostic agent file system preparations related to NFS toolkit If a dialog instance with a virtual hostname is installed initially and clustering the instance is done later, then some steps related to the file system layout must be performed before the SAP installation starts. These steps are optional if: • It is planned to keep all the diagnostic agent installations on the “local” file system, or • The agent is not configured to move with the related dialog instance.
1. Check if the package starts up on each cluster node where it is configured.
2. Run showmount -e and verify that name resolution works.
3. Run showmount -e on an external system (or a cluster node currently not running the sapnfs package) and check that the exported file systems are shown.
On each NFS client in the cluster, check the following: • Run the cd /usr/sap/trans command to check read access to the NFS server directories.
1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, and then select Create a Modular Package from the drop-down menu. If Metrocluster is installed, a Create a Modular Package screen for selecting Metrocluster appears. If you do not want to create a Metrocluster package, click no (default is yes). Click Next >> and another Create a Modular Package screen appears.
2. If toolkits are installed, a Create a Modular Package screen for selecting toolkits appears.
9. Click Next >> and another Create a Modular Package screen appears with the following message: Step 1 of X: Configure Failover module attributes (sg/failover), where X varies depending on how many modules you selected. There are two windows in this screen, Select Nodes and Specify Parameters. By default, nodes and node order are pre-selected in the Select Nodes window. You can deselect nodes, or change the node order, to accommodate your configuration requirements.
Initially no SGeSAP attributes are enabled except for the mandatory attribute sgesap/sap_global/sap_system, which must be set to the SAP SID designated for the installation. All other SGeSAP-related attributes must be left unspecified at this point. For a database package, specify the module sgesap/dbinstance. The sgesap/dbinstance module does not have any mandatory attributes. For a combined package, both the sgesap/sapinstance and sgesap/dbinstance modules must be specified. Other combinations are also possible.
Figure 17 Module selection page Click Reset at the bottom of the screen to return to the default selections. 6. After you are done with all the Create a Modular Package configuration screens, the Verify and submit configuration change screen appears. Use the Check Configuration and Apply Configuration buttons to confirm and apply your changes. Creating the package configuration file with the CLI NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI.
For examples of these attributes, see the “Creating NFS toolkit package using Serviceguard CLI” (page 51) section. 2. Verify the package using the cmcheckconf -P <package_name>.config command, and if there are no errors, run the cmapplyconf -P <package_name>.config command to apply the configuration. Verification steps A simple verification of the newly created base package is to test if the package startup succeeds on each cluster node where it is configured.
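A typical verification cycle, with an illustrative package name dbC11 and node names node1 and node2:

cmcheckconf -P dbC11.config    # validate the package configuration
cmapplyconf -P dbC11.config    # apply it to the cluster
cmrunpkg -n node1 dbC11        # start the package on the first node
cmhaltpkg dbC11                # halt it again
cmrunpkg -n node2 dbC11        # verify startup on the alternate node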
Post SAP installation tasks and final node synchronization (Phase 3a) After the SAP installation has completed in Phase 2, some SAP configuration values may have to be changed for running the instance in the cluster. Additionally, each cluster node (except the primary where the SAP installation runs) must be updated to reflect the configuration changes from the primary.
Avoid database startup as part of Dialog instance startup A dialog instance installation contains a Start_Program_00 = immediate $(_DB) entry in its profile. This entry is generated by the SAP installation to start the DB before the dialog instance is started. It is recommended to disable this entry (for example, by removing or commenting it out in the instance start profile) to avoid possible conflicts with the DB startup managed by the SGeSAP database package. MaxDB/liveCache: Disable Autostart of instance specific xservers With an “isolated installation” each MaxDB/liveCache 7.
Table 18 DB configuration files (continued)

DB type | File(s) | Path | Fields/Description
MaxDB | .XUSER.62 | /home/<sid>adm | Nodename in xuser list output. If necessary, recreate user keys with xuser … -n <virtual host>
Sybase | interfaces | $SYBASE | Fourth column of the master and query entry for each server

User synchronization The synchronization of the user environment consists of the following three sub-tasks after completing the SAP installation: • Synchronize the user and group ids.
NOTE: For more information on the terms local, shared exclusive, and shared nfs file systems used in this section, see chapter 4 “SAP cluster storage layout planning” (page 30). Along with synchronizing user and group information, the HOME directories of the administrators must be created on the local file system on each secondary node (unless the directory does not reside on a local disk, as is the case for some DB users).
Table 21 Services on the primary node (continued)

Service name | Remarks
saphostctrl | SAP hostctrl
saphostctrls | SAP hostctrl (secure)
tlistsrv | Oracle listener port
sql6 | MaxDB
sapdbni72 | MaxDB

NOTE: • <nr> = 00..99. • There are no services related to the Sybase ASE database in /etc/services.

NFS and automount synchronization
1. Synchronize the automount configuration on all the secondary nodes, if it was not done in Phase 1.
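A sketch of synchronizing the automounter configuration to a secondary node node2; the map file names and the service reload command are assumptions to be adapted to your distribution:

scp /etc/auto.master /etc/auto.direct node2:/etc/   # copy the automounter maps
ssh node2 'service autofs reload'                   # re-read the maps on the secondary node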
For more information about the Oracle Instant Client installation and configuration in SAP environments, see SAP note 819829. Oracle client installation for MDM If MDM is configured as a distributed Oracle configuration, for example, the database and the MDM server run in separate packages, then the full Oracle client installation is required. 1. After the installation, update tnsnames.ora with the virtual hostnames as described in “Check Database Configuration Files”. 2.
NOTE: • To get complete package configurations, it is recommended that the SAP database and instances are running on the node where deploysappkgs is invoked. Otherwise, attributes (especially regarding the filesystem and volume_group modules) might be missing. • deploysappkgs can also be invoked at the end of Phase 2 on the primary node. However, the created package configurations cannot be applied yet.
The following table describes the SGeSAP parameters and their respective values:

Parameter | Possible value | Description
sgesap/sap_global/sap_system | C11 | Defines the unique SAP System Identifier (SAP SID)
sgesap/sap_global/rem_comm | ssh, rsh | Defines the command for remote executions. Default is ssh
sgesap/sap_global/parallel_startup | yes, no | Allows the parallel startup of SAP application server instances. If set to no, the instances start sequentially.
Module sgesap/sapinstance – SAP instances This module contains the common attributes for any SAP Netweaver Instance. The following table describes the SAP instance parameters:

Parameter | Possible value | Description
sgesap/stack/sap_instance | SCS40, ERS50 | Defines any SAP Netweaver Instance, such as: DVEBMGS, SCS, ERS, D, J, ASCS, MDS, MDIS, MDSS, W, G
sgesap/stack/sap_virtual_hostname | vhostscs, vhosters | Corresponds to the virtual hostname, which is specified during the SAP installation.
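A minimal sketch of creating and filling an SCS package configuration with this module; the package name and attribute values are illustrative:

cmmakepkg -n scsC11 -m sgesap/sapinstance scsC11.config

# in scsC11.config:
sgesap/sap_global/sap_system        C11
sgesap/stack/sap_instance           SCS40
sgesap/stack/sap_virtual_hostname   vhostscs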
Parameter | Possible value | Description
sgesap/oracledb_spec/listener_password | | Specify the Oracle listener password, if set
For db_vendor = maxdb:
sgesap/maxdb_spec/maxdb_userkey | c | User key of the control user
For db_vendor = sybase:
sgesap/sybasedb_spec/aseuser | sapsa | Sybase system administration or monitoring user
sgesap/sybasedb_spec/asepasswd | | Password for the specified aseuser attribute
For Sybase, the attributes aseuser and asepasswd are optional.
Parameter | Value | Description
sgesap/stack/sap_instance | MDIS02 | Example for defining an MDM MDIS instance with instance number 02 in the same package
sgesap/stack/sap_instance | MDSS03 | Example for defining an MDM MDSS instance with instance number 03 in the same package
sgesap/stack/sap_virtual_hostname | mdsreloc | Defines the virtual IP hostname that is enabled with the start of this package
sgesap/db_global/db_system | MO7 | Determines the name of the database (schema) for SAP
sgesap/db_global/db_vendor
Configure a database monitor with:
service_name datab
service_cmd $SGCONF/monitors/sgesap/sapdatab.mon
All other values are set up as described in the Table 22 (page 69) table. Module sg/generic_resource – SGeSAP enqor resource A generic resource has to be set up for SCS and ERS packages if the SGeSAP enqueue follow-and-push mechanism is used. There is a common resource for each SCS/ERS pair. The naming schema of the resource follows this convention: sgesap.enqor_<SID>_ERS.
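A sketch of the corresponding entries in the SCS and ERS package configurations, assuming the standard Serviceguard generic-resource attribute names and SID C11; the evaluation type shown is an assumption, and the exact setting is produced by deploysappkgs or documented in the SGeSAP release notes:

generic_resource_name               sgesap.enqor_C11_ERS
generic_resource_evaluation_type    before_package_start   # assumed; verify for your release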
On the command line, an enqor MNP can be created with:
cmmakepkg -n enqor -m sgesap/enqor enqor.config
The resulting enqor.config can be applied without editing. The Serviceguard Manager offers SAP Netweaver Operations Resource in the Select the SAP Components in the Package screen for configuring the enqor MNP. deploysappkgs creates the enqor.config file when the follow-and-push mechanism is the recommended way of operation for the created SCS/ERS packages (and no enqor MNP is configured yet).
Configuring external instances (sgesap/sapextinstance) External dialog instances (“D” and “J”-type) can be configured into a SGeSAP package using the sgesap/sapextinstance module. These instances can either belong to the SID configured in the package (the values of sap_ext_system and sap_system are identical), or to a “foreign” SID (the values of sap_ext_system and sap_system are different).
Table 23 Overview of reasonable treat values (continued)

Value (. = y/n) | Meaning | Description
...y. (position 4) | Stop if package local | The application server is stopped when the package fails over to the local node, that is, to the node where the application server is currently running (own & foreign SID)
....y (position 5) | Reserved for future use |

Figure 20 Configuring sapextinstance screen

Supported operating systems for running external instances are Linux, HP-UX, and Microsoft Windows Server.
The failover node is running a central, non-clustered test system QAS and a dialog instance D03 of the clustered SG1. All of these must be stopped in case of a failover to the node, in order to free up resources.
sap_ext_instance DVEBMGS10
sap_ext_system QAS
sap_ext_host node2
sap_ext_treat nnnyn
sap_ext_instance D03
sap_ext_host node2
sap_ext_treat yyyyn
Example 3: The package contains one or more dialog instances configured for vhost1 for which also a diagnostic agent is configured.
sap_infra_sw_type sapccmsr
sap_infra_sw_params /sapmnt/C11/profile/ccmsr_profilename
sap_infra_sw_type sapwebdisp
sap_infra_sw_treat startnstop
sap_infra_sw_params “-shm_attachmode 6”
When using the Serviceguard Manager to configure this module, the following can be configured: Figure 21 Configuring SAP infrastructure software components screen To add an SAP infrastructure software component to the Configured SAP Infrastructure Software Components list: 1.
Module sgesap/livecache – SAP liveCache instance The liveCache setup is very similar to a sgesap/dbinstance setup with MaxDB. However, there are a few minor differences: • The liveCache installation does not create an XUSER file (.XUSER.62).
Attributes offered by the liveCache module are:

Parameter | Example | Description
lc_system | LC1 | Name (SID) of the liveCache instance
lc_virtual_hostname | reloc1 | The virtual hostname onto which the liveCache has been installed
lc_start_mode | online | Defines into which state the liveCache must be started. Possible values are offline (only the vserver is started), admin (start in admin mode), slow (start in cold-slow mode), and online.
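A minimal sketch of a liveCache package created with this module; the package name, full attribute paths, and values are illustrative:

cmmakepkg -n lcLC1 -m sgesap/livecache lcLC1.config

# in lcLC1.config:
sgesap/livecache/lc_system             LC1
sgesap/livecache/lc_virtual_hostname   reloc1
sgesap/livecache/lc_start_mode         online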
Cluster conversion for existing instances The recommended approach to clustering SAP configurations with Serviceguard is described in the “Installation options” (page 44), “SAP installation (Phase 2)” (page 58), “Post SAP installation tasks and final node synchronization (Phase 3a)” (page 59), and “Completing SGeSAP package creation (Phase 3b)” (page 64) sections. The basic goal is to initially set up a “cluster-aware” environment and then install SAP using virtual hostnames and the appropriate storage layout.
Converting an existing database • Adapt the entries in the database configuration files listed in Table 18 (page 60). • Adapt all SAP profile entries referring to the DB (not all entries below need to exist).