Managing HP Serviceguard Extension for SAP for Linux Version A.06.00.20 Abstract This guide describes how to plan, configure, and administer highly available SAP systems on Red Hat Enterprise Linux and SUSE Linux Enterprise Server systems using HP Serviceguard.
© Copyright 2013 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L. P., and all are protected by copyright. Valid license from HP is required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S.
Contents 1 Overview..................................................................................................7 1.1 About this manual................................................................................................................7 1.2 Related documentation.........................................................................................................8 2 SAP cluster concepts...................................................................................9 2.
.7.2 Option 2: Non-MaxDB environments...........................................................................48 5 Clustering SAP Netweaver using SGeSAP packages......................................49 5.1 Overview.........................................................................................................................49 5.1.1 Three phase approach................................................................................................49 5.1.2 SGeSAP modules and services............
.5.11.2 Configuring external instances (sgesap/sapextinstance)..........................................80 5.5.11.3 Configuring SAP infrastructure components (sgesap/sapinfra)..................................83 5.5.11.4 Module sgesap/livecache – SAP liveCache instance..............................................84 5.6 Cluster conversion for existing instances...............................................................................86 5.6.1 Converting an existing SAP instance.........................
1 Overview 1.1 About this manual This document describes how to plan, configure, and administer highly available SAP Netweaver systems on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) systems using HP Serviceguard high availability cluster technology in combination with HP Serviceguard Extension for SAP (SGeSAP). To use SGeSAP, you must be familiar with Serviceguard concepts and commands, Linux operating system administration, and SAP basics.
1.2 Related documentation The following documents contain related information: • Serviceguard Extension for SAP Version A.06.00 on Linux Release Notes • Managing HP Serviceguard A.11.20.20 for Linux • HP Serviceguard A.11.20.20 for Linux Release Notes • HP Serviceguard Toolkit for NFS version A.03.03.10 on Linux User Guide The documents are available at http://www.hp.com/go/linux-serviceguard-docs.
2 SAP cluster concepts This chapter introduces the basic concepts used by SGeSAP for Linux. It also includes recommendations and typical cluster layouts that can be implemented for SAP environments. 2.1 SAP-specific cluster modules HP SGeSAP extends Serviceguard failover cluster capabilities to SAP application environments. It is intended to be used in conjunction with the HP Serviceguard Linux product and the HP Serviceguard toolkit for NFS on Linux.
Server Instances, SAP Central Instances, SAP Enqueue Replication Server Instances, SAP Gateway Instances and SAP Webdispatcher Instances. The module sgesap/mdminstance extends the coverage to the SAP Master Data Management Instance types. The module to cluster SAP liveCache instances is called sgesap/livecache. SGeSAP also covers single-instance database instance failover with built-in routines.
with the above-mentioned modules. Explicitly using sub-modules during package creation is not allowed. NOTE: HP recommends that any SGeSAP configuration use only modular-based packages. For more information on modular package creation, see the following sections in chapter 5 “Clustering SAP Netweaver using SGeSAP packages” (page 49): • Creating SGeSAP package with easy deployment • Creating SGeSAP package with guided configuration using Serviceguard Manager 2.
Figure 1 One-package failover cluster concept (the dbciC11 package moves from Node 1 to Node 2, and the required resources are freed up, in the event of a failure)

Maintaining an expensive idle standby is not required, because SGeSAP allows the secondary node(s) to be utilized with different instances during normal operation.
Figure 2 Visualization of a one-package cluster concept in Serviceguard Manager

If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without any manual intervention. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can stay up or can be restarted (triggered by a failover).
Figure 3 Two-package failover with mutual backup scenario (the dbC11 and ciC11 packages can fail over and recover independently of each other)

It is a best practice to base the package naming on the SAP instance naming conventions whenever possible. Each package name must also include the SAP System Identifier (SID) of the system to which the package belongs.
content got lost. As a reaction, they cancel ongoing transactions that still hold granted locks. These transactions need to be restarted. Enqueue Replication provides a concept that prevents a failure of the Enqueue Service from impacting the Dialog Instances, so transactions no longer need to be restarted.
Setting up the enqor MNP enables a protected follow-and-push behavior for the two packages that contain the enqueue and its replication. As a result, an automatic mechanism ensures that the Enqueue and its Enqueue Replication Server are never started on the same node initially. The Enqueue does not accidentally invalidate the replication by starting on a non-replication node while replication is active elsewhere.
There are two approaches:
1. SAP self-controlled, using High Availability polling with Replication Instances on each cluster node (active/passive).
2. A complete High Availability failover solution, controlled with one virtualized Replication Instance per Enqueue.
SGeSAP implements the second concept and avoids costly polling and complex data exchange between SAP and the High Availability cluster software. There are several SAP profile parameters that are related to the self-controlled approach.
Figure 7 Dedicated failover server with Replicated Enqueue (packages such as jdbscsC11, dbascsC12, ers00C11, and ers10C12 have failover paths from all primary partitions to the dedicated backup server)

Figure 7 (page 18) shows an example configuration. The dedicated failover host can serve many purposes during normal operation.
a synchronous, memory-synchronous, or asynchronous replication relationship. The production system is the primary system from which replication takes place. Netweaver instances that connect to the HANA database must run on hosts that are distinct from the cluster nodes running the HANA primary and secondary instances. In an SGeSAP configuration for SAP HANA, the following packages are configured: • A primary package that ensures that one of the systems works as the production system.
Figure 8 Primary instance failover sequence in a HANA dual-purpose scenario (two single-host HP AppSystems with HP D2700 storage and a quorum server, connected by a data/customer heartbeat LAN and an appliance management heartbeat LAN; Serviceguard for Linux manages the primary and secondary SAP HANA packages, with SAP HANA system replication running sync or async up to roughly 50 km and async beyond that) 2.
SGeSAP setups are designed to avoid NFS with heavy traffic on shared filesystems if possible. For many implementations, this allows the use of one SAPNFS package for all NFS needs in the SAP consolidation. 2.10 Virtualized dialog instances for adaptive enterprises Databases and Central Instances are Single Points of Failure, whereas ABAP and JAVA Dialog Instances can be installed in a redundant fashion. In theory, there are no SPOFs in redundant Dialog Instances.
The described functionality can be achieved by adding the module sgesap/sapextinstance to the package. NOTE: Declaring non-critical Dialog Instances in a package configuration does not add them to the components that are secured by the package. The package does not react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages described in the previous section.
3 SAP cluster administration In SGeSAP environments, SAP application instances are no longer considered to run on dedicated (physical) servers. They are wrapped up inside one or more Serviceguard packages, and packages can be moved to any of the hosts that are inside the Serviceguard cluster. The Serviceguard packages provide a server virtualization layer. The virtualization is transparent in most aspects, but in some areas special considerations apply. This affects the way a system is administered.
NOTE: Enabling package maintenance allows you to temporarily disable the cluster functionality for the SAP instances of this SGeSAP package. While maintenance mode is activated, the configured SGeSAP monitoring services recognize when an instance is manually stopped, and failover does not occur. SAP support personnel might request or perform maintenance mode activation as part of reactive support actions.
Figure 11 sgesap/sapinstance module configuration overview for a replication instance To monitor a SGeSAP toolkit package: • Check the badges next to the SGeSAP package icons in the main view. Badges are tiny icons that are displayed to the right of the package icon. Any Serviceguard Failover Package can have Status, Alert, and HA Alert badges associated with it. In addition to the standard Serviceguard alerts, SGeSAP packages report SAP application-specific information via this mechanism.
standard administration commands issued manually from outside of the cluster environment. These commands can be sapcontrol operations triggered by SAP system administrators, that is, sidadm users who are logged in to the Linux operating system or remote SAP basis administration commands via the SAP Management Console (SAP MC) or commands via SAP’s plugin for Microsoft Management Console (SAP MMC). The SAP Netweaver 7.
With this parameter being active, the sapstart service agent notifies the cluster software of any triggered instance halt. Planned instance downtime does not require any preparation of the cluster. Only the sapstart service agent needs to be restarted in order for the parameter to become effective. During startup of the instance startup framework, an SAP instance with the SGeSAP HA library configured prints the following messages in the sapstartsrv.
root@ sapdisp.mon[xxx]: (sapdisp.mon,check_if_stopped): Manual stop in effect for DVEBMGS41 Other functions provided by sapcontrol for instance shutdowns work in a similar way. The HP Serviceguard Manager displays a package alert (see Figure 12 (page 25)) that lists the manually halted instances of a package. The SGeSAP software service monitoring for a halted instance is automatically suspended until the instance is restarted.
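A manual stop of this kind is typically issued with sapcontrol. The following transcript is an illustration only, using the example instance DVEBMGS41 (instance number 41) mentioned above; adjust the instance number to your system:

```
# Manual stop of the example instance DVEBMGS41 (instance number 41).
# With the SGeSAP HA library configured, the stop is reported to the
# cluster and service monitoring for the instance is suspended.
sapcontrol -nr 41 -function Stop
```

After the instance is restarted (for example, with the corresponding Start function), monitoring resumes automatically as described above.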
NOTE: Activating package maintenance mode is a way to pause all SGeSAP service monitors of a package immediately, but it can only be triggered with Serviceguard commands directly. While package maintenance mode is active, failover of the package is disabled. Maintenance mode also works for instances without HA library configured. 3.3 Change management Serviceguard manages the cluster configuration.
CAUTION: Make sure that all debug flag files are removed before a system is switched back to production use. NOTE: The debug/partial package setup behavior is different from the Serviceguard package maintenance mode. In package maintenance mode, there is no debug file to disable package failover or to allow partial startup of the package; the package remains in the running state. Startup with debug mode starts all the SGeSAP service monitors, except the monitored application software.
tables TBTCO, TBTCS, .... In case a batch job is ready to run, the application server name is used to start it. Therefore, when using the relocatable name to build the Application Server name for the instance, you do not need to change batch jobs that are tied to it after a switchover. This is true even if the hostname, which is also stored in the above tables, differs. Plan to use saplogon with application server groups instead of saptemu/sapgui to individual application servers.
changes must be reflected in the cluster package configuration. The deploysappkgs(1) command is aware of the existing package configurations and compares them to the settings of the SAP configuration and the operating system. 3.5 Upgrading SAP software SAP rolling kernel switches can be performed in a running SAP cluster exactly as described in the SAP Netweaver 7.x documentation and support notes.
services of existing clusters if there are SGeSAP/LX A.03.00 legacy configurations. Thus, for the majority of existing clusters, no additional migration tool is required to move from legacy to modular. For other cases, such as liveCache, SAP external instance, and SAP infrastructure tool clusters, the conversion of SGeSAP/LX 3.xx legacy configurations to SGeSAP/LX A.06.xx module configurations requires manual steps. The preparatory effort is in the range of one hour per package.
The secondary package startup operation will fail if it cannot establish an initial connection with the primary system to resynchronize the local data. A secondary package will not fail if there is no primary instance available, but it will attempt to start the secondary instance once the primary system is available. If you manually halt the secondary package on the node configured as the secondary system, this will not stop the instance of the secondary system.
You can implement “Starting and stopping the primary HANA system” (page 33) and “Starting and stopping of the secondary HANA system” (page 33) in full role-reversal operation mode also. 3. Failback: if steps 1 and 2 are repeated with the primary and secondary nodes reversed, a manual failback is possible, after which the nodes are back in their original state. Such a failback operation is desirable in an environment that is not symmetric.
4 SAP Netweaver cluster storage layout planning Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate the storage groups. This chapter discusses disk layout for clustered SAP components and database components of several vendors on a conception level.
are used in order to allow each node of the cluster to switch roles between serving and using NFS shares. It is also possible to access the NFS file systems from servers outside of the cluster, which is an intrinsic part of many SAP configurations. 4.1.1.
System-specific volume groups get accessed from all instances that belong to a particular SAP System. Environment-specific volume groups get accessed from all instances that belong to all SAP Systems installed in the whole SAP environment. System and environment-specific volume groups are set up using NFS to provide access for all instances. They must not be part of a package that is only dedicated to a single SAP instance if there are several of them.
Table 5 Instance specific volume groups for exclusive activation with a package

Mount point (access point: shared disk) — recommended package setups: SAP instance specific; combined SAP instances; database plus SAP instances:
• /usr/sap/<SID>/SCS<NR> (for example, /usr/sap/C11/SCS10)
• /usr/sap/<SID>/ASCS<NR> (for example, /usr/sap/C11/ASCS11)
• /usr/sap/<SID>/DVEBMGS<NR> (for example, /usr/sap/C11/DVEBMGS12)
Mount point (access point: shared disk) — recommended package setups: SAP instance specific; combined SAP instances:
• /usr/sap/<SID>/D<NR>
• /usr/sap/<SID>/J<NR> For example, /u
If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory must not be added to any package. This ensures that it is independent from any SAP Netweaver system and you can mount it on any host by hand if needed. All file systems mounted below /export are part of NFS cross-mounts used by the automount program. The automount program uses virtual IP addresses to access the NFS directories via the path that comes without the /export prefix.
In clustered SAP environments prior to 7.x releases, executables must be installed locally. Local executables help to prevent several causes for package startup or package shutdown hangs due to the unavailability of the centralized executable directory. The availability of executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is a good practice to create local copies for all files in the central executable directory.
Table 7 Availability of SGeSAP storage layout options for different Database RDBMS

DB Technology: Oracle Single-Instance, SAP MaxDB, SAP Sybase ASE, IBM DB2
SGeSAP storage layout options:
• SGeSAP NFS clusters — cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit
• Idle standby (Oracle Single Instance) — cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit (optional)

4.3 Oracle single instance RDBMS Single Instance Oracle databases can be used with both SGeSAP storage layout options.
The Oracle database server and SAP server might need different types of NLS files. The server NLS files are part of the database Serviceguard package. The client NLS files are installed locally on all hosts. Do not mix the access paths for ORACLE server and client processes. The discussion of NLS files has no impact on the treatment of other parts of the ORACLE client files. The following directories need to exist locally on all hosts where an Application Server might run.
• The sections [Installations], [Databases], and [Runtime] are stored in separate files Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path /sapdb/data/config. • MaxDB 7.8 does not create SAP_DBTech.ini anymore. The [Globals] section is defined in /etc/opt/sdb.
dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
The links must exist on every possible failover node for the MaxDB or liveCache instance to run. • /sapdb/clients (MaxDB 7.8): Contains the client files in subdirectories for each database installation. • /var/lib/sql: Certain patch levels of MaxDB 7.6 and 7.
local copies is possible, though not recommended, because there are no administration tools that keep track of the consistency between the local copies of these files on all the systems. Using NFS toolkit file systems underneath /export as in Table 10 (page 46) is required when multiple MaxDB based components (including liveCache) are either planned or already installed. These directories are shared between the instances and must be part of an instance package.
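The runtime link list shown above can be recreated with a small script on each failover node. This is a sketch under assumptions: the directory holding the links (LINKDIR below, often /var/spool/sql on MaxDB installations) must be verified on your system, and the DEMO prefix exists only so the sketch can be dry-run outside a cluster — drop it on a real node:

```shell
# Sketch: recreate the MaxDB/liveCache runtime links on a failover node.
# LINKDIR is an ASSUMPTION (verify where the links live on your install);
# DEMO makes this dry-runnable outside a cluster -- remove it on real nodes.
DEMO=$(mktemp -d)
LINKDIR="$DEMO/var/spool/sql"     # assumed location of the runtime links
DATA="$DEMO/sapdb/data"           # shared IndepData filesystem
mkdir -p "$LINKDIR" "$DATA"
for d in dbspeed diag fifo ipc pid pipe ppid; do
  ln -sfn "$DATA/$d" "$LINKDIR/$d"   # link name -> /sapdb/data/<name>
done
```

Running the loop is idempotent (`ln -sfn` replaces stale links), so it can be repeated safely whenever a node is prepared.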
4.7.1 Option 1: Simple cluster with separated packages Cluster layout constraints: • The liveCache package does not share a failover node with the SCM central instance package. • There is no MaxDB or additional liveCache running on cluster nodes. • There is no intention to install additional SCM Application Servers within the cluster.
5 Clustering SAP Netweaver using SGeSAP packages 5.1 Overview This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). Each task is described with examples. A prerequisite for clustering SAP by using SGeSAP is that the Serviceguard cluster software installation must have been completed, and the cluster must be set up and running.
This implies that two SAP instances sharing these resources must be part of the same package. Finally, before clustering SAP using the Phase 1 approach, it is important to decide on the following: • The file systems to be used as local copies. • The file systems to be used as shared exclusive file systems. • The file systems to be used as shared NFS file systems. For more information on file system configurations, see chapter 4 “SAP Netweaver cluster storage layout planning” (page 37).
Table 16 SGeSAP monitors

sapms.mon — Monitors a message service that comes as part of a Central Instance or System Central Service Instance for ABAP/JAVA usage.
sapenq.mon — Monitors an enqueue service that comes as part of a System Central Service Instance for ABAP/JAVA usage.
sapenqr.mon — Monitors an enqueue replication service that comes as part of an Enqueue Replication Instance.
sapdisp.
5.1.3 Installation options

Serviceguard and SGeSAP provide three different methods for installing and configuring packages in an SAP environment:
1. SGeSAP Easy Deployment using the deploysappkgs script: This is applicable for some selected SAP installation types, for example, SAP Central Instance installations. It provides an easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID.
5.1.3.1 Serviceguard Manager GUI and Serviceguard CLI For more information about the installation option 2: package creation using the Serviceguard Manager GUI, see the respective online help available in the Serviceguard Manager GUI. For more information about installation option 3: package creation using the Serviceguard Command Line Interface (CLI), see the Managing HP Serviceguard A.11.20.20 for Linux manual at http:// www.hp.com/go/ linux-serviceguard-docs. 5.1.3.
Table 18 SGeSAP use cases for easy package deployment (continued) Use case Scenario Add a new SAP instance to an already configured For example: package A newly installed ASCS must be part of the existing SCS package. Easy deployment will add such an ASCS into the existing package, if it is configured to the same virtual host as the SCS or if option "combined" is selected. Update existing package with additionally required resources For example: A new volume group related to a SAP instance must be added.
5.2.2 Node preparation and synchronization Node preparation needs to be performed on every cluster node only once. If a node is added to the cluster after the SGeSAP package setup, node preparation must be performed before the packages are enabled on that node.
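Part of preparing each node is making sure the required mount points exist on it. The following is a minimal sketch, not an authoritative list: C11 is the example SID used throughout this guide, the mount point set depends on your layout, and DEMO_ROOT exists only so the sketch can be dry-run outside a cluster — omit it on real cluster nodes:

```shell
# Sketch: create the mount points needed by the NFS toolkit package and
# the automounter on a node. The path list is illustrative for SID C11;
# DEMO_ROOT makes this dry-runnable -- remove it on real cluster nodes.
DEMO_ROOT=$(mktemp -d)
for mp in /export/sapmnt/C11 /sapmnt/C11; do
  mkdir -p "${DEMO_ROOT}${mp}"
done
```

Because `mkdir -p` succeeds on existing directories, the loop can be rerun when a node is added to the cluster later.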
NOTE: If a common sapnfs package already exists, it can be extended by the new volume groups, file systems, and exports. Mount points for the directories that are used by the NFS toolkit package and the automount subsystem must exist as part of the prerequisites. If the mount points do not exist, you must create them. For example: mkdir -p /export/sapmnt/C11 5.2.5.
There are two tables in this screen, Select Nodes and Specify Parameters. By default, nodes and node order are pre-selected in the Select Nodes table. You can clear the selection or change the node order, to accommodate your configuration requirements. Alternatively, you can select Enable package to run on any node configured in the cluster (node order defined by Serviceguard) and allow Serviceguard to define the node order.
Figure 15 Configuration summary page - sapnfs package (continued)

5.2.5.2 Creating NFS toolkit package using Serviceguard CLI
NOTE: To create a package, you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the “Creating NFS Toolkit package using Serviceguard Manager” (page 56) section.
1. Run the cmmakepkg –n sapnfs -m tkit/nfs/nfs sapnfs.config command to create the NFS server package configuration file using the CLI.
2. Edit the package configuration file, for example:
… tkit/nfs/nfs/XFS "-o rw,no_root_squash,fsid=102 *:/export/sapmnt/C11" …
NOTE: Change the service_name attribute, if it is not unique within the cluster.
3. Run the cmapplyconf –P sapnfs.config command to apply the package.
4. Run the cmrunpkg sapnfs command to start the package.

5.2.5.3 Automount setup
On each NFS client, add a direct map entry /- /etc/auto.direct to the /etc/auto.master automounter configuration file.
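Put together, the automounter files might look as follows. This is a hedged example: the virtual hostname sapnfsrelo and the NFS mount options are assumptions — substitute the relocatable address of your sapnfs package and the options appropriate for your environment:

```
# /etc/auto.master -- direct map entry as described above
/-    /etc/auto.direct

# /etc/auto.direct -- maps /sapmnt/C11 to the export reached via the
# package's virtual hostname (sapnfsrelo is a hypothetical name)
/sapmnt/C11    -fstype=nfs,vers=3    sapnfsrelo:/export/sapmnt/C11
```

This way clients always reach /sapmnt/C11 through the path without the /export prefix, regardless of which node currently runs the sapnfs package.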
4. Mount that file system to /export/sapmnt/DASID (create this directory if it doesn’t exist yet) and export it via NFS.
5. Mount the exported filesystem to /sapmnt/DASID.
To make this exported filesystem highly available, the same mechanism as for other SAP SIDs can be used:
1. Add the exported file system together with its volume_group, logical volume, and file system mountpoints to a NFS toolkit package.
2. Add /sapmnt/DASID to the automount configuration.
1. Set up the package with both Serviceguard and SGeSAP modules.
2. Set up the package with only Serviceguard modules.

5.2.6.1 Intermediate synchronization and verification of mount points
The procedure for synchronizing mount points is as follows: • Ensure that all the file system mount points for this package are created on all the cluster nodes as mentioned in the prerequisites.
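A quick check of this kind can be scripted per node. The mount point list below is an illustration only (SID C11 with example instance numbers, as used elsewhere in this guide); replace it with the actual mount points of your package:

```shell
# Sketch: verify on one node that the package's mount points exist.
# The list is illustrative for SID C11; instance numbers are examples.
missing=""
for mp in /usr/sap/C11/ASCS11 /usr/sap/C11/SCS10 /sapmnt/C11; do
  [ -d "$mp" ] || missing="$missing $mp"
done
if [ -z "$missing" ]; then
  echo "all mount points present"
else
  echo "missing:$missing"
fi
```

Run the same check on every cluster node before enabling the package there.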
6. Click Next >> and in the Select package type window, enter a package name. The Failover package type is pre-selected and Multi-Node is disabled. The SGeSAP Package with SAP Instances does not support Multi-Node.
7. Click Next >> at the bottom of the screen and another Create a Modular Package screen appears with the following messages at the top of the screen: The recommended modules have been preselected. Choose additional modules for extra Serviceguard capabilities.
8.
11. After you are done with all the Create a Modular Package configuration screens, the Verify and submit configuration change screen appears. Use the Check Configuration and Apply Configuration buttons to confirm and apply your changes. 5.2.6.2.2 Creating the package configuration file with the CLI NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI.
Choose additional modules for extra Serviceguard capabilities. 5. The modules in the Required Modules window are set by default and cannot be changed. In the Select Modules window, you can select additional modules (or clear the default recommended selections) by selecting the check box next to each module that you want to add (or remove) from the package. Figure 18 Module selection page Click Reset at the bottom of the screen to return to the default selection. 6.
The required attributes are vg, fs_name, fs_directory, fs_type, ip_subnet, and ip_address. For examples of these attributes, see the “Creating NFS toolkit package using Serviceguard CLI” (page 58) section. 2. Verify the package configuration by using the cmcheckconf –P <package_name>.config command, and if there are no errors, run the cmapplyconf –P <package_name>.config command to apply the configuration. 5.2.6.
NOTE: While all three installation types can be clustered, the recommended installation type is HA system. The HA option is available for all the Netweaver 7.x versions. After completing the installation, the SAP system must be up and running on the local (primary) node. For more information on the installation types, see the SAP installation guides. 5.
In the [A]SCS profile, the line with Restart (the number might vary)
Restart_Program_01 = local $(_EN)
has to be changed to Start:
Start_Program_01 = local $(_EN)
Avoid database startup as part of Dialog Instance startup
A dialog instance installation contains a Start_Program_00 = immediate $(_DB) entry in its profile. This entry is generated by the SAP installation to start the DB before the dialog instance is started.
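Put together, the profile edits described above look like the following fragment. The profile locations and the idea of commenting out the autostart entry are assumptions for illustration; the commented lines show the state before the change:

```
# [A]SCS instance profile: start, do not restart, the enqueue server
#   before:  Restart_Program_01 = local $(_EN)
Start_Program_01 = local $(_EN)

# Dialog instance profile: one way to avoid database startup as part of
# dialog instance startup (assumption) is to comment out the entry:
#   Start_Program_00 = immediate $(_DB)
```

With the DB autostart removed, the database lifecycle is controlled only by its Serviceguard package, not by dialog instance startups.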
Table 19 DB Configuration Files

Oracle: tnsnames.ora — $ORACLE_HOME/network/admin, /oracle/client/<version>/network/admin, /sapmnt/<SID>/profile/oracle — (HOST = hostname)
Oracle: listener.ora — $ORACLE_HOME/network/admin — (HOST = hostname)
MaxDB: .XUSER.62 — /home/<sid>adm — Nodename in xuser list output. If necessary, recreate user keys with xuser … -n vhost
Sybase: interfaces — $SYBASE — Fourth column of master and query entry for each server
dbenv.
For example: # cat /db2/db2/.rhosts db2 When using ssh for the virtual host configuration, the public key of each physical host in the cluster must be put into the system's known hosts file. Each of these keys must be prefixed by the virtual hostname and IP. Otherwise, DB2 triggers an error when the virtual hostname fails over to another cluster node, and reports a different key for the virtual hostname.
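For the ssh variant, a known hosts entry might look like the following. Everything here is a placeholder (hostnames, IP addresses, and key material are invented for illustration); the point is only that the physical host's public key appears once more, prefixed with the virtual hostname and its IP:

```
# /etc/ssh/ssh_known_hosts (illustrative): the same public key of the
# physical host node1, listed additionally under the virtual hostname
dbc11relo,192.0.2.10  ssh-rsa AAAAB3...key-of-node1...
node1,192.0.2.1       ssh-rsa AAAAB3...key-of-node1...
```

Repeat the virtual-hostname entry for each physical host's key so that a failover to any node presents a known key.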
Table 21 Groupfile file groups

sapsys — Primary group for all SAP SID users and DB users
sapinst — SAP installer group, secondary for SAP SID and DB users
sdba — MaxDB file owner
oper — Oracle database operators (limited privileges)
dba — Oracle database administrators
db2adm, db2mnt, db2ctl, db2mon — IBM DB2 authorization groups

NOTE: For more information on the terms local, shared exclusive, and shared NFS file systems used in this section, see chapter 4 “SAP Netweaver
mkdir /home/<sid>adm/.hdb/ cp /home/<sid>adm/.hdb//SSFS_HDB.DAT /home/<sid>adm/.hdb//SSFS_HDB.DAT 5.4.3 Network services synchronization During the SAP installation, the file /etc/services is updated with the SAP definitions. These updates must be synchronized with the /etc/services on the secondary nodes. The very first SAP installation on the primary node creates all the entries for the first four types of entries in Table 22 (page 71).
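One way to check the synchronization is to compare the sorted SAP entries of both files. The sketch below uses invented sample data in place of the real /etc/services copies (the service names and ports shown are illustrative, not taken from a real system); on real nodes you would compare the actual file against a copy fetched from the secondary node:

```shell
# Sketch: find SAP service entries present on the primary node's
# /etc/services but missing on a secondary node. Sample files with
# ILLUSTRATIVE entries stand in for the real copies here.
sap_entries() { grep '^sap' "$1" | sort; }

cat > /tmp/services.primary <<'EOF'
sapdp41	3241/tcp
sapmsC11	3641/tcp
EOF
cat > /tmp/services.node2 <<'EOF'
sapdp41	3241/tcp
EOF

sap_entries /tmp/services.primary > /tmp/sap.primary
sap_entries /tmp/services.node2  > /tmp/sap.node2
# comm -23 prints lines only present in the first (primary) file
missing_entries=$(comm -23 /tmp/sap.primary /tmp/sap.node2)
echo "entries to add on node2: $missing_entries"
```

Any line reported must be appended to /etc/services on the secondary node.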
installation. If not, it must be installed according to the instructions in SAP note 1031096. The sapinst used for instance installation might offer an option to install the hostagent. This step can be executed within or directly after Phase 2. Make sure that both the uid of sapadm and the gid of sapsys are identical on all the cluster nodes. 5.4.
5.5 Completing SGeSAP package creation (Phase 3b) The three options for creating the final SGeSAP package are as follows: • Easy deployment with the deploysappkgs command. • Guided configuration using Serviceguard Manager. • Package creation with the CLI interface using the cmmakepkg and cmapplyconf commands.
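For the CLI option, the overall sequence can be sketched as the following transcript. The package name dbC11 is hypothetical, and the exact module list and attributes depend on your setup (the detailed steps are in the sections that follow):

```
# Hypothetical end-to-end CLI sequence for a database package named dbC11
cmmakepkg -m sgesap/dbinstance dbC11.config   # generate a configuration template
vi dbC11.config                               # set SGeSAP and Serviceguard attributes
cmcheckconf -P dbC11.config                   # verify the configuration
cmapplyconf -P dbC11.config                   # apply it to the cluster
cmrunpkg dbC11                                # start the package
```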
3. To add the dbinstance module to an existing package, run the mv <pkg>.config <pkg>.config.SAVE command.
4. Run the cmmakepkg –m sgesap/dbinstance -i <pkg>.config.SAVE <pkg>.config command.
5. Edit the package configuration file to update the relevant SGeSAP attributes.
NOTE: Non-SGeSAP attributes such as service or generic_resource must also be updated.
6. Run the cmapplyconf –P <pkg>.config command to apply the configuration.
7. Run the cmrunpkg command to start the package.
5.5.
Parameter Possible value Description sgesap/sap_global/retry_count 5 Specifies the number of retries for several cluster operations that might not succeed immediately due to racing conditions with other parts of the system. The Default is 5. sgesap/sap_global/sapcontrol_usage preferred Specifies whether the SAP sapcontrol interface and the SAP startup agent framework are required for startup, shutdown, and monitoring of SAP software components.
Figure 19 Configuring SAP instance screen 5.5.5 Module sgesap/dbinstance – SAP databases This module defines the common attributes of the underlying database. Parameter Possible example value Description sgesap/db_global/db_vendor oracle Defines the underlying RDBMS database: Oracle, MaxDB, DB2, or Sybase sgesap/db_global/db_system C11 Determines the name of the database (schema) for SAP For db_vendor = oracle sgesap/oracledb_spec/listener_name LISTENER Oracle listener name.
Figure 20 Configuring SAP database 5.5.6 Module sgesap/mdminstance – SAP MDM repositories The sgesap/mdminstance module is based on sgesap/sapinstance with additional attributes for MDM repositories, MDM access strings, and MDM credentials. Many configurations combine the MDM instances like MDS, MDIS, and MDSS (and possibly a DB instance) into one SGeSAP package. This is called a "MDM Central" or "MDM Central System" installation.
The following contains some selected SGeSAP parameters relevant to MDM repository configuration. For more information, see the package configuration file.

Parameter: sgesap/mdm_spec/mdm_credentialspec_user
Value: Admin
Description: User credential for executing MDM CLIX commands.

Parameter: sgesap/mdm_spec/mdm_credentialspec_password
Description: Password credential for executing MDM CLIX commands.
SAP System C11 has SCS40 and ERS41 configured, where ERS41 replicates SCS40. Both the package containing SCS40 and the package containing ERS41 must have the generic resource sgesap.enqor_C11_ERS41 set up with the generic_resource module. The resource must be of evaluation_type before_package_start. The up_criteria for the SCS package is ">=1", and for the ERS package ">=2". For example, for the SCS package:

generic_resource_name              sgesap.enqor_C11_ERS41
generic_resource_evaluation_type   before_package_start
generic_resource_up_criteria       >=1
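Based on the values stated above, the corresponding entries for the ERS package use the same resource name and evaluation type, with the higher up_criteria:

```
generic_resource_name              sgesap.enqor_C11_ERS41
generic_resource_evaluation_type   before_package_start
generic_resource_up_criteria       >=2
```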
• Start and stop packages on each configured node. When testing the SGeSAP follow-and-push mechanism, the enqor MNP package must be up. This restricts the possible nodes for SCS and ERS package startup.
• Make sure client applications (dialog instances) can connect.

5.5.11 Configuring sgesap/sapextinstance, sgesap/sapinfra, and sgesap/livecache
This section describes configuring the SGeSAP toolkit with sgesap/sapextinstance, sgesap/sapinfra, and sgesap/livecache parameters.
(enabled per default on SGeSAP/LX), SGeSAP tries to use sapcontrol commands to start and stop instances. For instances on remote hosts, sapcontrol uses the -host option to control the remote instance. Note that this requires the remote instance's sapstartsrv to be running already and the required web services (for starting and stopping the instance) to be open for remote access from the local host (for more information, see SAP Notes 927637 and 1439348).
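As a sketch, a remote start via sapcontrol's -host option looks like the following. The instance number 97 and host vhost1 are hypothetical, and the command is only composed and echoed here rather than executed:

```shell
# Compose a sapcontrol call that controls an instance on a remote host.
NR=97          # hypothetical instance number
RHOST=vhost1   # hypothetical remote (virtual) host
CMD="sapcontrol -nr ${NR} -host ${RHOST} -function Start"
echo "${CMD}"  # on a real system, execute the command instead of echoing it
```

This only works when sapstartsrv is already running on the remote host and its start/stop web services permit remote access, as noted above.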
Figure 21 Configuring sapextinstance screen

Supported operating systems for running external instances are Linux, HP-UX, and Microsoft Windows Server. For Windows, the example functions start_WINDOWS_app and stop_WINDOWS_app must be adapted to the remote communication mechanism used on the Windows server. In this case, a customer-specific version of these functions must exist in customer_functions.sh.
Example 3: The package contains one or more dialog instances configured for vhost1, for which a diagnostic agent is also configured. The agent must stop before the instances are stopped and start after the instances are started.

sap_ext_instance   SMDA97
sap_ext_system     DAA
sap_ext_host       vhost1
sap_ext_treat      yynnn
Figure 22 Configuring SAP infrastructure software components screen

To add an SAP infrastructure software component to the Configured SAP Infrastructure Software Components list:
1. Enter information into the Type, Start/Stop, and Parameters boxes.
2. Click
• Disable liveCache xserver autostart (optional).
• Create the liveCache monitoring hook.

Create XUSER file
The SGeSAP liveCache module requires that a user key with the "control user" has been set up for the <lcsid>adm user. Normally, key c is used for this, but other keys can also be used. If the c key does not exist, log in as the <lcsid>adm user and execute xuser -U c -u control,<password> -d <LCSID> -n <virtual-host> set to create the XUSER file and the c key.
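A sketch of the xuser call with concrete values filled in — the liveCache SID LC1, virtual host vlchost, and the password are all hypothetical placeholders. The command is only composed here; run the real command as the <lcsid>adm user on the liveCache node:

```shell
# Build the xuser command that creates the XUSER file and the "c" key.
LCSID=LC1        # hypothetical liveCache SID
VHOST=vlchost    # hypothetical virtual host of the liveCache package
PASS=secretpw    # placeholder control-user password
CMD="xuser -U c -u control,${PASS} -d ${LCSID} -n ${VHOST} set"
echo "${CMD}"    # execute directly as <lcsid>adm on a real system
```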
Check SAP liveCache instance in the initial SGeSAP module selection screen (Select a Toolkit -> SGeSAP -> SAP liveCache instance). The configuration dialog brings up the screen for configuring the liveCache module.

Figure 23 Configuring liveCache screen

From the command line, use cmmakepkg -m sgesap/livecache lcLC1.config to create the package configuration file. Then edit and apply the configuration file.
system to the newly generated ones. This step is usually straightforward and is not covered here.
• Replacing each occurrence of the old instance hostname with the new instance virtual hostname. This concerns filenames as well as configuration values containing hostname strings in the files. For Java-based instances, this also requires the use of the SAP configtool.
6 Clustering SAP HANA System Replication using SGeSAP packages

6.1 Building a HANA cluster environment
The prerequisites for configuring a HANA system in an SGeSAP environment are:
• Appliance hardware: Two single-host HP AppSystem for HANA appliances can be coupled to build a HANA System Replication cluster with automated failover capabilities (see Figure 24 (page 89)).
6.2 Configuring HANA replication
To configure the replication relationship between the two HANA instances:
NOTE: Configure the replication relationship on one of the HANA nodes.
1. Back up your primary HANA system using the hdbstudio utility.
2. Edit /hana/shared//global/hdb/custom/config/global.ini on both nodes.
a. In the communication section, set listeninterface to .global.
b. In the persistence section, set log_mode to normal.
3. Enable the replication of the HANA system configuration.
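Enabling replication (step 3) is typically done with SAP's hdbnsutil tool — an assumption here, since the exact options depend on the HANA revision. The site names, remote host, and instance number below are hypothetical, and the commands are only composed, not executed:

```shell
# Primary node: enable system replication under a site name (hypothetical "siteA").
ENABLE="hdbnsutil -sr_enable --name=siteA"
# Secondary node: register against the primary in synchronous mode
# (host "hana1", instance "00", and site "siteB" are hypothetical).
REGISTER="hdbnsutil -sr_register --remoteHost=hana1 --remoteInstance=00 --mode=sync --name=siteB"
echo "${ENABLE}"
echo "${REGISTER}"
```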
# rpm -ivh serviceguard-extension-for-sap-A.06.00.20.99-0.sles11.x86_64.rpm
• Depending on the date on which the used HANA appliances were pre-installed, a pidentd package might already be on the systems.
• Do not replace the pidentd package with any other version if the pre-installed one is of a higher version.
3. Run the following command to check if SGeSAP is installed:
# rpm -qa | grep serviceguard
serviceguard-A.11.20.20-0.sles11
serviceguard-extension-for-sap-A.06.00.20.99-0.sles11
4.
The output displays a HANA system ID:
sgesap/hdb_global/hdb_system=
3. Deploy the cluster package configuration files. For additional information about the deploysappkgs(1) command, see the corresponding manpage.
• To create a HANA clustering configuration with multiple packages:
# deploysappkgs multi
This command creates Serviceguard configuration files and pre-fills them with discovered values for the primary and secondary HANA nodes.
Table 26 Serviceguard cluster configuration parameters for HANA

Parameter: sg/basic/package_name
Value: User defined
Action: User can change the package name.

Parameter: sg/basic/package_description
Value: User defined
Action: User can change the package description.

Parameter: sg/basic/module_name
Value: User defined
Action: User can add additional Serviceguard modules.

Parameter: sg/basic/node_name
Value: User defined
Action: User can change the node name if a host name is changed.
Table 27 HANA specific SGeSAP cluster package parameter settings (continued)

Parameter: sgesap/hdb_global/hdb_retry_count
Description: Specifies the number of attempts for HANA database operations and pollings. This parameter helps to increase the SAP operations timeout. Set higher values for large in-memory instances.
Default value: 5
Pre-set value: 5

Default value: not set
Pre-set value: 0 in case of (mem-)sync replication; otherwise it is not set.

Default value: not set
Pre-set value: 0 in case of (mem-)sync replication; otherwise it is not set.
3. Add and configure the sgesap/hdbdualpurpose module in the primary package configuration file to use additional instances. Example:
# cmgetconf -p pkg.config
# cmmakepkg -m sgesap/hdbdualpurpose -i pkg.config > new.config
• Specify the non-production instances in the new.config configuration file by editing the sgesap/hdbdualpurpose parameters.