Building Disaster Recovery Serviceguard Solutions Using Metrocluster with EMC SRDF HP Part Number: 698671-001 Published: February 2013
Legal Notices © Copyright 2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Introduction ... 8
Overview of EMC SRDF ... 8
Terms and concepts ... 8
Types of configuration
Configuring the storage device using SG SMS CFS or CVM ... 40
Configuring the storage device using Veritas CVM ... 41
Configuring the storage device using SLVM ... 42
Configuring the complex workload stack at the Source Disk Site
Shutting down a complex workload ... 66
Moving a complex workload to a remote site ... 66
Restarting a failed Site Controller package ... 67
Administering Metrocluster with Serviceguard Manager ... 67
Rolling upgrade
Configuring SADTA ... 104
Setting up replication ... 104
Configuring Metrocluster with sites ... 104
Creating a Serviceguard cluster with sites configured
Deleting nodes online on the primary site where the RAC database package stack is running ... 128
Deleting nodes online on the site where the RAC database package stack is down ... 129
Starting a disaster tolerant Oracle database 10gR2 RAC ... 129
Shutting down a disaster tolerant Oracle database 10gR2 RAC
1 Introduction The EMC Symmetrix Remote Data Facility (EMC SRDF) disk arrays allow you to configure physical data replication solutions to provide disaster recovery for Serviceguard clusters over long distances. Overview of EMC SRDF EMC SRDF is a Symmetrix-based business continuance and disaster recovery solution. SRDF is a configuration of Symmetrix systems, the purpose of which is to maintain multiple, real-time copies of data in more than one location.
SRDF/Synchronous SRDF/Synchronous ensures that every write by a host connected to a Symmetrix unit at the R1 site is replicated to the R2 site before the local Symmetrix unit at R1 sends back an acknowledgement to the host. SRDF/Asynchronous SRDF/Asynchronous (SRDF/A) provides a long-distance replication solution with minimal impact on performance. This protection level ensures minimal host application impact and maintains a ready-to-start copy of data at R2 site.
For information about EMC SRDF, see the document EMC® Symmetrix® Remote Data Facility Product Guide available at the EMC documentation website.
Overview of solution for Metrocluster with EMC SRDF
Overview of a Metrocluster configuration
A Metrocluster is configured with the nodes at Site1 and Site2. When Site1 and Site2 form a Metrocluster, a third location is required where Quorum Server or arbitrator nodes must be configured.
Figure 2 Overview of a Metrocluster configuration
The figure shows Node A and Node B at one data center and Node C and Node D at the other, each connected through FC switches to local disk arrays, with network switches and DWDM links joining the two sites over an IP network and a Quorum Server reachable over an Ethernet network at a third location.
Figure 2 (page 11) depicts an example of two applications distributed in a Metrocluster with EMC SRDF environment balancing the server and replication load.
2 Configuring an application in a Metrocluster solution Installing the necessary software Before you begin any configuration, ensure the following software is installed on all the nodes: • Symmetrix EMC Solutions Enabler software that allows the management of the Symmetrix disks from the node. • If you are building an M by N configuration using RDF Enginuity Consistency Assist (RDF-ECA), you must install only Symmetrix EMC Solutions Enabler. You do not have to install any other software.
NOTE: Do not set the SYMCLI_SID and SYMCLI_DG environment variables before running the symcfg command. These environment variables limit the amount of information gathered when the EMC Solutions Enabler database is created, and therefore will not be a complete database. Also, you must not set the SYMCLI_OFFLINE variable since this environment variable disables the command line interface.
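For example, assuming a POSIX shell on the node, you can clear these variables and then build the Solutions Enabler database; the discovery commands shown here may vary slightly with your Solutions Enabler version:
# unset SYMCLI_SID SYMCLI_DG SYMCLI_OFFLINE
# symcfg discover
# symcfg list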
NOTE: The format of output varies depending on the Solutions Enabler version.
Sample symrdf list output from the R1 side:
Symmetrix ID: 000192603927
                          Local Device View
----------------------------------------------------------------------------
                      STATUS     MODES                    RDF  S T A T E S
Sym   RDF             ---------  -----   R1 Inv   R2 Inv  -------------------
Dev   RDev  Typ:G     SA RA LNK  MDATE   Tracks   Tracks  Dev  RDev  Pair
----  ----  --------  ---------  -----   -------  ------  ---  ----  --------
80B1  034E  R1:2      RW RW RW   S..1.
Table 1 Mapping for a 4 node Cluster connected to 2 Symmetrix arrays (continued) Symmetrix ID, Node 1 /dev/rdsk device #, and type device file name Type R2 ID 50 Dev# 012 Type R1 ID 95 Dev# 040 Type GK ID 50 Dev# 041 Type GK ID 95 Dev# 028 Type BCV Node 2 /dev/rdsk device file name Node 3 /dev/rdsk device file name Nodes 4 /dev/rdsk device file name c3t0d2 c4t3d2 c0t15d0 c0t15d0 c3t15d1 c5t15d1 c4t3d2 c4t3d2 n/a n/a NOTE: The Symmetrix device number may be the same or diffe
Figure 3 Mapping HP-UX device file names to Symmetrix units
The figure shows Symmetrix ID 95 and Symmetrix ID 50 connected by SRDF, with R1, R2, GK (gatekeeper), and BCV devices accessed by Node 1 and Node 2 in Data Center A and by Node 3 and Node 4 in Data Center B. The numbered device file names in the figure are:
1. /dev/rdsk/c0t4d0
2. /dev/rdsk/c0t2d2
3. /dev/rdsk/c0t15d0
4. /dev/rdsk/c4t3d2
5. /dev/rdsk/c6t0d0
6. /dev/rdsk/c0t4d2
7. /dev/rdsk/c0t15d0
8. /dev/rdsk/c4t3d2
9. /dev/rdsk/c4t0d0
10. /dev/rdsk/c3t0d2
11. /dev/rdsk/c3t15d1
12. 13. 14.
the HP-UX path only to determine the Symmetrix device you are referring to. The Symmetrix device can be added to the device group only once.
NOTE: Symmetrix Logical Device names must be the default names in the DEVnnn (for example, DEV001) format. Do not use this option for creating your own device names.
The script mk3symgrps.nodename must be customized for each system including:
• Particular HP-UX device file names.
Sym Dev  RDev  RDF Typ:G  SA RA LNK  MDATE  R1 Inv Tracks
80A4     0340  R1:2       RW RW RW   S..1.  0
80A5     0341  R1:2       RW RW RW   S..1.  0
80A6     0342  R1:2       RW RW RW   S..1.  0
80A7     0343  R1:2       RW RW RW   S..1.  0
80A8     0344  R1:2       RW RW RW   S..1.  0
80A9     0345  R1:2       RW RW RW   S..1.  0
80AA     0346  R1:2       RW RW RW   S..1.
2.
Standard Logical Device  Dev   State  R1 Inv Tracks  R2 Inv Tracks  Link  Dev   State  R1 Inv Tracks  R2 Inv Tracks  MDAE  RDF Pair STATE
DEV001                   80A4  RW     0              0              RW    0340  WD     0              0              A...
DEV002                   80A5  RW     0              0              RW    0341  WD     0              0              A...
DEV003                   80A6  RW     0              0              RW    0342  WD     0              0              A...
DEV004                   80A7  RW     0              0              RW    0343  WD     0              0              A...
DEV005                   80A8  RW     0              0              RW    0344  WD     0              0              A...
Figure 4 Devices and Symmetrix Units in M by N configurations
Node 1 Gatekeeper: /dev/rdsk/c7t0d0 (002)
Node 3 Gatekeeper: /dev/rdsk/c5t0d0 (010)
R1 Devices: /dev/rdsk/c6t0d0 (00C), /dev/rdsk/c6t0d1 (00D)
R2 Devices: /dev/rdsk/c8t0d2 (018), /dev/rdsk/c8t0d1 (019)
Symmetrix A
BCV Devices: /dev/rdsk/c8t0d2 (01A), /dev/rdsk/c8t0d3 (01B)
Symmetrix C
Gatekeeper: /dev/rdsk/c6t0d0 (00B)
Gatekeeper: /dev/rdsk/c5t0d1 (001)
R1 Devices: /dev/rdsk/c5t0d2 (010), /dev/rdsk/c5t0d3 (011)
Pkg B R2 Dev
Figure 5 Example of an M by N configuration - 2 by 1 configuration
The figure shows two arrays holding the R1 volumes for Pkg A and Pkg B accessed by Node 1 and Node 2, SRDF links to a single array holding the R2 volumes and BCVs accessed by Node 3 and Node 4, and Node 5 and Node 6 at a third location acting as arbitrators.
Figure 6 shows a bidirectional 2 by 2 configuration with additional packages on node3 and node4, and R1 and R2 volumes at both data centers. In this configuration, R1 volumes and pkg A and pkg B are at Data Center A, and R2 volumes are at Data Center B.
devices. If an I/O cannot be written to a remote Symmetrix because a remote device or an RDF link has failed, the data flow to the other Symmetrix is halted in less than one second. Once mirroring is resumed, any updates to the data are propagated with normal SRDF operation. Figure 7 shows how the use of consistency groups (depicted as dashed rectangles) ensures that the other two links are also suspended when there is a break in the links between two of the Symmetrix frames.
4. For each node on the R2 side (node3 and node4), assign the R2 devices to the device groups.
# symld -sid 021 -g dgoraA add dev 018
# symld -sid 021 -g dgoraA add dev 019
# symld -sid 363 -g dgoraB add dev 050
# symld -sid 363 -g dgoraB add dev 051
5. On each node on the R2 side (node3 and node4), associate the local BCV devices to the R2 device group.
# symbcv -g dgoraA add dev 01A
# symbcv -g dgoraA add dev 01B
# symbcv -g dgoraB add dev 052
# symbcv -g dgoraB add dev 053
6.
The following examples are based on the configuration shown in Figure 4. For each package, to create consistency groups: 1. On each node in the cluster, create an empty consistency group using the symcg command. To create a consistency group using PowerPath on the R1 side, run the following command: # symcg create cgoradb -ppath -type rdf1 Replace rdf1 with rdf2 in the command to create the consistency group on the R2 side.
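After the empty consistency group exists, the devices that belong to the package are added to it and consistency protection is enabled. The commands below are a sketch; the Symmetrix ID and device numbers are placeholders that you must replace with the values from your own device mapping:
# symcg -cg cgoradb -sid <symm_id> add dev <dev#>
# symcg -cg cgoradb -sid <symm_id> add dev <dev#>
# symcg -cg cgoradb enable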
Building a composite group for SRDF/Asynchronous MSC To perform an operation on a consistency group for SRDF/Asynchronous MSC data replication, the composite group must be configured with devices that are SRDF/Asynchronous capable within the RDF group. To create a composite group: 1. List SRDF/Asynchronous capable devices on the source Symmetrix unit and ensure that the SRDF/Asynchronous capable devices are mapped to RDF group for use.
Setting up the RDF daemon The cycle switch process required for SRDF/Asynchronous MSC is provided by the Solutions Enabler software executing an RDF daemon that implements the MSC functionality. You can enable or disable the RDF daemon on each host using the SYMAPI_USE_RDFD option in the SYMAPI file. The default value of the SYMAPI_USE_RDFD option is DISABLE. To enable the RDF daemon, set the SYMAPI_USE_RDFD to ENABLE. Setting this option to ENABLE activates the RDF daemon for SRDF/Asynchronous MSC.
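For illustration, assuming the default SYMAPI options file location and the RDF daemon name shipped with recent Solutions Enabler releases (verify both against your installed version), the option and the daemon can be handled as follows. In /var/symapi/config/options, set:
SYMAPI_USE_RDFD = ENABLE
Then start and check the daemon:
# stordaemon start storrdfd
# stordaemon list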
SITE san_jose NODE_NAME SJC_2 SITE san_jose Use the cmviewcl command to view the list of sites that are configured in the cluster and their associated nodes. Following is a sample of the command, and the output: # cmviewcl -l node SITE_NAME san_francisco NODE STATUS STATE SFO_1 up running SFO_2 up running .........
NOTE: If you are using HP-UX 11i v3 March 2008 version or later, you can skip step 2; vgcreate (1m) will create the device file for you. 3. Create the volume groups. Be careful not to span Symmetrix frames. # vgcreate /dev/vgoraA /dev/rdsk/c6t0d0 # vgextend /dev/vgoraA /dev/rdsk/c6t0d1 # vgcreate /dev/vgoraB /dev/rdsk/c5t0d2 # vgextend /dev/vgoraB /dev/rdsk/c5t0d3 4. Create the logical volumes. (XXXX indicates size in MB) # lvcreate -L XXXX /dev/vgoraA # lvcreate -L XXXX /dev/vgoraB 5.
NOTE: While creating a volume group, you can choose either the legacy or agile Device Special File (DSF) naming convention. To determine the mapping between these DSFs, use the # ioscan –m dsf command. Creating VxVM disk groups using Metrocluster with EMC SRDF If you are using Veritas volume manager, use the following procedure to create disk groups. The following procedure describes how to set up Veritas disk groups. On one node do the following: 1.
5. Start the logical volume in the disk group. # vxvol -g logdata startall 6. Create a directory to mount the volume. # mkdir /logs 7. Mount the volume. # mount /dev/vx/dsk/logdata/logfile /logs 8. Verify and ensure that the file system is present, then unmount the file system and deport the disk group. # umount /logs # vxdg deport logdata Repeat steps 4 through 8 on all the nodes that will access the disk group. 9. Establish the SRDF link. # symrdf -g devgrpA establish IMPORTANT: VxVM 4.
Configuring a Metrocluster package Using command line When configuring Modular packages using Metrocluster with EMC SRDF A.09.00 or later, only the package configuration file must be edited. All parameters that were previously available in the Metrocluster environment file are now configured from the package configuration file. The Metrocluster environment file is automatically generated on all the nodes when the package configuration is applied in the cluster.
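For example, a modular Metrocluster package can be created and applied entirely from the command line. The module name dts/mcsrdf used below is an assumption; use the Metrocluster with EMC SRDF module name installed on your systems:
# cmmakepkg -m dts/mcsrdf pkg1.conf
Edit pkg1.conf to set the package name, the device group, and the other Metrocluster attributes, and then verify and apply the configuration:
# cmcheckconf -P pkg1.conf
# cmapplyconf -P pkg1.conf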
Using Serviceguard Manager
To configure a Metrocluster package using Serviceguard Manager:
1. Access one of the node's System Management Home Page at http://<node name>:2301. Log in using the root user credentials of the node.
2. Click Tools. If Serviceguard is installed, one of the widgets will have Serviceguard as an option. Click the Serviceguard Manager link within the widget.
3. On the Cluster's Home Page, click the Configuration tab and then select the Create A Modular Package option.
Figure 10 Configuring package name 7. Select additional modules depending on the application. For example, if the application uses LVM volumegroups or VxVM diskgroups, select the volume_group module. Click Next. Figure 11 Selecting additional modules 8. Review the node order in which the package will start, and modify other attributes, if required. Click Next.
Figure 12 Configuring generic failover attributes 9. You are prompted to configure the attributes for a Metrocluster package. Ensure that all the mandatory attributes (marked *) are accurately filled. Figure 13 Configuring Metrocluster attributes 10. Enter the values for other modules selected in step 7. 11. After you enter the values for all modules, in the final screen review all the inputs for the various attributes, and then click APPLY to apply the configuration.
/dev/vx/dsk/cvm_dg0/lvol2” service_restart none service_fail_fast_enabled no service_halt_timeout 300 Easy deployment of Metrocluster modular packages Starting with Serviceguard version A.11.20, the Package Easy Deployment feature is introduced. This feature is available from the Serviceguard Manager version B.03.10 onwards. It provides a simple way to quickly deploy Metrocluster modules in supported toolkit applications.
• The Domino Mode must be enabled for M x N configuration to ensure the following: ◦ data currency on all Symmetrix frames. ◦ no possibility of inconsistent data at the R2 side in case of SRDF links failure. If Domino Mode is not enabled and all SRDF links fail, the new data is not replicated to the R2 side while the application continues to modify the data on the R1 side. This results in the R2 side containing a copy of the data only up to the point of the SRDF link failure.
• To minimize contention, each device group used in the package must be assigned two unique gatekeeper devices on the Symmetrix for each host where the package will run. These gatekeeper devices must be associated with the Symmetrix device groups for that package. The gatekeeper devices are typically a 2880 KB logical device on the Symmetrix.
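As an illustration of the Domino mode and gatekeeper requirements described above, both can be configured on a device group from the command line. The device group name and device numbers below are placeholders, and the exact symrdf and symgate syntax should be confirmed for your Solutions Enabler version:
# symrdf -g <device_group> set domino on
# symgate -g <device_group> associate dev <gk_dev#>
# symgate -g <device_group> associate dev <gk_dev#>
# symdg show <device_group>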
3 Configuring complex workloads using SADTA SADTA enables deploying complex workloads in a Metrocluster. Complex workloads are applications configured using multi-node and failover packages with dependencies. For more information on SADTA, see Understanding and Designing Serviceguard Disaster Recovery Architectures at http://www.hp.com/go/hpux-serviceguard-docs.
SITE ...
. . .
NODE_NAME
SITE
. . .
NODE_NAME
SITE
. . .
NODE_NAME
SITE
. . .
3. Run the cmapplyconf command to apply the configuration file.
4. Run the cmruncl command to start the cluster.
After the cluster is started, you can run the cmviewcl command to view the site configuration.
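For example, assuming the cluster configuration file is named cluster.ascii, the sequence is:
# cmapplyconf -C cluster.ascii
# cmruncl
# cmviewcl -l node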
The storage device for a complex workload must first be configured at the site with the source disk of the replication disk group. Then, a complex workload package stack must be created at this site. It is only at this stage that an identical complex workload using the target replicated disk must be configured with the complex workload stack at the other site.
8. Verify the package configuration file. # cmcheckconf -P cfspkg1.ascii 9. Apply the package configuration file. # cmapplyconf -P cfspkg1.ascii 10. Run the package. # cmrunpkg Configuring the storage device using Veritas CVM To set up the CVM disk group volumes, perform the following steps on the CVM cluster master node in the Source Disk Site: 1. Initialize the source disks of the replication pair.
dependency_name SG-CFS-pkg_dep
dependency_condition SG-CFS-pkg=up
dependency_location same_node
5. Apply the newly created package configuration.
# cmapplyconf -v -P <package_name>.conf
Configuring the storage device using SLVM
To create volume groups on the Source Disk Site:
NOTE: If you are using HP-UX 11i v3 March 2008 version or later, you can skip step 2; vgcreate (1m) will create the device file for you.
1. Define the appropriate volume groups on each host system in the Source Disk Site.
Configuring complex workload packages to use SG SMS CVM or Veritas CVM When a storage used by complex workload is CVM disk groups, the complex workload packages must be configured to depend on the CVM disk group multi-node package. With this package dependency, the complex workload will not run until its dependent CVM disk group multi-node package is up, and will halt before the CVM disk group multi-node package is halted.
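For example, a failover package for the workload can declare this dependency in its configuration file using the same stanza format shown earlier; the dependency and package names below are placeholders:
dependency_name cvm_dg_dep
dependency_condition <cvm_dg_mnp_package>=up
dependency_location same_node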
package_name <package_name>
cvm_disk_group <cvm_dg_name>
cvm_activation_mode "node3=sw node4=sw"
cfs_mount_point /<mount_point>
cfs_volume <volume_name>
cfs_mount_options "node3=cluster node4=cluster"
cfs_primary_policy ""
where node3 and node4 are the nodes at the target disk site. Do not configure any mount specific attributes such as cfs_mount_point and cfs_mount_options if SG SMS CVM is configured as raw volumes.
4. Verify the package configuration file.
Configure the identical complex workload stack at the recovery site The complex workload must be packaged as Serviceguard failover or MNP packages. This step creates the complex workload stack at the target disk site that will be configured to be managed by the Site Controller Package. For more information about configuring the complex workload stack, see the section “Configuring the complex workload stack at the Source Disk Site” (page 42).
Figure 16 Selecting Metrocluster module 5. 6. You are prompted to include any other toolkit modules, if installed. Skip this step if required, and move to the next screen. Enter the package name. Site Controller packages can be configured only as failover packages. Ensure that this option is selected as shown in Figure 17 (page 46) and then click Next. Figure 17 Configuring package name 7. 8. Next, you are prompted to select additional modules required by the package.
Figure 19 Selecting complex workload packages
10. Configure the attributes for a Metrocluster package. All the mandatory attributes (marked *) are required. Select the RDF mode and specify the Devicegroup attribute.
11. Review the service that is going to monitor the complex workload packages. Skip to the next step if no changes are required.
Figure 20 Configuring service module attributes
12.
Configuring Site Controller package using command line interface Configuring an empty site controller package Following are the guidelines that you must follow while configuring an empty Site Controller package: • The default value of the priority parameter is set to no_priority. The Site Controller Package must not be subjected to any movement because of package prioritization. So do not change this default value.
The following output is displayed: /dts/mcsc/cw_sc: Resource Instance The current value of the resource is DOWN (0) Configuring the Site Safety Latch dependencies for a complex workload After the Site Controller package configuration is applied, the corresponding Site Safety Latch is automatically configured in the cluster. This section describes the procedure to configure the Site Safety Latch dependencies. To configure the Site Safety Latch dependency for a complex workload: 1.
1. Run the cmviewcl command to view the complex workload configuration in a Metrocluster.
2. Enable all the nodes in the Metrocluster for the Site Controller package.
# cmmodpkg -e -n <node1> -n <node2> -n <node3> -n <node4> cw_sc
3. Start the Site Controller Package.
# cmmodpkg -e cw_sc
The Site Controller package and the complex-workload package start up on the local site.
4. Check the Site Controller Package log file to ensure clean startup.
4 Metrocluster features Data replication storage failover preview In an actual failure, packages are failed over to the standby site. As part of the package startup, the underlying storage is failed over based on the parameters defined in the Metrocluster environment file. The storage failover can fail due to many reasons, and can be categorized as the following: • Incorrect configuration or setup of Metrocluster and data replication environment.
Live Application Detach There may be circumstances in which you want to do maintenance that involves halting a node, or the entire cluster, without halting or failing over the affected packages. Such maintenance might consist of anything short of rebooting the node or nodes, but a likely case is networking changes that will disrupt the heartbeat. New command options in Serviceguard A.11.
Table 4 Validating Metrocluster package (continued)
available in the Metrocluster package directory.
Verify whether the disks belonging to a volume group or a disk group are being replicated (checked by cmcheckconf [-v], cmapplyconf, and cmcheckconf [-P/-p]). Warns you if the disks belonging to a volume group or a disk group are not being replicated. It reports an error if the disks belong to a replication group that is different from what is mentioned in the environment file or the package configuration file.
Table 5 Additional validation of Site Controller packages (continued)
predecessors in the dependency order among the packages that are configured to be managed by the Site Controller package on both sites.
NOTE: The checks and validations mentioned in "Validating Metrocluster package" (page 52) are not applicable for legacy packages. HP recommends that you add the location of the package environment file available in the Metrocluster package directory to the list of files in /etc/cmcluster/cmfiles2check.
5 Understanding Failover/Failback scenarios Table 6 (page 55) describes the package startup behavior in various failure scenarios depending on the AUTO parameters and the presence of FORCEFLAG file in the package directory.
Table 6 Package startup behavior in various failure scenarios (continued)
Failover/Failback | SRDF States | AUTO parameters | Metrocluster behaviour
automate failover, set AUTOR1UIP to 0. However, it is better to wait for the update to complete before starting up the package.
Failover within the primary site (R1) when the recovery site (R2) or the SRDF link is down. | sync or async
Table 6 Package startup behavior in various failure scenarios (continued)
Failover/Failback | SRDF States | AUTO parameters | Metrocluster behaviour
the package directory and restart the package. To automate failover, set AUTOSPLITR1 to 1.
Failover to the recovery site (R2) when the SRDF links are in mixed state. (This can happen with consistency groups where one link is in Partitioned state and the other is in Suspended state.) | sync or async
complex-workload package is down, having failed in the cluster. This special flag is set to yes when the complex-workload package is down and manually halted. Serviceguard sets this flag to no only when the last surviving instance of the complex workload package is halted as a result of a failure. The flag is set to yes if the last surviving instance is manually halted, even if other instances are halted earlier due to failures.
When a node, on which the Site Controller package is running, is restarted, the Site Controller package fails over to the next available adoptive node. Based on the site adoptive node that the Site Controller package is started on, and the status of the active complex-workloads packages, the Site Controller package performs a site failover, if necessary. Network partitions across sites A network partition across sites is similar to a site failure.
Site failure A site failure is a scenario where a disaster or an equivalent failure results in the failure of all the nodes in a site. The Serviceguard cluster detects this failure, and reforms the cluster without the nodes from the failed site. The Site Controller Package that was running on a node on the failed site fails over to an adoptive node in the remote site.
6 Administering Metrocluster Adding a node to a Metrocluster To add a node to Metrocluster with EMC SRDF: 1. Add the node in a cluster by editing the Serviceguard cluster configuration file and applying the configuration: # cmapplyconf -C cluster.config 2. 3. Configure the device groups or consistency groups used by the Metrocluster packages on the newly added node. For more information, see “Creating Symmetrix device groups” (page 16) or “Creating the consistency groups” (page 23).
3. Distribute the Metrocluster EMC SRDF configuration changes. # cmapplyconf -P pkgconfig 4. Restore the logical SRDF links for the package. In the pre.cmquery script, replace the device group name with the device group in your environment. # /opt/cmcluster/toolkit/SGSRDF/Samples/post.cmapply 5. Start the package with the appropriate Serviceguard command. # cmmodpkg -e pkgname The status of the SA/FA ports is not verified. It is assumed that at least one PVLink is functional.
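To confirm that replication for the package's device group is healthy after the links are restored, you can also query the device group; the device group name below is a placeholder:
# symrdf -g <device_group> query
# symrdf -g <device_group> verify -synchronized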
4. After the resynchronization is completed, enable the package switching on the node on R2 side. # cmmodpkg -e pkgname -n node_name 5. Re-establish the BCV to R2 devices on R2 as a mirror. # symmir -g dgname -full est Alternatively, from the node on R1 side.
Scenario 1: In this scenario, the package failover is because of host failure or because of planned downtime maintenance. The SRDF links and the Symmetrix frames are still up and running. Because the package startup time is longer when the swapping is done automatically, you can choose not to have the swapping done by the package, and later manually execute the swapping after the package is up and running on the R2 side.
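If you defer the swap, a sketch of the manual sequence, run after the package is up on the R2 side, is shown below; the device group name is a placeholder, and you should consult the EMC SRDF documentation for the prerequisites of a dynamic R1/R2 personality swap before running it:
# symrdf -g <device_group> query
# symrdf -g <device_group> swap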
moved to the remote site before halting the node in the cluster. For more information about moving a site aware disaster tolerant complex workload to a remote site, see “Moving a complex workload to a remote site” (page 66). Maintaining the site Maintenance operation at a site might require that all the nodes on that site are down. In such a case, the site aware disaster tolerant workload can be started on the other site to provide continuous service.
Run the following command to enable global switching for the Site Controller Package.
# cmmodpkg -e <site_controller_package_name>
When the Site Controller package is halted in the DETACH mode, the active complex workload configuration on the site can be halted and restarted at the same site as the Site Safety Latch is still open in the site.
Deleting the Site Controller package
To delete the Site Controller package in a cluster:
1. Halt the Site Controller package.
2.
To move a complex workload to a remote site:
1. Halt the Site Controller Package of the complex workload.
# cmhaltpkg <site_controller_package_name>
2. Ensure the complex-workload packages are halted successfully.
# cmviewcl -l package
3. Start the Site Controller Package on a node in the remote site.
# cmrunpkg -n <node_name> <site_controller_package_name>
The Site Controller package starts up on a node in the remote site and starts the complex-workload packages that are configured.
Rolling upgrade Metrocluster configurations without SADTA feature configured follow the HP Serviceguard rolling upgrade procedure. The HP Serviceguard documentation includes rolling upgrade procedures to upgrade the Serviceguard version, the HP-UX operating environment, as well as other software. This Serviceguard procedure, along with recommendations, guidelines and limitations, is applicable to upgrading Metrocluster versions that do not use the SADTA feature.
Select a site to perform the rolling upgrade.
2. Select a node in a site to perform the rolling upgrade.
# cmviewcl -l node -S <site_name>
3. View all the packages running on the selected node.
# cmviewcl -l package -n <node_name>
Identify the Site Controller packages that are running on the node.
4. Move all Site Controller packages running on the selected node to a different node in the same site using DETACH halt mode.
Limitations of the rolling upgrade for Metrocluster Following are the limitations of the rolling upgrade for Metrocluster: • The cluster or package configuration cannot be modified until the rolling upgrade is completed. If the configuration must be edited, upgrade all the nodes to the new release, and then modify the configuration file and copy it to all the nodes in the cluster.
7 Troubleshooting Troubleshooting Metrocluster Analyse Metrocluster and symapi log files to understand the problem in the respective environment and follow a recommended action based on the error or warning messages. Metrocluster log Regularly review the following files for messages, warnings, and recommended actions. It is good to review these files after each system, data center, and/or application failures: • View the system log at /var/adm/syslog/syslog.log.
1. Clean the Site Safety Latch on the site by running the cmresetsc tool. On a node from the site, run the following command:
# /usr/sbin/cmresetsc
IMPORTANT: Root user credentials are required to run this command.
2. Check the package log file of the Site Controller Package on the node it failed on and fix any reported issues.
3. Enable node switching for the Site Controller package on that node.
# cmmodpkg -e <site_controller_package_name> -n <node_name>
4. 5.
and allows the Site Controller Package to start. Complete this procedure for all the nodes where the MNP package instance has halted unclean.
# cmmodpkg -e <package_name> -n <node_name>
Understanding Site Controller package logs
This section describes the various messages that are logged in the log files and the methods to resolve those error messages. Table 7 describes the error messages that are displayed and the recommended resolution.
Table 7 Error messages and their resolution (continued) Log Messages Cause 3. Enable node switching for the package managed by Site Controller Package on the site. 4. Clean the site using the cmresetsc tool. 5. Restart the Site Controller package. Refer to Metrocluster documentation for cleanup procedures needed before restarting the Site Controller. Site Controller startup failed. Starting Site Controller package on site siteB. Site Controller start up on the site siteA has failed.
Table 7 Error messages and their resolution (continued) Log Messages Cause Error: Metrocluster environment file does not exist in /etc/cmcluster/ There is no Metrocluster 1. Restore the Metrocluster environment file environment file in the Site under the Site Controller package Controller package directory. directory on the node 2. Check if the Metrocluster environment where Site Controller file is named using the Metrocluster package failed. defined naming convention. 3.
Table 7 Error messages and their resolution (continued) Log Messages Cause Check for any error messages in the package dependency configured log file on all the nodes in the site siteA for the for the packages that failed is not met on this packages managed by Site Controller. site. Fix any issue reported in the package log files and enable node switching for the packages on nodes they have failed. Resolution 4. Clean the site using the cmresetsc tool. 5. Restart the Site Controller package.
Table 7 Error messages and their resolution (continued) Log Messages Cause Resolution This message is logged because the Serviceguard command cmviewcl failed due to cluster reformation or transient error conditions. 1. Wait for the cluster to reform (until there is no node in reforming state). 2. Restart the Site Controller package. Reset the site siteA using cmresetsc command and start the Site Controller package again. Site Controller startup failed.
8 Support and other resources Information to collect before contacting HP Ensure that the following information is available before you contact HP: • Software product name • Hardware product model number • Operating system type and version • Applicable error message • Third-party hardware or software • Technical support registration number (if applicable) How to contact HP Use the following methods to contact HP technical support: • In the United States, see the Customer Service / Contact HP Un
HP authorized resellers
For the name of the nearest HP authorized reseller, see the following sources:
• In the United States, see the HP U.S. service locator website: http://www.hp.com/service_locator
• In other locations, see the Contact HP worldwide website: http://welcome.hp.com/country/us/en/wwcontact.html
Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: docsfeedback@hp.com
IMPORTANT An alert that calls attention to essential information. NOTE An alert that contains additional or supplementary information. TIP An alert that provides helpful information.
A Checklist and worksheet for configuring Metrocluster with EMC SRDF
Disaster recovery checklist
Use this checklist to ensure you have adhered to the disaster recovery architecture guidelines for two main data centers and a third location configuration. Data centers A and B have the same number of nodes to maintain quorum in case an entire data center fails. Arbitrator nodes or Quorum Server nodes are located in a separate location from either of the primary data centers (A or B).
Member Timeout: _________________________________________________________ Network Polling Interval: ___________________________________________________ AutoStart Delay: __________________________________________________________ Package configuration worksheet Use this package configuration worksheet either in place of, or in addition to the worksheet provided in the latest version of the Managing Serviceguard manual available at http://www.hp.com/go/ hpux-serviceguard-docs —> HP Serviceguard.
Worksheet for configuring SADTA Table 11 Site configuration Item Site Site Site Physical Location Name of the location Site Name One word name for the site that will be used in configurations Node Names 1) 2) 1) 2) Name of the nodes to be used for configurations First Heartbeat Subnet IP IP address of the node on the first Serviceguard Heart Beat Subnet Second Heart Beat Subnet IP IP address of the node on the second Serviceguard Heart Beat Subnet Table 12 Replication configuration Item Data Repl
Table 12 Replication configuration (continued) Item Data Dev_name parameter 1) 2) 3) 4) 5) 6) 7) 8) 9) 10) Table 13 Configuring a CRS Sub-cluster using CFS Item Site CRS Sub Cluster Name Name of the CRS cluster CRS Home Local FS Path for CRS HOME CRS Shared Disk Group name CVM disk group name for CRS shared disk CRS cluster file system mount point Mount point path where the vote and OCR will be created CRS Vote Disk Path to the vote disk or file CRS OCR Disk Path to the OCR disk or file CRS DG MNP pack
Table 13 Configuring a CRS Sub-cluster using CFS (continued) Item Site Site IP addresses for RAC Interconnect Private IP names IP address names for RAC Interconnect Virtual IP IP addresses for RAC VIP Virtual IP names IP address names for RAC VIP Table 14 RAC Database configuration Property Value Database Name Name of the database Database Instance Names Instance names of the database RAC data files file system mount point Mount Point for oracle RAC data files RAC data files CVM Disk group name CVM Di
Table 14 RAC Database configuration (continued) Property Value RAC Flash Area DG MNP CFS DG MNP package name for RAC flash file system RAC Flash Area MP MNP CFS MP MNP package name for RAC flash file system Node Names Database Instance Names Table 15 Site Controller package configuration PACKAGE_NAME Name of the Site Controller package Site Safety Latch /dts/mcsc/ Name of the EMS resource.
B Package attributes for Metrocluster with EMC SRDF This appendix lists all Serviceguard package attributes that are modified or added for Metrocluster with EMC SRDF. HP recommends that you use the default settings for most of these variables, so exercise caution when modifying them. AUTOR1RWSUSP Default: 0 This variable is used to indicate whether a package must be automatically started when it fails over from an R1 host to another R1 host and the device group is in suspended state.
automatically start the package under these conditions, set AUTOR2RWNL=0 AUTOR2XXNL Default: 0 A value of 0 for this variable indicates that when the package is started on an R2 host and at least one (but not all) SRDF link is down, the package is automatically started. This is usually the case when the ‘Partitioned+Suspended’ RDF Pairstate exists. We cannot verify the state of all Symmetrix volumes on the R1 side to validate conditions, but the Symmetrix on the R2 side must be in a ‘normal’ state.
PKGDIR RDF_MODE If the package is a legacy package, then this variable contains the full path name of the package directory. If the package is a modular package, then this variable contains the full path name of the directory where the Metrocluster SRDF environment file is located. Default: sync. This parameter defines the data replication modes for the device group. The supported modes are “sync” for synchronous and “async” for Asynchronous. If RDF_MODE is not defined, synchronous mode is assumed.
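As an illustration, the SRDF-related portion of a package's Metrocluster environment settings for an asynchronous device group might look like the following; the device group attribute name and all values are placeholders to adapt to your configuration:
RDF_MODE="async"
DEVICE_GROUP="<device_group>"
AUTOR1RWSUSP=0
AUTOR2RWNL=0
AUTOR2XXNL=0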
C Sample output of the cmdrprev command The following procedure shows you how to use the cmdrprev command to preview the data replication preparation for a package in an MC/SRDF environment. 1. Verify that the Metrocluster environment file for the package pkga is present in the package directory on node . 2.
D Legacy packages Configuring Serviceguard legacy packages for automatic disaster recovery Before implementing the procedures to configure a legacy package it is necessary to do the following: • Configure your cluster hardware according to disaster recovery architecture guidelines. See the Understanding and Designing Serviceguard Disaster Recovery Architectures user’s guide. • Configure the Serviceguard cluster according to the procedures outlined in Managing Serviceguard user’s guide.
The value of RUN_SCRIPT_TIMEOUT in the package ASCII file must be set to NO_TIMEOUT or to a large enough value that takes into consideration the extra startup time because of getting status from the Symmetrix. 4. Create a package control script. # cmmakepkg -s pkgname.cntl Customize the control script as appropriate to your application using the guidelines in the Managing Serviceguard user’s guide.
avoid contention when more than one package is starting on a node. RETRY * RETRYTIME must be approximately five minutes to keep package startup time under 5 minutes. For example:
Package | RETRYTIME | RETRY
pkgA | 5 seconds | 60 attempts
pkgB | 7 seconds | 43 attempts
pkgC | 9 seconds | 33 attempts
g. h. 9. Uncomment the CLUSTER_TYPE variable and set it to metro. (The value continental is only for use with the Continentalclusters product.)
IMPORTANT: This command generates a package configuration file. Do not apply this configuration file until you complete the migration procedure. For more information about the cmmigratepkg command, see the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs. 2. If the Metrocluster Legacy package uses ECM toolkits, then generate a new Modular package configuration file using the package configuration file generated in step 1.
# /etc/vx/bin/vxdisksetup -i <disk>
2. Create a disk group for the complex workload data.
# vxdg -s init <cvm_dg_name> <disk> \
<disk>
3. Create Serviceguard Disk Group MNP packages for the disk groups with a unique name in the cluster.
# cfsdgadm add <cvm_dg_name> <dg_mnp_package_name> all=sw node1\
node2
where node1 and node2 are the nodes in the Source Disk Site.
4. Activate the CVM disk group in the Source Disk Site CFS sub-cluster.
NOTE: Skip the following steps if you want to use the storage devices as raw CVM volumes.
4. Create the mount point directories for the complex workload cluster file systems.
# mkdir /cfs
# chmod 775 /cfs
# mkdir /cfs/<mount_dir>
5. Create the Mount Point MNP package with a unique name in the cluster.
# cfsmntadm add <cvm_dg_name> <volume_name> /cfs/<mount_dir> <mp_mnp_package_name> all=rw node1\
node2
where node1 and node2 are the nodes at the target disk site.
6.
• Edit the application's configuration file and change its dependency from legacy CFS mount point or CVM disk group MNP packages to the newly created modular SMS CFS/CVM package. Apply this package configuration.
# cmapplyconf -P <package_configuration_file>
• Get the current configuration of the Site Controller. Modify the Site Controller configuration with the new set of packages that need to be managed on the recovery site. Leave the set of packages that are being managed on the primary site as it is.
E Configuring Oracle RAC in SADTA Overview of Metrocluster for RAC The Oracle RAC database can be deployed in a Metrocluster environment for disaster recovery using SADTA. This configuration is referred to as Metrocluster for RAC. In this architecture, a disaster tolerant RAC database is configured as two RAC databases that are replicas of each other; one at each site of the Metrocluster.
A disaster tolerant RAC database has two identical but independent RAC databases configured over the replicated storage in a Metrocluster. Therefore, packages of both sites RAC MNP stacks must not be up and running simultaneously. If the packages of the redundant stack at both sites are running simultaneously, it leads to data corruption. SADTA provides a Site Safety Latch mechanism at the site nodes that prevents inadvertent simultaneous direct startup of the RAC MNP stack packages at both sites.
To set up SADTA in your environment: 1. Set up replication using EMC SRDF in your environment. 2. Install software for configuring Metrocluster. This includes: a. Creating Serviceguard Clusters b. Configuring Cluster File System-Multi-node Package (SMNP) 3. Install Oracle. a. Install and configure Oracle Clusterware. b. Install and configure Oracle Real Application Clusters (RAC). c. Create RAC databases. d. Create identical RAC databases at the remote site. 4.
If using SLVM, create appropriate SLVM volume groups with required raw volumes over the replicated disks. b. 11. 12. 13. 14. 15. 16. Set up file systems for RAC database flash recovery. If you have SLVM, CVM, or CFS configured in your environment, see the following documents available at http://www.hp.
cluster is configured with the site, there are two CFS sub-clusters; one at the Site A site with membership from SFO_1 and SFO_2 nodes and the other at the Site B site with membership from SJC_1 and SJC_2 nodes.
The accompanying figure shows the hrdb Site Controller package and the Site Safety Latch spanning the two CFS sub-clusters: the SiteA_hrb, SiteA_hrdb_mp, and SiteA_hrdb_dg packages on Node 1 and Node 2 at Site A with the active hrdb_dg disk array, and the SiteB_hrb, SiteB_hrdb_mp, and SiteB_hrdb_dg packages on Node 3 and Node 4 at Site B with the inactive array.
To configure SADTA:
1.
Table 16 CRS sub-clusters configuration in the Metrocluster (continued)
CRS OCR: Site A /cfs/sfo_crs/OCR/ocr; Site B /cfs/sjc_crs/OCR/ocr
CRS Voting Disk: Site A /cfs/sfo_crs/VOTE/vote; Site B /cfs/sjc_crs/VOTE/vote
CRS mount point: Site A /cfs/sfo_crs; Site B /cfs/sjc_crs
CRS MP MNP package: Site A sfo_crs_mp; Site B sjc_crs_mp
CRS DG MNP package: Site A sfo_crs_dg; Site B sjc_crs_dg
CRS OCR CVM DG Name: Site A sfo_crsdg; Site B sjc_crsdg
Private IPs: Site A 192.1.7.1 SFO_1p.hp.com, 192.1.7.2 SFO_2p.hp.com; Site B 192.1.8.1 SJC_1p.hp.com, 192.1.8.2 SJC_2p.hp.com
Virtual IPs: Site A 16.89.
In this example, create a Site Controller Package titled hrdb_sc to provide automatic site failover for the hrdb RAC database between Site A and Site B. Configure the RAC database MNP packages using the critical_package attribute, and then configure CFS MP MNP and CVM DG MNP database packages using the managed_package attribute. As a result, the Site Controller Package monitors only the RAC database MNP package and initiates a site failover when it fails.
NETWORK_INTERFACE STATIONARY_IP NETWORK_INTERFACE NETWORK_INTERFACE STATIONARY_IP NETWORK_INTERFACE lan4 #SFO_CRS CSS HB 192.1.7.1 lan5 #SFO_CRS CSS HB standby lan1 # SFO client access 16.89.140.201 lan6 # SFO client access standby NODE_NAME sfo_2 SITE san_francisco NETWORK_INTERFACE lan2 #SG HB 1 HEARTBEAT_IP 192.1.3.2 NETWORK_INTERFACE lan3 #SG HB 2 HEARTBEAT_IP 192.1.5.2 NETWORK_INTERFACE lan4 # SFO_CRS CSS HB STATIONARY_IP 192.1.7.
Node : SFO_1 Cluster Manager : up CVM state : up (MASTER) Node : SFO_2 Cluster Manager : up CVM state : up Installing and configuring Oracle clusterware After you set up replication in your environment and configuring the Metrocluster, install Oracle Clusterware. Use the Oracle Universal Installer to install and configure the Oracle Clusterware.
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib: $ORACLE_HOME/rdbms/lib SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32 export LD_LIBRARY_PATH SHLIB_PATH export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin: /usr/local/bin: CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib: $ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib export CLASSPATH export ORACLE_SID= Configuring the storage device for installing Oracle clusterware When Oracle Clusterware is installed in a site, it is
8. Mount the clustered file system on the site CFS sub-cluster. # cfsmount /cfs/sfo_crs 9. Create the Clusterware OCR directory in the clustered file system. # mkdir /cfs/sfo_crs/OCR # chmod 755 /sfo_cfs/crs/OCR 10. Create the Clusterware VOTE directory in the clustered file system. # mkdir /cfs/sfo_crs/VOTE # chmod 755 /cfs/sfo_crs/VOTE 11. Set oracle as the owner for the Clusterware directories.
8. On the Specify Voting Disk Location screen, select External Redundancy and specify the CFS file system directory if you have an independent backup mechanism for the Voting Disk. To use the internal redundancy feature of Oracle, select Normal Redundancy and specify additional locations. In this example, for the SFO Clusterware sub-cluster, the location is specified as: /cfs/sfo_crs/VOTE/vote 9. Follow the remaining on-screen instructions to complete the installation.
4. 5. On the Select Configuration Option screen, select the Install Database Software Only option. Create a listener on both nodes of the site using Oracle NETCA. For more information about using NETCA to configure listeners in a CRS cluster, see the Oracle RAC Installation and Configuration user’s guide. After installing Oracle RAC, you must create the RAC database.
7. Create mount points for the RAC database data files and set appropriate permissions. # mkdir /cfs # chmod 775 /cfs # mkdir /cfs/rac 8. Create the Mount Point MNP packages. # cfsmntadm add hrdbdg rac_vol /cfs/rac sfo_hrdb_mp all=rw SFO_1\ SFO_2 9. Mount the cluster file system on the CFS sub-cluster. # cfsmount /cfs/rac 10. Create a directory structure for the RAC database data files in the cluster file system. Set proper permission and owners for the directory.
8. Create Mount Point MNP package for the cluster file system. # cfsmntadm add flashdg flash_vol /cfs/flash sfo_flash_mp all=rw SFO_1\ SFO_2 9. Mount the RAC database flash recovery file system in the site CFS sub-cluster. # cfsmount /cfs/flash 10. Create directory structure in the cluster file system for the RAC database flash recovery area.
Before creating an identical RAC database at the remote site, you must first prepare the replication environment. The replication setup depends on the type of arrays that are configured in your environment. Based on the arrays in your environment, see the respective chapters of this manual to configure replication. After configuring replication in your environment, configure the replica RAC database. For example, to prepare the replication environment: 1. Split the logical SRDF links.
3. Copy the second RAC database instance pfile from the source site to the target site second RAC database instance node. In this example, copy the RAC database instance pfile from the SFO_2 node to the SJC_2 node.
# cd /opt/app/oracle/product/10.2.0/db_1/dbs
# rcp -p inithrdb2.ora SJC_2:$PWD
The -p option retains the permissions of the file.
4. Set up the second RAC database instance on the target site. In this example, run the following commands from the SJC_2 node:
# cd /opt/app/oracle/product/10.2.
Halting the RAC database on the recovery cluster You must halt the RAC database on the Target Disk Site so that it can be restarted at the source disk site. Use the cmhaltpkg command to halt the RAC MNP stack on the replication Target Disk Site node. Deport the disk groups at the replication Target Disk Site nodes using the vxdg deport command.
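A minimal sketch of this sequence, with placeholder package and disk group names, is:
# cmhaltpkg <rac_db_mnp_package>
# vxdg deport <rac_dg_name>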
IMPORTANT: You must adhere to the following guidelines while configuring the Site Controller Package:
• The default value of the priority parameter is set to no_priority. The Site Controller Package must not be subjected to any movement due to package prioritization. Do not change this default value.
• The default value of the failover_policy parameter for the Site Controller Package is set to site_preferred.
After applying the Site Controller Package configuration, run the cmviewcl command to view the packages that are configured. Starting the disaster tolerant RAC database in the Metrocluster At this point, you have completed configuring SADTA in your environment with the Oracle Database 10gR2 RAC. This section describes the procedure to start the disaster tolerant RAC database in the Metrocluster. To start the disaster tolerant RAC database: 1.
gather information about service availability on the RAC servers and assist in making client connections to the RAC instances. In addition, they provide failure notifications and load advisories to clients, therefore enable fast failover of client connections and client-side load-balancing. These capabilities are facilitated by an Oracle 10g feature called Fast Application Notification (FAN). For more information about Fast Application Notification, see the following documents: http://www.oracle.
To configure the SGeRAC Cluster Interconnect packages: 1. Create a package directory on all the nodes in the site. # mkdir -p /etc/cmcluster/pkg/sfo_ic 2. Create a package configuration file and control script file. Use site-specific names for the files. You must follow the legacy package creation steps. # cmmakepkg -p sfo_ic.conf # cmmakepkg -s sfo_ic.cntl 3. 4. 5. 6. 7. Specify a site-specific package name in the package configuration file.
Figure 24 Sample Oracle RAC Database with ASM in SADTA
The figure shows the Site Controller package and the Site Safety Latch spanning the SiteA CRS and SiteB CRS sub-clusters: the SiteA RAC DB, SiteA_hrdb_dg, and SiteA ASM DG packages on Node 1 and Node 2 at Site A, and the SiteB RAC DB and SiteB ASM DG packages on Node 3 and Node 4 at Site B. Each site's disk array holds the RAC DB disk and the CRS, OCR, and Voting disks, with the Site A array active and the Site B array inactive.
The Oracle Clusterware software must be installed at every site in the Metrocluster.
3. 4. 5. 6. 7. 8. 9. 10. The Install and configure Oracle Clusterware. Install Oracle Real Application Clusters (RAC) software. Create the RAC database with ASM: a. Configure ASM disk group. b. Configure SGeRAC Toolkit Packages for the ASM disk group. c. Create the RAC database using the Oracle Database Configuration Assistant. d. Configure and test the RAC MNP stack at the source disk site. e. Halt the RAC database at the source disk site. Configure the identical ASM disk group at the remote site.
Installing Oracle RAC software The Oracle RAC software must be installed twice in the Metrocluster, once at every site. Also, the RAC software must be installed in the local file system in all the nodes in a site. To install Oracle RAC, use the Oracle Universal Installer (OUI). After installation, the installer prompts you to create the database. Do not create the database until you install Oracle RAC at both sites. You must create identical RAC databases only after installing RAC at both sites.
1. 2. When using Oracle 11g R2 with ASM, the remote_listener for the database is set to the : by default. But, in the Metrocluster for RAC configuration, the SCAN name is different for every site CRS subcluster. So, the remote_listener for the database must be changed to the net service name configured in the tnsnames.ora for the database. This task must be done prior to halting the RAC database stack on the Source Disk Site: a. Log in as the Oracle user. # su – oracle b.
The -p option retains the permissions of the file. 4. Setup the first ASM instance on the target disk site. In this example, run the following commands from node1 in the site2. # cd /opt/app/oracle/product/11.1.0/db_1/dbs # ln –s /opt/app/oracle/admin/+ASM/pfile/init.ora init+ASM1.ora # chown -h oracle:oinstall init+ASM1.ora # chown oracle:oinstall orapw+ASM1 5. Copy the second ASM instance pfile and password file from site1 to the second ASM instance node in site2.
3. Copy the second RAC database instance pfile and password file from the source site to the second RAC database instance node in the target disk. In this example, run the following commands from the second node in site1: # cd /opt/app/oracle/product/11.1.0/db_1/dbs # rcp -p inithrdb2.ora :$PWD # rcp -p orapwhrdb2 :$PWD The -p option retains the permissions of the file. 4. Set up the second RAC database instance on the target disk site.
Configuring the Site Safety Latch dependencies After the Site Controller Package configuration is applied, the corresponding Site Safety Latch is also configured automatically in the cluster. This section describes the procedure to configure the Site Safety Latch dependencies. To configure the Site Safety Latch dependencies: 1. Add the EMS resource details in ASM DG package configuration file.
When the RAC MNP package is configured as a critical_package, the Site Controller Package considers only the RAC MNP package status to initiate a site failover. Since the RAC MNP package fails when the contained RAC database fails, the Site Controller Package fails over to start on the remote site node and initiates a site failover from the remote site.
1. 2. 3. 4. 5. 6. Install the required software on the new node and prepare the node for Oracle installation. Halt the Site Controller Package in the DETACH mode to avoid unnecessary site failover of the RAC database. Ensure that the new node can access the Clusterware OCR and VOTE disks, and Oracle database disks, and add the node to the Serviceguard cluster. Extend the Oracle Clusterware software to the new node.
3. Delete an instance from the RAC database. For more information about deleting an instance, see the documentation available at the Oracle documentation site. 4. Delete the RAC database software and Oracle Clusterware. For more information about deleting the RAC database and Oracle Clusterware, see the documentation available at the Oracle documentation site. 5. 6. 7. Remove the node from the node list of the Site Controller Package. Run the cmhaltnode command to halt the cluster on this node.
The Site Controller Package starts on the preferred node at the site. At startup, the Site Controller Package starts the corresponding RAC MNP stack packages in that site that are configured as managed packages. After the RAC MNP stack packages are up, you must verify the package log files for any errors that might have occurred at startup. If the CRS MNP instance on a node is not up, the RAC MNP stack instance on that node does not start. However, if CVM/CFS is configured, the CVM DG and CFS MP MNP starts.
can restart the RAC MNP package only by restarting the Site Controller Package. This is because the Site Safety Latch closes when the Site Controller Package halts. Maintaining Oracle database 10gR2 RAC A RAC database configured using SADTA has two replicas of the RAC database configuration; one at each site. The database configuration is replicated between the replicas using a replicated storage.
Glossary A arbitrator Nodes in a disaster recovery architecture that act as tie-breakers when all of the nodes in a data center go down at the same time. These nodes are full members of the Serviceguard cluster and must conform to the minimum requirements. The arbitrator must be located in a third data center to ensure that the failure of an entire data center does not bring the entire cluster down.
E, F
Environment File Metrocluster uses a configuration file that includes variables that define the environment for Metrocluster to operate in a Serviceguard cluster. This configuration file is referred to as the Metrocluster environment file. This file needs to be available on all the nodes in the cluster for Metrocluster to function successfully.
ESCON Enterprise Storage Connect.
Q quorum server A cluster node that acts as a tie-breaker in a disaster recovery architecture in case all of the nodes in a data center go down at the same time. R R1 The Symmetrix term indicating the data copy that is the primary copy. R2 The Symmetrix term indicating the remote data copy that is the secondary copy. It is normally read-only by the nodes at the remote site.
Index C Cluster data replication, 12 command line symdg, 16 command line interface, EMC Symmetrix, 12 configuration Symmetrix array, 9 configuring gatekeeper devices, 17 verifiying EMC Symmetrix configuration, 19 configuring Metrocluster, 91 creating EMC Symmetrix device groups, 16 D device groups creating, 16 device names EMC Symmetrix logical devices, 17 mapping, 13 mapping Symmetrix to command line symld, 16 mapping Symmetrix to HP-UX, 14 device names, EMC Symmetrix, 13 devices gatekeeper, 17 disaster r