Hitachi Command Control Interface (CCI) User and Reference Guide Hitachi Universal Storage Platform V/VM Hitachi TagmaStore® Universal Storage Platform Hitachi TagmaStore® Network Storage Controller Hitachi Lightning 9900™ V Series Hitachi Lightning 9900™ MK-90RD011-25
Copyright © 2008 Hitachi Data Systems Corporation, ALL RIGHTS RESERVED Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi Data Systems”).
Source Documents for this Revision: RAID Manager Basic Specifications, revision 64 (3/24/2008)
Changes in this Revision:
– Added support for the following host platforms (section 3.1): Microsoft Windows 2008; HP OpenVMS 8.3 support for IPv6; HP OpenVMS for Integrity Server; 64-bit RAID Manager for RH/IA64
– Added "SSB" to the output of the EX_CMDRJE error message (Table 5.3).
– Added support for Oracle10g H.A.R.
Preface This document describes and provides instructions for installing and using the Command Control Interface (CCI) software for Hitachi RAID storage systems.
Conventions for Storage Capacity Values Storage capacity values for logical devices (LDEVs) on the Hitachi RAID storage systems are calculated based on the following values: 1 KB (kilobyte) = 1,024 bytes; 1 MB (megabyte) = 1,024² bytes; 1 GB (gigabyte) = 1,024³ bytes; 1 TB (terabyte) = 1,024⁴ bytes; 1 PB (petabyte) = 1,024⁵ bytes; 1 block = 512 bytes. Referenced Documents Hitachi Universal Storage Platform V/VM documents: Universal Storage Platform V/VM User and Reference Guide, MK-96RD635 Storage Navigat
Hitachi TrueCopy User and Reference Guide, MK-92RD108 Open LDEV Guard User’s Guide, MK-93RD158 DB Validator Reference Guide, MK-92RD140 Hitachi Lightning 9900™ documents: User and Reference Guide, MK-90RD008 Remote Console User’s Guide, MK-90RD003 Hitachi ShadowImage User’s Guide, MK-90RD031 Hitachi TrueCopy User and Reference Guide, MK-91RD051 Comments Please send us your comments on this document. Make sure to include the document title, number, and revision.
Chapter 1 Overview of CCI Functionality
1.1 Overview of Command Control Interface
The Hitachi Command Control Interface (CCI) software product enables you to configure and control Hitachi data replication and data protection operations by issuing commands from the open-systems host to the Hitachi RAID storage systems.
1.2 Overview of Hitachi Data Replication Functions
The Hitachi data replication features controlled by CCI include: TrueCopy (section 1.2.1), ShadowImage (section 1.2.2), Universal Replicator (section 1.2.3), and Copy-on-Write Snapshot (section 1.2.4).
1.2.1 Hitachi TrueCopy
The Hitachi TrueCopy feature enables you to create and maintain remote copies of the data stored on the RAID storage systems for data backup and disaster recovery purposes.
1.2.2 Hitachi ShadowImage The ShadowImage data duplication feature enables you to set up and maintain multiple copies of logical volumes within the same storage system. The RAID-protected ShadowImage duplicates are created and maintained at hardware speeds. ShadowImage operations for UNIX/PC server-based data can be performed using either the Command Control Interface (CCI) software on the UNIX/PC server host, or the ShadowImage software on Storage Navigator.
1.2.4 Hitachi Copy-on-Write Snapshot Copy-on-Write (COW) Snapshot provides ShadowImage functionality using less capacity of the disk storage system and less time for processing than ShadowImage. COW Snapshot enables you to create copy pairs, just like ShadowImage, consisting of primary volumes (P-VOLs) and secondary volumes (S-VOLs). The COW Snapshot P-VOLs are logical volumes (OPEN-V LDEVs), but the COW Snapshot S-VOLs are virtual volumes (V-VOLs) with pool data stored in memory.
1.3 Overview of Hitachi Data Protection Functions
The Hitachi data protection features controlled by CCI include: Database Validator (section 1.3.1) and Data Retention Utility (section 1.3.2).
1.3.1 Hitachi Database Validator
The Database Validator feature is designed for the Oracle® database platform to prevent data corruption between the database and the storage system.
1.3.2 Hitachi Data Retention Utility (Open LDEV Guard)
Data Retention Utility (called Open LDEV Guard on 9900V/9900) enables you to prevent writing to specified volumes by having the RAID storage system guard the volumes. Data Retention Utility is similar to the Database Validator feature, setting a guarding attribute on the specified LU. The RAID storage system supports parameters for guarding at the volume level.
Chapter 2 Overview of CCI Operations This chapter provides a high-level description of the operations that you can perform with Hitachi Command Control Interface: Overview (section 2.1) Features of Paired Volumes (section 2.2) Overview of CCI ShadowImage Operations (section 2.3) Hitachi TrueCopy/ShadowImage Volumes (section 2.4) Applications of Hitachi TrueCopy/ShadowImage Commands (section 2.5) Overview of Copy-on-Write Snapshot operations (section 2.
2.1 Overview CCI allows you to perform Hitachi TrueCopy and ShadowImage operations by issuing TrueCopy and ShadowImage commands from the UNIX/PC server host to the Hitachi RAID storage system. Hitachi TrueCopy and ShadowImage operations are nondisruptive and allow the primary volume of each volume pair to remain online to all hosts for both read and write operations. Once established, TrueCopy and ShadowImage operations continue unattended to provide continuous data backup.
2.2 Features of Paired Volumes
Logical volumes that have been handled independently by the server machines can be combined into, or separated from, a pair that is handled as a unit by the Hitachi TrueCopy and/or ShadowImage pairing function. Hitachi TrueCopy and ShadowImage treat the two volumes to be combined or separated as one paired logical volume used by the servers. Paired volumes can also be handled as groups, grouped in units of server software or in units of a database and its attributes.
2.2.1 ShadowImage Duplicated Mirroring Duplicated mirroring of a single primary volume is possible when the ShadowImage feature is used. The duplicated mirror volumes of the P-VOL are expressed as virtual volumes using the mirror descriptors (MU#0-2) in the configuration definition file as shown below.
2.2.2 ShadowImage Cascading Pairs ShadowImage provides a cascading function for the ShadowImage S-VOL. The cascading mirrors of the S-VOL are expressed as virtual volumes using the mirror descriptors (MU#1-2) in the configuration definition file as shown below. The MU#0 of a mirror descriptor is used for connection of the S-VOL.
2.2.2.1 Restrictions for ShadowImage Cascading Volumes
Pair Creation. Pair creation of the S-VOL (oradb1) can only be performed after the pair creation of the S/P-VOL (oradb). If pair creation of the S-VOL (oradb1) is performed at the SMPL or PSUS state of the S/P-VOL (oradb), paircreate will be rejected with EX_CMDRJE or EX_CMDIOE.
(Figure: the oradb pair P-VOL and S/P-VOL, with cascaded S-VOLs oradb1 and oradb2 on mirror descriptors MU#1 and MU#2.)
Pair Splitting.
2.2.2.2 Restriction for TrueCopy/ShadowImage Cascading Volumes Pair restore (resynchronization from SVOL (oradb1) to S/PVOL) can only be performed when the TrueCopy VOL (oradb) is SMPL or PSUS(SSUS), and another PVOL (oradb2) on the S/PVOL is SMPL or PSUS. If pairresync of S-VOL (oradb1) is performed when the S/PVOL (oradb or oradb2) is in any other state, the pairresync (-restore option) command will be rejected with EX_CMDRJE or EX_CMDIOE.
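As an illustrative sketch only (group name taken from the example above, and assuming the pair states described above are satisfied), the restore from the S-VOL back to the S/P-VOL could be requested as a ShadowImage command using the -restore option:
# setenv HORCC_MRCF 1
# pairresync -g oradb1 -restore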
2.2.3 Hitachi TrueCopy Takeover Commands Figure 2.4 illustrates the server failover system configuration. When a server software error or a node error is detected, the operation of the failover software causes the Cluster Manager (CM) to monitor server programs, and causes the CM of the standby node to automatically activate the HA control script of the corresponding server program.
(Figure: the standby node becomes active and the primary/secondary volume roles are swapped.)
Figure 2.5 S→A Active Package Transfer on High Availability (HA) Software
2.2.4 Hitachi TrueCopy Remote Commands
Figure 2.6 illustrates a Hitachi TrueCopy remote configuration. The Hitachi TrueCopy remote commands support a function which links the system operation for the purpose of volume backup among UNIX servers with the operation management of the server system.
(Figure: Host A and Host B, each running server software, operation management, commands, and HORCM (CCI), connected through command devices to the Hitachi RAID systems; pair generation, resync, and splitting operate on the primary/secondary volumes.)
Figure 2.6 Hitachi TrueCopy Remote System Configuration
2.2.5 Hitachi TrueCopy Local Commands
Figure 2.7 illustrates a Hitachi TrueCopy local configuration.
2.3 Overview of CCI ShadowImage Operations Figure 2.8 illustrates the ShadowImage configuration. The ShadowImage commands support a function which links the system operation for the purpose of volume backup among UNIX servers with the operation management of the server system. For detailed information on the operational requirements for ShadowImage, please refer to the Hitachi ShadowImage User’s Guide for the storage system. Pair creation command: Creates a new volume pair.
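As an illustrative sketch only (the group name is hypothetical; see Chapter 4 for the full command options), a ShadowImage pair could be created and later split for backup with commands of the following form, issued in the ShadowImage command environment:
# setenv HORCC_MRCF 1
# paircreate -g oradb -vl
# pairsplit -g oradb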
2.4 Hitachi TrueCopy/ShadowImage Volumes Hitachi TrueCopy commands allow you to create volume pairs consisting of one primary volume (P-VOL) and one secondary volume (S-VOL). The TrueCopy P-VOL and S-VOL can be in different storage systems. Hitachi TrueCopy provides synchronous and asynchronous copy modes. TrueCopy Asynchronous can only be used between separate storage systems (not within one storage system).
2.4.1 TrueCopy/ShadowImage/Universal Replicator Volume Status Each TrueCopy pair consists of one P-VOL and one S-VOL, and each ShadowImage pair consists of one P-VOL and up to nine S-VOLs when the cascade function is used. Table 2.1 lists and describes the Hitachi TrueCopy and ShadowImage pair status terms. The P-VOL controls the pair status for the primary and secondary volumes. The major pair statuses are SMPL, PAIR, PSUS/PSUE, and COPY/RCPY.
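For example, the current attribute and status of a pair group could be checked with the pairvolchk command (described in section 4.7); the group name here is hypothetical:
# pairvolchk -g oradb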
LEGEND for Table 2.2, Table 2.3, and Table 2.4: Accepted = Accepted and executed. When operation terminates normally, the status changes to the indicated number. Acceptable = Accepted but no operation is executed. Rejected = Rejected and operation terminates abnormally. Table 2.
Table 2.
– The state changes for pairsplit are (WD = Write Disable, WE = Write Enable):
If PVOL has non-reflected data in PAIR state:
  Behavior of OLD pairsplit at T0:   T0: PVOL_PAIR ↔ SVOL_PAIR(WD);  T1: PVOL_COPY ↔ SVOL_COPY(WD);  T2: PVOL_PSUS ↔ SVOL_SSUS(WE)
  Behavior of First pairsplit at T0: T0: PVOL_PAIR ↔ SVOL_PAIR(WD);  T1: PVOL_PSUS ↔ SVOL_COPY(WE);  T2: PVOL_PSUS ↔ SVOL_SSUS(WE)
If PVOL has been reflected all data to SVOL in PAIR state:
  Behavior of OLD pairsplit at T0 / Behavior of First pairsplit at T0: T0: PVOL_PAIR ↔ SVOL
2.4.2 TrueCopy Async, TrueCopy Sync CTG, and Universal Replicator Volumes Hitachi TrueCopy Asynchronous/Universal Replicator provides paired volumes which utilize asynchronous transfer to ensure the sequence of writing data between the primary volume and secondary volume. The sequence of writing data between the primary and secondary volumes is guaranteed within each consistency (CT) group (see Figure 2.9).
Pair resynchronization: The pairresync command resynchronizes the secondary volume based on the primary volume. This resynchronization does not guarantee the sequenced data transfer. Error suspending: Pending recordsets which have not yet been sent to the secondary volume are marked on the bitmap of the primary volume and then deleted from the queue, and then the pair status changes to PSUE.
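For example, assuming a group named oradb is defined in the configuration definition file (the name is hypothetical), a resynchronization of this kind could be requested with:
# pairresync -g oradb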
2.4.2.1 Sidefile Cache for Hitachi TrueCopy Asynchronous The first-in-first-out (FIFO) queue of each CT group is placed in an area of cache called the sidefile. The sidefile is used for transferring Hitachi TrueCopy Async recordsets to the RCU. The sidefile is not a fixed area in cache but has variable capacity for write I/Os for the primary volume.
2.4.2.2 Hitachi TrueCopy Asynchronous Transition States Hitachi TrueCopy Async volumes have special states for sidefile control during status transitions. Table 2.6 shows the transition states for Hitachi TrueCopy Synchronous and Hitachi TrueCopy Asynchronous volumes. The suspending and deleting states are temporary internal states within the RAID storage system. CCI cannot detect these transition states, because these states are reported on the previous state from the storage system.
Table 2.6 State Table for Hitachi TrueCopy Sync vs.
2.4.2.3 TrueCopy Async/Universal Replicator ERROR State In the case of an ESCON or fibre-channel (FC) failure, the S-VOL FIFO queue is missing a data block that was transferred from the P-VOL FIFO queue. The RCU waits to store the next sequenced data block in the S-VOL FIFO queue until the TrueCopy Async copy pending timeout occurs (defined using Hitachi TrueCopy remote console software).
Table 2.
[4] When fence level is async: TrueCopy Async/Universal Replicator uses asynchronous transfers to ensure the sequence of write data between the PVOL and SVOL. Writing to the PVOL is enabled, regardless of whether the SVOL status is updated or not. Thus the mirror consistency of the secondary volume is dubious (similar to “Never” fence): 2.4.3.
2.5 Applications of Hitachi TrueCopy/ShadowImage Commands This section provides examples of tasks which can be performed using Hitachi TrueCopy and/or ShadowImage commands (see Figure 2.12 - Figure 2.
(Figures 2.13 through 2.17, not reproduced here, illustrate example scenarios using paired volumes between an OLTP (DB) server and a backup/DSS server: splitting a pair to take a backup from the secondary volume after freezing and flushing the database, restoring a database from the secondary volume with and without swapping the paired volumes, transferring an active package between servers under HA software, and duplicating data onto another server using pair generation and splitting.)
2.6 Overview of Copy-on-Write Snapshot Operations
Copy-on-Write Snapshot normally creates virtual volumes for copy-on-write without specifying LUNs as S-VOLs. However, to use a SnapShot volume from the host, the SnapShot S-VOL must be mapped to a LUN. CCI therefore provides a combined command so that users and applications can use the same CCI commands as for ShadowImage, maintaining ShadowImage compatibility. SnapShot uses two techniques, called V-VOL mapping and SnapShot using copy-on-write.
2.6.1 Creating SnapShot
The CCI command for creating a COW SnapShot pair is the same as for ShadowImage. The RAID storage system determines whether the pair is a ShadowImage or SnapShot pair from the LDEV attribute of the S-VOL. A SnapShot pair is generated in the following two cases: when a V-VOL (an OPEN-0V volume) that is not mapped as a SnapShot S-VOL is specified as the S-VOL, or when no S-VOL is specified. The V-VOL has the following characteristics.
(Figure: primary and secondary volumes with copy-on-write and restore-copy data flow.)
Table 2.8 SnapShot Pairing Status
Status        Pairing Status                                                           Primary       Secondary
SMPL          Unpaired (SnapShot) volume                                               R/W enabled   R/W disabled (Note 1)
PAIR (PFUL)   The snapshot-available state; the resource has been allocated.           R/W enabled   R/W disabled
COPY          The preparing state; the resource for the snapshot is being allocated.   R/W enabled   R/W disabled
RCPY          The copying state from the snapshot to the primary volume using the restore option.
2.7 Overview of CCI Data Protection Operations
User data files normally reach a disk through several software layers, such as the file system, LVM, disk driver, SCSI protocol driver, bus adapter, and SAN switching fabric. Data corruption can result from bugs in any of these layers or from human error. The purpose of Data Protection is to prevent writing to volumes by having the RAID storage system guard the volumes. The CCI Data Protection functions include: Database Validator (sections 2.7.1, 2.7.2).
2.7.2 Restrictions on Database Validator Oracle Tablespace Location – File system-based Oracle files are not supported by DB Validator. All Oracle tablespace files must be placed on raw volumes (including LVM raw volumes) directly. – If host-based striping is used on the raw volumes, then the stripe size must be an exact multiple of the Oracle block size.
2.7.3 Data Retention Utility/Open LDEV Guard
The purpose of Data Retention Utility (DRU) (Open LDEV Guard on 9900V) is to prevent writing to volumes by having the RAID storage system guard the volumes. DRU is similar to the command support for Database Validator, setting a protection attribute on the specified LU.
Hiding from Inquiry command. The RAID storage system conceals the target volumes from the SCSI Inquiry command by responding "unpopulated volume" (0x7F) as the device type.
SIZE 0 volume.
2.7.4 Restrictions on Data Retention Utility Volumes
File systems using Data Retention Utility (Open LDEV Guard)
– When using UNIX file system volumes with DRU/Open LDEV Guard, the volumes must be mounted with the Read Only option; set the DRU/Open LDEV Guard attribute after the volumes are unmounted.
– For Windows 2003/2008 file systems, use the "-x mount" and "-x umount" options of the CCI commands in the above procedures.
2.7.5 Operations The Hitachi storage systems (9900V and later) have parameters for the protection checking to each LU, and these parameters are set through the command device by CCI. CCI supports the following commands in order to set and verify the parameters for the protection checking to each LU: raidvchkset (see section 4.12.1) This command sets the parameter for the protection checking to the specified volumes. raidvchkdsp (see section 4.12.
2.8 CCI Software Structure Figure 2.20 illustrates the CCI software structure: the CCI components on the RAID storage system, and the CCI instance on the UNIX/PC server. The CCI components on the storage system include the command device(s) and the Hitachi TrueCopy and/or ShadowImage volumes. Each CCI instance on a UNIX/PC server includes: 2.8.
2.8.2 CCI Instance Configurations The basic unit of the CCI software structure is the CCI instance. Each copy of CCI on a server is a CCI instance. Each instance uses a defined configuration file to manage volume relationships while maintaining awareness of the other CCI instances. Each CCI instance normally resides on one server (one node). If two or more nodes are run on a single server (e.g., for test operations), it is possible to activate two or more instances using instance numbers.
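As a sketch only (the instance numbers and group name are assumptions for illustration), two instances could be started on one server and a command directed to instance 0 as follows:
# horcmstart.sh 0 1
# setenv HORCMINST 0
# pairdisplay -g oradb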
(Figure: HORCM (CCI) instance structure — the user's execution environment issues TrueCopy and ShadowImage commands against the configuration definition file; HORCM provides remote command execution, configuration management, and error/event monitoring, and writes the command log, syslog file, HORCM log, HORCM trace, command trace, and command core file; TrueCopy/ShadowImage/HORC control operates on the object volumes.)
The four possible CCI instance configurations are:
– One host connected to one storage system. Each CCI instance has its own operation manager, server software, and scripts and commands, and each CCI instance communicates independently with the command device. The RAID storage system contains the command device which communicates with the CCI instances as well as the primary and secondary volumes of both CCI instances.
– One host connected to two storage systems.
2.8.3
(Figure: RAID Manager instances on HP-UX, Solaris, and Windows hosts communicating through command devices with the P-VOLs and S-VOLs on the RAID storage systems.)
Figure 2.22 RAID Manager Communication Among Different Operating Systems
Table 2.
2.8.4 Configuration Definition File The CCI configuration definition file is the text file which defines connected hosts and the volumes and groups known to the CCI instance. Physical volumes (special files) used independently by the servers are combined when paired logical volume names and group names are given to them.
HORCM_MON
#ip_address   service   poll(10ms)
HST1          horcm     1000
HORCM_CMD
#unitID 0... (seq#30014)
#dev_name          dev_name
/dev/rdsk/c0t0d0
#unitID 1...
HORCM_MON. The monitor parameter (HORCM_MON) defines the following values: Ip_address: The network address (IPv4 or IPv6) of the local host. When HORCM has two or more network addresses on different subnets or MPE/iX, enter NONE for IPv4 or NONE6 for IPv6 here. Service: Specifies the UDP port name assigned to the HORCM communication path, which is registered in “/etc/services” (“\WINNT\system32\drivers\etc\services” in Windows, “SYS$SYSROOT:[000000.TCPIP$ETC]SERVICES.DAT” in OpenVMS).
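A minimal HORCM_MON example (values taken from the examples later in this chapter; the poll and timeout columns are in units of 10 ms, as indicated by the column headers):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm     1000         3000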
(Figure: two servers, each running HORCM (CCI) with unitID0 = Ser# 30014 and unitID1 = Ser# 30015, connected through ESCON®/fibre-channel and a HUB to the command devices of storage system Ser# 30014 (unitID=0) and storage system Ser# 30015 (unitID=1).)
Figure 2.
dev_name for UNIX
In a UNIX SAN environment, there are situations in which the device file name changes, for example after a failover operation or, under Linux, after every reboot when the SAN is reconfigured. The CCI user would then need to find the new device special file and change HORCM_CMD in the CCI configuration file. Therefore, CCI also supports the following naming format, which specifies Serial#/LDEV#/Port#:HINT as the notation of the command device for UNIX:
\\.\CMD-Ser#-ldev#-Port#:HINT
HORCM_CMD
#dev_name
\\.
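A hypothetical example of this notation (the serial number, LDEV number, port, and hint values are assumptions used only for illustration):
HORCM_CMD
#dev_name
\\.\CMD-30014-250-CL1-A:/dev/rdsk/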
HORCM_DEV. The device parameter (HORCM_DEV) defines the RAID storage system device addresses for the paired logical volume names. When the server is connected to two or more storage systems, the unit ID is expressed by port# extension. Each group name is a unique name discriminated by a server which uses the volumes, the attributes of the volumes (such as database data, redo log file, UNIX file), recovery level, etc.
The following ports can only be specified for USP/NSC and USP V/VM (Basic / Option / Option / Option):
CL5: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CL6: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CL7: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CL8: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CL9: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CLA: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
CLB: an bn cn dn en fn gn hn jn kn ln mn nn pn qn rn
MU# for HORC/Universal Replicator: Defines the mirror unit number (0 - 3) of one of four possible HORC/UR bitmap associations for an LDEV. If this number is omitted, it is assumed to be zero (0). The Universal Replicator mirror is described in the MU# column by adding "h" to the number, in order to identify identical LUs as mirror descriptors for UR. The MU# for HORC must be left blank (blank is treated as "0"). HORC has only one mirror description, but UR can have four mirrors, as shown below.
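A brief hypothetical HORCM_DEV entry (the column layout and the group, device, port, target ID, and LU values are illustrative assumptions): the first line defines a TrueCopy/HORC volume with the MU# left blank, the second a ShadowImage volume on MU#0, and the third a Universal Replicator mirror identified by adding "h" to the MU#:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
oradb        oradev1    CL1-A   1          1
oradb1       oradev11   CL1-A   1          1     0
oradb2       oradev21   CL1-A   1          1     h1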
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm     1000         3000
. .
HORCM_INST
#dev_group    ip_address   service
oradb         HST1_IPA     horcm
oradb         HST1_IPB     horcm
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
NONE          horcm     1000         3000
. .
(Figure: two hosts, each running RM commands and HORCM, communicating over IPv6.)
Configuration file on one host:
HORCM_MON
#ip_address                  service   poll(10ms)   timeout(10ms)
NONE6                        horcm0    1000         3000
#fe80::209:6bff:febe:3c17    horcm0    1000         3000
#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/dev/rdsk/c1t0d0s2
Configuration file on the other host:
HORCM_MON
#ip_address                  service   poll(10ms)   timeout(10ms)
NONE6                        horcm0    1000         3000
#fe80::202:a5ff:fe55:c1d2    horcm0    1000         3000
#/********** For HORCM_CMD ****************/
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 63502)
/
In case of IPv4 mapped IPv6, it is possible to communicate between HORCM/IPv4 and HORCM/IPv6 using IPv4 mapped IPv6.
(Figure: two hosts running RM commands and HORCM, one using IPv4 and one using IPv6, communicating via IPv4-mapped IPv6.)
HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
NONE           horcm4    1000         3000
#158.214.127.
In case of mixed IPv4 and IPv6, it is possible to communicate between HORCM/IPv4 and HORCM/IPv6 and HORCM/IPv6 using IPv4 mapped IPv6 and native IPv6.
(Figure: four hosts running RM commands and HORCM — two on IPv4 and two on IPv6 — communicating via IPv4, IPv4-mapped IPv6, and native IPv6.)
HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
NONE           horcm4    1000         3000
#158.214.127.
HORCM_LDEV. The HORCM_LDEV parameter is used for specifying stable LDEV# and Serial# as the physical volumes corresponding to the paired logical volume names. Each group name is unique and typically has a name fitting its use (e.g., database data, Redo log file, UNIX file). The group and paired logical volume names described in this item must also be known to the remote server. (a) dev_group: This parameter is the same as HORCM_DEV parameter. (b) dev_name: This parameter is the same as HORCM_DEV parameter.
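A brief hypothetical HORCM_LDEV entry (the serial number, LDEV number, and group/device names are assumed values, and the column headings beyond dev_group and dev_name are shown as an assumption about the Serial#/LDEV# format this parameter uses):
HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
oradb        oradev1    30095     02:40            0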
2.8.5 Command Device The Hitachi TrueCopy/ShadowImage commands are issued by the HORC Manager (HORCM) to the RAID storage system command device. The command device is a user-selected, dedicated logical volume on the storage system which functions as the interface to the CCI software on the UNIX/PC host. The command device is dedicated to CCI communications and cannot be used by any other applications.
HORCM_CMD
#dev_name                 dev_name                 dev_name
/dev/rdsk/c1t66d36s2      /dev/rdsk/c2t66d36s2
Figure 2.31 Example of Alternate Path for Command Device for Solaris Systems
2.8.6 Alternate Command Device Function
The CCI software issues commands to the command device via the UNIX/PC raw I/O interface. If the command device fails in any way, all Hitachi TrueCopy/ShadowImage commands are terminated abnormally, and the user cannot use any commands.
2.8.7 Command Interface with Hitachi TrueCopy/ShadowImage
The CCI commands are converted into SCSI commands of a special format, which would require a SCSI pass-through driver capable of sending such special SCSI commands to the RAID storage system. Because support for such a driver depends on the OS supplier, CCI instead uses read/write commands that can easily be issued by many UNIX/PC server platforms.
2.8.7.1 Command Competition The CCI commands are asynchronous commands issued via the SCSI interface. Accordingly, if several processes issue these commands to a single LDEV, the storage system cannot take the proper action. To avoid such a problem, two or more WR commands should not be issued to a single LDEV. The command initiators should not issue two or more WR commands to a single LDEV unless the storage system can receive commands with independent initiator number * LDEV number simultaneously.
2.8.7.3 Issuing Commands for LDEV(s) within a LUSE Device
A LUSE device is a group of LDEVs regarded as a single logical unit. Since it is necessary to know about the configuration of the LDEVs when issuing a command, a new command is used. This command specifies a target LU and acquires LDEV configuration data (see Figure 2.36).
(Figure 2.36: a target LU (Port#, SCSI ID#, LU#) composed of LDEV# n, n+1, and n+2, showing the special LDEV space, the initial LBA of the command, and the command area.)
2.8.8 Logical DKC per 64K LDEVs
The Universal Storage Platform V/VM controller manages internal LDEV numbers as a four-byte data type in order to support more than 64K LDEVs. Because the LDEV number for the host interface is defined as a two-byte data type, the USP V/VM implements the concept of the logical DKC (LDKC) in order to maintain compatibility with this host interface and to make operation possible for more than 64K LDEVs without changing the host interface.
2.8.9 Command Device Guarding
In a customer environment, the command device may be written to unexpectedly by a maintenance program on a Solaris server, after which the usable instances are exhausted. As a result, CCI instances cannot start up on any server (except the server that caused the problem). This can happen through incorrect operation by maintenance personnel on the UNIX server.
(Figure 2.39: the host (RAID Manager) issues a read (instance request) to the command device, a temporary allocation (1 0) is made in the allocation table and an LBA is returned, then a write with that LBA retrieves the configuration and the actual allocation (1 1) is made.)
Figure 2.39 Improved Assignment Sequence
The command device performs the assignment of an instance in two phases, a "temporary allocation (1 0)" and an "actual allocation (1 1)", in the instance assignment table.
2.8.10 CCI Software Files The CCI software product consists of files supplied to the user, log files created internally, and files created by the user. These files are stored on the local disk in the server machine. Table 2.10 lists the CCI files which are provided for UNIX®-based systems. Table 2.11 lists the CCI files which are provided for Windows-based systems. Table 2.12 lists the CCI files which are provided for OpenVMS®-based systems. Table 2.10 CCI Files for UNIX-based Systems No.
Table 2.11 CCI Files for Windows-based Systems
No.   Title                      File name                          Command name
001   HORCM                      \HORCM\etc\horcmgr.exe             horcmd
002   HORCM_CONF                 \HORCM\etc\horcm.conf              -
003   Takeover                   \HORCM\etc\horctakeover.exe        horctakeover
004   Accessibility check        \HORCM\etc\paircurchk.exe          paircurchk
005   Pair generation            \HORCM\etc\paircreate.exe          paircreate
006   Pair split                 \HORCM\etc\pairsplit.exe           pairsplit
007   Pair re-synchronization    \HORCM\etc\pairresync.
No.   Title                              File name                          Command name
034   Volume check                       \HORCM\usr\bin\pairvolchk.exe      pairvolchk
035   Synchronous waiting                \HORCM\usr\bin\pairsyncwait.exe    pairsyncwait
036   Pair configuration confirmation    \HORCM\usr\bin\pairdisplay.exe     pairdisplay
037   RAID scanning                      \HORCM\usr\bin\raidscan.exe        raidscan
038   Connection confirmation            \HORCM\usr\bin\raidqry.
Table 2.12 CCI Files for OpenVMS®-based Systems
No.   Title                         File name                                  Command name    User
001   HORCM                         $ROOT:[HORCM.etc]horcmgr.exe               horcmd          sys
002   HORCM_CONF                    $ROOT:[HORCM.etc]horcm.conf                –               sys
003   Takeover                      $ROOT:[HORCM.usr.bin]horctakeover.exe      horctakeover    sys
004   Volume Accessibility check    $ROOT:[HORCM.usr.bin]paircurchk.exe        paircurchk      sys
005   Pair generation               $ROOT:[HORCM.usr.bin]paircreate.exe        paircreate      sys
006   Pair splitting                $ROOT:[HORCM.usr.bin]pairsplit.
2.8.11 Log and Trace Files The CCI software (HORCM) and Hitachi TrueCopy and ShadowImage commands maintain start-up log files, execution log files, and trace files which can be used to identify the causes of errors and keep records of the status transition history of the paired volumes. Please refer to Appendix A for a complete description of the CCI log and trace files. 2.8.12 User-Created Files Script Files. CCI supports scripting to provide automated and unattended copy operations.
2.9 Configuration Definition File Figure 2.36 - Figure 2.44 show examples of CCI configurations, the configuration definition file(s) for each configuration, and examples of CCI command use for each configuration. The command device is defined using the system raw device name (character-type device file name). For example, the command devices for Figure 2.
Figure 2.40 Hitachi TrueCopy Remote Configuration Example
Example of CCI commands with HOSTA:
Designate a group name (Oradb), with the local host volume as the P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in Figure 2.40).
The command device is defined using the system raw device name (character-type device file name). For example, the command devices for Figure 2.41 would be:
HP-UX:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1
Solaris:
HORCM_CMD of HOSTA = /dev/rdsk/c0t0d1s2
HORCM_CMD of HOSTB = /dev/rdsk/c1t0d1s2
Note: For Solaris operations with CCI version 01-09-03/04 and higher, the command device does not need to be labeled during the format command.
(Figure 2.41: HOSTA (IP address HST1) and HOSTB (IP address HST2) on a LAN, each with its HORCM configuration file, command device, and paired volumes on the connected RAID storage systems.)
Example of CCI commands with HOSTA:
Designate a group name (Oradb), with the local host volume as the P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in Figure 2.41).
Designate a volume name (oradev1), with the local host volume as the P-VOL:
The command device is defined using the system raw device name (character-type device file name). The command device defined in the configuration definition file must be defined for each instance. If one command device is shared between different instances on the same SCSI port, then the number of instances is limited to 16 per command device. If this restriction is exceeded, use a different SCSI path for each instance. For example, the command devices for Figure 2.
(Figure 2.42: HOSTA (IP address HST1) running two instances, HORCMINST0 and HORCMINST1, each with its own configuration file, sharing the command device and paired volumes on the connected RAID storage system.)
Example of CCI commands with Instance-0 on HOSTA:
When the command execution environment is not set, set an instance number.
For C shell: # setenv HORCMINST 0
For Windows: set HORCMINST=0
Designate a group name (Oradb), with the local instance volume as the P-VOL:
# paircreate -g Oradb -f never -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in Figure 2.42).
The command device is defined using the system raw device name (character-type device file name). For example, the command devices for Figure 2.
Linux, zLinux:
HORCM_CMD of HOSTA = /dev/sdX
HORCM_CMD of HOSTB = /dev/sdX
HORCM_CMD of HOSTC = /dev/sdX
HORCM_CMD of HOSTD = /dev/sdX
where X = device number assigned by Linux, zLinux
IRIX:
HORCM_CMD for HOSTA ... /dev/rdsk/dks0d0l1vol or /dev/rdsk/node_wwn/lun1vol/c0p0
HORCM_CMD for HOSTB ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
HORCM_CMD for HOSTC ... /dev/rdsk/dks1d0l1vol or /dev/rdsk/node_wwn/lun1vol/c1p0
HORCM_CMD for HOSTD ...
(Figure 2.43: HOSTA (HST1), HOSTB (HST2), HOSTC (HST3), and HOSTD (HST4) on a LAN, each with its HORCM configuration file and device files, connected through SCSI ports to the RAID storage system.)
Configuration file for HOSTA (/etc/horcm.conf) Configuration file for HOSTB (/etc/horcm.
Example of CCI commands with HOSTA (group Oradb):
When the command execution environment is not set, set the HORCC_MRCF environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
Designate a group name (Oradb), with the local host volume as the P-VOL:
# paircreate -g Oradb -vl
This command creates pairs for all LUs assigned to group Oradb in the configuration definition file (two pairs for the configuration in Figure 2.43).
Example of CCI commands with HOSTA (group Oradb1):
When the command execution environment is not set, set the HORCC_MRCF environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
Designate a group name (Oradb1), with the local host volume as the P-VOL:
# paircreate -g Oradb1 -vl
This command creates pairs for all LUs assigned to group Oradb1 in the configuration definition file (two pairs for the configuration in Figure 2.43).
Example of CCI commands with HOSTA (group Oradb2):
When the command execution environment is not set, set the HORCC_MRCF environment variable.
For C shell: # setenv HORCC_MRCF 1
For Windows: set HORCC_MRCF=1
Designate a group name (Oradb2), with the local host volume as the P-VOL:
# paircreate -g Oradb2 -vl
This command creates pairs for all LUs assigned to group Oradb2 in the configuration definition file (two pairs for the configuration in Figure 2.43).
The command device is defined using the system raw device name (character-type device file name). The command device defined in the configuration definition file must be defined for each instance. If one command device is shared between different instances on the same SCSI port, then the number of instances is limited to 16 per command device. If this restriction is exceeded, use a different SCSI path for each instance. For example, the command devices for Figure 2.
(Figure 2.44: HOSTA (IP address HST1) running two instances, HORCMINST0 and HORCMINST1, each with its own configuration file, connected through fibre ports to the command device and paired volumes on the RAID storage system.)
Example of CCI commands with Instance-0 on HOSTA:
When the command execution environment is not set, set an instance number.
For C shell:  # setenv HORCMINST 0
              # setenv HORCC_MRCF 1
For Windows:  set HORCMINST=0
              set HORCC_MRCF=1
Designate a group name (Oradb), with the local instance volume as the P-VOL:
Designate a group name and display pair status.
# pairdisplay -g oradb -m cas
Group    PairVol(L/R)  (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb    oradev1(L)    (CL1-D , 2, 1-0)30053 268..S-VOL PAIR,-----  266     -
oradb1   oradev11(L)   (CL1-D , 2, 1-1)30053 268..P-VOL PAIR,30053  270     -
oradb2   oradev21(L)   (CL1-D , 2, 1-2)30053 268..SMPL  ----,----- ----     -
oradb    oradev1(R)    (CL1-A , 1, 1-0)30053 266..P-VOL PAIR,30053  268     -
oradb    oradev2(L)    (CL1-D , 2, 2-0)30053 269..
Windows 2008/2003/2000:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... \\.\CMD-Ser#-ldev#-Port#
Windows NT:
HORCM_CMD of HOSTA(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm.conf) ... \\.\CMD-Ser#-ldev#-Port#
HORCM_CMD of HOSTB(/etc/horcm0.conf) ... \\.\CMD-Ser#-ldev#-Port#
Linux, zLinux:
HORCM_CMD of HOSTA(/etc/horcm.conf) ...
(Figure: HOSTA (IP address HST1) running HORCMINST, and HOSTB (IP address HST2) running HORCMINST and HORCMINST0, each with its configuration files and device files, connected through fibre ports (C0, C1) and fibre-channel A to the RAID storage systems.)
Example of CCI commands with HOSTA and HOSTB:
Designate a group name (Oradb) in the Hitachi TrueCopy environment of HOSTA:
# paircreate -g Oradb -vl
Designate a group name (Oradb1) in the ShadowImage environment of HOSTB. When the command execution environment is not set, set HORCC_MRCF.
Designate a group name and display pair status in the TrueCopy environment of HOSTB.
# pairdisplay -g oradb -m cas
Designate a group name and display pair status in the ShadowImage environment of HOSTB.
# pairdisplay -g oradb1 -m cas
Group    PairVol(L/R)  (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1   oradev11(L)   (CL1-D , 2, 1-0)30053 268..
2.9.1 Configuration Definition for Cascading Volume Pairs
The CCI software (HORCM) is capable of keeping track of up to seven pair associations per LDEV (1 for TC/UR, 3 for UR, 3 for SI/Snapshot, 1 for Snapshot). With this management, up to seven groups per LU, describing seven mirror descriptors, can be assigned in a configuration definition file.
(Figure: mirror descriptors for one LDEV — TrueCopy MU#0, UR MU#1-#3, ShadowImage MU#1-2, and Snapshot MU#3-63, assigned to groups Oradb, Oradb1, Oradb2-3, Oradb4-6, and Oradb7~.)
Figure 2.
Table 2.
2.9.1.2 Cascade Function and Configuration Files
A volume in a cascading connection is described as an entity in a configuration definition file on the same instance, and the connections between volumes are classified through the mirror descriptors. In the case of a Hitachi TrueCopy/ShadowImage cascading connection, too, the volume entity is described in a configuration definition file on the same instance. Figure 2.47 shows an example of this.
2.9.1.3 ShadowImage
ShadowImage is a mirror configuration within one storage system. Therefore, a ShadowImage volume in a cascading connection can be described in two configuration definition files. In the case of a cascading connection of ShadowImage only, the specified group is assigned to the ShadowImage mirror descriptor (MU#), explicitly describing "0" as the MU# for ShadowImage. Figure 2.48 - Figure 2.
(Figure: the cascading configuration — S-VOL 270 (/dev/rdsk/c0t3d4) in group Oradb1, cascaded from S/P-VOL 268, which is also paired with P-VOL 266 in group Oradb and with S-VOL 272 in group Oradb2.)
# pairdisplay -d /dev/rdsk/c0t3d4 -m cas
Group    PairVol(L/R)  (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1   oradev11(L)   (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,-----  268     -
oradb1   oradev11(R)   (CL1-D , 3, 2-1)30053 268..P-VOL PAIR,30053  270     -
oradb    oradev1(R)    (CL1-D , 3, 2-0)30053 268..S-VOL PAIR,-----  266     -
oradb2   oradev21(R)   (CL1-D , 3, 2-2)30053 268..P-VOL PAIR,30053  272     -
Figure 2.
2.9.1.4 Cascading Connections for Hitachi TrueCopy and ShadowImage
The cascading connections for Hitachi TrueCopy/ShadowImage can be set up by using three configuration definition files that describe the cascading volume entity in a configuration definition file on the same instance. The mirror descriptors of ShadowImage and Hitachi TrueCopy both use MU#0; however, the Hitachi TrueCopy mirror descriptor does not describe "0" as the MU# in the configuration file (the MU# column is left blank).
Figure 2.50 - Figure 2.53 show Hitachi TrueCopy/ShadowImage cascading configurations and the pairdisplay information for each configuration.
(Figure: P-VOL 266 (Seq#30052) paired via oradb with S/P-VOL 268 (Seq#30053), which is cascaded via Oradb1 and Oradb2 to volumes 270 and 272.)
# pairdisplay -g oradb -m cas
Group    PairVol(L/R)  (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV#
oradb    oradev1(L)    (CL1-D , 3, 0-0)30052 266..SMPL  ----,-----  ----
oradb    oradev1(L)    (CL1-D , 3, 0) 30052 266..P-VOL COPY,30053   268
oradb1   oradev11(R)   (CL1-D , 3, 2-0)30053 268..
(Figure: S/P-VOL 268 (Seq#30053) paired via Oradb with volume 266 (Seq#30052) and cascaded via Oradb1 and Oradb2 to S-VOLs 270 and 272.)
# pairdisplay -g oradb1 -m cas
Group    PairVol(L/R)  (Port#,TID,LU-M),Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
oradb1   oradev11(L)   (CL1-D , 3, 2-0)30053 268..P-VOL PAIR,30053  270     -
oradb2   oradev21(L)   (CL1-D , 3, 2-1)30053 268..P-VOL PSUS,30053  272     W
oradb    oradev1(L)    (CL1-D , 3, 2) 30053 268..S-VOL PAIR,-----   266     -
oradb1   oradev11(R)   (CL1-D , 3, 4-0)30053 270..S-VOL PAIR,-----  268     -
Figure 2.
2.10 Error Monitoring and Configuration Confirmation CCI supports error monitoring and configuration confirmation commands for linkage with the system operation management of the UNIX/PC server. 2.10.1 Error Monitoring for Paired Volumes The HORC Manager (HORCM) monitors all volumes defined in the configuration definition file at a certain interval regardless of the Hitachi TrueCopy/ShadowImage commands.
2.10.3 Pair Status Display and Configuration Confirmation The CCI pairing function (configuration definition file) combines the physical volumes in the storage system used independently by the servers. Therefore, you should make sure that the servers’ volumes are combined as intended by the server system administrator. The pairdisplay command displays the pairing status to enable you to verify the completion of pair creation or pair resynchronization (see Figure 2.56).
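For example (the group name is hypothetical), after creating or resynchronizing the pairs in a group, completion could be verified by checking that every volume in the output reports PAIR status:
# pairdisplay -g oradb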
2.11 Recovery Procedures for HA Configurations
After configuring and starting Hitachi TrueCopy operations, the system administrator should conduct operational tests for possible failures in the system. In normal operation, service personnel obtain the information needed to identify the failure cause on the SVP. However, the trigger for this action should be given by a Hitachi TrueCopy operation command. Figure 2.58 shows the system failover and recovery procedure. Figure 2.
(Figure: transition from the mirroring state to the regression state and back to the recovery state. (1) The P-VOL detects a failure in the S-VOL and causes suspension of the duplicated writing; operation continues in the regression state on the P-VOL. (2) After the S-VOL recovers, pairresync copies the differential data, or (3) pairsplit -S followed by paircreate performs an entire copy, returning the pair to the mirroring state.)
Chapter 3 Preparing for CCI Operations This chapter covers the following topics: System requirements (section 3.1) Hardware installation (section 3.2) Software installation (section 3.3) Creating/editing the configuration file (section 3.4) Porting notice for OpenVMS (section 3.5) CCI startup (section 3.6) Starting CCI as a Service (Windows Systems) (section 3.
3.1 System Requirements CCI operations involve the CCI software on the UNIX/PC server host and the RAID storage system(s) containing the command device(s) and the Hitachi TrueCopy and/or ShadowImage pair volumes. The system requirements for CCI are: CCI software product. The CCI software is supplied on CD-ROM or diskette. The CCI software files take up 2.5 MB of space. The log files can take up to 3 MB of space. Host platform. CCI is supported on the following host platforms. See Table 3.
Hitachi RAID storage system(s). The Hitachi TagmaStore USP, Hitachi TagmaStore NSC, Lightning 9900V, and Lightning 9900 storage systems support CCI operations. Hitachi TrueCopy Synchronous and Asynchronous are supported for all storage system models. Please contact your Hitachi Data Systems representative for further information on storage system configurations.
3.1.1 Supported Platforms
Table 3.1 – Table 3.8 list the supported platforms for CCI operations.
Table 3.1 Supported Platforms for TrueCopy
Vendor   Operating System           Failover Software    Volume Manager   I/O Interface
Sun      Solaris 2.5                First Watch          VxVM             SCSI/Fibre
         Solaris 10 /x86            —                    VxVM             Fibre
         HP-UX 10.20/11.0/11.2x     MC/Service Guard     LVM, SLVM        SCSI/Fibre
         HP-UX 11.2x on IA64*       MC/Service Guard     LVM, SLVM        Fibre
         Digital UNIX 4.0           TruCluster           LSM              SCSI
         Tru64 UNIX 5.
Table 3.2 Supported Platforms for ShadowImage
Vendor   Operating System           Failover Software    Volume Manager   I/O Interface
Sun      Solaris 2.5                First Watch          VxVM             SCSI/Fibre
         Solaris 10 /x86            —                    VxVM             Fibre
         HP-UX 10.20/11.0/11.2x     MC/Service Guard     LVM, SLVM        SCSI/Fibre
         HP-UX 11.2x on IA64*       MC/Service Guard     LVM, SLVM        Fibre
         Digital UNIX 4.0           TruCluster           LSM              SCSI
         Tru64 UNIX 5.0             TruCluster           LSM              SCSI/Fibre
         OpenVMS 7.3-1              —                    —                Fibre
         DYNIX/ptx 4.4              ptx/Custer           LVM              SCSI/Fibre
         AIX 4.
Table 3.3 Supported Platforms for TrueCopy Async
Vendor   Operating System           Failover Software    Volume Manager   I/O Interface
Sun      Solaris 2.5                First Watch          VxVM             SCSI/Fibre
         Solaris 10 /x86            —                    VxVM             Fibre
         HP-UX 10.20/11.0/11.2x     MC/Service Guard     LVM, SLVM        SCSI/Fibre
         HP-UX 11.2x on IA64*       MC/Service Guard     LVM, SLVM        Fibre
         Digital UNIX 4.0           TruCluster           LSM              SCSI
         Tru64 UNIX 5.0             TruCluster           LSM              SCSI/Fibre
         OpenVMS 7.3-1              —                    —                Fibre
         DYNIX/ptx 4.4              ptx/Custer           LVM              SCSI/Fibre
         AIX 4.
Table 3.4 Supported Platforms for Universal Replicator
Vendor      Operating System                                          Failover Software    Volume Manager   I/O Interface
SUN         Solaris2.8                                                VCS                  VxVM             Fibre
            Solaris 10 /x86                                           —                    VxVM             Fibre
            HP-UX 11.0/11.2x                                          MC/Service Guard     LVM, SLVM        Fibre
            HP-UX 11.2x on IA64*                                      MC/Service Guard     LVM, SLVM        Fibre
IBM         AIX 5.1                                                   HACMP                LVM              Fibre
Microsoft   Windows 2000, 2003, 2008                                  MSCS                 LDM              Fibre
            Windows 2003/2008 on IA64* / Windows 2003/2008 on EM64T   MSCS                 LDM              Fibre/iSCSI
Red Hat     Linux AS 2.1, 3.0, 4.
Table 3.6 Supported Guest OS for VMware
VM Vendor: VMware ESX Server 2.5.1 or later using Linux Kernel 2.4.9 [Note 1]
Layer    Guest OS               CCI Support Confirmation   Volume Mapping   I/O Interface
Guest    Windows 2003 SP1       Confirmed                  RDM*             Fibre
         Windows 2000 Server    Unconfirmed                RDM*             Fibre
         Windows NT 4.0         Unconfirmed
         RHAS 3.0               Confirmed                  RDM*             Fibre
         SLES 9.0               Unconfirmed
         Solaris 10 u3 (x86)    Confirmed                  RDM*             Fibre
SVC      Linux Kernel 2.4.9     Confirmed                  Direct           Fibre
Client   AIX 5.
Table 3.
3.1.2 Using CCI with Hitachi and Other RAID Storage Systems Table 3.9 shows the related two controls between CCI and the RAID storage system type (Hitachi or HP® XP). Figure 3.1 shows the relationship between the APP, CCI, and RAID storage system.
(Figure 3.1: an application can use the common API/CLI through either CCI on a Hitachi array or RAID Manager XP on an HP XP array; the XP API/CLI can be used on the XP array only. Common API/CLI commands are allowed only when both are installed.)
Figure 3.1 Relationship between APP, CCI, and Storage System
3.1.3 Restrictions on zLinux
In the following example, zLinux defines the Open Volumes that are connected to FCP as /dev/sd*. Also, the mainframe volumes (3390-xx) that are connected to FICON are defined as /dev/dasd*.
The restrictions for using CCI with zLinux are: Command device. CCI uses a SCSI Path-through driver to access the command device. As such, the command device must be connected through FCP adaptors. Open Volumes via FCP. You can control the ShadowImage and TrueCopy pair operations without any restrictions. Mainframe (3390-9A) Volumes via FICON. You cannot control the volumes (3390-9A) that are directly connected to FICON for ShadowImage pair operations.
3.1.4 Restrictions on VM
3.1.4.1 VMware ESX Server
Whether CCI (RM) runs or not depends on VMware's support of the guest OS. In addition, the guest OS depends on VMware's support of the virtual hardware (HBA). Therefore, the following guest OS support and restrictions must be observed when using CCI on VMware.
(Figure: a VMware ESX Server host with guest OSes running CCI#1, CCI#2, and CCI#3, connected through the HBA to a Hitachi RAID storage system with a command device (-CM) for CCI #1 and #2 and a command device (-CM) for CCI #3.)
Figure 3.
5. Lun sharing between Guest and Host OS. It is not supported to share a command device or a normal Lun between guest OS and host OS. 6. About running on SVC. The ESX Server 3.0 SVC (service console) is a limited distribution of Linux based on Red Hat Enterprise Linux 3, Update 6 (RHEL 3 U6). The service console provides an execution environment to monitor and administer the entire ESX Server host. The CCI user will be able to run CCI by installing “CCI for Linux” on SVC.
CCI on AIX VIO should be used with the following restrictions: 1. Command device. CCI uses a SCSI pass-through driver to access the command device. Therefore, the command device must be mapped as a raw device in physical mapping mode. At least one command device must be assigned for each VIO client.
3.1.5 About Platforms Supporting IPv6
Library and System Call for IPv6
CCI uses the following IPv6 library functions to obtain and convert a hostname to an IPv6 address.
IPv6 library functions to resolve hostname and IPv6 address:
– getaddrinfo()
– inet_pton()
– inet_ntop()
Socket system calls to communicate using UDP/IPv6:
– socket(AF_INET6)
– bind(), sendmsg(), sendto(), rcvmsg(), recvfrom()…
If CCI linked these functions directly into the object (exe), a core dump could occur on an OLD platform (e.g.
Environment Variable
CCI loads and links the library for IPv6 by specifying a PATH as follows.
For Windows systems: Ws2_32.dll
For HP-UX (PA/IA) systems: /usr/lib/libc.sl
However, CCI may need to specify a different PATH to use the library for IPv6. For this reason, CCI also supports the following environment variables for specifying a PATH:
$IPV6_DLLPATH (valid for only HP-UX, Windows): This variable is used to change the default PATH for loading the Library for IPv6.
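For example, on an HP-UX host the default path shown above could be set explicitly in the C shell (a sketch only; the path must match the actual location of the IPv6-capable library on the system):
# setenv IPV6_DLLPATH /usr/lib/libc.sl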
3.2 Hardware Installation Installation of the hardware required for CCI is performed by the user and the Hitachi Data Systems representative. To install the hardware required for CCI operations: 1. User: a) Identify the Hitachi TrueCopy and/or ShadowImage primary and secondary volumes, so that the CCI hardware and software components can be installed and configured properly. b) Make sure that the UNIX/PC server hardware and software are properly installed and configured (see section 3.
3.3 Software Installation Installation of the CCI software on the host server(s) is performed by the user, with assistance as needed from the Hitachi Data Systems representative. 3.3.1 Software Installation for UNIX Systems If you are installing CCI from CD-ROM, please use the RMinstsh and RMuninst scripts on the CD-ROM to automatically install and uninstall the CCI software. For other media, please use the following instructions.
5. Execute the HORCM installation command:
   # /HORCM/horcminstall.sh
6. Verify installation of the proper version using the raidqry command:
   # raidqry -h
   Model: RAID-Manager/HP-UX
   Ver&Rev: 01-22-03/02
   Usage: raidqry [options]
Version up. To install a new version of the CCI software:
1. Confirm that HORCM is not running. If it is running, shut it down:
   One CCI instance:  # horcmshutdown.sh
   Two CCI instances: # horcmshutdown.sh 0 1
3.3.2 Software Installation for Windows Systems
Make sure to install CCI on all servers involved in CCI operations. If TCP/IP networking is not established, install the Windows networking components and add the TCP/IP protocol.
To install the CCI software on a Windows system:
1. If a previous version of CCI is already installed, uninstall it as follows:
   a) Confirm that HORCM is not running.
New installation. To install the CCI software on an OpenVMS® system:
1. Insert and mount the provided CD or diskette.
2. Execute the following command:
   $ PRODUCT INSTALL RM /source=Device:[PROGRAM.RM.OVMS]/LOG
   _$ /destination=SYS$POSIX_ROOT:[000000]
   Device:[PROGRAM.RM.OVMS] is where HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
3. Verify installation of the proper version using the raidqry command:
   $ raidqry -h
   Model: RAID-Manager/OpenVMS
   Ver&Rev: 01-22-03/02
   Usage: raidqry [options]
4.
3.3.4 Changing the CCI User (UNIX Systems) The CCI software is initially configured to allow only the root user (system administrator) to execute CCI commands. If desired (e.g., CCI administrator does not have root access), the system administrator can change the CCI user from root to another user name. To change the CCI user: 1.
3.3.5 Changing the CCI User (Windows Systems) Usually, RAID Manager commands can only be executed by the system administrator in order to directly open the PhysicalDrive. When an administrator of CCI does not have an “administrator” privilege or there is a difference between the system administrator and the CCI administrator, the CCI administrator can use CCI commands as follows: System Administrator Tasks 1. Add a user_name to the PhysicalDrive.
CCI Administrator Tasks 1. Establish the HORCM (/etc/horcmgr) startup environment. By default, the configuration definition file is located in the following directory: %SystemDrive%:\windows\ Because users cannot write to this directory, the CCI administrator must change the directory by using the HORCM_CONF variable. For example: C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin\horcm10.
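Continuing the example above (the folder, file name, and instance number are illustrative, and this is a hedged sketch rather than a required sequence), the CCI administrator could point HORCM at a writable configuration file and then start that instance:
C:\HORCM\etc\>set HORCM_CONF=C:\Documents and Settings\RMadmin\horcm10.conf
C:\HORCM\etc\>set HORCMINST=10
C:\HORCM\etc\>horcmstart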
3.3.6 Uninstallation Uninstalling permanently removes software. Uninstallation for UNIX systems. To uninstall the CCI software: 1. Confirm that CCI (HORCM) is not running. If it is running, shut it down: One CCI instance: # horcmshutdown.sh Two CCI instances: # horcmshutdown.sh 0 1 If Hitachi TrueCopy/ShadowImage commands are running in the interactive mode, terminate the interactive mode and exit these commands using -q option. 2.
3.4 Creating/Editing the Configuration File The configuration definition file is a text file which is created and/or edited using any standard text editor (e.g., UNIX vi editor, Windows Notepad). A sample configuration definition file, HORCM_CONF (/HORCM/etc/horcm.conf), is included with the CCI software. This file should be used as the basis for creating your configuration definition file(s).
Table 3.10 Configuration (HORCM_CONF) Parameters
Parameter                  Default value   Type                                Limit
ip_address                 None            Character string                    64 characters
service                    None            Character string or numeric value   15 characters
poll (10 ms)               1000            Numeric value                       None (see Note)
timeout (10 ms)            3000            Numeric value                       None (see Note)
dev_name for HORCM_DEV     None            Character string                    31 characters
dev_group                  None            Character string                    31 characters; recommended value = 8 char.
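For orientation, the following minimal sketch shows the overall shape of a configuration definition file; the host names, service name, command device path, and port/TID/LU numbers are illustrative only and must be taken from your own environment (complete worked examples appear later in this chapter, in section 3.5):
HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
HOSTA         horcm     1000         3000

HORCM_CMD
#dev_name
/dev/rdsk/c0t0d1

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm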
3.5 Porting Notice for OpenVMS
On OpenVMS, the UNIX system calls are supported as functions of the CRTL (C Run Time Library) in the user process, and the CRTL for OpenVMS does not fully support POSIX and the POSIX shell as UNIX does. In addition, RAID Manager uses the UNIX domain socket for IPC (Inter Process Communication), but OpenVMS does not support the AF_UNIX socket.
For example, using the Detached process: if you want the HORCM daemon to run in the background, you must create a Detached LOGINOUT.EXE process by using the OpenVMS 'RUN /DETACHED' command, and you must create the command file given to LOGINOUT.EXE. The following are examples of the "loginhorcm*.com" files given to SYS$INPUT for LOGINOUT.EXE; in these examples, "VMS4$DKB100:[SYS0.SYSMGR.]" was defined as SYS$POSIX_ROOT. loginhorcm0.
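A hedged sketch of this step follows; the RUN /DETACHED qualifiers are standard DCL, but the process name, device, and file names are taken from the example above and are illustrative only:
$ RUN /DETACHED SYS$SYSTEM:LOGINOUT.EXE /PROCESS_NAME=horcm0 -
_$ /INPUT=VMS4$DKB100:[SYS0.SYSMGR]loginhorcm0.com -
_$ /OUTPUT=VMS4$DKB100:[SYS0.SYSMGR]run0.out /ERROR=VMS4$DKB100:[SYS0.SYSMGR]run0.err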
(5) Command device. CCI uses the SCSI class driver to access the command device on the 9900V/9900, since OpenVMS does not provide a raw I/O device as UNIX does, and "DG*, DK*, GK*" are defined as the logical names for the device. The SCSI class driver requires the following privileges: DIAGNOSE and PHY_IO or LOG_IO (for details see the OpenVMS manual). In CCI version 01-12-03/03 or earlier, you need to define the physical device as DG*, DK*, or GK* by using the DEFINE/SYSTEM command.
(9) Option syntax and case sensitivity. VMS users are not accustomed to case-sensitive commands and the UNIX-style "-xxx" option syntax. CCI therefore adjusts case sensitivity and the "-xxx" option syntax to match the expectations of VMS users as much as possible. CCI allows "/xxx" syntax for options as well as the "-xxx" syntax, but the "/xxx" form is treated as a minor (secondary) form.
(11) Privileges for using RAID Manager. A user account for RAID Manager must have the same privileges as "SYSTEM" so that it can use the SCSI class driver and mailbox driver directly. However, some OpenVMS system administrators may not allow RAID Manager to run from the system account (equivalent to root on UNIX); in that case, it is recommended to create another account on the system, such as "RMadmin", that has privileges equivalent to "SYSTEM".
For installing:
$ PRODUCT INSTALL RM /source=Device:[directory]/LOG
_$ /destination=SYS$POSIX_ROOT:[000000]
Device:[directory] is where HITACHI-ARMVMS-RM-V0122-2-1.PCSI exists
:
:
$ PRODUCT SHOW PRODUCT RM
----------------------------------------- ----------- -----------
PRODUCT                                    KIT TYPE    STATE
----------------------------------------- ----------- -----------
HITACHI ARMVMS RM V1.
3.5.2 Known Issues
Rebooting in PAIR state (writing disabled). OpenVMS does not show write-disabled volumes (e.g., SVOL_PAIR) at system start-up; therefore the SVOLs are hidden when rebooting in PAIR state or SUSPEND (read-only) mode. You can verify with the "show device" and "inqraid" commands that the SVOLs are not shown after reboot, as below (notice that the DGA148 and DGA150 devices are SVOL_PAIR).
3.5.3 Start-up Procedures Using Detached Process on DCL
(1) Create the shareable logical name for RAID if it is not defined initially. CCI (RAID Manager) requires the physical devices ($1$DGA145…) to be defined as DG*, DK*, or GK* by using the SHOW DEVICE and DEFINE/SYSTEM commands; in CCI version 01-12-03/03 or earlier, the devices do not then need to be mounted.
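A hedged sketch of this step, using the device numbers that appear in the examples below (your device and logical names will differ):
$ SHOW DEVICE $1$DGA
$ DEFINE/SYSTEM DKA145 $1$DGA145:
$ DEFINE/SYSTEM DKA146 $1$DGA146: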
(3) Discover and describe the command device on SYS$POSIX_ROOT:[etc]horcm0.conf.
$ inqraid DKA145-151 -CLI
DEVICE_FILE   PORT    SERIAL   LDEV   CTG
DKA145        CL1-H   30009    145    -
DKA146        CL1-H   30009    146    -
DKA147        CL1-H   30009    147    -
DKA148        CL1-H   30009    148    -
DKA149        CL1-H   30009    149    -
DKA150        CL1-H   30009    150    -
DKA151        CL1-H   30009    151    -

SYS$POSIX_ROOT:[etc]horcm0.conf
HORCM_MON
#ip_address   service
127.0.0.
(6) Describe the known HORCM_DEV on SYS$POSIX_ROOT:[etc]horcm*.conf.
For horcm0.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0
VG01         oradb2     CL1-H   0          4     0
VG01         oradb3     CL1-H   0          6     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          3     0
VG01         oradb2     CL1-H   0          5     0
VG01         oradb3     CL1-H   0          7     0
3.5.4 Command Examples in DCL
(1) Setting the environment variable by using Symbol.
$ HORCMINST := 0
$ HORCC_MRCF := 1
$ raidqry -l
No  Group   Hostname   HORCM_ver     Uid   Serial#   Micro_ver     Cache(MB)
 1  ---     VMS4       01-22-03/02   0     30009     50-04-00/00   8192
$
$ pairdisplay -g VG01 -fdc
Group  PairVol(L/R)  Device_File   M ,Seq#,LDEV#.P/S,Status, % ,P-LDEV# M
VG01   oradb1(L)     DKA146        0  30009   146..S-VOL PAIR,  100   147
VG01   oradb1(R)     DKA147        0  30009   147..P-VOL PAIR,  100   146
VG01   oradb2(L)     DKA148        0  30009   148..
(6) Making the configuration file automatically. You can omit steps (3) to (6) of the start-up procedures by using the mkconf command.
$ type dev_file
DKA145-150
$
$ pipe type dev_file | mkconf -g URA -i 9
starting HORCM inst 9
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
HORCM inst 9 finished successfully.
(7) Using $1$* naming as the native device name. You can use the native device name without the DEFINE/SYSTEM command by specifying the $1$* name directly.
3.5.5 Start-up Procedures in Bash
Using CCI (RAID Manager) through bash is not recommended, because bash is not provided as an official release in OpenVMS 7.3-1.
(1) Create the shareable logical name for RAID if it is not defined initially. You need to define the physical devices ($1$DGA145…) as DG*, DK*, or GK* by using the SHOW DEVICE and DEFINE/SYSTEM commands; the devices do not then need to be mounted.
(3) Discover and describe the command device on /etc/horcm0.conf.
(6) Describe the known HORCM_DEV on /etc/horcm*.conf.
For horcm0.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          2     0
VG01         oradb2     CL1-H   0          4     0
VG01         oradb3     CL1-H   0          6     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTB        horcm1

For horcm1.conf:
HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         oradb1     CL1-H   0          3     0
VG01         oradb2     CL1-H   0          5     0
VG01         oradb3     CL1-H   0          7     0

HORCM_INST
#dev_group   ip_address   service
VG01         HOSTA        horcm0

(7) Start 'horcmstart 0 1'.
3.6 CCI Startup After you have installed the CCI software (see section 3.3), set the configuration definition file(s) (see section 3.4), and (for OpenVMS only) followed the porting requirements and restrictions (see section 3.5), you can begin using the CCI software (HORCM) to perform Hitachi TrueCopy and/or ShadowImage operations on the attached storage systems. 3.6.1 Startup for UNIX Systems One Instance. To start up one instance of CCI on a UNIX system: 1.
Two Instances. To start up two instances of CCI on a UNIX system:
1. Modify /etc/services to register the port name/number (service) of each configuration definition file. The port name/number must be different for each CCI instance (illustrative entries are shown after this list):
   horcm0   xxxxx/udp     xxxxx = the port name/number for horcm0.conf
   horcm1   yyyyy/udp     yyyyy = the port name/number for horcm1.conf
2. If you want HORCM to start automatically each time the system starts up, add /etc/horcmstart.sh 0 1 to the system automatic start-up file (e.g.
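A hedged sketch of these steps; the port numbers are placeholders chosen only for illustration, and you should pick unused UDP ports that are registered identically on all servers of each instance:
horcm0   11000/udp        (entry in /etc/services for horcm0.conf; port number illustrative)
horcm1   11001/udp        (entry in /etc/services for horcm1.conf; port number illustrative)
# horcmstart.sh 0 1
# export HORCMINST=0      (select instance 0 for subsequent commands)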
3.6.2 Startup for Windows Systems One Instance. To start up one instance of CCI on a Windows system: 1. Modify \WINNT\system32\drivers\etc\services to register the port name/number (service) of the configuration definition file. Make the port name/number the same on all servers: horcm xxxxx/udp xxxxx = the port name/number of horcm.conf 2. If you want HORCM to start automatically each time the system starts up, add \HORCM\etc\horcmstart to the system automatic start-up file (e.g., \autoexec.bat). 3.
3.6.3 Startup for OpenVMS® Systems One Instance. To start up one instance of CCI on an OpenVMS® system: 1. Create the configuration definition file (see section 3.4). For a new installation, the configuration definition sample file is supplied (SYS$POSIX_ROOT:[HORCM.etc]horcm.conf). Make a copy of the file: $ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc] Edit this file according to the system configuration using a text editor (e.g., eve).
Two Instances. To start up two instances of CCI on a OpenVMS® system: 1. Create the configuration definition files (see section 3.4). For a new installation, the configuration definition sample file is supplied (SYS$POSIX_ROOT:[HORCM.etc]horcm.conf). Copy the file twice, once for each instance. $ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc] horcm0.conf $ COPY SYS$POSIX_ROOT:[HORCM.etc]horcm.conf SYS$POSIX_ROOT:[etc] horcm1.
3.7 Starting CCI as a Service (Windows Systems) Usually, CCI (HORCM) is started by executing the start-up script from the Windows services. However, in the VSS environment, there is no interface to automatically start CCI. As a result, CCI provides the following svcexe.exe command and a sample script (HORCM0_run.
3. Setting the user account. The system administrator must set the user account for the CCI administrator as needed. When using the GUI, use "Administrative Tools -> Services -> Select HORCM0 -> Log On".
Chapter 4 Performing CCI Operations
This chapter covers the following topics:
Environmental variables (section 4.1)
Creating pairs (paircreate) (section 4.2)
Splitting and deleting pairs (pairsplit) (section 4.3)
Resynchronizing pairs (pairresync) (section 4.4)
Confirming pair operations (pairevtwait) (section 4.5)
Monitoring pair activity (pairmon) (section 4.6)
Checking attribute and status (pairvolchk) (section 4.7)
Displaying pair status (pairdisplay) (section 4.
4.1 Environmental Variables When activating HORCM or initiating a command, users can specify any of the environmental variables that are listed in Table 4.1. Table 4.1 HORCM, Hitachi TrueCopy, and ShadowImage Variables Variable Functions HORCM (/etc/horcmgr) environmental variables $HORCM_CONF: Names the HORCM configuration file, default = /etc/horcm.conf. $HORCM_LOG: Names the HORCM log directory, default = /HORCM/log/curlog.
Variable Functions ShadowImage command environmental variables $ HORCC_MRCF: Sets the execution environment of the ShadowImage commands. The selection whether the command functions as that of Hitachi TrueCopy or ShadowImage is made according to this variable. The HORCM is not affected by this variable. When issuing a Hitachi TrueCopy command, do not set the HORCC_MRCF variable for the execution environment of the command.
4.1.1 $HORCMINST and $HORCC_MRCF Supported Options
The CCI commands have depended on the $HORCMINST and $HORCC_MRCF environment variables as described in the table above. However, CCI also supports the following options, which do not depend on the $HORCMINST and $HORCC_MRCF environment variables.
4.1.1.1 Specifying Options
-I[instance#]: This option is used for specifying the instance# of HORCM.
Table 4.
4.2 Creating Pairs (Paircreate) WARNING: Use the paircreate command with caution. The paircreate command starts the Hitachi TrueCopy/ShadowImage initial copy operation, which overwrites all data on the secondary/target volume. If the primary and secondary volumes are not identified correctly, or if the wrong options are specified (e.g., vl instead of vr), data will be transferred in the wrong direction. The paircreate command generates a new volume pair from two unpaired volumes.
Table 4.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the paircreate command enter interactive mode. The -zx option guards performing of the HORCM in interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
Parameter Value Returned values This command sets the following returned values during exit allowing the user to check the execution results. Normal termination: 0. When creating groups, 0 = normal termination for all pairs. Abnormal termination: other than 0, refer to the execution logs for error details.
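For orientation, a minimal invocation might look like the following sketch; the group name oradb and the never fence level follow examples used elsewhere in this guide, and the pairevtwait timeout value is illustrative:
# paircreate -g oradb -vl -f never
# pairevtwait -g oradb -s pair -t 300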
Table 4.4 Specific Error Codes for Paircreate Category Error Code Error Message Recommended Action Value Volume status EX_ENQVOL Unmatched volume status within the group Confirm status using the pairdisplay command. Make sure all volumes in the group have the same fence level and volume attributes. 236 EX_INCSTG Inconsistent status in group Confirm pair status using pairdisplay. 229 EX_INVVOL Invalid volume status Confirm pair status using pairdisplay -l.
4.3 Splitting and Deleting Pairs (Pairsplit) The pairsplit command stops updates to the secondary volume of a pair and can either maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes (see Table 4.3). The pairsplit command can be applied to a paired logical volume or a group of paired volumes. The pairsplit command allows read access or read/write access to the secondary volume, depending on the selected options.
Table 4.
Options -h: Displays Help/Usage and version information. Note: Only one pairsplit option (-r, -rw, -S, -R, or -P) can be specified. If more than one option is specified, only the last option will be executed. -q: Terminates the interactive mode and exits this command. -z or -zx (OpenVMS cannot use the -zx option): Makes the pairsplit command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode.
Returned values: Normal termination: 0. When splitting groups, 0 = normal termination for all pairs. Abnormal termination: other than 0, refer to the execution logs for error details.
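As a brief, hedged illustration (group name taken from this guide's examples), a split that leaves the S-VOL read/write enabled, followed by a status check, could be issued as:
# pairsplit -g oradb -rw
# pairdisplay -g oradb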
Table 4.6 Specific Error Codes for Pairsplit Category Error Code Error Message Recommended Action Value Volume status EX_ENQVOL Unmatched volume status within the group Confirm status using the pairdisplay command. Make sure all volumes in the group have the same fence level and volume attributes. 236 EX_INCSTG Inconsistent status in group Confirm pair status using pairdisplay. 229 EX_INVVOL Invalid volume status Confirm pair status using pairdisplay -l.
4.3.1 Timing Pairsplit Operations The pairsplit command terminates after verifying that the status has changed according to the pairsplit command options (to PSUS or SMPL). If you want to synchronize the volume pair, the non-written data (in the host buffer) must be written before you issue the pairsplit command. When the pairsplit command is specified, acceptance of write requests to the primary volume depends on the fence level of the pair (data, status, never, or async). Some examples are shown below.
4.3.2 Deleting Pairs (Pairsplit -S) The pair delete operation is executed by using the -S option of the pairsplit command. When the pairsplit -S command is issued, the specified Hitachi TrueCopy or ShadowImage pair is deleted, and each volume is changed to SMPL (simplex) mode. If you want to re-establish a pair which has been deleted, you must use the paircreate command (not pairresync).
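A short sketch of the delete operation and of re-establishing the pair afterwards (the group name and fence level are illustrative):
# pairsplit -g oradb -S              (the pair is deleted; both volumes become SMPL)
# paircreate -g oradb -vl -f never   (re-establish the pair; pairresync cannot be used here)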
4.4 Resynchronizing Pairs (Pairresync) The pairresync command re-establishes a split pair and then restarts the update copy operations to the secondary volume (see Figure 4.5). The pairresync command can resynchronize either a paired logical volume or a group of paired volumes. The normal direction of resynchronization is from the primary volume to the secondary volume. If the -restore option is specified (ShadowImage only), the pair is resynchronized in the reverse direction (i.e.
Figure 4.5 Pair Resynchronization (figure: Server A issues the pair resynchronization command; a differential or entire data copy is made from the primary volume to the secondary volume of the paired logical volumes)
Figure 4. (figure: in a normal resync copy the P-VOL remains read/write and write data is copied to the S-VOL in the COPY state; in a restore resync copy (ShadowImage only) data is copied from the S-VOL back to the P-VOL in the RCPY state)
Table 4.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits this command. -z or -zx (OpenVMS cannot use the -zx option): Makes the pairresync command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
Parameter Value Returned values Normal termination: 0. When resynching groups, 0 = normal termination for all pairs. Abnormal termination: other than 0, refer to the execution logs for error details.
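For orientation, a minimal resynchronize-and-wait sequence might look like the sketch below; the group name and timeout are illustrative, and the -restore option applies to ShadowImage only:
# pairresync -g oradb
# pairevtwait -g oradb -s pair -t 300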
Table 4.8 Specific Error Codes for Pairresync Category Error Code Error Message Recommended Action Value Volume status EX_ENQVOL Unmatched volume status within the group Confirm status using the pairdisplay command. Make sure all volumes in the group have the same fence level and volume attributes. 236 EX_INCSTG Inconsistent status in group Confirm pair status using pairdisplay. 229 EX_INVVOL Invalid volume status Confirm pair status using pairdisplay -l.
Figure 4. (figure: pairresync -swaps on the SVOL or pairresync -swapp on the PVOL swaps the roles between T0 and T1: the former S-VOL becomes the NEW_PVOL (R/W) and the former P-VOL becomes the NEW_SVOL, and write data is then copied to the NEW_SVOL)
4.5 Confirming Pair Operations (Pairevtwait) The pair event waiting (pairevtwait) command is used to wait for completion of pair creation and pair resynchronization and to check the status (see Figure 4.11). It waits (“sleeps”) until the paired volume status becomes identical to a specified status and then completes. The pairevtwait command can be used for a paired logical volume or a group of paired volumes.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits this command. -z or -zx (OpenVMS cannot use the -zx option): Makes the pairevtwait command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
Parameter Value Returned values When the -nowait option is specified: Normal termination: 1: The status is SMPL 2: The status is COPY or RCPY 3: The status is PAIR 4: The status is PSUS 5: The status is PSUE When monitoring groups, 1/2/3/4/5 = normal termination for all pairs. Abnormal termination: other than 0 to 127, refer to the execution logs for error details.
ShadowImage Environment:
pairevtwait -g oradb1 -s psus -t 10 -FHORC
Figure 4.12 Example of -FHORC Option for Pairevtwait (figure: the TrueCopy group Ora cascades into the ShadowImage groups Oradb1 and Oradb2 on Seq#30052 and Seq#30053)
TrueCopy Environment:
pairevtwait -g ora -s psus -t 10 -FMRCF 1
Figure 4.13 Example of -FMRCF Option for Pairevtwait (figure: the ShadowImage groups Oradb1 and Oradb2 cascade from the S/P VOL of the TrueCopy group Ora on Seq#30052 and Seq#30053)
Using -ss ...
The horctakeover command suspends G2 (CA-Jnl) automatically if horctakeover returns "Swaptakeover" as its exit code. In a DC1 host failure, if APP1 wants to wait until DC3 reaches the suspend state, it can verify the "SSUS" state by using the pairevtwait command as shown below:
horctakeover -g G1
pairevtwait -g G3 -FHORC 1 -ss ssus -t 600
Figure 4. (figure: 3DC configuration with hosts at DC1, DC2, and DC3 linked by G1 (Sync), G2 (UR), and G3 (UR); the commands above are issued after the DC1 host failure, after which the DC3 side of G3 shows SVOL SSUS while G2 is SMPL)
4.6 Monitoring Pair Activity (Pairmon) The pairmon command, which is connected to the HORCM daemon, obtains the pair status transition of each volume pair and reports it. If the pair status changes (due to an error or a user-specified command), the pairmon command issues a message. Table 4.11 lists and describes the pairmon command parameters. Figure 4.16 shows an example of the pairmon command and its output. Table 4.12 specifies the results of the command options.
Output of the pairmon command: Group: This column shows the group name (dev_group) which is described in the configuration definition file. Pair vol: This column shows the paired volume name (dev_name) in the specified group which is described in the configuration definition file. Port targ# lun#: These columns show the port ID, TID, and LUN which is described in the configuration definition file. For further information on fibre-to-SCSI address conversion, see Appendix C.
4.7 Checking Attribute and Status (Pairvolchk)
The pairvolchk command acquires and reports the attribute of a volume or group connected to the local host (issuing the command) or the remote host. The volume attribute is SMPL (simplex), P-VOL (primary volume), or S-VOL (secondary volume). The -s[s] option reports the pair status in addition to the attribute. Figure 4.17 shows examples of the pairvolchk command and its output. Table 4.13 lists and describes the pairvolchk command parameters and returned values. Table 4.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the pair volume check command. -z or -zx (OpenVMS cannot use the -zx option): Makes the pairvolchk command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
Parameter Value Returned values When the -ss option is not specified: Normal termination: 1: The volume attribute is SMPL. 2: The volume attribute is P-VOL. 3: The volume attribute is S-VOL. Abnormal termination: Other than 0 to 127, refer to the execution log files for error details. When the -ss option is specified: Abnormal termination: specific error codes (Table 4.14) and generic error (Table 5.3). Normal termination: 11: The status is SMPL.
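As a hedged sketch of scripted use (the group name is illustrative), the command is run with -ss and the shell exit code is then compared against the returned values listed above:
# pairvolchk -g oradb -ss
# echo $?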
Table 4.14 Specific Error Codes for Pairvolchk Category Error Code Error Message Recommended Action Value Volume status EX_ENQVOL Unmatched volume status within the group Confirm status using the pairdisplay command. Make sure all volumes in the group have the same fence level and volume attributes. 236 Unrecoverable EX_EVOLCE Pair Volume combination error Confirm pair status using pairdisplay, and change combination of volumes.
Figure 4.18 shows a pairvolchk example that acquires the status (PVOL_PSUS) of the intermediate P/Pvol through specified pair group on ShadowImage environment. Figure 4.19 shows a pairvolchk example that acquires the status (PVOL_PSUS) of the intermediate S/Pvol (MU#1) through specified pair group on Hitachi TrueCopy environment.
Table 4.
Table 4.16 State Transition Table for HA Control Script Volume Attributes and Pair Status State No.
PDUB 21 22 COPY statu s SVOL_E Æ 5,6 never SVOL_E Æ 5,6 async SVOL Æ 5,6 S-VOL EX_EVOLCE Unknown EX_ENORMT or EX_EVOLCE YYY EX_CMDIOE 23 PAIR/ PFUL 24 25 SVOL_E * Æ 4,5 SVOL_E* data SVOL Æ 4 status SVOL Æ 4 never SVOL_E Æ 4 async SVOL Æ 4 PSUS SVOL_E Æ 4 PFUS SVOL Æ 4-1 PSUE PDUB data SVOL Æ 5,6 status SVOL_E Æ 5,6 never SVOL_E Æ 5,6 async SVOL Æ 5,6 Explanation of terms in Table 4.
4.7.1 Recovery in Case of SVOL-Takeover
While DC1 is conducting processing (normally state = 4), and when DC2 has recovered from the failure, the following commands must be issued to make the volume on the DC1 side a PVOL:
In case of operations on the DC1 side: (1) pairsplit -S, (2) paircreate -vl, (3) pairevtwait (wait for PAIR)
In case of operations on the DC2 side: (1) pairsplit -S, (2) paircreate -vr, (3) pairevtwait (wait for PAIR)
(figure: Host A at DC1 with PVOL in PSUS, Host B at DC2 with SMPL; State No.
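A hedged sketch of the DC1-side sequence above, using an illustrative group name, fence level, and timeout:
# pairsplit -g G1 -S
# paircreate -g G1 -vl -f never
# pairevtwait -g G1 -s pair -t 600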
If, after the pairsplit operation, DC2 takes over processing from DC1, the horctakeover command returns EX_VOLCRE on the (DC2) side because (DC2) and (DC1) are both SMPL. -> state is No. 1.
In case of state No. 17: This is a pair suspend (using the pairsplit command) performed by the operator. When DC1 takes over processing from DC2 while DC2 is in the PSUS state (DC1 = SVOL-PSUS and DC2 = PVOL-PSUS), the operator must be asked for a decision and/or a pairresync must be performed on the DC1 side. If DC1 takes over processing from DC2 without these confirmation operations, the horctakeover command returns SVOL_E (it executes SVOL-takeover and returns EX_VOLCUR) on the (DC1) side. -> state is No. 17.
4.7.2 PVOL-PSUE-Takeover The horctakeover command executes PVOL-PSUE-takeover when the primary volume cannot be used (PSUE or PDUB volume is contained in the group, or the link down that the pair status is PVOL_PAIR/SVOL_PAIR and the AP (active path) value is 0), and will be returned with “PVOL-PSUE-takeover” as the return value. PVOL-PSUE-takeover changes the primary volume to the suspend state (PSUE or PDUB Æ PSUE*, PAIR Æ PSUS) which permits WRITE to all primary volumes of the group.
Even if ESCON or FC is still connected to the S-VOL, PVOL-PSUE-takeover changes only the primary volume to the suspend state (the SVOL's state is not changed), since this maintains the consistency of the secondary volume at the time the horctakeover command was accepted.
(figure: before horctakeover, an SVOL failure leaves the P-VOL group on Host A in PAIR/PSUE states while the S-VOL group on Host C stays in PAIR states; after horctakeover, the P-VOL group status becomes PSUS/PSUE* while the S-VOL group status is unchanged; Group STATUS of the P-VOL.)
4.7.4 SVOL-SSUS Takeover in Case of ESCON/Fibre/Host Failure
The SVOL-takeover executes SVOL-SSUS-takeover to enable writing without changing the SVOL to SMPL. SVOL-SSUS-takeover changes the SVOL to the suspend state (PAIR, PSUE -> SSUS), which permits writes and maintains delta data (bitmap) for all SVOLs of the group.
4.7.5 Recovery from SVOL-SSUS-Takeover
After recovery of the ESCON/FC link, this special state (PVOL_PSUE and SVOL_PSUS) is changed to the COPY state: the original SVOL is swapped to become the NEW_PVOL, and the NEW_SVOL (the cast-off original PVOL) is resynchronized based on the NEW_PVOL by issuing the pairresync -swaps command on the takeover site (Host B).
Failback without recovery on Host B. After recovery of the ESCON/FC link and hosts, if you stopped the applications without executing the pairresync -swaps command on Host B and restarted the applications on Host A, you must use the following procedure for recovery. At this time, pairvolchk command on Host A will be returned PVOL_PSUE & SVOL_PSUS as state combination.
4.7.6 SVOL-Takeover in Case of Host Failure
After SVOL-takeover has changed only the SVOL to the suspend state (PAIR, PSUE -> SSUS), the internal operation of SVOL-takeover executes the pairresync -swaps command to maintain mirror consistency between NEW_PVOL and NEW_SVOL, and then the horctakeover command returns Swap-takeover as its return value. Hitachi TrueCopy Async/UR.
4.8 Displaying Pair Status (Pairdisplay) The pairdisplay command displays the pair status allowing you to verify completion of pair operations (e.g., paircreate, pairresync). The pairdisplay command is also used to confirm the configuration of the pair connection path (the physical link of paired volumes and servers). The pairdisplay command can be used for a paired volume or a group of paired volumes. Table 4.17 lists and describes the pairdisplay command parameters and returned values. Figure 4.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the pair volume check command. -z or -zx (OpenVMS cannot use the -zx option): Makes the pairdisplay command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
Parameter Value Returned values 1: The volume attribute is SMPL. 2: The volume attribute is P-VOL. 3: The volume attribute is S-VOL. When displaying groups, 1/2/3 = normal termination for all pairs. Abnormal termination (other than 0 to 127): refer to the execution log files for error details.
# pairdisplay -g oradb -fcx Group Pair Vol(L/R) (P,T#,L#), Seq#, oradb oradb1(L) (CL1-B, 1,0) 1234 oradb oradb1(R) (CL1-A, 1,0) 5678 LDEV#..P/S, Status, 64..P-VOL PAIR C8..S-VOL PAIR Fence, Copy%, P-LDEV# Never, 75 C8 Never, ---64 M - Figure 4.20 Hitachi TrueCopy Pairdisplay Command Example # pairdisplay -g oradb Group Pair Vol(L/R) (Port#,TID,LU-M), Seq#, oradb oradb1(L) (CL1-A, 1,0) 30053 oradb oradb1(R) (CL1-D, 1,0) 30053 LDEV#..P/S, Status, Fence, Seq#, P-LDEV# M 18..P-VOL PAIR Never, 30053 19 19..
(P,T#,L#) (TrueCopy) = port, TID, and LUN as described in the configuration definition file. For further information on fibre-to-SCSI address conversion, see Appendix C.
4.9 Checking Hitachi TrueCopy Pair Currency (Paircurchk) The CCI paircurchk command checks the currency of the Hitachi TrueCopy secondary volume(s) by evaluating the data consistency based on pair status and fence level. Table 4.18 specifies the data consistency for each possible state of a TrueCopy volume. A paired volume or group can be specified as the target of the paircurchk command. The paircurchk command assumes that the target is an S-VOL.
Notes: 1. To be confirmed = It is necessary to check the object volume, since it is not the secondary volume. 2. Inconsistent = Data in the volume is inconsistent because it was being copied. 3. OK (assumption) = Mirroring consistency is not assured, but as S-VOL of Hitachi TrueCopy Async/UR, the sequence of write data is ensured. Figure 4.23 shows an example of the paircurchk command for a group and the resulting display of inconsistent volumes in the specified group. Table 4.
Table 4.20 Specific Error Code for Paircurchk Category Error Code Error Message Recommended Action Value Volume status Unrecoverable EX_VOLCUR S-VOL currency error Check volume list to see if an operation was directed to the wrong S-VOL. 225 Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If the command failed, the detailed status will be logged in the CCI command log ($HORCC_LOG) (see Table A.2), even if the user script has no error handling.
4.10 Performing Hitachi TrueCopy Takeover Operations The Hitachi TrueCopy takeover command (horctakeover) is a scripted command for executing several Hitachi TrueCopy operations. The takeover command checks the specified volume’s or group’s attributes (paircurchk), decides the takeover function based on the attributes, executes the chosen takeover function, and returns the result. The four Hitachi TrueCopy takeover functions designed for HA software operation are (see section 4.10.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the horctakeover command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shutdown, interactive mode terminates.
Category Error Code Error Message Recommended Action Value EX_EVOLCE Pair Volume combination error Confirm pair status using pairdisplay, and change combination of volumes. 235 EX_VOLCUR S-VOL currency error Check volume list to see if an operation was directed to the wrong S-VOL. 225 EX_VOLCUE Local Volume currency error Confirm pair status of the local volume.
Recovery from EX_EWSTOT: If horctakeover failed with [EX_EWSTOT], recover as follows: 1. Wait until the SVOL state becomes "SVOL_PSUS" by using the return code of the "pairvolchk -g <group> -ss" command, and then try the start-up again for the HA Control Script. 2. Attempt to resynchronize the original PVOL based on the SVOL using "pairresync -g <group> -swaps -c <size>" for a fast failback operation.
4.10.1 Horctakeover Command Functions 4.10.1.1 Takeover-Switch Function The control scripts activated by the HA software are used the same way by all nodes of a cluster; they do not discriminate between primary and secondary volumes. The takeover command, when activated by a control script, checks the combination of attributes of the local and remote volumes and determines the proper takeover action. Table 4.
Notes: 1. NG = The takeover command is rejected, and the operation terminates abnormally. 2. Nop-Takeover = The takeover command is accepted, but no operation is performed. 3. Volumes not conform = The volumes are not in sync, and the takeover command terminates abnormally. 4. Unknown = The remote node attribute is unknown and cannot be identified. The remote node system is down or cannot communicate. 5. SSWS = Suspend for Swapping with SVOL side only.
4.10.1.3 SVOL-Takeover Function
The SVOL-takeover function allows the takeover node to use the secondary volume (except in COPY state) in SSUS (PSUS) state (i.e., reading and writing are enabled), on the assumption that the remote node (possessing the primary volume) cannot be used. The data consistency of the Hitachi TrueCopy SVOL is evaluated by its pair status and fence level (the same evaluation as paircurchk; see section 4.9). If the primary and secondary volumes are not consistent, the SVOL-takeover function fails.
4.10.1.4 PVOL-Takeover Function
The PVOL-takeover function releases the pair state as a group when the primary volume is fenced ("data" or "status" fence level with a "PSUE" or "PDUB" state, i.e., a PSUE or PDUB volume is contained in the group), since this maintains the consistency of the secondary volume at the time the horctakeover command was accepted. This function allows the takeover node to use the primary volume (i.e.
4.10.2 Applications of the Horctakeover Command The basic Hitachi TrueCopy commands (takeover, pair creation, pair splitting, pair resynchronization, event waiting) can be combined to enable recovery from a disaster, backup of paired volumes, and many other operations (e.g., restoration of paired volumes based on the secondary volume, swapping of the paired volumes). Figure 4.
User's Script (manual activation or activation from HA software):
1. horctakeover
2. -x mount
3. chkdsk
4. Server software activation
5. Application activation
Takeover Command Execution Log:
1. horctakeover: Communication with the primary site is disabled. Accordingly, SVOL-takeover is executed as SVOL-SSUS-takeover. The secondary volume is R/W-enabled.
2. -x mount: The file system is mounted for R/W using the CCI subcommand.
3. chkdsk: Conformability of the file system is checked.
4.
4.11 Displaying Configuration Information 4.11.1 Raidscan Command The raidscan command displays configuration and status information for the specified port/TID(s)/device(s). The information is acquired directly from the storage system (not the config. definition file). Table 4.24 lists and describes the raidscan command parameters. Figure 4.27 and Figure 4.28 provide examples of the raidscan command and its output.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the raidscan command enter interactive mode. The -zx option guards performing of the HORCM in interactive mode. When this option detects a HORCM shut down, interactive mode terminates. -I[H][M][instance#] or -I[TC][SI][instance#] Specifies the command as [HORC]/[HOMRCF], and used for specifying instance# of HORCM.
# raidscan -p cl1-r Port#, TargetID#, Lun# CL1-R, 15, 7 CL1-R, 15, 6 Num(LDEV#...) P/S, Status, 5(100,101...) P-VOL PAIR 5(200,201...) SMPL ---- Fence, LDEV#, P-Seq# P-LDEV# NEVER 100, 5678 200 ---------- # raidscan -p cl1-r -f Port#, TargetID#, Lun# CL1-R, 15, 7 CL1-R, 15, 6 Num(LDEV#...) 5(100,101...) 5(200,201...) P/S, P-VOL SMPL Status, PAIR ---- Fence, NEVER ---- LDEV#, 100, ---- Vol.Type OPEN-3 OPEN-3 # raidscan -pd /dev/rdsk/c0t15/d7 -fg Port#, TargetID#, Lun# Num(LDEV#...
Vol.Type = logical unit (LU) type (e.g.
Group = group name (dev_group) as described in the configuration definition file UID: Displays the unit ID for multiple storage system configuration. If UID is displayed as ‘-’, the command device for HORCM_CMD is not found.
4.11.2 Raidar Command The raidar command displays configuration, status, and I/O activity information for the specified port/TID(s)/device(s) at the specified time interval. The configuration information is acquired directly from the storage system (not from the configuration definition file). Table 4.25 lists and describes the raidar command parameters. Figure 4.30 shows an example of the raidar command and its output.
# raidar -p cl1-a TIME[03] PORT 13:45:25 13:45:28 CL1-A CL1-B CL1-A 15 6 T 15 14 12 L 6 5 3 -p cl1-b 14 5 -p VOL STATUS SMPL --P-VOL PAIR P-VOL PSUS cl1-a 12 3 -s 3 IOPS HIT(%) W(%) 200.0 80.0 40.0 133.3 35.0 13.4 200.0 35.0 40.6 IOCNT 600 400 600 Figure 4.30 Raidar Command Example Output of the raidar command: IOPS = # of I/Os (read/write) per second (total I/O rate). HIT(%) = Hit rate for read I/Os (read hit rate). W(%) = Ratio of write I/Os to total I/Os (percent writes).
4.11.3 Raidqry Command The raidqry command (RAID query) displays the configuration of the connected host and RAID storage system. Figure 4.31 shows an example of the raidqry command and its output. Table 4.26 lists and describes the raidqry command parameters.
Table 4.26 Raidqry Command Parameters Parameter Value Command Name raidqry Format raidqry { -h ⎪ -q ⎪ -z ⎪ -l ⎪ -r ⎪ [ -f ] | -g} Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the raidqry command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
4.12 Performing Data Protection Operations CCI supports the following three commands to set and verify the parameters for protection checking (Data Retention Utility, Database Validator) to each LU. The protection checking functions are available on the USP V/VM, USP/NSC, and Lightning 9900V (not 9900). raidvchkset (see section 4.12.1) raidvchkdsp (see section 4.12.2) raidvchkscan (see section 4.12.3) For further information on Data Protection Operations, see section 2.7. 4.12.
Table 4.27 Raidvchkset Command Parameters Parameter Value Command Name raidvchkset Format raidvchkset { -h ⎪ -q ⎪ -z ⎪ -g ⎪ -d -d[g] [MU#] ⎪ -d[g] [MU#] ⎪ -nomsg ⎪ -vt [type] ⎪ -vs < bsize> [slba] [elba] ⎪ -vg [type] [rtime] } Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkset command enter the interactive mode.
Parameter Value -vg [type]: Specifies the following guard type to the target volumes for Data Retention Utility (Open LDEV Guard on 9900V). If [type] is not specified, this option will disable all of the guarding. inv: The target volumes are concealed from SCSI Inquiry command by responding “unpopulated volume”. sz0: The target volumes replies with “SIZE 0” through SCSI Read capacity command. rwd: The target volumes are disabled from reading and writing. wtd: The target volumes are disabled from writing.
Setting for Oracle H.A.R.D. Oracle 10g supports ASM (Automatic Storage Management), so users must change the setting according to whether ASM is used. The USP V/VM and TagmaStore USP/NSC support the setting for Oracle 10g. Table 4.29 shows the related CCI command settings. Table 4.29 Setting H.A.R.
4.12.2 Raidvchkdsp Command
The raidvchkdsp command displays the parameters for validation checking of the specified volumes. The unit of checking for validation is the group defined in the CCI configuration definition file. Table 4.30 lists and describes the raidvchkdsp command parameters. Figure 4.34 - Figure 4.36 show examples of the raidvchkdsp command. Note: This command is controlled as a protection facility. A non-permitted volume is shown without LDEV# information (the LDEV# information is " - ").
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkdsp command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
raidvchkdsp -g vg01 -fd -v cflag Å Example of -fd option showing Unknown vol. Group PairVol Device_File Seq# LDEV# BR-W-E-E MR-W-B BR-W-B SR-W-B-S vg01 oradb1 Unknown 2332 - - - - - - - - - - vg01 oradb2 c4t0d3 2332 3 D E B R D D D D E E D E D D # raidvchkdsp -g horc0 -v gflag -fe Group ... TID LU Seq# LDEV# GI-C-R-W-S horc0 ... 0 20 63528 65 E E E E E horc0 ... 0 20 63528 66 E E E E E PI-C-R-W-S E E E E E E E E E E R-Time 0 0 Å Example of -fe option. EM E-Seq# E-LDEV# - Figure 4.
Output of the raidvchkdsp command with -fe option:
EM: This column displays the external connection mode.
H = Mapped E-lun is hidden from the host.
V = Mapped E-lun is visible to the host.
- = Unmapped to the E-lun.
BH = Mapped E-lun as hidden from the host, but LDEV blockading.
BV = Mapped E-lun as visible to the host, but LDEV blockading.
B = Unmapped to the E-lun, but LDEV blockading.
E-Seq#: This column displays the production (serial) number of the external LUN ('Unknown' shown as '-').
SR-W-B-S: Displays the flags for checking regarding CHK-F1 in the data block. R = E: Checking for CHK-F1 on Read is enabled. D: Checking for CHK-F1 on Read is disabled. W = E: Checking for CHK-F1 on Write is enabled. D: Checking for CHK-F1 on Write is disabled. B = E: Checking for CHK-F1 in the data block #0 is enabled. D: Checking for CHK-F1 in the data block #0 is disabled. S = E: Referring for CHK-F1 flag contained in the data block is enabled.
Output of the raidqvchkdsp command with -v gflag option: GI-C-R-W-S: This displays the flags for guarding as for the target volume. I Æ E: Enabled for Inquiry command. D: Disabled for Inquiry command. C Æ E: Enabled for Read Capacity command. D: Disabled for Read Capacity command. R Æ E: Enabled for Read command. D: Disabled for Read command. WÆ E: Enabled for Write command. D: Disabled for Write command. SÆ E: Enabled for becoming the SVOL. D: Disabled for becoming the SVOL.
Output of the raidqvchkdsp command with -v pool option: Bsize: This displays the data block size of the pool, in units of block (512bytes). Available(Bsize): This displays the available capacity for the volume data on the SnapShot pool in units of Bsize. Capacity(Bsize): This displays the total capacity in the SnapShot pool in units of Bsize.
4.12.3 Raidvchkscan Command The raidvchkscan command displays the fibre port of the storage system (9900V and later), target ID, LDEV mapped for LUN#, and the parameters for validation checking, regardless of the configuration definition file. Table 4.31 lists and describes the raidvchkscan command parameters. Figure 4.40 through Figure 4.42 show examples of the raidvchkscan command. Note: This command will be rejected with EX_ERPERM by connectivity checking between CCI and the Hitachi RAID storage system.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the raidvchkscan command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates.
# raidvchkscan -p CL1-A -v cflag PORT# /ALPA/C TID# LU# Seq# Num LDEV# CL1-A / ef/ 0 0 0 2332 1 0 CL1-A / ef/ 0 0 1 2332 1 1 BR-W-E-E D E B R D E B R MR-W-B D D D D D D BR-W-B-Z D E E E D E E E SR-W-B-S D E D D D E D D Figure 4.40 Raidvchkscan Command Example with -v cflag Option Output of the raidqvchkscan command with -v cflag option: BR-W-E-E: This column displays the flags for checking regarding data block size. R = E: Checking for data block size on Read is enabled.
SR-W-B-S: Displays the flags for checking regarding CHK-F1 in the data block. R = E: Checking for CHK-F1 on Read is enabled. D: Checking for CHK-F1 on Read is disabled. W = E: Checking for CHK-F1 on Write is enabled. D: Checking for CHK-F1 on Write is disabled. B = E: Checking for CHK-F1 in the data block #0 is enabled. D: Checking for CHK-F1 in the data block #0 is disabled. S = E: Referring for CHK-F1 flag contained in the data block is enabled.
# raidvchkscan -p CL1-A -v gflag PORT# /ALPA/C TID# LU# Seq# Num LDEV# CL1-A / ef/ 0 0 0 2332 1 0 CL1-A / ef/ 0 0 1 2332 1 1 CL1-A / ef/ 0 0 2 2332 1 2 GI-C-R-W-S E E D D E E E D D E E E D D E Å Example of -v gflag option. PI-C-R-W-S R-Time E E D D E 365 E E D D E E E D D E 0 Figure 4.43 Raidvchkscan Command Example with -v gflag Option Output of the raidqvchkscan command with -v gflag option: GI-C-R-W-S: This displays the flags for guarding as for the target volume.
# raidvchkscan -v pool -p CL2-d-0 PORT# /ALPA/C TID# LU# Seq# Num LDEV# CL2-D-0 /e4/ 0 2 0 62500 1 160 CL2-D-0 /e4/ 0 2 1 62500 1 161 Bsize 2048 2048 Available 100000 100000 Capacity 1000000000 1000000000 Figure 4.44 Raidvchkscan Command Example with -v pool Option Output of the raidqvchkscan command with -v pool option: Bsize: This displays the data block size of the pool, in units of block (512 bytes).
4.12.4 Raidvchkscan Command for Journal (UR) The raidvchkscan command supports the (-v jnl [t] [unit#]) option to find the journal volume list setting via SVP. It also displays any information for the journal volume. The Universal Replicator function is available on the Hitachi USP V/VM and USP/NSC storage systems. Table 4.
Output of the raidqvchkscan command with -v jnl 0 option: JID: Displays the journal group ID. MU: Displays the mirror descriptions on UR. CTG: Displays the CT group ID. JNLS: Displays the following status in the journal group. – SMPL: this means the journal volume which does not have a pair, or deleting. – P(S)JNN: this means “P(S)vol Journal Normal Normal”. – P(S)JNS this means “P(S)vol Journal Normal suspend” created with -nocsus option.
Q-CNT: Displays the number of remaining Q-Markers within each journal volume.
Figure 4.47 Example of Q-Marker and Q-CNT (figure: P-JNL Q-Markers awaiting asynchronous transfer from the PVOL and S-JNL Q-Markers at the SVOL; Q-CNT counts the remaining markers on each side)
U(%): Displays the usage rate of the journal data.
D-SZ: Displays the capacity for the journal data on the journal volume.
Seq#: Displays the serial number of the RAID storage system.
Table 4.33 lists information about the different journal volume statuses. QCNT=0 indicates that the number of remaining Q-Markers is ‘0’. The letter ‘N’ indicates a non-zero. Table 4.
4.12.5 Raidvchkscan Command for Snapshot Pool and Dynamic Provisioning The raidvchkscan command supports the option ( -v pid[a] [unit#]) to find the SnapShot pool or HDP pool settings via SVP, and displays information for the SnapShot pool or HDP pool. Table 4.
Num: Displays the number of LDEVs configured in the SnapShot pool. LDEV#: Displays the first LDEV number configured in the SnapShot pool. H(%): Displays the threshold rate set for the SnapShot pool as the high water mark. 'Unknown' is shown as '-'.
4.13 Controlling CCI Activity 4.13.1 Horcmstart Command The horcmstart command is a shell script that starts the HORCM application (/etc/horcmgr). This shell script also sets the environment variables for HORCM as needed (e.g., HORCM_CONF, HORCM_LOG, HORCM_LOGS). Table 4.35 lists and describes the horcmstart command parameters. Table 4.35 Horcmstart Command Parameters Parameter Value Command Name horcmstart Format horcmstart.sh { inst ... } (UNIX systems) horcmstart.exe { inst ...
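As a brief illustration of the formats above, starting HORCM instances 0 and 1:
# horcmstart.sh 0 1        (UNIX systems)
horcmstart 0 1             (Windows systems, horcmstart.exe)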
4.13.2 Horcmshutdown Command The horcmshutdown command is a shell script for stopping the HORCM application (/etc/horcmgr). Table 4.36 describes the shutdown command parameters. Table 4.36 Horcmshutdown Command Parameters Parameter Value Command Name horcmshutdown Format horcmshutdown.sh {inst...} horcmshutdown.exe {inst...} Option Inst: Specifies the HORCM (CCI) instance number (numerical value). When this option is specified, the command stops the specified HORCM instance.
4.13.3 Horcctl Command The HORCM and Hitachi TrueCopy software have logs that identify the cause of software and/or hardware errors as well as a tracing function for investigating such errors. The location of the log files depends on the user’s command execution environment and the HORC Manager’s execution environment. The command trace file and core file reside together under the directory specified in the HORC Manager’s execution environment. See Appendix A for log file and log directory information.
Parameter Value Options -h: Displays Help/Usage and version information. -q: Terminates the interactive mode and exits the command. -z or -zx (OpenVMS cannot use the -zx option): Makes the horcctl command enter the interactive mode. The -zx option guards performing of the HORCM in the interactive mode. When this option detects a HORCM shut down, interactive mode terminates. -I[H][M][instance#] or -I[TC][SI][instance#] Specifies the command as [HORC]/[HOMRCF], and used for specifying instance# of HORCM.
4.13.4 3DC Control Command using HORC/UR NEW This is a scripted command for executing several HORC operation commands combined. It checks the volume attribute (optionally specified) and decides a takeover action. The horctakeoff operation is defined to change from 3DC multi-target to 3DC multi-hop with the state of running APP, after that horctakeover command will be able to configure 3DC multi-target on the remote site without stopping the APP.
Parameter Value Options -h displays Help/Usage, and version information. -q terminates interactive mode and exits this command. -z or -zx (OpenVMS cannot use -zx option) This option makes this command enter interactive mode. The -zx option prevents using HORCM in interactive mode. This option terminates interactive mode upon HORCM shut-down.
Note: Unrecoverable errors are fixed and will not be resolved, even after re-executing the command. If the command failed, the detailed status will be logged in the RAID Manager command log ($HORCC_LOG), even if the user script has no error handling.
4.13.4.
(figure: a 3DC multi-target configuration of sites L1, L2, and L3 linked by G1, G2, and G3 is changed into a 3DC multi-hop configuration)
horctakeoff command on L1 remote site:
# horctakeoff -g G1 -gs G2
horctakeoff : 'pairsplit -g G2 -S' is in progress.
horctakeoff : 'pairsplit -g G1' is in progress.
horctakeoff : 'pairsplit -g G1 -FHORC 2 -S' is in progress.
horctakeoff : 'paircreate -g G2 -vl -nocopy -f async -jp 0 -js 1' is in progress.
horctakeoff : 'pairsplit -g G2' is in progress.
4.13.5 Windows Subcommands The CCI software provides subcommands for the Windows platforms which are executed as options (-x ) of another command. When you specify a subcommand as the only option of a command, you do not need to start HORCM. If another option of the command and the subcommand are specified on the same command line, place the other option after the subcommand. 4.13.
4.13.7 Drivescan Subcommand The drivescan subcommand displays the relationship between the disk numbers assigned by the Windows system and the LDEVs on the RAID storage system, and also displays attribute and status information for each LDEV. Table 4.40 lists and describes the drivescan subcommand parameters. Figure 4.52 shows an example of the drivescan subcommand used as an option of the raidscan command and its output. Table 4.
4.13.8 Portscan Subcommand The portscan subcommand displays the devices on the specified port(s). Table 4.41 lists and describes the portscan subcommand parameters. Figure 4.53 shows an example of the portscan subcommand used as an option of the raidscan command and its output. Table 4.41 Portscan Subcommand Parameters Parameter Value Command Name portscan Format -x portscan port#(0-N) Argument port#(0-N): Specifies the range of port numbers on the Windows system.
4.13.9 Sync and Syncd Subcommands
The sync (synchronization) subcommand sends unwritten data remaining on the Windows server to the specified device(s) to synchronize the pair(s) before the CCI command is executed. The syncd (synchronization delay) subcommand waits for the delayed I/O for dismount after "sync" has been issued. Table 4.42 lists and describes the sync and syncd subcommand parameters.
Table 4.42 Sync and Syncd Subcommand Parameters
Parameter       Value
Command Name    sync, syncd
Format          -x sync[d] A: B: C: ..
The following examples show the sync subcommand used as an option of the pairsplit command. For the example in Figure 4.54, the data remaining on logical drives C: and D: is written to disk, all pairs in the specified group are split (status = PSUS), and read/write access is enabled for all S-VOLs in the specified group. pairsplit -x sync C: D: -g oradb -rw Figure 4.54 Sync Subcommand Example – Pairsplit For the example in Figure 4.
4.13.10 Mount Subcommand
The mount subcommand mounts the specified drive to the specified partition on the specified hard disk drive using the drive letter. When the mount subcommand is executed without an argument, all currently mounted drives (including directory-mounted volumes) are displayed; if a logical drive is mounting an LDM volume, the Harddisk#[n] devices configuring that LDM volume are displayed. Table 4.43 lists and describes the mount subcommand parameters. Figure 4.56 and Figure 4.
The example in Figure 4.56 executes mount as a command option of pairsplit, mounting the F: drive to partition1 on hard disk 2 and the G: drive to partition1 on hard disk 1, and then displays the mounted devices.

pairsplit -x mount F: hdisk2
pairsplit -x mount
Drive    FS_name   VOL_name   Device            Partition ...
C:       NTFS      Null       Harddiskvolume1   ...
F:       NTFS      Null       Harddiskvolume2   ...
D:       NTFS      Null       Harddiskvolume3   ...
D:\hd1   NTFS      Null       Harddiskvolume4   ...
D:\hd2   NTFS      Null       Harddiskvolume5   ...
G:       NTFS      Null       Harddiskvolume6   ...
4.13.11 Umount and Umountd Subcommands
The umount subcommand unmounts the specified logical drive and deletes the drive letter. Before deleting the drive letter, this subcommand executes sync internally for the specified logical drive and flushes unwritten data. The umountd subcommand unmounts the logical drive after waiting for the delayed I/O for dismount to complete. Table 4.44 lists and describes the umount and umountd subcommand parameters. Figure 4.
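A minimal sketch of the umount subcommand combined with a pairsplit (the drive letters and group name are illustrative, and it is assumed here that two -x umount subcommands can be combined on one command line):

pairsplit -x umount F: -x umount G: -g oradb -rw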
Note: The umount subcommand flushes (syncs) the system buffer of the associated drive before deleting the drive letter. If umount fails, confirm the following conditions: the logical and physical drives designated as objects of the umount command must not be open to any application. For example, confirm that Windows Explorer is not pointing at the target drive; if it is, the target drive is held open.
4.13.12 Environment Variable Subcommands
The environment variable subcommands set or cancel an environment variable within a CCI command when no environment variables are set in the execution environment. The setenv subcommand sets the specified environment variable(s). The usetenv subcommand deletes the specified environment variable(s). The env subcommand displays the environment variable(s). The sleep subcommand causes CCI to wait for the specified time. Table 4.
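A minimal sketch of these subcommands used as options of another command (the variable name HORCC_MRCF, the group name oradb, and the sleep value are illustrative; the time unit of sleep is as defined in the subcommand's parameter table):

pairdisplay -x setenv HORCC_MRCF 1 -g oradb
pairdisplay -x usetenv HORCC_MRCF -g oradb
pairdisplay -x sleep 5 -g oradb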
4.14 CCI Command Tools
4.14.1 Inqraid Command Tool
CCI provides the inqraid command tool for confirming the drive connection between the storage system and the host system. The inqraid command displays the relationship between special file(s) on the host system and the actual physical drives of the RAID storage system. Table 4.46 lists and describes the inqraid command and parameters. Figure 4.
Parameter: -svinf[=PTN] (Windows systems only), -svinfex[=PTN] (for GPT disks on Windows 2008/2003)
Value: Sets the signature and volume layout information that was saved to the system disk to a raw device file provided via STDIN or arguments. Gets the serial# and LDEV# for the target device using SCSI Inquiry, and writes the signature and volume layout information from the VOLssss_llll.ini file to the target device.
Parameter: -CLIB -sort
Value: This option is used to determine how many pairs can be created on the actual array. It calculates the total number of bitmap pages for HORC/HOMRCF and the number of unused bitmap pages by sorting the specified special files (from standard input or the arguments) in Serial#,LDEV# order. The default is HOMRCF. This option is valid only with the -sort option.
HP-UX system:
# ioscan -fun | grep rdsk | ./inqraid
/dev/rdsk/c0t2d1 -> [HP] CL2-D Ser = 30053 LDEV = 9 [HP ] [OPEN-3 ]
       HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/c0t4d0 -> [HP] CL2-D Ser = 30053 LDEV = 14 [HP ] [OPEN-3-CM ]
       RAID5[Group 2- 1] SSID = 0x0008

Linux and zLinux system:
# ls /dev/sd* | .
IRIX system with FC_AL:
# ls /dev/rdsk/*vol | ./inqraid
/dev/rdsk/dks1d6vol -> [SQ] CL2-D Ser = 30053 LDEV = 9 [HITACHI ] [OPEN-3 ]
       HORC = P-VOL HOMRCF[MU#0 = SMPL MU#1 = SMPL MU#2 = SMPL]
       RAID5[Group 2- 1] SSID = 0x0008 CTGID = 3
/dev/rdsk/dks1d7vol -> [SQ] CL2-D Ser = 30053 LDEV = 14 [HITACHI ] [OPEN-3-CM ]
       RAID5[Group 2- 1] SSID = 0x0008

IRIX system with Fabric:
# ls /dev/rdsk/*/*vol/* | .
CHNO: Displays the channel number of the device adapter as recognized by the Linux host. Displayed only for Linux systems.
TID: Displays the target ID of the hard disk connected to the device adapter port. Displayed only for Linux systems.
LUN: Displays the logical unit number of the hard disk connected to the device adapter port. Displayed only for Linux systems.
Note: The display of Group, SSID, and CTGID depends on the storage system microcode level.
# ls /dev/sd* | ./inqraid -CLI
DEVICE_FILE   PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
sdh           CL2-B   30053     23    2  S/P/ss  0004  5:02-01   OPEN-3
sdi           CL1-A   64015     14    -  -       0004  E:00002   OPEN-3-CM
sdj           -           -      -    -  -          -  -         -

Figure 4.64 Inqraid: Example of -CLI Option (Linux example shown)
DEVICE_FILE: Displays the device file name only.
PORT: Displays the RAID storage system port number.
SERIAL: Displays the production (serial#) number of the storage system.
LDEV: Displays the LDEV# within the storage system.
DEVICE_FILE: Displays the device file name only.
WWN: The -CLIWP option displays the Port_WWN of the host adapter included in the STD inquiry page; the -CLIWN option displays the Node_WWN of the host adapter included in the STD inquiry page.
AL: Always displayed as "-".
PORT: Displays the RAID storage system port number.
LUN: Always displayed as "-".
SERIAL: Displays the production (serial#) number of the storage system.
LDEV: Displays the LDEV# within the storage system.
4.14.2 Mkconf Command Tool
The mkconf command tool is used to make a configuration file from a special file (raw device file) provided via STDIN. Execute the following steps to make a configuration file:
1. Make a configuration file for HORCM_CMD only by executing "inqraid -sort -CM -CLI".
2. Start a HORCM instance without descriptions for HORCM_DEV and HORCM_INST, so that the raidscan command can be executed in the next step.
3.
# cd /tmp/test
# cat /etc/horcmperm.conf | /HORCM/usr/bin/mkconf.sh -g ORA -i 9 -m 0
starting HORCM inst 9
HORCM inst 9 starts successfully.
HORCM Shutdown inst 9 !!!
A CONFIG file was successfully completed.
starting HORCM inst 9
HORCM inst 9 starts successfully.
Notes on mkconf: A unit ID is assigned in Serial# order. If two or more command devices exist in the storage system, this tool selects the multiple device files linked to one command device (an LDEV).
4.15 Synchronous Waiting Command (Pairsyncwait) for Hitachi TrueCopy Async/UR
Highly robust systems need to confirm data consistency between the Hitachi TrueCopy Async/UR P-VOL and S-VOL. In DB operations (e.g., Oracle), the commit() of a DB transaction (see Figure 4.70) needs to confirm, using a CCI-unique API command, that the last write for the commit() on the local site has reached the remote site.
Table 4.48 lists and describes the pair synchronization waiting command parameters and returned values. Table 4.49 lists and describes the error codes for the pairsyncwait command. The pairsyncwait command is used to confirm that the required write data has been stored in the DFW area of the RCU; it confirms whether or not the last write issued just before this command has reached the RCU DFW area.
Parameter  Value
Options
-h: Displays Help/Usage and version information.
-q: Terminates the interactive mode and exits the command.
-z or -zx (OpenVMS cannot use the -zx option): Makes the pairsyncwait command enter interactive mode. The -zx option monitors HORCM while in interactive mode; when this option detects a HORCM shutdown, interactive mode terminates.
-I[H][M][instance#] or -I[TC][SI][instance#]: Specifies the command as [HORC]/[HOMRCF], and is used to specify the instance# of HORCM.
Parameter  Value
Returned values
When the -nowait option is specified:
  Normal termination: 0. The status is NOWAIT.
  Abnormal termination: other than 0 to 127; refer to the execution logs for error details.
When the -nowait option is not specified:
  Normal termination:
    0: The status is DONE (completion of synchronization).
    1: The status is TIMEOUT (timeout).
    2: The status is BROKEN (the Q-Marker synchronization process is rejected).
    3: The status is CHANGED (the Q-Marker is invalid due to resynchronization).
Figure 4.71 shows examples of the pairsyncwait command with and without the -nowait option. The output of the pairsyncwait command is:
UnitID: Unit ID in the case of multiple storage system connections.
CTGID: CTGID within the Unit ID.
Q-Marker: The latest sequence # of the MCU P-VOL (the marker) when the command is received.
Status: The status after the execution of the command.
Q-Num: The number of processes queued waiting for synchronization within the CTGID.
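A minimal sketch of the kind of output described above (the group name and values are illustrative, patterned on Figure 4.71, which is not reproduced here):

# pairsyncwait -g oradb -nowait
UnitID CTGID  Q-Marker  Status  Q-Num
     0     3  01003408  NOWAIT      2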
4.16 Protection Facility
The Protection Facility permits CCI operations only on volumes that the user can see on the host, preventing erroneous operations. CCI controls protected volumes based on the result of protection recognition: CCI recognizes only the volumes that the host shows. For this purpose, the current Hitachi SANtinel is used in the CCI environment. It is not possible to turn the Protection Facility ON or OFF from CCI; the Protection Facility ON/OFF is controlled from the Remote Console/SVP or SNMP.
Table 4.50 Registration for the Mirror Descriptor

Volumes on horcm.conf                   TrueCopy MU#0   ShadowImage MU#0   MU#1   MU#2
Permitted volumes (/dev/rdsk/c0t0d0)    E               E                  E      E
Unknown                                 none            none               none   none

E = Mirror descriptor volume to be registered in horcm.conf.
Unknown: Volumes that the local host cannot recognize, even though they were registered in horcm.conf. CCI permits operations after the "permission command" is executed at startup of HORCM.
4.16.2 Examples for Configuration and Protected Volumes
Case (1): Two Hosts (Figure 4.73). In protection mode, Ora2 is rejected from operating the paired volumes, because Grp4 is Unknown on HOST2.
Case (2): One Host (Figure 4.74). In protection mode, Ora1 and Ora2 are rejected from operating the paired volumes, because Grp2 and Grp4 are Unknown on HOST1. If HOST1 has a protection-OFF command device, then Ora1 and Ora2 are permitted to operate the paired volumes.
(Figure: configuration example showing horcm0.conf and horcm1.conf on HOST1 and HOST2. HOST1 describes volumes for Grp1 and Grp3 (Ora1); HOST2 describes volumes for Grp2 and Grp4 (Ora2, Ora3). One host is visible to Grp1,Grp3 and the other to Grp2,Grp4; Grp1 through Grp4 reside on the storage system behind the command device. CM = protection "On" command device.)
4.16.3 Target Commands for Protection
The following commands are controlled by the Protection Facility: horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairvolchk, pairevtwait, pairsyncwait, raidvchkset, raidvchkdsp. Pairdisplay is not included. When a command is issued to non-permitted volumes, CCI rejects the request with the error code EX_ENPERM. The pairdisplay command shows all volumes, so you can use it to identify non-permitted volumes; non-permitted volumes are shown without LDEV# information.
4.16.5 New Options for Security
(1) raidscan -find inst. The -find inst option is used to register the device file name to all mirror descriptors of the LDEV map table for CCI and to permit the matching volumes in horcm.conf in protection mode. It is started automatically from /etc/horcmgr, so the user does not normally need to use this option. This option issues an Inquiry to each device file read from STDIN, and CCI obtains the Ser# and LDEV# from the RAID storage system.
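If it does need to be run manually, a minimal sketch on HP-UX might look like the following (the device-listing command is illustrative):

# ioscan -fun | grep rdsk | raidscan -find inst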
(2) pairdisplay -f[d]. The -f[d] option shows the relationship between the Device_File and the paired volumes (protected volumes and permitted volumes), based on the group, even though this option has no direct relation to protection mode.

# pairdisplay -g oradb -fd
Group  PairVol(L/R)  Device_File  M  ,Seq#,LDEV#.P/S,Status,  Seq#,P-LDEV# M
oradb  oradev1(L)    c0t3d0       0   35013    17..P-VOL COPY, 35013     18
oradb  oradev1(R)    c0t3d1       0   35013    18..
Windows systems: $HORCMPERM is "\WINNT\horcmperm.conf" or "\WINNT\horcmperm*.conf" (* is the instance number) by default.

type $HORCMPERM | x:\HORCM\etc\raidscan.exe -find inst

The following is an example of a $HORCMPERM file that permits the DB volumes (note: a numerical value is interpreted as a Harddisk#):
# DB0 For MU# 0
Hd0-10
harddisk12
harddisk13
harddisk17
# DB1 For MU# 1
hd20-23

Verifying a group for DB1.
4.16.7 Environmental Variables
$HORCMPROMOD: This environment variable turns protection mode ON as specified in Table 4.51. If your command device is set for non-protection mode, this parameter sets it to protection mode.

Table 4.51 Relation between HORCMPROMOD and Command Device
Command Device        HORCMPROMOD     Mode
Protection mode       Don't care      Protection mode
Non-protection mode   Not specified   Non-protection mode
Non-protection mode   Specified       Protection mode

$HORCMPERM.
4.16.8 Determining the Protection Mode Command Device
The inquiry page is not changed for a command device with protection mode ON. Therefore, CCI provides a way to find the protection mode command device. To determine the currently used command device, use the horcctl -D command. This command indicates a protection mode command device by adding an asterisk (*) to the device file name. Example for HP-UX systems:
# horcctl -D
Current control device = /dev/rdsk/c0t0d0*    (* indicates protection ON)
4.17 Group Version Control for Mixed Storage System Configurations
Before executing each option of a command, CCI internally checks the facility version of the Hitachi storage systems to verify that the same version is installed across a mixed storage system configuration. If the configuration includes older storage systems (e.g.
4.18 LDM Volume Discovery and Flushing for Windows
Windows systems support the Logical Disk Manager (LDM) (similar to VxVM), and a logical drive letter is typically associated with an LDM volume ("\Device\HarddiskVolumeX"). As a result, the user cannot directly know the relationship between LDM volumes and the physical volumes of the RAID storage system. When making the CCI configuration file, the user needs to know this relationship, which is illustrated in Figure 4.76.
4.18.1 Volume Discovery Function
CCI supports three levels of volume discovery that show the relationship between LDM volumes and the physical volumes:
Physical level: CCI shows the relationship between "PhysicalDrive" and LDEV when $Physical is given as the KEY WORD for discovery.
LDM volume level: CCI shows the relationship between "LDM volume & PhysicalDrives" and LDEV when $Volume is given as the KEY WORD for discovery.
Drive letter level.
inqraid $Phy -CLI
DEVICE_FILE   PORT    SERIAL  LDEV  CTG  H/M/12  SSID  R:Group   PRODUCT_ID
Harddisk0     CL2-K   61456    194    -  s/s/ss  0004  1:01-10   OPEN-3
Harddisk1     CL2-K   61456    256    -  s/s/ss  0005  1:01-11   OPEN-3
Harddisk2     CL2-K   61456    257    -  s/s/ss  0005  1:01-11   OPEN-3
Harddisk3     CL2-K   61456    258    -  s/s/ss  0005  1:01-11   OPEN-3
Harddisk4     -           -      -    -  -          -  -         DDRS-34560D

Device Object Name of the Partition for Windows NT: \Device\HarddiskX\PartitionY -> \DskX\pY
Device Object Name of the PhysicalDrive for Windows NT: \Device\HarddiskX\Pa
4.18.2 Mountvol Attached to Windows 2008/2003/2000 Systems
Note that the "mountvol /D" command attached to a Windows 2008, 2003, or 2000 system does not flush the system buffer associated with the specified logical drive. The mountvol command shows the volumes mounted as Volume{guid}, as in the following excerpt of its usage output:
mountvol - Creates, deletes, or lists a volume mount point.
MOUNTVOL ...
MOUNTVOL ...
MOUNTVOL ...
4.18.3 System Buffer Flushing Function
The logical drive to be flushed can be specified by either of two methods. One method specifies the logical drive directly (e.g., the G:\hd1 drive, as below); however, this method requires knowing which logical drive corresponds to a group before executing the sync command, and if the volume is mounted on a directory, it also requires finding the volume name.
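A minimal sketch of the direct method, under the assumption that the sync subcommand accepts the mount path mentioned above (the group name ORB is illustrative):

pairsplit -x sync G:\hd1 -g ORB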
1. Offline backup using "raidscan -find sync" for a Windows NT file system: "raidscan -find sync" flushes the system buffer by finding the logical drive that corresponds to a group in the configuration file, so the user can perform the backup without using the -x mount and -x umount commands. The following is an example for group ORB.
P-VOL side: Close all logical drives on the P-VOL by the application.
S-VOL side: Back up the S-VOL data.
4. Online backup using "raidscan -find sync" for a Windows 2008/2003/2000 file system: "raidscan -find sync" flushes the system buffer associated with a logical drive by finding the Volume{guid} that corresponds to a group in the configuration file, so the user can perform the backup without using the -x mount and -x umount commands. The following is an example for group ORB.
P-VOL side: Freeze the DB on the open P-VOL by the application.
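A minimal sketch of the flushing step itself (the group name ORB is illustrative; the -pi argument form, which supplies the $Volume discovery keyword in place of STDIN, is an assumption here):

raidscan -pi $Volume -find sync -g ORB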
4.19 Special Facilities for Windows 2008/2003/2000 Systems
CCI provides the following special facilities for Windows 2008/2003/2000 systems:
Signature changing facility (section 4.19.1)
Directory mount facility (section 4.19.2)
4.19.1 Signature Changing Facility for Windows 2008/2003/2000 Systems
Consider the following Microsoft Cluster Server (MSCS) configuration, in which an MSCS P-VOL is shared by MSCS Node1 and Node2, and the copied S-VOL is used for backup on Node2.
With this in mind, CCI adopts the following approach: The user saves the signature and volume layout information to the system disk by using the "inqraid -gvinf" command, after the S-VOL has been given a signature and new partition by Windows disk management. After splitting the S-VOL, the user can put the signature back by writing the signature and volume layout information saved on the system disk to the S-VOL, using the "inqraid -svinf" command.
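A minimal sketch of the two steps (the $Phy keyword and the group name URA are illustrative; the exact argument forms are defined in Table 4.46):

inqraid $Phy -gvinf -CLI
(saves the signature and volume layout information to the system disk)

pairdisplay -l -fd -g URA | inqraid -svinf
(restores the saved information to the S-VOLs of group URA after the split)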
4.19.2 GPT Disk for Windows 2003/2008
Windows 2003/2008 supports a basic disk called a "GPT disk", which uses a GUID partition table instead of the signature. A GPT disk can also be used as the S-VOL of BC, so RAID Manager adds a way of saving/restoring the GUID DiskId of a GPT basic disk to the inqraid command.
D:\HORCM\etc>pairdisplay -l -fd -g URA | inqraid -svinfex=Harddisk
[VOL61459_448_DA7C0D91] -> Harddisk10 [OPEN-V ]
[VOL61459_449_D4CB5F17-2ADC-4FEE-8650-D3628379E8F5] -> Harddisk11 [OPEN-V ]
[VOL61459_450_9ABDCB73-3BA1-4048-9E94-22E3798C3B61] -> Harddisk12 [OPEN-V ]

-gplbaex option (Windows 2003 only): This option is used to display the usable LBA on a physical drive in units of 512 bytes, and to determine the [slba] [elba] options for the raidvchkset command.
4.19.3 Directory Mount Facility for Windows Systems
The mountvol command attached to Windows (2008, 2003, or 2000) supports directory mounting, but it does not support a directory mount function that flushes the system buffer associated with a logical drive, as UNIX systems do. The directory mount structure on Windows is only a symbolic link between a directory and a Volume{guid}, as illustrated in Figure 4.79 below.
Mount and sync using a Volume{GUID} for Windows 2008/2003/2000: RAID Manager supports the mount command option specified with the device object name, such as "\Device\HarddiskvolumeX". Windows changes the device number of the device object name after recovering from a failure of the PhysicalDrive, so a mount command specified with the device object name may fail. Therefore, RAID Manager also supports a mount command option that specifies a Volume{GUID} as well as the device object name.
4.20 Host Group Control
The Hitachi RAID storage systems (9900V and later) can define host groups on a port and allocate a host LU to each host group. CCI does not use this host LU; it specifies volumes using the absolute LUN within the port. This can confuse users, because the LUN in CCI notation does not correspond to the LUN seen on the host view or the Remote Console. Thus, CCI supports a way of specifying a host group and the LUN on the host view.
4.20.2 Commands and Options Including a Host Group
(1) Specifiable commands for a host group
The following commands are able to specify a host group with the port string: raidscan -p, raidar -p, raidvchkscan -p.

# raidscan -p CL2-D-1
PORT#    /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL2-D-1 /da/ 0,  4,  0.1(256)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0,  4,  1.1(257)...........SMPL ---- ------ ----, ----- ----
CL2-D-1 /da/ 0,  4,  2.1(258)..........
4.21 Using CCI SLPR Security
The Virtual Partition Manager (VPM) feature of the Hitachi RAID storage systems (USP V/VM and TagmaStore USP/NSC) supports Storage Logical Partitioning (SLPR), a feature that partitions the ports and volumes of the RAID storage system. If CCI does not use SLPR security, it can operate on target volumes across SLPRs through the command device.
4.21.1 Specifying the SLPR Protection Facility
When you want to access certain SLPRs on a single host, use the CCI protection facility so that the host can access multiple SLPRs through a single command device. The following outline reviews the setup tasks for the SLPR protection facility.
1. Setting SLPR on the command device: The command device has an SLPR number and an associated bitmap, so you can set multiple SLPRs.
4.21.2 SLPR Configuration Examples
4.21.2.1 Single Host
Figure 4.81 provides an example of when control is denied to the paircreate and raidscan commands in the following cases:
The volume described on RM INST1 belongs to a different SLPR than the command device, so the paircreate command cannot control the paired volume.
The specified port belongs to a different SLPR than the command device, so the raidscan -p CL3-A command cannot scan any ports that are defined as SLPR#N.
(Figure 4.82 Operation Across SLPRs Using Two Command Devices on a Single Host: a host with RM INST0 and RM INST1, ports CL3-A and CL1-A, command devices (CM) in SLPR0, and a P-VOL in SLPR#M paired with an S-VOL in SLPR#N.)

To operate on SLPR#N, share the command device. If RMINST1 has a shared command device for SLPR#N, the paircreate command is permitted. Additionally, the raidscan -p CL3-A command (via RMINST0) is permitted to scan the port, because the shared command device has the bitmap settings for SLPR#M and SLPR#N.
4.21.2.2 Dual Hosts
In the following example, the paircreate command is unable to operate the paired volume because the volume described on HostB belongs to a different SLPR than the command device. Also, the raidscan -p CL3-A command (via both hosts) is unable to scan the port, because the specified port belongs to a different SLPR than the command device.

(Figure: HostA with RM INST0 and HostB with RM INST1, ports CL3-A and CL1-A, a command device (CM) in SLPR0, and a P-VOL in SLPR#M paired with an S-VOL in SLPR#N.)
To operate on SLPR#N, share the command device. If HostB has a shared command device for SLPR#N, the paircreate command is permitted. Also, the raidscan -p CL3-A command (via HostA) is allowed to scan the port, because the shared command device has the bitmap settings for SLPR#M and SLPR#N.

(Figure: HostA with RM INST0 and HostB with RM INST1, ports CL3-A and CL1-A, a shared command device (CM) in SLPR0, and a P-VOL in SLPR#M paired with an S-VOL in SLPR#N.)
4.21.2.3 TrueCopy Using Dual Hosts
In the following example, a pair-operation command (except with the -l option) determines whether the operation on the paired volumes should be permitted at the remote site. As a result, the paircreate command is not allowed to operate the paired volume, because the volume described on HostB belongs to a different SLPR than the command device. Also, the raidscan -p CL3-A command (on HostB) is not allowed to scan the port.
4.22 Controlling Volume Migration
Volume migration, including migration involving external volumes, needs to be controllable from the CLI in a Data Lifecycle Management (DLCM) solution. CCI supports volume migration that cooperates with CC (Cruising Copy) and the external connection by operating the current ShadowImage function and the VDEV mapping of the external connection.
(1) Command specification
CCI operates volume migration by describing the volumes in the horcm*.conf file in the same way as for SI and TC, because volume migration using CCI requires the mapping for the target volume to be defined. An MU# that is not used for SI (i.e., is SMPL as SI) is used for the CC operation. The original volume for the migration is defined as the P-VOL; the target volume for the migration is defined as the S-VOL.
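A minimal sketch of such a description in horcm*.conf (the group name, device name, port, and addressing values are illustrative; MU#1 stands for an MU# that is unused by SI):

HORCM_DEV
#dev_group   dev_name   port#   TargetID   LU#   MU#
VG01         mig01      CL1-A   0          1     1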
4.22.2 Commands to Control the Volume Migration
(1) Command for volume migration
CCI supports volume migration by adding an option (-m cc) to the paircreate command.
paircreate -g <group> -d <pair vol> ... -m <mode> -vl[r] -c <size>
-m mode = cc (specifiable with HOMRCF only): This option specifies the Cruising Copy mode for volume migration.
Note: This option cannot be specified together with the -split option in the same command.
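A minimal sketch of starting a migration for the group defined above (the names and copy pace value are illustrative):

paircreate -g VG01 -d mig01 -m cc -vl -c 15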
(3) Command for confirming the status
The status of CC can be confirmed by using the -fe option of the pairdisplay command.
pairdisplay -g <group> -fe
-fe: This option displays the serial# and LDEV# of the external LUNs mapped to the LDEV, plus additional information for the pair volume. The information is appended to the last columns of the output, so the 80-column format is no longer preserved. This option is invalid if the cascade options (-m all, -m cas) are specified.
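For example, a minimal check of the migration status for the illustrative group above:

pairdisplay -g VG01 -fe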
(4) Command for discovering an external volume via the device file
External volumes can be discovered by using the inqraid command. Example in Linux:
# ls /dev/sd* | .
Group: This item shows the physical position of an LDEV according to the mapping of the LDEV in the RAID storage system.
4.22.3 Relations between "cc" Command Issues and Status
The migration volumes can be handled by issuing the CCI commands (the pair creation and pair splitting commands). The validity of the specified operation is checked according to the status of the paired volume (primary volume). Table 4.52 shows the relations between the migration volume statuses and command acceptance.
4.22.4 Restrictions for Volume Migration
Volume migration must be used within the following restrictions:
ShadowImage (HOMRCF): The volume migration operation must be performed in the SMPL, PAIR, or COPY state; otherwise the "paircreate -m cc" command is rejected with EX_CMDRJE or EX_CMDIOE. Also, HOMRCF cannot operate on a CC_SVOL that is being moved by Cruising Copy. While the CC_SVOL is being copied, the copy operation for the volume migration is stopped if a pairsplit command for HOMRCF is executed.
Chapter 5 Troubleshooting
This chapter contains the following resources to address issues that you may encounter while working with the CCI software:
General Troubleshooting (section 5.1)
Changing IO Way of the Command Device for AIX (section 5.2)
Error Reporting (section 5.3)
Calling the Hitachi Data Systems Support Center (section 5.
5.1 General Troubleshooting
If you have a problem with the CCI software, first make sure that the problem is not being caused by the UNIX/PC server hardware or software, and try restarting the server. Table 5.1 provides operational notes and restrictions for CCI operations. For maintenance of Hitachi TrueCopy and ShadowImage volumes, if a failure occurs, it is important to identify the failed paired volume, recover the volume, and continue operation in the original system.
Condition: Sharing volumes in a hot standby configuration.
Recommended action: When a paired volume is used as the disk shared by the hosts in a hot standby configuration using HA software, use the primary volume as the shared disk and describe the corresponding hosts using the paired volume in the configuration definition file as shown below.
Condition: Error in paired volume operation (Hitachi TrueCopy only).
Recommended action: If an error occurs in duplicated writing to paired volumes (i.e., pair suspension), the server software using the volumes may detect the error by means of the fence level of the paired volume. In such a case, check the error notification command or the syslog file to identify the failed paired volume.
5.1.1 About Linux Kernel 2.6.9.XX Support of ioctl(SG_IO)
RAID Manager currently uses ioctl(SCSI_IOCTL_SEND_COMMAND) to send control commands to the command device. However, on RHEL 4.0 using kernel 2.6.9.XX, the following message is output to the syslog file (/var/log/messages) on every ioctl():
program horcmgr is using a deprecated SCSI ioctl, please convert it to SG_IO
This originates from the following kernel code in drivers/scsi/scsi_ioctl.
5.2 Changing IO Way of the Command Device for AIX
RAID Manager tries to use ioctl(DK_PASSTHRU) or SCSI_Path_thru whenever possible; if that fails, it falls back to RAW_IO in the conventional way. Even so, RAID Manager may encounter an AIX FCP driver that does not fully support ioctl(DK_PASSTHRU) at a customer site. For this case, RAID Manager also supports forcing the use of RAW_IO by defining either the following environment variable or the /HORCM/etc/USE_OLD_IOCTL file (size=0).
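A minimal sketch of the two ways to force RAW_IO (the variable name USE_OLD_IOCTL mirrors the marker file name above and is assumed here to be the environment variable the text refers to; the instance number is illustrative):

export USE_OLD_IOCTL=1
horcmstart.sh 0

or

touch /HORCM/etc/USE_OLD_IOCTL
horcmstart.sh 0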
5.3 Error Reporting
Table 5.2 lists and describes the HORCM system log messages and provides guidelines for resolving the error conditions. Table 5.3 lists and describes the command error messages and their return values and also provides guidelines for resolving the error conditions. Table 5.4 and Table 5.5 list the generic error codes. Table 5.6 lists the specific error codes.
Table 5.3 Command Error Messages (Error Code / Error Message / Condition / Recommended Action / Value)

EX_COMERR / Can't be communicated with HORC Manager / This command failed to communicate with the CCI software. / Verify that HORCM is running by using UNIX commands [ps -ef | grep horcm]. / 255
EX_REQARG / Required Arg list / An option or arguments of an option are not sufficient. / Please designate the correct option using the -h option. /
EX_INVCMD / Invalid RAID command / Detected a contradiction for a command. / Call the Hitachi Data Systems Support Center. / 240
EX_ENOGRP / No such group / The designated device or group name does not exist in the configuration file, or the network address for remote communication does not exist. / Verify the device or group name and add it to the configuration file of the remote and local hosts. /
EX_EWSTOT / Timeout waiting for specified status / Detected a timeout before reaching the designated status. / Please increase the value of the timeout using the -t option. / 233
EX_EWSLTO / Timeout waiting for specified status on the local host / Timeout error because the remote did not notify about the expected status in time. / Please confirm that HORC Manager on the remote host is running. /
EX_ENXCTG / No CT groups left for OPEN Vol use. / An available CT group for OPEN Volume does not exist (TrueCopy Async or ShadowImage). / Please confirm whether all CT groups are already used by mainframe volumes (TC and TC390 Async, SI and SI390). / 215
EX_ENQCTG / Unmatched CTGID within the group / The CT group references within a group do not have an identical CTGID. /
The codes in Table 5.4 indicate generic errors returned by the following commands: horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairevtwait, pairvolchk, pairsyncwait, pairdisplay. Unrecoverable errors should be handled by checking the returned error code, without re-executing the command. Recoverable errors allow the command to be re-executed based on the error code.
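As an illustration of handling the return code in a script (the group name oradb and the retry policy are illustrative; which codes are recoverable must be taken from Tables 5.4 through 5.6):

#!/bin/sh
paircreate -g oradb -vl -f never
rc=$?
if [ $rc -ne 0 ]; then
    # Detailed status is logged in $HORCC_LOG even without this handling.
    echo "paircreate failed with return code $rc" >&2
    # Re-execute only for codes documented as recoverable, for example a
    # timeout such as EX_EWSTOT (233) after increasing the -t value.
    exit $rc
fi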
The codes in Table 5.5 are generic errors returned by the following commands: raidscan, raidqry, raidar, horcctl. Unrecoverable errors should be handled by checking the returned error code, without re-executing the command. Recoverable errors allow the command to be re-executed based on the error code.
The codes in Table 5.6 are specific errors returned by the following commands: horctakeover, paircurchk, paircreate, pairsplit, pairresync, pairevtwait, pairvolchk, pairsyncwait, raidvchkset. Unrecoverable errors should be handled by checking the returned error code, without re-executing the command. Recoverable errors allow the command to be re-executed (except for EX_EWSTOT of the horctakeover command) based on the error code. Refer to Chapter 4 for information on the possible error code(s) for each command.
5.4 Calling the Hitachi Data Systems Support Center
If you need to call the Hitachi Data Systems Support Center, please provide as much information about the problem as possible, including:
The Storage Navigator configuration information saved on diskette using the FD Dump Tool or FDCOPY function (see the Storage Navigator User's Guide for the storage system).
The circumstances surrounding the error or failure.
The exact content of any error messages displayed on the host system(s).
Appendix A Maintenance Logs and Tracing Functions
A.1 Log Files
The CCI software (HORCM) and the Hitachi TrueCopy/ShadowImage commands maintain internal logs and traces which can be used to identify the causes of errors and to keep records of the status transition history of paired volumes. Figure A.1 shows the CCI logs and traces. HORCM logs are classified into start-up logs and execution logs. The start-up logs contain data on errors which occur before HORCM becomes ready to provide services.
The start-up log, error log, trace, and core files are stored as shown in Table A.1. The user should specify the directories for the HORCM and command log files using the HORCM_LOG and HORCC_LOG environment variables as shown in Table A.2. If it is not possible to create the log files, or if an error occurs before the log files are created, the error logs are output to the system log file.
Table A.2 Log Directories

$HORCM_LOG: A directory specified using the environment variable HORCM_LOG. The HORCM log file, trace file, and core file, as well as the command trace file and core file, are stored in this directory. If no environment variable is specified, "/HORCM/log/curlog" is used.
$HORCC_LOG: A directory specified using the environment variable HORCC_LOG. The command log file is stored in this directory.
A.4 Logging Commands for Audit
RAID Manager supports command error logging only, so this logging function cannot be used for auditing the scripts that issue the commands. Therefore, RAID Manager also supports a function that logs the results of command executions by extending the current logging. This function has the following control parameter.
$HORCC_LOGSZ variable: This variable is used to specify the maximum size (in units of KB) and normal logging for the current command. '/HORCM/log*/horcc_HOST.
The masking feature enables this tracing without changing user scripts, and it is available for all RM commands (except inqraid and the EX_xxx error codes). For example, if you want to mask pairvolchk (returning 22) and raidqry, you can specify them as shown below.
pairvolchk=22
raidqry=0
Users can track the execution of their scripts and then decide what to mask by auditing the command logging file as needed. Relationship between an environment variable and Horcc_HOST.
/HORCM/log*/horcc_HOST.conf file:
# For Example
HORCC_LOGSZ=2048
# The masking variable
# This variable is used to disable the logging by the command and exit code.
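The same size can also be set per session through the environment variable described above (the group name oradb is illustrative):

export HORCC_LOGSZ=2048
pairdisplay -g oradb
(the result of this command execution is now logged, up to 2048 KB)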
Appendix B Updating and Uninstalling CCI
B.1 Uninstalling UNIX CCI Software
After verifying that the CCI software is not running, you can uninstall the CCI software. If the CCI software is still running when you want to uninstall it, shut down the CCI software using the horcmshutdown.sh command to ensure a normal end to all TrueCopy/ShadowImage functions.
Caution: Before uninstalling CCI, make sure that all device pairs are in simplex status. To uninstall the CCI software from a root directory (see Figure B.
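A minimal sketch of the pre-uninstall steps (the group name, instance numbers, and installation path are illustrative):

# pairsplit -g oradb -S
(return any remaining pairs to simplex)
# /HORCM/usr/bin/horcmshutdown.sh 0 1
(shut down HORCM instances 0 and 1)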
B.3 Uninstalling Windows CCI Software
After verifying that the CCI software is not running, you can uninstall the CCI software. If the CCI software is still running when you want to uninstall it, shut down the CCI software using the horcmshutdown command to ensure a normal end to all TrueCopy/ShadowImage functions.
Caution: Before uninstalling the CCI software, make sure that all device pairs are in simplex mode.
To uninstall the CCI software:
1. On the Control Panel, select the Add/Remove Programs option.
2.
Appendix C Fibre-to-SCSI Address Conversion
Disks connected via Fibre Channel are displayed as SCSI disks on UNIX hosts, and disks connected via Fibre Channel can be fully utilized.

(Figure C.1 Example Fibre Address Conversion: the Fibre AL_PA conversion table maps LU #0 through LU #n to a target ID.)

Note: Use the fixed address AL_PA (0xEF) when using iSCSI. CCI converts fibre-channel physical addresses to SCSI target IDs (TIDs) using a conversion table (see Figure C.2).
Conversion table for Windows: The conversion table for Windows is based on conversion by an Emulex driver. If the fibre-channel adapter is different (e.g., Qlogic, HP), the target ID indicated by the raidscan command may differ from the target ID on the Windows host. Figure C.1 shows an example of using the raidscan command to display the TID and LUN of Harddisk6 (HP driver).
C.1 LUN Configurations on the RAID Storage Systems
The Hitachi RAID storage systems (9900V and later) manage the LUN configuration on a port through LUN security, as shown in Figure C.4.
C.2 Fibre Address Conversion Tables
Table C.2, Table C.3, and Table C.4 show the fibre address conversion tables:
Table number 0 = HP-UX systems (see Table C.2)
Table number 1 = Solaris and IRIX systems (see Table C.3)
Table number 2 = Windows systems (see Table C.4)
Note: The conversion table for Windows systems is based on the Emulex driver.
Acronyms and Abbreviations

3DC       three-data-center
AL-PA     arbitrated loop-physical address
AOU       allocation on use (another name for Hitachi Dynamic Provisioning)
BMP       bitmap
C RTL     C Run-Time Library
CCI       Command Control Interface
CD-ROM    compact disk – read-only memory
CLPR      Cache Logical Partition
CM        Cluster Manager
COW       Copy-on-Write
CTGID     consistency group ID
CU        control unit
CVS       custom volume size
DB        database
DFW       DASD fast write
DRU       Data Retention Utility
ELBA      ending logical block address
ESCON     Enterpr
LBA       logical block address
LCP       local control port
LDEV      logical device
LDKC      logical disk controller (used for USP V/VM)
LDM       Logical Disk Manager
LU        logical unit
LUN       logical unit number
LUSE      Logical Unit Size Expansion
LV        logical volume
LVM       logical volume manager
MB        megabytes
MCU       main control unit (Hitachi TrueCopy only)
MRCF      Multi-RAID Coupling Feature (refers to ShadowImage)
MSCS      Microsoft Cluster Server
MU        mirrored unit
NSC       Hitachi TagmaStore Network Storage Controller
OPS       Oracle Parallel Ser
OS
TID       target ID
UR        Hitachi Universal Replicator
USP       Universal Storage Platform
VPM       Virtual Partition Manager
V-VOL     virtual volume
VxVM      VERITAS Volume Manager
WR        write