Virtual TapeServer for NonStop Servers Supplemental Installation Guide HP Part Number: 690172-001 Published: February 2012 Edition: All J06 release version updates (RVUs), all H06 RVUs, and all G06 RVUs HP Confidential
© Copyright 2012 Hewlett-Packard Development Company, L.P. Legal Notice Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Preface
  Supported release version updates
  Typographical conventions
  Related documentation
Preface Welcome to the Virtual TapeServer Supplemental Installation Guide. This guide provides additional configuration information for Virtual TapeServer (VTS), which should be used after completing the procedures in the Virtual TapeServer Quick Start Guide and Virtual TapeServer Configuration Guide. This guide is provided for HP personnel only.
Installing GFS The Global File System (GFS) is an advanced feature that allows Linux servers to simultaneously read and write files on a single shared file system on a SAN. VTS is based on Linux, and GFS enables multiple VTS servers to access a shared set of pools and virtual tapes. The Event Management Service (EMS) can then automatically mount virtual tapes from the GFS pools as if they were separately mounted.
• Configuring fencing devices with Conga (luci and ricci) — http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-config-fence-devices-conga-CA.html

Note: When connecting to the Fibre Channel switch used for fencing, ensure that dual paths are not created when zoning is configured.

• All other Red Hat documentation — http://www.redhat.com/docs/manuals/csgfs/

The following topics are relevant: “Red Hat Cluster Suite Overview for RHEL5.
/media/cdrom/installGFS.bash

Ignore any warnings that may be displayed.

5. Unmount and eject the DVD:

umount /media/cdrom
eject

6. Enter the following commands to disable clustering services that are included with GFS but not used by VTS. Failure to disable these services can cause the system to hang.

chkconfig openais off && service openais stop
chkconfig saslauthd off && service saslauthd stop

If any of these commands fails, it is not an error; it simply indicates that the process was not running.

7.
Disk /dev/sde: 18.4 GB, 18413722112 bytes
255 heads, 63 sectors/track, 2238 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start      End     Blocks   Id  System
/dev/sde1             1     2238  17976703+   83  Linux

If the vault will be 2-4TB in size, complete these steps to partition the disk:

a. Start the partition editor, which is an interactive program similar to fdisk:

parted /dev/sde

b. Create a GPT disk label, which is a GUID partition table:

mklabel gpt

c.
Here is an example of the output:

Reading all physical volumes. This may take a while...
Found volume group "gfsvg1" using metadata type lvm2

g. Enter the following to display details about the physical volume:

pvdisplay

Here is an example of the output:

--- Physical volume ---
PV Name               /dev/sde1
VG Name               gfsvg1
PV Size               17.14 GB / not usable 3.37 MB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              4388
Free PE               0
Allocated PE          4388
PV UUID               tTHBFt-6pqc-ILIY-Uis5-L8Yn-bvBu-SCN3MV

h.
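As a sketch (not part of VTS) of how the pvdisplay output above might be checked from a script, the following parses the Free PE field to confirm the physical volume is fully allocated. The sample output is embedded here; on a real node you would pipe in `pvdisplay` itself.

```shell
# Parse pvdisplay-style output and confirm no free physical extents remain.
pv_output='--- Physical volume ---
PV Name               /dev/sde1
VG Name               gfsvg1
Total PE              4388
Free PE               0
Allocated PE          4388'
free_pe=$(printf '%s\n' "$pv_output" | awk '/^Free PE/ {print $3}')
[ "$free_pe" -eq 0 ] && echo "PV is fully allocated (Free PE = 0)"
```

With the sample output above, this prints "PV is fully allocated (Free PE = 0)".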
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                17.14 GB
Current LE             4388
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:0

11. Create the GFS file system:

a. Enter the following command. Note that any data that may reside on the logical volume (/dev/gfsvg1/lv1 is used as an example) is destroyed.
The file should list all cluster nodes. The localhost entry is for the system you are using, and the other entries are for the cluster nodes.

b. On each node, start ricci:

chkconfig ricci on
service ricci start

To confirm that ricci is running, enter the following:

service ricci status

c. On one (and only one) of the cluster servers, configure and start the luci service. Red Hat recommends configuring luci on a non-cluster node.
d. Select brocade fabric switch.
e. Enter the hostname of the Fibre Channel switch.
f. Enter the IP address of the Fibre Channel switch.
g. Enter the username for accessing the Fibre Channel switch.
h. Enter the corresponding password.
i. Enter the port of the Fibre Channel switch.

Consult your SAN administrator for any of this information.
b. Enter the following commands on the console of the node:

mkdir /VAULT10
mount -a -t gfs
chown bill.root /VAULT10
chmod 750 /VAULT10
ls -al /VAULT10

The following is an example of output for the ls command:

total 12
drwxr-xr-x 2 root root 3864 May 15 15:24 .
drwxr-xr-x 4 root root 4096 May 15 17:59 ..

c. Enter the following command to verify that there is free space on the mounted GFS file system.
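As an illustrative sketch of how such a free-space check might be scripted, the following parses a df-style output line for the GFS mount point. The line is a sample embedded here (the sizes are invented); on the node you would run `df -h /VAULT10` instead.

```shell
# Extract the available space and usage percentage for /VAULT10 from a
# sample df output line (fields: device, size, used, avail, use%, mount).
df_line='/dev/mapper/gfsvg1-lv1   17G  1.1G   16G   7% /VAULT10'
avail=$(printf '%s' "$df_line" | awk '{print $4}')
used_pct=$(printf '%s' "$df_line" | awk '{print $5}')
echo "free space on /VAULT10: $avail ($used_pct used)"
```

With the sample line above, this prints "free space on /VAULT10: 16G (7% used)".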
17. Verify fencing.

Note: These steps verify Brocade Fibre Channel fencing only. Before performing these steps, make sure you are not logged in to the switch through Telnet. If you are logged in, the Brocade fencing script will fail with an error similar to the following:

/sbin/fence_brocade -a ip_addr -l username -n 2 -p password -o disable
pattern match read eof at ./fence_brocade line 138
# echo $?
255

where ip_addr, username, and password are those of the Fibre Channel switch.
Troubleshooting

This section describes how to verify the installation and troubleshoot any issues.

1. Verify that the appropriate services are enabled and started by entering the following commands:

chkconfig cman on && service cman restart
chkconfig clvmd on && service clvmd restart
chkconfig ricci on && service ricci restart

Note that these commands also restart fencing and deactivate the pool.

2.
Here is an example of the output:

Version: 6.0.1
Config Version: 5
Cluster Name: cma
Cluster Id: 711
Cluster Member: Yes
Cluster Generation: 64
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 7
Flags: 2node
Ports Bound: 0 11
Node name: 192.168.80.2
Node ID: 2
Multicast addresses: 239.192.2.201
Node addresses: 192.168.80.
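As a sketch (not part of VTS) of how the status output above could be checked from a script, the following parses the membership state and vote counts to confirm the node is a member of a quorate cluster. A trimmed sample of the output is embedded here; on a node you would pipe in `cman_tool status`.

```shell
# Parse cman_tool-style status output: confirm cluster membership and
# that total votes meet the quorum requirement.
status='Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1'
state=$(printf '%s\n' "$status" | awk -F': ' '/^Membership state/ {print $2}')
votes=$(printf '%s\n' "$status" | awk -F': ' '/^Total votes/ {print $2}')
quorum=$(printf '%s\n' "$status" | awk -F': ' '/^Quorum/ {print $2}')
if [ "$state" = "Cluster-Member" ] && [ "$votes" -ge "$quorum" ]; then
  echo "node is a cluster member and the cluster is quorate"
fi
```

With the embedded sample, this prints "node is a cluster member and the cluster is quorate".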
Enabling BMA Integration and Migration Through integration with a backup management application server, you can read and write files to and from Virtual TapeServer (VTS). You can then enable VTS to export, or “migrate”, virtual tapes to physical tapes using an attached external tape device. On upgraded servers, you can configure the hsm_ parameters in the configuration file to enable migration, which is described in this chapter.
• A VTSPolicy command sent from the host server through EMS can initiate migration. VTSPolicy can initiate many of the commands that are available on the BMA server. The command can be scheduled on the host server using NetBatch or appended to a Tandem Advanced Command Language (TACL) backup script. Be sure that the BMA client is installed and configured on the VTS server. Refer to the BMA documentation for more information.
hsm_backup_options
  For Backup Express only. Values: checksum, EOJ, or retention. Required: Yes.

hsm_backup_options_poolname
  For Backup Express only. Values: checksum, EOJ, or retention. Required: Yes.

hsm_restore_device
  For Backup Express only: Specifies the restore device. Values: Name of the restore device. Required: No.
hsm_backupset
  For CommVault Galaxy only: Specifies the name of the backup set, which is the logical grouping of subclients in CommVault. Values: Backup set name. Required: Yes.

hsm_subclient
  For CommVault Galaxy only: Specifies the name of the subclient policy, which consists of one or more CommVault subclient templates. Values: Policy name. Required: Yes.
ems_hsm_backup_notification
  Sends notifications to the NonStop kernel (NSK) from the EMS Telnet policy process. For example, after a virtual tape is migrated, a success or failure message is generated so that the NSK can determine whether the operation succeeded. Values: YES or NO. Default value: YES. Required: No.

Here are sample configurations for each BMA. When setting values for your environment, be sure to use your own hostname, paths, and so on.
hsm_server='stingray3'
hsm_client='server42'
hsm_backupset='server42_OnDemand'
hsm_subclient='default'

4. Click SAVE.

5. Set the username and password for the BMA.

Note: This step is required for CommVault Galaxy, Backup Express, and NetBackup only.

a. Click Security→Passwords on the navigation pane. The following page is displayed:
b. Select hsm from the drop-down list and configure a password. Click Help for complete instructions.
Enabling and Configuring AutoCopy and Instant DR

Data Replication is an advanced software feature that enables you to back up data to remote Virtual TapeServer (VTS) servers. This feature was implemented as Instant DR and AutoCopy before this release; Instant DR and AutoCopy are supported on upgraded systems. To configure Instant DR and AutoCopy, you must configure network settings for all VTS locations.
Overview of Instant DR Instant DR enables you to create and maintain identical copies of backup data on VTS disk storage at one or more locations. Virtual tapes are transmitted from one vault to another. The Instant DR jobs indicate the pools or virtual tapes to be transmitted. When a job is initiated, a copy of the virtual tape is transmitted over the network link.
AutoCopy is enabled at the pool level; all virtual tapes contained in the pool are monitored and copied when necessary. If VTS fails to copy a virtual tape, it retries the AutoCopy operation every five minutes, for a total of twelve attempts. If all attempts fail, a failure message is sent to the event log.

Note: AutoCopy will fail if the virtual tape already exists in another pool on the target. Virtual tape names must be unique across all vaults and pools.
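The retry behavior described above can be sketched as a simple loop. This is not VTS code: `copy_tape` is a hypothetical stub standing in for the real copy operation (wired here to succeed on the fourth try), and the delay is set to zero so the sketch runs instantly, where VTS itself waits 300 seconds between attempts.

```shell
# Sketch of AutoCopy-style retries: up to 12 attempts, normally 5 minutes apart.
attempts=0
copy_tape() { [ "$attempts" -ge 3 ]; }   # stub: fails 3 times, then succeeds
delay=0                                   # 300 seconds in the real behavior
ok=no
while [ "$attempts" -lt 12 ]; do
  if copy_tape; then ok=yes; break; fi
  attempts=$((attempts + 1))
  sleep "$delay"
done
if [ "$ok" = yes ]; then
  echo "copy succeeded on attempt $((attempts + 1))"
else
  echo "copy failed after 12 attempts; a failure message goes to the event log"
fi
```

With the stub above, the sketch prints "copy succeeded on attempt 4".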
• At what level are changes made (implementation granularity)?
  Instant DR: Vault, pool, or virtual tape. AutoCopy: Pool only.

• How much data is transmitted?
  Instant DR: Data difference only, which may reduce data transmission. AutoCopy: Bit-for-bit (1:1) copy operation.

• When is bandwidth consumed?
  Instant DR: Per schedule, usually off-hours. AutoCopy: Throughout the day; may coincide with user activity.

• How many backup sites are supported?
  Instant DR: Multiple sites, from one site to many based on setup. AutoCopy: Single site, from one site to another.

• Is tuning supported
To configure network settings 1. Verify that the hostname, IP address, and gateway are configured on each VTS server in the environment. Refer to the Quick Start Guide for more information. 2. If DNS or DHCP is not configured in your environment and you want the servers to communicate using hostnames, set up the /etc/hosts file to configure aliases for each VTS server in the environment. Perform this step on the target server for each source VTS server. a. At the command prompt, log in. b.
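As an illustration of the aliases described above, the /etc/hosts entries on a target server might look like the following. The hostnames and addresses here are invented examples, not values from your environment.

```
127.0.0.1      localhost.localdomain   localhost
192.168.80.1   chicago
192.168.80.2   boston
```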
c. Press ENTER to save the file in the default location. This step creates the /home/bill/.ssh/ directory.
d. Press ENTER to skip the passphrase.
e. Press ENTER to confirm skipping the passphrase.
f. Copy the generated authorization key to the target server (boston):

ssh-copy-id -i /home/bill/.ssh/id_rsa.pub bill@boston

g. When prompted, enter yes.
h. Enter the password for the bill user at the target server.
Configuring AutoCopy Note A default configuration file is defined for each VTS server. To override the default settings, you must define settings as described below. Requires the Edit Configuration File access right To modify the VTS configuration file to configure AutoCopy 1. Enable Data Replication licensing by entering the key on the Manage System Licenses page (click Configuration→System on the navigation pane and then click Manage System Licenses). 2.
autocopy_target_poolname
  Specifies the target of the copy operation for the pool specified by poolname. Specify this parameter for each pool listed by the autocopy_pools parameter. Values: VTS system name, such as server42.
  • You can set this parameter to a target VTS system name; in this case, the pool is copied to the same path on the target server. Ensure that an identically named vault exists on the target system.
rep_delay
  Specifies the time to wait (in seconds) before starting to copy data, for virtual tapes in VTLs. This optional parameter provides a delay to the host so that it can issue a mount for verification. When a mount request is received, any running AutoCopy operation for the virtual tape is terminated and the partially copied data is removed from the remote system.
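Pulling the parameters above together, a minimal AutoCopy configuration might look like the following. This is a sketch only: the pool and server names are invented, the space-separated list format for autocopy_pools is an assumption, and only the system-name form of the target parameter is shown.

```
autocopy_enable='YES'
autocopy_pools='POOLA POOLB'
autocopy_target_POOLA='server42'
autocopy_target_POOLB='server42'
rep_delay='60'
```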
jobsets, one right after another, to run multiple synchronization tasks simultaneously, although this depends on the performance capabilities of your host server. Each jobset contains entries that identify the virtual tapes to process. A virtual tape is identified by vault, pool, and virtual tape name. The destination name of the virtual tape on the remote VTS server, referred to as the target, can be the same as the source name or different.
5. Click SUBMIT to continue. The following page is displayed: 6. In the Backup target system field, type the name or IP address of the system to receive the backup. 7. In the Directory field, type the full path to the target vault (in UNIX notation). The path must begin with the vault where the backup will be made. The remainder of the string is specific to your environment. 8.
• Specify a single = in any field to reuse the value directly above it (previous line, same column). Specify == or * in the pool field to indicate the same pool as referenced in the source entry, such as /VAULT01/==. You can specify == for the target name to use the same name as the virtual tape you are synchronizing.
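The "=" shorthand can be illustrated with a small awk sketch that carries a field's value forward from the previous line. This is not the actual jobset parser, and the four-column entry layout (vault, pool, tape, target) is an assumption made for illustration only.

```shell
# Expand "=" fields using the same column from the previous entry.
expand() {
  awk '{ for (i = 1; i <= NF; i++) { if ($i == "=") $i = prev[i]; prev[i] = $i } print }'
}
printf '%s\n' \
  '/VAULT01 POOLA TAPE001 TAPE001' \
  '= = TAPE002 TAPE002' | expand
```

Here the second entry expands to "/VAULT01 POOLA TAPE002 TAPE002", reusing the vault and pool from the line above.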
To run a backup jobset Requires the Virtual Tape Instant DR access right To manually backup data to the remote site 1. Click Configuration→Tapes and Pools on the navigation pane. 2. Select instant dr from the window drop-down list at the top of the page. 3. Select the jobset to run. 4. Click DISPATCH at the bottom of the page to start the Instant DR process and to display the main Instant DR page.
To use a File Utility Program (FUP) call

You can issue a simple FUP call to copy the contents of a text file to the $0 process. Complete the following steps:

1. Create a text file named IDRTEXT1 on the NonStop server that contains the following line:

VTSPolicy IDR idrjobname1

where idrjobname1 is the name of the Instant DR job defined in VTS. Note that this text is case sensitive.

2.
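As a hedged sketch of the FUP call this section describes, the following assumes the conventional FUP COPY syntax for writing a file's records to the $0 process; verify the exact syntax against the FUP reference for your RVU before using it.

```
FUP COPY IDRTEXT1, $0
```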
#LOAD /LOADED evt_x/ $system.zspidef.
SCSI-to-Fibre Channel Adapter Upgrade This chapter provides instructions to replace one or more SCSI cards with the same number of VTS-supported Fibre Channel cards on the following VTS models: • VT5900-E, on the HP ProLiant DL385 G2 • VT5900-H, on the HP ProLiant DL385 G5 Refer to Upgrading a SCSI Adapter to a Fibre Channel Adapter on page 49 if you need to upgrade legacy hardware. Requires the View/Manage Configuration access right To replace SCSI cards with Fibre Channel cards and reconfigure VTS 1.
f. Save all changes and exit the editor. g. From the NonStop server, stop all virtual tape drives using the SCF STOP PMFname command. h. Power down all VTS SCSI converters using the switch on the rear of each unit. i. From a terminal window on VTS, enter shutdown now. 4. Remove the SCSI card(s) and install the Fibre Channel card(s). a. From the rear of the HP ProLiant DL385 G2 server, mark each SCSI cable with the port position to which it is attached.
g. After you install the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot. Then, re-install the PCI Riser cage in the DL385 G2 chassis. Once aligned, firmly press the module into place. Re-tighten the two blue pull tabs on the PCI Riser cage. h. Place the top cover back in place and secure it. Slide the HP DL385 G2 ProLiant server back into the rack and secure the two quick-disconnect screws on the front panel. i.
PCI Slot  Bus Number  Card Type  Virtual Tape Name
3A        N/A (left open for P800 controller card)
3B        N/A (left open for P800 controller card)
4A        0           2G Fibre   $VTAPE00
4B        1           2G Fibre   $VTAPE01
5A        2           2G Fibre   $VTAPE02
5B        3           2G Fibre   $VTAPE03

d. Click Submit. Confirm to reboot the server.

8. Use the VTS web interface to reconfigure a VTD. After the Fibre Channel ports are configured as virtual devices, the final step in this process is to reconfigure the VTDs.
Hardware Information for Legacy Installations This chapter describes the hardware that was shipped for the Virtual TapeServer (VTS) 6.03.xx and 6.04.xx installations and that is supported in an upgraded environment. It also provides cabling and Fibre Channel upgrade procedures for the old hardware. Hardware overview For VTS installations that are upgrading to 8.0, the following hardware may be installed.
External disk storage If VTS model VT5900-A, VT5900-C, VT5900-E, or VT5900-G is installed, the following hardware may have been installed for external disk storage. • VT5915, which is built on the StorageWorks MSA 1500cs. It provides one controller shelf and one MSA 20 disk shelf, and it includes dual controllers, each of which provides a single Fibre Channel port. The VT5915 also includes 12 750GB 7,200rpm SATA disk drives.
Cabling and connecting VTS Cabling for the various VTS models is described in the following sections.
Connections from the first SCSI converter:

HVD Bus A: To NonStop server      LVD Bus A: To port 8A on VTS server
HVD Bus B: To NonStop server      LVD Bus B: To port 8B on VTS server
HVD Bus C: To NonStop server      LVD Bus C: To port 7A on VTS server
HVD Bus D: To NonStop server      LVD Bus D: To port 7B on VTS server

Connections from the second SCSI converter:

HVD Bus A: To NonStop server
HVD Bus B: To NonStop server
HVD Bus C: To NonStop server
HVD Bus D: To NonStop server

LVD Bus A  LVD Bus B  LVD Bus C  LVD Bus D  T
Connecting the HP ProLiant DL380 G4 (VT5900-B and VT5900-C)

The VT5900-B and VT5900-C were built on a 2U (3.5-inch) chassis and provide four SCSI ports, numbered 0-3. The VT5900-C has a dual-channel Fibre Channel card but no DAT72 drive and is shipped with two (mirrored) 36GB SCSI drives, for use by the software only. An optional set of six SCSI disks (146GB or 300GB) is available to give this unit internal storage; the six SCSI disks replace the two 36GB drives.
The recommended cable connection order for the SCSI converter attached to a base model of the VT5900-E is as follows:

HVD Bus A: To NonStop server      LVD Bus A: To port 4A on VTS server
HVD Bus B: To NonStop server      LVD Bus B: To port 4B on VTS server
HVD Bus C: To NonStop server      LVD Bus C: To port 5A on VTS server
HVD Bus D: To NonStop server      LVD Bus D: To port 5B on VTS server

Note: When attaching NonStop servers to the HVD ports, it is recommended that you attach to HVD port D, then to HVD port C, and then
Connecting to the HP ProLiant DL385 G5 (VT5900-J)

Here is an illustration of the back of the VT5900-J:

1. 2Gb FC port (slot 4, port A)
2. 2Gb FC port (slot 5, port A)
3. 2Gb FC port (slot 5, port B)
4. 2Gb FC port (slot 4, port B)
5.
6. 4Gb FC port (slot 2, port B)
7. Power plugs
8. Ethernet port 2
9. Ethernet port 1
10. PS/2 port
11. PS/2 port
12. VGA port
13. USB ports
14. Serial port
15. iLO port
To connect to an external disk array or tape device

1. Connect one end of a fibre optic cable to the Fibre Channel (FC) port on the VTS server.
2. Connect the other end of the cable to the external disk array. You may want to connect a second cable to provide redundancy. When using the VT5917, for example, active/active failover can be configured.

To connect to external tape drives or libraries

1. Connect one end of a fibre optic cable to a Fibre Channel (FC) port on the VTS server.
2.
Connecting to the HP ProLiant DL185 G5 (VT5900-K)

Note: The VT5900-O that ships with version 8.0 is also built on the DL185 G5. Refer to the 8.0 Quick Start Guide for connection information for this server.

Here is an illustration of the back of the VT5900-K:

1. Power plugs
2. PS/2 port
3. PS/2 port
4. Serial port
5. FC port (slot 3, port A)
6. FC port (slot 3, port B)
7. Ethernet ports
8. USB ports
9. Mgmt port
10. VGA port
11. FC port (slot 2, port A)
12.
Connecting to the HP ProLiant DL380 G6 (VT5900-L)

Note: The VT5900-P that ships with version 8.0 is also built on the DL380 G6. Refer to the 8.0 Quick Start Guide for connection information for this server.

Here is an illustration of the back of the VT5900-L (base model):

1. 4Gb FC ports (slot 2)
2. 4Gb FC ports (slot 3)
3. Power plugs
4. 4Gb FC ports (slot 6)
5. Ethernet ports 4 and 3
6. iLO port
7. Serial port
8.
9. Ethernet ports 2 and 1
10. VGA port
11. USB ports
DL380 G4 (VT5900-B, VT5900-C)

Port  Bus #
1A    4
1B    5

Upgrading a SCSI Adapter to a Fibre Channel Adapter

SCSI-to-Fibre Channel Adapter Upgrade on page 35 provides instructions to replace one or more SCSI cards with the same number of VTS-supported Fibre Channel cards. Instructions for the 6.03.42 models are included in that appendix; refer to it for the steps required to replace a SCSI card.
6. Remove SCSI cards from the server as follows:

Note: This order must be followed or the upgrade will not work.

Every HP ProLiant DL585 server has SCSI and Fibre Channel cards in the same physical slots. Fibre Channel cards are always in slots 5 and 6. You will see slot numbers on the chassis above each slot; these numbers are visible on the inside and outside of the chassis. The following order of SCSI card removal must be followed:

a.
e. Remove the SCSI card in slot 7 and install the fifth Fibre Channel card in slot 7, if necessary. f. Remove the SCSI card in slot 8 and install the sixth Fibre Channel card in slot 8, if necessary. Here is a snapshot of the SCSI card after it is removed: 7. After you install all of the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot. Then, re-install the shipping bracket and make sure the pull tab is secure. 8.
5. Look inside the ProLiant DL380 server to the back right-hand side, where a pluggable module, called the PCI Riser cage, houses the SCSI and Fibre Channel adapters. You will see two round blue quick-release pull tabs that you need to open. Once they are loose, grasp the PCI Riser cage and pull up to remove it from the DL380 chassis. Refer to the diagrams on top of the PCI Riser cage if you have any questions about its removal.
The following order of SCSI card removal must be followed: a. Remove the SCSI card in slot 3 and install the first Fibre Channel card in slot 3. b. Remove the SCSI card in slot 2 and install the second Fibre Channel card in slot 2, if necessary. Here is a snapshot of the SCSI card after it is removed: 7. After you install all of the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot.