Virtual TapeServer for NonStop Servers Supplemental Installation Guide HP Part Number: 702038-001 Published: August 2012 Edition: All J06 release version updates (RVUs), all H06 RVUs, and all G06 RVUs
© Copyright 2012 Hewlett-Packard Development Company, L.P. Legal Notice Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Preface  v
    Supported release version updates  v
    Typographical conventions  v
    Related documentation  v
1 Installing GFS  1
    Installing or reinstalling GFS  2
    Troubleshooting  13
2 Enabling BMA Integration and Migration  15
3 Enabling and Configuring AutoCopy and Instant DR  25
    Overview of Instant DR  26
    Overview of AutoCopy  26
    Using Instant DR or AutoCopy  27
    Configuring network settings  28
    Configuring AutoCopy  32
    Configuring Instant DR  35
        Manually replicating a virtual tape using Instant DR  35
4 SCSI-to-Fibre Channel Adapter Upgrade  41
5 Hardware Information for Legacy Installations  47
    Connecting to the HP ProLiant DL385 G5 (VT5900-H)  54
    Connecting to the HP ProLiant DL385 G5 (VT5900-J)  57
    Connecting to the HP ProLiant DL185 G5 (VT5900-K)  59
    Connecting to the HP ProLiant DL380 G6 (VT5900-L)  60
    Connecting to the HP ProLiant DL185 G5 (VT5900-O)  60
    Modifying virtual tape connections  62
    Upgrading a SCSI Adapter to a Fibre Channel Adapter  63
        On the HP ProLiant DL585 G1 (VT5900-A)  63
        On the HP ProLiant DL380 G4 (VT5900-B and VT5900-C)  65
Index
Preface

Welcome to the Virtual TapeServer Supplemental Installation Guide. This guide provides additional configuration information for Virtual TapeServer (VTS); use it after completing the procedures in the Virtual TapeServer Quick Start Guide and the Virtual TapeServer Configuration Guide. This guide is provided for HP personnel only.
• Configuration Guide, which describes how to configure VTS and how to use the VTS web interface to manage VTS
• Help, which provides detailed instructions for working with the web interface
• Release Notes, which provides information about system support, known issues, upgrade and downgrade instructions, and other information about the current release

All documentation is available on the About page of the web interface.
1 Installing GFS

The Global File System (GFS) is an advanced feature that allows Linux servers to simultaneously read and write files on a single shared file system on a SAN. VTS is based on Linux, and GFS enables multiple VTS servers to access a shared set of pools and virtual tapes. The Event Management Service (EMS) can then automatically mount virtual tapes from the GFS pools as if they were separately mounted.
Installing or reinstalling GFS

During the installation of GFS, you must configure a fencing method for the cluster. You can configure Fibre Channel switch fencing if the external storage device is connected over Fibre Channel (for example, if the HP StorageWorks Modular SAN Array provides a built-in Fibre Channel switch). Or, you can configure HP Integrated Lights-Out (iLO) to handle fencing. Refer to the following for more information about fencing:

• Fencing overview — https://access.redhat.
4. Install the GFS RPMs by entering the following command:

   /media/cdrom/installGFS.bash

   Ignore any warnings that may be displayed.

5. Unmount and eject the DVD:

   umount /media/cdrom
   eject

6. Enter the following commands to disable clustering services that are included with GFS but not used by VTS. Failure to disable these services can cause the system to hang.
If the vault will be less than 2TB in size, complete the following steps to partition the disk:

a. Enter the following command to partition the device:

   fdisk /dev/sde

b. Enter n to add a new partition.
c. Enter p to specify the primary partition.
d. Enter 1 to specify the first partition.
e. Press ENTER to accept the defaults.
f. Enter w to save the configuration.

To confirm the configuration, enter the following command:

   fdisk -l /dev/sde

Here is an example of the output:

   Disk /dev/sde: 18.
10. Perform LVM initialization of the device. /dev/sde1 is used as an example partition on the /dev/sde device.

a. If GFS was previously installed and you want to remove all volume data, enter the following commands to remove the logical volume (lv1) and wipe labels from the physical volume (sde1):

   lvremove -f /dev/gfsvg1/lv1 gfsvg1
   pvremove /dev/sde1 -ff

   When prompted, enter y to confirm.

b.
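A minimal sketch of the usual LVM sequence for these substeps, assuming the gfsvg1 and lv1 names shown in the output that follows (the exact flags in your release's procedure may differ; -c y marks the volume group as clustered for clvmd):

   pvcreate /dev/sde1                   # initialize the partition as an LVM physical volume
   vgcreate -c y gfsvg1 /dev/sde1       # create a clustered volume group for the GFS vault
   lvcreate -l 100%FREE -n lv1 gfsvg1   # allocate all free extents to logical volume lv1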
g. Enter the following to display details about the physical volume:

   pvdisplay

Here is an example of the output:

   --- Physical volume ---
   PV Name               /dev/sde1
   VG Name               gfsvg1
   PV Size               17.14 GB / not usable 3.37 MB
   Allocatable           yes (but full)
   PE Size (KByte)       4096
   Total PE              4388
   Free PE               0
   Allocated PE          4388
   PV UUID               tTHBFt-6pqc-ILIY-Uis5-L8Yn-bvBu-SCN3MV

h.
i. Enter the following command to view details about the logical volume:

   lvdisplay

Here is an example of the output:

   --- Logical volume ---
   LV Name                /dev/gfsvg1/lv1
   VG Name                gfsvg1
   LV UUID                VQUsmh-LI1E-rBIm-3tCe-9o6K-cjlp-ah8e4j
   LV Write Access        read/write
   LV Status              available
   # open                 0
   LV Size                17.14 GB
   Current LE             4388
   Segments               1
   Allocation             inherit
   Read ahead sectors     0
   Block device           253:0

11. Create the GFS file system:

a. Enter the following command.
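A hedged sketch of the usual RHEL 5 gfs_mkfs invocation, assuming a hypothetical cluster name (vtscluster), the VAULT10 vault name used later in this chapter, and two journals for a two-node cluster:

   gfs_mkfs -p lock_dlm -t vtscluster:VAULT10 -j 2 /dev/gfsvg1/lv1

The -t value must take the form cluster_name:fs_name, and -j should match the number of nodes that will mount the file system.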
   Syncing...
   All Done

12. Start ricci and luci. For more information about these GFS services, refer to chapters 3 and 4 of the Red Hat Cluster Administration guide: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Administration/index.html. These services must be configured in the cluster before you can mount the newly created GFS volume. Complete the following steps to start the services.

a. Make sure that the luci system has a proper /etc/hosts file.
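A sketch of the usual RHEL 5 commands for starting these services, assuming luci runs on one of the cluster nodes (luci_admin init is needed only the first time, to set the admin password):

   chkconfig ricci on && service ricci start    # start the ricci agent on every node
   luci_admin init                              # initialize luci and set the admin password
   chkconfig luci on && service luci restart    # restart luci; it prints the web interface URL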
Note the URL given in the output; you will access it in the next step.

d. Configure the cluster using the luci web interface:
   a. Access the web interface by loading the URL given in the previous step in a web browser.
   b. When prompted, accept the certificate permanently.
   c. Click OK if a certificate domain mismatch warning is displayed.
   d. Log in by entering the luci admin username and password.
   e. Click Add a System.
   f. Enter the fully qualified domain name or IP address of the GFS system.
   g.
h. Enter the corresponding password. Consult your SAN administrator for this information.
i. Enter the port of the Fibre Channel switch. Consult your SAN administrator for this information.
j. Click Update.

Repeat these steps for each node in the cluster.

14. Add GFS storage to the cluster:

a. In the left-hand column of the web interface, under the cluster name, click Resources.
b. Click Add a resource.
c. Select GFS.

   Note Do not select GFS2.

d. For the name, enter the vault name (for example, VAULT10).
e.
b. Enter the following commands on the console of the node:

   mkdir /VAULT10
   mount -a -t gfs
   chown bill.root /VAULT10
   chmod 755 /VAULT10
   ls -al /VAULT10

The following is an example of output for the ls command:

   total 12
   drwxr-xr-x 2 root root 3864 May 15 15:24 .
   drwxr-xr-x 4 root root 4096 May 15 17:59 ..

c. Enter the following command to verify that there is free space on the mounted GFS file system.
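A df check is the usual way to confirm free space; a minimal sketch, assuming the /VAULT10 mount point created above:

   df -h /VAULT10    # the GFS file system should appear with nonzero available space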
17. Verify fencing.

Note These steps verify Brocade Fibre Channel fencing only. Before performing these steps, make sure you are not logged into the switch through Telnet. If you are logged in, the Brocade fencing script will fail with an error similar to the following:

   /sbin/fence_brocade -a ip_addr -l username -n 2 -p password -o disable
   pattern match read eof at ./fence_brocade line 138
   # echo $?
   255

where ip_addr, username, and password are those of the Fibre Channel switch.
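After a successful disable test, the port must be re-enabled. A sketch using the same fence_brocade syntax shown above (substitute your switch's address, username, password, and port number):

   /sbin/fence_brocade -a ip_addr -l username -n 2 -p password -o enable
   echo $?    # 0 indicates the operation succeeded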
Troubleshooting

This section describes how to verify the installation and troubleshoot any issues.

1. Verify that the appropriate services are enabled and started by entering the following commands:

   chkconfig cman on && service cman restart
   chkconfig clvmd on && service clvmd restart
   chkconfig ricci on && service ricci restart

Note that these commands also restart fencing and deactivate the cluster.

2.
If luci can see a node but indicates that ricci is not running, even though the node shows that ricci is running, output similar to the following is displayed:

   luci[15356]: Unable to establish an SSL connection to 192.168.80.2:11111: ricci's certificate is not trusted

Enter the following commands to remove luci:

   rpm -e luci
   rm -rf /var/lib/luci

You may need to reinstall luci or re-import the cluster. The luci RPM is available on the GFS Install/Upgrade DVD.
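When diagnosing cluster state during these checks, the standard RHEL 5 cluster status tools can confirm quorum and membership; a brief sketch:

   cman_tool status    # shows quorum state and node counts
   cman_tool nodes     # lists each node and its membership state
   clustat             # summarizes overall cluster and service status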
2 Enabling BMA Integration and Migration Through integration with a backup management application server, you can read and write files to and from Virtual TapeServer (VTS). You can then enable VTS to export, or “migrate”, virtual tapes to physical tapes using an attached external tape device. On upgraded servers, you can configure the hsm_ parameters in the configuration file to enable migration, which is described in this chapter.
client will request services from the BMA server, which may be located on the VTS server or on a remote server. The BMA server then identifies the tape to back up or restore and instructs the tape library to perform the operation.

• A VTSPolicy command sent from the host server through EMS can initiate migration. VTSPolicy can initiate many of the commands that are available on the BMA server.
you can specify any of these values, which depends on the BMA. Default value: TSM

hsm_server
    For Backup Express, NetBackup, Networker, and CommVault Galaxy only: Specifies the BMA server hostname or IP address.
    Values: Hostname or IP address of the VTS server. Required: Yes

hsm_pool
    For Networker only: Specifies the pool name.
    Values: Pool name. Required: Yes

hsm_backup_pools
    Values: Pool names, separated by commas. Required: Yes

hsm_optfile
    For Tivoli Storage Manager only: Specifies the path to the optfile file. Default value: dsm.opt
    Values: Path. Required: Yes

hsm_optfile_pool
    For Tivoli Storage Manager only: Specifies the path to the optfile file for a specific VTS pool. This parameter overrides the hsm_optfile parameter for the pool.
    Values: Path. Required: No

    Values: Integer, from 1-30. Required: No

    Specifies the backup policy.
    Values: Policy. Required: Yes

hsm_schedule
    For NetBackup only: Specifies the backup schedule. (Typically, a User schedule is specified.)
    Values: The schedule name. Required: Yes

hsm_client
    For CommVault Galaxy only: Specifies the hostname or IP address of the VTS server on which the CommVault client is installed.
    Values: Hostname or IP address. Required: Yes

hsm_backupset
    Values: Backup set name. Required: Yes

    Values: Policy name. Required: Yes

hsm_subclient
    subclients within a CommCell.

hsm_erase_after_backup
    Enables VTS to automatically erase a virtual tape after it is successfully migrated. Only the tape data is erased; the metadata remains. Default value: NO
    Values: YES or NO. Required: No

hsm_joblog_maxnum
    Specifies the maximum number of job log files to be retained. These files contain the output from each job. This value applies separately to backup and restore jobs.
    Values: Integer, from 0-999. Required: No

    Values: Integer. Required: No

hsm_restore_period
    For Backup Express only: Specifies how far back in time the BMA should search its catalog for the file to be restored. Specify the restore period in the nnU format where:
    • nn is a number between 1 and 99
    • U is the unit; specify "d" for days, "w" for weeks, or "y" for years
    Default value: 90d
    Values: Value. Required: No

ems_hsm_backup_notification
    Sends notifications to the NonStop kernel (NSK) from the EMS Telnet policy process.
    Values: YES or NO. Required: No
Here are sample configurations for each BMA. When setting values for your environment, be sure to use your hostname, paths, and so on.

• Tivoli Storage Manager:

   hsm_enable='YES'
   hsm_product='TSM'
   hsm_optfile='dsm.opt'
   hsm_optfile_E1_HAL_10YEARS='/usr/opt/dsm.server10.
   hsm_server='stingray3'
   hsm_client='server42'
   hsm_backupset='server42_OnDemand'
   hsm_subclient='default'

4. Click SAVE.

5. Set the username and password for the BMA.

   Note This step is required for CommVault Galaxy, Backup Express, and NetBackup only.

   a. Click Security > Passwords on the navigation pane. The following page is displayed:
   b. Select hsm from the drop-down list and configure a password. Click Help for complete instructions.
3 Enabling and Configuring AutoCopy and Instant DR

Data Replication is an advanced software feature that enables you to back up data to remote Virtual TapeServer (VTS) servers. This feature was implemented as Instant DR and AutoCopy before this release; Instant DR and AutoCopy are supported on upgraded systems. To configure Instant DR and AutoCopy, you must configure network settings for all VTS locations.
Overview of Instant DR Instant DR enables you to create and maintain identical copies of backup data on VTS disk storage at one or more locations. Virtual tapes are transmitted from one vault to another. The Instant DR jobs indicate the pools or virtual tapes to be transmitted. When a job is initiated, a copy of the virtual tape is transmitted over the network link.
AutoCopy is enabled at the pool level; all virtual tapes contained in the pool are monitored and copied when necessary. If VTS fails to copy a virtual tape, it retries the AutoCopy operation every five minutes for a total of twelve attempts. If the operation fails altogether, a failure message is sent to the event log.

Note AutoCopy will fail if the virtual tape already exists in another pool on the target. Virtual tape names must be unique across all vaults and pools.
How is replication initiated?
    Instant DR: Dispatch button on the Instant DR page in the VTS web interface
    AutoCopy: Virtual tapes in the pool are monitored and copied when necessary

At what level are changes made (implementation granularity)?
    Instant DR: Vault, pool, or virtual tape
    AutoCopy: Pool only

How much data is transmitted?
    Instant DR: Data difference only, which may reduce data transmission
    AutoCopy: Bit-for-bit (1:1) copy operation

When is bandwidth consumed?
    Instant DR: Per schedule, usually off-hours
    AutoCopy: Throughout the day, may coincide with user activity

How many backup sites are suppor
The following steps show how to enable Data Replication between two sites, Server A and Server B, connected by a wide area network (WAN):

To configure network settings

1. Verify that the hostname, IP address, and gateway are configured on each VTS server in the environment. Refer to the Quick Start Guide for more information.

2.
3. Test connectivity by pinging the network connections. At the prompt, enter ping hostname. For example, to ping Server B, enter ping server_b. Output similar to the following is displayed:

   64 bytes from server_b (10.10.2.145): icmp_seq=0 ttl=64 time=0.053 ms
   64 bytes from server_b (10.10.2.145): icmp_seq=1 ttl=64 time=0.053 ms
   64 bytes from server_b (10.10.2.145): icmp_seq=2 ttl=64 time=0.053 ms
   64 bytes from server_b (10.10.2.145): icmp_seq=3 ttl=64 time=0.053 ms

Press CTRL-C to stop the ping process.
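Name resolution for these hostnames typically comes from /etc/hosts entries on each server. A hypothetical sketch matching the output above (server_b's address is taken from the ping output; server_a's is assumed):

   # example only; server_a's address is hypothetical
   10.10.2.144   server_a
   10.10.2.145   server_b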
5. If you configured SSH and access to the bill account is restricted on the VTS servers, you must grant SSH access to the bill user for each VTS server. To do this, become root (enter su root) and then edit /etc/ssh/sshd_config to add this line:

   AllowUsers vtsa bill@source_svr

where source_svr is the IP address or hostname of the VTS server where the remote export job is originating.
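sshd reads its configuration only at startup, so the edit must be followed by a service restart. A short sketch, assuming the stock RHEL service script:

   su root
   vi /etc/ssh/sshd_config      # add: AllowUsers vtsa bill@source_svr
   service sshd restart         # apply the new AllowUsers directive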
Configuring AutoCopy

Note A default configuration file is defined for each VTS server. To override the default settings, you must define settings as described below.

To modify the VTS configuration file to configure AutoCopy

Requires the Edit Configuration File access right

1. Enable Data Replication licensing by entering the key on the Manage System Licenses page (click Configuration > System on the navigation pane and then click Manage System Licenses).

2.
not need to exist on the target server; it will be created, if necessary. You can also set this parameter to a vault and server, which defines the target path on the server. You can set this parameter to a pool on a target VTS server; in this case, VTS determines the vault in which the pool resides on the target system. In all cases, ensure that no virtual tape name on the target system matches the name of any virtual tapes to be copied (they will be overwritten).
rep_enable
    Enables AutoCopy for virtual tapes in VTLs. Default value: NO
    Values: YES or NO

rep_delay
    Specifies the time to wait (in seconds) before starting to copy data, for virtual tapes in VTLs. This is an optional parameter that provides a delay to the host so that it can issue a mount for verification.
    Values: 0 - 86400
   autocopy_target_DR1='SVR08'
   autocopy_target_DR2='SVR08:/VAULT00/DR2_AU'

5. Click SAVE.

6. Restart the TapeServer process on the Manage System Tasks page.

Configuring Instant DR

You can manually replicate data using an Instant DR jobset, or you can automatically replicate data through the use of EMS.

Manually replicating a virtual tape using Instant DR

To manually replicate data using Instant DR, you must define a jobset. Each jobset can back up one or more virtual tapes.
3. Select instant dr from the window drop-down list at the top of the page.
4. Click NEW in the Select Backup Set section of the page.
5. In the Enter new job name field, type the name of the jobset.
6. Click SUBMIT to continue. The following page is displayed:
7. In the Backup target system field, type the name or IP address of the system to receive the backup.
8. In the Directory field, type the full path to the target vault (in UNIX notation).
9. Select the Use Secure Transfer checkbox if SSH is configured on the VTS systems and you want to encrypt the information being transmitted. This transfer method is not necessary if the connection between the VTS systems is secure or in a trusted environment. The encryption process used in a secure transfer might cause some degradation in the data transfer rate. 10. Select the All Files checkbox if you want to send all files in the job to the remote system.
Do this by creating three lines in the jobset. If these virtual tapes are in the same pool, you only need to specify the vault and pool once and use = for these items on the second and third lines. On each line, specify the source virtual tape name. Specify ABCABC for the target name on the first line and then specify = on the last two lines.
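As an illustration only (the column order shown here, vault, pool, source tape, and target name, is an assumption about the jobset layout, and all names are hypothetical), the three lines might look like this:

   VAULT01   POOLA   TAPE01   ABCABC
   =         =       TAPE02   =
   =         =       TAPE03   =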
5. Click DISPATCH at the bottom of the page to start the Instant DR process and to display the main Instant DR page. You can monitor the process in the Job History box, which shows a one-line summary of the jobsets that are running and those that are complete. Click REFRESH until the Job History indicates finished. The virtual tape is available on the remote VTS server until it is needed or until the process runs again and updates it. To display complete job log information on the process, click §.
   #SET emphasis -1
   #SET evt_text VTSPolicy IDR idrjobname

Here is an explanation of the lines:

• #SET collector $0 — Refers to the EMS collector to which to send the message. Change collector to the name of the collector in the EMS configuration settings on VTS.
• #SET evt_num number — Assigns a number to the event.
• #SET action -1 — Refers to the way the message is displayed in an EMS viewer, such as ViewPoint. In this case, it is inverse text until acknowledged.
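Pieced together from the lines explained above, the complete fragment would look like the following (the event number 100 is a hypothetical value; substitute your own collector name, event number, and Instant DR jobset name):

   #SET collector $0
   #SET evt_num 100
   #SET action -1
   #SET emphasis -1
   #SET evt_text VTSPolicy IDR idrjobname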
4 SCSI-to-Fibre Channel Adapter Upgrade

This chapter provides instructions to replace one or more SCSI cards with the same number of VTS-supported Fibre Channel cards on the following VTS models:

• VT5900-E, on the HP ProLiant DL385 G2
• VT5900-H, on the HP ProLiant DL385 G5

Refer to "Upgrading a SCSI Adapter to a Fibre Channel Adapter" on page 63 if you need to upgrade legacy hardware.
c. Use the umount /VAULTxx command (where xx is the vault number) to unmount all vaults. Repeat this command for each vault on the system until all vaults are unmounted.
d. Verify that all vaults are unmounted by typing mount in the VTS terminal window. There should be no vaults mounted.
e. From the terminal window on VTS, open the /etc/fstab file. Use a text editor to comment out all vaults in the /etc/fstab file by placing a # symbol before each LABEL=/VAULT line (for example, #LABEL=/VAULT01).
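For illustration, a commented-out vault entry might look like this (the fields after the mount point are typical ext3 defaults, not values taken from this guide):

   #LABEL=/VAULT01   /VAULT01   ext3   defaults   1 2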
2. Remove the SCSI card in slot 4 and install the second Fibre Channel card in slot 4, if necessary. Here is a snapshot of the SCSI card after it is removed: g. After you install the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot. Then, re-install the PCI Riser cage in the DL385 G2 chassis. Once aligned, firmly press the module into place. Re-tighten the two blue pull tabs on the PCI Riser cage. h.
b. Power on all of the SCSI converters using the switch on the rear of each unit. Depending on how many SCSI cards were removed from the ProLiant server, some converter ports or entire units may no longer be used.

6. Configure the remaining SCSI port(s) with the original settings.

   a. On the VTS web interface, click Configuration > Manage Virtual Devices on the navigation pane.
   b. If prompted, enter the login credentials.
   c.
PCI Slot    Bus Number    Card Type    Virtual Tape Name
controller card)
4A          0             2G Fibre     $VTAPE00
4B          1             2G Fibre     $VTAPE01
5A          2             2G Fibre     $VTAPE02
5B          3             2G Fibre     $VTAPE03

d. Click Submit. Confirm to reboot the server.

8. Use the VTS web interface to reconfigure a VTD. After the Fibre Channel ports are configured as virtual devices, the final step in this process is to reconfigure the VTDs. VTDs originally set up as SCSI need to be edited for use on a Fibre Channel port.
5 Hardware Information for Legacy Installations

This chapter describes the hardware found in older Virtual TapeServer (VTS) installations that is supported in an upgraded environment. It also provides cabling and Fibre Channel upgrade procedures for the old hardware.

Hardware overview

For VTS installations that are upgrading to 8.0, the following hardware may be installed.
Internal storage If the VT5900-C, VT5900-E, VT5900-G, VT5900-H, VT5900-J, VT5900-K, or VT5900-O was purchased, additional internal storage may have been purchased for storing virtual tapes.
• VT5930 controller, which is built on the P2000 MSA and has 12 2TB drives and up to three VT5931 expansion chassis (P2000 MSA), each with 12 2TB drives; maximum capacity per P2000 system is 96TB.

Fibre Channel upgrade card

To replace a SCSI card with a Fibre Channel card on an HP ProLiant server, the VT5900-FCU may have been purchased. This Fibre Channel upgrade card is a 2 Gbps, dual-port, 64-Bit/133 MHz, PCI-X-to-Fibre Channel host bus adapter (HBA).
VTS Model    Hardware    Height    # of SCSI Ports    # of FC Ports    Optical or Magnetic Drives
VT5900-H     DL385 G5    2U        Up to 4            Up to 4          --
VT5900-J     DL385 G5    2U        0                  Up to 6          --
VT5900-K     DL185 G5    2U        0                  Up to 4          --
VT5900-L     DL380 G6    2U        0                  Up to 10         --
VT5900-O     DL185 G5    2U        0                  Up to 8          DVD/CD

Note For the most reliable service, use cables of the highest quality and shortest possible length based on the location of the equipment in the data center.
Connections from third SCSI converter

HVD Bus A    To NonStop server
HVD Bus B    To NonStop server
HVD Bus C    To NonStop server
HVD Bus D    To NonStop server
LVD Bus A    To port 4A on VTS server
LVD Bus B    To port 4B on VTS server
LVD Bus C    To port 3A on VTS server
LVD Bus D    To port 3B on VTS server

Here is an illustration of the port numbers on the back of the VT5900-A (DL585 G1):

Here is an illustration of the HVD and LVD ports on the back of the SCSI converter:

Connecting the HP ProLiant DL380 G4
LVD Bus A    To port 2A on VTS server
LVD Bus B    To port 2B on VTS server
LVD Bus C    To port 3A on VTS server
LVD Bus D    To port 3B on VTS server

Here is an illustration of the port numbers on the back of the DL380 G4:

Here is an illustration of the HVD and LVD ports on the back of the SCSI converter:

Connecting the HP ProLiant DL385 G2 (VT5900-E)

The VT5900-E is built on a 2U (3.5 inch) chassis and provides four SCSI ports, numbered 0-3.
Here is an illustration of the slots on the back of the base model. If the P800 card is installed, it is placed in slot 3 (below slot 4).

Here is an illustration of the HVD and LVD ports on the back of the SCSI converter:

Note If you replace one or both of the SCSI cards with the 2Gbps Fibre Channel card(s), as described in "SCSI-to-Fibre Channel Adapter Upgrade" on page 41, and you set any of the Fibre Channel ports to target mode (to connect the port to a virtual tape drive), port numbering will change.
Connecting to the HP ProLiant DL385 G5 (VT5900-H)

Here is an illustration of the back of the VT5900-H (base model):

1. SCSI port (slot 4, port A)
2. SCSI port (slot 5, port A)
3. SCSI port (slot 5, port B)
4. SCSI port (slot 4, port B)
5. 4Gb FC port (slot 2, port A)
6. 4Gb FC port (slot 2, port B)
7. Power plugs
8. Ethernet port 2 (eth1)
9. Ethernet port 1 (eth0)
10. PS/2 port
11. PS/2 port
12. VGA port
13. USB ports
14. Serial port
15. iLO port
• HVD Bus D > To NonStop server
• LVD Bus A > To VTS slot 4A
• LVD Bus B > To VTS slot 4B
• LVD Bus C > To VTS slot 5A
• LVD Bus D > To VTS slot 5B

It is recommended that you attach to HVD bus D, then to HVD bus C, and then to HVD buses A and B if necessary. Also, note that you do not have to use all of the SCSI buses (or converter ports) for VTDs. You can use the ports as initiators for legacy SCSI tape drives.

2.
other end of the cable to the LAN or WAN switch. If Data Replication, AutoCopy, or Instant DR is licensed, you may also want to connect to Ethernet port 2, which is on the right (labeled in the server diagram). Port 2 corresponds to eth1. Performance on port 1 may be affected if you do not dedicate a port for replication traffic. Connect the other end of the cable to the LAN or WAN switch. (You can also use port 2 for network configuration, as described below.)
Connecting to the HP ProLiant DL385 G5 (VT5900-J)

Here is an illustration of the back of the VT5900-J:

1. 2Gb FC port (slot 4, port A)
2. 2Gb FC port (slot 5, port A)
3. 2Gb FC port (slot 5, port B)
4. 2Gb FC port (slot 4, port B)
5. 4Gb FC port (slot 2, port A)
6. 4Gb FC port (slot 2, port B)
7. Power plugs
8. Ethernet port 2
9. Ethernet port 1
10. PS/2 port
11. PS/2 port
12. VGA port
13. USB ports
14. Serial port
15. iLO port
You may want to connect a second cable, to provide redundancy. When using the VT5917, for example, active/active failover can be configured.

To connect to external tape drives or libraries

1. Connect one end of a fibre optic cable to a Fibre Channel (FC) port on the VTS server.
2. Connect the other end of the cable to the external drive or library.
3. Repeat these steps for additional drives or libraries.
4. Note the port number(s) used on the VTS server.
Connecting to the HP ProLiant DL185 G5 (VT5900-K)

Note The VT5900-O that ships with version 8.0 is also built on the DL185 G5. Refer to the 8.0 Quick Start Guide for connection information for this server.

Here is an illustration of the back of the VT5900-K:

1. Power plugs
2. PS/2 port
3. PS/2 port
4. Serial port
5. FC port (slot 3, port A)
6. FC port (slot 3, port B)
7. Ethernet ports
8. USB ports
9. Mgmt port
10. VGA port
11. FC port (slot 2, port A)
12. FC port (slot 2, port B)
Connecting to the HP ProLiant DL380 G6 (VT5900-L)

Note The VT5900-P that ships with version 8.0 is also built on the DL380 G6. Refer to the 8.0 Quick Start Guide for connection information for this server.

Here is an illustration of the back of the VT5900-L (base model):

1. 4Gb FC ports (slot 2)
2. 4Gb FC ports (slot 3)
3. Power plugs
4. 4Gb FC ports (slot 6)
5. Ethernet ports 4 and 3
6. iLO port
7. Serial port
8.
9. Ethernet ports 2 and 1
10. VGA port
11. USB ports
The second FC card shown in slot 2 is optional and may not be installed. If installed, it may provide up to four ports instead of the two shown.

To connect to the host server

When connecting host servers over Fibre Channel to the VTS server, use one port per host server, if possible. If connecting more than one host server through a switch, ensure that the host servers are of the same type. Each Fibre Channel port on the VTS server is configured to emulate one host type only.
To connect for network configuration

To configure the server's network settings, you can connect the VTS server to the network using one of these options:

• Connect a computer to the server using a serial cable
• Connect a computer to the server using an Ethernet cable
• Connect to a monitor and keyboard

To connect using a serial cable:

1. Connect one end of a serial cable to one of the serial ports on the VTS server.
2. Connect the other end of the cable to a computer.
To determine the port ID, you must find the PCI slot number on the back of the VTS server module where the SCSI or Fibre Channel cable connects from the card to the target. Labels indicate the port number for each port.
to release to remove the shipping bracket. Once the shipping bracket is removed, you can access the blue tabs that lock and unlock the SCSI and Fibre Channel cards.

6. Remove SCSI cards from the server as follows:

   Note This order must be followed or the upgrade will not work.

   Every HP ProLiant DL585 server has SCSI and Fibre Channel cards in the same physical slots. Fibre Channel cards are always in slots 5 and 6. You will see slot numbers on the chassis above each slot.
Here is a snapshot of the SCSI card after it is removed: 7. After you install all of the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot. Then, re-install the shipping bracket and make sure the pull tab is secure. 8. Place the top cover back in place and secure it. Slide the HP DL585 ProLiant server back into the rack and secure the two quick-disconnect screws on the front panel. 9.
3. If the HP ProLiant DL380 is mounted in a rack, loosen the front pull tabs and slide the server out so that you can open the top cover. Do not remove the server from the rails. If there is a stabilizer foot, extend it so the enclosure does not tip forward. 4. Open the ProLiant DL380 top cover, remove it, and set it aside. 5. Look inside the ProLiant DL380 server to the back right-hand side where a pluggable module, called the PCI Riser cage, houses the SCSI and Fibre adapters.
Here is a snapshot of the SCSI card after it is removed: 7. After you install all of the Fibre Channel upgrade cards in the appropriate slots, make sure the blue clips are firmly seated and locked in place on each slot. Then, re-install the PCI Riser cage in the DL380 chassis. Once aligned, firmly press the module into place. Re-tighten the two blue pull tabs on the PCI Riser cage. 8. Place the top cover back in place and secure it.
PCI-X Slot    Bus Number    Virtual Tape Name
2A            2             $VTAPE02
2B            3             $VTAPE03
1A            4             Fibre *
1B            5             Fibre *