HP EVA 4000/6000/8000 and EVA 4100/6100/8100 User Guide

Abstract

This document is intended for customers who operate and manage the EVA 4000/6000/8000 and EVA 4100/6100/8100 storage systems. These models are sometimes referred to as EVA4x00, EVA6x00, and EVA8x00, or as EVAx000 and x100.

IMPORTANT: With the release of the P6300/P6500 EVA, the EVA family name has been rebranded to HP P6000 EVA. The names for all existing EVA array models will not change. The rebranding also affects related EVA software.
© Copyright 2005, 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents

1 Enterprise Virtual Array startup......................................11
    EVA8000/8100 storage system connections............................11
    EVA6000/6100 storage system connections............................12
    EVA4000/4100 storage system connections............................12
    Direct connect
    Enclosure address bus connections..................................34
    Error Condition Reporting..........................................34
        Error condition categories.....................................35
        Error queue....................................................35
        Error condition report format
    Windows 2003 MSCS cluster installation.............................60
    Connecting to C-series switches....................................60
    HP Insight Remote Support software.................................60
    Failback preference setting for HSV controllers
    Setting preferred paths............................................83
    Oracle Solaris.....................................................83
        Loading the operating system and software......................83
        Configuring FCAs with the Oracle SAN driver stack
    Certification and classification information......................106
    Canadian notice (Avis Canadien)...................................106
        Class A equipment.............................................106
        Class B equipment.............................................106
    European Union notice
    0.3.en.06 UNRECOVERABLE condition—No blowers installed............123
    Temperature conditions............................................123
        0.4.en.01 NONCRITICAL condition—High temperature..............123
        0.4.en.02 CRITICAL condition—High temperature.................124
        0.4.en.03 NONCRITICAL condition—Low temperature
C Controller fault management.........................................135
    Using HP P6000 Command View.......................................135
    GUI termination event display.....................................135
        GUI event display.............................................135
    Fault management displays
    Risks.............................................................152
    OpenVMS configuration.............................................153
        Requirements..................................................153
        HBA configuration.............................................153
    Risks
1 Enterprise Virtual Array startup

This chapter describes the procedures to install and configure the Enterprise Virtual Array. When these procedures are complete, you can begin using your storage system.

NOTE: Installation of the Enterprise Virtual Array should be done only by an HP authorized service representative. The information in this chapter provides an overview of the steps involved in the installation and configuration of the storage system.
EVA6000/6100 storage system connections Figure 2 (page 12) shows a typical EVA6000/6100 SAN topology: • The HSV200-A and HSV200-B controllers connect via two host ports (FP1 and FP2) to the Fibre Channel fabrics. The hosts that will access the storage system are connected to the same fabrics. • The HP Command View EVA management server also connects to both fabrics. • The controllers connect through one loop pair to the drive enclosures.
Figure 3 EVA4000/4100 configuration
1. Network interconnection
2. Management server
3. Non-host
4. Host X
5. Host Z
6. Fabric 1
7. Fabric 2
8. Controller A
9. Controller B
10. Cache mirror ports
11. Drive enclosure 1
12. Drive enclosure 2

Direct connect

NOTE: Direct connect is currently supported on Microsoft Windows only. For more information on direct connect, go to the Single Point of Connectivity Knowledge (SPOCK) website at: http://www.hp.com/storage/spock
iSCSI connection configurations The EVA4x00/6x00/8x00 support iSCSI attach configurations using the HP MPX100. Both fabric connect and direct connect are supported for iSCSI configurations. For complete information on iSCSI configurations, go to the following website: http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html NOTE: An iSCSI connection configuration supports mixed direct connect and fabric connect.
Procedures for getting started

1. Gather information and identify all related storage documentation. (Customer)
2. Contact an authorized service representative for hardware configuration information. (Customer)
3. Enter the World Wide Name (WWN) into the OCP. (HP Service Engineer)
4. Configure HP P6000 Command View. (HP Service Engineer)
5. Prepare the hosts. (Customer)
6. Configure the system through HP P6000 Command View. (HP Service Engineer)
7.
The OCP on either controller can be used to input the WWN and password data. For more information about the OCP, see “Operator control panel” (page 43). Table 1 (page 16) lists the push-button functions when entering the WWN, WWN checksum, and password data.

Table 1 Push button functions
• Up push-button: Selects a character by scrolling up through the character list one character at a time.
• Right push-button: Moves forward one character.
4. Press the up or down push-button until the first character of the WWN is displayed. Press the right push-button to accept this character and select the next.
5. Repeat Step 4 to enter the remaining characters.
6. Press Enter to accept the WWN and select the checksum entry mode.

Entering the WWN checksum

The second part of the WWN entry procedure is to enter the two-character checksum, as follows:
1. Verify that the initial WWN checksum displays 0 in both positions.
2. Press the up or down push-button until the first checksum character is displayed.
See the HP P6000 Command View Installation Guide for information on installing the software.

Installing optional EVA software licenses

If you purchased optional EVA software, you must install its license. Optional software available for the Enterprise Virtual Array includes HP Business Copy EVA and HP P6000 Continuous Access. Installation instructions are included with the license.
2 Enterprise Virtual Array hardware components

The Enterprise Virtual Array includes the following hardware components:
• Fibre Channel drive enclosure — Contains disk drives, power supplies, blowers, I/O modules, and an Environmental Monitoring Unit (EMU).
• Fibre Channel loop switches — Provide a twelve-port central interconnect for Fibre Channel drive enclosure FC Arbitrated Loops. The loop switches are required for EVA6000/6100 and EVA8000/8100 configurations with more than four disk enclosures.
loop switches. The EVA6100 includes two HSV200-B controllers with two Fibre Channel loop switches. • EVA4000/4100 — available in configurations ranging from the 2C1D configuration to the 2C4D configuration without loop switches. The EVA4000 includes two HSV200-A controllers. The EVA4100 includes two HSV200-B controllers. Multiple EVA4000/4100s can be installed in a single rack. See the HP 4x00/6x00/8x00 Enterprise Virtual Array Hardware Configuration Guide for more information about configurations.
7. Blower 2 8. Power supply 2 9. I/O module A 10. Status indicators (EMU, enclosure power, enclosure fault) I/O modules Two I/O modules provide the interface between the drive enclosure and the host controllers. See Figure 7 (page 21). They route data to and from the disk drives using Loop A and Loop B, the dual-loop configuration. For redundancy, only dual-controller, dual-loop operation is supported. Each controller is connected to both I/O modules in the drive enclosure. Figure 7 I/O module 1.
I/O module status indicators

There are three status indicators on the I/O module. See Figure 7 (page 21). The status indicator states for an operational I/O module are shown in Table 2 (page 22). Table 3 (page 22) shows the status indicator states for a non-operational I/O module.

Table 2 Operational I/O module status indicators
• Upper Off / Power On / Lower Off: I/O module is operational.
• Upper On / Power Flashing, then On / Lower On: Top port—Fibre Channel drive enclosure signal detected.
Fiber Optic Fibre Channel cables The Enterprise Virtual Array uses orange, 50-µm, multi-mode, fiber optic cables for connection to the SAN. The fiber optic cable assembly consists of two 2-m fiber optic strands and small form-factor connectors on each end. See Figure 9 (page 23). To ensure optimum operation, the fiber optic cable components require protection from contamination and mechanical hazards. Failure to provide this protection can cause degraded operation.
Up to 14 disk drives can be installed in a drive enclosure.

Disk drive status indicators

Three status indicators display the drive operational status. Figure 11 (page 24) shows the disk drive status indicators. Table 4 (page 24) provides a description of each status indicator.

Figure 11 Disk drive status indicators
1. Activity
2. Online
3. Fault

Table 4 Disk drive status indicator descriptions
Activity: This green status indicator flashes when the disk drive is being accessed.
Table 6 Non-operational disk drive status indications

Activity On / Online On / Fault On: Indicates no connection, or the controllers are offline. Recommended corrective actions:
1. Check power supplies for proper operation.
2. If defective, replace disk drive.

Activity On / Online Off / Fault Flashing: Indicates disk drive error/not active. Recommended corrective actions:
1. Verify FC loop continuity.
2. Replace disk drive.
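The indicator combinations in Table 6 amount to a simple lookup. The sketch below is an illustration only: it covers just the two rows shown above, and the state names and function name are ours, not HP terminology.

```python
# Decode non-operational disk drive indicator states per Table 6.
# Keys are (activity, online, fault) indicator states; values are
# the Table 6 descriptions. Covers only the two rows shown above.
TABLE_6 = {
    ("on", "on", "on"): "No connection, or the controllers are offline",
    ("on", "off", "flashing"): "Disk drive error/not active",
}

def diagnose(activity, online, fault):
    """Return the Table 6 description for an indicator combination."""
    key = (activity.lower(), online.lower(), fault.lower())
    return TABLE_6.get(key, "Combination not listed in Table 6")

print(diagnose("On", "Off", "Flashing"))  # Disk drive error/not active
```

A combination not present in the table simply falls through to the default string, mirroring the fact that Table 6 lists only known fault signatures.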
The output of each power supply is 499 W, with a peak output of 681 W. A single power supply can support an enclosure with a full complement of disks. The power supply circuitry provides protection against: • Overloads • Short circuits • Overheating Power supply status and diagnostic information is reported to the EMU with voltage, current, and temperature signals. See “Regulatory notices and specifications” (page 104) for the enclosure power specifications.
The EMU for Fibre Channel Arbitrated Loop (FC-AL) drive enclosures is fully compliant with SCSI-3 Enclosure Services (SES), and mounts in the left rear bay of a drive enclosure. See Figure 6 (page 20).

Controls and displays

Figure 13 (page 27) illustrates the location and function of the EMU displays, controls, and connectors.

Figure 13 EMU controls and displays
1. Status indicators:
a. EMU — This flashing green indicator is the heartbeat of an operational EMU.
b.
• Providing enclosure status data to the controllers. • Reporting the WWN and the logical address of all disk drives. NOTE: Although the EMU can determine the logical address of a drive, the EMU can neither display nor change this information. HP P6000 Command View can display the addresses from the EMU-supplied status information. EMU monitoring functions The internal EMU circuitry monitors the enclosure and component functions listed in Table 8 (page 28).
EMU indicator displays The EMU status indicators are located above the alphanumeric display. See Figure 13 (page 27). These indicators present the same information as those on the front, lower right corner of the enclosure. You can determine the EMU and enclosure status using the information in Table 10 (page 29).
Table 11 EMU display groups

En (Enclosure Number): The enclosure number is the default display and is a decimal number in the range 00 through 14. See “Enclosure number feature” (page 32) for detailed information.

Li (Bay 1 Loop ID): This display group has a single sublevel display that defines the enclosure bay 1 loop ID. Valid loop IDs are in the range 00 through 7F.
Table 12 Audible alarm sound patterns (continued): alarm on/off cycle patterns for the NONCRITICAL and INFORMATION condition types.

Controlling the audible alarm

You can control the alarm with the push-button. This process includes muting, enabling, and disabling. When an error condition exists, the alphanumeric display reads Er, the alarm sounds, and you can:
• Correct all errors, thereby silencing the alarm until a new error occurs.
NOTE: Er is displayed in the alphanumeric display when an error condition is present.

1. Press and hold the bottom push-button until the status indicator is On. A muted alarm will remain off until a new condition report exists.
2. To unmute the alarm, press and hold the bottom push-button until the status indicator is Off. When a new error condition occurs, the alarm will sound.
A display of 01 through 14 indicates that the enclosure is connected to the enclosure address bus and can exchange information with other enclosures on the enclosure address bus. The decimal number indicates the physical position of the enclosure in relation to the bottom of the rack. • 01 is the address of the enclosure connected to the lower connector in the first (lower) enclosure ID expansion cable.
Enclosure address bus connections Connecting the enclosures to the enclosure ID expansion cables establishes the enclosure address bus. The enclosures are automatically numbered based on the enclosure ID expansion cable to which they are connected. Figure 15 (page 34) shows the typical configuration of a 42U cabinet with 14 enclosures. Figure 15 Enclosure address bus components with enclosure ID expansion cables 1. Shelf ID expansion cable port 1—Disk enclosure 1 2.
Error condition categories

Each error condition is assigned to a category based on its impact on disk enclosure operation. The following four error categories are used:
• Unrecoverable — the most severe error condition. It occurs when one or more enclosure components have failed, disabling some enclosure functions. The enclosure may be incapable of correcting or bypassing the failure, and requires repairs to correct the error.
The most severe error in the queue always has precedence, regardless of how long less severe errors have been in the queue. This ensures that the most severe errors are displayed immediately. NOTE: When viewing an error, the occurrence of a more severe error takes precedence and the display changes to the most severe error. The earliest reported condition within an error type has precedence over errors reported later.
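The two precedence rules above reduce to a single sort key: descending severity first, then ascending arrival order within the same severity. A minimal sketch, assuming an illustrative numeric severity ranking (the values and function name are ours):

```python
# Model of the EMU error-queue display order: the most severe error
# always has precedence; within the same severity, the earliest
# reported condition is shown first. Severity values are illustrative.
SEVERITY = {"UNRECOVERABLE": 3, "CRITICAL": 2, "NONCRITICAL": 1, "INFORMATION": 0}

def display_order(queue):
    """Sort (severity_name, arrival_index) entries into display order."""
    return sorted(queue, key=lambda entry: (-SEVERITY[entry[0]], entry[1]))

queue = [("NONCRITICAL", 0), ("CRITICAL", 2), ("CRITICAL", 1)]
# Both CRITICAL errors are shown before the older NONCRITICAL one,
# and the earlier-reported CRITICAL entry (arrival index 1) comes first.
```

Note that an older NONCRITICAL error never outranks a newer CRITICAL one; age only breaks ties within a severity level, which matches the behavior described above.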
Figure 16 Displaying error condition values
1. Press and hold the top push-button to view the first error in the queue.
2. Press and release the top push-button.
3. Press and hold the top push-button to view the next error.
4. Press and release the bottom push-button at any time to return to the Er display.
(e.t. = element type, en. = element number, ec = error code)

Analyzing condition reports

Analyzing each error condition report involves three steps:
1. Identifying the element.
2. Determining the major problem.
3.
The reporting group numbers are displayed on the EMU alphanumeric display as a pair of two-digit displays. These two displays are identified as rH and rL. • Valid rH displays are in the range 00 through 40, and represent the high-order (most significant) two digits of the RGN. • Valid rL displays are in the range 00 through 99, and represent the low-order (least significant) two digits of the RGN.
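Put another way, the reporting group number is rH × 100 + rL. A small sketch of the conversion (the function names are ours):

```python
def rgn_from_display(rh, rl):
    """Reconstruct a reporting group number from the EMU's two
    two-digit displays: rH (high-order) and rL (low-order)."""
    if not (0 <= rh <= 40 and 0 <= rl <= 99):
        raise ValueError("rH must be in 00-40 and rL in 00-99")
    return rh * 100 + rl

def display_from_rgn(rgn):
    """Split a reporting group number back into its (rH, rL) displays."""
    return divmod(rgn, 100)

print(rgn_from_display(12, 34))  # 1234
print(display_from_rgn(1234))    # (12, 34)
```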
Figure 17 30-10022-01 loop switch status indicators 1. Ethernet activity • Flashing—the Ethernet port is receiving data. • Flashing rapidly—the traffic level is high. 2. Ethernet link • On—the port is connected to an operational Ethernet. 3. Port status • Off—SFP is not installed in the port. • On (green)—Normal port operational status when an SFP is installed and a link has been established. • On (yellow)—port has an SFP installed but a link has not been established. • Flashing (green)—activity.
Figure 18 30-10010-02 loop switch status indicators 1. Handle 2. Bezel snaps 3. Alignment tabs 4. Walk-up RS232 port 5. SFP status indicator 6. Port Bypassed indicator 7. POST fault indicator 8. Over Temp indicator 9. Power indicator 10. Loop operational indicator Power-on self test (POST) When you power on the 30-10010-02 loop switch, it performs a Power-on Self Test (POST) to verify that the switch is functioning properly.
Table 15 30-10010-02 loop switch port status indicators

SFP status indicator (Green) Off / Port bypass indicator (Amber) Off: Indicates that the port does not have an SFP installed and is bypassed by the loop.
Green On / Amber Off: Indicates that the port is operating normally. The port and device are fully operational.
Green On / Amber On: Indicates that the port is in a bypassed state. The port is non-operational due to loss of signal, poor signal integrity, or the Loop Initialization Procedure (LIP).
• Four 4 Gbps Fibre Channel-switched fabric host ports (two host ports in the HSV200-A or HSV200-B controller)
• Four 2 Gbps Fibre Channel drive enclosure device ports (two device ports in the HSV200-A or HSV200-B controller)
◦ Arranged in redundant pairs
◦ Data load/performance balanced across a pair
◦ Support for up to 240 disks with the HSV210-A or HSV210-B and 112 disks with the HSV200-A or HSV200-B
• 2 GB cache per controller, mirrored, with battery backup (1 GB cache in the HSV200-A or HSV200-B controller)
• 2 G
Figure 20 HSV200-A/B controller—rear view 1. Dual controller interconnect 2. CAB (cabinet address bus) 3. Unit ID 4. Power ON 5. FC device ports 6. FC cache mirror ports 7. FC host ports 8. Power supply 0 9. Power supply 1 10. Service connectors (not for customer use) Figure 21 HSV controller—front view 1. Battery 0 2. Battery 1 (EVA8000/8100 only) 3. Blower 0 4. Blower 1 5. Operator Control Panel (OCP) 6. Status indicators 7.
Figure 22 Controller OCP 1. Status indicators (see Table 17 (page 44)) and UID button 2. 40-character alphanumeric display 3. Left, right, top, and bottom push-buttons 4. Esc 5. Enter Status indicators The status indicators display the operational status of the controller. The function of each indicator is described in Table 17 (page 44). During initial setup, the status indicators might not be fully operational.
Table 18 Controller port status indicators

Fibre Channel host ports:
• Green—Normal operation
• Amber—No signal detected
• Off—No SFP1 detected or the Direct Connect OCP setting is incorrect

Fibre Channel device ports:
• Green—Normal operation
• Amber—No signal detected or the controller has failed the port
• Off—No SFP1 detected

Fibre Channel cache mirror ports:
• Green—Normal operation
• Amber—No signal detected or the controller has failed the port
• Off—No SFP1 detected

Dual contro
The menu tree is organized into the following major menus: • System Info—displays information and configuration settings. • Fault Management—displays fault information. Information about the Fault Management menu is included in “Controller fault management” (page 135). • Shutdown Options—initiates the procedure for shutting down the system in a logical, sequential manner. Using the shutdown procedures maintains data integrity and avoids the possibility of losing or corrupting data.
Displaying system information NOTE: The purpose of this information is to assist the HP-authorized service representative when servicing your system. The system information displays show the system configuration, including the XCS version, the OCP firmware and application programming interface (API) versions, and the enclosure address bus programmable integrated circuit (PIC) configuration. You can only view, not change, this information.
Shutting the controller down Use the following procedure to access the Shutdown System display and execute a shutdown procedure. CAUTION: If you decide NOT to power off while working in the Power Off menu, Power Off System NO must be displayed before you press Esc. This reduces the risk of accidentally powering down. NOTE: HP P6000 Command View offers the preferred method for shutting down the controller.
6. Press the arrow keys to navigate to the open field and type DELETE and then press ENTER. The system uninitializes. NOTE: If you do not enter the word DELETE or if you press ESC, the system does not uninitialize. The bottom OCP line displays Uninit cancelled. Password options The password entry options are: • Entering a password during storage system initialization (see “Entering the storage system password” (page 17)). • Displaying the current password.
Power supplies Two power supplies provide the necessary operating voltages to all controller enclosure components. If one power supply fails, the remaining supply is capable of operating the enclosure. Figure 23 Power supplies 1. Status indicator 2. Power supply 0 3.
Cache battery

Batteries provide backup power to maintain the contents of the controller cache when AC power is lost and the storage system has not been shut down properly. When fully charged, the batteries can sustain the cache contents for up to 96 hours. Two batteries are used on the EVA8x00 and a single battery is used on the EVA6x00 and EVA4x00. Figure 25 (page 51) illustrates the location of the cache batteries and the battery status indicators.
The data connections are the interfaces to the disk drive enclosures or loop switches (depending on your configuration), the other controller, and the fabric. Fiber optic cables link the controllers to the fabric and, if an expansion cabinet is part of the configuration, link the expansion cabinet drive enclosures to the loop switches in the main cabinet. Copper cables are used between the controllers (mirror port) and between the controllers and the drive enclosures or loop switches.
Figure 26 60-Hz and 50-Hz wall receptacles NEMA L6-30R receptacle, 3-wire, 30-A, 60-Hz IEC 309 receptacle, 3-wire, 30-A, 50-Hz • The standard power configuration for any Enterprise Virtual Array rack is the fully redundant configuration. Implementing this configuration requires: ◦ Two separate circuit breaker-protected, 30-A site power sources with a compatible wall receptacle (see Figure 26 (page 53)). ◦ One dual PDU assembly. Each PDU connects to a different wall receptacle.
PDUs Each Enterprise Virtual Array rack has either a 50- or 60-Hz, dual PDU mounted at the bottom rear of the rack. The 228481-002/228481-003 PDU placement is back-to-back, plugs facing down, with switches on top. • The standard 50-Hz PDU cable has an IEC 309, 3-wire, 30-A, 50-Hz connector. • The standard 60-Hz PDU cable has a NEMA L6-30P, 3-wire, 30-A, 60-Hz connector. If these connectors are not compatible with the site power distribution, you must replace the PDU power cord cable connector.
Each PDM has eight AC receptacles and one thermal circuit breaker. The PDMs distribute the AC power from the PDUs to the enclosures. Two power sources exist for each controller pair and drive enclosure. If a PDU fails, the system will remain operational. CAUTION: The AC power distribution within a rack ensures a balanced load to each PDU and reduces the possibility of an overload condition. Changing the cabling to or from a PDM could cause an overload condition.
Figure 29 Rack AC power distribution 1. PDM 1 2. PDM 2 3. PDM 3 4. PDU 1 5. PDM 4 6. PDM 5 7. PDM 6 8. PDU 2 Rack System/E power distribution components AC power is distributed to the Rack System/E rack through Power Distribution Units (PDU) mounted on the two vertical rails in the rear of the rack. Up to four PDUs can be mounted in the rack—two mounted on the right side of the cabinet and two mounted on the left side. Each of the PDU power cables has an AC power source specific connector.
Moving the rack requires a clear, uncarpeted pathway that is at least 80 cm (31.5 in) wide for the 60.3 cm (23.7 in) wide, 42U rack. A vertical clearance of 203.2 cm (80 in) should ensure sufficient clearance for the 200 cm (78.7 in) high, 42U rack. CAUTION: Ensure that no vertical or horizontal restrictions exist that would prevent rack movement without damaging the rack. Make sure that all four leveler feet are in the fully raised position.
Figure 31 Raising a leveler foot
1. Hex nut
2. Leveler foot

2. Carefully move the rack to the installation area and position it to provide the necessary service areas (see Figure 30 (page 57)).
3. To stabilize the rack when it is in the final installation location:
1. Use a wrench to lower the foot by turning the leveler foot hex nut clockwise until the caster does not touch the floor.
2. Repeat for the other feet.
3. After lowering the feet, check the rack to ensure it is stable and level.
3 Enterprise Virtual Array operation This chapter presents the tasks that you might need to perform during normal operation of the storage system. Best practices For useful information on managing and configuring your storage system, see the HP Enterprise Virtual Array configuration best practices white paper available from http://h18006.www1.hp.com/storage/arraywhitepapers.
Enabling Boot from SAN for Windows direct connect

To ensure that Boot from SAN is successful for Windows hosts that are directly connected to an array, enable the Spin up delay setting in the HBA BIOS. This applies to QLogic and Emulex HBAs. This workaround applies to all supported Windows operating systems and all supported QLogic and Emulex HBAs. For support details, go to the Single Point of Connectivity Knowledge (SPOCK) website: http://www.hp.com/storage/spock
HP Care Pack Service or HP contractual support agreement. HP Insight Remote Support supplements your monitoring 24x7 to ensure maximum system availability by providing intelligent event diagnosis and automatic, secure submission of hardware event notifications to HP, which initiates a fast and accurate resolution based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country.
Failback preference setting for HSV controllers

Table 25 (page 62) describes the failback preference behavior for the controllers.

Table 25 Failback preference behavior

Setting: No preference
• At initial presentation: The units are alternately brought online to Controller A or to Controller B.
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there.
Table 25 Failback preference behavior (continued)
• On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B.
• On controller failover: All LUNs are brought online to the surviving controller.
• On controller failback: All LUNs remain on the surviving controller.
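As an illustration of the decision logic Table 25 describes, the sketch below models the continuation rows shown above. The event names, the "preferred" controller parameter, and the function name are ours, not HP terminology.

```python
# Illustrative model of the Table 25 failback rows shown above.
# "preferred" stands in for the controller named by the failback
# preference setting; event names and return values are ours.
def online_controller(event, cache_on=None, preferred="B"):
    """Return where a LUN is brought online for a given event."""
    if event == "dual_boot_or_resynch":
        # Cache data on a particular controller wins; otherwise the
        # unit comes online on the preferred controller.
        return cache_on if cache_on is not None else preferred
    if event in ("controller_failover", "controller_failback"):
        # All LUNs are brought online to (and remain on) the
        # surviving controller.
        return "surviving"
    raise ValueError(f"unknown event: {event}")

print(online_controller("dual_boot_or_resynch", cache_on="A"))  # A
print(online_controller("controller_failover"))                 # surviving
```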
1 If preference has been configured to ensure a more balanced controller configuration, the Path A/B – Failover/Failback setting is required to maintain the configuration after a single controller reboot. Changing virtual disk failover/failback setting Changing the failover/failback setting of a virtual disk may impact which controller presents the disk. Table 27 (page 64) identifies the presentation behavior that results when the failover/failback setting for a virtual disk is changed.
4. Under System Shutdown, click Power Down. If you want to delay the initiation of the shutdown, enter the number of minutes in the Shutdown delay field. The controllers complete an orderly shutdown and then power off. The disk enclosures then power off.
5. Wait for the shutdown to complete.
6. Turn off the power switch (callout 4 in Figure 17 (page 39)) on the rear of each HSV controller.
7. Turn off the circuit breakers on both of the EVA rack Power Distribution Units (PDU).
NOTE: For more information on using SSSU, see the HP Storage System Scripting Utility reference. See “Related information” (page 101).

1. Double-click the SSSU desktop icon to run the application. When prompted, enter Manager (management server name or IP address), User name, and Password.
2. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
3. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
Example 1 Saving configuration data using SSSU on a Windows Host To save the storage system configuration: 1. Double-click on the SSSU desktop icon to run the application. When prompted, enter Manager (management server name or IP address), User name, and Password. 2. Enter LS SYSTEM to display the EVA storage systems managed by the management server. 3. Enter SELECT SYSTEM system name, where system name is the name of the storage system. 4.
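Run from a command file, the same capture might look like the following sketch. The manager name, credentials, system name, and output file are placeholders, and SSSU command syntax can vary by version; verify the exact commands against the HP Storage System Scripting Utility reference before use.

```
SELECT MANAGER mgmt-server USERNAME=admin PASSWORD=secret
LS SYSTEM
SELECT SYSTEM "EVA_Prod"
CAPTURE CONFIGURATION eva_config.txt
EXIT
```

Scripting the capture this way makes it easy to schedule regular configuration backups from the management server.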
• Set the add disk option to manual. See “Changing the Device Addition Policy” (page 69) for more information. • When adding multiple disk drives, add a disk and wait for its activity indicator (1) to stop flashing (up to 90 seconds) before installing the next disk (see Figure 32 (page 68)). This procedure must be followed to avoid unexpected EVA system behavior. Figure 32 Disk drive activity indicator Creating disk groups The new disks you add will typically be used to create new disk groups.
Adding a disk drive

This section describes the procedure for adding a Fibre Channel disk drive.

Removing the drive blank
1. Grasp the drive blank by the two mounting tabs (see Figure 34 (page 69)).
2. Lift up on the lower mounting tab and pull the blank out of the enclosure.

Figure 34 Removing the drive blank
1. Upper mounting tab
2.
Figure 35 Installing the disk drive Checking status indicators Check the following to verify that the disk drive is operating normally: NOTE: • • It may take up to 10 minutes for the component to display good status. Check the disk drive status indicators. See Figure 36 (page 71). ◦ Activity indicator (1) should be on or flashing ◦ Online indicator (2) should be on or flashing ◦ Fault indicator (3) should be off Check the following using HP P6000 Command View.
Figure 36 Disk drive status indicators
1. Activity
2. Online
3. Fault

Adding the disk to a disk group
After replacing the disk, use HP P6000 Command View to add it to a disk group.
1. In the Navigation pane, select Storage system > Hardware > Rack > Disk enclosure > Bay.
2. In the Content pane, select the Disk Drive tab.
3. Click Group to initiate the process for adding the disk to a disk group.
NOTE: If the Device Addition Policy is set to automatic, the disk will automatically be added to a disk group.
4 Configuring application servers

Overview
This chapter provides general connectivity information for all supported operating systems. Where applicable, an OS-specific section provides more information.

Clustering
Clustering connects two or more computers so that they behave like a single computer. Clustering is used for parallel processing, load balancing, and fault tolerance. See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.
Testing connections to the EVA
After installing the FCAs, you can create and test connections between the host server and the EVA. For all operating systems, you must:
• Add hosts
• Create and present virtual disks
• Verify virtual disks from the hosts
The following sections provide information that applies to all operating systems. For OS-specific details, see the applicable operating system section.

Adding hosts
To add hosts using HP P6000 Command View:
1.
1. From HP P6000 Command View, create a virtual disk on the EVA4x00/6x00/8x00.
2. Specify values for the following parameters:
• Virtual disk name
• Vraid level
• Size
3. Present the virtual disk to the host you added.
4. If applicable (OpenVMS), select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.

Verifying virtual disk access from the host
To verify that the host can access the newly presented virtual disks, restart the host or scan the bus.
State
========================================================================================
ba       3   0/6                    lba       CLAIMED  BUS_NEXUS  Local PCI Bus Adapter (782)
fc       2   0/6/0/0                td        CLAIMED  INTERFACE  HP Tachyon XL2 FC Mass Stor Adap
             /dev/td2
fcp      0   0/6/0/0.39             fcp       CLAIMED  INTERFACE  FCP Domain
ext_bus  4   0/6/0/0.39.13.0.0      fcparray  CLAIMED  INTERFACE  FCP Array Interface
target   5   0/6/0/0.39.13.0.0.0    tgt       CLAIMED  DEVICE
ctl      4   0/6/0/0.39.13.0.0.0.0  sctl      CLAIMED  DEVICE     HP HSV300
             /dev/rscsi/c4t0d0
disk     22  0/6/0/0.39.13.0.0.0.
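Output like this can be filtered to confirm that the array's objects were claimed. A sketch using two sample lines shaped like the listing above; the hardware paths and the HSV product string come from the example, not from a live system.

```shell
# Two sample lines shaped like the ioscan output above (paths and the
# HSV product string come from the example, not from a live system).
# On a real HP-UX host you would run:  ioscan -fnC disk | grep HSV
ioscan_sample='ctl   4  0/6/0/0.39.13.0.0.0.0  sctl   CLAIMED  DEVICE  HP HSV300
disk  22 0/6/0/0.39.13.0.0.0.1  sdisk  CLAIMED  DEVICE  HP HSV300'

# Count the CLAIMED objects presented by HSV controllers.
hsv_count=$(printf '%s\n' "$ioscan_sample" | grep -c 'HSV')
echo "HSV objects claimed: $hsv_count"
```

If the count does not match the number of presented virtual disks (plus the console LUN), rescan with ioscan and recheck zoning and presentation.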
http://www.hp.com/support/downloads
In the Search products box, enter MPIO, and then click AIX MPIO PCMA for HP Arrays. Select IBM AIX, and then select your software storage product.

Adding hosts
To determine the active FCAs on the IBM AIX host, enter:
# lsdev -Cc adapter | grep fcs
Output similar to the following appears:
fcs0 Available 1H-08 FC Adapter
fcs1 Available 1V-08 FC Adapter
# lscfg -vl fcs0
fcs0  U0.1-P1-I5/Q1  FC Adapter
Part Number.................80P4543
EC Level....
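The lsdev output above can be post-processed to list only the usable adapters. A sketch operating on sample output shaped like the guide's; the fcs0/fcs1 names are from the example, not your host.

```shell
# Sample output in the shape shown above; on a live AIX host you would run:
#   lsdev -Cc adapter | grep fcs
lsdev_output='fcs0 Available 1H-08 FC Adapter
fcs1 Available 1V-08 FC Adapter'

# Keep only adapters in the Available state and print their device names;
# each of these is a candidate FCA to zone to the EVA.
fcas=$(printf '%s\n' "$lsdev_output" | awk '$2 == "Available" {print $1}')
printf '%s\n' "$fcas"
```

Each name printed can then be fed to lscfg -vl to read its WWN for zoning.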
Linux Driver failover mode If you use the INSTALL command without command options, the driver’s failover mode depends on whether a QLogic driver is already loaded in memory (listed in the output of the lsmod command). Possible driver failover mode scenarios include: • If an hp_qla2x00src driver RPM is already installed, the new driver RPM uses the failover of the previous driver package. • If there is no QLogic driver module (qla2xxx module) loaded, the driver defaults to failover mode.
# modprobe qla2400
To reboot the server, enter the reboot command.
CAUTION: If the boot device is attached to the SAN, you must reboot the host.
7. To verify which RPM versions are installed, use the rpm command with the -q option. For example:
# rpm -q hp_qla2x00src
# rpm -q fibreutils

Upgrading Linux components
If you have any installed components from a previous solution kit or driver kit, such as the qla2x00 RPM, invoke the INSTALL script with no arguments, as shown in the following example:
# .
Compiling the driver for multiple kernels
If your system has multiple kernels installed, you can compile the driver for all of them by setting the INSTALLALLKERNELS environment variable to y and exporting it:
# INSTALLALLKERNELS=y
# export INSTALLALLKERNELS
You can also use the -a option of the INSTALL script as follows:
# .
This line identifies the location of the binary RPM.
4. Copy the binary RPM to the production servers and install it using the following command:
# rpm -ivh hp_qla2x00-version-revision.architecture.rpm

Verifying virtual disks from the host
To verify the virtual disks, first verify that the LUN is recognized and then verify that the host can access the virtual disks.
• To ensure that the LUN is recognized after a virtual disk is presented to the host, do one of the following:
◦ Reboot the host.
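On 2.6-and-later Linux kernels, a common alternative to rebooting is a sysfs-triggered SCSI rescan. This is an assumption about the host kernel, not a command taken from this guide; the sketch below is a dry run that only prints the writes it would perform.

```shell
# Dry-run: print, rather than execute, a SCSI rescan for each FC host.
# The host0/host1 list and the sysfs rescan interface are assumptions
# about a 2.6-or-later Linux kernel, not commands taken from this guide.
rescan_cmds=$(for host in host0 host1; do
    echo "echo '- - -' > /sys/class/scsi_host/$host/scan"
done)
printf '%s\n' "$rescan_cmds"
```

On a live host you would run each printed line as root (the three dashes mean all channels, all targets, all LUNs on that host), then check /proc/scsi/scsi or dmesg for the new LUN.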
Table 29 Comparing console LUN to OS unit ID

ID type                     System Display
Console LUN ID set to 100   $1$GGA100:
OS unit ID set to 50        $1$DGA50:

Adding OpenVMS hosts
To obtain WWNs on AlphaServers, do one of the following:
• Enter the show device fg/full OVMS command.
• Use the WWIDMGR -SHOW PORT command at the SRM console.
To obtain WWNs on Integrity servers, do one of the following:
• Enter the show device fg/full OVMS command.
• Use the following procedure from the server console:
a.
NOTE: The EVA4x00/6x00/8x00 console LUN can be seen without any virtual disks presented. The LUN appears as $1$GGAx (where x represents the console LUN ID on the controller). After the system scans the fabric for devices, you can verify the devices with the SHOW DEVICE command: $ SHOW DEVICE NAME-OF-VIRTUAL-DISK/FULL For example, to display device information on a virtual disk named $1$DGA50, enter $ SHOW DEVICE $1$DGA50:/FULL.
2. Enter the following command to mount the disk: MOUNT/SYSTEM name-of-virtual-disk volume-label NOTE: The /SYSTEM switch is used for a single stand-alone system, or in clusters if you want to mount the disk only to select nodes. You can use the /CLUSTER switch for OpenVMS clusters. However, if you encounter problems in a large cluster environment, HP recommends that you enter a MOUNT/SYSTEM command on each cluster node. 3. View the virtual disk’s information with the SHOW DEVICE command.
Update instructions depend on the version of your OS:
• For Solaris 9, install the latest Oracle StorEdge SAN software with associated patches. To locate the software, log into My Oracle Support:
https://support.oracle.com/CSP/ui/flash.html
1. Select the Patches & Updates tab and then search for StorEdge SAN Foundation Software 4.4 (formerly called StorageTek SAN 4.4).
2. Reboot the host after the required software/patches have been installed.
3. If using a single FCA and no multipathing, edit the following parameter to reduce the risk of data loss in case of a controller reboot:
nodev-tmo=120;
4. If using Veritas Volume Manager (VxVM) DMP for multipathing (single or multiple FCAs), edit the following parameter to ensure proper VxVM behavior:
no-device-delay=0;
5. In a fabric topology, use persistent bindings to bind a SCSI target ID to the world wide port name (WWPN) of an array port.
1. Ensure that you have the latest supported version of the qla2300 driver. You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://www.qlogic.com). 2. Edit the following parameters in the /kernel/drv/qla2300.
6. If the qla2300 driver is version 4.15 or later, verify that the following or a similar entry is present in the /kernel/drv/sd.conf file:
name="sd" parent="qla2300" target=2048;
To perform LUN rediscovery after configuring the LUNs, use the following command:
/opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300 -s
7. Reboot the server to implement the changes to the configuration files.
NOTE: The qla2300 driver is not supported for Oracle StorEdge Traffic Manager/Oracle Storage Multipathing.
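The sd.conf check in step 6 can be scripted before rebooting. A sketch that works on a scratch copy of the file; the path and the filler first line are illustrative, and on a live Solaris host the file is /kernel/drv/sd.conf.

```shell
# Work on a scratch copy; on a live Solaris host the file is
# /kernel/drv/sd.conf. The first line is illustrative filler.
sdconf=$(mktemp)
cat > "$sdconf" <<'EOF'
name="sd" class="scsi" target=0 lun=0;
name="sd" parent="qla2300" target=2048;
EOF

# Step 6 above: confirm the qla2300 entry is present before rebooting.
if grep -q 'parent="qla2300" target=2048' "$sdconf"; then
    result="qla2300 sd.conf entry present"
else
    result="qla2300 sd.conf entry missing"
fi
echo "$result"
rm -f "$sdconf"
```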
1. Go to http://support.veritas.com.
2. Enter Storage Foundation for UNIX/Linux in the Product Lookup box.
3. Enter EVA in the Enter keywords or phrase box, and then click the search symbol.
4. To further narrow the search, select Solaris in the Platform box and search again.
5. Read TechNotes and follow the instructions to download and install the ASL/APM.
6. Run vxdctl enable to notify VxVM of the changes.
7.
Example 4 Setting the iopolicy

# vxdmpadm getattr arrayname EVA8100 iopolicy
ENCLR_NAME   DEFAULT      CURRENT
============================================
EVA8100      Round-Robin  Round-Robin
# vxdmpadm setattr arrayname EVA8100 iopolicy=adaptive
# vxdmpadm getattr arrayname EVA8100 iopolicy
ENCLR_NAME   DEFAULT      CURRENT
============================================
EVA8100      Round-Robin  Adaptive

Configuring virtual disks from the host
The procedure used to configure the LUN path to the array depends on the FCA driver.
50001fe1002709e9,5
• Emulex (lpfc)/QLogic (qla2300) drivers:
◦ You can retrieve the WWPN by checking the assignment in the driver configuration file (the easiest method, because you then know the assigned target) or by using HBAnyware/SANSurfer.
◦ You can retrieve the WWLUN ID by using HBAnyware/SANSurfer. You can also retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and difficult to read.
Example 5 Format command

# format
Searching for disks...done
c2t50001FE1002709F8d1: configured with capacity of 1008.00MB
c2t50001FE1002709F8d2: configured with capacity of 1008.00MB
c2t50001FE1002709FCd1: configured with capacity of 1008.00MB
c2t50001FE1002709FCd2: configured with capacity of 1008.00MB
c3t50001FE1002709F9d1: configured with capacity of 1008.00MB
c3t50001FE1002709F9d2: configured with capacity of 1008.00MB
c3t50001FE1002709FDd1: configured with capacity of 1008.00MB
c3t50001FE1002709FDd2: configured with capacity of 1008.00MB
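Each device name in Example 5 encodes the HBA controller instance (c2/c3), the array port WWPN (t...), and the LUN number (d...). A sketch that tallies LUN paths per array port from names copied out of the example:

```shell
# Device names copied from Example 5. Each name cXtWWPNdN encodes the
# HBA controller instance (c2/c3), the array port WWPN (t...), and the
# LUN number (d...).
devices='c2t50001FE1002709F8d1 c2t50001FE1002709F8d2
c2t50001FE1002709FCd1 c2t50001FE1002709FCd2
c3t50001FE1002709F9d1 c3t50001FE1002709F9d2
c3t50001FE1002709FDd1 c3t50001FE1002709FDd2'

# Tally how many LUN paths are visible through each array port WWPN.
summary=$(printf '%s\n' $devices |
    sed 's/^c[0-9]*t\(.*\)d[0-9]*$/\1/' |
    sort | uniq -c | awk '{print $2, $1}')
printf '%s\n' "$summary"
```

An even path count per port (here, two LUNs through each of four array ports) is a quick sanity check that presentation and zoning are symmetric.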
7. For each new device, use the disk command to select another disk, and then repeat Step 1 through Step 6.
8. Repeat this labeling procedure for each new device. (Use the disk command to select another disk.)
9. When you finish labeling the disks, enter quit or press Ctrl+D to exit the format utility.
For more information, see the System Administration Guide: Devices and File Systems for your operating system, available on the Oracle website:
http://www.oracle.com/technetwork/indexes/documentation/index.
Configuring an ESX server
This section provides information about configuring the ESX server.

Loading the FCA NVRAM
The FCA stores configuration information in the non-volatile RAM (NVRAM) cache. You must download the configuration for HP Storage products. Perform one of the following procedures to load the NVRAM:
• If you have a ProLiant blade server:
1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads, to a virtual floppy.
ESX 4.x commands
• # esxcli nmp device setpolicy --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_MRU
This command sets device naa.6001438002a56f220001100000710000 to the MRU multipathing policy.
• # esxcli nmp device setpolicy --device naa.6001438002a56f220001100000710000 --psp VMW_PSP_FIXED
This command sets device naa.6001438002a56f220001100000710000 to the Fixed multipathing policy.
• # esxcli nmp fixed setpreferred --device naa.
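Applying the same policy to many devices is easy to script. A dry-run sketch that only prints the ESX 4.x commands it would run; the first naa ID is the one used in the examples above, and the second is an illustrative placeholder.

```shell
# Dry-run: print the ESX 4.x command that would set an MRU path policy
# on each device. The first naa ID is the one used in the examples above;
# the second is an illustrative placeholder.
policy="VMW_PSP_MRU"
cmds=$(for dev in naa.6001438002a56f220001100000710000 \
                  naa.6001438002a56f220001100000720000; do
    echo "esxcli nmp device setpolicy --device $dev --psp $policy"
done)
printf '%s\n' "$cmds"
```

On a live ESX 4.x host you would execute each printed line instead of echoing it, then confirm the change with esxcli nmp device list.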
Verifying virtual disks from the host To verify that the host can access the virtual disks, enter the more /proc/scsi/scsi command. The output lists all SCSI devices detected by the server.
5 Customer replaceable units This chapter describes customer replaceable units. Information about initial enclosure installation, ESD protection, and common replacement procedures is also included. Customer self repair (CSR) Table 30 (page 97) identifies which hardware components are customer replaceable. Using WEBES, ISEE or other diagnostic tools, a support specialist will work with you to diagnose and assess whether a replacement component is required to address a system problem.
level. The replacement component revision level must be the same as, or greater than, the number on the element being replaced. The higher the revision level, the later the revision. Figure 37 Typical product label The spare part number for each disk drive is listed on the capacity label attached to each drive. See Figure 38 (page 97). Figure 38 Disk drive label Replaceable parts This product contains the replaceable parts listed in Table 30 (page 97).
Table 30 Hardware component CSR support (continued)

Description                  Spare part number (non RoHS/RoHS)   CSR
Disk drive – 300 GB 10K      366023-001/366023-002               ✓
Disk drive – 450 GB 10K      518736-001                          ✓
Disk drive – 600 GB 10K      518737-001                          ✓
Disk drive – 72 GB 15K       300588-001/300588-002               ✓
Disk drive – 146 GB 15K      366024-001/366024-002               ✓
Disk drive – 300 GB 15K      416728-001                          ✓
Disk drive – 450 GB 15K      454415-001                          ✓
Disk drive – 600 GB 15K      531995-001                          ✓
Disk drive – 250 GB FATA     366022-001/366022-002               ✓
Dis
Table 30 Hardware component CSR support (continued)

Description                      Spare part number (non RoHS/RoHS)                 CSR
Front panel bezel EVA8000        390853-001, 70-41140-S1/411632-005, 70-41140-S3   ✓
Front panel bezel EVA8100        390854-001, 70-41140-S2/411632-006, 70-41140-S5   ✓
Front panel bezel EVA4000/6000   411633-005, 70-41140-S4 (both RoHS)               ✓
Front panel bezel EVA4100/6100   411633-006, 70-41140-S6 (both RoHS)               ✓

For more information about CSR, contact your local service provider.
◦ HP Controller Power Supply Replacement Instructions
◦ HP Disk Enclosure Power Supply/Blower Replacement Instructions
◦ HP Fibre Channel Disk Drive Replacement Instructions
◦ HP Operator Control Panel Replacement Instructions

Returning the defective part
In the materials shipped with a replacement CSR part, HP specifies whether the defective component must be returned to HP.
6 Support and other resources

Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.
Document conventions and symbols

Table 31 Document conventions
Convention                            Element
Blue text: Table 31 (page 102)        Cross-reference links and e-mail addresses
Blue, underlined text: http://www.hp.
Customer self repair HP customer self repair (CSR) programs allow you to repair your product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider. For North America, see the CSR website: http://www.hp.
A Regulatory notices and specifications This appendix includes regulatory notices and product specifications for the HP Enterprise Virtual Array family. Regulatory notices Federal Communications Commission (FCC) notice Part 15 of the Federal Communications Commission (FCC) Rules and Regulations has established Radio Frequency (RF) emission limits to provide an interference-free radio frequency spectrum.
interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures: • Reorient or relocate the receiving antenna. • Increase the separation between the equipment and receiver.
Certification and classification information This product contains a laser internal to the Optical Link Module (OLM) for connection to the Fibre communications port. In the USA, the OLM is certified as a Class 1 laser product conforming to the requirements contained in the Department of Health and Human Services (DHHS) regulation 21 CFR, Subchapter J. The certification is indicated by a label on the plastic OLM housing.
This symbol on the product or on its packaging indicates that this product must not be disposed of with your other household waste. Instead, it is your responsibility to dispose of your waste equipment by handing it over to a designated collection point for recycling of waste electrical and electronic equipment.
luonnonvaroja ja varmistamaan, että laite kierrätetään tavalla, joka estää terveyshaitat ja suojelee luontoa. Lisätietoja paikoista, joihin hävitettävät laitteet voi toimittaa kierrätettäväksi, saa ottamalla yhteyttä jätehuoltoon tai liikkeeseen, josta tuote on ostettu. French notice Élimination des appareils mis au rebut par les ménages dans l'Union européenne Le symbole apposé sur ce produit ou sur son emballage indique que ce produit ne doit pas être jeté avec les déchets ménagers ordinaires.
τον άχρηστο εξοπλισμό σας για ανακύκλωση, επικοινωνήστε με το αρμόδιο τοπικό γραφείο, την τοπική υπηρεσία διάθεσης οικιακών απορριμμάτων ή το κατάστημα όπου αγοράσατε το προϊόν. Hungarian notice Készülékek magánháztartásban történő selejtezése az Európai Unió területén A készüléken, illetve a készülék csomagolásán látható azonos szimbólum annak jelzésére szolgál, hogy a készülék a selejtezés során az egyéb háztartási hulladéktól eltérő módon kezelendő.
Ten symbol na produkcie lub jego opakowaniu oznacza, że produktu nie wolno wyrzucać do zwykłych pojemników na śmieci. Obowiązkiem użytkownika jest przekazanie zużytego sprzętu do wyznaczonego punktu zbiórki w celu recyklingu odpadów powstałych ze sprzętu elektrycznego i elektronicznego. Osobna zbiórka oraz recykling zużytego sprzętu pomogą w ochronie zasobów naturalnych i zapewnią ponowne wprowadzenie go do obiegu w sposób chroniący zdrowie człowieka i środowisko.
La recogida y el reciclado selectivos de los residuos de aparatos eléctricos en el momento de su eliminación contribuirá a conservar los recursos naturales y a garantizar el reciclado de estos residuos de forma que se proteja el medio ambiente y la salud.
Country-specific certifications HP tests electronic products for compliance with country-specific regulatory requirements, as an individual item or as part of an assembly. The product label (see Figure 39 (page 112)) specifies the regulations with which the product complies. NOTE: Components without an individual product certification label are qualified as part of the next higher assembly (for example, enclosure, rack, or tower).
Table 33 Environmental specifications (continued)
Humidity       10% to 90% non-condensing
Shipping
Humidity       5% to 90% non-condensing
Altitude       Up to 8,000 ft. (2,400 m)
Air Quality    Not to exceed 500,000 particles per cubic foot of air at a size of 0.5 micron or larger

Power specifications
The input voltage is a function of the country-specific input voltage to Enterprise storage system rack power distribution units (PDUs).
Table 36 EVA4x00 power specifications — 208 Volts

Specification                         Typical1                    Failover Mode
                                      2C1D   2C2D   2C3D   2C4D
Total System Wattage                   638   1013   1390   1767
Total System BTU/hour                 1729   3014   4300   5585
Input Current (A) Typical per line     1.6    2.6    3.5    4.4
In Rush Current (A)                     98    132    170    220
Input Current (A) Maximum per line     2.7    4.3    5.9    7.5

1 Typical is described as a system in normal steady state operation. (i.e.
This data represents fully populated drive enclosures with 15K RPM disk drives. Other drive types may vary slightly. For example, if you are using 10K RPM drives, the power specifications will be approximately 20% less than the 15K RPM drives.
1 Typical is described as a system in normal steady state operation (i.e., both PDUs operating normally, the array reading/writing to disk drives in a production environment). This data represents fully populated drive enclosures with 15K RPM disk drives. Other drive types may vary slightly. For example, if you are using 10K RPM drives, the power specifications will be approximately 20% less than the 15K RPM drives.
B EMU-generated condition reports

This section describes the EMU-generated condition reports, which contain the following information:
• Element type (et), a hexadecimal number in the range 01 through FF.
• Element number (en), a decimal number in the range 00 through 99 that identifies the specific element with a problem.
• Error code (ec), a decimal number in the range 00 through 99 that defines a specific problem.
• The recommended corrective action.
Table 42 Assigned element type codes

Code    Element
0.1.    Disk Drives
0.2.    Power Supplies
0.3.    Blowers
0.4.    Temperature Sensors
0.6.    Audible Alarm
0.7.    EMU
0.C.    Controller OCP LCD1
0.F.    Transceivers
1.0.    Language1
1.1.    Communication Port
1.2.    Voltage Sensors
1.3.    Current Sensors
8.0.    Drive Enclosure1
8.2.    Drive Enclosure Backplane
8.7.    I/O Modules

1 Does not generate a condition report. However, for any error, you should record the error code.
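Reports of the form et.en.ec are easy to decompose mechanically. A sketch that splits a report string and maps the element type using a few of the Table 42 codes; the sample report 0.3.02.01 (blower 02, error code 01) is illustrative.

```shell
# Split an EMU condition report "et.en.ec" into its fields and map the
# element type using a few of the Table 42 codes. The sample report
# 0.3.02.01 (blower 02, error code 01) is illustrative.
report="0.3.02.01"

et=$(echo "$report" | cut -d. -f1,2)   # element type, e.g. 0.3
en=$(echo "$report" | cut -d. -f3)     # element number, e.g. 02
ec=$(echo "$report" | cut -d. -f4)     # error code, e.g. 01

case "$et" in
    0.1) element="Disk drive" ;;
    0.2) element="Power supply" ;;
    0.3) element="Blower" ;;
    0.7) element="EMU" ;;
    8.7) element="I/O module" ;;
    *)   element="Unknown element type" ;;
esac
echo "$element $en, error code $ec"
```

Extending the case statement to cover all of Table 42 gives a quick triage aid when logging condition reports.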
If the EMU cannot determine the drive link rate, the EMU activates the drive bypass function for one minute. During this time the EMU continually checks the drive to determine the link rate. • If the EMU determines the drive cannot operate at the Fibre Channel link rate set by the I/O module, the drive bypass function ends and the drive is placed on the loop. This does not generate a condition report. • The EMU issues the condition report 0.1.en.
This error remains active until the problem is corrected. Complete the following procedure to correct this problem: 1. Record all six characters of the condition report. 2. Remove and replace the drive in the enclosure. 3. Observe the drive status indicators to ensure the drive is operational. 4. Observe the EMU to ensure the error is corrected. 5. If removing and replacing the drive did not correct the problem, replace the drive. 6. Observe the drive status indicators to ensure the drive is operational. 7.
2. Ensure that there is AC power to the rack PDU, and from the PDU to the PDMs, and that the PDU and PDM circuit breakers are not reset. If there is no AC power to the PDU, contact building facilities management. Verify that the power supply AC power cord is properly connected. 3. If AC is present, and the rack power distribution circuitry is functioning properly, the power supply indicator should be on. Observe the EMU to ensure the error is corrected. Contact your authorized service representative. 4.
Figure 41 Blower element numbering CAUTION: A single blower operating at high speed can provide sufficient air flow to cool an enclosure and the elements for up to 100 hours. However, operating an enclosure at temperatures approaching an overheating threshold can damage elements and may reduce the MTBF of a specific element. Immediate replacement of the defective blower is required. The following sections define the power supply condition reports. 0.3.en.
0.3.en.06 UNRECOVERABLE condition—No blowers installed
IMPORTANT: When this condition exists there will be two error messages. The first message will be 0.3.en.05 and will identify the first blower. The second message will be 0.3.en.06 and will identify the second blower.
The EMU cannot detect any installed blowers. Shutdown is imminent! The EMU will shut down the enclosure in seven minutes unless you correct the problem.
3. Ensure that nothing is obstructing the air flow at either the front of the enclosure or the rear of the blower.
4. Ensure that both blowers are operating properly (the indicators are on) and neither blower is operating at high speed.
5. Verify that the ambient temperature range is +10° C to +35° C (+50° F to +95° F). Correct the ambient conditions.
6. Observe the EMU to ensure the error is corrected.
7. If unable to correct the problem, contact your authorized service representative.
0.4.en.
temperature thresholds). Under these conditions the EMU starts a timer that will automatically shut down the enclosure in seven minutes unless you correct the problem. Enclosure shutdown is imminent! CAUTION: An automatic shutdown and possible data corruption may result if the procedure below is not performed immediately. Complete the following procedure to correct this problem. 1. Ensure that all disk drives, I/O modules, and power supply elements are fully seated. 2.
4. If resetting the EMU did not correct the problem, replace the EMU.
5. If unable to correct the problem, contact your HP authorized service representative.

0.7.01.03 UNRECOVERABLE condition—Power supply shutdown
This message only appears in HP P6000 Command View to report that a power supply has already shut down. This message can be the result of the controller shutdown command or an EMU- or power-supply-initiated power shutdown. This message cannot be displayed until after restoration of power.
0.7.01.12 NONCRITICAL condition—EMU cannot read NVRAM data
The EMU is unable to read data from the NVRAM. This condition report remains active until the problem is corrected. Complete the following procedure to correct this problem:
1. Record all six characters of the condition report.
2. Reset the EMU.
3. Observe the EMU to ensure the error is corrected.
4. If resetting the EMU did not correct the problem, contact your HP authorized service representative.
0.7.01.
0.7.01.17 UNRECOVERABLE condition—Power shutdown failure The power supply did not respond to a controller, EMU, or power supply shut down command. Shutting down the supply is required to prevent overheating. Complete the following procedure to correct the problem: 1. Record all six characters of the condition report. 2. Move the power cord bail lock 1, Figure 42 (page 128), to the left. 3. Disconnect the AC power cord 2 from the supply. Figure 42 Disconnecting AC power 0.7.01.
Figure 43 Transceiver element numbering 1. Transceiver 01 2. Transceiver 02 3. Transceiver 03 4. Transceiver 04 0.F.en.01 CRITICAL condition—Transceiver incompatibility The transceivers on this link are not the same type or they are incompatible with the I/O module. This error prevents the controller from establishing a link with the enclosure disk drives and eliminates the enclosure dual-loop capability. This error remains active until the problem is corrected.
0.F.en.05 CRITICAL condition—Invalid fibre channel character This symptom can occur under the following conditions: • The incoming data stream is corrupted. • A cable is not completely connected. • The signal is degraded. This error prevents the controller from transferring data on a loop and eliminates the enclosure dual-loop capability. This error remains active until the problem is fixed.
1.1.03.03 INFORMATION condition—Overrun recovery This condition report notes automatic recovery initiated by the occurrence of too many data overruns with respect to received messages on the CAN bus. This condition report remains active until one of the following occurs: • 90 seconds elapses • The CURRENT ALARM QUEUE is read via SES • The RECENT ALARM LOG is read via SES No action is required. Voltage sensor and current sensor conditions The format of these sensor condition reports is 1.2.en.
1.2.en.04 CRITICAL condition—Low voltage This condition report indicates that an element voltage has reached the low voltage CRITICAL threshold. This condition report remains active until the problem is corrected. To correct this problem, record all six characters of the condition report, then contact your HP-authorized service representative. 1.3.en.
To correct this problem, record all six characters of the condition report, then contact your HP-authorized service representative. I/O Module conditions The format of an I/O module condition report is 8.7.en.ec, where: • 8.7. is the I/O module element type number • en. is the two-character I/O module element number (see Figure 44 (page 133)) • ec is the error code Figure 44 I/O module element numbering 1. I/O Module A (01) 2.
3. Contact your HP-authorized service representative. 8.7.en.12 NONCRITICAL condition—I/O Module NVRAM read failure The system is unable to read data from the I/O module NVRAM. Complete the following procedure to correct this problem: 1. Record all six characters of the condition report. 2. Contact your HP-authorized service representative. 8.7.en.13 NONCRITICAL condition—I/O module removed The system detects that an I/O module has been removed. To correct the problem, install an I/O module.
C Controller fault management This appendix describes how the controller displays events and termination event information. Termination event information is displayed on the LCD. HP P6000 Command View enables you to view controller events. This appendix also discusses how to identify and correct problems. Once you create a storage system, an error condition message has priority over other controller displays.
Figure 46 Typical HP P6000 Command View Event display

Date  Time  SWCID  Evt No  CAC  EIP  Type  Description

The Event display provides the following information:
• Date—The date the event occurred.
• Time—The time the event occurred.
• SWCID—Software Identification Code. A number in the range 1–256 that identifies the internal firmware module affected.
• Evt No—Event Number. A hexadecimal number in the range 0–FF that is the software component identification number.
• CAC—Corrective Action Code.
Interpreting fault management information Each version of HP P6000 Command View includes an ASCII text file that defines all the codes that the authorized service representative can view either on the GUI or on the OCP. IMPORTANT: This information is for the exclusive use of the authorized service representative. The file name identifies the controller model, file type, XCS baselevel id, and XCS version. For example, the file name hsv210_event_cr08d3_5020.
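The naming convention just described can be unpacked mechanically. A sketch using the example file name from the text; the field meanings follow the description above.

```shell
# Split an event file name of the form model_type_baselevel_version.txt
# into its fields, using the example file name from the text above.
fname="hsv210_event_cr08d3_5020.txt"

base=${fname%.txt}
model=$(echo "$base" | cut -d_ -f1)      # controller model
ftype=$(echo "$base" | cut -d_ -f2)      # file type
baselevel=$(echo "$base" | cut -d_ -f3)  # XCS baselevel id
version=$(echo "$base" | cut -d_ -f4)    # XCS version
echo "$model $ftype $baselevel $version"
```

This makes it easy to confirm that the code file on the management server matches the controller model and XCS version actually running on the array.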
D Non-standard rack specifications The appendix provides information on the requirements when installing the EVA4x00/6x00/8x00 in a non-standard rack. All the requirements must be met to ensure proper operation of the storage system. Rack specifications Internal component envelope EVA component mounting brackets require space to be mounted behind the vertical mounting rails.
Determining the CG of a configuration may be necessary for safety considerations. CG calculations do not include cables, PDUs, and other peripheral components, so allow some margin of safety when estimating the configuration CG. Estimating the configuration CG requires measuring the CG of the cabinet the product will be installed in.
NOTE: Further testing is required to update the information in Tables 45-47. Once testing is complete, these tables will be updated in a future release.

Power requirements
The following tables list the wattage and BTU/hour power requirements for the three supported operating voltages.
NOTE: listed.
Table 49 100V Wattage and BTU/Hour (EVA8x00 not supported)

            EVA4x00                       EVA6x00
Enclosures  Amps  VA    Watts  BTU/h     Amps  VA    Watts  BTU/h
8           —                            35.5  3545  3474   11855
7           —                            31.5  3145  3082   10518
6           —                            27.5  2746  2691   9181
5           —                            23.5  2346  2299   7845
4           18.7  1875  1837   6269      19.5  1946  1907   6508
3           14.8  1475  1446   4933      15.5  1546  1516   5171
2           10.8  1075  1054   3596      11.5  1147  1124   3835
1           6.8   676   662    2259      7.
Table 51 UPS operating time limits (continued)

                        Minutes of operation
Load (percent)   With standby battery   With 1 ERM   With 2 ERMs
R5500
100              7                      24           46
80               9                      31           60
50               19                     61           106
20               59                     169          303
R12000
100              5                      11           18
80               7                      15           24
50               14                     28           41
20               43                     69           101

Table 52 EVA 8x00 UPS loading

                      % of UPS capacity
Enclosures  Watts   R5500  R12000
12          4920    —      41.0
11          4414    98.1   36.8
10          4037    89.7   33.6
9           3660    81.3   30.5
8           3284    73.0   27.4
7           2907    64.6   24.2
6           2530    56.2   21.1
5           2153    47.
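The loading percentages in Table 52 are simply array wattage divided by usable UPS capacity. A sketch reproducing the 11-enclosure row; the 12,000 W usable R12000 capacity is back-calculated from the table's own figures (4414 / 0.368 is approximately 12,000) and is an assumption, not a published rating.

```shell
# Reproduce Table 52's 11-enclosure EVA8x00 row: 4414 W on an R12000.
# The 12000 W usable capacity is back-calculated from the table itself
# (4414 / 0.368 is approximately 12000); treat it as an assumption, not
# a published rating.
watts=4414
r12000_capacity=12000

pct=$(awk -v w="$watts" -v c="$r12000_capacity" \
    'BEGIN { printf "%.1f", w / c * 100 }')
echo "R12000 load: ${pct}%"
```

The same arithmetic, with the appropriate capacity figure, reproduces the R1500/R3000/R5500 columns in Tables 53 and 54 and can be used to size a UPS for an intermediate configuration.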
Table 53 EVA 6x00 UPS loading (continued)

                      % of UPS capacity
Enclosures  Watts   R3000  R5500  R12000
2           953     35.3   21.2   7.9
1           577     21.4   12.8   4.8

Table 54 EVA 4x00 UPS loading

                      % of UPS capacity
Enclosures  Watts   R1500  R3000
4           1637    —      60.6
3           1260    94.0   46.6
2           883     65.9   32.7
1           507     37.9   18.7

Environmental specifications
Table 55 Environmental specifications
Operating temperature   50° to 95° F (10° to 35° C) - Reduce rating by 1° F for each 1000 ft. altitude (1.
Shock and vibration specifications Table 56 (page 144) lists the product operating shock and vibration specifications. This information applies to products weighing 45 Kg (100 lbs) or less. NOTE: HP EVA products are designed and tested to withstand the operational shock and vibration limits specified in Table 56 (page 144). Transmission of site vibrations through non-HP racks exceeding these limits could cause operational failures of the system components.
E Single Path Implementation This appendix provides guidance for connecting servers with a single path host bus adapter (HBA) to the Enterprise Virtual Array (EVA) storage system with no multi-path software installed. A single path HBA is defined as an HBA that has a single path to its LUNs. These LUNs are not shared by any other HBA in the server or in the SAN.
Installation requirements • The host must be placed in a zone with any EVA worldwide IDs (WWIDs) that access storage devices presented by the hierarchical storage virtualization (HSV) controllers to the single path HBA host. The preferred method is to use HBA and HSV WWIDs in the zone configurations. • On HP-UX, Solaris, Microsoft Windows Server 2003 (32-bit), Linux, and IBM AIX operating systems, the zones consist of the single path HBA systems and one HSV controller port.
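As a sketch of what such a zone might look like on a B-series (Brocade) switch, the commands below create a single path zone containing one HBA WWN and one HSV controller port WWN. The alias names, zone names, and WWNs are hypothetical placeholders, and the exact procedure varies by switch model and firmware; consult your switch documentation before configuring zoning.

```text
alicreate "host1_hba0",   "10:00:00:00:c9:aa:bb:cc"    # single path HBA (example WWN)
alicreate "eva_ctlA_fp1", "50:00:1f:e1:00:11:22:33"    # HSV controller host port (example WWN)
zonecreate "z_host1_single_path", "host1_hba0; eva_ctlA_fp1"
cfgadd "san_config", "z_host1_single_path"
cfgenable "san_config"
```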
switches and EVA controllers. Whereas the dual HBA server has multi-path software that manages the two HBAs and their connections to the switch (with the exception of OpenVMS and Tru64 UNIX servers), the single path HBA server has no software to perform this function. The dashed line in the figure represents the fabric zone that must be established for the single path HBA server. Note that in Figure 49 (page 147), servers with the OpenVMS or Tru64 UNIX operating systems should be zoned with two controllers.
HP-UX configuration Requirements • Proper switch zoning must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs. • When using snapshots and snapclones, the source virtual disk and all associated snapshots and snapclones must be presented to the single path hosts that are zoned with the same controller.
Windows Server (32-bit) configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs.
3 Host 2 7 Controller A 4 Management server 8 Controller B Windows Server (64-bit) configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs. HBA configuration • Hosts 1 and 2 are single path HBA hosts. • Host 3 is a multiple HBA host with multi-pathing software.
Figure 52 Windows Server (64-bit) configuration 1 Network interconnection 6 SAN switch 1 2 Management server 7 SAN switch 2 3 Host 1 8 Controller A 4 Host 2 9 Controller B 5 Host 3 Oracle Solaris configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • Single path HBA server can be in the same fabric as servers with multiple HBAs. • Single path HBA server cannot share LUNs with any other HBAs.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is not supported. Figure 53 Oracle Solaris configuration 1 Network interconnection 5 SAN switch 1 2 Host 1 6 SAN switch 2 3 Host 2 7 Controller A 4 Management server 8 Controller B Tru64 UNIX configuration Requirements • Switch zoning or controller level SSP must be used to ensure each HBA has exclusive access to its LUNs.
NOTE: For additional risks, see Table 60 (page 160). Figure 54 Tru64 UNIX configuration 1 Network interconnection 5 SAN switch 1 2 Host 1 6 SAN switch 2 3 Host 2 7 Controller A 4 Management server 8 Controller B OpenVMS configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs. • All nodes with direct connection to a disk must have the same access paths available to them.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. Figure 55 OpenVMS configuration 1 Network interconnection 5 SAN switch 1 2 Host 1 6 SAN switch 2 3 Host 2 7 Controller A 4 Management server 8 Controller B Linux (32-bit) configuration Requirements • Switch zoning or controller level SSP must be used to ensure each single path HBA has an exclusive path to its LUNs.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is supported on single path HBA servers.
Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is supported on single path HBA servers.
Risks • Single path failure may result in loss of data accessibility and loss of host data that has not been written to storage. • Controller shutdown results in loss of data accessibility and loss of host data that has not been written to storage. NOTE: For additional risks, see Table 62 (page 161). Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is not supported.
Risks • Single path failure may result in data loss or disk corruption. NOTE: For additional risks, see Table 63 (page 161). Limitations • HP P6000 Continuous Access is not supported with single path configurations. • Single path HBA server is not part of a cluster. • Booting from the SAN is supported on single path HBA servers.
Table 57 HP-UX failure scenarios (continued)
Fault stimulus          Failure effect
Server path failure     Short term: Data transfer stops. Possible I/O errors. Long term: Jobs hang; the disk cannot be unmounted; fsck fails; the disk is corrupted and must be rebuilt with mkfs.
Storage path failure    Short term: Data transfer stops. Possible I/O errors. Long term: Jobs hang; after the cable is replaced, I/O continues. Without cable replacement the job must be aborted; the disk appears error free.
OpenVMS and Tru64 UNIX Table 60 OpenVMS and Tru64 UNIX failure scenarios Fault stimulus Failure effect Server failure (host power-cycled) All I/O operations halted. Possible data loss from unfinished or unflushed writes. File system check may be needed upon reboot. Switch failure (SAN switch disabled) OpenVMS—OS will report the volume in a Mount Verify state until the MVTIMEOUT limit is exceeded, when it then marks the volume as Mount Verify Timeout. No data is lost or corrupted.
Table 61 Linux failure scenarios (continued) Fault stimulus Failure effect Server path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered, fsck should be run on any failed drives before remounting. Storage path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss.
Table 63 VMware failure scenarios (continued) 162 Fault stimulus Failure effect Server path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss. HBA driver must be reloaded before failed drives can be recovered, fsck should be run on any failed drives before remounting. Storage path failure Short: I/O suspended, possible data loss. Long: I/O halts with I/O errors, data loss.
Glossary This glossary defines terms used in this guide or related to this product and is not a comprehensive glossary of computer terms. A active member of a virtual disk family An active member of a virtual disk family is a simulated disk drive created by the controllers as storage for one or more hosts. An active member of a virtual disk family is accessible by one or more hosts for normal storage. An active virtual disk member and its snapshot, if one exists, constitute a virtual disk family.
baud The maximum rate of signal state changes per second on a communication circuit. If each signal state change corresponds to a code bit, then the baud rate and the bit rate are the same. It is also possible for signal state changes to correspond to more than one code bit so the baud rate may be lower than the code bit rate. bay The physical location of an element, such as a drive, I/O module, EMU or power supply in a drive enclosure. Each bay is numbered to define its location.
controller A hardware/firmware device that manages communications between host systems and other devices. Controllers typically differ by the type of interface to the host and provide functions beyond those the devices support. controller enclosure A unit that holds one or more controllers, power supplies, blowers, cache batteries, transceivers, and connectors.
disk replacement delay The time that elapses between a drive failure and when the controller starts searching for spare disk space. Drive replacement seldom starts immediately in case the “failure” was a glitch or temporary condition. drive blank See disk drive blank. drive enclosure A unit that holds storage system devices such as disk drives, power supplies, blowers, I/O modules, transceivers, or EMUs.
Enclosure Services Processor See ESP. Enterprise Virtual Array The Enterprise Virtual Array is a product that consists of one or more storage systems. Each storage system consists of a pair of HSV controllers and the disk drives they manage. A storage system within the Enterprise Virtual Array can be formally referred to as an Enterprise storage system, or generically referred to as the storage system.
fiber optic cable A transmission medium designed to transmit digital signals in the form of pulses of light. Fiber optic cable is noted for its properties of electrical isolation and resistance to electrostatic contamination. fiber optics The technology where light is transmitted through glass or plastic (optical) threads (fibers) for data communication or signaling purposes. fibre The international spelling that refers to the Fibre Channel standards for optical media.
IDX A 2-digit decimal number portion of the HSV controller termination code display that defines one of 32 locations in the Termination Code array that contains information about a specific event. See also param and TC. in-band communication The method of communication between the EMU and controller that utilizes the Fibre Channel drive enclosure bus. INFORMATION condition A drive enclosure EMU condition report that may require action.
M management agent The HP P6000 Command View software that controls and monitors the Enterprise storage system. The software can exist on more than one management server in a fabric. Each installation is a management agent. management agent event Significant occurrence to or within the management agent software, or an initialized storage cell controlled or monitored by the management agent. mean time between failures See MTBF. metadata Information that a controller pair writes on the disk array.
P param That portion of the HSV controller termination code display that defines: • The 2-character parameter identifier that is a decimal number in the 0 through 30 range. • The 8-character parameter code that is a hexadecimal number. See also IDX and TC. password A security interlock where the purpose is to allow: • A management agent to control only certain storage systems • Only certain management agents to control a storage system PDM Power Distribution Module.
read ahead caching A cache management method used to decrease the subsystem response time to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives. read caching A cache method used to decrease subsystem response times to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives. Reading data from cache memory is faster than reading data from a disk.
SSN Storage System Name. An HP P6000 Command View-assigned, unique 20-character name that identifies a specific storage system. storage carrier See carrier. storage pool The aggregated blocks of available storage in the total physical disk array. storage system The controllers, storage devices, enclosures, cables, and power supplies and their software. Storage System Name See SSN. switch An electro-mechanical device that initiates an action or completes a circuit.
virtual disk snapshot See snapshot. Vraid0 A virtualization technique that provides no data protection. Host data is broken down into chunks and distributed on the disks comprising the disk group from which the virtual disk was created. Reading and writing to a Vraid0 virtual disk is very fast and makes the fullest use of the available storage, but it provides no data protection (redundancy). Vraid1 A virtualization technique that provides the highest level of data protection.
Index Symbols +12.5 VDC for the drives, 25 +5.
Corrective Action Code see CAC Corrective Action Codes see CAC country-specific certifications, 112 coupled crash control codes, 137 creating virtual disks, 73 creating volume groups, 75 CRITICAL conditions audible alarm, 30 blowers speed, 122 drive link rate, 118, 119, 120 drives configuration, 118 EMU internal clock, 125 high current, 132 high temperature, 124 high voltage, 131 I/O modules communication, 133 I/O modules unsupported, 133 low temperature, 124 low voltage, 132 transceivers, 129 current senso
Event Information Packets see EIP event number, 135 F fabric setup, 87 failure, 132 FATA drives, using, 59 fault management details, 136 display, 46 displays, 136 FC loops, 11, 21 FCA configuring, 83 configuring QLogic, 85 configuring, Emulex, 84 FCC Class A Equipment, compliance notice, 104 Class B Equipment, compliance notice, 104 Declaration of Conformity, 105 modifications, 105 FCC Class A certification, 104 Federal Communications Commission (FCC) notice, 104 fiber optics cleaning cable connectors, 71
NONCRITICAL conditions, 131 lpfc driver, 84 LTEA, 136 LUN numbers, 15 M Management Server, 17 Management Server, HP P6000 Command View, 11 missing AC input, 120 power supplies, 121 monitored functions blowers, 28 I/O module, 28 power supply, 28 multipathing accessing, 72 policy, 93 N non-standard rack, specifications, 138 NONCRITICAL conditions audible alarm, 31 backplane, 132 NVRAM conditions, 132 blowers missing, 122 speed, 122 EMU cannot read NVRAM data, 127 enclosure address, 127 NVRAM invalid read da
R rack non-standard specifications, 138 physical layout, 19 rack configurations, 52 regulatory compliance notices cables, 105 Class A, 104 Class B, 104 European Union, 106 Japan, 111 laser devices, 105 modifications, 105 Taiwan, 111 WEEE recycling notices, 106 regulatory notices, 104 resetting EMU, 125 RESTART LCD, 47 restarting the system, 47, 48 defined, 47 rH displays, 38 rL displays, 38 S Secure Path accessing, 72 sensing power supply temperature, 26 SES compliance, 27 setting password, 17 SFP, 41 shor
Veritas Volume Manager, 87 version information Controller, 47 displaying, 47 firmware, 47 OCP, 47 software, 47 XCS, 47 version information: firmware, 47 vgcreate, 75 virtual disks configuring, 74, 82, 89 presenting, 73 verifying, 89, 90, 95 VMware installing, 92 upgrading, 92 voltage sensors, 131 volume groups, 75 W warnings lasers, radiation, 105 website Oracle documentation, 92 Symantec/Veritas, 87 websites customer self repair, 103 HP , 101 HP Subscriber's Choice for Business, 101 WEEE recycling notices