HP Mainframe Connectivity Design Guide

Abstract

This reference document provides information about FICON SAN architecture, including Fibre Channel, iSCSI, and hardware interoperability. System administrators can use this document to plan, design, and maintain an HP FICON SAN.
© Copyright 2007–2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
I FICON SAN architecture......................................................7
1 FICON SAN design overview...................................................8
   FICON SAN solutions........................................................8
   HP FICON SAN implementations...............................................9
   FICON SAN components.......................................................
   Operating systems and storage products....................................58
   Fabric rules for C-series FICON directors.................................58
   Zoning limits and enforcement.............................................60
   Zoning guidelines for C-series FICON directors............................61
5 McDATA FICON directors and fabric rules.....................................
   Hyper PAV................................................................107
   XP7 configuration rules..................................................107
   XP7 configuration examples...............................................108
9 P9500 storage system rules................................................103
   P9500 storage systems.....................................................
   FICON and Fibre Channel intermix guidelines..............................156
   Performance guidelines...................................................156
   Cabling and record keeping...............................................156
   Zoning guidelines.........................................................
Part I FICON SAN architecture FICON SAN architecture is presented in these chapters: • “FICON SAN design overview” (page 8) • “FICON SAN fabric topologies” (page 15)
1 FICON SAN design overview SANs provide the data communication infrastructure for advanced, cost-efficient storage systems. SAN technology offers investment protection, management features, and I/O price performance to minimize capital expense. HP SAN architecture provides open network storage solutions for all sizes and types of businesses, including small to medium-sized IT departments and enterprise environments. The IBM implementation of ESCON channel programs over Fibre Channel is called FICON.
• Modularity Modular design simplifies SAN scalability and increases return on investment by consolidating and sharing systems. • Open systems support (intermix) FICON SANs support FCP operating systems, servers, and devices in an intermix environment. Security measures such as zoning, LSANs, and VSANs ensure that FICON and FCP devices, mainframes, and data are isolated from each other. You must exercise care when designing and implementing an intermix SAN.
FICON SAN components A FICON SAN consists of the following hardware and software components: • Directors A FICON director creates the SAN fabric. By interconnecting directors, you can create scalable SANs with thousands of port connections. • Mainframes and channels Channel cards provide an intelligent interface to the directors and CUs. Channel cards offload all I/O function from the CPC, minimizing application processor overhead. Channel cards contain multiple ports and are assigned CHPIDs.
Table 1 Fibre Channel protocol layers (continued)

Layer        Description
Protocol     SCSI commands / CCWs
FC-1 layer   Encode/decode and link control
FC-0 layer   Physical link

FICON supports the following Fibre Channel ports:
• N_Ports, F_Ports, and E_Ports
• G_Ports that have adopted the characteristics of an N_Port, F_Port, or E_Port

NOTE: Loop ports are not supported by FICON.

FICON SAN infrastructure
You use FICON directors to create the SAN communication paths.
For the HP-supported directors and port count fabric maximums, see the following:
• “B-series FICON directors and fabric rules” (page 29)
• “C-series FICON directors and fabric rules” (page 46)
• “McDATA FICON directors and fabric rules” (page 63)

Supported hop counts
FICON supports a maximum of one hop between the mainframe and the CU. One hop is one ISL path between two directors. For more information, see “Cascaded FICON fabric” (page 16).
• FICON CUP • High-integrity fabric features, including: ◦ Fabric binding—Enables the fabric to prevent a director from being added ◦ Persistent domain IDs—Prevents a director address from being changed automatically when a duplicate director address is added to the fabric ◦ In-order delivery—Guarantees in-order delivery of FICON frames across the fabric Each FICON director in a fabric must have a unique domain ID and a unique switch ID.
For storage system information, see “P9500 storage system rules” (page 103) and “XP storage system rules” (page 110). • Scalability and migration Select a design that can be expanded incrementally over time as storage and connectivity needs increase. Migration paths for each of the topologies provide flexibility to expand a SAN. For information on scaling and migrating, see “FICON SAN best practices” (page 154).
2 FICON SAN fabric topologies This chapter discusses HP FICON SAN fabric topologies. The terms FICON director and FICON switch are used interchangeably. The terms director and switch refer to a FICON director-class product and a FICON switch-class product, respectively. Fabric topologies A FICON fabric is a single-switch fabric or a cascaded fabric. Single-switch and cascaded refer to the number of directors in the data path, not the actual number of directors in the fabric.
Figure 1 Single-switch FICON fabric

Director models
Single-switch FICON SANs typically consist of director-class products (up to 256 ports per domain ID or switch ID). Switch-class products are rarely used in a single-switch SAN. For a high-availability SAN, use two directors configured in a dual-fabric SAN.
Figure 2 Cascaded FICON fabric

Figure 3 Cascaded FICON fabric with multiple directors

A cascaded FICON fabric consists of the following:
• Entry switch—The FICON director directly connected to the FICON channel in the mainframe
• Cascaded switch—A second director attached to the CU
• ISL—The link between the two directors (can consist of multiple ISLs)

NOTE: The entry switch is the director attached to the CHPID, and the cascaded switch is the second director after the ISL.
Director models Entry switches in a FICON SAN are typically director-class products (up to 256 ports). FICON switch-class products are used in small data centers with only CUs or a limited number of attached mainframe channels.
Topology data access To select a FICON SAN fabric topology, you must determine which data access type is appropriate for your environment.
• Number of paths between a mainframe and a storage system
• Distance (in circuit miles/km) between a mainframe and a storage system

Levels
The data availability levels are as follows:
• Level 1: single connectivity fabric
• Level 2: single resilient fabric
• Level 3: single resilient fabric with multiple device paths
• Level 4: multiple fabrics and device paths (NSPOF)

Level 1: single connectivity fabric
Level 1 provides maximum connectivity, but does not provide fabric resiliency or redundancy.
Level 2: single resilient fabric
Level 2 provides fabric path redundancy by using multiple FICON directors with multiple ISLs to provide connectivity between the directors. Each mainframe and storage system has one path to the fabric. If an ISL fiber or ISL director port failure occurs, the director automatically reroutes data through an alternate ISL and there is no interruption in mainframe I/O activity. Figure 5 (page 21) shows a level 2 fabric.

Level 3: single resilient fabric with multiple device paths
Level 3 is the same as level 2, but provides multiple mainframe and storage system ports to the fabric to increase availability. If a FICON director port, mainframe channel, or storage system port failure occurs, data is automatically rerouted through an alternate path and there is no interruption in mainframe I/O activity. Figure 6 (page 22) shows a level 3 fabric.

Level 4: multiple fabrics and device paths (NSPOF)
Level 4 provides multiple data paths between mainframes and storage systems, but unlike level 3, the paths connect to physically separate fabrics. This level ensures the highest availability with NSPOF protection. If a director port, mainframe channel, or storage system port failure occurs, data is automatically rerouted through the alternate fabric and there is no interruption in mainframe I/O activity. Figure 7 (page 23) shows a level 4 fabric.
The routing type is determined by the number of directors between the mainframe and the CU, not the number of directors in the fabric. • If the mainframe and CU are connected to the same director, single-director routing is used. • If the mainframe and CU are connected to two different directors, and there is at least one ISL or ICL between the directors, cascaded-director routing is used.
Figure 8 FICON SAN using 1-byte destination link addressing

The host definitions for the CUs in Figure 8 (page 25) contain the following routing information:

Control Unit = 1000   Control Unit = 2000   Control Unit = 3000
Path = CHPID 01       Path = CHPID 01       Path = CHPID 01
Link = 20             Link = 21             Link = 22

NOTE: Switch IDs, CHPIDs, and port numbers configured on the mainframe are hexadecimal, not decimal.
Figure 9 FICON SAN using 2-byte destination link addressing

The host definitions for the CUs in Figure 9 (page 26) contain the following routing information:

Control Unit = 1000   Control Unit = 2000   Control Unit = 3000
Path = CHPID 01       Path = CHPID 01       Path = CHPID 01
Link = 0220           Link = 0221           Link = 0222

Figure 10 (page 27) shows a FICON SAN that uses 2-byte destination link addressing, but has two CUs attached to switch 01.
Figure 10 FICON SAN using 2-byte destination link addressing with two CUs

The host definitions for the CUs in Figure 10 (page 27) contain the following routing information:

Control Unit = 0800   Control Unit = 0900
Path = CHPID 01       Path = CHPID 01
Link = 0120           Link = 0121

Control Unit = 1000   Control Unit = 2000   Control Unit = 3000
Path = CHPID 01       Path = CHPID 01       Path = CHPID 01
Link = 0220           Link = 0221           Link = 0222
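The link-address arithmetic above is mechanical, so a short sketch can make it concrete. This is illustrative only: the helper name is hypothetical, the switch and port values are taken from Figures 8 through 10, and nothing here is IOCP or HCD syntax.

```python
from typing import Optional

def link_address(port: int, switch_id: Optional[int] = None) -> str:
    """Build a FICON destination link address as a hex string.

    1-byte addressing (single-director routing): the link address is the
    director port address alone. 2-byte addressing (cascaded-director
    routing): high byte = switch ID of the director attached to the CU,
    low byte = port address.
    """
    if switch_id is None:
        return f"{port:02X}"               # port x'20' -> "20"
    return f"{switch_id:02X}{port:02X}"    # switch x'02', port x'20' -> "0220"

# Figure 8 (single switch): CU 1000 behind port x'20' on switch 01
assert link_address(0x20) == "20"
# Figures 9 and 10 (cascaded): the same port reached through switch 02
assert link_address(0x20, switch_id=0x02) == "0220"
```

Because all of these values are hexadecimal on the mainframe side, building the 2-byte form is simply concatenating the two hex bytes, which is why the cascaded examples show link 0220 for switch 02, port 20.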
Part II FICON fabric infrastructure rules FICON fabric infrastructure rules are presented in these chapters: • “B-series FICON directors and fabric rules” (page 29) • “C-series FICON directors and fabric rules” (page 46) • “McDATA FICON directors and fabric rules” (page 63) • “FICON SAN fabric connectivity and director interoperability rules” (page 71)
3 B-series FICON directors and fabric rules This chapter describes B-series FICON directors and the rules for building B-series fabrics. B-series FICON directors B-series FICON directors can be configured as a single-switch fabric or a cascaded fabric. In a cascaded fabric, there is a 1-hop maximum between the mainframe and the CU (a maximum of two directors in a data path). The fabric can have more than two directors provided that the 1-hop maximum is enforced.
NOTE: Some 1 Gb/s CUs may not be able to autonegotiate with the 4 Gb/s or 2 Gb/s director ports. If this occurs, you must configure the director ports to run at 1 Gb/s. B-series 16 Gb/s and 8 Gb/s SFP+ transceivers do not support a speed of 1 Gb/s. Model naming The B-series FICON directors are typically called HP SAN, SAN Backbone, Core, or Director switches. SAN Backbone, Core, and Director switches are core (enterprise-class) switches. SAN switches are edge (entry-level or midrange) switches.
Table 3 B-series FICON directors

Director/switch3: HP SN6000B 16Gb FC Switch
  Firmware versions: 7.2.0d, 7.1.0c, 7.0.0d; Number of ports: 24 to 48
Director/switch: HP SN8000B 8-Slot SAN Backbone Director Switch5
  Firmware versions: 7.2.0d, 7.1.0c, 7.0.0d; Number of ports: 32 to 5124
Director/switch: HP SN8000B 4-Slot SAN Director Switch5
  Number of ports: 32 to 256
Director/switch: HP StorageWorks DC SAN Director Multi-protocol Extension Blade
  Number of ports: 12 FC/FICON, 10 1-GbE (FCIP), 2 10-GbE (FCIP)
Fabric management versions: HP Network Advisor1 12.1.3, 12.0.2, 11.1.2; DCFM2 N/A
16 FC/FICON 7.2.
1 FICON SANs require Network Advisor Enterprise.
2 DCFM = Data Center Fabric Manager. FICON SANs require DCFM Enterprise.
3 For information about FICON director-supported blades, see Table 5 (page 32).

Blade support for the SAN Director Switches
This section describes the blades that you can install in the SN8000B 8-Slot Director, SN8000B 4-Slot Director, 2/128 SAN Director, 4/256 SAN Director, DC SAN Backbone Director, and DC04 SAN Director.
Table 5 B-series blades (continued)

Compatibility (continued): DC04 SAN Director, and the DC SAN Backbone Director

Name: 16-port 8 Gb/s port blade (FC8-16), 32-port 8 Gb/s port blade (FC8-32), 48-port 8 Gb/s port blade (FC8-487)
Blade ID (slotshow): 21
Description: 16-port blade supporting 1, 2, 4, and 8 Gb/s FC or FICON port speeds6
Compatibility: Compatible only with the 4/256 SAN Director (requires FOS 5.3.
6 The 16 Gb/s SFP autonegotiates to 16, 8, and 4 Gb/s. The 8 Gb/s SFP+ autonegotiates to 8, 4, and 2 Gb/s. The 4 Gb/s SFP autonegotiates to 4, 2, and 1 Gb/s.
7 The FC8-48 blades support FICON in DC SAN Backbone Directors and DC04 SAN Directors with FOS 6.2.0e or later.
8 The Virtual Fabric feature must be enabled if the FC8-48 blade is installed in a DC SAN Backbone Director, regardless of whether the blade is used for FICON connectivity.
Table 7 Blades supported by each director with FOS 7.0.
Features Features of the B-series FICON directors are as follows: • Advanced Performance Monitor—Analyzes resource utilization throughout the fabric. • Advanced Web Tools—Centralizes and simplifies director management through a browser-based application. • Advanced Zoning—Provides secure access control over fabric resources. Uses the director firmware to enforce port and WWN zoning. • Extended Fabrics—Enables FICON connectivity from 2.
performance, even in congested environments. The QoS SID/DID prioritization and ingress rate limiting features are the first components of this license option. For configuration restrictions, see the HP StorageWorks Fabric OS 6.x Administrator Guide. • ICLs (DC SAN Backbone Directors and DC04 SAN Directors only)—Provides dedicated high-bandwidth links between two DC SAN Director Switch chassis without using the front-end 8 Gb/s ports.
Table 9 (page 38) compares the high-availability features of the B-series FICON directors.
Figure 11 DC SAN Backbone Directors connected with ICLs DC Backbone Switch ID = 1 ICLs = 0 Hop DC Backbone Switch ID = 2 ISLs = 1 Hop Switch ID = 3 26470a Even if there are no other FICON directors in the fabric, when using ICLs, the SN8000B and DC SAN Director Switches are considered a cascaded FICON fabric and must adhere to all configuration rules for cascaded fabrics, including using 2-byte destination link addresses.
Table 10 B-series FICON fabric rules Rule number Description 1 All FICON directors in the same fabric or SAN must use the same firmware version. When updating director firmware, you can use two successive director firmware versions temporarily in the SAN. 2 In general, FICON fabrics and intermix fabrics must contain directors of the same series. For exceptions, see “Interoperability with McDATA directors” (page 44).
Table 10 B-series FICON fabric rules (continued)

Rule number  Description
14           If CUP is enabled, port x'FE' on the director is the default port, making it and port x'FF' unavailable.
15           Use either SFOS or ACLs to establish an SCC policy for Fabric Binding. SFOS and ACLs are mutually exclusive—you must use one or the other to establish Fabric Binding for FICON.
             • Use SFOS for firmware versions earlier than 5.2.0a; it requires an additional license.
             • Use ACLs with firmware 5.2.0a or later.
FICON director database size Table 11 (page 42) describes the database size rules for B-series FICON directors in a fabric. Table 11 Database size rules for B-series FICON directors Rule number 1 Description SFOS—With security enabled in a fabric, the maximum security database size is as follows: • If the fabric contains any 1 Gb/s directors, the maximum size is 32 KB, with only 16 KB active. The maximum number of DCC policies is 620.
• In an intermix environment, do not mix FCP and FICON ports in the same zone. Use Fibre Channel zoning guidelines for the FCP ports, as described in the HP SAN Design Reference Guide. For more information about intermix, see “FICON SAN best practices” (page 154). • In an intermix environment, do not include non-FICON blades (such as the FC4-48 blade and the iSCSI blade) in the FICON zone.
enabled, directors synchronize time with the primary FCS, which may or may not be the principal director. NOTE: If CUP is enabled and configured on the director and the mainframe, the mainframe sets the clock in the director. All other clock update methods must be disabled. Mainframe time runs at Greenwich Mean Time. Interoperability with McDATA directors NOTE: The information in this section is provided for reference only.
1 EFCM must connect to a McDATA switch to manage a mixed fabric.
2 DCFM 10.3.1 or earlier must connect to a B-series switch to manage a mixed fabric.
3 The M4700 is no longer supported in a FICON fabric because it is supported with M-EOSc 9.9.7 and earlier only.

DCFM Enterprise 10.3.2 or 10.4.1 can manage a mixed fabric when connected directly to either a McDATA switch or a B-series switch. For more information on chassis configuration mode options, see the HP StorageWorks Fabric OS Administrator Guide.
4 C-series FICON directors and fabric rules This chapter describes C-series FICON directors and the rules for building C-series fabrics. C-series FICON directors C-series FICON directors can be configured as a single-switch fabric or a cascaded fabric. In a cascaded fabric, there is a 1-hop maximum between the mainframe and the CU (a maximum of two directors in a data path). The fabric can have more than two directors provided the 1-hop maximum is enforced.
Model naming The C-series Fibre Channel switches are named MDS 9xnn and SNn000C. Switch Type SN8000C and SN8500C (Multilayer Directors) Director or core SN6000C Entry-level MDS 95nn1 Director or core MDS 92nn2 Mid-range MDS 91nn2 Entry-level 3 MDS 9020 Entry-level 4 Gb/s switch (Multilayer Fabric switch) 1 In this switch, nn indicates the number of slots available for supervisors and port modules. 2 In this switch, nn indicates the number of fixed ports.
NOTE: Starting with release 4.1(1b), Cisco MDS 9000 SAN-OS software is rebranded to Cisco MDS 9000 NX-OS software. Software prior to NX-OS 4.1(1b) uses the SAN-OS name and remains on the 3.x code stream. NX-OS is designed to operate with switches running SAN-OS software. NX-OS 4.1(1c) is the first NX-OS version that is certified to support FICON.
Table 14 C-series FICON directors

Director: HP StoreFabric SN8500C 8-slot 16Gb FC Director
  Firmware versions: NX-OS 6.2(5a); Number of ports: 3843
Director: HP SN8000C 6-Slot Supervisor 2A Director Switch / MDS 9506 Multilayer Director1
  Firmware versions: NX-OS 6.5(5a), 5.2(2), 4.2(7b)2; Number of ports: 192
Director: HP SN8000C 9-Slot Supervisor 2A Director Switch / MDS 9509 Multilayer Director1
  Firmware versions: NX-OS 6.5(5a), 5.2(2), 4.2(7b)2; Number of ports: 3363, 4
Director: HP SN8000C 13-Slot Supervisor 2A Fabric 3 Director Switch5
  Firmware versions: NX-OS 6.5(5a), 5.; Number of ports: 5283
1 Modules, providing up to 384 ports of full 16Gbps line-rate performance across all ports. The open expansion slots of the SN8500C Director can be filled with the HP StoreFabric C-series Family Modules, which include a 48-port 16Gb FC Module.
◦ The HP StoreFabric SN8500C 48-port 16Gb FC Module is recommended for high-performance 16 Gb/s enterprise-level host connections, storage connections, and ISL connections.
◦ • The IP Storage Services Modules and Multi-protocol Services Modules provide MDS FCIP and FICON over IP functionality. – IPS-4 and IPS-8 provide 4 and 8 GbE IP ports, respectively. – The 14/2 Multi-protocol Services Module provides 14 2-Gb/s Fibre Channel/FICON ports and 2 GbE IP ports. – The 18/4 Multiservice Module provides 18 4-Gb/s Fibre Channel ports and 4 GbE IP ports.
• • MDS 9222i switches have two slots: ◦ One slot is a fixed configuration with an 18/4 Multiservice Module. ◦ The second slot can accommodate a 48-port 8 Gb/s Host-Optimized Fibre Channel Switching Module; a 12-port, 24-port, or 48-port 4 Gb/s Fibre Channel Switching Module; a 4-port 10 Gb/s Fibre Channel Switching Module; an IPS-8 or 18/4 Multiservice Module for iSCSI and FCIP support; or a 32-port 2 Gb/s SSM.
Table 16 C-series Fibre Channel switching module support matrix for SAN-OS

Switch types (column order): MDS 9513 / HP SN8000C 13-Slot SUP2A FAB2 Director Switch; MDS 9509 / HP SN8000C 9-Slot SUP2A Director Switch; MDS 9506 / HP SN8000C 6-Slot SUP2A Director Switch; MDS 9216; MDS 9216A; MDS 9216i; MDS 9222i

Switching module (support per switch type, in the order above)
16-port Fibre Channel Switching Module (1 Gb, 2 Gb): Yes, Yes, Yes, No, Yes, Yes, Yes
32-port Fibre Channel Switching Module1 (1 Gb, 2 Gb): Yes, Yes, Yes, No, Yes, Yes, Yes
14/2 Multiprotoc
Table 17 C-series Fibre Channel switching module support matrix for NX-OS 4.
Table 18 C-series Fibre Channel switching module support matrix for NX-OS 5.x, 6.
Table 18 C-series Fibre Channel switching module support matrix for NX-OS 5.x, 6.
FICON-specific features of the C-series FICON directors are as follows: • Persistent domain ID—Required for cascaded FICON director support. • In-order frame delivery—Guarantees in-order delivery of FICON frames across a single ISL or PortChannel ISLs. • Fabric binding—Required for cascaded FICON director support. • FICON CUP—Optional CU port that enables the mainframe to monitor and manage the director. • FICON port swapping—Enables you to swap port numbers between two FICON ports.
Table 19 C-series FICON director high-availability feature comparison (continued)

Feature columns: redundant/hot-swappable power, redundant/hot-swappable cooling, redundant control processor, port module support, nondisruptive code activation, protocol support
Models: Fabric 2 Director Switch; SN8000C Supervisor 2A Fabric 3 Director Switch

Fabric rules
This section describes the fabric rules for C-series FICON directors and other factors to consider when building C-series FICON fabrics.
Table 20 C-series FICON fabric rules (continued)

Rule number  Description
5            FICON directors with CUP enabled are configured on the mainframe as follows:
             • Device type = 2032
             • LCU = x'00'
             • Device address (unit address) = x'00'
             • Link address = x'FE'
6            There is a maximum of 60 MDS directors, 4,000 total ports, and 3,500 user ports.
7            SN8000C 6-Slot Supervisor 2A Fabric 2 Director Switches and MDS 9506 support up to 192 ports over 4 modular slots (four 48-port modules).
Table 21 ISL maximums for C-series FICON directors

Module                                                 Total number of available user ports    Number of ports allowed as ISLs
HP SN8500C 48-port 16Gb FC Module                      48                                      48 at 16 Gb/s
SN6000C                                                48                                      48 at 8 Gb/s
HP SN8000C 8Gb 32-Port Advanced Fibre Channel Module   32                                      32 at 8 Gb/s, 24 at 10 Gb/s
HP SN8000C 8Gb 48-Port Advanced Fibre Channel Module   48                                      48 at 8 Gb/s, 24 at 10 Gb/s
MDS 9xxx 48-port 8 Gb/s Host-Optimized Fibre Channel   48                                      48 at 1 Gb/s
Switching Module
MDS 95xx 48-port 8 Gb/s Performance Fibre Channel
Table 23 (page 61) describes zoning enforcement for C-series FICON directors.
Figure 12 C-series FICON high-availability VSAN management configuration

The figure shows fabric A (VSAN 3) and fabric B (VSAN 2) joined by an ISL that carries the management VSAN (VSAN 1).
5 McDATA FICON directors and fabric rules

This chapter describes the Brocade McDATA FICON directors and the fabric rules for building McDATA fabrics.

NOTE: This chapter is provided as a reference only. HP does not sell or support any McDATA FICON products. The information in this chapter is provided because HP allows McDATA FICON connectivity to P9500 and XP disk arrays.

McDATA FICON directors
McDATA FICON directors can be configured as a single-switch fabric or a cascaded fabric.
Director models Table 24 (page 64) describes the Brocade McDATA FICON directors. HP supports P9500 and XP connectivity to McDATA FICON directors in a fabric if you: • Use the firmware versions listed in Table 24 (page 64). All FICON directors in the same fabric or SAN must use the same firmware version. When updating director firmware, you can use two successive firmware versions temporarily in the same fabric or SAN. • Follow the fabric rules. See “Fabric rules” (page 67).
Table 25 McDATA switching module support matrix (continued)

Switching module                         M6064    M6140    Mi10K
32-port LIM card (1 Gb, 2 Gb, 10 Gb)3    No       No       Yes
32-port LMQ card (1 Gb, 2 Gb, 4 Gb)      No       No       Yes

1 The QPM card can support four ports running at 4 Gb/s bursts, and can support two ports running at 4 Gb/s sustained.
2 If configured for ISLs at 4 Gb/s, the QPM card can support two ports.
3 The 10 Gb/s port on the XPM card has a sustained data rate of approximately 6.5 Gb/s.
For more information, see “FICON and Fibre Channel intermix guidelines” (page 156). • Extended fabrics—Enables FICON connectivity from 2.5 km (4 Gb/s), 5 km (2 Gb/s), and 10 km (1 Gb/s) up to 100 km to improve disaster recovery operations and ensure business continuity. For more information about distance extension, see “FICON and FICON SAN extension” (page 120). • High-integrity fabrics—Required to support cascaded FICON directors.
Fabric rules This section describes the fabric rules for McDATA FICON directors and other factors you should consider when building McDATA FICON fabrics.
Table 27 McDATA FICON fabric rules (continued) Rule number Description 9 You must assign a unique domain number (domain ID), unique switch number, and unique WWN to each director. Directors of the same model must have the same configuration parameter settings. The domain ID and switch number are two different parameters in the McDATA directors. 10 In an intermix environment, do not configure any directors with a domain ID of 8, which is reserved for HP-UX.
Table 29 ISL maximums for McDATA FICON directors

Model    Maximum number of ISLs (firmware version 09.xx.xx)
M6140    140
M6064    48
M4700    140
Mi10K    140

Zoning limits and enforcement
This section describes zoning limits and enforcement for McDATA FICON directors. Table 30 (page 69) lists the zoning limits for McDATA FICON directors with firmware 09.xx.xx.
Interoperability with B-series directors NOTE: The information in this section is provided as a reference only. Although HP allows McDATA FICON connectivity to P9500 and XP disk arrays, HP does not sell or support McDATA FICON products. B-series FICON directors and switches (or the equivalent Brocade models) can coexist in the same FICON fabric as the Mi10K and M6140 Directors and the M4700 Fabric Switch under the following conditions: • B-series directors must be set to Interop Mode 2.
6 FICON SAN fabric connectivity and director interoperability rules This chapter describes FICON SAN fabric connectivity and director interoperability rules. FICON SAN fabric connectivity rules This section describes FICON SAN fabric connectivity rules.
Table 32 Rules for fiber optic cable connections Rule number Description 1 The minimum bend radius for HP PremierFlex 50 µm OM4 and OM3+ fiber optic cable is 7 mm. Industry standard bend radius for OM3, OM2, and OM1 cables is 25 mm for 50, 62.5, and 9 µm fiber optic cables. HP recommends 50 µm fiber optic cable for new installations that require multi-mode fiber connections. The 62.5 µm fiber optic cable is acceptable for existing installations.
NOTE: Channel insertion loss is the combined passive loss from connectors, splices, and media between the transmitter and receiver. A mated connector pair is defined as a device or switch transceiver-to-cable connection, or a cable-to-cable connection when using a passive coupler in a patch panel. A typical (nonpatch panel) installation has two mated pairs per cable segment (one for each end of the cable).
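Because the channel insertion loss is just the sum of the connector and fiber losses described above, a quick budget check is easy to script. The sketch below is illustrative only: the per-mated-pair and per-kilometer loss figures are assumed typical values, not the budgets from the loss tables in this chapter, so substitute the numbers for your cable plant and speed.

```python
def channel_insertion_loss(mated_pairs: int, fiber_km: float,
                           loss_per_pair_db: float = 0.5,
                           fiber_db_per_km: float = 3.5) -> float:
    """Total passive loss (dB) from connectors and fiber between Tx and Rx."""
    return mated_pairs * loss_per_pair_db + fiber_km * fiber_db_per_km

# A typical non-patch-panel segment has two mated pairs (one per cable end).
# Adding a patch panel adds a mated pair, which can push a marginal link
# over its budget.
budget_db = 2.0   # assumed multimode budget; use the value from Table 35
direct = channel_insertion_loss(mated_pairs=2, fiber_km=0.15)
patched = channel_insertion_loss(mated_pairs=3, fiber_km=0.15)
print(f"direct: {direct:.2f} dB (ok={direct <= budget_db}), "
      f"patched: {patched:.2f} dB (ok={patched <= budget_db})")
```

With these assumed figures, the direct 150 m run fits the budget while the patch-panel run does not, which is exactly why the tables count mated pairs as well as distance.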
Table 35 (page 74) lists the 8 Gb/s fiber optic cable loss budgets when using OM1, OM2, OM3, OM3+, or OM4 multimode fiber optic cable and single-mode fiber optic cable.

Table 35 8 Gb/s fiber optic cable loss budgets (nominal bandwidth)

Cable type                                    Maximum distance per cable segment    Total channel insertion loss
62.5/125 micron (OM1 200 MHz-km at 850 nm)    21 m                                  1.58 dB
50/125 micron (OM2 500 MHz-km at 850 nm)      50 m                                  1.68 dB
50/125 micron (OM3 2000 MHz-km at 850 nm,     150 m                                 2.
OM3+ 3000 MHz-km at 850 nm)
Table 37 (page 75) lists the 2 Gb/s fiber optic cable loss budgets when using OM1, OM2, or OM3 multimode fiber optic cable and single-mode fiber optic cable.

Table 37 2 Gb/s fiber optic cable loss budgets (nominal bandwidth)

Cable type                                    Maximum distance per cable segment    Total channel insertion loss
62.5/125 micron (OM1 200 MHz-km at 850 nm)    150 m                                 2.1 dB
50/125 micron (OM2 500 MHz-km at 850 nm)      300 m                                 2.62 dB
50/125 micron (OM3 2000 MHz-km at 850 nm)     500 m                                 3.31 dB
Single-mode                                   10 km                                 7.8 dB
Single-mode                                   35 km                                 21.
Table 39 16 Gb/s FICON distance rules (B-series only)

Interface/transport: 50 micron multi-mode fiber optic cable and short-wave SFP+ transceivers
Supported storage products: Mainframe CTC1

Supported distances:
OM2 fiber         35 m at 16 Gb/s1, 50 m at 8 Gb/s, 150 m at 4 Gb/s
OM3, OM3+ fiber   100 m at 16 Gb/s, 150 m at 8 Gb/s, 380 m at 4 Gb/s
OM4 fiber         125 m at 16 Gb/s, 190 m at 8 Gb/s, 400 m at 4 Gb/s
62.
Table 40 (page 77) describes the distance rules for 10 Gb/s FICON ISLs when using 10 Gb/s FICON director models.

Table 40 10 Gb/s FICON distance rules

Interface/transport: 50 µm multi-mode fiber optic cable and short-wave XFPs
Supported distances at 10 Gb/s:
OM2 fiber   82 m ISL (B-series only)
OM3 fiber   300 m ISL (B-series only)
OM1 fiber   62.
Table 41 (page 78) describes the distance rules for 8 Gb/s FICON connections when using 8 Gb/s FICON director models.
Table 42 (page 79) describes the distance rules for 4 Gb/s FICON connections when using 4 Gb/s FICON director models.

Table 42 4 Gb/s FICON distance rules (B-series and C-series directors)

Interface/transport: 50 µm multi-mode fiber optic cable and short-wave SFPs
Supported distances:
OM2 fiber   150 m ISL at 4 Gb/s, 300 m ISL at 2 Gb/s, 500 m ISL at 1 Gb/s
OM3 fiber   380 m ISL at 4 Gb/s, 500 m ISL at 2 Gb/s, 860 m ISL at 1 Gb/s
OM1 fiber   62.
Table 43 (page 80) describes the distance rules for 2 Gb/s FICON connections when using 2 Gb/s FICON director models.

Table 43 2 Gb/s FICON distance rules (B-series, C-series, and McDATA directors)

Interface/transport: 50 µm multi-mode fiber optic cable and short-wave SFPs
Supported distances: 300 m ISL at 2 Gb/s, 500 m ISL at 1 Gb/s
62.
Table 45 (page 81) describes the distance rules for FICON connections when using FCIP extension.
FICON director interoperability rules
Table 46 (page 82) describes the rules for FICON director interoperability.

Table 46 FICON director interoperability rules

Rule number  Description
1            In general, FICON fabrics and intermix fabrics must contain directors of the same series. For exceptions, see “Interoperability with McDATA directors” (page 44).
FICON third-party director support The following combinations of FICON directors in the same fabric are allowed; however, HP does not provide support for third-party FICON directors. HP provides support for its products and cooperates with the third-party's technical support staff, as needed.
send more data than the ISL or path bandwidth can accommodate. ISL trunking can help balance the load between physical ISL ports and prevent oversubscription. • Fabric interconnect speeds FICON supports 16 Gb/s, 8 Gb/s, 4 Gb/s, 2 Gb/s, and 1 Gb/s speeds. The highest performance is attained by configuring a fabric with all components at the same, highest-available speed. Additional factors such as distance, number of buffers, and device response times can also affect performance.
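As a worked illustration of the oversubscription point above, the ratio of aggregate device bandwidth to aggregate ISL bandwidth is simple to compute. The port counts and speeds below are hypothetical, not a recommended configuration.

```python
def isl_oversubscription(device_ports: int, device_gbps: float,
                         isls: int, isl_gbps: float) -> float:
    """Ratio of aggregate device bandwidth to aggregate ISL bandwidth."""
    return (device_ports * device_gbps) / (isls * isl_gbps)

# 24 channel/CU ports at 8 Gb/s sharing 4 ISLs at 16 Gb/s -> 3:1
ratio = isl_oversubscription(device_ports=24, device_gbps=8,
                             isls=4, isl_gbps=16)
print(f"{ratio:g}:1 oversubscription")
```

A ratio well above 1:1 only performs acceptably if the devices rarely transmit at full rate simultaneously; matching component speeds end to end, as recommended above, keeps the ratio honest.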
Part III Mainframe and storage system rules Mainframe and storage system rules are presented in these chapters: • “Mainframe rules” (page 86) • “XP7 storage system rules” (page 97) • “P9500 storage system rules” (page 103) • “XP storage system rules” (page 110)
7 Mainframe rules This chapter describes the mainframe configuration rules for FICON SANs.
speed of 1 Gb/s, 2 Gb/s, or 4 Gb/s, and a maximum of 64 concurrent I/O operations per port. FCV mode is not supported. • FICON Express 8—Includes four independent channels, each of which has one port with an LC duplex connector (four ports total). Supports a link speed of 2 Gb/s, 4 Gb/s, or 8 Gb/s, and a maximum of 64 concurrent I/O operations per port. A link speed of 1 Gb/s is not supported. FCV mode is not supported.
• The S/390 9672 G5 and S/390 9672 G6 processors do not support FICON channels in FCP mode. • HDS and ADL Millennium 2000c mainframes are compatible with the S/390 processors. IBM zSeries processor FICON support rules IBM zSeries processors are the current family of mainframes. The processors run the z/OS operating system, and offer the following FICON features: • FICON (FC) native mode CHPID support • FCV mode CHPID support NOTE: • Not all zSeries channel cards support FCV mode.
IBM z990 IBM z990 processors support: • Two LCSSs • Maximum of 60 FICON Express cards and/or FICON Express 2 cards, for a total of 240 FICON ports When building a FICON SAN with zSeries processors, consider the following: • The maximum number of FICON channel cards depends on the mainframe model. If the mainframe contains other channel cards (for example, ESCON or OSA), this reduces the maximum number of FICON channel cards.
• FCV mode CHPIDs cannot connect to a FICON CU or FICON director. • FCP mode CHPIDs cannot connect to a FICON CU or FICON director. Use FCP mode to connect to Fibre Channel directors and devices from an LPAR running zLinux. Arbitrated loop topologies are not supported with FCP mode. NOTE: FICON Express and FICON Express 2 cards are available only as a carry-over from an existing processor during an upgrade to a z9 processor or a z10 processor.
• FCP point-to-point configurations (directly attached, with no Fibre Channel director) • Cascaded FICON directors When building a FICON SAN with the zEnterprise 196 processor, consider the following: • The maximum number of FICON channel cards depends on the mainframe model and if the mainframe contains other channel cards. For example, if the mainframe has ESCON or OSA cards, this reduces the number of FICON channel cards supported. • FCV channels are not supported.
The Linux operating system has been running on zSeries and System z mainframes for more than a decade. zLinux is available in two distributions—Red Hat and SUSE. Both are functionally equivalent to each other from a channel and disk connectivity standpoint. zLinux can run as an independent operating system on an LPAR or as a guest operating system under z/VM.
zLinux FCP support rules for HP storage • FCP mode CHPIDs cannot connect to a FICON CU or FICON director. Use FCP mode to connect to Fibre Channel directors and devices from an LPAR or z/VM guest running zLinux. NOTE: For the latest information on Mainframe zLinux support with P9500 storage, see the HP SPOCK website at http://www.hp.com/storage/spock. You must sign up for an HP Passport to enable access.
Maximum unrepeated distance
Table 47 (page 94) lists the FICON channel cable specifications and maximum unrepeated distances from the channel port on the mainframe to the CU or director port.

Table 47 zSeries and z9 FICON supported channel cable types and distances

Channel card feature name / Connector type / Cable type / Link data rate / Maximum unrepeated distance1
SM 9 µm: 10 km
With MCP2, MM 50 or MM 62.5 µm: 550 m
MM 62.

Table 47 zSeries and z9 FICON supported channel cable types and distances (continued)

MM 50 µm: 500 m (2,000 MHz-km)
1 Gb/s: MM 62.5 µm 300 m (200 MHz-km); MM 50 µm 500 m (500 MHz-km), 860 m (2,000 MHz-km)
FICON Express 8 10Km LX: SM 9 µm, 10 km at 2, 4, or 8 Gb/s
FICON Express 8 SX at 8 Gb/s: MM 62.5 µm 21 m (200 MHz-km); MM 50 µm 50 m (500 MHz-km), 150 m (2,000 MHz-km)
MM 62.
Maximum repeated distance The maximum repeated distance for FICON is 100 km. Distance extension products are available for increasing the maximum distance. Use of extension products depends on the CU and application you are running. For more information about distance extension, see “FICON and FICON SAN extension” (page 120). FICON SAN zoning rules When building a FICON SAN, follow these zoning rules: • Ensure that all FICON ports are in the same zone, Meta SAN, or VSAN.
8 XP7 storage system rules This chapter describes rules for the XP7 storage system in a mainframe environment.
Table 49 XP7 FICON channel interface specifications

Storage system: XP7; Connector type: LC Duplex

Ports per system      Channel feature    Ports per card    Port speed1    SFP type
176 @ 8 Gb/s FICON    16ML8              8                 2/4/8 Gb/s     Long wave
192 @ 8 Gb/s FCP      16MS8              8                 2/4/8 Gb/s     Short wave
                      16FC8              8                 2/4/8 Gb/s
96 @ 16 Gb/s FCP2     8FC16              8

1 Ports autonegotiate the speed at which they operate.
2 Currently, no mainframe CHPIDs support 16 Gb/s. Default is short wave. Long wave is 4/8/16 Gb/s optional.
Table 51 Volume emulation type specifications (continued) Volume type Volume capacity (GB) Number of cylinders Tracks per cylinder (bytes per track) 3390-9 8.51 10,017 15 (56k) 3390-3 2.84 3,339 15 (56K) P9500 mainframe addressing of volumes Table 52 (page 99) lists the maximum values for XP7 mainframe configurations.
Table 54 XP7 PAV support

Storage system    CU type    Maximum number of PAVs per base address
XP7               2107       255

Multiple allegiance
MA enables multiple LPARs to issue multiple I/O requests in parallel to a disk volume. MA is a standard part of the PAV feature in the XP7 storage system.

Hyper PAV
Hyper PAVs build on the PAV and MA functions.
Table 56 XP7 configuration rules (continued) Rule Description 10 A UNITADD is the starting address of the range of devices configured in the LCU. For example, if the LCU has 256 devices configured with the device addresses x'00' through x'FF', then x'00' is the UNITADD. 11 The UNITADD parameter contains two fields, the first field is UNITADD and the second field is a decimal value describing the range of addresses in the LCU.
XP7 configuration with PAVs
An XP7 array configured for 2,048 mainframe volumes with 3 PAVs for each volume requires 32 LCUs, each with 64 base addresses and 192 PAV (alias) addresses. Table 58 (page 102) lists a sample XP7 configuration with PAVs. All numbers are hexadecimal unless noted otherwise.

Table 58 Sample XP7 configuration with PAVs

CUNUMBER    CUADD    Device number (base)    UNITADD (starting address, range1)    Device number (alias)
1000        00       1000–103F               00,64                                 1040–10FF
            01       1100–113F               00,64                                 1140–11FF
            02       1200–123F               00,64                                 1240–12FF
. . .
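The Table 58 layout is regular enough to generate. The sketch below reproduces it under the stated assumptions (32 LCUs, 64 base plus 192 alias addresses each, device numbers counting up from the sample's x'1000'); the function name is illustrative and is not part of any configuration tool.

```python
def pav_layout(lcus: int = 32, bases: int = 64, aliases: int = 192,
               devno_base: int = 0x1000):
    """Generate the base/alias device-number ranges for each LCU."""
    rows = []
    for cuadd in range(lcus):
        start = devno_base + cuadd * (bases + aliases)   # 256 devices per LCU
        base_rng = f"{start:04X}-{start + bases - 1:04X}"
        alias_rng = f"{start + bases:04X}-{start + bases + aliases - 1:04X}"
        rows.append((f"{cuadd:02X}", base_rng, alias_rng))
    return rows

for cuadd, base, alias in pav_layout()[:3]:
    print(cuadd, base, alias)
# 00 1000-103F 1040-10FF
# 01 1100-113F 1140-11FF
# 02 1200-123F 1240-12FF
```

The 32 LCUs then expose 32 x 64 = 2,048 base volumes, matching the text above.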
9 P9500 storage system rules This chapter describes rules for P9500 storage systems in a mainframe environment.
Table 60 (page 104) lists the FICON channel interface specifications for the P9500 storage systems, and maximum unrepeated distances from the FICON port on the P9500 disk array.

Table 60 P9500 FICON channel interface specifications

Storage system: P9500; Channel feature: 16 MUS

Cable type              Maximum unrepeated distance    Link data rate
MM 50/125 μm (OM3)      500 m                          2 Gb/s
                        380 m                          4 Gb/s
                        150 m                          8 Gb/s
MM 62.5/125 μm          150 m                          2 Gb/s
                        70 m                           4 Gb/s
                        21 m                           8 Gb/s
Table 61 IBM control units and volumes for emulation

Storage system    IBM CU1            Volume types2
P9500             2105-F20, 2107     3380-3, 3390-3/3R3, 3390-9, 3390-L, 3390-M, 3390-A

1 HP recommends that you not mix CU types in the same P9500 system.
2 Not all volumes can be emulated on all CU types. For the latest supported volume types, contact an HP storage representative.
3 3390-3 and 3390-3R cannot be in the same P9500 system.

Table 62 (page 105) lists the capacity for different volume types.
P9500 mainframe addressing of volumes Table 63 (page 106) lists the maximum values for P9500 mainframe configurations per CU.
Multiple allegiance
MA enables multiple LPARs to issue multiple I/O requests in parallel to a disk volume. MA is a standard part of the PAV feature in the P9500 storage system.

Hyper PAV
Hyper PAVs build on the PAV and MA functions. The difference is that instead of fixed alias addresses configured for each base address, Hyper PAV allows a pool of alias addresses to be used and reused to access base addresses.
Table 67 P9500 configuration rules (continued) Rule number nl Description 12 The LPAR uses a logical path to communicate with the LCU. This path includes the CHPID on the mainframe, the FICON port on the CU, and any ports and directors in the path. There can be up to eight logical paths from an LPAR to any LCU. 13 The system programmer assigns one or more 4-digit hexadecimal SSIDs to your storage subsystem. The SSID is assigned to a range of either 64 volumes or 256 volumes.
P9500 configuration with PAVs
A P9500 array configured for 2,048 mainframe volumes with 3 PAVs for each volume requires 32 LCUs, each with 64 base addresses and 192 PAV (alias) addresses. Table 69 (page 109) describes a sample P9500 configuration with PAVs. All numbers are hexadecimal unless noted otherwise. The layout follows the same pattern as the XP7 sample in Table 58.

Table 69 Sample P9500 configuration with PAVs

CUNUMBER    CUADD    Device number (base)    UNITADD (starting address, range1)    Device number (alias)
1000        00       1000–103F               00,64                                 1040–10FF
            01       1100–113F               00,64                                 1140–11FF
            02       1200–123F               00,64                                 1240–12FF
. . .
10 XP storage system rules

This chapter describes rules for the following storage systems in a mainframe environment:
• XP24000
• XP20000
• XP12000
• XP10000
• XP1024
• XP128

XP storage systems
Before implementation, contact an HP storage representative for information about support for specific configurations, including the following:
• Storage system firmware
• FICON director models and firmware
• Mainframe models and operating system version

Vision and Storage Magic
Vision (formerly RMF
Table 70 XP FICON SAN support

Storage system: XP24000, XP20000 (storage system firmware 60x)
Storage system: XP12000, XP10000 (storage system firmware 50x)
FICON directors1: B-series, C-series, McDATA
Mainframe operating systems2, 3: IBM z/OS, z/VM, z/VSE, Linux SUSE for zSeries, OS/390, VM/ESA, VSE/ESA, Linux SUSE for S/390 (FCP), ALCS, Red Hat Enterprise Linux for zSeries
Mainframe models: S/390 9672 G5, S/390 9672 G6, z800, z890, z900, z990, z9 BC, z9 EC, z10
Table 71 XP FICON channel interface specifications (continued)

Maximum unrepeated distance    Link data rate
75 m                           4 Gb/s
10 km                          1, 2, and 4 Gb/s
500 m                          1 Gb/s
300 m                          2 Gb/s
150 m                          4 Gb/s
300 m                          1 Gb/s
150 m                          2 Gb/s
70 m                           4 Gb/s
20 km                          1 Gb/s
10 km                          2 Gb/s
10 km                          4 Gb/s
500 m                          1 Gb/s
300 m                          2 Gb/s
300 m                          1 Gb/s
150 m                          2 Gb/s
20 km                          1 Gb/s
10 km                          2 Gb/s
500 m                          1 Gb/s
300 m                          2 Gb/s
300 m                          1 Gb/s
150 m                          2 Gb/s
20 km                          1 Gb/s
10 km                          2 Gb/s
500 m                          1 Gb/s
300 m                          2 Gb/s
300 m                          1 Gb/s
150 m
data field. The standard also uses the EBCDIC character set (not the ASCII character set used in open systems). XP storage is configured to emulate IBM standard CUs and volume types. The emulated standard volumes present the same number of cylinders and capacity to the mainframe as the native S/390 volume type of the same name. Table 72 (page 113) lists the CU types and volume types that XP storage systems can emulate in a mainframe environment.
Table 73 (page 114) lists the capacity for different volume types.

Table 73 Volume emulation type specifications

Volume type    Volume capacity (GB)    Number of cylinders    Tracks per cylinder (bytes per track)
3380-3         2.377                   3,339                  15 (48k)
3380-E         1.26                    1,770                  15 (48k)
3380-K         1.89                    2,655                  15 (48k)
3380-J         0.63                    885                    15 (48k)
3390-1         0.946                   1,113                  15 (56k)
3390-2         1.89                    2,226                  15 (56k)
3390-3         2.838                   3,339                  15 (56k)
3390-9         8.51                    10,017                 15 (56k)
3390-L         27.8                    32,760                 15 (56k)
3390-M         55.
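The capacities in Table 73 follow directly from 3390 geometry: 15 tracks per cylinder at 56,664 bytes per track (the "56k" in the table). A quick sanity-check sketch; the exact track size is stated here as an assumption of standard 3390 geometry rather than a value from the table:

```python
BYTES_PER_3390_TRACK = 56_664   # standard 3390 track capacity ("56k" above)
TRACKS_PER_CYLINDER = 15

def volume_gb(cylinders: int) -> float:
    """3390 volume capacity in GB (decimal) from its cylinder count."""
    return cylinders * TRACKS_PER_CYLINDER * BYTES_PER_3390_TRACK / 1e9

print(f"3390-3: {volume_gb(3_339):.3f} GB")   # ~2.838 GB
print(f"3390-9: {volume_gb(10_017):.2f} GB")  # ~8.51 GB
print(f"3390-L: {volume_gb(32_760):.1f} GB")  # ~27.8 GB
```

The same arithmetic with the smaller 3380 track size (roughly 47,476 bytes) reproduces the 3380 rows, which is a useful cross-check when planning emulated volume counts.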
GDPS/PPRC and GDPS/XRC command support Most XP storage systems can be integrated into a GDPS/PPRC or GDPS/XRC environment. The GDPS features that are supported depend on the XP storage system model. For more information, contact an HP storage representative. Parallel access volumes PAVs enable a single LPAR to issue multiple I/O requests in parallel to a disk volume. Volumes with a base address contain data. PAVs are alias addresses assigned to the volume, in addition to the volume's base address.
XP configuration rules
Table 78 (page 116) lists the XP mainframe configuration rules.

Table 78 XP mainframe configuration rules

Rule number  Description
1            The XP storage system must be configured to emulate an IBM CU for mainframe implementations. For a list of IBM CUs, see Table 72 (page 113).
2            The XP storage system must be configured to emulate IBM disk volumes. For a list of IBM volume types, see Table 72 (page 113).
3            A maximum of 256 disk volumes per CU is supported.
Table 79 Sample XP configuration without PAVs

Device number    Device address    LCU    CUADD    SSID2    CUNUMBER    UNITADD (starting address, range1)
1000–10FF        00–FF Base        00     00       1000     1000        00,256
1100–11FF        00–FF Base        01     01       1001     1100        00,256
1200–12FF        00–FF Base        02     02       1002     1200        00,256
1300–13FF        00–FF Base        03     03       1003     1300        00,256
1400–14FF        00–FF Base        04     04       1004     1400        00,256
1500–15FF        00–FF Base        05     05       1005     1500        00,256
1600–16FF        00–FF Base        06     06       1006     1600        00,256
1700–17FF        00–FF Base        07     07       1007     1700        00,256

1 The range is a decimal number.
2 SSID value is supplied by the system programmer.
Heterogeneous storage system support HP provides best-practices recommendations for connecting devices in the fabric. See “FICON SAN best practices” (page 154). All third-party storage connected to an HP FICON fabric must be compatible with the HP or third-party directors in the fabric. For more information, contact an HP storage representative.
Part IV FICON and FICON SAN extension Information about FICON SAN extension is presented in “FICON and FICON SAN extension” (page 120).
11 FICON and FICON SAN extension FICON and FICON SAN extension enables you to implement disaster-tolerant storage solutions over long distances and multiple sites. FICON extension overview A FICON extension is an extended CHPID-to-CU connection or a CTC connection. It extends the distance of the channel, but may or may not include a FICON SAN.
Figure 13 FICON extension examples

The figure shows five extension examples, each linking the local and remote sites over IP or SONET.

FICON SAN extension overview
A FICON SAN extension is an extended ISL connection between directors linking two sites.
Figure 14 FICON SAN extension examples

The figure shows three extension examples between the local and remote sites: (1) a long-wave SFP or GBIC connection, (2) a WDM connection, and (3) an IP or SONET connection.

FICON and FICON SAN extension technology
HP supports the following FICON and FICON SAN extension technology:
• FICON long-distance technology
• Multi-protocol long-distance technology

Table 81 (page 122) lists the FICON and FICON SAN extension technology and the corresponding HP extension products.
1 Must be used in conjunction with long-wave CHPIDs. The maximum supported distance is limited by that supported by the CHPID or XP storage array, whichever is the least distance for the transceiver type. FICON long-distance extension applications This section describes FICON long-distance applications for mainframes. Geographically Dispersed Parallel Sysplex GDPS is a disaster-recovery service provided by IBM.
GDPS configurations There are three GDPS configurations, determined by the distance between sites, the disk replication method, and whether the Sysplex Timers and/or coupling links connect the mainframes. • GDPS/PPRC—Uses synchronous PPRC and/or a coupling facility to connect the mainframes. This is limited to a WDM or dark fiber circuit and has a maximum circuit distance of approximately 100 km. IP gateways are not certified for GDPS/PPRC.
1. The production LPARs write data to the production volumes, without disruption by the replication process on the SDM LPARs.
2. The SDM reads the source storage system updates and queues them for writing on the target storage system. You can configure volumes from the source storage system or multiple storage systems into consistency groups.
3. When the SDM has all of the updates for a consistency group, it writes (commits) the data to the target volumes.
Figure 17 (page 126) shows an XRC pull configuration over a long-distance link.

Figure 17 XRC pull configuration

The figure shows the production LPARs and production (source) volumes at the local site, optional FICON directors, and the SDM LPAR(s) and replicated (target) volumes at the remote site.

XRC push configuration
HP recommends that you not use a push configuration for XRC. In a push configuration, the SDM mainframe can lose contact with target volumes if a link fails. All in-progress updates may be lost.
Figure 19 (page 127) shows a FICON native tape extension implementation. It includes a cascaded fabric, but you can have a single-switch fabric or a fabric without FICON directors in the channel path.
Figure 21 (page 128) shows a peer-to-peer VTS extension with the FICON channels extended from the VTC to the remote VTS.

Figure 21 Peer-to-peer VTS extension

The figure shows the production LPARs and VTC at the local site, optional FICON directors, and a VTS/tape library at each site.

Virtual Storage Manager
VSM presents virtual tape drives and volumes to the mainframe. You configure VSM systems on the mainframe as tape drives to provide a storage replication solution for mainframe tapes.
Figure 23 (page 129) shows a VSM back-end extension with the FICON channels extended between VSM and the remote tape library.

Figure 23 VSM back-end extension

The figure shows the production LPARs and VSM at the local site, optional FICON directors, and a tape library at each site.

You can link two VSM controllers to create a clustered VSM. The VSM controllers make a copy of each tape write, similar to asynchronous disk replication.
The 16 Gb/s, 8 Gb/s, 4 Gb/s, and 2 Gb/s transceivers are known as SFP+ or SFP and use LC-style connectors. See Figure 24 (page 130).

Figure 24 LC SFP/SFP+ transceiver

The 1 Gb/s transceivers can be LC SFPs, or GBICs, which use SC-style connectors. See Figure 24 (page 130) and Figure 25 (page 130).
Long-wave transceivers The maximum supported distance for long-wave transceivers depends on the transceiver model. Some long-wave transceivers transmit up to 100 km. Table 82 (page 131) lists supported long-wave transceiver distances and maximum supported distances under ideal operating conditions for FICON SAN extension.
Gb/s, and 1 Gb/s. When planning for SAN extension, BB_credits are an important consideration in WDM network configurations. Typical WDM implementations for storage replication include a primary and secondary path. You must have enough BB_credits to cover the distances for both the primary path and secondary path so that performance is not affected if the primary path fails.
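A common way to reason about BB_credit counts (a rule of thumb for this sketch, not a vendor formula): at 2 Gb/s, a full frame with a 2,112-byte payload occupies roughly 1 km of fiber, so a link needs about one credit per kilometer to stay full, scaled up for higher speeds and for smaller average frames. A sketch under those assumptions:

```python
import math

def bb_credits_needed(distance_km: float, speed_gbps: float,
                      avg_payload_bytes: float = 2112) -> int:
    """Rough BB_credit estimate to keep a long-distance ISL streaming."""
    per_km = speed_gbps / 2            # ~1 credit/km at 2 Gb/s, 2/km at 4 Gb/s
    scale = 2112 / avg_payload_bytes   # smaller frames consume more credits
    return math.ceil(distance_km * per_km * scale) + 1   # +1 for margin

print(bb_credits_needed(100, 2))                          # ~101 credits
print(bb_credits_needed(100, 2, avg_payload_bytes=1024))  # ~208 credits
```

The second example shows why the note later in this section warns that mainframe workloads with small frames need more credits than the full-frame minimums: halving the payload roughly doubles the credits needed. Remember to size for the longer of the primary and secondary WDM paths.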
Table 83 WDM system architectures System architecture1 Passive (optical transmission protocol) Description • Transparent to transmission protocol and data-rate independent • Establishes open interfaces that provide flexibility to use Fibre Channel, SONET/SDH, ATM, Frame Relay, and other protocols over the same fiber • Passes the optical signal without any form of signal conditioning such as amplification or attenuation Active signal amplification • Includes line amplifiers and attenuators that connect to
are networked using a variety of wavelength-specific multiplexers/demultiplexers or OADMs that support ring or point-to-point topologies.
The configuration in Figure 27 (page 135) costs more, but is fully redundant.

Figure 27 Fully redundant WDM configuration using two long-distance fiber optic links

The figure shows fabric A and fabric B, each with its own WDM connection between the local and remote sites.

HP recommends that you use multiple fiber links between the two FICON directors in a FICON extension or FICON SAN extension. This provides redundancy in the event of a fiber link or WDM component failure.
NOTE: The minimum number of BB_credits listed in Table 84 (page 135) is based on transmitting full FC frames with a data payload of 2,112 bytes. Many mainframe applications do not use full FC frames, and therefore, require more BB_credits than are listed to maintain performance. For more information, contact an HP storage representative.

B-series directors
Table 85 (page 136) lists the B-series FICON director extended fabric settings for maintaining performance with extended fabric ISLs.
Table 86 C-series FICON director extended fabric settings

Extended fabric item         Setting
Supported ISL port speeds    2 Gb/s, 4 Gb/s, 8 Gb/s, and 10 Gb/s
Maximum number of hops       1 hop
Maximum ISL distance         300 km for DWDM (2 Gb/s, 4 Gb/s, and 10 Gb/s); 100 km for CWDM (2 Gb/s); 40 km for CWDM (4 Gb/s)
BB_credits                   C-series FICON directors support up to 4,095 BB_credits

Consider the following when using C-series FICON directors with extended fabric ISLs:
• The MDS 9216i and 14/2 Multi-protocol Services Module
For more information about using FCIP products for FICON extension and FICON SAN extension, see “FICON extension overview” (page 120) and “FICON SAN extension overview” (page 121). NOTE: You must use the same gateway model or family at both ends to ensure interoperability. For more information, see “FCIP gateway interoperability”. B-series, C-series, and McDATA FICON directors connect to an FCIP gateway through an E_Port.
NOTE: Do not use the shared-link configuration if you require high availability because it does not provide redundancy between fabrics. It may also decrease performance because the total bandwidth available for storage is shared by the two fabrics. FCIP network considerations How you implement FCIP on your network depends on the expected application data load and network traffic.
FCIP gateways support: • Ethernet connections of 10 to 100 Mb/s, and 1 Gb/s. Select the network connection that matches the amount of data to be transferred and the time allowed for that transfer. • FICON at 1 Gb/s, 2 Gb/s, 4 Gb/s, and 8 Gb/s. Not all FCIP gateways support all FICON speeds. FCIP bandwidth considerations When sites are located several kilometers apart, there can be unacceptable delays in the completion of an I/O transaction. Increasing the available bandwidth does not solve this problem.
◦ Allow additional bandwidth for growth.
◦ If you need to resynchronize volumes due to a disaster, consider how long resynchronization will take over the allocated bandwidth. To reduce the resynchronization time, allocate additional bandwidth (see the sizing sketch after this list).
• Divide your data into tiers, with the most critical data in the highest tier.
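To make the sizing concrete, here is an illustrative calculation; the 70% utilization ceiling and 25% growth allowance are planning assumptions for this sketch, not HP requirements. Feed it the changed-data volume you measured and the window in which it must replicate.

```python
def required_circuit_mbps(changed_gb: float, window_hours: float,
                          max_utilization: float = 0.70,
                          growth_factor: float = 1.25) -> float:
    """Circuit bandwidth (Mb/s) to move changed_gb within the window."""
    payload_mbps = changed_gb * 8_000 / (window_hours * 3_600)  # GB -> megabits
    return payload_mbps * growth_factor / max_utilization

# 500 GB of changed data replicated across a 24-hour day:
print(f"{required_circuit_mbps(500, 24):.0f} Mb/s")   # ~83 Mb/s
```

Rerun the same calculation with the full volume set and your post-disaster resynchronization window to see whether the circuit can also handle a resync in acceptable time.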
Determining bandwidth requirements To determine bandwidth requirements in a mainframe environment, use one of the following: • Vision tool (formerly RMF Magic)—Determines the amount of data that must be replicated between sites. You can use the results to determine the bandwidth required to support a specific application. For more information about Vision, contact an HP storage representative. • Amount of new or changed data—Use the following procedure. To measure the amount of new or changed data: 1.
FICON and FICON SAN extension bandwidth requirements To prevent FICON from timing out and taking devices offline, the system must present acknowledgments to the application in a timely manner. HP recommends the following minimum circuit bandwidth values: • 50 Mb/s (6.25 MB/s) of circuit bandwidth for each FICON port being extended • 62.5 Mb/s (7.8 MB/s) of circuit bandwidth for each IBM VTC port being extended Table 89 (page 143) lists the HP FCIP gateways and supported FICON directors.
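Those per-port minimums translate directly into a floor for circuit sizing. A small sketch applying them; the port counts are hypothetical:

```python
FICON_PORT_MBPS = 50.0   # minimum per extended FICON port (6.25 MB/s)
VTC_PORT_MBPS = 62.5     # minimum per extended IBM VTC port (7.8 MB/s)

def min_extension_mbps(ficon_ports: int, vtc_ports: int = 0) -> float:
    """Minimum circuit bandwidth (Mb/s) for a FICON/VTC extension."""
    return ficon_ports * FICON_PORT_MBPS + vtc_ports * VTC_PORT_MBPS

print(min_extension_mbps(ficon_ports=4, vtc_ports=2))   # 325.0 Mb/s
```

Use the larger of this floor and the replication-driven figure from the changed-data calculation when provisioning the circuit.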
• Reduces application latency caused by tape read and write operations, XRC operations, and other control unit operations over long-distance FCIP ISLs • Can increase the supported distances for these applications from 300 km to several thousand km, depending on available bandwidth and application requirements Before you configure the Advanced FICON Accelerator: • Configure the fabric to support Cascaded FICON • Configure the FCIP interfaces and circuits • Disable the FCIP tunnels You can enable the
associated with a CHPID and CU pair of ports to always use the same FCIP tunnel. For more information on TI zones, see the Fabric OS Administrator’s Guide.

1606 Extension SAN Switch and DC Dir Switch MP Extension Blade features and requirements
Table 90 (page 145) lists the features and requirements for the 1606 Extension SAN Switch and DC Dir Switch MP Extension Blade.
1606 Extension SAN Switch and DC Dir Switch MP Extension Blade FCIP extension options

When B-series FICON directors (DC SAN Backbone Director, DC04 SAN Director, and 4/256 SAN Director) are used in conjunction with B-series FCIP extension products (1606 Extension SAN Switch and DC Dir Switch MP Extension Blade [FX8-24]), they are certified for FICON extension in the five configurations shown in Figure 31 (page 146).
Table 91 (page 147) lists the features and requirements for the B-series 400 MP Router and MP Router Blade.
B-series FICON director FCIP extension options

When B-series FICON directors (DC SAN Backbone Director, DC04 SAN Director, and 4/256 SAN Director) are used in conjunction with B-series FCIP products (FR4-18i MPR blade and 400 MPR), they are certified for FICON extension in the five configurations shown in Figure 32 (page 148).
C-series FTA and XRCA licenses

The FTA feature is included with the Mainframe Package license. The SAN Extension over FCIP Package license is a prerequisite to use the FTA feature. FTA is supported by the 14/2 Multiprotocol Services Module, the 18/4 Multiservice Module, and the 9222i Multilayer Fabric Switch. The 9222i includes the SAN Extension over IP license as part of its base configuration. PortChannels are not supported with the FTA feature.
FCIP gateway interoperability

The gateways on each end of the IP circuit must be compatible. See Table 93 (page 150).
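The same-model-or-family rule lends itself to a planning-time check. A minimal sketch; the family groupings reflect the B-series product pairings named in this chapter, and the inventory format is a hypothetical assumption. Always verify the pairing against Table 93.

    # Check that both ends of an FCIP circuit are the same gateway model
    # or family. Family groupings here mirror the B-series pairings named
    # in this chapter; extend the table from Table 93 for your inventory.

    GATEWAY_FAMILY = {
        "1606 Extension SAN Switch": "B-series extension",
        "DC Dir Switch MP Extension Blade (FX8-24)": "B-series extension",
        "400 MP Router": "B-series MP Router",
        "MP Router Blade (FR4-18i)": "B-series MP Router",
    }

    def compatible(gateway_a: str, gateway_b: str) -> bool:
        fam_a = GATEWAY_FAMILY.get(gateway_a)
        return fam_a is not None and fam_a == GATEWAY_FAMILY.get(gateway_b)

    print(compatible("1606 Extension SAN Switch",
                     "DC Dir Switch MP Extension Blade (FX8-24)"))   # True
    print(compatible("400 MP Router", "1606 Extension SAN Switch"))  # False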
NOTE: HP has not certified any Fibre Channel over SONET/SDH products for FICON use.

If you have a SONET or SDH network, consider the following options:

Option 1: Use FCIP gateways with the GbE port connected to your SONET or SDH multiplexer. This provides a dedicated IP circuit over your SONET or SDH network. HP allows FICON long-distance extension over SONET or SDH networks with an HP FICON SAN if the multiplexer is certified for FICON by the FICON director vendor.
HP storage replication products for the mainframe environment

HP P9000/XP Continuous Access software is a controller-to-controller replication method that supports mainframe and open systems applications. For information about HP P9000/XP Continuous Access software and support of replication over long distances, including firmware specifications, see the HP SAN Design Reference Guide.

Certified third-party WDM and FCIP products

This section provides information about HP-certified third-party products.
Part V Best practices and resource information The following chapters describe FICON SAN best practices and provide resource information: • “FICON SAN best practices” (page 154) • “Support and other resources” (page 159) • “Documentation feedback” (page 163)
12 FICON SAN best practices

This chapter describes the HP best practices for FICON SAN design and implementation.

FICON SAN design

When planning a FICON SAN, it is important to eliminate all single points of failure (SPOFs) to ensure continuous data availability. A properly designed FICON SAN provides multiple paths from the mainframe to the storage system. In the event of a component failure, the application can use an alternate path to access data.
• Connect ports on a channel card to different fabrics.
• Connect channel ports and storage ports to multiple interface cards on the director. This ensures that an interface card failure on the director has minimal impact on data availability.
• Use multiple paths. Ensure that each LPAR on the mainframe and each LCU in the storage system has at least two paths to the FICON SAN fabric (see the sketch following this list).
• Use multiple ISLs between directors. Configure at least two ISLs between a pair of directors.
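The multiple-paths rule can be checked mechanically from a path inventory before cabling begins. A minimal sketch; the (LPAR, LCU, fabric) tuple format is a hypothetical assumption:

    from collections import defaultdict

    def find_spofs(paths):
        # paths: iterable of (lpar, lcu, fabric) tuples.
        # Flag any LPAR/LCU pair reachable through fewer than two fabrics.
        fabrics = defaultdict(set)
        for lpar, lcu, fabric in paths:
            fabrics[(lpar, lcu)].add(fabric)
        return [pair for pair, fabs in fabrics.items() if len(fabs) < 2]

    inventory = [
        ("LPAR1", "LCU00", "FabricA"),
        ("LPAR1", "LCU00", "FabricB"),
        ("LPAR2", "LCU01", "FabricA"),   # single fabric: a SPOF
    ]
    print(find_spofs(inventory))         # [('LPAR2', 'LCU01')]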
For director-specific information, see:
◦ “B-series FICON directors and fabric rules” (page 29)
◦ “C-series FICON directors and fabric rules” (page 46)
◦ “McDATA FICON directors and fabric rules” (page 63)

FICON and Fibre Channel intermix guidelines

Use care when designing and implementing an intermix SAN. Released microcode levels for Fibre Channel SAN components may be different from the released levels for FICON SAN components.
Label each end of the cable to identify the connection points of the other end. Use terms such as To and From (a sketch for generating such labels follows this list).
• Unused ports: Use port protectors to protect unused ports on all equipment (directors, storage devices, and mainframe). Do not leave any ports exposed.
• Cable management: Plan the installation to minimize the number of cables that must be disconnected or moved when replacing components in the directors.
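Label text can be generated from the same records kept for change control, so both ends always agree. A minimal sketch of the To/From convention; the endpoint names are hypothetical:

    def cable_labels(end_a: str, end_b: str):
        # One label per cable end, each naming both connection points.
        return (f"From: {end_a}  To: {end_b}",
                f"From: {end_b}  To: {end_a}")

    label_a, label_b = cable_labels("DIR1 slot 2 port 7", "ARRAY1 CHA port 0")
    print(label_a)   # From: DIR1 slot 2 port 7  To: ARRAY1 CHA port 0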
To add directors to an existing fabric, ensure that the new configuration complies with the current fabric rules.
13 Support and other resources

Contacting HP

HP technical support

For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Related information

The following documents and websites provide related information:
• SAN design (Fibre Channel and iSCSI): For the latest version of the HP SAN Design Reference Guide, see the HP SAN Manuals website: http://www.hp.com/go/SDGManuals
• B-series directors and MP Router: For the latest information on B-series directors and firmware versions, see the HP Storage Networking website: http://h18006.www1.hp.com/storage/saninfrastructure.
Typographic conventions

Table 94 Document conventions

Convention                                               Uses
Blue text: Table 94 (page 161)                           Cross-reference links
Blue, underlined text: http://www.hp.com                 Website addresses
Blue, underlined, bold text: storagedocsfeedback@hp.com  Email addresses
Customer self repair

HP CSR programs allow you to repair your HP product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider. For North America, see the CSR website:
http://www.hp.com/go/selfrepair

This product has no customer-replaceable components.
14 Documentation feedback

HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
Glossary

This glossary defines acronyms and terms used in this guide. It is not a comprehensive glossary of computer terms. Table 95 (page 164) provides a list of Fibre Channel terms and the equivalent FICON terms.
BB_credits
Buffer-to-buffer credits.

C

C-series
FICON directors manufactured for HP by Cisco Systems, Inc.

cascaded switch
The destination director attached to the CU.

CCW
Channel command word. A device command that can be linked with other CCWs to form a channel program. This is the mainframe equivalent of a Fibre Channel SCSI command.

CEE
Converged enhanced Ethernet.

CF
Coupling facility.
device address
A unique number assigned to each device in the LCU. Also known as the unit number or unit address. When combined with the LCU number, it provides a unique identifier for a volume in the storage system.

device number
A unique number assigned to an I/O device, logical device, or volume and used by the system operator. Also known as the UCB.

director
The FICON equivalent of a Fibre Channel switch.
FC-to-SONET
Fibre Channel to Synchronous Optical Network. A gateway that resides at the end of an intersite link and encapsulates Fibre Channel frames into SONET packets before transmitting the frames over the network.

FCC
Fibre Channel Congestion Control. A feature that allows C-series switches to intelligently regulate traffic across ISLs and ensure that each initiator-target pair of devices has the required bandwidth for data transfer.

FCIP
Fibre Channel over Internet Protocol.
HP P9000 Continuous Access
An HP product consisting of two or more P9500 disk arrays performing disk-to-disk replication, along with the management user interface that facilitates configuring, monitoring, and maintaining the replication capabilities of the storage systems.
MAN
Metropolitan area network. A communications network that covers a geographic area such as a town, city, or suburb.

McDATA
FICON directors manufactured for HP by McDATA Corporation.

MCP
Mode Conditioning Patch.

MFL
A 4-Gb/s FICON channel card with long wave SFPs.

MFS
A 4-Gb/s FICON channel card with short wave SFPs.

MIDAW
Modified indirect data address word. An alternative to using CCW data chaining in channel programs.

MIF
Multiple Image Facility.
R

RADIUS
Remote Authentication Dial-In User Service. A software tool used for data security.

RBAC
Role-based access control. A Fabric OS feature used to determine which commands are supported for each user.

RMF
Resource Measurement Facility. A software tool used for troubleshooting and statistical planning.

RPO
Recovery point objective.

RSCN
Registered state change notification.

RSPAN
Remote Switched Port Analyzer. See also SPAN.

RTO
Recovery time objective.

S

SAN
Storage area network.
Sysplex
System complex. Multiple system images, running on multiple CPCs or contained in a single mainframe chassis, that share I/O devices and are connected through the cross-system coupling facility (XCF). See also Parallel Sysplex.

Sysplex timer
A central clock service that provides common time references to all CPCs in a Sysplex, Parallel Sysplex, or GDPS configuration.

system generation
The tools and process used to configure the mainframe.

T

TCP
Transmission Control Protocol.
Z

zone
A collection of devices or user ports that communicate with each other through a fabric. Any two devices or user ports that are not members of at least one common zone cannot communicate through the fabric.
Index Symbols 1-byte destination link addressing, 24 1606 Extension SAN Switch, 143 2-byte destination link addressing, 25 A active protocol handling, 133 active signal amplification, 133 Advanced FICON Accelerator, 37, 41, 143 ATM network connecting to, 151 B B-series 1606 Extension SAN Switch, 143 blades, 32 DC Dir Switch MP Extension Blade, 143 B-series directors database size, 42 edge, 30 extended fabric settings, 136 Fabric Management versions, 31 fabric rules, 39 features, 36 firmware update, 30 firm
connectivity, ports, 13 control unit address (CUADD), 107, 116 emulating IBM, 113 number (CUNUMBER), 107, 116 port (CUP), 155 control unit port see CUP conventions document, 161 converter mode CHPID, 87 CSR, 162 CTC communication, 129 CUP, 12 domain ID, 13 customer self repair, 162 CWDM, 10, 133 D DASD, 124 installing with tape drives, 155 data access, in FICON SAN fabric topologies, 19 data availability design considerations, 23 factors affecting, 19 FICON SAN, 19 level 1, 20 level 2, 21 level 3, 22 level
Fibre Channel Internet Protocol see FCIP Fibre Channel ports types supported, 87 Fibre Channel Protocol mode CHPID, 87 FICON and FCP on the same ISL, 155 architecture, 11 channel cards, 86 defined, 8 director functions, 12 director port interfaces, 71 distance rules, 75 extension overview, 120 Fibre Channel switching, 143 long-distance extension, 123 long-distance technologies, 129 ports in an intermix environment, 155 switch, 12 technology overview, 10 zoning, 157 FICON directors B-series, 29 interoperabli
IBM z9/z10 processor FICON support rules, 89 IBM zSeries processor FICON support rules, 88 ICL, 38 in-order delivery, 13 infrastructure FICON SAN, 11 intermix, 9 intermix environment configuration recommendations, 155 FICON and FCP ports, 155 interoperability, 9 director, 12 interswitch link see ISL IOS, 115 ISL and PortChannels, 60 B-series maximum, 42 C-series maximum, 59 connections, 155 McDATA maximum, 68 minimum configuration, 156 recommended ratios, 18 L latency FICON over IP, 140 FICON SAN, 83 LCU d
addressing example, 117 PDCM, 157 peer-to-peer remote copy see PPRC performance guidelines, 84, 156 recommendations, 83 persistent domain IDs, 13 Port Dynamic Connectivity Mask/Matrix see PDCM port offset McDATA directors, 68 port protectors, 157 PortChannels ISLs, 60 ports protecting unused, 157 PPRC, 123 protocol conversion products, 10 protocol layers, 10 R recovery point objective see RPO recovery time objective see RTO redundant active components B-series directors, 38 redundant control processor B-se
customer self repair, 162 HP Subscriber's choice for business, 159 X XP FICON channel interface, 112 XP FICON SAN support, 110 XP storage system configuration example, 116 configuration rules, 116 emulating IBM, 114 FICON channel interface, 111 FICON SAN support, 110 Hyper PAV support, 115 MA support, 115 mainframe addressing of volumes, 114 XP storage systems, 110 configuration, 112 configuration rules, 110 emulating IBM disk arrays and control units, 112 GDPS support, 115 PAV support, 115 XRC, 123, 124 p