HyperFabric Configuration Guidelines
HP-UX 11i, 11i v1 and 11i v2
Manufacturing Part Number: B6257-90059
November 2006
© Copyright 2006 Hewlett-Packard Development Company, L.P.
Legal Notices

The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be held liable for errors contained herein or for direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material.
HyperFabric Configuration Guidelines

This document provides protocol-specific configuration guidelines for HyperFabric running TCP/UDP/IP and Hyper Messaging Protocol (HMP) applications.
Overview

HyperFabric is an HP high-speed, packet-based interconnect for node-to-node communications. HyperFabric provides higher speed and lower network latency, and uses less CPU, than industry-standard interconnects such as Fibre Channel and Gigabit Ethernet. Instead of using a traditional bus-based technology, HyperFabric is built around a switched fabric architecture, providing the bandwidth necessary for high-speed data transfer.
Standards-based interconnect technologies, including Gigabit Ethernet, 10 Gigabit Ethernet and InfiniBand, now provide the features and performance required by the Oracle 10g RAC database. To align with the market trend toward standards-based interconnects, the Oracle 10g RAC database is not currently supported on configurations that include the HyperFabric product suite, and it will not be supported on them in the future.
The following sections define HyperFabric functionality for TCP/UDP/IP applications and Hyper Messaging Protocol (HMP) applications. There are distinct differences in supported hardware, available features and performance, depending on which protocol is used by applications running on HyperFabric.
TCP/UDP/IP

TCP/UDP/IP is supported on HF2 hardware. Although some of the HyperFabric adapter cards support both HMP and TCP/UDP/IP applications, this section focuses on TCP/UDP/IP HyperFabric applications.

Application Availability

All applications that use the TCP/UDP/IP stack, including Oracle 9i and HP-MPI, are supported.
The Event Monitoring Service (EMS) monitor covers the entire HyperFabric subsystem. The monitor can inform the user whether the resource being monitored is UP or DOWN. The administrator defines the condition that triggers a notification (usually a change in interface status). Notification can be accomplished with one of the following:

— A Simple Network Management Protocol (SNMP) trap
— Logging to a user-specified log file with a choice of severity
— Email to a user-defined email address
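To make the notification paths above concrete, here is a minimal Python sketch of the trigger-and-notify flow. It is illustrative only: EMS monitors are configured with HP's own tools, and the resource name (clic0) and channel labels below are assumptions, not EMS interfaces.

    # Illustrative sketch only: EMS monitors are configured with HP's tools, not
    # Python. This models the trigger (a change in interface status) and the
    # three notification choices described above. Names such as "clic0" are
    # assumptions made for this example.
    from enum import Enum

    class Status(Enum):
        UP = "UP"
        DOWN = "DOWN"

    def notify(resource: str, old: Status, new: Status, channel: str = "log") -> str:
        """Return the notification text produced when the monitored status changes."""
        if old is new:
            return ""  # the administrator-defined trigger is a status change
        message = f"HyperFabric resource {resource} is now {new.value} (was {old.value})"
        if channel == "snmp":
            return f"SNMP trap: {message}"
        if channel == "email":
            return f"email to admin: {message}"
        return f"log[MAJOR]: {message}"  # user-specified log file with a severity

    print(notify("clic0", Status.UP, Status.DOWN, channel="snmp"))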
If any HyperFabric resource in a cluster fails (an adapter card, cable, or switch port), the HyperFabric driver transparently routes traffic over other available HyperFabric resources with no disruption of service. The ability of the HyperFabric driver to transparently fail over traffic reduces the complexity of configuring highly available clusters with ServiceGuard, because ServiceGuard has to take care of node and service failover only.
DRU also enables the HyperFabric driver to stop using a resource when that resource is removed from a cluster. The difference between DRU and OLAR is that OLAR applies only to the addition or replacement of adapter cards in nodes.

• Load Balancing: Supported
When an HP 9000 HyperFabric cluster is running TCP/UDP/IP applications, the HyperFabric driver balances the load across all available resources in the cluster, including nodes, adapter cards, links, and multiple links between switches.
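One way to picture this load balancing is simple round-robin selection over whichever links are currently usable, as in the minimal Python sketch below. This is a conceptual illustration under that assumption, not the HyperFabric driver's actual algorithm, and the link names are hypothetical.

    # Conceptual sketch, not the driver's algorithm: picture the driver spreading
    # outgoing traffic across every currently usable path in round-robin order.
    from itertools import cycle

    available_links = [
        "clic0 -> switch0",
        "clic0 -> switch1",
        "clic1 -> switch0",
        "clic1 -> switch1",
    ]
    next_link = cycle(available_links)  # rebuild this cycle when a link fails or recovers

    for packet_id in range(6):
        print(f"packet {packet_id} sent over {next(next_link)}")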
The complexity and performance limitations of connecting a large number of nodes point-to-point make it necessary to include switching in the fabric. Typically, point-to-point configurations consist of only two or three nodes. In switched configurations, HyperFabric supports a maximum of 64 interconnected adapter cards. A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system.
TCP/UDP/IP supports up to four hybrid HF1/HF2 switches connected in series, with a maximum cable length of 60 ft between copper ports and 200 m between fiber ports.

Table 1  HF1 Throughput and Latency with TCP/UDP/IP Applications (Server Class: rp7400)
Table 2  Throughput and Latency
Table 3  HF1 and HF2 Hardware that Supports TCP/UDP/IP Applications

Adapter Part Number  Bus Type   HP 9000 Server / Workstation             HP-UX Release               OLAR          Max. Adapters per Instance of HP-UX OS
A4919A (HF1)                    N4000 (*)                                11.0, 11i v1 (64-bit only)  YES (11i v1)  8
                                V-class (@)                              11.0, 11i v1 (64-bit only)  NO            7 (Max. 1 per Epic [total 2 per IO cage])
                     HSC        K-class                                  10.20 (@@), 11.0            NO            2
                     EISA/HSC   D, R-class                               10.20 (@@), 11.0            NO            2
                     PCI 4X     A400, A500, B1000, B2000, B2600, C3x,    11.
                                J5x, J6x, J7x
Table 3 (continued)  HF1 and HF2 Hardware that Supports TCP/UDP/IP Applications

Adapter Part Number  Bus Type  HP 9000 Server / Workstation             HP-UX Release  OLAR  Max. Adapters per Instance of HP-UX OS
A6386A (HF2)         PCI 4X    Superdome (**)                           11i v1         YES   8 (Max. 4 per PCI card cage)
                               V-class (@)                              11.0, 11i v1   NO    7 (Max. 1 per Epic [total 2 per IO cage])
                               A400, A500, B1000, B2000, B2600, C3x,    11.0, 11i v1   NO    2
                               J5x, J6x, J7x
                               rp54xx (L-class)                         11.
Table 3 (continued)  HF1 and HF2 Hardware that Supports TCP/UDP/IP Applications

Adapter Part Number  Bus Type  HP 9000 Server / Workstation  HP-UX Release  OLAR  Max. Adapters per Instance of HP-UX OS
                               rx56XX servers                11i v2         No    4
                               zx6000 workstations           11i v2         No    1
                               SD64A servers                 11i v2         Yes   8 (maximum 4 per PCI card cage)
                               rx7620 servers                11i v2         No    8 (maximum 4 per PCI card cage)
                               rx8620 servers                11i v2         Yes   8 (maximum 4 per PCI card cage)
                               rx4640 servers                1
system in the Superdome chassis. To use the copper cable in a Superdome, you would most likely have to remove some parts of the Superdome cabinet.
(@) V-Class systems have been obsolete since September 2000.
(@@) HP-UX 10.20 has been obsolete since July 2003.

Table 4  Patches Needed to Run TCP/UDP/IP Applications (HF1 and HF2)

OS      Patch/AR            Version  Remarks
11i v1  PHNE_27745, AR0601  B.11.    Oracle 9.01 & 9.02 patch, certified for Oracle RAC 9.01 & 9.02
11.0
TCP/UDP/IP Supported Configurations

Multiple TCP/UDP/IP HyperFabric configurations are supported to match the cost, scaling and performance requirements of each installation. The section “Configuration Parameters” on page 10 outlines the maximum limits for TCP/UDP/IP-enabled HyperFabric hardware configurations. This section explains the TCP/UDP/IP-enabled HyperFabric configurations that HP supports.
Figure 1  TCP/UDP/IP Point-To-Point Configurations
Switched

This configuration offers the same benefits as the point-to-point configurations illustrated in Figure 1, but it has the added advantage of greater connectivity (see Figure 2).
High Availability Switched

This configuration has no single point of failure. The HyperFabric driver provides end-to-end HA. If any HyperFabric resource in the cluster fails, traffic is transparently rerouted through other available resources. This configuration provides high performance and high availability (see Figure 3).
Hybrid Configuration

You can interconnect servers and workstations in a single heterogeneous HyperFabric cluster. In this configuration, the servers are highly available. In addition, the workstations and the servers can be running the same application or different applications (see Figure 4).
Mixed HF1 / HF2 (Copper & Fiber)

You can interconnect all the currently available HyperFabric products in a single HyperFabric cluster. The HF1 and HF2 products are interoperable, enabling user-controlled migration from copper-based to fiber-based technologies (see Figure 5).
Hyper Messaging Protocol (HMP)

Hyper Messaging Protocol (HMP) is an HP-patented, high-performance cluster interconnect protocol. HMP provides a reliable, high-speed, low-latency, low-CPU-overhead datagram service to applications running on HP-UX operating systems. HMP was jointly developed with Oracle Corp. The resulting feature set was tuned to enhance the scalability of the Oracle Cache Fusion clustering technology.
HP-MPI is a native implementation of version 1.2 of the Message-Passing Interface Standard. It has become the industry standard for distributed technical applications and is supported on most technical computing platforms. HMP is certified with HP-MPI 1.7 on HP-UX 11.0, 11i v1, and 11i v2. HMP is also certified with HP-MPI 1.7.2, which uses HMP as the default interconnect.
Within a cluster, ServiceGuard groups application services (individual HP-UX processes) into packages. In the event of a single failure (of a service, node, or network), EMS provides notification and ServiceGuard transfers control of the package to another node in the cluster, allowing services to remain available with minimal interruption.
When HMP is configured in the local failover mode, all the resources in the cluster are utilized. If a resource fails in the cluster and is restored, HMP does not utilize that resource until another resource fails.
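A minimal Python sketch of this policy follows, assuming one active and one standby adapter per card pair. The class and adapter names are illustrative and are not part of the HMP interface; the point is only that a restored adapter stays idle until the adapter now carrying the traffic fails in turn.

    # Sketch of the local failover policy described above: once traffic has moved
    # off a failed adapter, that adapter is not reused after repair until the
    # adapter now carrying the traffic fails in turn. Names are illustrative.
    class CardPair:
        def __init__(self, active: str, standby: str):
            self.active, self.standby = active, standby

        def fail_active(self) -> str:
            """Swap roles: the standby adapter takes over all traffic."""
            self.active, self.standby = self.standby, self.active
            return self.active

    pair = CardPair("clic0", "clic1")
    pair.fail_active()      # clic0 fails; clic1 now carries the traffic
    print(pair.active)      # clic1 -- still used even after clic0 is restored
    pair.fail_active()      # only when clic1 fails does traffic return to clic0
    print(pair.active)      # clic0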
For more detailed information on HyperFabric diagnostics, see Installing and Administering HyperFabric (Part Number B6257-90030, Edition E0601), HyperFabric Administrator’s Guide (Part Number B6257-90039), and HyperFabric Administrator’s Guide (Part Number B6257-90042).

Configuration Parameters

This section discusses the maximum limits for HMP HyperFabric configurations.
A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system. The actual number of adapter cards a particular node is able to accommodate also depends on slot availability and system resources. See node specific documentation for details. A maximum of 8 configured IP addresses are supported by the HyperFabric subsystem per instance of the HP-UX operating system.
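A quick sanity check of these per-instance limits could look like the following Python sketch; the function name and adapter names are hypothetical, and the limits are the values quoted above.

    # Illustrative check of the per-instance limits quoted above: at most 8
    # HyperFabric adapters and 8 configured IP addresses per instance of HP-UX.
    MAX_ADAPTERS_PER_OS_INSTANCE = 8
    MAX_IPS_PER_OS_INSTANCE = 8

    def check_limits(adapters, ip_addresses):
        """Return a list of limit violations (empty if the plan is within limits)."""
        problems = []
        if len(adapters) > MAX_ADAPTERS_PER_OS_INSTANCE:
            problems.append(f"{len(adapters)} adapters exceeds the limit of "
                            f"{MAX_ADAPTERS_PER_OS_INSTANCE} per HP-UX instance")
        if len(ip_addresses) > MAX_IPS_PER_OS_INSTANCE:
            problems.append(f"{len(ip_addresses)} IP addresses exceeds the limit of "
                            f"{MAX_IPS_PER_OS_INSTANCE} per HP-UX instance")
        return problems

    print(check_limits([f"clic{i}" for i in range(9)], ["192.0.0.2"]))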
• HMP is supported on workstations running 64-bit HP-UX when patch number PHNE_25485 or above is installed.
• HMP is supported on HyperFabric starting with HyperFabric versions B.11.00.11, B.11.11.01, and B.11.23.00.
• HMP is not supported on A180 or A180C servers.
• HMP is not supported on 32-bit versions of HP-UX.
Table 7  HF1 and HF2 Hardware that Supports HMP Applications

Adapter Part Number  Bus Type  HP 9000 Server / Workstation                  HP-UX Release  OLAR  Max. Adapters per Instance of HP-UX
A6386A (HF2)         PCI 4X    rp74xx (N-class)                              11.0, 11i v1   NO    8
                               rp8400                                        11i v1         NO    8
                               Superdome (*)                                 11i v1         NO    8 (Max. 4 per PCI card cage)
                               rp2400 (A), rp2450 (A), B1000, B2000, B2600,  11.0, 11i v1   NO    2
                               C3x, J5x, J6x, J7x
                               rp54xx (L-class)                              11.
NOTE: The local failover configuration on HMP is supported only on the A6386A HF2 adapters.
Table 9  Patches Needed to Run HMP Applications (HF1 and HF2)

OS      Patch/AR            Version                                  Remarks
11i v1  PHNE_27745, AR0601  B.11.11.01                               Oracle 9.01 & 9.02 patch, certified for Oracle RAC 9.01 & 9.02
11.0    PHNE_26551, AR0601  AR0601 (B.11.11.01) or AR0902 (B.11.11.
HMP Supported Configurations

Multiple HMP HyperFabric configurations are supported to match the performance, cost and scaling requirements of each installation. The section “Configuration Parameters” on page 27 outlines the maximum limits for HMP-enabled HyperFabric hardware configurations. This section details the HMP-enabled HyperFabric configurations that HP supports.
Figure 6  HMP Point-To-Point Configurations
Enterprise (Database)

The HMP enterprise configuration illustrated in Figure 7 is very popular for running Oracle RAC 9i. Superdomes or other large servers make up the Database Tier. Database Tier nodes communicate with each other using HMP. Clients communicate with the Application Tier using TCP/UDP/IP.
Enterprise (Database) - Local Failover Supported Configuration

The HMP enterprise configuration is a scalable solution. For high availability and performance, you can easily scale the HMP enterprise configuration with multiple connections between the HyperFabric resources. Any single point of failure in the database tier of the fabric is eliminated in the Local Failover Supported Enterprise Configuration (see Figure 8).
A Card-Pair is a logical entity comprising a pair of HF2 adapters on an HP 9000 node. For example, if there are four HF2 adapters installed and configured in a node, then there are two card pairs (this grouping is illustrated in the sketch after the following list).

IMPORTANT: Remember the following points while configuring HMP for Local Failover support:

• Only Oracle applications can make use of the local failover feature. Other middleware, such as MPI, can continue using HMP without local failover support.
• Before running clic_start on all the nodes in the cluster, ensure that all the configured cards are connected in the cluster. In other words, before running the clic_start command, verify that all the cables are connected to the adapters and switches.
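The card-pair grouping referred to in this list can be pictured with the short Python sketch below. It assumes the simple rule that adjacent configured adapters form a pair, which matches the clic0/clic1 and clic2/clic3 example used later in this section; the function name is hypothetical.

    # Illustrative sketch of the card-pair grouping: a card pair is simply two
    # HF2 adapters on the same node, so four configured adapters yield two pairs.
    def card_pairs(adapters):
        """Group adapters into card pairs; local failover needs an even count."""
        if len(adapters) % 2:
            raise ValueError("an even number of HF2 adapters is required for card pairs")
        return [(adapters[i], adapters[i + 1]) for i in range(0, len(adapters), 2)]

    print(card_pairs(["clic0", "clic1", "clic2", "clic3"]))
    # [('clic0', 'clic1'), ('clic2', 'clic3')]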
How Local Failover Works

Consider a hypothetical HyperFabric configuration in a 4-node cluster, with each node having two adapters (see Figure 9). In this configuration, there is no single point of failure, and all adapters that are installed on any given node are configured as part of a card pair.
Case 1: Adapter, Link, or Switch Port Failure

If an adapter, a link, or a switch port fails, HMP transparently fails over traffic through the other available link.
Case 2: Switch Failure (see Figure 11)

Consider a scenario in which node A is connected to node D, with traffic routed through HF adapter 1 on both nodes (A and D), and HF switch 1 fails. HMP transparently fails over traffic through the other available switch (HF switch 0).
Case 3: Cable Failure Between Two Switches (see Figure 12)

If a cable between two switches fails, HMP traffic fails over to the other available cable between those two switches.
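The three cases above can be summarized in one conceptual Python sketch: traffic follows the first adapter in the card pair whose whole path (adapter, cable, and switch) is still usable. This is an illustration of the described behavior, not the HMP implementation, and all names in it are hypothetical.

    # Conceptual sketch of Cases 1-3: traffic follows the first adapter in the
    # card pair whose whole path (adapter, cable, and switch) is still usable.
    def pick_path(card_pair, adapter_ok, switch_of, switch_ok):
        """Return the adapter to use, or raise if neither path is available."""
        for adapter in card_pair:
            if adapter_ok[adapter] and switch_ok[switch_of[adapter]]:
                return adapter
        raise RuntimeError("no HyperFabric path available on this node")

    switch_of = {"clic0": "switch0", "clic1": "switch1"}
    adapter_ok = {"clic0": True, "clic1": True}
    switch_ok = {"switch0": True, "switch1": False}   # Case 2: switch 1 has failed

    print(pick_path(("clic1", "clic0"), adapter_ok, switch_of, switch_ok))
    # clic0 -- traffic transparently moves to the adapter attached to switch 0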
Configuring HMP for Local Failover Support

This section describes how to use SAM to configure HMP for Local Failover Support.

IMPORTANT: Currently, you must use only SAM to configure HMP for Local Failover Support.

To use SAM to configure HMP for Local Failover Support, follow these steps:

Step 1. Start SAM.
Step 2. Select the “Networking and Communications” area.
Step 3. Select “HyperFabric.”
If any of the above conditions is true, SAM displays an appropriate error message. Otherwise, the /etc/rc.config.d/clic_global_conf file is updated with information about the configured card pairs. If Card-Pair 0 comprises adapters clic0 and clic1, and Card-Pair 1 comprises adapters clic2 and clic3, then the following entries are added to the clic_global_conf file.
Step 5. Exit SAM.
Configuring HMP for Transparent Local Failover Support Using the clic_init Command

You can also configure the Transparent Local Failover feature of HMP using clic_init. Consider the following example, which discusses the configuration in detail. This example assigns “dummy” (that is, not valid) addresses to the components in Figure 13.
These addresses are used only to show the flow of the information provided as input to the clic_init command and SAM. Do not try to use these addresses in your configuration.

Figure 13  Configuring the Transparent Local Failover Feature

Using the configuration information in Figure 13, the information you would specify when you run clic_init on each of the nodes is listed below.
1. How many HyperFabric adapters are installed on the node?

2. Do you want this node to interoperate with nodes running any HyperFabric versions earlier than B.11.00.11 or B.11.11.01? (n)
   You must answer ‘no’ if you want to run applications using HMP (Local Failover or Non-Local Failover) for communication over HyperFabric.
   In that case, all nodes in the cluster must be running HyperFabric software version B.11.00.11, B.11.11.01, or later.

3. What is the IP address of the first adapter (clic0)? (192.0.0.2)

4. What is the subnet mask of the first adapter? (255.255.255.0)
   If you do not specify a value, a default mask is chosen. You will most likely just accept the default.
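As a side note, the example address and mask can be checked with Python's standard ipaddress module, as shown below. The values 192.0.0.2 and 255.255.255.0 are the dummy values from this example and must not be used in a real configuration.

    # Illustrative check of the example clic_init answers above, using Python's
    # standard ipaddress module. 192.0.0.2 / 255.255.255.0 are the "dummy"
    # example values from the text; do not use them in a real configuration.
    import ipaddress

    iface = ipaddress.ip_interface("192.0.0.2/255.255.255.0")
    print(iface.ip)        # 192.0.0.2
    print(iface.netmask)   # 255.255.255.0
    print(iface.network)   # 192.0.0.0/24 -- the subnet this adapter belongs to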
Configuring HMP Local Failover for Oracle

Each Oracle process (using HMP) allocates resources such as NQs and EPTs (memory and receive endpoints). The values of these parameters are read by the HMP subsystem when the first process of an application registers with HMP, and these values are used to statically size HMP’s internal data structures.
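The sized-once-at-first-registration behavior can be sketched as follows. The class and the nq_count/ept_count parameter names are illustrative, not actual HMP tunables; the sketch only shows that later registrations cannot change the sizing chosen when the first process registered.

    # Sketch of the static-sizing behaviour described above: the values supplied
    # by the first registering process fix the sizes of the internal structures,
    # and later registrations cannot change them. Names are illustrative.
    class HmpSubsystem:
        def __init__(self):
            self.sizes = None                      # not sized until first registration

        def register(self, nq_count: int, ept_count: int) -> dict:
            if self.sizes is None:
                self.sizes = {"NQs": nq_count, "EPTs": ept_count}
            return self.sizes                      # subsequent callers get the same sizing

    hmp = HmpSubsystem()
    print(hmp.register(nq_count=64, ept_count=128))   # {'NQs': 64, 'EPTs': 128}
    print(hmp.register(nq_count=8, ept_count=16))     # unchanged: sizing already fixed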
Technical Computing (Workstations)

This configuration is typically used to run technical computing applications with HP-MPI. A large number of small nodes are interconnected to achieve high throughput (see Figure 14). High availability is not usually a requirement in technical computing environments. HMP provides the high-performance, low-latency path necessary for these technical computing applications.
Figure 14  Technical Computing Configuration
Figure 15  Large Technical Computing Configuration