HP X9000 File Serving Software 5.6 Installation Guide Abstract This document describes how to install the X9000 File Serving Software. It is intended for HP Services personnel who configure X9000 series Network Storage systems at customer sites. For upgrade information, see the administration guide for your system.
© Copyright 2009, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Configuring the management console and file serving nodes on X9300/X9320 systems .... 6
    Installation checklist .... 6
    Getting started .... 6
    Configuring the management console ....
    NFS client implementation tuning .... 47
    Configuring CIFS shares (optional) .... 47
    Configuring other X9000 Software features .... 48
5 Adding Linux and Windows X9000 clients .... 49
    The Linux X9000 client ....
    IP addressing schemas .... 74
    External switch port settings .... 75
        Cabling example including switch redundancy .... 75
        Cabling examples including LACP link aggregation ....
1 Configuring the management console and file serving nodes on X9300/X9320 systems

Installation checklist

Step 1. Set up the HP 2000 G2 Modular Smart Array or P2000 G3 MSA System (X9320 systems). More information: the array documentation is on http://www.hp.com/support/manuals under storage > Disk Storage Systems.
Step 2. Set up iLO on the ProLiant servers. More information: the server documentation is on http://www.hp.com/support/manuals under servers > ProLiant ml/dl and tc series servers.
Step 3.
If you are performing the installation on an existing cluster, ensure that the same version of the X9000 software is installed on all nodes. Configuration options The management console is configured first. You then have two options for configuring file serving nodes: • Create a configuration template that specifies environment and network settings and defines how hostnames are assigned. You can then apply the template to the file serving nodes. • Configure the file serving nodes individually.
4. The Management Console Configuration Menu lists the configuration parameters that you will need to set. 5. Select Hostname from the menu, and enter the hostname of this server.
6. Select Time Zone from the menu, and then press the up or down arrow keys to select your time zone. 7. Select Default Gateway from the menu, and enter the IP Address of the host that will be used as the default gateway.
8. Select DNS Settings from the menu, and enter the IP addresses for your DNS servers. Also enter the DNS domain name. 9. Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers.
10. Select Networks from the menu. You need to create one cluster network interface, which is used for intracluster communication. The cluster network is bond0 for a 1GbE network and bond1 for a 10GbE network. You might also need to create a user network, which is used for server-to-client communication. The user network is bond1 for a 1GbE network; for a 10GbE network, it is a VIF on bond1 that is set up during the configuration of the X9000 software. To create a bond, select the create option.
Enter a name for the interface. (For a 1GbE network, the cluster interface is bond0. For a 10GbE network, the cluster interface is bond1.) Also specify the appropriate options and slave devices. Use mode 6 bonding for a 1GbE network, and mode 1 bonding for a 10GbE network. 11. When the Configure Network dialog box reappears, select bond0 or bond1, as appropriate for your network.
12. To complete the bond configuration, press the spacebar to select the Cluster Network role. Then enter the IP address and netmask information that the network will use. Repeat this procedure to create a bonded user network (typically bond1, or, for a 10GbE network, a VIF on bond1 that is configured in X9000 Software) and any custom networks as required. 13. If this cluster will use the agile management console configuration, select Agile management from the menu and set it to Enabled. The agile management configuration uses virtual interfaces to provide high availability. Enter the virtual IP addresses for the cluster and user networks. 14. The Confirm Management Console Configuration screen lists the values entered for the management console. You can change the values if needed. When you select Commit, the values are applied, networking is set up, and the management console software starts.
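For reference, the bond settings collected by the wizard correspond to a standard Linux bonding configuration. The following is a sketch of a mode 6 (1GbE) cluster bond definition; the IP address is a placeholder, and on a real system the wizard writes these files for you:

```
# /etc/modprobe.conf fragment
alias bond0 bonding
options bond0 mode=6 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (address is a placeholder)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.1.1.11
NETMASK=255.255.255.0
```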
Creating a template for configuring file serving nodes To simplify configuring your file serving nodes, you can create a template to apply to each node. The template specifies the IP address that the nodes will use to access the management console, defines how hostnames will be generated for the nodes, and specifies a range of IP addresses to be used when configuring networks on the nodes. You can also change some of the parameters specified with the Management Console Configuration Wizard if necessary.
3. Select Time Zone from the menu, and then use the up or down arrow keys to select your time zone.
4. Select Default Gateway from the menu, and enter the IP Address of the host that will be used as the default gateway. 5. Select Management Console from the menu. The IP address that you specified earlier for the management console will be filled in. 6. Select Hostname Template from the menu. Each File Serving Node must have a unique hostname. The hostname template generates the names automatically, based on the naming scheme that you select.
The parameters in the naming schemes, when enclosed in braces ({...}), expand in the following manner: • {number}: the number of the File Serving Node in the cluster. NOTE: When using the number format, allow each File Serving Node to register before logging in to the next system.
8. Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers. 9. Select Network Templates from the menu, and then select the create option to create a cluster network.
Select Bonded Interface for the cluster network. Identify the bonded interface and enter the appropriate bond options and slave devices. Use mode 6 bonding for 1GbE networks and mode 1 bonding for 10GbE networks.
Select the role for the network and enter the IP address information the network will use. 10. Using the procedure in the above step, create templates for user and custom networks as necessary. 11. When you have completed your entries on the Cluster Configuration Menu, select Continue. 12. Verify the template on the confirmation screen and select Commit. The values are applied when you configure each file serving node.
NOTE: If you set up the hostname template, configure one node at a time to ensure the correct sequencing.
1. Log into the system as user root (the default password is hpinvent).
2. When the System Deployment Menu appears, select Join an existing cluster.
3. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select the appropriate management console for this cluster.
5. The Verify Configuration window shows the configuration received from the management console. Select Accept to apply the configuration to the server and register the server with the management console. NOTE: If you select Reject, the wizard exits and the shell prompt is displayed. You can restart the wizard by entering the command /usr/local/ibrix/autocfg/bin/menu_ss_wizard or logging in to the server again. 6.
If you configured a passive management console, enter the following command to verify the status of the console: ibrix_fm -i Completing the installation For more information, see the following: • “Post-installation tasks” (page 46) • “Adding Linux and Windows X9000 clients” (page 49) Configuring file serving nodes manually Use this procedure to configure file serving nodes manually instead of using the template. 1. Log into the system as user root (the default password is hpinvent). 2.
4. The File Serving Node Configuration Menu appears. 5. The Cluster Configuration Menu lists the configuration parameters that you need to set. Use the up and down arrow keys to select an item in the list. When you have made your selection, press Tab to move to the buttons at the bottom of the dialog box, and press the spacebar to go to the next dialog box.
6. Select Management Console from the menu, and enter the IP address of the management console. This is typically the address of the management console on the cluster network. 7. Select Hostname from the menu, and enter the hostname of this server.
8. Select Time Zone from the menu, and then use the up or down arrow keys to select your time zone. 9. Select Default Gateway from the menu, and enter the IP Address of the host that will be used as the default gateway.
10. Select DNS Settings from the menu, and enter the IP addresses for the primary and secondary DNS servers that will be used to resolve domain names. Also enter the DNS domain name. 11. Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers.
12. Select Networks from the menu, and then select the create option to create a bond for the cluster network. Because you are creating a bonded interface for the cluster network, select Ok on the Select Interface Type dialog box. Enter a name for the interface. The cluster interface is bond0 for a 1GbE network and bond1 for a 10GbE network. Also specify the appropriate options and slave devices. Use mode 6 bonding for 1GbE networks and mode 1 bonding for 10GbE networks.
13. When the Configure Network dialog box reappears, select bond0.
14. To complete the bond configuration, enter a space to select the Cluster Network role. Then enter the IP address and netmask information that the network will use. (For a 10GbE network, the user network is a VIF on bond1 that is configured in X9000 Software.) Repeat this procedure to create a bonded user network (typically bond1) and any custom networks as required. 15. When you complete your entries on the File Serving Node Configuration Menu, select Continue. 16.
If you configured a user network, enter a VIF IP address and netmask for the network.
2 Installing and configuring X9720 systems

Installation checklist

Step 1. Redeem X9000 File Serving Software licenses. More information: "Redeem X9000 File Serving Software licenses" (page 34).
Step 2. Power on the X9720 Network Storage System. More information: "Power on the X9720 system" (page 34).
• Connect power cables to the proper power sources.
• Power on all of the X9700c enclosures. Wait until the seven-segment display on the back of all of the X9700c enclosures shows on.
Step 8. Perform post-installation tasks. More information: "Post-installation tasks" (page 46).
• Configure the Support Ticket feature.
• Create file systems.
• Set up NFS exports (optional).
• Set up CIFS shares (optional).
• Set up HTTP/HTTPS (optional).
• Set up FTP/FTPS (optional).
• Remote replication (optional).
• Snapshots (optional).
• Data tiering (optional).
• NDMP (optional).
• Insight Remote Support.
Step 9. Configure X9000 clients for Linux or Windows (optional).
Boot server blades NOTE: If the server blades were powered on when the previous step completed (that is, one or more capacity blocks powered on after the server blades), cycle the power on all server blades again. Complete the following steps: 1. Connect a LAN cable from a laptop to a port on one of the ProCurve switches. Set the TCP/IP properties of the laptop LAN port to address 172.16.3.200, subnet 255.255.248.0. You should now be able to make an ssh connection to the blade servers.
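The netmask above maps to CIDR /21, which is useful when applying the laptop settings from command-line tools. The following is a sketch; the interface name eth0 and the helper function are illustrative, not part of the product tooling:

```shell
# On a Linux laptop, the documented settings would be applied as (run as root):
#   ip addr add 172.16.3.200/21 dev eth0    # 255.255.248.0 == /21
#   ip link set eth0 up
# After that, blade 1 answers on the factory cluster address:
#   ssh root@172.16.3.1

# Sanity-check the netmask-to-prefix conversion used above:
mask_to_prefix() {
  local IFS=. n=0 o
  for o in $1; do
    while [ "$o" -gt 0 ]; do
      n=$((n + (o & 1)))
      o=$((o >> 1))
    done
  done
  echo "$n"
}
mask_to_prefix 255.255.248.0   # prints 21
```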
X9720 bonding. Mode 4 (LACP) should not be used on X9720 systems because LACP packets are handled by the VC Flex 10 and are not forwarded to the blade servers. Properly configured, this provides a fully redundant network connection to each blade. A single failure of a NIC, Virtual Connect module, uplink, or site network switch will not bring down the network connection.
2. Create an ifcfg-ethN file for each interface in bond1. The default for bond1 uses eth1 and eth2. Edit the ifcfg-eth1 file to appear as follows, ensuring that MASTER and SLAVE are set properly. Assuming the HWADDR for eth1 is 78:e7:d1:64:3a:1c:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
HWADDR=78:e7:d1:64:3a:1c

3.
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 78:e7:d1:64:3a:1c

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 78:e7:d1:64:3a:19

View the configuration

You can check the IP, netmask, and broadcast configuration with the ifconfig command.
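The per-slave fields in the bonding status can also be checked programmatically. The helper below is a sketch (not an X9000 tool) that prints each slave interface and its MII state; on a node you would point it at /proc/net/bonding/bond1:

```shell
# check_bond FILE: print "SLAVE STATE" for each slave listed in a bonding
# status file. The top-level "MII Status" line (the bond itself) is skipped.
check_bond() {
  awk '/^Slave Interface:/ {slave = $3}
       /^MII Status:/ {if (slave != "") {print slave, $3; slave = ""}}' "$1"
}

# Demonstrate against a saved copy of the status output:
cat > /tmp/bond1.status <<'EOF'
MII Status: up
Slave Interface: eth1
MII Status: up
Slave Interface: eth2
MII Status: up
EOF
check_bond /tmp/bond1.status   # prints "eth1 up" and "eth2 up"
```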
Set the time zone

Run timeconfig to set the correct time zone for the system location.

Configure NTP servers

Edit the /etc/ntp.conf file, using the following example as a template, where the server entries point to your timeserving sources.

#AUTOMATICALLY GENERATED NTP CONFIGURATION FILE
logfile /var/log/ntpd.
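A complete /etc/ntp.conf along the same lines might look like the following sketch; the server names are placeholders for your timeserving sources, and the drift and log paths are common defaults rather than values from this guide:

```
#AUTOMATICALLY GENERATED NTP CONFIGURATION FILE
logfile /var/log/ntpd.log
driftfile /var/lib/ntp/drift
server ntp1.example.com    # primary timeserving source (placeholder)
server ntp2.example.com    # secondary timeserving source (placeholder)
```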
# ibrix_fm -m maintenance
# ibrix_fm -u

2. Define a new Agile_Cluster_VIF_DEV and the associated Agile_Cluster_VIF_IP.
3. Change the management console's local cluster address from bond0 to bond1 in the X9000 database:
a. Change the previously defined Agile_Cluster_VIF_IP registration address.
Optional configuration These configuration procedures are optional: • Change IP addressing for the private network • Configuring the agile management console Changing IP addressing for the private network This example changes the internal network from 172.16.0.0/255.255.248.0 to 192.168.0.0/255.255.248.0.
1. Connect an ssh session to 172.16.3.1 (the fusion manager on blade1).
2. Copy the /etc/hosts file:
cp /etc/hosts /etc/hosts_original
scp 172.16.3.2:/etc/hosts 172.16.3.2:/etc/hosts_original
scp 172.16.
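The address change in the copied hosts files can be scripted with a simple substitution. The following is a sketch, demonstrated on a scratch file; on the blades you would edit /etc/hosts itself after saving /etc/hosts_original:

```shell
# Rewrite private-network entries from 172.16.x.x to 192.168.x.x.
printf '172.16.3.1 blade1\n172.16.3.2 blade2\n' > /tmp/hosts.demo
sed -i 's/^172\.16\./192.168./' /tmp/hosts.demo
cat /tmp/hosts.demo   # entries now begin with 192.168.3.
```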
pdsh -a service network restart

You will be disconnected.

NOTE: If you cannot ping the bond0 interface on each server after the network restarts, you can still connect a browser session to the iLO to get a console session and fix the issue, or you can use the KVM console.

15. Change Onboard Administrator addresses:
a. Connect a browser session to Onboard Administrator at http://172.16.1.1 (User=exds Password=hpinvent).
b.
3 Configuring virtual interfaces for client access X9000 Software uses a cluster network interface to carry management console traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. For clusters with an agile management console configuration, a virtual interface is also created for the cluster network interface to provide failover support for the console.
# ibrix_nic -b -H node1/bond1:1,node2/bond1:2
# ibrix_nic -b -H node2/bond1:1,node1/bond1:2
# ibrix_nic -b -H node3/bond1:1,node4/bond1:2
# ibrix_nic -b -H node4/bond1:1,node3/bond1:2

Configuring NIC failover

NIC monitoring should be configured on VIFs that will be used by NFS, CIFS, FTP, or HTTP. Use the same backup pairs that you used when configuring standby servers.
HTTP. When you create a virtual host on the Create Vhost dialog box or with the ibrix_httpvhost command, specify the VIF as the IP address that clients should use to access shares associated with the Vhost. X9000 clients. Use the following command to prefer the appropriate user network. Execute the command once for each destination host that the client should contact using the specified interface.

ibrix_client -n -h SRCHOST -A DESTHOST/IFNAME

For example: ibrix_client -n -h client12.mycompany.
4 Post-installation tasks Setting up the Support Ticket feature To set up the Support Ticket feature, follow these steps: • Configure password-less SSH among the management consoles (active and passive) and all File Serving Nodes (even within the nodes themselves) in the cluster. • Verify that the /etc/hosts file on each node contains the IP/hostname entries of all the nodes (management console and File Serving Nodes) in the cluster. If not, add the entries.
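The /etc/hosts check in the second bullet can be scripted. The following is a sketch; the node names and the scratch file are placeholders, and on a cluster you would check /etc/hosts against your real hostnames:

```shell
# Report any cluster node missing from a hosts file.
NODES="mgmt1 node1 node2"                                          # placeholder names
printf '172.16.3.1 mgmt1\n172.16.3.2 node1\n' > /tmp/hosts.check   # node2 absent
for n in $NODES; do
  grep -qw "$n" /tmp/hosts.check || echo "missing: $n"
done
# prints: missing: node2
```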
Creating file systems For X9320 and X9720 systems, a logical volume (segment) is pre-formatted on each LUN. The logical volume name includes the array number, making it easy to relate a proposed segment topology with the array topology. See “Creating and mounting file systems” in the HP X9000 File Serving Software File System User Guide. NOTE: If you want to enable 64-bit mode, HP recommends that you delete the existing volume groups and then recreate them in 64-bit mode.
Configuring other X9000 Software features See the HP X9000 File Serving Software File System User Guide for information about configuring the following features: • HTTP/HTTPS • FTP/FTPS • Remote replication • Snapshots • Data tiering See the administrator guide for your platform for information about configuring NDMP.
5 Adding Linux and Windows X9000 clients Linux and Windows X9000 clients run applications that use the file system. The clients can read, write, and delete files by sending requests to File Serving Nodes. This chapter describes how to install, configure, and register the clients. IMPORTANT: The Windows X9000 client application can be started only by users with Administrative privileges. (For Windows 2008 and 2008 R2, if UAC is enabled, only Default administrators can launch the application.
./ibrixinit -tc -C CLUSTER_INTERFACENAME -i CLUSTER_NAME/VIF_IP

For example:

./ibrixinit -tc -C eth4 -i 192.168.49.54

To install into a different directory, use the following command:

./ibrixinit -tc -C CLUSTER_INTERFACENAME -i CLUSTER_NAME/VIF_IP -P PATHNAME

3. Verify that the X9000 client is operational. The following command reports whether X9000 services are running: /etc/init.
Preferring a network interface for a hostgroup You can prefer an interface for multiple X9000 clients at one time by specifying a hostgroup. To prefer a user network interface for all X9000 clients, specify the clients hostgroup. After preferring a network interface for a hostgroup, you can locally override the preference on individual X9000 clients with the command ibrix_lwhost.
NOTE: This volume is shown with a capacity of 2 TB, the maximum size of a disk in a 32-bit Windows system. Your volume might be bigger or smaller, but because of synchronization issues, the disk appears as 2 TB regardless of the actual size. The installed files for the virtual bus and driver are C:\windows\system32\drivers\idef.sys and virtbd.sys.
The following command configures the user mapping. To base mapping on the information in the domain controllers, include the -d option. If your configuration does not include domain controllers, use the -L option to enable mapping of local users. Be sure to enclose the DEFAULTWINUSERNAME in double quotation marks.

ibrix_activedirectory -A [-d DOMAIN_NAMES] [-L] [-W "DEFAULTWINUSERNAME"]

After configuring automatic user mapping, register the Windows client, and start the service.
8. Select Property-Specific. The property names vary by server version: • Windows Server 2003 SP2: Scroll to and select Read msSFU30GidNumber and Read msSFU30UidNumber. • Windows Server 2003 R2 and later: Scroll to and select Read gidNumber and Read uidNumber.
9. Click Next, and then click Finish.
If you create other OUs in Active Directory and users in those units will access the file system, delegate control for these OUs to the proxy user also.
To register clients, complete the following steps:
1. Launch the Windows X9000 client GUI and navigate to the Registration tab.
2. Select the client’s IP address from the list.
3. Enter the management console name in the FM Host Name field.
4. Select Recover Registration to avoid having to re-register this client if you reinstall it. This option automatically retrieves the client’s ID from the management console.
Starting the X9000 client service automatically The X9000 client service, FusionClient, starts manually by default. When the client is functioning to your satisfaction, change the client service to start automatically when the machine is booted. 1. On the client machine, select Settings > Control Panel > Administrative Tools > Services. 2. In the services list, scroll to FusionClient, right-click, and select Properties. 3. Set the Startup Type to Automatic. Click OK.
• Tune Host. Tunable parameters include the NIC to prefer (the client uses the cluster interface by default unless a different network interface is preferred for it), the communications protocol (UDP or TCP), and the number of server threads to use. • Active Directory Settings. Displays current Active Directory settings. See the online help for the client GUI if necessary.
ACEs can be explicit or inherited. An explicit ACE is assigned directly to the object by the owner or an administrator, while an inherited ACE is inherited from the parent directory. ACEs are governed by the following precedence rules: • An explicit deny ACE overrides an explicit allow ACE, and an inherited deny ACE overrides an inherited allow ACE.
read-write-execute permissions, the corresponding permission in the file mode mask for others is set to read-only. The write-execute permissions of the inherited ACE are ignored in the mapping. When an explicit deny ACE is added to a file’s ACL, the corresponding allow permissions are removed for group and others in the file mode mask, and the corresponding special explicit ACEs are updated accordingly. An inherited deny ACE has no effect on the mode mask.
The Permissions Entry window has three permissions that are important to X9000 Software: Read Data, Write Data, and Execute File. These map directly to Read, Write, and Execute in the Linux mode mask, as shown in the following table.

Windows permission    Linux mode mask
Read Data             Read
Write Data            Write
Execute File          Execute
Uninstalling Windows X9000 clients NOTE: It is not necessary to unmount the file system before uninstalling the Windows X9000 client software. To uninstall a client, complete the following steps: 1. On the active management console, delete the Windows X9000 clients from the configuration database: /bin/ibrix_client -d -h 2. Locally uninstall the Windows X9000 client software from each X9000 client via the Add or Remove Programs utility in the Control Panel.
6 Network best practices for X9300 and X9320 systems The X9300 series storage appliances can use three networks for operation: the cluster network, user network, and management network. During the initial installation and setup of the storage appliance, it is configured for a 1GbE network or a 10GbE network in accordance with the best practices described in this chapter. NOTE: HP recommends that you use the agile management console configuration.
The following diagram shows the network cabling for an X9300/X9320 1GbE system.
Network switches in the X9300 Storage System base rack (AW546B) The X9300 base rack includes two HP ProCurve 2910al-48G network switches. These switches are equipped with the 10GbE interconnect modules and provide: • 96x 1GbE network ports (48 ports per switch) • 4x 10GbE CX4 ports (2 ports per switch) for switch interconnection The cluster network switches are “stackable,” and when connected together through the 10GbE CX4 interconnect ports, the pair operate as a single switch.
NOTE: An X9300 Gateway pair can be substituted for an X9320 system. Factory cabling of the cluster network For Rev B models of the X9320 system (introduced in April 2010), the cluster network is cabled in the factory if the system is ordered with factory rack integration. The cluster network is cabled per the cluster network cabling diagram shown in “Onsite cabling of the cluster network” (page 65). NOTE: Rev A models of the X9320 system did not include the dual 48-port cluster network switches.
Scaling out to multiple racks Two or more X9320/X9300 cluster racks can be interconnected for scale out. As long as the racks and networks are cabled and configured as shown in this chapter, the scale out can be accomplished by chaining the cluster network switches together. Interconnecting the switches allows the cluster network to span racks. As a result, all IP addresses on the cluster network across both racks must be unique.
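Because every cluster-network address must be unique across racks, it can be worth checking the planned address list for duplicates before configuring. The following is a sketch; the addresses are placeholders for your own plan:

```shell
# Print any address that appears more than once in the plan.
planned="10.10.135.11 10.10.135.12 10.10.135.13 10.10.135.12"
echo "$planned" | tr ' ' '\n' | sort | uniq -d   # prints 10.10.135.12
```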
Configuring the NIC ports and network addresses on the 1GbE cluster/management network The cluster network NIC ports on the file serving nodes/Management Servers and the optional Dedicated Management Server are bonded to provide path redundancy and higher aggregate performance. The following table describes how to configure the bonds and provides a suggested IP address plan for the cluster network. ALB - mode 6 bonding is the default recommended bonding mode for a 1GbE network.
Component: Optional Dedicated Management Server
NIC ports: eth2, eth3, eth4, eth5 (bond1); eth7 (bond1:3)
Bond mode: ALB — mode 6
IP address/subnet mask: 10/10.1.100/255.255.255.0

Using 10GbE networks

The following logical network diagram shows a system using a 10GbE network. The following diagram shows network cabling for a system using a 10GbE network.
Configuring a 10GbE network In 10GbE environments with demanding workloads, system performance can be maximized by configuring the cluster network to operate over the 10GbE network ports in the file server/Management nodes. The objective is to enable inter-segment file I/O through the 10GbE interfaces. Even with a 10GbE cluster network, a 1GbE management network is still required and should be configured as shown because the iLO interfaces and MSA management ports are all 1GbE interfaces.
To separate the private cluster network traffic from client file I/O traffic (NFS, CIFS, and so on), virtual network interfaces (VIFs) should be created on the 10GbE bond (bond 1) for NFS, CIFS, FTP, HTTP, and X9000 shares on each server. The client systems will access the file system through the VIFs only. The cluster members (file serving nodes and, optionally, Dedicated Management Server) will use the Base Address of the 10GbE bond (bond 1) on each server for cluster communication and operation.
controller B: 192.168.1.12 / 255.255.255.0

MSA 2 (if present):
controller A: 192.168.1.13 / 255.255.255.0
controller B: 192.168.1.14 / 255.255.255.0

Optional Dedicated Management Server:
eth0, eth1 — bond0, ALB (mode 6): 192.168.1.100 / 255.255.255.0
iLO: 192.168.1.200 / 255.255.255.0

The following figure shows the network handling cluster management traffic and NFS/CIFS traffic.
::::::::::::::
# Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
DEVICE=eth1
HWADDR=78:e7:d1:8d:e9:56
ONBOOT=no
BOOTPROTO=none
BROADCAST=
IPADDR=
MASTER=bond1
NETMASK=
SLAVE=yes
USERCTL=no
::::::::::::::
ifcfg-eth2
::::::::::::::
# Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet
DEVICE=eth2
HWADDR=78:e7:d1:8d:e9:58
ONBOOT=no
BOOTPROTO=none
BROADCAST=
IPADDR=
MASTER=bond1
NETMASK=
SLAVE=yes
USERCTL=no
::::::::::::::
ifcfg-eth3
::::::::::::::
# Broadcom Corporation NetXtreme II BCM5709 G
7 Network best practices for X9720 systems

This document is intended for system consultants, system engineers, onsite installers, HP Technical Support, and customers of the X9720 Network Storage System.

Overview of the X9720 network

X9720 series storage network communication comprises several networks or network types; more precisely, it can be described as named sets of IP addresses.
IP addressing schemas

The X9720 is shipped from the factory with the following configuration, which must be modified for the customer's environment.

Management console: eth0 — bond0 (active-backup, mode=6)
  Cluster: 172.16.3.1
  Management: 172.16.4.1
  Cluster: 172.16.3.2
  Management: 172.16.4.2
OA: Management 172.16.1.1
VC Manager: Management 172.16.2.
Following is a sample final network configuration with the public network based on a 10.10.135.0 subnet:

Management Network (private): 172.16.0.0/255.255.248.0
Cluster and User Network: 10.10.135.0/255.255.255.0
Gateway IP: 10.10.135.1

File serving node 1:
  eth0, eth3 — bond0 (active-backup, mode=1): Management 172.16.3.1; Cluster 10.10.135.11
  eth1 — bond1, bond1:0 (active-backup, mode=1): Management Console 10.10.135.
The following example shows an X9720 cabled to a redundant switch pair. Each Virtual Connect bay connects to a separate switch to provide for VC and external switch redundancy. Following are examples of redundant switch configurations for the X9720:
• Two independent switches
• Two physical switches that are part of a clustered pair
• Two blades within a redundant switch chassis

Cabling examples including LACP link aggregation

The X9720 supports LACP/802.3ad link aggregation.
Clustered switch pairs If the redundant switch pair is part of a clustered or stackable switch pair, LACP links can be formed between the switch pair, and can have performance benefits by crossing the links between each pair of switches. Refer to your switch documentation to determine whether LACP links can be formed between separate physical blades or switches within the clustered configuration, and for information about performance recommendations.
Bonding modes for file serving nodes

The recommended file server bonding for X9720 configurations is mode=1 (that is, active-backup). Previously, the advised bonding mode was mode=6 (balance-alb). However, that mode can cause problems and sporadic failures in some X9720 configurations, and it likely provides no performance benefit in the X9720 configuration. Therefore, it is recommended that users configure their systems to be mode=1, which is also known as active-backup.
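In configuration-file terms, the mode=1 recommendation corresponds to a bonding-driver setup like the following sketch (standard Linux bonding options; the miimon value is a common choice, not a value mandated by this guide):

```
# /etc/modprobe.conf fragment: active-backup bonds with MII link monitoring
alias bond0 bonding
options bond0 mode=1 miimon=100
alias bond1 bonding
options bond1 mode=1 miimon=100
```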
Sample files and command output

This section includes the following:
• An example of a /etc/modprobe.conf file
• Output from /proc/net/bonding/bond*
• IP address configuration files for cluster and user IP addresses
• Output from the ibrix_nic command

Sample /etc/modprobe.conf file

# hostname
x965s1
# more /etc/modprobe.
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 78:e7:d1:57:e2:cc

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 78:e7:d1:57:e2:c9

IP address configuration files for cluster and user IP addresses

# cd /etc/sysconfig/network-scripts
# more ifcfg-bond* ifcfg-eth*
::::::::::::::
ifcfg-bond0
::::::::::::::
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.3.1
NETMASK=255.255.248.
SLAVE=yes
HWADDR=78:e7:d1:57:e2:c9
::::::::::::::
ifcfg-eth3
::::::::::::::
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
HWADDR=78:e7:d1:57:e2:cd

Output from ibrix_nic

# ibrix_nic -i -h x965s1 | egrep -v "Rx|Tx"
Nic: x965s1/bond1:2
===================
Host              : x965s1
Interface         : bond1:2
Type              : User
State             : Up, LinkUp
IP_Address        : 10.10.135.31
MAC_Address       : 78:e7:d1:57:e2:cc
Netmask           : 255.255.255.0
Gateway           :
Broadcast Address : 10.10.135.
Duplex       :
LINKMON      : No
Monitored By :
Collisions   : 0
LastReported : 3 Days 19 Hrs 16 Mins 45 Secs ago

# ibrix_nic -l
HOST    IFNAME    TYPE      STATE              IP_ADDRESS
------  ------    -------   ----------------   ------------
x965s1  bond1     Cluster   Up, LinkUp         10.10.135.11
x965s1  bond1:2   User      Up, LinkUp         10.10.135.31
x965s1  bond1:3   User      Inactive,Standby   10.10.135.12
x965s2  bond1     Cluster   Up, LinkUp         10.10.135.
x965s2  bond1:2   User      Up, LinkUp
x965s2  bond1:3   User      Inactive,Standby
8 Setting up InfiniBand couplets InfiniBand is supported for X9300 and X9320 systems. The following logical network diagram shows an InfiniBand configuration. The following diagram shows network cabling for an InfiniBand configuration.
Downloading and installing the InfiniBand software

HP supports Mellanox OFED v1.5.2 for use with X9300/X9320 systems. You can download the software from the HP Software and Drivers web site: http://www8.hp.com/us/en/support-drivers.html

On the web site, search for OFED 1.5.2 and then select the RHEL 5U5 link. The link opens the Mellanox InfiniBand Driver for Operating System Red Hat Enterprise Linux Version 5 Update 5 web page. Click the Installation Instructions tab. Install OFED 1.5.
Installing the driver

CAUTION: OFED 1.5.2 overwrites pre-existing modules under /lib/modules that are vital to X9000 Software and the NFS protocol. The InfiniBand driver replaces these modules on install because InfiniBand sends RPC and NFS over RDMA instead of IP. You must manually move (mv) the updated kernel module files back into place after the installation of the OFED stack.
Set connected mode and the MTU for the ib0 interface (/sys/class/net/ib0):

echo connected > /sys/class/net/ib0/mode
ifconfig ib0 mtu 65520

NOTE: For Windows WinOF (OFED) IB client connectivity, check Windows Sockets Direct (WSD). This must be enabled for Windows.

Troubleshoot physical errors (logical, sim errors, and so on). Note the following:
• Use ibstat to check errors on InfiniBand nodes.
• Use ibclearcounters to watch for error counter increments.
• Check /sys/class/infiniband/mthca0/ports/1/counters.
1. Install the Voltaire OFED drivers on each couplet and client:

[root@ib VoltaireOFED-1.4.2_2-k2.6.18-128.el5-x86_64]# ./install.sh
installation will replace your iscsi-initiator-utils RPM
Uninstalling the previous version of OFED. This may take few moments.
Preparing to install
Verifying installation
Installing 64 bit RPMS
Preparing...
9 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Glossary ACE access control entry. ACL access control list. ADS Active Directory Service. ALB Advanced load balancing. BMC Baseboard Management Configuration. CIFS Common Internet File System. The protocol used in Windows environments for shared folders. CLI Command-line interface. An interface comprised of various commands which are used to control operating system responses. CSR Customer self repair. DAS Direct attach storage.
SELinux Security-Enhanced Linux. SFU Microsoft Services for UNIX. SID Secondary controller identifier number. SNMP Simple Network Management Protocol. TCP/IP Transmission Control Protocol/Internet Protocol. UDP User Datagram Protocol. UFM Voltaire's Unified Fabric Manager client software. UID Unit identification. USM SNMP User Security Model. VACM SNMP View Access Control Model. VC HP Virtual Connect. VIF Virtual interface. WINS Windows Internet Naming Service.