HP StoreAll Storage Installation Guide Abstract This document describes how to install StoreAll software. It is intended for HP Services personnel who configure StoreAll 9000 Storage at customer sites. For upgrade information, see the administration guide for your system. For the latest StoreAll guides, browse to http://www.hp.com/support/StoreAllManuals.
© Copyright 2009, 2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents 1 Installing 9300 and 9320 systems................................................................6 Network information.................................................................................................................6 Installation checklist..................................................................................................................6 Installing the latest StoreAll OS software release...........................................................................
Configuring VLAN tagging......................................................................................................76 Configuring link state monitoring for iSCSI network interfaces.......................................................77 Support for link state monitoring...............................................................................................
Troubleshooting the InfiniBand network....................................................................................142 Enabling client access...........................................................................................................143 Setting up Voltaire InfiniBand ................................................................................................143 13 Support and other resources...................................................................145 Contacting HP.......
1 Installing 9300 and 9320 systems The system is configured at the factory as follows: • StoreAll OS software 6.
Step Task More information
6. Set up StoreAll virtual IP addresses for client access. “Configuring virtual interfaces for client access” (page 74)
7. Perform post-installation tasks: “Post-installation tasks” (page 71)
• Update license keys if not done already.
• Configure server standby pairs for High Availability.
• Configure the Ibrix Collect feature.
• Configure HP Insight Remote Support.
• Create file systems if not already configured.
5. Insert the USB key into the server to be installed.
6. Restart the server to boot from the USB key. (Press F11 and use option 3.)
7. When the HP Storage screen appears, enter qr to install the software.
Repeat steps 3–7 on each server and then go to “Starting the installation and configuration” (page 8). Starting the installation and configuration Complete the following steps: 1. Boot the servers that will be in the cluster and log into the first server as root.
5. The default Network Configuration dialog box defines the server on bond0. Note the following: • The hostname, which is the name of the local server, can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with StoreAll software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0.
• The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field. Later in this procedure, you can select either Web UI or ASCII mode to complete the installation. A gateway address is required to use the Web UI. • VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. StoreAll supports the ability to associate a VLAN tag with an FSN interface.
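The hostname rules above can be encoded in a small shell helper. This is a hedged illustration only (not part of the StoreAll tooling): it accepts lowercase alphanumerics and hyphens and rejects underscores and uppercase characters.

```shell
# Hedged helper reflecting the hostname rules above: lowercase
# alphanumerics and hyphens only; underscores are not allowed.
valid_hostname() {
  case "$1" in
    *_*) return 1 ;;            # underscores are rejected
    *[!a-z0-9-]*) return 1 ;;   # uppercase and other characters are rejected
    *) return 0 ;;
  esac
}
valid_hostname "node-01" && echo "node-01: ok"
valid_hostname "Node_01" || echo "Node_01: rejected"
```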
8. In the Add Bond screen, provide the bond name, VLAN Tag ID, IP address, netmask, and gateway ID. 9. The Configuration Summary lists your configuration. Select Commit to continue the installation. The setup wizard now configures the server according to the information you entered.
10. Select a method to complete the installation:
• Continue with cluster setup at this console (Local web UI session)
• Continue with cluster setup remotely (Remote web UI session)
• Join this IBRIX server to an existing cluster
• Exit Now (You just wanted to setup networking)
If you need to create a cluster, press F2 to use ASCII mode, which opens the "Form a Cluster - Step 2" screen, described in “Completing the installation in ASCII mode” (page 13).
To proceed with the Getting Started Wizard, do one of the following: • To continue using the console from which you are working, select Continue with cluster setup at this console (Local web UI session). A message displays that the management console GUI will be launched; click OK. When the browser opens, a message may display that it is an untrusted connection. Select Add Exception and then select Confirm Security Exception. The login window for the StoreAll Management Console is displayed.
3. The wizard now configures the active management console (Fusion Manager) on the server. If you have configured separate user and cluster networks, continue with the following steps. 4. If the bond0/cluster network is not routed and the bond1/user network is routed, complete the following steps to define the default gateway for bond1: • Set the default gateway in /etc/sysconfig/network.
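The gateway edit in step 4 can be sketched as follows. This is a hedged example that operates on a local copy of the file, and the gateway address shown is a placeholder; on a real node you would edit /etc/sysconfig/network itself.

```shell
# Hedged sketch: adding a GATEWAY line to a copy of
# /etc/sysconfig/network (path and address are placeholders).
NETCFG=./network.example
printf 'NETWORKING=yes\nHOSTNAME=node1\n' > "$NETCFG"
# Append the gateway only if one is not already defined:
grep -q '^GATEWAY=' "$NETCFG" || echo 'GATEWAY=10.10.125.1' >> "$NETCFG"
grep '^GATEWAY=' "$NETCFG"
```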
5. If you want to be able to access the FM through the bond1/user network, create a management VIF for bond1: ibrix_fm -c VIFIP -d DEVICE -n NETMASK -v user For example: # ibrix_fm -c 10.10.125.101 -d bond1:1 -n 255.255.255.0 -v user The installation is complete on the first server. Installing additional servers 1. Complete steps 1–9 in “Starting the installation and configuration” (page 8).
2 Discovering HP StoreAll servers and their storage The Server & Storage Expansion Wizard lets you: • Discover any available HP StoreAll servers and attached storage. • Decide how you would like to use the storage that is found. NOTE: The Server & Storage Expansion Wizard applies only to the 9300, 9320, and 9730. If there are new servers to be discovered, have an IP address ready for each server's bonded network interface, as well as an IP address to assign to its iLO interface.
3. Click Next. If the Server & Storage Expansion Wizard detects no previously discovered file systems, the wizard begins the discovery of file systems on your network. The discovery process can take several minutes.
After discovery, the Server & Storage Expansion Wizard displays the servers found on your network.
To view detailed information about a file server, select the file server. Details about the file server are displayed in the lower half of the File Servers window. 4. To add a range of servers, click the Add button on the File Servers window. Then, enter the IP address of the file server or enter a range of IP addresses to be scanned. Click OK when done.
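A range entry can be pictured as the individual addresses the wizard will scan. The following is illustrative only (addresses are examples): it expands a last-octet range into discrete IPs.

```shell
# Illustrative only: expanding a last-octet range like
# 192.168.10.20-23 into the individual addresses to be scanned.
BASE=192.168.10 FIRST=20 LAST=23
for i in $(seq "$FIRST" "$LAST"); do
  echo "$BASE.$i"
done
```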
5. Select an unconfigured server. Unconfigured servers are designated by the warning icon ( ) next to the name of the High Availability (HA) pair. For example, in Figure 1 (page 19), the last two HA pairs need to be configured.
6. Click the Configure button. The Server & Storage Expansion Wizard displays the Configure File Server window.
7. If the network bonds are unconfigured, click the Configure... button.
9. To configure Network Management, click the Management button. 10. Click Next. If one or more of the system components are not at the required firmware levels, the firmware tool prompts you to upgrade your firmware. See the administrator guide for your StoreAll system for information on how to upgrade the firmware.
11. Once you have updated the firmware, click Retry. The wizard displays any unused storage for file system creation/expansion on the Unused Storage window. When you click Next, unmapped volumes are mapped to this host, and auto-discovery of your storage is complete.
12. Click Next.
13. Select one of the following options from the File System window and then click Next:
• Create a new file system.
• Expand an existing file system.
• Do nothing at this time, I will create my file system(s) later.
14. Select the storage that you want to use for this file system. Click the check box of each row of storage that you would like to use. You can also change the default segment owner of each storage LUN. Click Next when done. 15. Confirm the configuration options. (Optional) Provide a tier name. Click Next when done.
16. The WORM/Retention window lets you enable data retention. Data retention archives read-only files, ensuring that the files cannot be modified or deleted for a specific period. Data validation scans, which verify that retained files remain unchanged, are optional. Select one or more of the following options: • Enable Data Retention.
17. Click Next. When auditing is enabled, the actions selected on the Auditing Options window are kept in the database. IMPORTANT: You must enable Express Query before you can enable auditing. The option to enable Express Query can be found in the previous screen (WORM/Retention).
18. Click Next. Use the options in the Default File Shares window to share the root of this file system using NFS and/or SMB with default settings. NOTE: You can use the Default File Shares window at any time to delete, modify, and create file shares using various protocols (NFS, SMB, FTP, and HTTP).
19. Click Next when done with configuring your default file shares. The summary window is displayed with your tasks. Click Finish to exit the Server & Storage Expansion Wizard.
3 Configuring the cluster with the Getting Started Wizard The Getting Started Wizard configures the cluster in a few steps. Be sure to have the necessary IP addresses available. NOTE: This wizard can be used only for 9300, 9320, and 9730 systems. If you are recovering a server, do not use the Getting Started Wizard to join the server to the cluster. Instead, exit the Getting Started Wizard and use the ASCII GUI Configuration Wizard to join the recovered server to the cluster.
To update your license keys, click Update. Typically, you will need a license key for each server. Download your licenses from the HP website and place them in a location that can be accessed by this server. Use Browse to locate a license key and then click Add. Repeat this step for each license key. Enter the DNS server addresses and search domain for your cluster. Also enter your NTP server addresses.
The wizard attempts to access the addresses that you specify. If it cannot reach an address, a message such as Primary DNS Server Unreachable will be displayed on the screen. The File Servers page lists all servers the wizard found on your network. If the list includes servers that do not belong in the cluster, select those servers and click Remove. If a server is not defined, select the server and click Configure.
To configure a server, select the server on the File Servers page and click Configure. Enter the host name and IP address of the server. If the wizard can locate the subnet masks and iLO IP address, it will fill in those values. If your cluster should include servers that were not listed on the File Servers page, configure an IP address manually on the servers, using their local console setup screens. Then click Add on the File Servers page to add the servers to the cluster.
If you create an SMB share, you will need to configure the file serving nodes for SMB and configure authentication. You might also want to set certain SMB parameters such as user permissions. Other configuration options are also available for NFS shares. See the HP StoreAll Storage File System User Guide for more information. The Summary lists any warnings or other items noted during the post-installation checks. Click the items to determine whether further action is needed.
The wizard saves an installation log at /usr/local/ibrix/log/installtime.log on the server where you ran the wizard. Click View Log to display the log. When you click Finish to exit the wizard, the Fusion Manager is restarted. When the service is running again, you can log into the GUI. Troubleshooting the Getting Started Wizard If you are unable to resolve an issue, contact HP support for assistance.
Troubleshooting steps:
1. Relaunch the wizard, go to the Cluster Settings page, and click Next to retry the operation. If the operation fails again, run ifconfig bond0:0. If the output is empty, use the following command to set the VIF on the server: ibrix_fm -c CLUSTERVIFIP -d bond0:0 -n NETMASK -v cluster
2. Verify that /etc/resolv.conf contains a valid nameserver entry, for example: nameserver 202.52.1.11
If the operation still fails, there may be network inconsistencies or an outage. Typically, a network outage lasts only a few minutes. Try the operation later. If the operation continues to fail, contact your network administrator. DNS/NTP information cannot be retrieved Probable cause: The command timed out. Troubleshooting steps: The network is experiencing inconsistencies or an outage. Usually a network outage lasts only a few minutes.
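A quick way to confirm DNS is configured on the node is to list nameserver entries. This hedged example works against a local copy of a resolv.conf-style file rather than /etc/resolv.conf itself; the address is the example from the text above.

```shell
# Hedged check: list nameserver addresses from a resolv.conf-style
# file (a local copy is used here so the example is safe to run).
RESOLV=./resolv.example
printf 'search example.com\nnameserver 202.52.1.11\n' > "$RESOLV"
awk '/^nameserver/ {print $2}' "$RESOLV"
```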
If the command does not discover any nodes, multicast broadcasting may be disabled. If so, enable multicast on the server, relaunch the wizard, go to the File Servers page, and check for the servers. You can also use the text-based installation method to form the cluster. • Check the network connection. The discovery period for servers is approximately one minute. If the active server does not receive a response within that time, the servers will not be displayed.
Troubleshooting steps: • Try to ping the server. If you cannot reach the server, the network settings on the server may have failed. This can be due to temporary connectivity conditions. Use the iLO to assign network details such as the IP address and hostname, relaunch the wizard, and try to set the HA pair again. If the operation still fails, use the following command to set the HA pair: ibrix_server -b -h SERVERNAME1,SERVERNAME2 • Network inconsistencies or an outage could cause the wizard to fail.
If the iLO cannot be reached, use the route command to check the gateway used for the server. Both the server and iLO should be on the same network. After verifying this, relaunch the wizard and go to the File Servers page. Click Next to add the power source again.
Use ethtool to detect the speed information. The port value for 9300/9320 systems is detected as described in the previous step. For 9730 systems, the ports are internal.
◦ If the speed of the NIC ports selected for bond creation is 10Gb, use any two NIC ports in bond mode 1.
◦ If the speed of the NIC ports selected for bond creation is 1Gb, use four NIC ports in bond mode 6.
If these steps do not resolve the condition, use the text-based installation method to form the cluster.
List the discovered physical devices: ibrix_pv -l If the output of ibrix_pv -l is not empty, relaunch the wizard, go to the Create a Default File System page, and create the file system. If the output is empty, no storage is available to create the file system. Add storage to the server and run the previous commands again to make the storage available for the file system. • The storage exposed to the servers might be in an inconsistent state.
4 Installing 9730 systems The system is configured at the factory as follows: • StoreAll OS version 6.2 is installed on the servers. • LUNs are created and preformatted on the 9730 CX storage system. • Depending on the system size, the 9730 system is partially or totally preracked and cabled. NOTE: Four transit brackets are installed in the rack for stability during shipping (one bracket in each corner of the rack). The transit brackets can remain in place or be removed after the 9730 is installed.
IMPORTANT: In previous releases, the X9720 system shipped with access to the management, user, and cluster networks through a single FSN bond. The field setup moved the user and cluster network access to a separate bond once the connection between an FSN and the customer network was established. 9730 systems default to a unified network configuration that places the chassis management components on one subnet and the user/cluster components on a separate subnet.
Step Task More information
3. Perform the installation “Starting the installation and configuring the chassis” (page 48)
4. Set up virtual IP addresses for client access “Configuring virtual interfaces for client access” (page 74)
5.
4. On the Change OA Network Properties screen, set Enclosure IP to Enable and press OK to Accept. 5. On the Network Settings:OA1 Active screen, navigate to Active IPv4 and press OK. 6. On the Change:OA1 Network Mode screen, change DHCP to Static and press OK to Accept. 7. On the Change: OA1 IP Address screen, set the IP address, subnet mask, and gateway (optional) and Accept the changes.
8. On the Network Settings: OA1 Active screen, select Accept All and press OK.
9. On the Enclosure Settings screen, select Standby OA or OA2 and press OK.
10. On the Network Settings:OA2 screen, navigate to Active IPv4 and press OK.
11. Set the IP address, subnet mask, and gateway (optional) and Accept the changes.
12. Back on the Network Settings:OA2 screen, navigate to Accept All and press OK. The Main Menu reappears and the procedure is complete.
4. Execute the following dd command to make the USB key the QR installer: dd if=ISOFILENAME of=/dev/sdi oflag=direct bs=1M For example: dd if=X9000-QRDVD-6.3.72-1.x86_64.signed.iso of=/dev/sdi oflag=direct bs=1M 4491+0 records in 4491+0 records out 4709154816 bytes (4.7 GB) copied, 957.784 seconds, 4.9 MB/s
5. Insert the USB key into the server to be installed.
6. Restart the server to boot from the USB key. (Press F11 and use option 3.)
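The dd invocation can be rehearsed safely against regular files before touching the USB device. In this sketch both files are local stand-ins; on the real system, if= is the QR ISO and of= is the USB device (such as /dev/sdi) with oflag=direct, so double-check the target device (for example with lsblk /dev/sdi) before running it there.

```shell
# Safe-to-run rehearsal of the dd invocation using a regular file
# in place of the USB device; writes a 4 MiB test image.
dd if=/dev/zero of=./usbtest.img bs=1M count=4 2>/dev/null
wc -c < ./usbtest.img   # reports the bytes written (4 MiB)
```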
IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 4. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings. While setting the time zone, press Enter and the time zone is listed in a separate window.
Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with StoreAll software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
Select the number of networks and select OK. The Network Configuration screen is displayed. Select the interface and then select Continue. Or, to edit the interface, select it and then select Configure. The Edit Bonded Interface screen is displayed. Edit the interface details and select OK. 6. The Configuration Summary lists your configuration. Select Commit to continue the installation. The wizard now sets up the blade based on the information you entered. 7.
IMPORTANT: If you are using the default unified network layout, select OK on the Confirming Onboard Administrator and Virtual Connect Settings dialog box and go to step 8. 8. The wizard now validates the information you have entered. It performs the following tests: • Pings the active OA. • Verifies the OA password. • Verifies that the OA at that IP address is the active OA.
To configure the iLO IP addresses manually, enter each iLO IP address on the Enter iLO IP Addresses dialog box. 11. The wizard lists the IP addresses you specified on the Confirm iLO IP Addresses dialog box. Select Ok to continue. 12. Configure the chassis interconnect bays (VCs and SAS switches). On the Get Interconnect IP Addresses dialog box, specify whether you want to configure the Interconnect (IC) IP addresses in sequence or manually. Use the space bar to select/deselect the check boxes.
To configure the Interconnect (IC) IP addresses in sequence, enter the first Interconnect (IC) IP address on the Set Interconnect IP Addresses dialog box. The installer then sets the remainder of the addresses sequentially for all 8 interconnect bays. For example, if 172.16.3.21 is the starting Interconnect (IC) IP Address, the installer sets the Interconnect (IC) IP Addresses in the range 172.16.3.21–172.16.3.28.
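The sequential assignment described above can be sketched as follows. This is a hedged illustration: the starting address is the example from the text, and the arithmetic simply increments the last octet for the eight interconnect bays.

```shell
# Hedged illustration: derive the eight interconnect-bay addresses
# from a starting IP by incrementing the last octet.
START=172.16.3.21
BASE=${START%.*}        # network portion, e.g. 172.16.3
OCT=${START##*.}        # starting last octet, e.g. 21
for i in $(seq 0 7); do
  echo "$BASE.$((OCT + i))"
done
```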
14. Enter the DNS and NTP server information used by the Onboard Administrator. 15. The wizard now configures the OA. This process takes up to 45 minutes to complete. 16. Next, the wizard verifies the VC configuration and creates a new user called hpspAdmin. You may need to provide input for the following: • The wizard attempts to log into the Virtual Connect manager using the Administrator password you supplied earlier. If the attempt fails, you can retry the attempt or reenter the password.
Log into blade1 again when the Linux login prompt appears. The wizard makes the following checks: • Pings the VC management IP address. • Verifies the hpspAdmin account created earlier. If a check fails, take the corrective actions described on the GUI. 18. The wizard now configures the remaining bays for the Virtual Connect modules in the chassis.
19. The wizard verifies the VC configuration and then creates an hpspAdmin user account on each iLO. 20. The wizard validates the VC configuration and verifies the SAS firmware. If necessary, the SAS switches are flashed with the correct firmware. 21. The wizard verifies the SAS configuration. After determining the correct layout of the storage hardware, the wizard configures the SAS switch zoning so that couplets see the same storage.
22. The wizard powers off blades 2–16, applies the SAS configuration, and then reboots blade 1. Log into blade 1 when the Linux login prompt appears. 23. The wizard takes the following actions: • Verifies the SAS configuration to ensure that SAS zoning is set up correctly • Powers on blades 2–16 • Verifies storage firmware to ensure that it is set up correctly • Validates the LUN layout and configures it if necessary 24. The wizard now forms bond0 from eth0 and eth3.
25. Select a method to complete the installation: • Continue with cluster setup at this console (Local web UI session) • Continue with cluster setup remotely (Remote web UI session) • Join this IBRIX server to an existing cluster • Exit Now (You just wanted to setup networking) To create a cluster, press F2 to use ASCII mode, which opens the "Form a Cluster - Step 2" screen, described in “Creating the cluster on blade 1” (page 60).
console will connect to the server; click OK. A command prompt opens; enter the login credentials for the StoreAll Management Console. See “Configuring the cluster with the Getting Started Wizard” (page 30) for the procedure to complete the wizard.
1. On the Form a Cluster — Step 2 dialog box, enter a name for the cluster and specify the IP address and netmask for the Management Console IP (also called the Cluster Management IP or Management IP). This IP address runs on a virtual interface (VIF) assigned to the entire cluster for management use. Think of it as the “IP address of the cluster.” You should connect to this VIF in future GUI management sessions. The VIF remains highly available.
3. A configuration script now performs some tuning, imports the LUNs into the StoreAll OS software, and sets up HA. When the script is complete, you can install the remaining blades as described in the next section. Installing additional 9730 blades Use this procedure to install blades 2–16 on a 9730 system. Complete the following procedure on each blade: 1. Log into the blade. 2. The 9730 Setup dialog box is displayed. 3.
IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 4. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings. While setting the time zone, press Enter and the time zone is listed in a separate window.
Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with IBRIX software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
Select the total number of networks and click OK. The Network Configuration screen is displayed. Select an interface and then select Continue. 6. The Configuration Summary lists your configuration. Select Commit to continue the installation. 7. The wizard now obtains OA/VC information from the chassis. If the wizard cannot obtain the information automatically, the following screen appears and you must enter the OA/VC credentials manually.
8. 9.
10. The Join Cluster screen appears. All available management consoles are displayed. Select the applicable management console to complete the configuration.
11. The wizard registers and starts a passive management console on the blade. The installation is complete. Troubleshooting Install software cannot ping the OA This condition can occur if OA2 becomes the Active OA. Configure the OA as follows to resolve this condition: 1. From the Main Menu of the Insight Display, navigate to Enclosure Settings and press OK. 2. On the Enclosure Settings screen, select Active OA and press OK. 3.
5. On the Network Settings: OA1 Active screen, select Accept All and press OK.
6. On the Enclosure Settings screen, select Standby OA and press OK.
7. On the Network Settings:OA2 screen, navigate to Active IPv4 and press OK.
8. Set the IP address, subnet mask, and gateway (optional) and Accept the changes.
9. On the Network Settings:OA2 screen, navigate to Accept All and press OK.
1. Update the OA and VC with their credentials: /opt/hp/platform/bin/hpsp_credmgmt --init-cred --master-passwd=hpdefault --hw-username=USERNAME --hw-password=PASSWORD 2. Update the iLO with its credentials: /opt/hp/platform/bin/hpsp_credmgmt --update-cred --cred-selector=chassis:global/ilo --cred-type=upwpair --cred-username=USERNAME --cred-password=PASSWORD OA and VC have different usernames and passwords. 1.
5 Post-installation tasks Updating license keys Typically you need a license key for each server. If you did not update your license keys during the installation, download the license keys and install them as described in the administrator guide for your system. Configuring and Enabling High Availability 9730 systems The installation process configures and enables the servers for High Availability. 9300/9320 systems The network VIFs in the cluster should be configured in backup pairs.
manpages and then export it. The manpages are in the $IBRIXHOME/man directory. For example, if $IBRIXHOME is /usr/local/ibrix, the default, you would set the MANPATH variable as follows and then export the variable: MANPATH=$MANPATH:/usr/local/ibrix/man export MANPATH Configuring data collection with Ibrix Collect Ibrix Collect is a log collection utility that gathers relevant information for diagnosis by HP Support.
When setting up SMB, you will need to configure user authentication and then create SMB shares. For more information, see the following: • “Configuring authentication for SMB, FTP, and HTTP” and “Using SMB” in the HP StoreAll Storage File System User Guide • Managing 9000 CIFS Shares in an Active Directory Environment Quick Start Guide CIFS is another name for SMB.
6 Configuring virtual interfaces for client access StoreAll software uses a cluster network interface to carry Fusion Manager traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. To provide failover support for the Fusion Manager, a virtual interface is created for the cluster network interface.
Configuring backup servers The network VIFs in the cluster are configured in backup pairs. If this step was not done when your cluster was installed, assign backup servers for the bond0:1 interface. For example, node1 is the backup for node2, and node2 is the backup for node1. 1. Add the VIF: # ibrix_nic -a -n bond0:2 -h node1,node2,node3,node4 2.
Specifying VIFs in the client configuration When you configure your clients, you may need to specify the VIF that should be used for client access. NFS/SMB. Specify the VIF IP address of the servers (for example, bond0:1) to establish connection. You can also configure DNS round robin to ensure NFS or SMB client-to-server distribution. In both cases, the NFS/SMB clients will cache the initial IP they used to connect to the respective share, usually until the next reboot. FTP.
The following commands show configuring a bonded VIF and backup nodes for a unified network topology using the 10.10.x.y subnet. VLAN tagging is configured for hosts ib142-129 and ib142-131 on the 51 subnet. Add the bond0.51 interface with the VLAN tag: # ibrix_nic -a -n bond0.51 -h ib142-129 # ibrix_nic -a -n bond0.51 -h ib142-131 Assign an IP address to the bond0.51 VIFs on each node: # ibrix_nic -c -n bond0.51 -h ib142-129 -I 15.226.51.101 -M 255.255.255.0 # ibrix_nic -c -n bond0.51 -h ib142-131 -I 15.
7 Adding Linux and Windows StoreAll clients Linux and Windows StoreAll clients run applications that use the file system. The clients can read, write, and delete files by sending requests to File Serving Nodes. This chapter describes how to install, configure, and register the clients. Linux StoreAll client Prerequisites for installing the Linux StoreAll client Before installing the client software, do the following: • Install a supported version of the operating system, accepting all packages.
3. Verify that the StoreAll client is operational. The following command reports whether StoreAll services are running: /etc/init.d/ibrix_client status Registering Linux StoreAll clients Linux StoreAll clients must be registered manually with the management console before they can mount a file system. To register a client using the CLI, use the following command: /bin/ibrix_client -a -h HOST -e IPADDRESS For example, to register client12.hp.com, which is accessible at IP address 192.
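When several clients need registering, the ibrix_client command can be scripted. This hypothetical sketch only echoes the commands it would run (the hostnames and addresses are examples, not real clients), so it is safe to try anywhere.

```shell
# Hypothetical sketch: print the registration command for a list of
# clients; echo is used so nothing is actually registered.
while read -r host ip; do
  echo "/bin/ibrix_client -a -h $host -e $ip"
done <<'EOF'
client12.hp.com 192.168.10.12
client13.hp.com 192.168.10.13
EOF
```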
To prefer a network interface for a hostgroup, use the following command: /bin/ibrix_hostgroup -n -g HOSTGROUP -A DESTHOST/IFNAME The destination host (DESTHOST) cannot be a hostgroup. For example, to prefer network interface eth3 for traffic from all StoreAll clients (the clients hostgroup) to file serving node s2.hp.com: /bin/ibrix_hostgroup -n -g clients -A s2.hp.
Windows StoreAll client setup When setting up the Windows StoreAll client, you will need to perform specific tasks on the Active Directory server, the management console, and the Windows StoreAll client. 1. Set up Services for UNIX 3.5 on the Active Directory global catalog server. 2. To configure automatic user mapping, either specify your domain controllers, or allow mapping of local users. See “Configuring automatic user mapping” (page 81). 3.
After configuring automatic user mapping, register the Windows client, and start the service. See “Registering Windows StoreAll clients and starting services” (page 83). Configuring static user mapping This section describes how to configure static user mapping. Configuring groups and users on the Active Directory server You must configure an administrative user and group, a proxy user, the “unknown” Windows user, and any other Windows client users.
If you create other OUs in Active Directory and users in those units will access the file system, delegate control for these OUs to the proxy user also. Configuring an “unknown” Windows user The “unknown” Windows user is displayed as the owner of a file when the client cannot resolve a user mapping. This user is required and must be defined on the management console with the ibrix_activedirectory command. You can assign any name to this user.
1. Launch the Windows StoreAll client GUI and navigate to the Registration tab. 2. Select the client’s IP address from the list. 3. Enter the management console name in the FM Host Name field. 4. Select Recover Registration to avoid having to reregister this client if you reinstall it. This option automatically retrieves the client’s ID from the management console. To start the Windows StoreAll client service, select Start Service After Registration. Click Register.
Starting the StoreAll client service automatically The StoreAll client service, FusionClient, is set to start manually by default. When the client is functioning to your satisfaction, change the service to start automatically when the machine boots. 1. On the client machine, select Settings > Control Panel > Administrative Tools > Services. 2. In the services list, scroll to FusionClient, right-click, and select Properties. 3. Set the Startup Type to Automatic. Click OK.
• Tune Host. Tunable parameters include the NIC to prefer (the client uses the cluster interface by default unless a different network interface is preferred for it), the communications protocol (UDP or TCP), and the number of server threads to use. • Active Directory Settings. Displays current Active Directory settings. See the online help for the client GUI if necessary.
ACEs can be explicit or inherited. An explicit ACE is assigned directly to the object by the owner or an administrator, while an inherited ACE is inherited from the parent directory. ACEs are governed by the following precedence rules: • An explicit deny ACE overrides an explicit allow ACE, and an inherited deny ACE overrides an inherited allow ACE.
read-write-execute permissions, the corresponding permission in the file mode mask for others is set to read-only. The write-execute permissions of the inherited ACE are ignored in the mapping. When an explicit deny ACE is added to a file’s ACL, the corresponding allow permissions are removed for group and others in the file mode mask, and the corresponding special explicit ACEs are updated accordingly. An inherited deny ACE has no effect on the mode mask.
The Permissions Entry window has three permissions that are important to StoreAll software: Read Data, Write Data, and Execute File. These map directly to Read, Write, and Execute in the Linux mode mask, as shown in the following table.
Uninstalling Windows StoreAll clients NOTE: It is not necessary to unmount the file system before uninstalling the Windows StoreAll client software. To uninstall a client, complete the following steps: 1. On the active management console, delete the Windows StoreAll clients from the configuration database: /bin/ibrix_client -d -h 2. Locally uninstall the Windows StoreAll client software from each StoreAll client via the Add or Remove Programs utility in the Control Panel.
8 Completing the 9730 Performance Module installation This chapter describes how to complete the installation of a 9730 Performance Module after installing the module hardware as described in the HP StoreAll 9730 Storage Performance Module Installation Instructions.
7. When the HP Storage screen appears, enter qr to install the software. 8. Repeat steps 4 through 7 on each server. Installing the first expansion blade The examples in this procedure show the installation of the first performance module, adding new blades to an existing 9730 cluster with two blades. Additional performance modules are installed in the same manner. The following screen shows the cluster before the expansion blades are added.
IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 3. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings. While setting the time zone, press Enter to list the available time zones in a separate window.
Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with IBRIX software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
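The hostname rules above are easy to check before committing the configuration. The sketch below validates a candidate name under those rules (lowercase letters, digits, and hyphens only; no underscores or uppercase). It is an illustrative helper, not part of the installer.

```shell
# Validate a hostname against the rules noted above: lowercase
# alphanumerics and hyphens only; underscores and uppercase are rejected.
valid_hostname() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9-]+$'
}

valid_hostname ib121-121 && echo accepted   # lowercase + hyphen: prints accepted
valid_hostname Node_1    || echo rejected   # uppercase and underscore: prints rejected
```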
Select the total number of networks and click OK. The Network Configuration screen is displayed. Select an interface and then select Continue. 5. The Configuration Summary lists your configuration. Select Commit to continue the installation. 6. The wizard now obtains OA/VC information from the chassis. If the wizard cannot obtain the information automatically, the following screen appears and you must enter the OA/VC credentials manually.
7. 8.
9. The Join Cluster screen appears. All available management consoles are displayed. Select the applicable management console to complete the configuration.
10. The wizard registers and starts a passive management console on the blade. 11. The blade is now registered with the active management console and a passive management console is installed and registered on the blade. 12. The GUI now shows the blade has been added to the cluster. Installing the second expansion blade The installation procedure is similar to the first node, except the firmware, chassis, SAS, and storage checks are already in place. 1.
IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 4. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings. While setting the time zone, press Enter to list the available time zones in a separate window.
Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with IBRIX software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
Select the total number of networks and click OK. The Network Configuration screen is displayed. Select an interface and then select Continue. 6. The Configuration Summary lists your configuration. Select Commit to continue the installation. 7. The wizard now obtains OA/VC information from the chassis. If the wizard cannot obtain the information automatically, the following screen appears and you must enter the OA/VC credentials manually.
8. 9.
10. The Join Cluster screen appears. All available management consoles are displayed. Select the applicable management console to complete the configuration.
If the applicable management console is not listed or you select Cancel, the Management Console IP screen is displayed. Enter the IP address of the management console and select OK. 11. The wizard registers and starts a passive management console on the blade. 12. The blade is now registered with the active management console and a passive management console is installed and registered on the blade. 13. The GUI now shows the fourth blade in the cluster.
Verify vendor storage Run the Linux pvscan command on the expansion blades to verify that the operating system can see the factory-provisioned preformatted segments (physical volumes): [root@ib121-121 ~]# pvscan PV /dev/sdh VG vg7a32272126c746bfb7829a688c61e5b8 PV /dev/sdg VG vg22d0827592e34a6b9cda1daa746ca4ba . . . . . . . . lvm2 [5.46 TB / 0 lvm2 [5.
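To confirm that the expected number of segments is visible, the pvscan output can be counted. The helper below parses pvscan-style text on stdin; the sample lines use the VG names from the example above, abbreviated. In practice, pipe the real pvscan output into it on each expansion blade.

```shell
# Count the physical volumes (segments) listed by pvscan output.
count_pvs() {
  grep -c '^ *PV /dev/'
}

# Sample (abbreviated) pvscan output; in practice: pvscan | count_pvs
count_pvs <<'EOF'
  PV /dev/sdh   VG vg7a32272126c746bfb7829a688c61e5b8   lvm2
  PV /dev/sdg   VG vg22d0827592e34a6b9cda1daa746ca4ba   lvm2
EOF
```

On this two-line sample the count printed is 2.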
To see the LUNs associated with the physical volumes, select Vendor Storage from the Navigator and select the new storage expansion module from the Vendor Storage Panel. In the lower Navigator, expand the Summary completely and select LUN.
9 Expanding a 9720 or 9320 10GbE cluster with a 9730 module Prerequisites The following prerequisites must be met before adding the expansion module to the existing cluster: 9720 systems • The 9730 expansion module must be cabled to the existing cluster as described in the HP StoreAll Storage Networking Best Practices Guide. • The servers in the existing cluster must be upgraded to the 6.3 release: 1.
IMPORTANT: Use an external USB drive that has external power; do not rely on the USB bus for power to drive the device. 3. Restart the blade to boot from the DVD-ROM. 4. When the HP Storage screen appears, enter qr to install the software. Repeat steps 2–4 on each expansion blade. Use a USB key 1. Copy the ISO to a Linux system. 2. Insert a USB key into the Linux system. 3. Execute cat /proc/partitions to find the USB device partition, which is displayed as /dev/sdX.
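The copy step itself is truncated above. A common way to transfer the ISO to the USB key is a raw dd copy, sketched here under the assumption that /dev/sdX is the device identified in step 3. This overwrites the destination device, so double-check the name before running it.

```shell
# Raw-copy an ISO image to a block device and flush buffers.
# CAUTION: the destination device is overwritten; verify the name first.
write_iso() {
  local iso=$1 dev=$2
  dd if="$iso" of="$dev" bs=1M   # raw image copy
  sync                           # flush before removing the key
}

# Hypothetical invocation; substitute your ISO name and the /dev/sdX
# device found in /proc/partitions:
# write_iso storeall.iso /dev/sdX
```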
2. The setup wizard verifies the firmware on the system and notifies you if a firmware update is needed. IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 3. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings.
Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with StoreAll software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field.
Select the number of networks and select OK. The Network Configuration screen is displayed. Select the interface and then select Continue. Or, to edit the interface, select it and then select Configure. The Edit Bonded Interface screen is displayed. Edit the interface details and select OK. 5. The Configuration Summary lists your configuration. Select Commit to continue the installation. The wizard now sets up the blade based on the information you entered. 6.
IMPORTANT: If you are using the default unified network layout, select OK on the Confirming Onboard Administrator and Virtual Connect Settings dialog box and go to step 8. 7. The wizard now validates the information you have entered. It performs the following tests: • Pings the active OA. • Verifies the OA password. • Verifies that OA at that IP address is the active OA.
To configure the iLO IP addresses manually, enter each iLO IP address on the Enter iLO IP Addresses dialog box. 10. The wizard lists the IP addresses you specified on the Confirm iLO IP Addresses dialog box. Select Ok to continue. 11. Configure the chassis interconnect bays (VCs and SAS switches). On the Get Interconnect IP Addresses dialog box, specify whether you want to configure the Interconnect (IC) IP addresses in sequence or manually. Use the space bar to select/deselect the check boxes.
To configure the Interconnect (IC) IP addresses in sequence, enter the first Interconnect (IC) IP address on the Set Interconnect IP Addresses dialog box. The installer then sets the remainder of the addresses sequentially for all 8 interconnect bays. For example, if 172.16.3.21 is the starting Interconnect (IC) IP Address, the installer sets the Interconnect (IC) IP Addresses in the range 172.16.3.21–172.16.3.28.
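The sequential assignment described above is easy to reproduce: given the starting address, this sketch lists the eight interconnect-bay addresses the installer would set. It assumes the range stays within the last octet, as in the example.

```shell
# List the 8 sequential interconnect IP addresses from a starting address.
list_ic_ips() {
  local base=${1%.*} last=${1##*.} i
  for i in $(seq 0 7); do
    echo "$base.$((last + i))"
  done
}

list_ic_ips 172.16.3.21   # prints 172.16.3.21 through 172.16.3.28
```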
14. The wizard now configures the OA. This process takes up to 45 minutes to complete. 15. Next, the wizard verifies the VC configuration and creates a new user called hpspAdmin. You may need to provide input for the following: • The wizard attempts to log into the Virtual Connect manager using the Administrator password you supplied earlier. If the attempt fails, you can retry the attempt or reenter the password. (Retry is helpful only if a timeout caused the VC password check to fail.)
Log into blade1 again when the Linux login prompt appears. The wizard makes the following checks: • Pings the VC management IP address. • Verifies the hpspAdmin account created earlier. If a check fails, take the corrective actions described on the GUI. 17. The wizard now configures the remaining bays for the Virtual Connect modules in the chassis.
18. The wizard verifies the VC configuration and then creates an hpspAdmin user account on each iLO. 19. The wizard validates the VC configuration and verifies the SAS firmware. If necessary, the SAS switches are flashed with the correct firmware. 20. The wizard verifies the SAS configuration. After determining the correct layout of the storage hardware, the wizard configures the SAS switch zoning so that couplets see the same storage.
21. The wizard powers off blades 2–16, applies the SAS configuration, and then reboots blade 1. Log into blade 1 when the Linux login prompt appears. 22. The wizard takes the following actions: • Verifies the SAS configuration to ensure that SAS zoning is set up correctly • Powers on blades 2–16 • Verifies storage firmware to ensure that it is set up correctly • Validates the LUN layout and configures it if necessary 23. The wizard now forms bond0 from eth0 and eth3. 24.
25. The Join Cluster screen appears. All available management consoles are displayed. Select the applicable management console to complete the configuration.
If the applicable management console is not listed or you select Cancel, the Management Console IP screen is displayed. Enter the IP address of the management console and select OK. 26. The wizard registers and starts a passive management console on the blade. The installation is complete. Installing the second expansion blade The installation procedure is similar to the first node, except the firmware, chassis, SAS, and storage checks are already in place. 1.
3. The setup wizard verifies the firmware on the system and notifies you if a firmware update is needed. IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes. Continuing the installation without upgrading to a supported firmware recipe can result in a defective system. 4. Provide the host name (blade name) on the Individual Server Setup screen. The installation detects the time settings.
5. The Network Configuration dialog box defines the server on bond0. Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with IBRIX software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks.
When you press F2, the Total number of networks screen is displayed. Select the total number of networks and click OK. The Network Configuration screen is displayed. Select an interface and then select Continue. 6. The Configuration Summary lists your configuration. Select Commit to continue the installation.
7. The wizard now obtains OA/VC information from the chassis. If the wizard cannot obtain the information automatically, the following screen appears and you must enter the OA/VC credentials manually. 8.
9. The X9000 Installation - Network Setup Complete screen is displayed. Select Join this IBRIX server to an existing cluster. 10. The Join Cluster screen appears. All available management consoles are displayed. Select the applicable management console to complete the configuration.
If the applicable management console is not listed or you select Cancel, the Management Console IP screen is displayed. Enter the IP address of the management console and select OK.
11. The wizard registers and starts a passive management console on the blade. The installation is complete.
The Storage panel now lists the original physical volumes and the newly discovered physical volumes. To discover the physical volumes from the CLI, run the following command on the active Fusion Manager, specifying just the new blades with the -h option: ibrix_pv -a -h ib121-123,ib121-124 To see the LUNs associated with the physical volumes, select Vendor Storage from the Navigator and select the new storage expansion module from the Vendor Storage Panel.
Expand an existing file system To add any or all of the new physical volumes to an existing file system, complete these steps: • Create a mountpoint for the file system on the new blades: # ibrix_mountpoint -c -h ib121-123,ib121-124 -m /ibfs1 • Mount the file system on the blades: # ibrix_mount -f ibfs1 -h ib121-123,ib121-124 -m /ibfs1 • Extend the file system.
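The mountpoint and mount steps above can be assembled into a reviewable plan for a given file system and blade list. The sketch below only prints the commands documented in this chapter; once reviewed, run them on the active Fusion Manager.

```shell
# Print the expansion command sequence for a file system and new blades.
# Review-only: the commands are echoed, not executed.
expansion_plan() {
  local fs=$1 mnt=$2 hosts=$3
  echo "ibrix_mountpoint -c -h $hosts -m $mnt"
  echo "ibrix_mount -f $fs -h $hosts -m $mnt"
}

expansion_plan ibfs1 /ibfs1 ib121-123,ib121-124
```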
10 Expanding a 9320 cluster with a 9320 starter kit The following prerequisites must be met before adding the new couplet to the existing cluster: • The 9320 starter kit must be cabled to the existing cluster as described in the HP StoreAll Storage Networking Best Practices Guide. • The servers in the existing cluster must be upgraded to the 6.3 release. Installing the latest StoreAll OS software release StoreAll OS version 6.3 is only available through the registered release process.
3. Click Begin. 4. Provide the host name in the Individual Server Setup screen. The installation detects the system time settings. 5. While setting the time zone, press Enter to list the available time zones in a separate window. Type the first letter of your time zone and then use the up and down arrow keys to find your time zone.
6. The default Network Configuration dialog box defines the server on bond0. Note the following: • The hostname, which is the name of the local server, can include alphanumeric characters and the hyphen (-) special character. It is a best practice to use only lowercase characters in hostnames; uppercase characters can cause issues with StoreAll software. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0.
• The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field. Later in this procedure, you can select either Web UI or ASCII mode to complete the installation. A gateway address is required to use the Web UI. • VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. StoreAll supports the ability to associate a VLAN tag with an FSN interface.
9. In the Add Bond screen, provide the bond name, VLAN Tag ID, IP address, netmask, and gateway. 10. The Configuration Summary lists your configuration. Select Commit to continue the installation. The setup wizard now configures the server according to the information you entered.
11. On the X9000 Installation Complete - Network Setup page, select a method to complete the installation: • Use the Getting Started Wizard. This method can be used only if the original 93xx cluster was configured with StoreAll OS 6.2 using the wizard. The cluster must use the unified network. Select either Continue with cluster setup on this console (Local web UI session) or Continue with cluster setup remotely (Remote web UI session).
The status of the server is now updated on the GUI. 5. Configure the second new server in the same manner. 6. When you click OK, the new server is updated on the GUI. StoreAll HA is also configured on the servers. 7. Click Next on the File Servers screen to save the configuration, and the servers will be automatically registered in the cluster. Exit the wizard.
2. The wizard registers and starts a passive management console on the blade. Repeat this procedure to install the next expansion server.
11 Using ibrixinit The ibrixinit utility is used to install or uninstall the Fusion Manager, file serving node, and statstool packages on a file serving node. It can also be used to install or uninstall the StoreAll client package on a Linux client. The ibrixinit utility is provided with the product distribution. Expand the distribution tarball, or mount the distribution DVD in a directory of your choice. This creates an ibrix subdirectory containing the installer program.
Option Description
-V Specifies the User virtual interface IP address for Fusion Manager.
-c clustername Specifies a name that identifies the entire cluster.
12 Setting up InfiniBand couplets InfiniBand is supported for 9300 and 9320 systems. The following logical network diagram shows an InfiniBand configuration. The following diagram shows network cabling for an InfiniBand configuration.
Downloading and installing the InfiniBand software HP supports Mellanox OFED 1.5.3 for use with 9300/9320 systems. To download the software, go to: http://h20311.www2.hp.com/hpc/us/en/infiniband-matrix.html Locate the HCA you have installed and click Firmware/Software. Select your version of Red Hat Enterprise Linux 5 Server, and then download the appropriate Mellanox InfiniBand Driver. Install OFED 1.5.3 on each file serving node, following the installation instructions provided with the software.
/lib/modules/2.6.18-194.el5/updates/kernel/net/sunrpc/auth_gss/auth_rpcgss.ko /lib/modules/2.6.18-194.el5/updates/kernel/fs/exportfs/exportfs.ko 3. Rename all of the above files to use the following suffix: /path/name.ofed. For example: mv /lib/modules/2.6.18-194.el5/updates/kernel/fs/nfs/nfs.ko /lib/modules/2.6.18-194.el5/updates/kernel/fs/nfs/nfs.ko.ofed 4. Clean up the modules with the depmod -a command and reboot the nodes. A reboot is necessary for the changes to take effect.
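Step 3 renames each module individually; the loop below applies the same mv pattern to a list of .ko files. The file list is whatever you collected in the previous step, and the commented invocation is illustrative. Remember to run depmod -a and reboot afterward, as step 4 requires.

```shell
# Rename kernel modules to the .ofed suffix, mirroring the mv example above.
rename_ofed() {
  local f
  for f in "$@"; do
    if [ -e "$f" ]; then
      mv "$f" "$f.ofed"
    fi
  done
}

# Example (paths from the listing above):
# rename_ofed /lib/modules/2.6.18-194.el5/updates/kernel/fs/nfs/nfs.ko \
#             /lib/modules/2.6.18-194.el5/updates/kernel/fs/nfs/nfs4.ko
# depmod -a   # then reboot for the changes to take effect
```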
End-to-end InfiniBand tests: • On the listener host, run ib_read_bw. • On the sender, run ib_read_bw. You might need to specify a port number with -i. • Run ibnetdiscover. This is a Voltaire tool that does a LID crawl. Troubleshoot switches: • Run the Voltaire switch port_verify utility. This utility reports status for the ports and indicates any problems. Also use port_verify -v.
[root@ib ~]# tar -xf ufm-client-utils-2.0.0-28.tgz [root@ib ~]# cd ufm-client-utils [root@ib ufm-client-utils]# ./install.sh Check dependencies ... OK Checking VoltaireOFED ....... OK Checking version ... 1.4.2_2 Check the distribution ... Red Hat Proceed ufm-discover package /usr/src/redhat/SRPMS/ufm-discover-1.0.0-1.src.rpm Succeed to building ufm-discover package Preparing...
13 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: • Product model names and numbers • Technical support registration number (if applicable) • Product serial numbers • Error messages • Operating system type and revision level • Detailed questions Related information Related documents are available on the Manuals page at: http://www.hp.
14 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
Glossary ACE access control entry. ACL access control list. ADS Active Directory Service. ALB Advanced load balancing. BMC Baseboard Management Controller. CIFS Common Internet File System. The protocol used in Windows environments for shared folders. CLI Command-line interface. An interface composed of commands used to control operating system responses. CSR Customer self repair. DAS Direct attach storage.
SELinux Security-Enhanced Linux. SFU Microsoft Services for UNIX. SID Secondary controller identifier number. SMB Server Message Block. The protocol used in Windows environments for shared folders. SNMP Simple Network Management Protocol. TCP/IP Transmission Control Protocol/Internet Protocol. UDP User Datagram Protocol. UID Unit identification. USM SNMP User Security Model. VACM SNMP View Access Control Model. VC HP Virtual Connect. VIF Virtual interface.