HP IBRIX X9720/StoreAll 9730 Storage Administrator Guide

Abstract
This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting. It does not document StoreAll file system features or standard Linux administrative tools and commands. For information about configuring and using StoreAll file system features, see the HP StoreAll Storage File System User Guide.
© Copyright 2009, 2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Upgrading the StoreAll software to the 6.3 release (page 10)
  Upgrading 9720 chassis firmware (page 12)
  Online upgrades for StoreAll software (page 12)
    Preparing for the upgrade (page 13)
    Performing the upgrade
  Configuring ports for a firewall (page 35)
  Configuring NTP servers (page 36)
  Configuring HP Insight Remote Support on StoreAll systems (page 36)
    Configuring the StoreAll cluster for Insight Remote Support
7 Configuring system backups (page 76)
  Backing up the Fusion Manager configuration (page 76)
  Using NDMP backup applications (page 76)
    Configuring NDMP parameters on the cluster (page 77)
    NDMP process management
  Controlling Statistics tool processes (page 113)
  Troubleshooting the Statistics tool (page 114)
  Log files (page 114)
  Uninstalling the Statistics tool
  Viewing the output from an add-on script (page 147)
  Viewing data collection information (page 149)
  Adding/deleting commands or logs in the XML file (page 149)
  Viewing software version numbers (page 149)
  Troubleshooting specific issues
  Automatic upgrades (page 185)
  Manual upgrades (page 186)
    Preparing for the upgrade (page 186)
    Saving the node configuration
D Warnings and precautions (page 220)
  Electrostatic discharge information (page 220)
    Preventing electrostatic discharge (page 220)
      Grounding methods (page 220)
1 Upgrading the StoreAll software to the 6.3 release This chapter describes how to upgrade to the 6.3 StoreAll software release. You can also use this procedure for any subsequent 6.3.x patches. IMPORTANT: Print the following table and check off each step as you complete it. NOTE: (Upgrades from version 6.0.x) CIFS share permissions are granted on a global basis in v6.0.X. When upgrading from v6.0.X, confirm that the correct share permissions are in place.
Table 1 Prerequisites checklist for all upgrades (continued) Step Step completed? Description 4. Set the crash kernel to 256M in the /etc/grub.conf file. The /etc/grub.conf file might contain multiple instances of the crash kernel parameter. Make sure you modify each instance that appears in the file. NOTE: Save a copy of the /etc/grub.conf file before you modify it. The following example shows the crash kernel set to 256M: kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/vg1/lv1 crashkernel=256M@16M 5.
Table 1 Prerequisites checklist for all upgrades (continued)
Step 10. For 9720 systems, delete the existing vendor storage by entering the following command:
ibrix_vs -d -n EXDS
The vendor storage is registered automatically after the upgrade.
Step 11. Record all host tunings, FS tunings, and FS mounting options by using the following commands:
1. To display file system tunings, enter:
ibrix_fs_tune -l >/local/ibrix_fs_tune-l.txt
2.
When performing an online upgrade, note the following:
• File systems remain mounted and client I/O continues during the upgrade.
• The upgrade process takes approximately 45 minutes, regardless of the number of nodes.
• The total I/O interruption per node IP is four minutes, allowing for a failover time of two minutes and a failback time of two additional minutes.
• Client I/O with a timeout of more than two minutes is supported.
After the upgrade Complete these steps: 1. If your cluster nodes contain any 10Gb NICs, reboot these nodes to load the new driver. You must do this step before you upgrade the server firmware, as requested later in this procedure. 2. Upgrade your firmware as described in “Upgrading firmware” (page 136). 3. Start any remote replication, rebalancer, or data tiering tasks that were stopped before the upgrade. 4.
5. Change to the /local/ibrix/ directory. cd /local/ibrix/ 6. Run the following upgrade script: ./auto_ibrixupgrade The upgrade script automatically stops the necessary services and restarts them when the upgrade is complete. The upgrade script installs the Fusion Manager on all file serving nodes. The Fusion Manager is in active mode on the node where the upgrade was run, and is in passive mode on the other file serving nodes.
4. Verify that the following partitions are present:
• /stage
• /alt
5. Verify that all FSN servers have a minimum of 4 GB of free/available storage on the /local partition by using the df command.
6. Verify that all FSN servers are not reporting any partition as 100% full (at least 5% free space) by using the df command (a sample df check follows this list).
7. Note any custom tuning parameters, such as file system mount options. When the upgrade is complete, you can reapply the parameters.
8. Stop all client I/O to the cluster or file systems.
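The df checks in steps 5 and 6 can be run as follows; this is a minimal sketch using the standard Linux df command, and the grep pattern is only an illustration:
# df -h /local
# df -h | grep "100%"
The first command shows the available space on the /local partition; the second lists any partition currently reported as 100% full.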
Performing the upgrade manually This upgrade method is supported only for upgrades from StoreAll software 6.x to the 6.3 release. Complete the following steps first for the server running the active Fusion Manager and then for the servers running the passive Fusion Managers: 1. This release is only available through the registered release process. To obtain the ISO image, contact HP Support to register for the release and obtain access to the software dropbox. 2.
Use ibrix_cifsconfig to set the parameters, specifying the value appropriate for your cluster (1=enabled, 0=disabled). The following examples set the parameters to the default values for the 6.3 release: ibrix_cifsconfig -t -S "smb_signing_enabled=0, smb_signing_required=0" ibrix_cifsconfig -t -S "ignore_writethru=1" The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. See the HP StoreAll Storage File System User Guide for more information about this feature.
The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp:
# /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp
Kernel update 2.6.9-89.35.1.ELsmp is compatible.
If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The StoreAll client software is then automatically updated with the new kernel, and StoreAll client services start automatically.
5. If any archive API shares exist for the file system, delete them.
NOTE: To list all HTTP shares, enter the following command:
ibrix_httpshare -l
To list only REST API (Object API) shares, enter the following command:
ibrix_httpshare -l -f <FSNAME> -v 1 | grep "objectapi: true" | awk '{ print $2 }'
In this instance, <FSNAME> is the file system.
NOTE: The REST API (Object API) functionality has expanded, and any REST API (Object API) shares you created in previous releases are now referred to as HTTP-StoreAll REST API shares in file-compatible mode. The 6.3 release is also introducing a new type of share called HTTP-StoreAll REST API share in Object mode. ibrix_httpshare -a -c -t -f -p -P -S “ibrixRestApiMode=filecompatible, anonymous=true” In this instance: 5.
To retry copying the configuration, use the following command:
/usr/local/ibrix/autocfg/bin/ibrixapp upgrade -f -s
• If the install of the new image succeeds, but the configuration restore fails and you need to revert the server to the previous install, run the following command and then reboot the machine. This step causes the server to boot from the old version (the alternate partition).
2. On the node now hosting the active Fusion Manager (ib51-102 in the example), unregister node ib51-101: [root@ib51-102 ~]# ibrix_fm -u ib51-101 Command succeeded! 3. On the node hosting the active Fusion Manager, register node ib51-101 and assign the correct IP address: [root@ib51-102 ~]# ibrix_fm -R ib51-101 -I 10.10.51.101 Command succeeded! NOTE: When registering a Fusion Manager, be sure the hostname specified with -R matches the hostname of the server.
If you did not see the Version mismatch, upgrade needed message in the command’s output, see “Troubleshooting an Express Query Manual Intervention Failure (MIF)” (page 152). Perform the following steps only if you see the Version mismatch, upgrade needed message in the command’s output:
1. Disable auditing by entering the following command:
ibrix_fs -A -f <FSNAME> -oa audit_mode=off
In this instance, <FSNAME> is the file system.
2.
2 Product description
HP X9720 and 9730 Storage are scalable, network-attached storage (NAS) products. The system combines HP StoreAll software with HP server and storage hardware to create a cluster of file serving nodes.
• Multiple environments. Operates in both the SAN and DAS environments. • High availability. The high-availability software protects servers. • Tuning capability. The system can be tuned for large or small-block I/O. • Flexible configuration. Segments can be migrated dynamically for rebalancing and data tiering. High availability and redundancy The segmented architecture is the basis for fault resilience—loss of access to one or more segments does not render the entire file system inaccessible.
3 Getting started This chapter describes how to log in to the system, boot the system and individual server blades, change passwords, and back up the Fusion Manager configuration. It also describes the StoreAll software management interfaces. IMPORTANT: Follow these guidelines when using your system: • Do not modify any parameters of the operating system or kernel, or update any part of the X9720/9730 Storage unless instructed to do so by HP; otherwise, the system could fail to operate properly.
File systems. Set up the following features as needed: • NFS, SMB (Server Message Block), FTP, or HTTP. Configure the methods you will use to access file system data. • Quotas. Configure user, group, and directory tree quotas as needed. • Remote replication. Use this feature to replicate changes in a source file system on one cluster to a target file system on either the same cluster or a second cluster. • Data retention and validation. Use this feature to manage WORM and retained files.
3. Double-click the first server name.
4. Log in as normal.
NOTE: By default, the first port is connected with the dongle to the front of blade 1 (that is, server 1). If server 1 is down, move the dongle to another blade.
Using the serial link on the Onboard Administrator
If you are connected to a terminal server, you can log in through the serial link on the Onboard Administrator.
If you are using HTTP to access the Management Console, open a web browser and navigate to the following location, specifying port 80:
http://<management_console_IP>:80/fusion
If you are using HTTPS to access the Management Console, navigate to the following location, specifying port 443:
https://<management_console_IP>:443/fusion
In these URLs, <management_console_IP> is the IP address of the Fusion Manager user VIF. The Management Console prompts for your user name and password.
System Status The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events: Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed. Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Statistics Historical performance graphs for the following items: • Network I/O (MB/s) • Disk I/O (MB/s) • CPU usage (%) • Memory usage (%) On each graph, the X-axis represents time and the Y-axis represents performance. Use the Statistics menu to select the servers to monitor (up to two), to change the maximum value for the Y-axis, and to show or hide resource usage distribution for CPU and memory. Recent Events The most recent cluster events.
example, you can sort the contents of the Mountpoint column in ascending or descending order, and you can select the columns that you want to appear in the display. Adding user accounts for Management Console access StoreAll software supports administrative and user roles. When users log in under the administrative role, they can configure the cluster and initiate operations such as remote replication or snapshots.
StoreAll client interfaces StoreAll clients can access the Fusion Manager as follows: • Linux clients. Use Linux client commands for tasks such as mounting or unmounting file systems and displaying statistics. See the HP StoreAll Storage CLI Reference Guide for details about these commands. • Windows clients. Use the Windows client GUI for tasks such as mounting or unmounting file systems and registering Windows clients.
# passwd ibrix You will be prompted to enter the new password. Configuring ports for a firewall IMPORTANT: To avoid unintended consequences, HP recommends that you configure the firewall during scheduled maintenance times. When configuring a firewall, you should be aware of the following: • SELinux should be disabled. • By default, NFS uses random port numbers for operations such as mounting and locking.
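The ports that must be allowed through the firewall are listed in the table that follows. As an illustration only, and not part of the shipped configuration, the following iptables commands show one way to open the StoreAll client and GUI ports on a node; adapt the chain and policy to your own firewall rules:
# iptables -A INPUT -p tcp --dport 9000:9002 -j ACCEPT
# iptables -A INPUT -p udp --dport 9000:9200 -j ACCEPT
# iptables -A INPUT -p tcp --dport 7777 -j ACCEPT
# iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
# service iptables save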
Port                                Description
137/udp, 138/udp, 139/tcp, 445/tcp  Between file serving nodes and SMB clients (user network)
9000:9002/tcp, 9000:9200/udp        Between file serving nodes and StoreAll clients (user network)
20/tcp, 20/udp, 21/tcp, 21/udp      Between file serving nodes and FTP clients (user network)
7777/tcp, 8080/tcp                  Between GUI and clients that need to access the GUI
5555/tcp, 5555/udp                  Dataprotector
631/tcp, 631/udp                    Internet Printing Protocol (IPP)
1344/tcp, 1344/udp                  ICAP

Configuring NTP servers
When the
NOTE: Configuring Phone Home enables the hp-snmp-agents service internally. As a result, a large number of error messages, such as the following, could occasionally appear in /var/log/hp-snmp-agents/cma.log: Feb 08 13:05:54 x946s1 cmahostd[25579]: cmahostd: Can't update OS filesys object: /ifs1 (PEER3023) The cmahostd daemon is part of the hp-snmp-agents service. This error message occurs because the file system exceeds TB.
IMPORTANT: You must compile and manually register the StoreAll MIB file by using HP Systems Insight Manager: 1. Download ibrixMib.txt from /usr/local/ibrix/doc/. 2. Rename the file to ibrixMib.mib. 3. In HP Systems Insight Manager, complete the following steps: a. Unregister the existing MIB by entering the following command: \mibs>mxmib -d ibrixMib.mib b. Copy the ibrixMib.mib file to the \mibs directory, and then enter the following commands: \mibs>mcompile ibrixMib.
Configuring the Virtual Connect Manager
To configure the Virtual Connect Manager on an X9720/9730 system, complete the following steps:
1. From the Onboard Administrator, select OA IP > Interconnect Bays > HP VC Flex-10 > Management Console.
2. On the HP Virtual Connect Manager, open the SNMP Configuration tab.
3. Configure the SNMP Trap Destination:
• Enter the Destination Name and IP Address (the CMS IP).
• Select SNMPv1 as the SNMP Trap Format.
• Specify public as the Community String.
4.
Configuring Phone Home settings To configure Phone Home on the GUI, select Cluster Configuration in the upper Navigator and then select Phone Home in the lower Navigator. The Phone Home Setup panel shows the current configuration.
Click Enable to configure the settings on the Phone Home Settings dialog box. Skip the Software Entitlement ID field; it is not currently used. The time required to enable Phone Home depends on the number of devices in the cluster, with larger clusters requiring more time.
To configure Entitlements, select a device and click Modify to open the dialog box for that type of device. The following example shows the Server Entitlement dialog box. The customer-entered serial number and product number are used for warranty checks at HP Support. Use the following commands to entitle devices from the CLI. The commands must be run for each device present in the cluster.
Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the Fusion Manager IP might be discovered as “Unknown.”
Devices are discovered as described in the following table.
Enter the read community string on the Credentials > SNMP tab. This string should match the Phone Home read community string. If the strings are not identical, the device will be discovered as “Unknown.” The following example shows discovered devices on HP SIM 6.3. File serving nodes are discovered as ProLiant server. Configuring device Entitlements Configure the CMS software to enable remote support for StoreAll systems.
Enter the following custom field settings in HP SIM: • Custom field settings for X9720/9730 Onboard Administrator The Onboard Administrator (OA) is discovered with OA IP addresses. When the OA is discovered, edit the system properties on the HP Systems Insight Manager.
The devices you entitled should be displayed as green in the ENT column on the Remote Support System List dialog box. If a device is red, verify that the customer-entered serial number and part number are correct and then rediscover the devices. Testing the Insight Remote Support configuration To determine whether the traps are working properly, send a generic test trap with the following command: snmptrap -v1 -c public .1.3.6.1.4.1.232 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .
Troubleshooting Insight Remote Support Devices are not discovered on HP SIM Verify that cluster networks and devices can access the CMS. Devices will not be discovered properly if they cannot access the CMS. The maximum number of SNMP trap hosts has already been configured If this error is reported, the maximum number of trapsink IP addresses have already been configured. For OA devices, the maximum number of trapsink IP addresses is 8.
4 Configuring virtual interfaces for client access StoreAll software uses a cluster network interface to carry Fusion Manager traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. To provide failover support for the Fusion Manager, a virtual interface is created for the cluster network interface.
3. To assign the IFNAME a default route for the parent cluster bond and the user VIFS assigned to FSNs for use with SMB/NFS, enter the following ibrix_nic command at the command prompt: # ibrix_nic -r -n IFNAME -h HOSTNAME-A -R 4. Configure backup monitoring, as described in “Configuring backup servers” (page 50). Creating a bonded VIF NOTE: The examples in this chapter use the unified network and create a bonded VIF on bond0.
For example:
# ibrix_nic -m -h node1 -A node2/bond0:1
# ibrix_nic -m -h node2 -A node1/bond0:1
# ibrix_nic -m -h node3 -A node4/bond0:1
# ibrix_nic -m -h node4 -A node3/bond0:1
Configuring automated failover
To enable automated failover for your file serving nodes, execute the following command:
ibrix_server -m [-h SERVERNAME]
Example configuration
This example uses two nodes, ib50-81 and ib50-82. These nodes are backups for each other, forming a backup pair.
NOTE: Because the backup NIC cannot be used as a preferred network interface for StoreAll clients, add one or more user network interfaces to ensure that HA and client communication work together. Configuring VLAN tagging VLAN capabilities provide hardware support for running multiple logical networks over the same physical networking hardware. To allow multiple packets for different VLANs to traverse the same physical interface, each packet must have a field added that contains the VLAN tag.
5 Configuring failover This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs. Agile management consoles The agile Fusion Manager maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. The agile Fusion Manager is installed on all file serving nodes when the cluster is installed. The Fusion Manager is active on one node, and is passive on the other nodes.
The command takes effect immediately. The failed-over Fusion Manager remains in nofmfailover mode until it is moved to passive mode using the following command: ibrix_fm -m passive NOTE: A Fusion Manager cannot be moved from nofmfailover mode to active mode.
What happens during a failover The following actions occur when a server is failed over to its backup: 1. The Fusion Manager verifies that the backup server is powered on and accessible. 2. The Fusion Manager migrates ownership of the server’s segments to the backup and notifies all servers and StoreAll clients about the migration. This is a persistent change. If the server is hosting the active FM, it transitions to another server. 3.
The wizard also attempts to locate the IP addresses of the iLOs on each server. If it cannot locate an IP address, you will need to enter the address on the dialog box. When you have completed the information, click Enable HA Monitoring and Auto-Failover for both servers. Use the NIC HA Setup dialog box to configure NICs that will be used for data services such as SMB or NFS. You can also designate NIC HA pairs on the server and its backup and enable monitoring of these NICs.
For example, you can create a user VIF that clients will use to access an SMB share serviced by server ib69s1. The user VIF is based on an active physical network on that server. To do this, click Add NIC in the section of the dialog box for ib69s1. On the Add NIC dialog box, enter a NIC name. In our example, the cluster uses the unified network and has only bond0, the active cluster FM/IP. We cannot use bond0:0, which is the management IP/VIF. We will create the VIF bond0:1, using bond0 as the base.
Next, enable NIC monitoring on the VIF. Select the new user NIC and click NIC HA. On the NIC HA Config dialog box, check Enable NIC Monitoring.
In the Standby NIC field, select New Standby NIC to create the standby on backup server ib69s2. The standby you specify must be available and valid. To keep the organization simple, we specified bond0:1 as the Name; this matches the name assigned to the NIC on server ib69s1. When you click OK, the NIC HA configuration is complete.
You can create additional user VIFs and assign standby NICs as needed. For example, you might want to add a user VIF for another share on server ib69s2 and assign a standby NIC on server ib69s1. You can also specify a physical interface such as eth4 and create a standby NIC on the backup server for it. The NICs panel on the GUI shows the NICs on the selected server.
The NICs panel for ib69s2, the backup server, shows that bond0:1 is an inactive, standby NIC and bond0:2 is an active NIC. Changing the HA configuration To change the configuration of a NIC, select the server on the Servers panel, and then select NICs from the lower Navigator. Click Modify on the NICs panel. The General tab on the Modify NIC Properties dialog box allows you to change the IP address and other NIC properties.
Configuring automated failover manually To configure automated failover manually, complete these steps: 1. Configure file serving nodes in backup pairs. 2. Identify power sources for the servers in the backup pair. 3. Configure NIC monitoring. 4. Enable automated failover. 1. Configure server backup pairs File serving nodes are configured in backup pairs, where each server in a pair is the backup for the other. This step is typically done when the cluster is installed.
3. Configure NIC monitoring NIC monitoring should be configured on user VIFs that will be used by NFS, SMB, FTP, or HTTP. IMPORTANT: When configuring NIC monitoring, use the same backup pairs that you used when configuring backup servers. Identify the servers in a backup pair as NIC monitors for each other. Because the monitoring must be declared in both directions, enter a separate command for each server in the pair.
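For example, following the ibrix_nic -m syntax shown earlier in this chapter, the two servers in a backup pair can be set to monitor each other's user VIF (the node names and the bond0:1 interface are placeholders for your own configuration):
# ibrix_nic -m -h node1 -A node2/bond0:1
# ibrix_nic -m -h node2 -A node1/bond0:1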
For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com: ibrix_nic -b -U s1.hp.com/eth2 Turn off automated failover: ibrix_server -m -U [-h SERVERNAME] To specify a single file serving node, include the -h SERVERNAME option. Failing a server over manually The server to be failed over must belong to a backup pair. The server can be powered down or remain up during the procedure.
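As a sketch only, based on the ibrix_server syntax used elsewhere in this guide and the -p reboot switch referenced in the firmware upgrade chapter, a manual failover typically takes a form such as the following; confirm the exact syntax in the HP StoreAll Storage CLI Reference Guide before using it:
# ibrix_server -f -p -h HOSTNAME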
Setting up HBA monitoring You can configure High Availability to initiate automated failover upon detection of a failed HBA. HBA monitoring can be set up for either dual-port HBAs with built-in standby switching or single-port HBAs, whether standalone or paired for standby switching via software. The StoreAll software does not play a role in vendor- or software-mediated HBA failover; traffic moves to the remaining functional port with no Fusion Manager involvement.
Turning HBA monitoring on or off If your cluster uses single-port HBAs, turn on monitoring for all of the ports to set up automated failover in the event of HBA failure. Use the following command: ibrix_hba -m -h HOSTNAME -p PORT For example, to turn on HBA monitoring for port 20.00.12.34.56.78.9a.bc on node s1.hp.com: ibrix_hba -m -h s1.hp.com -p 20.00.12.34.56.78.9a.
• HBA port monitoring • Status of automated failover (on or off) For each High Availability feature, the summary report returns status for each tested file serving node and optionally for their standbys: • Passed. The feature has been configured. • Warning. The feature has not been configured, but the significance of the finding is not clear.
User nics configured with a standby nic           PASSED
HBA ports monitored
  Hba port 21.01.00.e0.8b.2a.0d.6d monitored      FAILED    Not monitored
  Hba port 21.00.00.e0.8b.0a.0d.6d monitored      FAILED    Not monitored

Capturing a core dump from a failed node
The crash capture feature collects a core dump from a failed node when the Fusion Manager initiates failover of the node. You can use the core dump to analyze the root cause of the node failure.
3. Highlight the BIOS Serial Console & EMS option in main menu, and then press the Enter key.
4. Highlight the BIOS Serial Console Port option and then press the Enter key. Select the COM1 port, and then press the Enter key.
5. Highlight the BIOS Serial Console Baud Rate option, and then press the Enter key. Select the 115200 Serial Baud Rate.
6. Highlight the Server Availability option in main menu, and then press the Enter key. Highlight the ASR Timeout option and then press the Enter key.
6 Configuring cluster event notification Cluster events There are three categories for cluster events: Alerts. Disruptive events that can result in loss of access to file system data. Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition. Information. Normal events that change the cluster. The following table lists examples of events included in each category.
utilization dips 10% below the threshold. For example, a notification is sent the first time usage reaches 90% or more. The next notice is sent only if the usage declines to 80% or less (event is reset), and subsequently rises again to 90% or above.
Viewing email notification settings
The ibrix_event -L command provides comprehensive information about email settings and configured notifications.
ibrix_event -L
Email Notification : Enabled
SMTP Server        : mail.hp.com
From               : FM@hp.com
Reply To           : MIS@hp.com

EVENT                 LEVEL    TYPE     DESTINATION
-------------------   -----    -----    ------------
asyncrep.completed    ALERT    EMAIL    admin@hp.com
asyncrep.failed       ALERT    EMAIL    admin@hp.com
Configuring the SNMP agent The SNMP agent is created automatically when the Fusion Manager is installed. It is initially configured as an SNMPv2 agent and is off by default. Some SNMP parameters and the SNMP default port are the same, regardless of SNMP version. The default agent port is 161. SYSCONTACT, SYSNAME, and SYSLOCATION are optional MIB-II agent parameters that have no default values. NOTE: The default SNMP agent port was changed from 5061 to 161 in the StoreAll 6.1 release.
ibrix_snmptrap -c -h HOSTNAME -v 3 [-p PORT] -n USERNAME [-j {MD5|SHA}] [-k AUTHORIZATION_PASSWORD] [-y {DES|AES}] [-z PRIVACY_PASSWORD] [-x CONTEXT_NAME] [-s {on|off}] The following command creates a v3 trapsink with a named user and specifies the passwords to be applied to the default algorithms. If specified, passwords must contain at least eight characters.
Configuring groups and users A group defines the access control policy on managed objects for one or more users. All users must belong to a group. Groups and users exist only in SNMPv3. Groups are assigned a security level, which enforces use of authentication and privacy, and specific read and write views to identify which managed objects group members can read and write. The command to create a group assigns its SNMPv3 security level, read and write views, and context name.
7 Configuring system backups
Backing up the Fusion Manager configuration
The Fusion Manager configuration is automatically backed up whenever the cluster configuration changes. The backup occurs on the node hosting the active Fusion Manager. The backup file is stored at /tmp/fmbackup.zip on that node. The active Fusion Manager notifies the passive Fusion Manager when a new backup file is available. The passive Fusion Manager then copies the file to /tmp/fmbackup.zip.
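In addition to the automatic backups, you can force a backup before planned maintenance. The following invocation is a sketch based on the -B (backup) option of ibrix_fm; verify it against the HP StoreAll Storage CLI Reference Guide for your release:
ibrix_fm -B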
hard quota limit for the directory tree has been exceeded, NDMP cannot create a temporary file and the restore operation fails. Configuring NDMP parameters on the cluster Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters.
To configure NDMP parameters from the CLI, use the following command: ibrix_ndmpconfig -c [-d IP1,IP2,IP3,...] [-m MINPORT] [-x MAXPORT] [-n LISTENPORT] [-u USERNAME] [-p PASSWORD] [-e {0=disable,1=enable}] -v [{0=10}] [-w BYTES] [-z NUMSESSIONS] NDMP process management Normally all NDMP actions are controlled from the DMA.
Viewing or rescanning tape and media changer devices To view the tape and media changer devices currently configured for backups, select Cluster Configuration from the Navigator, and then select NDMP Backup > Tape Devices. If you add a tape or media changer device to the SAN, click Rescan Device to update the list. If you remove a device and want to delete it from the list, reboot all of the servers to which the device is attached.
8 Creating host groups for StoreAll clients A host group is a named set of StoreAll clients. Host groups provide a convenient way to centrally manage clients. You can put different sets of clients into host groups and then perform the following operations on all members of the group: • Create and delete mount points • Mount file systems • Prefer a network interface • Tune host parameters • Set allocation policies Host groups are optional.
To create one level of host groups beneath the root, simply create the new host groups. You do not need to declare that the root node is the parent. To create lower levels of host groups, declare a parent element for host groups. Do not use a host name as a group name. To create a host group tree using the CLI: 1. Create the first level of the tree: ibrix_hostgroup -c -g GROUPNAME 2.
For example, to add the domain rule 192.168 to the finance group: ibrix_hostgroup -a -g finance -D 192.168 Viewing host groups To view all host groups or a specific host group, use the following command: ibrix_hostgroup -l [-g GROUP] Deleting host groups When you delete a host group, its members are reassigned to the parent of the deleted group.
9 Monitoring cluster operations This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health. Monitoring X9720/9730 hardware The GUI displays status, firmware versions, and device information for the servers, chassis, and system storage included in X9720 and 9730 systems. The Management Console displays a top-level status of the chassis, server, and storage hardware components.
Select the server component that you want to view from the lower Navigator panel, such as NICs.
The following are the top-level options provided for the server: NOTE: Information about the Hardware node can be found in “Monitoring hardware components” (page 87). • • HBAs. The HBAs panel displays the following information: ◦ Node WWN ◦ Port WWN ◦ Backup ◦ Monitoring ◦ State NICs. The NICs panel shows all NICs on the server, including offline NICs.
• • • ◦ Route ◦ Standby Server ◦ Standby Interface Mountpoints. The Mountpoints panel displays the following information: ◦ Mountpoint ◦ Filesystem ◦ Access NFS. The NFS panel displays the following information: ◦ Host ◦ Path ◦ Options CIFS. The CIFS panel displays the following information: NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB. • • • ◦ Name ◦ Value Power.
Monitoring hardware components The front of the chassis includes server bays and the rear of the chassis includes components such as fans, power supplies, Onboard Administrator modules, and interconnect modules (VC modules and SAS switches). The following Onboard Administrator view shows a chassis enclosure on a StoreAll 9730 system. To monitor these components from the GUI: 1. Click Servers from the upper Navigator tree. 2.
Monitoring blade enclosures To view summary information about the blade enclosures in the chassis: 1. Expand the Hardware node. 2. Select the Blade Enclosure node under the Hardware node. The following summary information is displayed for the blade enclosure: • Status • Type • Name • UUID • Serial number Detailed information of the hardware components in the blade enclosure is provided by expanding the Blade Enclosure node and clicking one of the sub-nodes.
The sub-nodes under the Blade Enclosure node provide information about the hardware components within the blade enclosure:
Table 2 Obtaining detailed information about a blade enclosure
Bay
• Status
• Type
• Name
• UUID
• Serial number
• Model
• Properties
Temperature Sensor: The Temperature Sensor panel displays information for a bay, OA module, or for the blade enclosure.
• Status
• Type
• UUID
• Properties
Fan: The Fan panel displays information for a blade enclosure.
Obtaining server details The Management Console provides detailed information for each server in the chassis. To obtain summary information for a server, select the Server node under the Hardware node. The following overview information is provided for each server: • Status • Type • Name • UUID • Serial number • Model • Firmware version • Message1 • Diagnostic Message1 1 Column dynamically appears depending on the situation.
Table 3 Obtaining detailed information about a server Panel name Information provided CPU • Status • Type • Name • UUID • Model • Location ILO Module • Status • Type • Name • UUID • Serial Number • Model • Firmware Version • Properties Memory DiMM • Status • Type • Name • UUID • Location • Properties NIC • Status • Type • Name • UUID • Properties Power Management Controller • Status • Type • Name • UUID • Firmware Version Storage Cluster • Status • Type • Name • UUID
Table 3 Obtaining detailed information about a server (continued) Panel name Information provided Drive: Displays information about each drive in a storage cluster. • Status • Type • Name • UUID • Serial Number • Model • Firmware Version • Location • Properties Storage Controller (Displayed for a server) • Status • Type • Name • UUID • Serial Number • Model • Firmware Version • Location • Message • Diagnostic message Volume: Displays volume information for each server.
Table 3 Obtaining detailed information about a server (continued) Panel name Information provided Temperature Sensor: Displays information for each temperature sensor. • Status • Type • Name • UUID • Locations • Properties Monitoring storage and storage components Select Vendor Storage from the Navigator tree to display status and device information for storage and storage components. The Vendor Storage panel lists the HP 9730 CX storage systems included in the system.
The Management Console provides a wide range of information about vendor storage, as shown in the following image. Drill down into the following components in the lower Navigator tree to obtain additional details:
• Servers. The Servers panel lists the host names for the attached storage.
• Storage Cluster. The Storage Cluster panel provides detailed information about the storage cluster. See “Monitoring storage clusters” (page 96) for more information.
• Storage Switch.
Monitoring storage clusters The Management Console provides detailed information for each storage cluster. Click one of the following sub-nodes displayed under the Storage Clusters node to obtain additional information: • Drive Enclosure. The Drive Enclosure panel provides detailed information about the drive enclosure. Expand the Drive Enclosure node to view information about the power supply and sub enclosures. See “Monitoring drive enclosures for a storage cluster” (page 96) for more information.
Expand the Drive Enclosure node to provide additional information about the power supply and sub enclosures. Table 4 Details provided for the drive enclosure Node Where to find detailed information Power Supply “Monitoring the power supply for a storage cluster” (page 97) Sub Enclosure “Monitoring sub enclosures” (page 98) Monitoring the power supply for a storage cluster Each drive enclosure also has power supplies.
Monitoring sub enclosures
Expand the Sub Enclosure node to obtain information about the following components for each sub-enclosure:
• Drive. The Drive panel provides the following information about the drives in a sub-enclosure:
◦ Status
◦ Volume Name
◦ Type
◦ UUID
◦ Serial Number
◦ Model
◦ Firmware Version
◦ Location. This column displays where the drive is located. For example, assume the location for a drive in the list is Port: 52 Box 1 Bay: 7.
◦ Name ◦ UUID ◦ Properties Monitoring pools for a storage cluster The Management Console lists a Pool node for each pool in the storage cluster. Select one of the Pool nodes to display information about that pool. When you select the Pool node, the following information is displayed in the Pool panel: • Status • Type • Name • UUID • Properties To obtain details on the volumes in the pool, expand the Pool node and then select the Volume node.
• UUID • Properties The following image shows information for two volumes named LUN_15 and LUN_16 on the Volume panel. Monitoring storage controllers for a storage cluster The Management Console displays a Storage Controller node for each storage controller in the storage cluster.
• UUID • Properties. Provides information about the read, write and cache size properties. In the following image, the IO Cache Module panel shows an IO cache module with read/write properties enabled.
In the following image, the LUNs panel displays the LUNs for a storage cluster. Monitoring the status of file serving nodes The dashboard on the GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information.
Monitoring cluster events StoreAll software events are assigned to one of the following categories, based on the level of severity: • Alerts. A disruptive event that can result in loss of access to file system data. For example, a segment is unavailable or a server is unreachable. • Warnings. A potentially disruptive condition where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Administrator module., eventId:000D0004, location:OAmodule in chassis S/N:USE123456W, level:ALERT
FILESYSTEM     :
HOST           : ix24-03.ad.hp.com
USER NAME      :
OPERATION      :
SEGMENT NUMBER :
PV NUMBER      :
NIC            :
HBA            :
RELATED EVENT  : 0
The ibrix_event -l and -i commands can include options that act as filters to return records associated with a specific file system, server, alert level, and start or end time. See the HP StoreAll Network Storage System CLI Reference Guide for more information.
The detailed report consists of the summary report and the following additional data: • Summary of the test results • Host information such as operational state, performance data, and version data • Nondefault host tunings • Results of the health checks By default, the Result Information field in a detailed report provides data only for health checks that received a Failed or a Warning result.
6.3.72 6.3.72 GNU/Linux Red Hat Enterprise Linux Server release 5.5 (Tikanga) 2.6.18-194. x86_64 x86_64

Remote Hosts
============
Host     Type    Network     Protocol  Connection State
-------  ------  ----------  --------  ----------------
bv18-03  Server  10.10.18.3  true      S_SET S_READY S_SENDHB
bv18-04  Server  10.10.18.
To view the statistics from the CLI, use the following command:
ibrix_stats -l [-s] [-c] [-m] [-i] [-n] [-f] [-h HOSTLIST]
Use the options to view only certain statistics or to view statistics for specific file serving nodes:
-s  Summary statistics
-c  CPU statistics
-m  Memory statistics
-i  I/O statistics
-n  Network statistics
-f  NFS statistics
-h  The file serving nodes to be included in the report
Sample output follows:
---------Summary------------
HOST Status CPU Disk(MB/s) Net(MB/s) lab12-10.hp.
10 Using the Statistics tool The Statistics tool reports historical performance data for the cluster or for an individual file serving node. You can view data for the network, the operating system, and the file systems, including the data for NFS, memory, and block devices. Statistical data is transmitted from each file serving node to the Fusion Manager, which controls processing and report generation.
Upgrading the Statistics tool from StoreAll software 6.0 The statistics history is retained when you upgrade to version 6.1 or later. The Statstool software is upgraded when the StoreAll software is upgraded using the ibrix_upgrade and auto_ibrixupgrade scripts. Note the following: • If statistics processes were running before the upgrade started, those processes will automatically restart after the upgrade completes successfully.
The Time View lists the reports in chronological order, and the Table View lists the reports by cluster or server. Click a report to view it.
Generating reports To generate a new report, click Request New Report on the StoreAll Management Console Historical Reports GUI. To generate a report, enter the necessary specifications and click Submit. The completed report appears in the list of reports on the statistics home page. When generating reports, be aware of the following: • A report can be generated only from statistics that have been gathered. For example, if you start the tool at 9:40 a.m. and ask for a report from 9:00 a.m. to 9:30 a.m.
Maintaining the Statistics tool
Space requirements
The Statistics tool requires about 4 MB per hour for a two-node cluster. To manage space, take the following steps:
• Maintain sufficient space (4 GB to 8 GB) for data collection in the /usr/local/statstool/histstats directory.
• Monitor the space in the /local/statstool/histstats/reports/ directory. For the default values, see “Changing the Statistics tool configuration” (page 112).
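To check how much space the Statistics tool is currently using, standard Linux commands are sufficient; for example, using the directories listed above:
# du -sh /usr/local/statstool/histstats
# du -sh /local/statstool/histstats/reports/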
The following actions occur after a successful failover: • If Statstool processes were running before the failover, they are restarted. If the processes were not running, they are not restarted. • The Statstool passive management console is installed on the StoreAll Fusion Manager in maintenance mode. • Setrsync is run automatically on all cluster nodes from the current active Fusion Manager. • Loadfm is run automatically to present all file system data in the cluster to the active Fusion Manager.
Troubleshooting the Statistics tool Testing access To verify that ssh authentication is enabled and data can be obtained from the nodes without prompting for a password, run the following command: # /usr/local/ibrix/stats/bin/stmanage testpull Other conditions • Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/usr/local/statstool/histstats/ by default), restart the Statistics tool processes on all nodes.
11 Maintaining the system Shutting down the system To shut down the system completely, first shut down the StoreAll software, and then power off the hardware. Shutting down the StoreAll software Use the following procedure to shut down the StoreAll software. Unless noted otherwise, run the commands from the node hosting the active Fusion Manager. 1. Stop any active remote replication, data tiering, or rebalancer tasks.
7. Unmount all file systems on the cluster nodes: ibrix_umount -f To unmount file systems from the GUI, select Filesystems > unmount. 8. Verify that all file systems are unmounted: ibrix_fs -l If a file system fails to unmount on a particular node, continue with this procedure. The file system will be forcibly unmounted during the node shutdown. 9. Shut down all StoreAll Server services and verify the operation: # pdsh -a /etc/init.d/ibrix_server stop | dshbak # pdsh -a /etc/init.
Starting up the system To start an X9720 system, first power on the hardware components, and then start the StoreAll Software. Powering on the system hardware To power on the system hardware, complete the following steps: 1. Power on the 9100cx disk capacity block(s). 2. Power on the 9100c controllers. 3. Wait for all controllers to report “on” in the 7-segment display. 4. Power on the file serving nodes.
Performing a rolling reboot The rolling reboot procedure allows you to reboot all file serving nodes in the cluster while the cluster remains online. Before beginning the procedure, ensure that each file serving node has a backup node and that StoreAll HA is enabled. See “Configuring virtual interfaces for client access” (page 49) and “Configuring High Availability on the cluster” (page 54) for more information about creating standby backup pairs, where each server in a pair is the standby for the other.
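While working through the rolling reboot, it is useful to confirm the state of each node from the active Fusion Manager before moving to the next one. The ibrix_server -l command, used elsewhere in this guide, lists each server and its state:
# ibrix_server -l
Typically you continue with the next node only after the rebooted node is again reported as Up.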
You can locally override host tunings that have been set on StoreAll Linux clients by executing the ibrix_lwhost command. Tuning file serving nodes on the GUI The Modify Server(s) Wizard can be used to tune one or more servers in the cluster. To open the wizard, select Servers from the Navigator and then click Tuning Options from the Summary panel. The General Tunings dialog box specifies the communications protocol (TCP or UDP) and the number of admin and server threads.
The Module Tunings dialog box adjusts various advanced parameters that affect server operations. On the Servers dialog box, select the servers to which the tunings should be applied.
Tuning file serving nodes from the CLI All Fusion Manager commands for tuning hosts include the -h HOSTLIST option, which supplies one or more hostgroups. Setting host tunings on a hostgroup is a convenient way to tune a set of clients all at once. To set the same host tunings on all clients, specify the clients hostgroup. CAUTION: Changing host tuning settings alters file system performance. Contact HP Support before changing host tuning settings.
• To list host tuning settings on file serving nodes, StoreAll clients, and hostgroups, use the following command. Omit the -h argument to see tunings for all hosts. Omit the -n argument to see all tunings. ibrix_host_tune -l [-h HOSTLIST] [-n OPTIONS] • To set the communications protocol on nodes and hostgroups, use the following command. To set the protocol on all StoreAll clients, include the -g clients option.
Migrating segments Segment migration transfers segment ownership but it does not move segments from their physical locations in the storage system. Segment ownership is recorded on the physical segment itself, and the ownership data is part of the metadata that the Fusion Manager distributes to file serving nodes and StoreAll clients so that they can locate segments.
The new owner of the segment must be able to see the same storage as the original owner. The Change Segment Owner dialog box lists the servers that can see the segment you selected. Select one of these servers to be the new owner. The Summary dialog box shows the segment migration you specified. Click Back to make any changes, or click Finish to complete the operation. To migrate ownership of segments from the CLI, use the following commands.
The following command migrates ownership of segments ilv2 and ilv3 in file system ifs1 to server2: ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h server2 Migrate ownership of all segments owned by specific servers: ibrix_fs -m -f FSNAME -H HOSTNAME1,HOSTNAME2 [-M] [-F] [-N] For example, to migrate ownership of all segments in file system ifs1 from server1 to server2: ibrix_fs -m -f ifs1 -H server1,server2 Evacuating segments and removing storage from the cluster Before removing storage used for a StoreAll software
On the Evacuate Advanced dialog box, locate the segment to be evacuated and click Source. Then locate the segments that will receive the data from the segment and click Destination. If the file system is tiered, be sure to select destination segments on the same tier as the source segment. The Summary dialog box lists the source and destination segments for the evacuation. Click Back to make any changes, or click Finish to start the evacuation.
The Active Tasks panel reports the status of the evacuation task. When the task is complete, it will be added to the Inactive Tasks panel. 4. When the evacuation is complete, run the following command to retire the segment from the file system: ibrix_fs -B -f FSNAME -n BADSEGNUMLIST The segment number associated with the storage is not reused. The underlying LUN or volume can be reused in another file system or physically removed from the storage solution when this step is complete. 5.
3015A4021.C34A994C, poid 3015A4021.C34A994C, primary 4083040FF.7793558E poid 4083040FF.7793558E Use the inum2name utility to translate the primary inode ID into the file name. Removing a node from a cluster In the following procedure, the cluster contains four nodes: FSN1, FSN2, FSN3, and FSN4. FSN4 is the node being removed. The user NIC for FSN4 is bond0:1. The file system name is ibfs1, which is mounted on /ibfs1 and shared as ibfs1 through NFS and SMB .
7. Remove all NFS and SMB shares from FSN4 (in this example, ibfs1 is shared via NFS and CIFS): ibrix_exportfs -U -h FSN4 -p *:/ibfs1 ibrix_cifs -d -s ibfs1 -h FSN4 8. Unmount ibfs1 from FSN4 and delete the mountpoint on FSN4 from the cluster: ibrix_umount –f ibfs1 –h FSN4 ibrix_mountpoint –d –h FSN4 –m /ibfs1 9. Remove FSN4 from AgileFM quorum participation: ibrix_fm -u FSN4 10. Delete FSN4 from the cluster: ibrix_server -d -h FSN4 11. Reconfigure High Availability on FSN3, if needed. 12.
When creating user network interfaces for file serving nodes, keep in mind that nodes needing to communicate for file system coverage or for failover must be on the same network interface. Also, nodes set up as a failover pair must be connected to the same network interface. For a highly available cluster, HP recommends that you put protocol traffic on a user network and then set up automated failover for it (see “Configuring High Availability on the cluster” (page 54)).
Setting network interface options in the configuration database To make a VIF usable, execute the following command to specify the IP address and netmask for the VIF. You can also use this command to modify certain ifconfig options for a network. ibrix_nic -c -n IFNAME -h HOSTNAME [-I IPADDR] [-M NETMASK] [-B BCASTADDR] [-T MTU] For example, to set netmask 255.255.0.0 and broadcast address 10.0.0.4 for interface eth3 on file serving node s4.hp.com: ibrix_nic -c -n eth3 -h s4.hp.com -M 255.255.0.0 -B 10.0.
Preferring a network interface for a hostgroup You can prefer an interface for multiple StoreAll clients at one time by specifying a hostgroup. To prefer a user network interface for all StoreAll clients, specify the clients hostgroup. After preferring a network interface for a hostgroup, you can locally override the preference on individual StoreAll clients with the command ibrix_lwhost.
1. 2. 3. 4. Unmount the file system from the client. Change the client’s IP address. Reboot the client or restart the network interface card. Delete the old IP address from the configuration database: ibrix_client -d -h CLIENT 5. Re-register the client with the Fusion Manager: register_client -p console_IPAddress -c clusterIF -n ClientName 6. Remount the file system on the client. Changing the cluster interface If you restructure your networks, you might need to change the cluster interface.
ibrix_nic -d -n eth3 -h s1.hp.com,s2.hp.com Viewing network interface information Executing the ibrix_nic command with no arguments lists all interfaces on all file serving nodes. Include the -h option to list interfaces on specific hosts. ibrix_nic -l -h HOSTLIST The following table describes the fields in the output. Field Description BACKUP HOST File serving node for the standby network interface. BACKUP-IF Standby network interface. HOST File serving node.
12 Licensing This chapter describes how to view your current license terms and how to obtain and install new StoreAll software product license keys. Viewing license terms The StoreAll software license file is stored in the installation directory. To view the license from the GUI, select Cluster Configuration in the Navigator and then select License. To view the license from the CLI, use the following command: ibrix_license -i The output reports your current node count and capacity limit.
13 Upgrading firmware Before performing any of the procedures in this chapter, read the important warnings, precautions, and safety information in “Warnings and precautions” (page 220) and “Regulatory information” (page 224).
◦ 6Gb_SAS_BL_SW ◦ 3Gb_SAS_BL_SW (9720 systems) Enter the following command to show which components could be flagged for flash upgrade.
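Based on the hpsp_fmt usage shown in the steps that follow, the firmware recommendation report is generated with the -fr option:
# /opt/hp/platform/bin/hpsp_fmt -fr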
1. Run the /opt/hp/platform/bin/hpsp_fmt -fr command to verify that the firmware on this node and subsequent nodes in this cluster is correct and up-to-date. This command should be performed before placing the cluster back into service. The following figure shows an example of the firmware recommendation output and the corrective component upgrade flash (figure callouts: 1. Server, 2. Chassis, 3. Storage).
4. Perform the flash operation by entering the following command and then go to step 5:
hpsp_fmt -flash -c --force
5. If the components require a reboot on flash, fail over the FSN for continuous operation as described in the following steps:
NOTE: Although the following steps are based on a two-node cluster, all steps can be used in clusters with more than two nodes.
a. Determine whether the node to be flashed is the active Fusion Manager by entering the following command:
ibrix_fm -i
b.
NOTE: The -p switch in the failover operation lets you reboot the affected node, which in turn allows the flash of the following components:
• BIOS
• NIC
• Power_Mgmt_Ctlr
• SERVER_HDD
• Smart_Array_Ctlr
• Storage_Ctlr
f. Once the FSN boots up, verify that the software reports the FSN as Up, FailedOver by entering the following command:
ibrix_server -l
g. Confirm that the recommended flash was completed successfully by entering the following command:
hpsp_fmt -fr server -o /tmp/fwrecommend.
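For reference, the manual failover mentioned in step 5 uses the same ibrix_server failover form that appears in the upgrade procedures later in this guide. A minimal sketch, assuming a hypothetical node name node2 for the FSN being flashed:
ibrix_server -f -p -h node2
The -p option reboots the node as part of the failover, which is what allows the reboot-dependent components listed above to be flashed.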
for information about installing StoreAll software on the blades in the module. These documents are located on the StoreAll manuals page: http://www.hp.com/support/StoreAllManuals Adding new server blades on 9720 systems NOTE: This requires the use of the Quick Restore DVD. See “Recovering the X9720/9730 Storage” (page 154) for more information. 1. On the front of the blade chassis, in the next available server blade bay, remove the blank. 2. Prepare the server blade for installation. 3.
4. Install the software on the server blade. The Quick Restore DVD is used for this purpose. See “Recovering the X9720/9730 Storage” (page 154) for more information.
5. Set up fail over. For more information, see the HP StoreAll Storage File System User Guide.
6. Enable high availability (automated failover) by running the following command on server 1:
# ibrix_server -m
7. Discover storage on the server blade:
ibrix_pv -a
8.
14 Troubleshooting
Collecting information for HP Support with Ibrix Collect
Ibrix Collect is a log collection utility that allows you to collect relevant information for diagnosis by HP Support when system issues occur. The collection can be triggered manually using the GUI or CLI, or automatically during a system crash.
4. Click Okay. To collect logs and command results using the CLI, use the following command: ibrix_collect -c -n NAME NOTE: • Only one manual collection of data is allowed at a time. • When a node restores from a system crash, the vmcore under /var/crash/ directory is processed. Once processed, the directory will be renamed /var/crash/ _PROCESSED. HP Support may request that you send this information to assist in resolving the system crash.
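Returning to the CLI collection command above, a manual collection might, for example, be given a name that identifies the support case (the name is hypothetical):
ibrix_collect -c -n support_case_1234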
To specify more than one collection to be deleted at a time from the CLI, provide the names separated by a semicolon. To delete all data collections manually from the CLI, use the following command: ibrix_collect -d -F Configuring Ibrix Collect You can configure data collection to occur automatically upon a system crash. This collection will include additional crash digester output. The archive filename of the system crash-triggered collection will be in the format _crash_.tgz.
To set up email settings to send cluster configurations using the CLI, use the following command:
ibrix_collect -C -m [-s ] [-f ] [-t ]
NOTE: More than one email ID can be specified for the -t option, separated by a semicolon. The “From” and “To” settings for this SMTP server are Ibrix Collect specific.
Obtaining custom logging from ibrix_collect add-on scripts
You can create add-on scripts that capture custom StoreAll and operating system commands and logs.
2. Place the add-on script in the following directory:
/usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts/
The following example shows several add-on scripts stored in the ibrix_collect_add_on_scripts directory:
[root@host2 /]# ls -l /usr/local/ibrix/ibrixcollect/ibrix_collect_add_on_scripts/
total 8
-rwxr-xr-x 1 root root 93 Dec 7 13:39 60_addOn.sh
-rwxrwxrwx 1 root root 48 Dec 20 09:22 63_AddOnTest.sh
3.
2. The output of the add-on scripts is available under the tar file of the individual node. To view the contents of the directory, enter the following command:
[root@host2 /]# ls -l
The following is an example of the output displayed:
total 3520
-rw-r--r-- 1 root root 2021895 Dec 20 12:41 addOnCollection.tgz
3. Extract the tar file containing the output of the add-on script.
7. View the contents of the //logs/add_on_script/local/ibrixcollect/ibrix_collect_additional_data directory:
[root@host2 ibrix_collect_additional_data]# ls -l
The command displays the following output:
total 4
-rw-r--r-- 1 root root 2636 Dec 20 12:39 63_AddOnTest.out
In this instance, 63_AddOnTest.out displays the output of the add-on script.
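Putting the preceding steps together, the following is a minimal add-on script sketch. The script name, the commands it captures, and the assumption that files written to the ibrix_collect_additional_data directory are gathered into the node's collection archive are illustrative only; adapt them to your environment.
#!/bin/bash
# 64_example_addOn.sh - hypothetical ibrix_collect add-on script
# Assumption: output written to the ibrix_collect_additional_data directory
# is picked up in the node's collection archive, as in the listing above.
OUTDIR=/local/ibrixcollect/ibrix_collect_additional_data
mkdir -p "$OUTDIR"
{
  echo "=== date ==="; date
  echo "=== uptime ==="; uptime
  echo "=== mounted file systems ==="; mount
} > "$OUTDIR/64_example_addOn.out" 2>&1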
Troubleshooting specific issues
Software services
Cannot start services on a file serving node, or Linux StoreAll client
SELinux might be enabled. To determine the current state of SELinux, use the getenforce command. If it returns enforcing, disable SELinux using either of these commands:
setenforce Permissive
setenforce 0
To permanently disable SELinux, edit its configuration file (/etc/selinux/config) and set the SELINUX parameter to either permissive or disabled. SELinux will be stopped at the next boot.
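For reference, the corresponding lines in /etc/selinux/config would look like one of the following (standard Linux configuration; set only one value for the SELINUX parameter):
SELINUX=permissive
# or, to disable SELinux entirely:
SELINUX=disabled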
Windows StoreAll clients Logged in but getting a “Permission Denied” message The StoreAll client cannot access the Active Directory server because the domain name was not specified. Reconfigure the Active Directory settings, specifying the domain name. See the HP StoreAll Storage Installation Guide for more information. Verify button in the Active Directory Settings tab does not work This issue has the same cause as the above issue.
Troubleshooting an Express Query Manual Intervention Failure (MIF)
An Express Query Manual Intervention Failure (MIF) is a critical error that occurred during Express Query execution. These are failures that Express Query cannot recover from automatically. After a MIF occurs, the specific file system is logically removed from Express Query, and manual intervention is required to perform the recovery.
5. Sometimes cluster and file system health checks have an OK status but Express Query is still in a MIF condition for one or several specific file systems. This unlikely situation occurs when some data has been corrupted and cannot be recovered. To resolve this situation:
a. If there is a full backup of the file system involved, perform a restore.
b. If there is no full backup:
1. Disable Express Query for the file system by entering the following command:
ibrix_fs -T -D -f
2.
15 Recovering the X9720/9730 Storage Use these instructions if the system fails and must be recovered, or to add or replace a server blade. CAUTION: The Quick Restore DVD restores the file serving node to its original factory state. This is a destructive process that completely erases all of the data on local hard drives. Obtaining the latest StoreAll software release StoreAll OS version 6.3 is only available through the registered release process.
For example:
ibrix_nic -m -h titan16 -D titan15/eth2
Restoring an X9720 node with StoreAll 6.1 or later
If you are restoring an X9720 node with StoreAll OS 6.1 or later, the restore process will not work properly if you are using management IP addresses and credentials that are not the factory defaults. To resolve this issue:
1. Image the StoreAll OS software on the server blade. (Imaging means the software is copied to the local hard drives of the server blade.)
2. Log in to the node.
1. Log in to the server. • 9730 systems. The welcome screen for the installation wizards appears, and the setup wizard then verifies the firmware on the system and notifies you if a firmware update is needed. (The installation/configuration times noted throughout the wizard are for a new installation. Replacing a node requires less time.) IMPORTANT: HP recommends that you update the firmware before continuing with the installation. 9730 systems have been tested with specific firmware recipes.
NOTE: If a management console is not located, the following screen appears. Select Enter FM IP and go to step 5. 3. The Verify Hostname dialog box displays a hostname generated by the management console. Enter the correct hostname for this server. The Verify Configuration dialog box shows the configuration for this node. Because you changed the hostname in the previous step, the IP address is incorrect on the summary. Select Reject, and the following screen appears. Select Enter FM IP. 4.
5. On the Server Networking Configuration dialog box, configure this server for bond0, the cluster network. Note the following: • The hostname can include alphanumeric characters and the hyphen (-) special character. Do not use an underscore (_) in the hostname. • The IP address is the address of the server on bond0. • The default gateway provides a route between networks. If your default gateway is on a different subnet than bond0, skip this field. The Configuration Summary lists your configuration.
6. This step applies only to 9730 systems. If you are restoring a blade on an X9720 system, go to step 8. The 9730 blade being restored needs OA/VC information from the chassis. It can obtain this information directly from blade 1, or you can enter the OA/VC credentials manually.
7. • Storage configuration • Networking on the blade On the Join a Cluster – Step 2 dialog box, enter the requested information. NOTE: On the dialog box, Register IP is the Fusion Manager (management console) IP, not the IP you are registering for this blade. 8. The Network Configuration dialog box lists the interfaces configured on the system. If the information is correct, select Continue and go to the next step.
NOTE: If you are recovering an X9720 node with StoreAll OS 6.1 or later, you might be unable to change the cluster network for bond1. See “Manually recovering bond1 as the cluster” (page 165) for more information. 9. The Configuration Summary dialog box lists the configuration you specified. Select Commit to apply the configuration. 10. Because the hostname you specified was previously registered with the management console, the following message appears. Select Yes to replace the existing server.
11. The wizard now registers a passive Management Console (Fusion Manager) on the blade and then configures and starts it. The wizard then runs additional setup scripts. NOTE: If you are connected to iLO and using the virtual console, you will lose the iLO connection when the platform scripts are executed. After a short period of time you can again connect to the iLO and bring up the virtual console.
6. If you disabled NIC monitoring before using the QuickRestore DVD, re-enable the monitor:
ibrix_nic -m -h MONITORHOST -A DESTHOST/IFNAME
For example:
ibrix_nic -m -h titan16 -A titan15/eth2
7. Configure Insight Remote Support on the node. See “Configuring HP Insight Remote Support on StoreAll systems” (page 36).
8. Run ibrix_health -l from the node hosting the active Fusion Manager to verify that no errors are being reported.
3. Push the original share information from the management console database to the restored node. On the node hosting the active management console, first create a temporary SMB share: ibrix_cifs -a -f FSNAME -s SHARENAME -p SHAREPATH NOTE: You cannot create an SMB share with a name containing an exclamation point (!) or a number sign (#) or both. Then delete the temporary SMB share: ibrix_cifs -d -s SHARENAME 4.
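As an illustration, for a hypothetical file system named ibfs1, the temporary share sequence in step 3 might look like the following (the share name and path are examples only):
ibrix_cifs -a -f ibfs1 -s tmpshare -p /ibfs1/tmpshare
ibrix_cifs -d -s tmpshare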
Troubleshooting
Manually recovering bond1 as the cluster
If you are unable to use the installation wizard to recover bond1 as the cluster, perform the following procedure:
1. Create bond0 and bond1:
a. Create the ifcfg-bond0 file in the /etc/sysconfig/network-scripts directory with the following parameters:
BOOTPROTO=none
BROADCAST=10.30.255.255
DEVICE=bond0
IPADDR=10.30.3.16
NETMASK=255.255.0.0
ONBOOT=yes
SLAVE=no
USERCTL=no
BONDING_OPTS="miimon=100 mode=1 updelay=100"
MTU=1500
b.
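The matching ifcfg-bond1 file created in step 1b follows the same pattern; the sketch below assumes a hypothetical cluster-network address for this node, so substitute the addresses used in your installation:
BOOTPROTO=none
BROADCAST=172.16.255.255
DEVICE=bond1
IPADDR=172.16.3.16
NETMASK=255.255.0.0
ONBOOT=yes
SLAVE=no
USERCTL=no
BONDING_OPTS="miimon=100 mode=1 updelay=100"
MTU=1500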
3. Determine if the MAC address is present in the ifcfg files. If not, obtain the MAC address and append it to each eth port: a. Execute the ip ad command to obtain the MAC address of all eth ports. b. Add the MAC address to the ifcfg-ethx file of each slave eth port. The following is an example of the MAC address: HWADDR=68:B5:99:B3:11:88 c. 4. Ensure that the ONBOOT parameter is set to “no” (ONBOOT=no) in the ifcfg-ethx file of each eth port.
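Taken together, steps 3 and 4 leave each slave ifcfg-ethX file with lines like the following sketch (the DEVICE value and MAC address shown are examples; keep the file's other existing parameters unchanged and use the address reported by ip ad on your node):
DEVICE=eth0
HWADDR=68:B5:99:B3:11:88
ONBOOT=no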
8. Register the server to the Fusion Manager configuration: a. (Not a replacement server) Run the register_server command: [root@X9720 ~]# /usr/local/ibrix/bin/register_server -p 172.16.3.65 -c bond1 -n r150b16 -u bond0 NOTE: The command will fail if the server is a replacement server because the server is already registered, as shown in the following example: iadconf.xml does not exist...creating new config.
a. Run the following commands from the active Fusion Manager (r150b15 in this example) to view the existing servers and then unregister the passive Fusion Manager (r150b16 in this example): To view the registered management consoles: [root@r150b15 ibrix]#ibrix_fm -l The command provides the following output: NAME IP ADDRESS ------- ---------r150b15 172.16.3.15 r150b16 172.16.3.
10. Restart the Fusion Manager services by entering the following command: [root@X9720 ~]#service ibrix_fusionmanager restart Stopping Fusion Manager Daemon [ OK ] Starting Fusion Manager Daemon [ OK ] 11. Run the following command, which ensures the node does not enter configuration mode upon the next login: rm -rf .run_wizard 12. Complete the following verifications: a.
16 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Installing and maintaining the HP 3Gb SAS BL Switch • HP 3Gb SAS BL Switch Installation Instructions • HP 3Gb SAS BL Switch Customer Self Repair Instructions To access these manuals, go to the Manuals page (http://www.hp.com/support/manuals) and click bladesystem > BladeSystem Interconnects > HP BladeSystem SAS Interconnects. Maintaining the X9700cx (also known as the HP 600 Modular Disk System) • HP 600 Modular Disk System Maintenance and Service Guide Describes removal and replacement procedures.
After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.
17 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Cascading Upgrades
If you are running a StoreAll version earlier than 5.6, do incremental upgrades as described in the following table. If you are running StoreAll 5.6, upgrade to 6.1 before upgrading to 6.3.
If you are upgrading from   Upgrade to              Where to find additional information
StoreAll version 5.4        StoreAll version 5.5    “Upgrading the StoreAll software to the 5.5 release” (page 188)
StoreAll version 5.5        StoreAll version 5.6    “Upgrading the StoreAll software to the 5.
NOTE: • Verify that the root partition contains adequate free space for the upgrade. Approximately 4 GB is required. • Be sure to enable password-less access among the cluster nodes before starting the upgrade. • Do not change the active/passive Fusion Manager configuration during the upgrade. • Linux StoreAll clients must be upgraded to the 6.x release. Upgrading 9720 chassis firmware Before upgrading 9720 systems to StoreAll software 6.1, the 9720 chassis firmware must be at version 4.0.0-13.
6. On 9720 systems, delete the existing vendor storage: ibrix_vs -d -n EXDS The vendor storage will be registered automatically after the upgrade. Performing the upgrade The online upgrade is supported only from the StoreAll 6.x to 6.1 release. Complete the following steps: 1. Obtain the latest HP StoreAll 6.1 ISO image from the StoreAll software dropbox. Contact HP Support to register for the release and obtain access to the dropbox. 2.
1. Ensure that all nodes are up and running. To determine the status of your cluster nodes, check the dashboard on the GUI or use the ibrix_health command.
2. Verify that ssh shared keys have been set up. To do this, run the following command on the node hosting the active instance of the agile Fusion Manager:
ssh
Repeat this command for each node in the cluster.
3. Note any custom tuning parameters, such as file system mount options.
Wait up to 15 minutes for the file systems to unmount. Troubleshoot any issues with unmounting file systems before proceeding with the upgrade. See “File system unmount issues” (page 23). 13. On 9720 systems, delete the existing vendor storage: ibrix_vs -d -n EXDS The vendor storage will be registered automatically after the upgrade. Performing the upgrade This upgrade method is supported only for upgrades from StoreAll software 5.6.x to the 6.1 release. Complete the following steps: 1.
ibrix_cifsconfig -t -S "smb_signing_enabled=0, smb_signing_required=0"
ibrix_cifsconfig -t -S "ignore_writethru=1"
The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. See the HP StoreAll Storage File System User Guide for more information about this feature. When ignore_writethru is enabled, StoreAll software ignores writethru buffering to improve SMB write performance on some user applications that request it.
7. 8. Mount file systems on Linux StoreAll clients.
/usr/local/ibrix/bin/verify_client_update The following example is for a RHEL 4.8 client with kernel version 2.6.9-89.ELsmp: # /usr/local/ibrix/bin/verify_client_update 2.6.9-89.35.1.ELsmp Kernel update 2.6.9-89.35.1.ELsmp is compatible. If the minor kernel update is compatible, install the update with the vendor RPM and reboot the system. The StoreAll client software is then automatically updated with the new kernel, and StoreAll client services start automatically.
Progress and status reports
The utility writes log files to the directory /usr/local/ibrix/log/upgrade60 on each node containing segments from the file system being upgraded. Each node contains the log files for its segments. Log files are named ___upgrade.log. For example, the following log file is for segment ilv2 on host ib4-2:
ib4-2_ilv2_2012-03-27_11:01_upgrade.log
4. Enter the ibrix_fs command to set the file system’s data retention and autocommit period to the desired values. See the HP StoreAll Storage CLI Reference Guide for additional information about the ibrix_fs command. Troubleshooting upgrade issues If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support.
Node is not registered with the cluster network Nodes hosting the agile Fusion Manager must be registered with the cluster network. If the ibrix_fm command reports that the IP address for a node is on the user network, you will need to reassign the IP address to the cluster network. For example, the following commands report that node ib51-101, which is hosting the active Fusion Manager, has an IP address on the user network (192.168.51.101) instead of the cluster network.
4. Unmount the file systems and continue with the upgrade procedure. Moving the Fusion Manager VIF to bond1 When the 9720 system is installed, the cluster network is moved to bond1. The 6.1 release requires that the Fusion Manager VIF (Agile_Cluster_VIF) also be moved to bond1 to enable access to ports 1234 and 9009. To move the Agile_Cluster_VIF to bond1, complete these steps: 1.
After you have completed the procedure, if the Fusion Manager is not failing over or the /usr/local/ibrix/log/Iad.log file reports errors communicating to port 1234 or 9009, contact HP Support for further assistance.
Upgrading the StoreAll software to the 5.6 release
This section describes how to upgrade to the latest StoreAll software release. The management console and all file serving nodes must be upgraded to the new release at the same time. Upgrades to the StoreAll software 5.
The upgrade script performs all necessary upgrade steps on every server in the cluster and logs progress in the file /usr/local/ibrix/setup/upgrade.log. After the script completes, each server will be automatically rebooted and will begin installing the latest software. 5. After the install is complete, the upgrade process automatically restores node-specific configuration information and the cluster should be running the latest software.
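While the upgrade script is running, its progress can be followed from another session by watching the log file mentioned above, for example:
tail -f /usr/local/ibrix/setup/upgrade.log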
1. Obtain the latest Quick Restore image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
2. Burn the ISO image to a DVD.
3. Insert the Quick Restore DVD into the server DVD-ROM drive.
4. Restart the server to boot from the DVD-ROM.
5. When the StoreAll Network Storage System screen appears, enter qr to install the StoreAll software on the file serving node. The server reboots automatically after the software is installed.
Troubleshooting upgrade issues If the upgrade does not complete successfully, check the following items. For additional assistance, contact HP Support. Automatic upgrade Check the following: • If the initial execution of /usr/local/ibrix/setup/upgrade fails, check /usr/local/ibrix/setup/upgrade.log for errors. It is imperative that all servers are up and running the StoreAll software before you execute the upgrade script. • If the install of the new OS fails, power cycle the node. Try rebooting.
IMPORTANT: • Do not start new remote replication jobs while a cluster upgrade is in progress. If replication jobs were running before the upgrade started, the jobs will continue to run without problems after the upgrade completes. • If you are upgrading from a StoreAll 5.x release, ensure that the NFS exports option subtree_check is the default export option for every NFS export. See “Common issue across all upgrades from StoreAll 5.x” (page 174) for more information.
Manual upgrades Upgrade paths There are two manual upgrade paths: a standard upgrade and an agile upgrade. • The standard upgrade is used on clusters having a dedicated Management Server machine or blade running the management console software. • The agile upgrade is used on clusters having an agile management console configuration, where the management console software is installed in an active/passive configuration on two cluster nodes.
5. Change to the installer directory if necessary and run the upgrade: ./ibrixupgrade -f 6. Verify that the management console is operational: /etc/init.d/ibrix_fusionmanager status The status command should report that the correct services are running. The output is similar to this: Fusion Manager Daemon (pid 18748) running... 7. Check /usr/local/ibrix/log/fusionserver.log for errors.
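A quick way to scan that log for problems (a simple illustration using standard Linux tools):
grep -i error /usr/local/ibrix/log/fusionserver.log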
Completing the upgrade 1. From the management console, turn automated failover back on: /bin/ibrix_server -m 2. Confirm that automated failover is enabled: /bin/ibrix_server -l In the output, HA displays on. 3. Verify that all version indicators match for file serving nodes.
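One way to perform this check is with the ibrix_version command used elsewhere in this guide:
/bin/ibrix_version -l -S
The reported versions should match across all file serving nodes.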
2. Move the /ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix.
3. Expand the distribution tarball or mount the distribution DVD in a directory of your choice. Expanding the tarball creates a subdirectory named ibrix that contains the installer program.
2. From the management console, turn automated failover back on: /bin/ibrix_server -m 3. Confirm that automated failover is enabled: /bin/ibrix_server -l In the output, HA displays on. 4. From the management console, perform a manual backup of the upgraded configuration: /bin/ibrix_fm -B 5. Verify that all version indicators match for file serving nodes.
5. Wait approximately 60 seconds for the failover to complete, and then run the following command on the node that was the target for the failover: /bin/ibrix_fm -i The command should report that the agile management console is now Active on this node. 6. From the node on which you failed over the active management console in step 4, change the status of the management console from maintenance to passive: /bin/ibrix_fm -m passive 7.
16. From the node on which you failed back the active management console in step 8, change the status of the management console from maintenance to passive: /bin/ibrix_fm -m passive 17. If the node with the passive management console is also a file serving node, manually fail over the node from the active management console: /bin/ibrix_server -f -p -h HOSTNAME Wait a few minutes for the node to reboot, and then run the following command to verify that the failover was successful.
1. Manually fail over the file serving node: /bin/ibrix_server -f -p -h HOSTNAME The node will be rebooted automatically. 2. Move the /ibrix directory used in the previous release installation to ibrix.old. For example, if you expanded the tarball in /root during the previous StoreAll installation on this node, the installer is in /root/ibrix. Expand the distribution tarball or mount the distribution DVD in a directory of your choice.
4. Propagate a new segment map for the cluster: /bin/ibrix_dbck -I -f FSNAME 5. Verify the health of the cluster: /bin/ibrix_health -l The output should specify Passed / on. Agile offline upgrade This upgrade procedure is appropriate for major upgrades.
named ibrix that contains the installer program. For example, if you expand the tarball in /root, the installer is in /root/ibrix.
5. Change to the installer directory if necessary and run the upgrade:
./ibrixupgrade -f
The installer upgrades both the management console software and the file serving node software on this node.
6. Verify the status of the management console:
/etc/init.d/ibrix_fusionmanager status
The status command confirms whether the correct services are running.
ipfs1 102592 0 (unused) If either grep command returns empty, contact HP Support. 6. From the active management console node, verify that the new version of StoreAll software FS/IAS is installed on the file serving nodes: /bin/ibrix_version -l -S Completing the upgrade 1. Remount the StoreAll file systems: /bin/ibrix_mount -f -m 2. From the node hosting the active management console, turn automated failover back on: /bin/ibrix_server -m 3.
B StoreAll 9730 component and cabling diagrams Back view of the main rack Two StoreAll 9730 CXs are located below the SAS switches; the remaining StoreAll 9730 CXs are located above the SAS switches. The StoreAll 9730 CXs are numbered starting from the bottom (for example, the StoreAll 9730 CX 1 is located at the bottom of the rack; StoreAll 9730 CX 2 is located directly above StoreAll 9730 CX 1). 1. 9730 CX 6 2. 9730 CX 5 3. 9730 CX 4 4. 9730 CX 3 5. c7000 6. 9730 CX 2 7. 9730 CX 1 8.
Back view of the expansion rack 1. 9730 CX 8 2. 9730 CX 7 StoreAll 9730 CX I/O modules and SAS port connectors 1. Secondary I/O module (Drawer 2) 2. SAS port 2 connector 3. SAS port 1 connector 4. Primary I/O module (Drawer 2) 5. SAS port 2 connector 6. SAS port 1 connector 7. SAS port 1 connector 8. SAS port 2 connector 9. Primary I/O module (Drawer 1) 10. SAS port 1 connector 11. SAS port 2 connector 12.
StoreAll 9730 CX 1 connections to the SAS switches The connections to the SAS switches are: • SAS port 1 connector on the primary I/O module (Drawer 1) to port 1 on the Bay 5 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 1) to port 1 on the Bay 6 SAS switch • SAS port 1 connector on the primary I/O module (Drawer 2) to port 1 on the Bay 7 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 2) to port 1 on the Bay 8 SAS switch TIP: The number corresponding t
StoreAll 9730 CX 2 connections to the SAS switches On Drawer 1: • SAS port 1 connector on the primary I/O module (Drawer 1) to port 2 on the Bay 5 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 1) to port 2 on the Bay 6 SAS switch On Drawer 2: • SAS port 1 connector on the primary I/O module (Drawer 2) to port 2 on the Bay 7 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 2) to port 2 on the Bay 8 SAS switch 204 StoreAll 9730 component and cabling diagram
StoreAll 9730 CX 3 connections to the SAS switches On Drawer 1: • SAS port 1 connector on the primary I/O module (Drawer 1) to port 3 on the Bay 5 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 1) to port 3 on the Bay 6 SAS switch On Drawer 2: • SAS port 1 connector on the primary I/O module (Drawer 2) to port 3 on the Bay 7 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 2) to port 3 on the Bay 8 SAS switch StoreAll 9730 CX 3 connections to the SAS switc
StoreAll 9730 CX 7 connections to the SAS switches in the expansion rack On Drawer 1: • SAS port 1 connector on the primary I/O module (Drawer 1) to port 7 on the Bay 5 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 1) to port 7 on the Bay 6 SAS switch On Drawer 2: • SAS port 1 connector on the primary I/O module (Drawer 2) to port 7 on the Bay 7 SAS switch • SAS port 1 connector on the secondary I/O module (Drawer 2) to port 7 on the Bay 8 SAS switch 206 StoreAll 9730 compone
C The IBRIX X9720 component and cabling diagrams Base and expansion cabinets A minimum IBRIX X9720 Storage base cabinet has from 3 to 16 performance blocks (that is, server blades) and from 1 to 4 capacity blocks. An expansion cabinet can support up to four more capacity blocks, bringing the system to eight capacity blocks. The servers are configured as file serving nodes, with one of the servers hosting the active Fusion Manager. The Fusion Manager is responsible for managing the file serving nodes.
Back view of a base cabinet with one capacity block 1. Management switch 2 2. Management switch 1 3. X9700c 1 4. TFT monitor and keyboard 5. c-Class Blade enclosure 6.
Front view of a full base cabinet 1 X9700c 4 6 X9700cx 3 2 X9700c 3 7 TFT monitor and keyboard 3 X9700c 2 8 c-Class Blade Enclosure 4 X9700c 1 9 X9700cx 2 5 X9700cx 4 10 X9700cx 1 Base and expansion cabinets 209
Back view of a full base cabinet 210 1 Management switch 2 7 X9700cx 4 2 Management switch 1 8 X9700cx 3 3 X9700c 4 9 TFT monitor and keyboard 4 X9700c 3 10 c-Class Blade Enclosure 5 X9700c 2 11 X9700cx 2 6 X9700c 1 12 X9700cx 1 The IBRIX X9720 component and cabling diagrams
Front view of an expansion cabinet The optional X9700 expansion cabinet can contain from one to four capacity blocks. The following diagram shows a front view of an expansion cabinet with four capacity blocks. 1. X9700c 8 5. X9700cx 8 2. X9700c 7 6. X9700cx 7 3. X9700c 6 7. X9700cx 6 4. X9700c 5 8.
Back view of an expansion cabinet with four capacity blocks 1. X9700c 8 5. X9700cx 8 2. X9700c 7 6. X9700cx 7 3. X9700c 6 7. X9700cx 6 4. X9700c 5 8. X9700cx 5 Performance blocks (c-Class Blade enclosure) A performance block is a special server blade for the X9720. Server blades are numbered according to their bay number in the blade enclosure. Server 1 is in bay 1 in the blade enclosure, and so on. Server blades must be contiguous; empty blade bays are not allowed between server blades.
Rear view of a c-Class Blade enclosure 1. Interconnect bay 1 (Virtual Connect Flex-10 10 Ethernet Module) 6. Interconnect bay 6 (reserved for future use) 2. Interconnect bay 2 (Virtual Connect Flex-10 7. Interconnect bay 7 (reserved for future use) 10 Ethernet Module) 3. Interconnect bay 3 (SAS Switch) 8. Interconnect bay 8 (reserved for future use) 4. Interconnect bay 4 (SAS Switch) 9. Onboard Administrator 1 5. Interconnect bay 5 (reserved for future use) 10.
The X9720 Storage automatically reserves eth0 and eth3 and creates a bonded device, bond0. This is the management network. Although eth0 and eth3 are physically connected to the Flex-10 Virtual Connect (VC) modules, the VC domain is configured so that this network is not seen by the site network. With this configuration, eth1 and eth2 are available for connecting each server blade to the site network.
X9700c (array controller with 12 disk drives) Front view of an X9700c 1. Bay 1 5. Power LED 2. Bay 2 6. System fault LED 3. Bay 3 7. UID LED 4. Bay 4 8. Bay 12 Rear view of an X9700c 1. Battery 1 9. Fan 2 2. Battery 2 10. X9700c controller 2 3. SAS expander port 1 11. SAS expander port 2 4. UID 12. SAS port 1 5. Power LED 13. X9700c controller 1 6. System fault LED 14. Fan 1 7. On/Off power button 15. Power supply 1 8.
Front view of an X9700cx 1. Drawer 1 2. Drawer 2 Rear view of an X9700cx 1. Power supply 5. In SAS port 2. Primary I/O module drawer 2 6. Secondary I/O module drawer 1 3. Primary I/O module drawer 1 7. Secondary I/O module drawer 2 4. Out SAS port 8. Fan Cabling diagrams Capacity block cabling—Base and expansion cabinets A capacity block is comprised of the X9700c and X9700cx. CAUTION: operation.
1 X9700c 2 X9700cx primary I/O module (drawer 2) 3 X9700cx secondary I/O module (drawer 2) 4 X9700cx primary I/O module (drawer 1) 5 X9700cx secondary I/O module (drawer 1) Virtual Connect Flex-10 Ethernet module cabling—Base cabinet Site network Onboard Administrator Available uplink port 1. Management switch 2 7. Bay 5 (reserved for future use) 2. Management switch 1 8. Bay 6 (reserved for future use) 3.
SAS switch cabling—Base cabinet NOTE: Callouts 1 through 3 indicate additional X9700c components. 1 X9700c 4 2 X9700c 3 3 X9700c 2 4 X9700c 1 5 SAS switch ports 1through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Ports 2 through 4 are reserved for additional capacity blocks. 6 SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure). Reserved for expansion cabinet use.
1 X9700c 8 5 SAS switch ports 1 through 4 (in interconnect bay 3 of the c-Class Blade Enclosure). Used by base cabinet. 2 X9700c 7 6 SAS switch ports 5 through 8 (in interconnect bay 3 of the c-Class Blade Enclosure). 3 X9700c 6 7 SAS switch ports 1 through 4 (in interconnect bay 4 of the c-Class Blade Enclosure). 4 X9700c 5 8 SAS switch ports 5 through 8 (in interconnect bay 4 of the c-Class Blade Enclosure). Used by base cabinet.
D Warnings and precautions Electrostatic discharge information To prevent damage to the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor could damage system boards or other static-sensitive devices. This type of damage could reduce the life expectancy of the device.
Equipment symbols If the following symbols are located on equipment, hazardous conditions could exist. WARNING! Any enclosed surface or area of the equipment marked with these symbols indicates the presence of electrical shock hazards. Enclosed area contains no operator serviceable parts. To reduce the risk of injury from electrical shock hazards, do not open this enclosure. WARNING! Any RJ-45 receptacle marked with these symbols indicates a network interface connection.
WARNING! To reduce the risk of personal injury or damage to the equipment: • Observe local occupational safety requirements and guidelines for heavy equipment handling. • Obtain adequate assistance to lift and stabilize the product during installation or removal. • Extend the leveling jacks to the floor. • Rest the full weight of the rack on the leveling jacks. • Attach stabilizing feet to the rack if it is a single-rack installation.
WARNING! To reduce the risk of personal injury or damage to the equipment, the installation of non-hot-pluggable components should be performed only by individuals who are qualified in servicing computer equipment, knowledgeable about the procedures and precautions, and trained to deal with products capable of producing hazardous energy levels.
E Regulatory information
For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at http://www.hp.com/support/Safety-Compliance-EnterpriseProducts.
Glossary ACE Access control entry. ACL Access control list. ADS Active Directory Service. ALB Advanced load balancing. BMC Baseboard Management Configuration. CIFS Common Internet File System. The protocol used in Windows environments for shared folders. CLI Command-line interface. An interface comprised of various commands which are used to control operating system responses. CSR Customer self repair. DAS Direct attach storage.
SELinux Security-Enhanced Linux. SFU Microsoft Services for UNIX. SID Secondary controller identifier number. SNMP Simple Network Management Protocol. TCP/IP Transmission Control Protocol/Internet Protocol. UDP User Datagram Protocol. UID Unit identification. USM SNMP User Security Model. VACM SNMP View Access Control Model. VC HP Virtual Connect. VIF Virtual interface. WINS Windows Internet Name Service. WWN World Wide Name. A unique identifier assigned to a Fibre Channel device.
Index Symbols /etc/sysconfig/i18n file, 28 A agile Fusion Manager, 53 AutoPass, 135 B backups file systems, 76 Fusion Manager configuration, 76 NDMP applications, 76 Belarus Kazakhstan Russia EAC marking, 224 booting server blades, 29 booting the system, 29 C cabling diagrams, X9720, 216 capacity blocks, X9720 overview, 214 CLI, 33 clients access virtual interfaces, 51 cluster events, monitor, 103 health checks, 104 license key, 135 license, view, 135 log files, 106 operating statistics, 106 version numb
Details panel, 32 Navigator, 32 open, 29 view events, 103 H hardware power on, 117 shut down, 116 hazardous conditions symbols on equipment, 221 HBAs display information, 66 monitor for high availability, 65 health check reports, 104 help obtaining, 170 High Availability agile Fusion Manager, 53 automated failover, turn on or off, 63 check configuration, 66 configure automated failover manually, 62 detailed configuration report, 67 fail back a node, 64 failover protection, 26 HBA monitor, 65 manual failove
R rack stability warning, 171 regulatory information, 224 Turkey RoHS material content declaration, 224 Ukraine RoHS material content declaration, 224 related documentation, 170 rolling reboot, 118 routing table entries add, 133 delete, 133 S segments evacuate from cluster, 125 migrate, 123 server blades booting, 29 overview, 212 server blades, 9720 add, 141 servers configure standby, 50 crash capture, 68 failover, 55 tune, 118 shut down, hardware and software, 115 SNMP event notification, 72 SNMP MIB, 74
loading rack, 221 weight, 221 warranty information HP Enterprise servers, 224 HP Networking products, 224 HP ProLiant and X86 Servers and Options, 224 HP Storage products, 224 websites HP, 171 HP Subscriber's Choice for Business, 171 spare parts, 171 weight, warning, 221 Windows StoreAll clients, upgrade, 19, 180 230 Index