HP StorageWorks X9320 Network Storage System Administrator Guide Abstract This guide describes tasks related to cluster configuration and monitoring, system upgrade and recovery, hardware component replacement, and troubleshooting. It does not document X9000 file system features or standard Linux administrative tools and commands. For information about configuring and using X9000 Software file system features, see the HP StorageWorks X9000 File Serving Software File System User Guide.
© Copyright 2010 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Product description  9
   X9320 Network Storage System features  9
   System components  9
   HP X9000 Software features
   Identifying standby-paired HBA ports  32
   Turning HBA monitoring on or off  32
   Deleting standby port pairings  32
   Deleting HBAs from the configuration database  32
   Displaying HBA information
   Viewing operating statistics for file serving nodes  51
9 Maintaining the system  53
   Shutting down the system  53
      Shutting down the X9000 Software  53
      Powering off the hardware
13 Upgrading firmware  77
   Upgradable firmware  77
   Downloading MSA2000 G2/G3 firmware  77
   Installing firmware upgrades  77
14 Troubleshooting
   Cabling diagrams  108
      Cluster network cabling diagram  108
      SATA option cabling  109
      SAS option cabling
   Hungarian notice  145
   Italian notice  145
   Latvian notice  146
   Lithuanian notice
1 Product description
The HP StorageWorks X9320 Network Storage System is a highly available, scale-out storage solution for file data workloads. The system combines HP X9000 File Serving Software with HP server and storage hardware to create an expandable cluster of file serving nodes.
• Optional HP StorageWorks X9300 Network Storage System Base Rack.
To ensure continuous data access, X9000 Software provides manual and automated failover protection at various points: • Server. A failed node is powered down and a designated standby server assumes all of its segment management duties. • Segment. Ownership of each segment on a failed node is transferred to a designated standby server. • Network interface. The IP address of a failed network interface is transferred to a standby network interface until the original network interface is operational again.
2 Getting started IMPORTANT: Do not modify any parameters of the operating system or kernel, or update any part of the X9320 Network Storage System unless instructed to do so by HP; otherwise, the X9320 Network Storage System could fail to operate properly.
• Snapshots. Use this feature to capture a point-in-time copy of a file system.
• File allocation. Use this feature to specify the manner in which segments are selected for storing new files and directories.
For more information about these file system features, see the HP StorageWorks X9000 File Serving Software File System User Guide.
Management interfaces
Cluster operations are managed through the X9000 Software management console, which provides both a GUI and a CLI.
The GUI dashboard opens in the same browser window. You can open multiple GUI windows as necessary. See the online help for information about all GUI displays and operations. The GUI dashboard enables you to monitor the entire cluster. There are three parts to the dashboard: System Status, Cluster Overview, and the Navigator.
System Status The System Status section lists the number of cluster events that have occurred in the last 24 hours. There are three types of events: Alerts. Disruptive events that can result in loss of access to file system data. Examples are a segment that is unavailable or a server that cannot be accessed. Warnings. Potentially disruptive conditions where file system access is not lost, but if the situation is not addressed, it can escalate to an alert condition.
Navigator The Navigator appears on the left side of the window and displays the cluster hierarchy. You can use the Navigator to drill down in the cluster configuration to add, view, or change cluster objects such as file systems or storage, and to initiate or view tasks such as snapshots or replication. When you select an object, a details page shows a summary for that object. The lower Navigator allows you to view details for the selected object, or to initiate a task.
Adding user accounts for GUI access X9000 Software supports administrative and user roles. When users log in under the administrative role, they can configure the cluster and initiate operations such as remote replication or snapshots. When users log in under the user role, they can view the cluster configuration and status, but cannot make configuration changes or initiate operations. The default administrative user name is ibrix. The default regular username is ibrixuser.
NOTE: The Windows X9000 client application can be started only by users with Administrative privileges.
• Status. Shows the client’s management console registration status and mounted file systems, and provides access to the IAD log for troubleshooting.
• Registration. Registers the client with the management console, as described in the HP StorageWorks X9000 File Serving Software Installation Guide.
• Mount. Mounts a file system.
file. For example, you will need to lock specific ports for rpc.statd, rpc.lockd, rpc.mountd, and rpc.quotad.
• It is best to allow all ICMP types on all networks; however, you can limit ICMP to types 0, 3, 8, and 11 if necessary.
Be sure to open the ports listed in the following table.

Port              Description
22/tcp            SSH
9022/tcp          SSH for Onboard Administrator (OA); only for X9720 blades
123/tcp, 123/udp  NTP
5353/udp          Multicast DNS, 224.0.0.
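The guide leaves the choice of firewall tool to the administrator. The sketch below assumes iptables and the ports listed above; it prints the corresponding ACCEPT rules rather than applying them, so you can review and run them by hand as root:

```shell
# Build the iptables commands for the ports and ICMP types listed above.
# Dry run: the rules are printed, not applied.
rules=""
for p in 22 9022 123; do                       # TCP ports from the table
  rules="$rules
iptables -A INPUT -p tcp --dport $p -j ACCEPT"
done
for p in 123 5353; do                          # UDP ports from the table
  rules="$rules
iptables -A INPUT -p udp --dport $p -j ACCEPT"
done
for t in 0 3 8 11; do                          # restricted ICMP types noted above
  rules="$rules
iptables -A INPUT -p icmp --icmp-type $t -j ACCEPT"
done
echo "$rules"
```

Add any site-specific ports (for example, the fixed NFS helper ports you configure) to the lists before applying the rules.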
HP Insight Remote Support software HP Insight Remote Support supplements your 24x7 monitoring to ensure maximum system availability. It provides intelligent event diagnosis and automatic, secure submission of hardware event notifications to HP, which initiates a fast and accurate resolution based on your product’s service level. Notifications may be sent to your authorized HP Channel Partner for on-site service, if configured and available in your country.
3 Configuring virtual interfaces for client access X9000 Software uses a cluster network interface to carry management console traffic and traffic between file serving nodes. This network is configured as bond0 when the cluster is installed. For clusters with an agile management console configuration, a virtual interface is also created for the cluster network interface to provide failover support for the console.
1. Identify the VIF:
# ibrix_nic -a -n bond1:2 -h node1,node2,node3,node4
2. Set up a standby server for each VIF:
# ibrix_nic -b -H node1/bond1:1,node2/bond1:2
# ibrix_nic -b -H node2/bond1:1,node1/bond1:2
# ibrix_nic -b -H node3/bond1:1,node4/bond1:2
# ibrix_nic -b -H node4/bond1:1,node3/bond1:2
Configuring NIC failover NIC monitoring should be configured on VIFs that will be used by NFS, CIFS, FTP, or HTTP. Use the same backup pairs that you used when configuring standby servers.
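The four standby assignments above are symmetric: each node's VIF is backed by its partner and vice versa. A quick shell check of that symmetry, using a hypothetical pair list that mirrors the commands, might look like this:

```shell
# Hypothetical standby map mirroring the ibrix_nic -b commands above:
# "primary:backup", one pair per line.
pairs="node1:node2
node2:node1
node3:node4
node4:node3"

# A pairing is symmetric when every primary:backup line has a
# matching backup:primary line.
bad=$(echo "$pairs" | while IFS=: read a b; do
  echo "$pairs" | grep -q "^$b:$a\$" || echo "asymmetric: $a -> $b"
done)
[ -z "$bad" ] && echo "standby pairing is symmetric"
```

An asymmetric entry would be reported before you commit the pairing with ibrix_nic.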
FTP. When you add an FTP share on the Add FTP Shares dialog box or with the ibrix_ftpshare command, specify the VIF as the IP address that clients should use to access the share. HTTP. When you create a virtual host on the Create Vhost dialog box or with the ibrix_httpvhost command, specify the VIF as the IP address that clients should use to access shares associated with the Vhost. X9000 clients. Use the following command to prefer the appropriate user network.
4 Configuring failover This chapter describes how to configure failover for agile management consoles, file serving nodes, network interfaces, and HBAs. Agile management consoles The management console maintains the cluster configuration and provides graphical and command-line user interfaces for managing and monitoring the cluster. Typically, one active management console and one passive management console are installed when the cluster is installed.
The failed-over management console remains in maintenance mode until it is moved to passive mode using the following command: ibrix_fm -m passive A management console cannot be moved from maintenance mode to active mode. Viewing information about management consoles To view mode information, use the following command: ibrix_fm –i NOTE: If the management console was not installed in an agile configuration, the output will report FusionServer: fusion manager name not set! (active, quorum is not configured).
To determine the progress of a failover, view the Status tab on the GUI or execute the ibrix_server -l command. While the management console is migrating segment ownership, the operational status of the node is Up-InFailover or Down-InFailover, depending on whether the node was powered up or down when failover was initiated. When failover is complete, the operational status changes to Up-FailedOver or Down-FailedOver.
file serving node and its standby, allowing the failing server to be centrally powered down by the management console in the case of automated failover, and manually in the case of a forced manual failover. X9000 Software works with iLO, IPMI, OpenIPMI, and OpenIPMI2 integrated power sources and with APC power sources. Preliminary configuration Certain configuration steps are required when setting up power sources: • All types.
For example, to identify that node s1.hp.com has been moved from slot 3 to slot 4 on APC power source ps1: /bin/ibrix_hostpower -m -i 3,4 -s ps1 -h s1.hp.com Dissociating a file serving node from a power source You can dissociate a file serving node from an integrated power source by dissociating it from slot 1 (its default association) on the power source.
Failing back a file serving node After automated or manual failover of a file serving node, you must manually fail back the server, which restores ownership of the failed-over segments and network interfaces to the server. Before failing back the node, confirm that the primary server can see all of its storage resources and networks. The segments owned by the primary server will not be accessible if the server cannot see its storage. To fail back a file serving node, use the following command.
Identifying standbys To protect a network interface, you must identify a standby for it on each file serving node that connects to the interface. The following restrictions apply when identifying a standby network interface: • The standby network interface must be unconfigured and connected to the same switch (network) as the primary interface. • The file serving node that supports the standby network interface must have access to the file system that the clients on that interface will mount.
Deleting standbys To delete a standby for a network interface, use the following command: /bin/ibrix_nic -b -U HOSTNAME1/IFNAME1 For example, to delete the standby that was assigned to interface eth2 on file serving node s1.hp.com: /bin/ibrix_nic -b -U s1.hp.com/eth2 Setting up HBA monitoring You can configure High Availability to initiate automated failover upon detection of a failed HBA.
Identifying standby-paired HBA ports Identifying standby-paired HBA ports to the configuration database allows the management console to apply the following logic when they fail: • If one port in a pair fails, do nothing. Traffic will automatically switch to the surviving port, as configured by the vendor or the software. • If both ports in a pair fail, fail over the server’s segments to the standby server.
Field            Description
Port State       Operational state of the port.
Backup Port WWN  WWPN of the standby port for this port (standby-paired HBAs only).
Monitoring       Whether HBA monitoring is enabled for this port.

Checking the High Availability configuration Use the ibrix_haconfig command to determine whether High Availability features have been configured for specific file serving nodes.
The -h HOSTLIST option lists the nodes to check. To also check standbys, include the -b option. To view results only for file serving nodes that failed a check, include the -f argument. The -s option expands the report to include information about the file system and its segments. The -v option produces detailed information about configuration checks that received a Passed result. For example, to view a detailed report for file serving node xs01.hp.com:
/bin/ibrix_haconfig -i -h xs01.hp.com
5 Configuring cluster event notification Setting up email notification of cluster events You can set up event notifications by event type or for one or more specific events. To set up automatic email notification of cluster events, associate the events with email recipients and then configure email settings to initiate the notification process.
To turn off all Alert notifications for admin@hp.com:
/bin/ibrix_event -d -e ALERT -m admin@hp.com
To turn off the server.registered and filesystem.created notifications for admin1@hp.com and admin2@hp.com:
/bin/ibrix_event -d -e server.registered,filesystem.created -m admin1@hp.com,admin2@hp.com
Testing email addresses Before you test an email address by sending it a test message, notifications must be turned on.
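When the same change must be made for several recipients, a small loop can generate one ibrix_event command per address for review before you run them; the addresses here are placeholders:

```shell
# Placeholder recipient list; substitute your administrators' addresses.
recipients="admin1@hp.com admin2@hp.com"

# Dry run: print one ibrix_event command per recipient instead of executing it.
cmds=""
for r in $recipients; do
  cmds="$cmds
/bin/ibrix_event -d -e ALERT -m $r"
done
echo "$cmds"
```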
• Associating event notifications with trapsinks (all SNMP versions) • View definition (V3 only) • Group and user configuration (V3 only) X9000 Software implements an SNMP agent on the management console that supports the private X9000 Software MIB. The agent can be polled and can send SNMP traps to configured trapsinks. Setting up SNMP notifications is similar to setting up email notifications.
on and off. The default is on. For example, to create a v2 trapsink with a new community name, enter: ibrix_snmptrap -c -h lab13-116 -v 2 -m private For a v3 trapsink, additional options define security settings. USERNAME is a v3 user defined on the trapsink host and is required. The security level associated with the trap message depends on which passwords are specified—the authentication password, both the authentication and privacy passwords, or no passwords.
ibrix_snmpview -a -v VIEWNAME [-t {include|exclude}] -o OID_SUBTREE [-m MASK_BITS] The subtree is added in the named view. For example, to add the X9000 Software private MIB to the view named hp, enter: ibrix_snmpview -a -v hp -o .1.3.6.1.4.1.18997 -m .1.1.1.1.1.1.1 Configuring groups and users A group defines the access control policy on managed objects for one or more users. All users must belong to a group. Groups and users exist only in SNMPv3.
6 Configuring system backups Backing up the management console configuration The management console configuration is automatically backed up whenever the cluster configuration changes. The backup takes place on the node hosting the active management console (or on the Management Server, if a dedicated management console is configured). The backup file is stored at /tmp/fmbackup.zip on the machine where it was created.
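Because /tmp is not a durable location, you may want to copy each backup off the node under a timestamped name. A minimal sketch follows; the archive_backup function and the demo file are illustrative, not part of X9000:

```shell
# Copy a management console backup to an archive directory,
# stamping the copy so successive backups are not overwritten.
archive_backup() {
  src="$1"; destdir="$2"
  stamp=$(date +%Y%m%d-%H%M%S)
  cp "$src" "$destdir/fmbackup-$stamp.zip"
}

# Demo with a throwaway file; on a real node the source would be
# /tmp/fmbackup.zip on the machine hosting the active console.
tmpdir=$(mktemp -d)
echo dummy > "$tmpdir/fmbackup.zip"
archive_backup "$tmpdir/fmbackup.zip" "$tmpdir"
ls "$tmpdir"
```

In practice the destination would be storage outside the cluster (for example, an NFS export or a host reached with scp).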
Configuring NDMP parameters on the cluster Certain NDMP parameters must be configured to enable communications between the DMA and the NDMP Servers in the cluster. To configure the parameters on the management console GUI, select Cluster Configuration from the Navigator, and then select NDMP Backup. The NDMP Configuration Summary shows the default values for the parameters. Click Modify to configure the parameters for your cluster on the Configure NDMP dialog box.
To cancel a session, select that session and click Cancel Session. Canceling a session kills all spawned session processes and, if necessary, frees their resources. To see similar information for completed sessions, select NDMP Backup > Session History. To view active sessions from the CLI, use the following command:
ibrix_ndmpsession -l
To view completed sessions, use the following command. The -t option restricts the history to sessions occurring on or before the specified date.
NDMP events An NDMP Server can generate three types of events: INFO, WARN, and ALERT. These events are displayed on the management console GUI and can be viewed with the ibrix_event command. INFO events. These events specify when major NDMP operations start and finish, and also report progress. For example: 7012:Level 3 backup of /mnt/ibfs7 finished at Sat Nov 7 21:20:58 PST 2009 7013:Total Bytes = 38274665923, Average throughput = 236600391 bytes/sec. WARN events.
7 Creating hostgroups for X9000 clients A hostgroup is a named set of X9000 clients. Hostgroups provide a convenient way to centrally manage clients using the management console. You can put different sets of clients into hostgroups and then perform the following operations on all members of the group: • Create and delete mountpoints • Mount file systems • Prefer a network interface • Tune host parameters • Set allocation policies Hostgroups are optional.
To set up one level of hostgroups beneath the root, simply create the new hostgroups. You do not need to declare that the root node is the parent. To set up lower levels of hostgroups, declare a parent element for hostgroups. Optionally, you can specify a domain rule for a hostgroup. Use only alphanumeric characters and the underscore character (_) in hostgroup names. Do not use a host name as a group name. To create a hostgroup tree using the CLI: 1.
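The naming rule above (alphanumeric characters and underscores only) is easy to check before creating a group. This small validator is illustrative; it does not cover the separate rule against reusing a host name as a group name:

```shell
# Return success only if the proposed hostgroup name is non-empty and
# uses only alphanumeric characters and underscores.
valid_group() {
  case "$1" in
    *[!A-Za-z0-9_]*|"") return 1 ;;
    *) return 0 ;;
  esac
}

valid_group "finance_clients" && echo "finance_clients: ok"
valid_group "bad-name" || echo "bad-name: rejected"
```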
/bin/ibrix_hostgroup -l [-g GROUP] Deleting hostgroups When you delete a hostgroup, its members are assigned to the parent of the deleted group.
8 Monitoring cluster operations This chapter describes how to monitor the operational state of the cluster and how to monitor cluster health. Monitoring the status of file serving nodes The dashboard on the management console GUI displays information about the operational status of file serving nodes, including CPU, I/O, and network performance information. To view status from the CLI, use the ibrix_server -l command.
Events are written to an events table in the configuration database as they are generated. To maintain the size of the file, HP recommends that you periodically remove the oldest events. See “Removing events from the events database table” (page 48) for more information. You can set up event notifications through email (see “Setting up email notification of cluster events” (page 35)) or SNMP traps (see “Setting up SNMP notifications” (page 36)).
Health check reports The summary report provides an overall health check result for all tested file serving nodes and X9000 clients, followed by individual results. If you include the -b option, the standby servers for all tested file serving nodes are included when the overall result is determined. The results will be one of the following: • Passed. All tested hosts and standby servers passed every health check. • Failed. One or more tested hosts failed a health check.
lab15-62 Report
===============

Overall Result
==============
Result  Type    State         Network       Thread  Protocol
------  ------  ------------  ------------  ------  --------
PASSED  Server  Up, HBAsDown  99.126.39.72  16      true

CPU Information
===============
Cpu(System,User,Util,Nice)
--------------------------
0, 1, 1, 0

Memory Information
==================
Mem Total  Mem Free
---------  --------
1944532    1841548

Module  Up time    Last Update
------  ---------  -----------
Loaded  3267210.
Segment owner for segment 2 filesystem ifs2 matches on Iad and Fusion Manager
PASSED  ifs1 file system uuid matches on Iad and Fusion Manager
PASSED  ifs1 file system generation matches on Iad and Fusion Manager
PASSED  ifs1 file system number segments matches on Iad and Fusion Manager
PASSED  ifs1 file system mounted state matches on Iad and Fusion Manager
PASSED  Segment owner for segment 1 filesystem ifs1 matches on Iad and Fusion Manager
PASSED  Superblock owner for segment 1 of filesystem ifs2 on
lab12-10.hp.com  1034616  703672  2031608  2031360

---------CPU----------
HOST             User  System  Nice  Idle  IoWait  Irq  SoftIrq
lab12-10.hp.com  0     0       0     0     97      1    0

---------NFS v3-------
HOST             Null  Getattr  Setattr  Lookup  Access  Readlink  Read  Write
lab12-10.hp.com  0     0        0        0       0       0         0     0

HOST             Create  Mkdir  Symlink  Mknod  Remove  Rmdir  Rename
lab12-10.hp.com  0       0      0        0      0       0      0

HOST lab12-10.hp.
9 Maintaining the system Shutting down the system To shut down the system completely, first shut down the X9000 software, and then power off the system hardware. Shutting down the X9000 Software Use the following procedure to shut down the X9000 Software. Unless noted otherwise, run the commands from the dedicated Management Console or from the node hosting the active agile management console. 1. Disable HA for all file serving nodes: ibrix_server -m -U 2.
1. Power on the dedicated Management Console or the node hosting the active agile management console.
2. Power on the file serving nodes (*root segment = segment 1; power on owner first, if possible).
3. Monitor the nodes on the management console and wait for them all to report UP in the output from the following command:
ibrix_server -l
4. Mount file systems and verify their content.
1. Reboot the node directly from Linux. (Do not use the "Power Off" functionality in the management console, as it does not trigger failover of file serving services.) The node will fail over to its backup.
2. Wait for the management console to report that the rebooted node is Up.
3. From the management console, failback the node, returning services to the node from its backup.
Contact HP Support to obtain the values for OPTIONLIST. List the options as option=value pairs, separated by commas. To set host tunings on all clients, include the -g clients option. • To reset host parameters to their default values on nodes or hostgroups: /bin/ibrix_host_tune -U {-h HOSTLIST|-g GROUPLIST} [-n OPTIONS] To reset all options on all file serving nodes, hostgroups, and X9000 clients, omit the -h HOSTLIST and -n OPTIONS options.
Migrating specific segments Use the following command to migrate ownership of the segments in LVLIST on file system FSNAME to a new host and update the source host:
/bin/ibrix_fs -m -f FSNAME -s LVLIST -h HOSTNAME [-M] [-F] [-N]
To force the migration, include -M. To skip the source host update during the migration, include -F. To skip host health checks, include -N. The following command migrates ownership of ilv2 and ilv3 in file system ifs1 to s1.hp.com:
/bin/ibrix_fs -m -f ifs1 -s ilv2,ilv3 -h s1.hp.com
3. If quotas are enabled on the file system, disable them: ibrix_fs -q -D -f FSNAME 4. Evacuate the segment. Select the file system on the management console GUI and then select Tasks > Rebalancer from the lower Navigator. Click Start on the Task Summary page to open the Start Rebalancing dialog, and then open the Advanced tab. In the Source Segments column, select the segments to evacuate, and in the Destination Segments column, select the segments to receive the data.
have only one cluster interface. For backup purposes, each file serving node and management console can have two cluster NICs. • User network interface. This network interface carries traffic between file serving nodes and clients. Multiple user network interfaces are permitted. The cluster network interface was created for you when your cluster was installed. For clusters with an agile management console configuration, a virtual interface is used for the cluster network interface.
/bin/ibrix_nic -a -n IFNAME -h HOSTLIST If you are identifying a VIF, add the VIF suffix (:nnnn) to the physical interface name. For example, the following command identifies virtual interface eth1:1 to physical network interface eth1 on file serving nodes s1.hp.com and s2.hp.com:
/bin/ibrix_nic -a -n eth1:1 -h s1.hp.com,s2.hp.com
Preferring a network interface for a hostgroup You can prefer an interface for multiple X9000 clients at one time by specifying a hostgroup. To prefer a user network interface for all X9000 clients, specify the clients hostgroup. After preferring a network interface for a hostgroup, you can locally override the preference on individual X9000 clients with the command ibrix_lwhost.
Changing the cluster interface If you restructure your networks, you might need to change the cluster interface. The following rules apply when selecting a new cluster interface: • The management console must be connected to all machines (including standby servers) that use the cluster network interface. Each file serving node and X9000 client must be connected to the management console by the same cluster network interface. A Gigabit (or faster) Ethernet port must be used for the cluster interface.
/bin/ibrix_nic -l -h HOSTLIST
The following table describes the fields in the output.

Field        Description
BACKUP HOST  File serving node for the standby network interface.
BACKUP-IF    Standby network interface.
HOST         File serving node. An asterisk (*) denotes the management console.
IFNAME       Network interface on this file serving node.
IP_ADDRESS   IP address of this NIC.
LINKMON      Whether monitoring is on for this NIC.
MAC_ADDR     MAC address of this NIC.
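Output like this lends itself to scripted checks. The sketch below scans a hypothetical, simplified ibrix_nic -l listing for interfaces that have no standby assigned; the real command's column layout may differ, so adjust the awk field numbers to match your output:

```shell
# Hypothetical, simplified ibrix_nic -l output: HOST, IFNAME, IP_ADDRESS,
# BACKUP HOST, BACKUP-IF ("-" meaning no standby configured).
sample="HOST IFNAME IP_ADDRESS BACKUP_HOST BACKUP_IF
s1.hp.com eth1 10.0.0.1 s2.hp.com eth1
s2.hp.com eth1 10.0.0.2 - -"

# Report every interface whose BACKUP HOST column is "-".
missing=$(echo "$sample" | awk 'NR>1 && $4=="-" {print $1"/"$2}')
echo "no standby: $missing"
```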
10 Migrating to an agile management console The agile management console configuration provides one active management console and one passive management console installed on different nodes in the cluster. The migration procedure configures the current Management Server machine as a host for an agile management console and installs another instance of the agile management console on a file serving node.
Run one of the following commands:
/etc/init.d/network restart
service network restart
Verify that you can ping the new local IP address.
4. Configure the agile management console:
ibrix_fm -c -d –n -v cluster -I
In the command, one address is the old cluster IP address for the original management console and the other is the new local IP address you acquired.
[root@x109s1 ~]# ibrix_fm -i FusionServer: x109s1 (active, quorum is running) ================================================ Command succeeded! 11. Verify that there is only one management console in this cluster: ibrix_fm -f For example: [root@x109s1 ~]# ibrix_fm -f NAME IP ADDRESS ------ ---------X109s1 172.16.3.100 Command succeeded! 12. Install a passive agile management console on a second file serving node.
NOTE: If iLO was not previously configured on the server, the command will fail with the following error: com.ibrix.ias.model.BusinessException: x467s2 is not associated with any power sources Use the following command to define the iLO parameters into the X9000 cluster database: ibrix_powersrc -a -t ilo -h HOSTNAME -I IPADDR [-u USERNAME -p PASSWORD] See the installation guide for more information about configuring iLO.
x109s3 172.16.3.3
Command succeeded!
5. Remove the Management Server machine from the cluster database:
ibrix_server -d -h HOSTNAME
6. To provide high availability for the management console, install a passive agile management console on another file serving node. In the command, the -F option forces the overwrite of the new_lvm2_uuid file that was installed with the X9000 Software.
11 Upgrading the X9000 Software This chapter describes how to upgrade to the latest X9000 File Serving Software release. The management console and all file serving nodes must be upgraded to the new release at the same time. Note the following: • Upgrades to the X9000 Software 5.6 release are supported for systems currently running X9000 Software 5.5.x. If your system is running an earlier release, first upgrade to the 5.5 release, and then upgrade to 5.6.
1. Check the dashboard on the management console GUI to verify that all nodes are up.
2. If file systems are mounted from Windows X9000 clients, unmount them using the X9000 Windows client configuration wizard.
3. Obtain the latest release image from the HP kiosk at http://www.software.hp.com/kiosk (you will need your HP-provided login credentials).
4. Copy the release .iso file onto the current active management console.
4. Run the following command to verify that automated failover is off. In the output, the HA column should display off. /bin/ibrix_server -l 5. On the active management console node, stop the NFS and SMB services on all file serving nodes to prevent NFS and CIFS clients from timing out.
5. When the following screen appears, enter qr to install the X9000 software on the file serving node. The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive.
Restoring the node configuration Complete the following steps on each node, starting with the previous active management console:
1. Log in to the node. The configuration wizard should pop up.
2. Escape out of the configuration wizard.
4. Confirm that automated failover is enabled:
/bin/ibrix_server -l
In the output, HA should display on.
5. From the node hosting the active management console, perform a manual backup of the upgraded configuration:
/bin/ibrix_fm -B
6. Upgrade X9000 clients:
• For Linux clients, see “Upgrading Linux X9000 clients” (page 73).
• For Windows clients, see “Upgrading Windows X9000 clients” (page 73).
3. Launch the Windows Installer and follow the instructions to complete the upgrade.
4. Register the Windows X9000 client again with the cluster and check the option to Start Service after Registration.
5. Check Administrative Tools | Services to verify that the X9000 Client service is started.
6. Launch the Windows X9000 client. On the Active Directory Settings tab, click Update to retrieve the current Active Directory settings.
7. Mount file systems using the X9000 Windows client GUI.
Manual upgrade Check the following:
• If the restore script fails, check /usr/local/ibrix/setup/logs/restore.log for details.
• If configuration restore fails, look at /usr/local/ibrix/autocfg/logs/appliance.log to determine which feature restore failed. Look at the specific feature log file under /usr/local/ibrix/setup/logs/ for more detailed information.
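The log checks above can be scripted. This sketch reports whether each named log exists on the node and shows its most recent error or failure lines:

```shell
# Scan the restore and appliance logs named above for failures.
logs="/usr/local/ibrix/setup/logs/restore.log /usr/local/ibrix/autocfg/logs/appliance.log"
for f in $logs; do
  if [ -r "$f" ]; then
    echo "== $f =="
    grep -in "error\|fail" "$f" | tail -20   # last 20 matching lines
  else
    echo "not present on this node: $f"
  fi
done
```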
12 Licensing

This chapter describes how to view your current license terms and how to obtain and install new X9000 Software product license keys.
NOTE: For MSA2000 G2 licensing (for example, snapshots), see the MSA2000 G2 documentation.

Viewing license terms
The X9000 Software license file is stored in the installation directory on the management console. To view the license from the management console GUI, select Cluster Configuration in the Navigator and then select License.
13 Upgrading firmware

Upgradable firmware
The HP X9320 system includes several components with upgradable firmware. The following table lists these components and specifies whether they can be upgraded online and in a nondisruptive manner.
14 Troubleshooting

Managing support tickets
A support ticket includes system and X9000 software information useful for analyzing performance issues and node terminations. A support ticket is created automatically if a file serving node terminates unexpectedly. You can also create a ticket manually if your cluster experiences issues that need to be investigated by HP Support.
Support ticket states
Support tickets are in one of the following states:

Ticket state      Description
COLLECTING_LOGS   The data collection operation is collecting logs and command output.
COLLECTED_LOGS    The data collection operation has completed on all nodes in the cluster.
CREATING          The data collected from each node is being copied to the active management console.
CREATED           The ticket was created successfully.
# ssh {hostname for file serving node}

Viewing software version numbers
To view version information for a list of hosts, use the following command:
/bin/ibrix_version -l [-h HOSTLIST]
For each host, the output includes:
• Version number of the installed file system
• Version numbers of the IAD and File System module
• Operating system type and OS kernel version
• Processor architecture
The -S option shows this information for all file serving nodes.
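Extracting a single field from that listing is straightforward. The sample text below is invented to illustrate the idea; the real column layout of ibrix_version -l may differ.

```shell
# Hypothetical excerpt of `/bin/ibrix_version -l` output; layout assumed.
ver_out='HOST FS_VERSION IAD_VERSION OS
node1 5.4.0 5.4.0 RHEL'

# Pull the file system version from the first (and only) data row.
fs_ver=$(printf '%s\n' "$ver_out" | awk 'NR == 2 { print $2 }')
echo "node1 file system version: $fs_ver"
```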
never migrated back to the primary server. If you execute ibrix_fs -i -f FSNAME, the output will list No in the ONBACKUP field, indicating that the primary server now owns the segments, even though it does not. In this situation, you will be unable to complete the failback after you fix the storage subsystem problem. Perform the following manual recovery procedure:
1. Restore the failed storage subsystem.
2. Reboot the primary server, which will allow the arrested failback to complete.
NOTE: The ibrix_dbck command should be used only under the direction of HP Support.
To run a health check on a file serving node, use the following command:
/bin/ibrix_health -i -h HOSTLIST
If the last line of the output reports Passed, the file system information on the file serving node and management console is consistent.
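Because the pass/fail verdict is on the last line of the output, a wrapper only needs to inspect that line. A sketch with stand-in captured text; the real wording of the final line may differ.

```shell
# Stand-in for output captured from `/bin/ibrix_health -i -h node1`.
health_out='Checking host node1 ...
Result: Passed'

# Inspect only the final line for the Passed verdict.
last_line=$(printf '%s\n' "$health_out" | tail -n 1)
case "$last_line" in
    *Passed*) status="consistent" ;;
    *)        status="inconsistent" ;;
esac
echo "Health check: $status"
```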
15 Replacing components

Customer replaceable components
WARNING! Before performing any of the procedures in this chapter, read the important warnings, precautions, and safety information in “Warnings and precautions” (page 135) and “Regulatory compliance and safety” (page 138).
IMPORTANT: To avoid unintended consequences, HP recommends that you perform the procedures in this chapter during scheduled maintenance times.
Additional documentation
In addition to this document, you will need the following documents, which are available at http://www.hp.com/support/manuals. On the Manuals page, click Servers > ProLiant ml/dl and tc series servers. For file serving node procedures, select HP ProLiant DL380 G6 Server series or HP ProLiant DL380 G7 Server series.
5. For a file serving node, fail back the server using the GUI or CLI:
• On the GUI, select Servers from the Navigator pane, and then select the appropriate server from the Servers pane. Next, select the server name in the left pane, and click Failback.
• On the CLI, execute the following command:
ibrix_server -f -U -h

Replacing a NIC adapter
To replace a NIC adapter on a file serving node or the X9300 Management Server:
1. If the server is an X9300 Management Server, skip this step.
16 Recovering a file serving node

Use the following procedure to recover a failed file serving node. You will need to create a QuickRestore DVD, as described later, and then install it on the affected node. This step installs the operating system and X9000 Software on the node and launches a configuration wizard.
The server reboots automatically after the software is installed. Remove the DVD from the DVD-ROM drive.
7. When your cluster was configured initially, the installer may have created a template for configuring file serving nodes. To use this template to configure the file serving node undergoing recovery, go to “Configuring a file serving node using the original template” (page 87).
3. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select the appropriate management console for this cluster.
NOTE: If the list does not include the appropriate management console, or you want to customize the cluster configuration for the file serving node, select Cancel. Go to “Configuring a file serving node manually” (page 91) for information about completing the configuration.
4.
NOTE: If you select Reject, the wizard will exit and the shell prompt will be displayed. You can restart the wizard by entering the command /usr/local/ibrix/autocfg/bin/menu_ss_wizard or by logging in to the server again.
6. If the specified hostname already exists in the cluster (the name was used by the node you are replacing), the Replace Existing Server window asks whether you want to replace the existing server with the node you are configuring.
If you configured a passive management console, enter the following command to verify the status of the console:
ibrix_fm -i
Next, complete the restore on the file serving node.

Completing the restore on a file serving node
Complete the following steps:
1. Ensure that you have root access to the node. The restore process sets the root password to hpinvent, the factory default.
2. The QuickRestore DVD enables the iptables firewall.
5. If Insight Remote Support was previously enabled on this file serving node, run the following command to start Insight Remote Support services each time the node is rebooted:
chkconfig hp-snmp-agents on
To start Insight Remote Support services now, run the following commands:
service hpsmhd start
service snmpd restart
service hp-snmp-agents start
6. Run ibrix_health -l from the X9000 management console to verify that no errors are being reported.
3. The Configuration Wizard attempts to discover management consoles on the network and then displays the results. Select Cancel to configure the node manually. (If the wizard cannot locate a management console, the screen shown in step 4 will appear.)
4. The file serving node Configuration Menu appears.
5. The Cluster Configuration Menu lists the configuration parameters that you will need to set. Use the Up and Down arrow keys to select an item in the list. When you have made your selection, press Tab to move to the buttons at the bottom of the dialog box, and press Space to go to the next dialog box.
6. Select Management Console from the menu, and enter the IP address of the management console. This is typically the address of the management console on the cluster network.
7.
8. Select Time Zone from the menu, and then use Up or Down to select your time zone.
9. Select Default Gateway from the menu, and enter the IP address of the host that will be used as the default gateway.
10. Select DNS Settings from the menu, and enter the IP addresses for the primary and secondary DNS servers that will be used to resolve domain names. Also enter the DNS domain name.
11. Select NTP Servers from the menu, and enter the IP addresses or hostnames for the primary and secondary NTP servers.
12. Select Networks from the menu, and select the option to create a bond for the cluster network. You are creating a bonded interface for the cluster network; select Ok on the Select Interface Type dialog box. Enter a name for the interface (bond0 for the cluster interface) and specify the appropriate options and slave devices. The factory defaults for the slave devices are eth0 and eth3. Use Mode 6 bonding for 1GbE networks and Mode 1 bonding for 10GbE networks.
13. When the Configure Network dialog box reappears, select bond0.
14. To complete the bond0 configuration, enter a space to select the Cluster Network role. Then enter the IP address and netmask information that the network will use. Repeat this procedure to create a bonded user network (typically bond1 with eth1 and eth2) and any custom networks as required.
15. When you have completed your entries on the File Serving Node Configuration Menu, select Continue.
16.
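The bonding choices above (bond0 over eth0 and eth3, Mode 6 for 1GbE) end up in an interface configuration file on the node. The following is a hypothetical /etc/sysconfig/network-scripts/ifcfg-bond0 sketch with placeholder addresses; the wizard generates the real file, and the exact options it writes may differ.

```shell
# Hypothetical ifcfg-bond0 contents (ifcfg files use shell variable syntax).
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.11    # placeholder cluster-network address
NETMASK=255.255.255.0
# mode=6 (balance-alb) per the 1GbE guidance; use mode=1 (active-backup) for 10GbE.
BONDING_OPTS="mode=6 miimon=100"
```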
IMPORTANT: Configure a passive agile management console only if the agile management console is enabled and an active agile management console is configured. If you configured a user network, enter a VIF IP address and netmask for the network. If you configured a passive management console, enter the following command to verify the status of the console: ibrix_fm -i IMPORTANT: Next, go to “Completing the restore on a file serving node” (page 90).
17 Support and other resources

Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions

Related information
Related documents are available on the Manuals page at http://www.hp.
Using HP StorageWorks MSA Disk Arrays
• HP StorageWorks 2000 G2 Modular Smart Array Reference Guide
• HP StorageWorks 2000 G2 Modular Smart Array CLI Reference Guide
• HP StorageWorks P2000 G3 MSA System CLI Reference Guide
• Online help for HP StorageWorks Storage Management Utility (SMU) and Command Line Interface (CLI)
On the Manuals page, select storage > Disk Storage Systems > MSA Disk Arrays > HP StorageWorks 2000sa G2 Modular Smart Array or HP StorageWorks P2000 G3 MSA Array Systems.
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive email notification of product enhancements, new driver versions, firmware updates, and other product resources.
A System component and cabling diagrams

System component diagrams
Front view of X9300c array controller or X9300cx 3.
Rear view of X9300c array controller

Item  Description
1     Power supplies
2     Power switches
3     Host ports
4     CLI port
5     Network port
6     Service port (used by service personnel only)
7     Expansion port (connects to drive enclosure)

Rear view of X9300cx 3.
Front view of file serving node

Item  Description
1     Quick-release levers (2)
2     HP Systems Insight Manager display
3     Hard drive bays
4     SATA optical drive bay
5     Video connector
6     USB connectors (2)

Rear view of file serving node

Item  Description
1     PCI slot 5
2     PCI slot 6
3     PCI slot 4
4     PCI slot 2
5     PCI slot 3
6     PCI slot 1
7     Power supply 2 (PS2)
8     Power supply 1 (PS1)
9     USB connectors (2)
10    Video connector
11    NIC 1 connector
12    NIC 2 connector
Item  Description
13    Mouse connector
14    Keyboard connector
15    Serial connector
16    iLO 2 connector
17    NIC 3 connector
18    NIC 4 connector
Server  PCIe card                           PCI slot
        HP SC08Ge 3Gb SAS Host Bus Adapter  1
        NC364T Quad 1Gb NIC                 2
        empty                               3
        empty                               4
        empty                               5
        empty                               6

        HP SC08Ge 3Gb SAS Host Bus Adapter  1
        empty                               2
        empty                               3
        NC522SFP dual 10Gb NIC              4
        empty                               5
        empty                               6

        HP SC08Ge 3Gb SAS Host Bus Adapter  1
        NC364T Quad 1Gb NIC                 2
        empty                               3
        HP SC08Ge 3Gb SAS Host Bus Adapter  4
        empty                               5
        empty                               6

        HP SC08Ge 3Gb SAS Host Bus Adapter  1
        HP SC08Ge 3Gb SAS Host Bus Adapter  2
        empty                               3
        NC522SFP dual 10Gb NIC              4
        empty                               5
        empty                               6
SATA 1G
Cabling diagrams
Cluster network cabling diagram
SATA option cabling

Line  Description
      SAS I/O path: Controller A
      SAS I/O path: Controller B
SAS option cabling

Line  Description
      SAS I/O path: Array 1, Controller A
      SAS I/O path: Array 1, Controller B
      SAS I/O path: Array 2, Controller A
      SAS I/O path: Array 2, Controller B
Drive enclosure cabling

Item  Description
1     SAS controller in X9300c controller enclosure
2     I/O modules in four X9300cx drive enclosures
B Spare parts list

This appendix lists spare parts (both customer-replaceable and non-customer-replaceable) for the X9320 Network Storage System components. Spare parts are categorized as follows:
• Mandatory. Parts for which customer self repair is mandatory. If you request HP to replace these parts, you will be charged for the travel and labor costs of this service.
• Optional. Parts for which customer self repair is optional. These parts are also designed for customer self repair.
Description  Spare part number  Customer self repair
SPS-DIMM,8GB PC3-10600R,512MX4,ROHS  501536-001  Optional
SPS-DRV,HD,146GB,15K 2.
Description  Spare part number  Customer self repair
SPS-BAFFLE  496061-001  Mandatory
SPS-BACKPLANE, PS  496062-001  Optional
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Manda
Description  Spare part number  Customer self repair
SPS-DRV,HD 1TB MSA2 3.5" 7.
Description  Spare part number  Customer self repair
SPS-DRV,ODD, SLIM SATA DVD RW  481429-001  Optional
SPS-TRAY, DVD  532390-001  Mandatory
SPS-POWER SUPPLY, 750W  511778-001  Optional
SPS-KIT, MISC HARDWARE  496058-001  Mandatory
SPS-BD,8 PORT EXT,SAS,HBA  489103-001  Optional
SPS-BD,NIC,X4 PCI-E,4 PORT,1000 BASE-T  436431-001  Mandatory
SPS-HARDWARE MTG KIT  574765-001  Mandatory
SPS-CORD,AC PWR IEC/IEC 6 FT  142258-001  Mandatory
SPS-CARD, RISER  496057-001  Optional
SPS-DRV,HD 146GB MSA2 3.
Description  Spare part number  Customer self repair
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
Description  Spare part number  Customer self repair
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACK
Description  Spare part number  Customer self repair
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  O
Description  Spare part number  Customer self repair
SPS-BAFFLE  496061-001  Mandatory
SPS-BACKPLANE, PS  496062-001  Optional
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
10 GbE spare parts
10 GbE 48 TB (AW543A)

Description  Spare part number  Customer self repair
SPS-HOOD  496056-001  Mandatory
SPS-CAGE, PCI  496060-001  Optional
SPS-BAFFLE  496061-001  Mandatory
SPS-BACKPLANE, PS  496062-001  Optional
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SAT
Description  Spare part number  Customer self repair
SPS-DRV,HD 146GB MSA2 3.5" 15K DP SAS  480937-001  Optional
SPS-DRV,HD 300GB MSA2 3.5" 15K DP SAS  480938-001  Optional
SPS-DRV,HD 450GB MSA2 3.5" 15K DP SAS  480939-001  Optional
SPS-DRV,HD 500GB MSA2 3.5" 7.2K SATA  480940-001  Optional
SPS-DRV,HD 750GB MSA2 3.5" 7.2K SATA  480941-001  Optional
SPS-DRV,HD 1TB MSA2 3.5" 7.
Description  Spare part number  Customer self repair
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.26 GHZ, 8M, 80W  490073-001  Optional
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS  501534-001  Mandatory
SPS-DRV,HD,146GB,10K 2.
10 GbE 21.
Description  Spare part number  Customer self repair
SPS-DRV,HD 300GB MSA2 3.5" 15K DP SAS  480938-001  Optional
SPS-DRV,HD 450GB MSA2 3.5" 15K DP SAS  480939-001  Optional
SPS-DRV,HD 500GB MSA2 3.5" 7.2K SATA  480940-001  Optional
SPS-DRV,HD 750GB MSA2 3.5" 7.2K SATA  480941-001  Optional
SPS-DRV,HD 1TB MSA2 3.5" 7.
Description  Spare part number  Customer self repair
SPS-PROC,NEHALEM EP 2.26 GHZ, 8M, 80W  490073-001  Optional
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS  501534-001  Mandatory
SPS-DRV,HD,146GB,10K 2.
Description  Spare part number  Customer self repair
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.26 GHZ, 8M, 80W  490073-001  Optional
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS  501534-001  Mandatory
SPS-DRV,HD,146GB,10K 2.
Description  Spare part number  Customer self repair
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.26 GHZ, 8M, 80W  490073-001  Optional
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS  501534-001  Mandatory
SPS-DRV,HD,146GB,10K 2.
Description  Spare part number  Customer self repair
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.26 GHZ, 8M, 80W  490073-001  Optional
SPS-DIMM,4GB PC3-10600R,256MX4,ROHS  501534-001  Mandatory
SPS-DRV,HD,146GB,10K 2.
Description  Spare part number  Customer self repair
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.
Description  Spare part number  Customer self repair
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACKPLANE,SAS  507690-001  Optional
SPS-PROC,NEHALEM EP 2.
Description  Spare part number  Customer self repair
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  Optional
SPS-BD, PCIE  496078-001  Optional
SPS-BEZEL  496080-001  Mandatory
SPS-BACK
Description  Spare part number  Customer self repair
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Mandatory
SPS-CAGE, DVD OPT DRIVE  496076-001  Mandatory
SPS-BD, PCIX  496077-001  O
Description  Spare part number  Customer self repair
SPS-BAFFLE  496061-001  Mandatory
SPS-BACKPLANE, PS  496062-001  Optional
SPS-CAGE, PS BACKPLANE  496063-001  Optional
SPS-HEATSINK, 80W  496064-001  Optional
SPS-FAN  496066-001  Mandatory
SPS-CAGE, FAN  496067-001  Mandatory
SPS-BD,SYSTEM I/O, W/SUBPAN  496069-001  Optional
SPS-CABLE, SAS BACKPLANE  496070-001  Mandatory
SPS-CABLE, SATA DVD PWR  496071-001  Mandatory
SPS-BD, SID  496073-001  Mandatory
SPS-CAGE, HD, SFF  496074-001  Manda
C Warnings and precautions

Electrostatic discharge information
See Electrostatic discharge.

Grounding methods
There are several methods for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords.
WARNING! Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.
WARNING! Any product or assembly marked with these symbols indicates that the component exceeds the recommended weight for one individual to handle safely.

Rack warnings and precautions
Ensure that precautions have been taken to provide for rack stability and safety. It is important to follow these precautions to provide for rack stability and safety, and to protect both personnel and property.
WARNING! To reduce the risk of electric shock or damage to the equipment: • Allow the product to cool before removing covers and touching internal components. • Do not disable the power cord grounding plug. The grounding plug is an important safety feature. • Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times. • Disconnect power from the device by unplugging the power cord from either the electrical outlet or the device.
D Regulatory compliance and safety

Regulatory compliance identification numbers
For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number.
Declaration of conformity for products marked with the FCC logo, United States only
This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that could cause undesired operation.
For questions regarding your product, contact:
Hewlett-Packard Company P. O.
International notices and statements

Canadian notice (Avis Canadien)
Class A equipment
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations.
Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Class B equipment
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations.
Korean notice (A&B)
Class A equipment
Class B equipment

Safety
Battery replacement notice
WARNING! The computer contains an internal lithium manganese dioxide, a vanadium pentoxide, or an alkaline battery pack. A risk of fire and burns exists if the battery pack is not properly handled. To reduce the risk of personal injury:
• Do not attempt to recharge the battery.
• Do not expose the battery to temperatures higher than 60°C (140°F).
be a minimum of 1.00 mm² or 18 AWG, and the length of the cord must be between 1.8 m (6 ft) and 3.6 m (12 ft). If you have questions about the type of power cord to use, contact an HP-authorized service provider.
NOTE: Route power cords so that they will not be walked on and cannot be pinched by items placed upon or against them. Pay particular attention to the plug, electrical outlet, and the point where the cords exit from the product.
NOTE: For more information on static electricity, or for assistance with product installation, contact your authorized reseller.
English notice
Estonian notice
Finnish notice
French notice
German notice
Greek notice
Hungarian notice
Italian notice

Waste Electrical and Electronic Equipment directive
Latvian notice
Lithuanian notice
Polish notice
Portuguese notice
Slovakian notice
Slovenian notice
Spanish notice
Swedish notice
Glossary

ACE  Access control entry.
ACL  Access control list.
ADS  Active Directory Service.
ALB  Advanced load balancing.
BMC  Baseboard Management Controller.
CIFS  Common Internet File System. The protocol used in Windows environments for shared folders.
CLI  Command-line interface. An interface comprised of various commands which are used to control operating system responses.
CSR  Customer self repair.
DAS  Direct attach storage.
SELinux  Security-Enhanced Linux.
SFU  Microsoft Services for UNIX.
SID  Secondary controller identifier number.
SNMP  Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents, which are hardware and/or software processes reporting activity in each network device (hub, router, bridge, and so on) to the workstation console used to oversee the network.
Index A F agile management console, 24 AutoPass, 76 failover automated, 22 NIC, 22 FCC logo, 139 file serving node recover, 86 file serving nodes configure power sources for failover, 26 dissociate power sources, 28 fail back, 29 fail over manually, 28 health checks, 48 identify standbys, 26 maintain consistency with configuration database, 81 migrate segments, 56 monitor status, 47 operational states, 47 power management, 54 prefer a user network interface, 60 run health check, 82 start or stop processe
detailed configuration report, 33 dissociate power sources, 28 fail back a node, 29 failover a node manually, 28 failover protection, 10 HBA monitoring, turn on or off, 32 identify network interface monitors, 30 identify network interface standbys, 30 identify standby-paired HBA ports, 32 identify standbys for file serving nodes, 26 power management for nodes, 54 set up automated failover, 26 set up HBA monitor, 31 set up manual failover, 28 set up network interface monitoring, 29 set up power sources, 26 s
T technical support HP, 100 service locator website, 101 U upgrades firmware, 77 Linux X9000 clients, 73 Windows X9000 clients, 73 X9000 Software, 69 automatic, 69 manual, 70 user network interface add, 59 configuration rules, 62 defined, 58 identify for X9000 clients, 59 modify, 60 prefer, 60 unprefer, 61 V virtual interfaces, 21 bonded, create, 21 client access, 22 configure standby servers, 21 guidelines, 21 W warning rack stability, 101 warnings loading rack, 136 Waste Electrical and Electronic Equip