HP IBRIX X9000 Series 5.6.
© Copyright 2012 Hewlett-Packard Development Company, L.P. Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Version: 5.6.2 (build 5.6.118)

Description

This release contains updates to HP X9000 File Serving Software, HP X9320 and X9720 Network Storage Systems, and HP X9300 Network Storage Gateway systems. The X9000 Software features a highly scalable file system; CIFS, NFS, FTP, and HTTP file services; high availability; remote replication; data tiering; and CLI and GUI management interfaces. It is installed on HP Network Storage System and Network Storage Gateway servers.
Other supported software

Linux X9000 clients, supported versions:
• Red Hat Enterprise Linux 5.1, 5.2, 5.3, 5.4, 5.5 (all 64 bit)
• Red Hat Enterprise Linux 4 Updates 5, 6, 7, 8 (all 64 bit)
• SUSE Linux Enterprise Server 11 (64 bit)
• SUSE Linux Enterprise Server 10 SP3 (64 bit)
• openSUSE 11.1 (64 bit)
• CentOS 4.5, 5.1, 5.2, 5.3, 5.
Fixes

Corrected in 5.6.2 (build 5.6.118)

The following fixes were made in this release:
• The chapter “Upgrading the X9000 Software” in the administrator guides was incomplete. See “Upgrades to 5.6” (page 18) for more information.
• When a local user or group was deleted, the user or group was not deleted from the CIFS database.
• An lwiod failure caused the CIFS service to stop.
• An attempt to create a file over RPC succeeded, but the file did not exist.
• Under certain conditions, mounting a file system could cause a server to terminate unexpectedly.
• The mtime of a file changed when the file was migrated, rebalanced, or evacuated to another segment.
• A segment did not activate for a file system, causing an exception in JmsResource.

CIFS
• Applications connecting to X9000 CIFS shares experienced file read delays when oplocks were enabled.
• A CIFS share could not be deleted.
Remote Replication
• When files were deleted on the source, remote replication took too long to delete them on the destination.
• A CRR task remained in a persistent running state and had to be terminated manually.

Authentication
• Multiple unnecessary messages reporting Failed to authenticate user (name = '') were logged in /var/log/messages.
• Incorrect invalid password errors caused the AD account to be locked.
• NTLMv2 authentication did not work correctly.
The following versions of the software are supported:

Software   Supported versions
HP SIM     6.2 or higher
IRSA       A.05.50 or higher
IRSS       A.05.50 or higher

For product descriptions and information about downloading the software, see the HP Insight Remote Support Software web page: http://www.hp.com/go/insightremotesupport
For information about HP SIM, see the following page: http://www.hp.com/products/systeminsightmanager
For IRSA documentation, see the following page: http://www.hp.
To enter more than one SNMP Manager IP, add the following lines in the snmp.conf file:

rwcommunity public
rocommunity public
trapsink public

For , enter CMS/IRSS or any other SNMP manager IP.

After updating the snmp.conf file, restart the snmpd service:

# service snmpd restart

For more information about the /sbin/hpsnmpconfig script, see “SNMP Configuration” in the hp-snmp-agents(4) man page.
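For illustration only, a snmp.conf fragment with the manager addresses filled in might look like the following. The addresses 192.0.2.10 and 192.0.2.20 are documentation-example IPs, not values from this release, and the directive layout assumes the standard net-snmp snmpd.conf syntax (community name followed by the allowed source for rwcommunity/rocommunity; destination host followed by community for trapsink):

```
rwcommunity public 192.0.2.10
rocommunity public 192.0.2.20
trapsink 192.0.2.10 public
```

Substitute the IP address of your CMS/IRSS host or other SNMP manager for the example addresses.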
NOTE: If you are using IRSS, see “Using the HP Insight Remote Support Configuration Wizard” and “Editing Managed Systems to Complete Configuration” in the HP Insight Remote Support Standard A.05.50 Hosting Device Configuration Guide. If you are using IRSA, see “Using the Remote Support Setting Tab to Update Your Client and CMS Information” and “Adding Individual Managed Systems” in the HP Insight Remote Support Advanced A.05.50 Operations Guide.
NOTE: For storage support on X9300 systems, do not set the Custom Delivery ID. (The MSA is an exception; the Custom Delivery ID is set as previously described.)

Test the configuration

To determine whether the traps are working properly, send a generic test trap with the following command:

# snmptrap -v1 -c public .1.3.6.1.4.1.232 6 11003 1234 .1.3.6.1.2.1.1.5.0 s test .1.3.6.1.4.1.232.11.2.11.1.0 i 0 .1.3.6.1.4.1.232.11.2.8.1.
• Sparse files on the source file system are replicated unsparse on the target. That is, all blocks corresponding to the file size are allocated on the target cluster. Consequently, if the target file system is the same size as the source file system, remote replication can fail because there is no space left on the target file system.
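The sparse-versus-unsparse distinction above can be seen with a quick generic Linux sketch (not an X9000-specific command; /tmp/sparse_demo is an arbitrary example path):

```shell
# Create a 100 MB file that is entirely a hole: the apparent size is
# large, but almost no blocks are allocated on disk.
truncate -s 100M /tmp/sparse_demo

# Apparent size in bytes; this is the space remote replication would
# allocate in full on the target, per the note above.
stat -c %s /tmp/sparse_demo

# 512-byte blocks actually allocated on the source (near zero here).
stat -c %b /tmp/sparse_demo

rm /tmp/sparse_demo
```

After replication, the same file on the target would report allocated blocks matching the full apparent size, which is why a target file system sized equal to the source can run out of space.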
console. If the node remains down and the node hosting the active management console also goes down, the cluster configuration data may become inconsistent, depending on the order in which the nodes are rebooted. • When the active management console is moved to maintenance mode, a passive management console will transition to active mode. Be sure that this transition is complete before you move the previously active management console from maintenance mode to passive mode.
are symmetric between all servers in the cluster, and are not specific to any one machine name in the cluster.)
• When joining a CIFS domain, the $ character cannot be used in passwords unless it is escaped with a backslash (\) and enclosed in single quotes (' '). For example:
ibrix_auth -n IB.LAB -A john -P 'password1\$like'

Snapshots
• Snapshot creation may fail while mounting the snapshot. The snapshot will be created successfully, but it will not be mounted.
1. On the standard management console, check for the IbrixServer RPM:
# rpm -qa | grep -i IbrixServer
If the RPM is present, the output will be similar to the following:
IbrixServer-
2. If the IbrixServer RPM is present, uninstall the RPM:
# rpm -e IbrixServer-
3. On each file serving node, check for the Ibrix Fusion Manager RPM:
# rpm -qa | grep -i IbrixFusionManager
If the RPM is present, the output will be similar to the following:
IbrixFusionManager-
4.
The default value of this command is 900; the value is in seconds. A higher value reduces the probability of all components toggling from Up to Stale and back to Up because of the conditions listed above, but increases the time before an actual component failure is reported.

HP Insight Remote Support
• In certain cases, a large number of error messages such as the following appear in /var/log/hp-snmp-agents/cma.
Upgrades
• Before upgrading to the 5.6 release, you must run the save_cluster_config command to save the current configuration. If you lose this configuration file, reboot the nodes and run save_cluster_config again. You can then continue with the upgrade process.
• System User login credentials are backed up and restored during upgrades. However, user data is not retained; users must save and restore their individual data manually during an upgrade.
Documentation additions and changes

Following are additions and changes to the user documentation.

Upgrades to 5.6

The following information is an addition to the chapter “Upgrading the X9000 Software” in the administrator guide for your system. Before performing the upgrade, complete the following steps if necessary:
• Upgrades to the X9000 Software 5.6 release are supported for systems currently running X9000 Software 5.5.x. If your system is running an earlier release, first upgrade to the latest 5.
Support Ticket configuration

The following information corrects the information about SSH keys included in “Managing support tickets” in the X9300, X9320, and X9720 administrator guides. The Support Ticket feature requires that two-way shared SSH keys be configured on all file serving nodes. If shared SSH keys are not configured on your cluster, use the following procedure to configure them:
1. On all file serving nodes and the management console, run the following commands as root:
# mkdir -p $HOME/.
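The general shape of a shared-key setup is sketched below as a generic Linux example; it is not the exact listing from the administrator guide (whose commands are truncated above), and the key type and paths are assumptions:

```shell
# Create the .ssh directory with the permissions sshd requires.
mkdir -p $HOME/.ssh
chmod 700 $HOME/.ssh

# Generate a passphrase-less key pair if one does not already exist.
[ -f $HOME/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f $HOME/.ssh/id_rsa -q

# Authorize the key locally. Copying the same key pair and
# authorized_keys file to every node (for example, with scp) is what
# makes the keys shared and two-way across the cluster.
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
```

With the same pair installed on every node, each node can initiate an SSH session to every other node without a password prompt, which is what the Support Ticket feature requires.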
1. Identify the segment residing on the physical volume to be removed.
2. Select Storage from the Navigator on the management console GUI. Note the file system and segment number on the affected physical volume.
3. Locate other segments on the file system that can accommodate the data being evacuated from the affected segment. Select the file system on the management console GUI and then select Segments from the lower Navigator.
Compatibility/Interoperability

Note the following:
• Every member of the cluster must be running the same version of X9000 Software.
• The cluster must include an even number of file serving nodes.