HP StorageWorks HP Scalable NAS File Serving Software upgrade guide HP Scalable NAS 3.7.
Legal and notice information © Copyright 2006, 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

About this guide ... 7
    Intended audience
    HP technical support
    Subscription service
    HP websites
Order for upgrading servers
    MxFS migration package
        Install the MxFS migration package
        Overview of the migration process
    Upgrade the first server
Order for upgrading servers ... 65
Upgrade procedure ... 66
6 Upgrade HP Clustered Gateway/HP 4400 Scalable NAS systems ... 71
    Order for upgrading servers
    Pre-upgrade: MxFS migration package
Additional required OS packages
2. Build the kernel (optional)
3. HBA drivers and HP Scalable NAS
    HBA provided with HP Scalable NAS
    Third-party MPIO solution
    SAN boot disk
About this guide

This guide provides information about the following upgrades:
• PolyServe Matrix Server 3.5.1 or HP Clustered File System 3.5.1 to HP Scalable NAS File Serving Software 3.7.0
• MxFS-Linux 3.5.1 or FS Option for Linux 3.5.1 to FS Option for Linux 3.7.0
• MxDB-Oracle-HiAv 3.5.1 to MxDB-Oracle-HiAv 3.7.0
• HP Clustered Gateway 3.5.1 to HP X5500 Storage Gateway for Linux 3.7.0
• HP 4400 Scalable NAS 3.5.1 to HP 4400 Scalable NAS 3.7.
Subscription service

HP recommends that you register your product at the Subscriber's Choice for Business website: http://www.hp.com/go/e-updates

After registering, you will receive e-mail notification of product enhancements, new driver versions, firmware updates, and other product resources.

HP websites

For additional information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/scalablenas
• http://www.hp.com/go/storage
• http://www.hp.com/service_locator
• http://www.hp.
1 Overview Upgrades to HP Scalable NAS File Serving Software 3.7.0 are supported from PolyServe Matrix Server 3.5.1/HP Clustered File System 3.5.1. If you are running a different version of Matrix Server/HP Clustered File System, you will first need to upgrade to Matrix Server/HP Clustered File System 3.5.1 and then upgrade to HP Scalable NAS File Serving Software 3.7.0. CAUTION: Do not deviate from the upgrade procedures in this document. It is important to complete the upgrade steps exactly as described.
• Patched source kernels for sites that need to build a custom kernel or compile third-party kernel modules. • Debug kernels for more advanced diagnostic purposes. For each operating system, HP Scalable NAS provides a kernels RPM, pmxs--kernels-3.7.0-..rpm, that includes the binary, source and debug kernels for that operating system. This guide describes how to install the kernel RPM and the binary and source RPMs.
Contents of the HP Scalable NAS distribution The HP Scalable NAS distribution contains the following files: • pmxs-3.7.0-..rpm. The HP Scalable NAS software in Red Hat Package Manager (RPM) format. The RPM also includes drivers for the supported Host Bus Adapters. • mxconsole-3.7.0-..rpm. The Management Console and mx utility in RPM format. • mxconsole_3.7.0...msi. The Management Console and mx utility in Microsoft Windows format. • pmxs-quota-tools-3.13-..
I/O scheduler policies The Linux I/O schedulers (also called elevators) attempt to sort and issue disk I/O requests according to specific priority policies. Testing has shown that the deadline policy results in the best performance for the PSFS filesystem. The deadline policy is the default in the SLES10 kernel. For RHEL5, the default policy is cfq. In the HP RHEL5 binary kernels and kernel sources provided by HP, the I/O scheduler policy has been set to deadline.
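The policy in effect can be confirmed at run time through sysfs. A minimal sketch (device names vary by system; the bracketed name in the output is the active policy):

```shell
# Show the I/O scheduler in effect for each block device. The policy
# name shown in brackets is the active one.
for q in /sys/block/*/queue/scheduler; do
    if [ -r "$q" ]; then
        echo "$q: $(cat "$q")"
    fi
done
```

If you build a custom kernel whose default is not deadline, the policy can also be selected per boot by appending elevator=deadline to the kernel line in the bootloader configuration.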
Other considerations • You will need to install a new license file during the upgrade. If the new file is not in place when you start HP Scalable NAS, license violations will be reported on the Management Console and in the cluster log, and the product will shut down after one hour and 45 minutes. • If a server is temporarily out of the cluster during the upgrade (for example, for maintenance), you will need to upgrade it to 3.7.0 before returning it to the cluster.
Although this document refers to /opt/hpcfs, /var/opt/hpcfs and /etc/opt/hpcfs, you can continue to use /opt/polyserve, /var/opt/polyserve and /etc/opt/polyserve as the directory names.

Product changes affecting the upgrade

You should be aware of the following changes:
• HP Scalable NAS now uses the Pluggable Authentication Modules (PAM) mechanism for authentication. When HP Scalable NAS is installed, it determines whether PAM is configured on the system.
• Notifiers. The built-in notifiers provided with previous releases of Matrix Server/HP Clustered File System are no longer supported. Existing notifiers will continue to function until the last server is upgraded. At that point, the notifiers will be deleted from the configuration. If you are currently using notifiers, be sure to back up your notifier scripts. After the upgrade is complete, you will need to recreate the notifiers. • The stand-alone Management Console provided with the 3.5.
Matrix Server/HP Clustered File System configuration

The Matrix Server configuration is stored in the /var/opt/polyserve and /etc/opt/polyserve directories. On HP Clustered File System, the configuration is stored in the /var/opt/hpcfs and /etc/opt/hpcfs directories. You will need to create separate tar files for these directories using tar -cf to preserve the permissions on the files. Be sure to copy the tar files to a secure location such as another server.
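As a sketch, the backup of these directories might look like the following. The HP Clustered File System paths are shown (substitute the polyserve paths on Matrix Server), and the backup directory and remote host name are placeholders:

```shell
# Archive each configuration directory into its own tar file,
# preserving file permissions in the archive.
BACKUP_DIR=/tmp/cluster-config-backup
mkdir -p "$BACKUP_DIR"
for d in /var/opt/hpcfs /etc/opt/hpcfs; do
    if [ -d "$d" ]; then
        # Name each archive after its source path, e.g. var_opt_hpcfs.tar
        name=$(printf '%s' "$d" | sed 's,^/,,; s,/,_,g')
        tar -cpf "$BACKUP_DIR/$name.tar" "$d"
    fi
done
# Then copy the archives off the server, for example:
#   scp "$BACKUP_DIR"/*.tar root@othernode:/backups/
```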
to be saved only once. The upgrade procedures indicate the point at which you should save the configuration. 1. Save the following Samba configuration files in the /etc/samba directory: • smb.conf • all smb.conf. files • smb.default • smbpasswd • smbusers • smb.
Save the file secrets.tdb. Also, if you are using LDAP as a name service backend and/or domain-level or ads-level security, you may have a private directory containing passwords and account credentials used to access the domain server.
• testparm -vs | grep '/'
Use this command to check for any other paths containing files that should be backed up.

PSFS filesystems

Back up all PSFS filesystems before starting the upgrade.
2 Upgrade Matrix Server/MxFS-Linux or HP Clustered File System/FS Option software-only products This procedure must be used for sites running both Matrix Server and MxFS-Linux and for sites running both the HP Clustered File System and FS Option for Linux software-only products. Upgrades to HP Scalable NAS File Serving Software 3.7.0 and FS Option for Linux 3.7.0 are supported only from the 3.5.1 releases of these products.
2. Upgrade the first node and import the cluster configuration from a node running 3.5.1. 3. Run the MxReg migration tool on all remaining nodes. This tool halts vhost NFS file serving and exports the MxReg data. 4. Upgrade the remaining nodes to HP Scalable NAS and FS Option 3.7.0. 5. Reestablish all NFS client connections. 6. Complete the post-upgrade steps described in Chapter 7, page 81. Order for upgrading servers Servers must be upgraded one-at-a-time during the upgrade.
NOTE: If a server is accidentally upgraded out of order, when a higher-numbered server joins the cluster, the servers still running the old version of HP Clustered File System/Matrix Server will think they need to leave the cluster and will no longer send configuration and status updates. To correct this problem, stop HP Scalable NAS File Serving Software on the node that was upgraded out of order. Continue upgrading the other nodes before starting HP Scalable NAS File Serving Software on that node.
Migration actions on the first server During the upgrade of the first server, you will need to run a tool, mximport, that asks for the following information: • The hostname and root password for a server running the 3.5.1 release. • The Matrix Server or HP Clustered File System admin password for the 3.5.1 cluster. • A location to store MxReg data. HP recommends that you specify a path on a PSFS filesystem on the shared storage. The filesystem should be a persistent mount.
Creation of the new mxds datastore When the upgrade has been completed on all servers and the cluster is running, HP Scalable NAS will create the mxds datastore and import the NFS state information saved in the mxreg_export.xml file to the new datastore. NFS serving will start automatically when the process is complete. Upgrade the first server The server with the highest IP address should be upgraded first. Use this procedure to upgrade the server. 1. Disable the server on the Management Console.
7. Install the appropriate HP Scalable NAS kernel. To install the binary kernel, run the following command: # rpm -ihv kernel-HPPS-.370..x86_64.rpm After installing the RPM, verify the bootloader. Check the file /boot/grub/menu.lst and verify that the default is set to the HP binary kernel. NOTE: If you need to compile third-party kernel modules or build a custom kernel, see Appendix B, page 99. 8. Reboot the server. 9. Install HP Scalable NAS File Serving Software 3.7.
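For reference when verifying the bootloader, a hypothetical menu.lst fragment is shown below. The title text and kernel file names are illustrative only; the point is that default counts title entries from 0 and must select the HP entry:

```
default 0
timeout 5

title HP Scalable NAS binary kernel (illustrative title)
    root (hd0,0)
    kernel /vmlinuz-<hp-kernel-version> ro root=LABEL=/
    initrd /initrd-<hp-kernel-version>.img
```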
16. Load the HP Scalable NAS default HBA driver and verify that the server has access to the SAN storage. Run the following command to load the driver: # /etc/init.d/pmxs load 17. Run the following command to see a list of devices and review the output to verify that the SAN is configured as you expect. # cat /proc/partitions 18. Run the mximport tool to import the cluster configuration from a node running 3.5.1. The mximport tool automatically collects the necessary configuration information from the 3.5.
20. Change the HBA driver if necessary. HP Scalable NAS includes several versions of the HBA drivers for the supported FC host bus adapters. For each host bus adapter, one driver version is designated as the default in the /etc/opt/hpcfs/fc_pcitable file. If you require a different driver version (a version included with HP Scalable NAS or from another source), you can install that driver version in place of the HP Scalable NAS default version.
IMPORTANT: Upgrade the servers in order of IP address, starting with the server with the highest numbered address. Then continue to upgrade the servers in descending order of IP address, with the lowest numbered server being upgraded last. Do not restart Matrix Server/HP Clustered File System after running the MxFS migration tool in step 1, as the resulting state change may not be captured in the export. 1. Run the MxReg migration tool, mxfs_upgrade_prep.
4. Stop Matrix Server/HP Clustered File System on the server: # /etc/init.d/pmxs stop 5. Back up the cluster configuration on the server, as described under Back up the cluster configuration, page 15. 6. Perform a fresh installation of the operating system. See Appendix A, page 93 for operating system information. 7. Install the kernels RPM corresponding to your OS. This RPM contains the HP Scalable NAS binary, source, and debug kernels. # rpm -i pmxs-kernels-3.7.0-..
12. Install the FS Option for Linux support RPM from the product CD or the location where you have downloaded the software. # rpm -i /mxfs--support-3.7.0-..rpm 13. Install the FS Option for Linux product RPM from the CD or the location where you have downloaded the software. # rpm -i /mxfs-3.7.0-..rpm 14. Install the Management Console and mx utility. # rpm -i /mxconsole-3.7.0-..rpm 15. Install the quota tools RPM if desired.
20. Change the HBA driver if necessary. HP Scalable NAS includes several versions of the HBA drivers for the supported FC host bus adapters. For each host bus adapter, one driver version is designated as the default in the /etc/opt/hpcfs/fc_pcitable file. If you require a different driver version (a version included with HP Scalable NAS or from another source), you can install that driver version in place of the HP Scalable NAS default version.
NOTE: If NFS serving does not start, place the mxreg_export.xml file that you backed up earlier into the directory /var/opt/hpcfs/run (or /var/opt/polyserve/run). If you did not save the mxreg_export.xml file, a backup is available at /var/opt/hpcfs/run/mxreg_export_backup.xml on the first server to be upgraded. Rename this file to mxreg_export.xml. If NFS serving still does not start, check the log file for the import script in /var/opt/hpcfs/debug/mxfs_upgrade.log.
3 Upgrade Matrix Server and MxDB-Oracle-HiAv These instructions provide a guide for upgrading a typical MxDB-Oracle-HiAv 3.5.1 Oracle single instance database that is running in the cluster. You may need to modify this process to fit the needs of your site. For example, you may have additional packages or applications that need to be migrated. Be sure to test the upgrade process thoroughly before deploying the process on production systems. Upgrades to HP Scalable NAS File Serving Software 3.7.
Also save the following MxDB-Oracle-HiAv information: • Record the primary and backup nodes for the Virtual Oracle Service associated with each database. Run the following command for each database: mxdb -d -Q • Preserve any custom MxDB-Oracle-HiAv files. For example, MxDB-Oracle-HiAv supports the optional configuration file mxdbha_.conf, where is the name of the service monitor associated with the Virtual Oracle Service. If used, the file is located in $ORACLE_HOME/dbs.
availability. When Oracle binaries are relinked from an upgraded node, only upgraded nodes can support a failover. Servers still running the old version of the operating system will not be able to use the relinked Oracle Home. Although the servers will still be candidates for failover, they will not be able to support a failover until they are upgraded. Consequently, HP recommends that you do not relink binaries until all of the cluster nodes have been upgraded.
NOTE: If a server is accidentally upgraded out of order, when a higher-numbered server joins the cluster, the servers still running the old version of Matrix Server will think they need to leave the cluster and will no longer send configuration and status updates. To correct this problem, stop HP Scalable NAS on the node that was upgraded out of order. Continue upgrading the other nodes before starting HP Scalable NAS on that node.
5. Back up the cluster configuration on the server. See Back up the cluster configuration, page 15. If you are using the HP Samba deployment kit, also save the Samba configuration files (see Samba configuration, page 16). 6. Perform a fresh installation of the operating system. See Appendix A, page 93. 7. Install the kernels RPM corresponding to your OS. This RPM contains the HP Scalable NAS binary, source, and debug kernels. # rpm -i pmxs--kernels-3.7.0-..
13. Install the Performance Dashboard RPM if desired. # rpm -i /mxsperfmon-3.7.0-..rpm 14. Install the MxDB-Oracle-HiAv RPM package. # rpm -i /mxdb_oracle_ha_3.7.0-..rpm 15. Verify host name resolution for the virtual host (check DNS or /etc/hosts). 16. Reboot the server. 17. Load the HP Scalable NAS default HBA driver and verify that the server has access to the SAN storage. Run the following command to load the HBA driver: # /etc/init.d/pmxs load 18.
21. Change the HBA driver if necessary. HP Scalable NAS includes several versions of the HBA drivers for the supported FC host bus adapters. For each host bus adapter, one driver version is designated as the default in the /etc/opt/hpcfs/fc_pcitable file. If you require a different driver version (a version included with HP Scalable NAS or from another source), you can install that driver version in place of the HP Scalable NAS default version.
26. HP Scalable NAS includes a script called SizingActions that configures certain operating system parameters to improve system performance. This script must be disabled if you need to set Oracle-specific kernel parameters on the upgraded server. (See “Determine whether the SizingActions script should be used” on page 87 for more information about the script.) After disabling the script, add the Oracle-specific kernel parameters to the /etc/sysctl.conf file and run the command sysctl -p to apply the changes.
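As an illustration only, the sysctl additions might look like the following. Every value below is a placeholder, not a recommendation; take the actual parameters and values from your Oracle installation documentation:

```
# Oracle-specific kernel parameters (placeholder values only)
kernel.shmmax = 4294967295
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 1024 65000
```

Running sysctl -p then applies the settings without a reboot.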
d. In the Connect to field, specify the hostname of the upgraded server. e. Click the As User button to the right of the Connect to field to display the Authentication Parameters dialog. Specify the root user and password, click both Add to bookmarks and Remember this password, and then click OK. The Add Bookmarks dialog appears next. Specify a name for the cluster and click OK.
f. Add the remaining servers to the bookmarked cluster. On the HP Scalable NAS Connect window, enter the hostname for the next server, click the As User button, and then specify the root user and password and click Add to bookmarks and Remember this password on the Authentication Parameters dialog. When you click OK, the Add Bookmark dialog will allow you to add the server to the bookmarked cluster. Repeat this step for each additional server. g.
NOTE: Relinking Oracle binaries causes an Oracle service outage and may need to be scheduled.
4 Upgrade Matrix Server and MxDB-Oracle-HiAv and change the Oracle word size These instructions provide a guide for upgrading a typical MxDB-Oracle-HiAv 3.5.1 Oracle single-instance database running in a cluster with a 32-bit operating system. You may need to modify this process to fit the needs of your site. For example, you may have additional packages or applications that need to be migrated. Be sure to test the upgrade process thoroughly before deploying the process on production systems.
you can also upgrade to a newer Oracle release. MxDB-Oracle-HiAv 3.7 supports both 64-bit 10.2.0.4 and 11.1.0.7. Performance characteristics can be very different between Oracle major releases, and init.ora parameters often require adjustment or become obsolete. Similarly, performance tuning differences exist when moving from a 32-bit to 64-bit environment, particularly changes in memory requirements.
Back up system, database, and MxDB-Oracle-HiAv files Before starting the upgrade, make a complete system backup and have a recent full database backup and archive logs available. As this upgrade requires a complete replacement of the OS and server hardware, be sure to capture local Oracle files such as /etc/oratab that will not be preserved so you can reconfigure the new system correctly.
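A minimal sketch of capturing such local files before the OS is replaced follows. The file list and backup location are illustrative; extend the list with any site-specific files, and note that /etc/passwd and /etc/group are included only as a reference for recreating accounts with matching uids and gids later:

```shell
# Preserve local Oracle-related files that a fresh OS install would
# otherwise lose (illustrative list).
ORA_SAVE=/tmp/oracle-local-files
mkdir -p "$ORA_SAVE"
for f in /etc/oratab /etc/oraInst.loc /etc/passwd /etc/group; do
    if [ -f "$f" ]; then
        cp -p "$f" "$ORA_SAVE/"    # -p keeps permissions and timestamps
    fi
done
```

Copy the saved directory to another server or to shared storage along with the rest of your backups.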
This command removes all files associated with MxODM and restores Oracle to its configuration before MxODM was installed. Be sure to consider the resulting changes to the performance characteristics during your upgrade testing. Order for upgrading servers Servers must be upgraded one-at-a-time during the upgrade. The server with the numerically highest primary IP address must be upgraded first.
To retain a level of high availability while the servers are being upgraded, HP recommends that you divide your servers into two groups and upgrade one group at a time. Place the servers with higher IP addresses in group 1 (the “high IP group”) and place the remaining servers in group 2 (the “low IP group”).
3. Change the Oracle word size for each $ORACLE_HOME. This step requires the following: • Shut down all databases associated with the $ORACLE_HOME. • Remove (delete) all Virtual Oracle Services associated with the $ORACLE_HOME. • Change the word size from 32-bit to 64-bit for each database according to the Oracle documentation. NOTE: All databases associated with an $ORACLE_HOME must be migrated together.
3. If quotas are configured on PSFS filesystems, back up the quota information. See PSFS filesystems, page 18. 4. Stop Matrix Server on the server: # /etc/init.d/pmxs stop 5. Back up the cluster configuration on the server. See Back up the cluster configuration, page 15. If you are using the HP Samba deployment kit, also save the Samba configuration files (see Samba configuration, page 16). 6. Perform a fresh installation of the operating system.
11. Install the Management Console and mx utility. # rpm -i /mxconsole-3.7.0-..rpm 12. Install the quota tools RPM if desired. # rpm -i /pmxs-quota-tools-3.13-..rpm 13. Install the Performance Dashboard RPM if desired. # rpm -i /mxsperfmon-3.7.0-..rpm 14. Install the MxDB-Oracle-HiAv RPM package. # rpm -i /mxdb_oracle_ha_3.7.0-..rpm 15. Verify host name resolution for the virtual host (check DNS or /etc/hosts).
21. Change the HBA driver if necessary. HP Scalable NAS includes several versions of the HBA drivers for the supported FC host bus adapters. For each host bus adapter, one driver version is designated as the default in the /etc/opt/hpcfs/fc_pcitable file. If you require a different driver version (a version included with HP Scalable NAS or from another source), you can install that driver version in place of the HP Scalable NAS default version.
25. Reintroduce Oracle-specific information. (Note that Oracle uids and gids on this node must match the other nodes in the cluster.) • The group file (/etc/group) contains entries for the dba and oinstall groups. • The password file (/etc/passwd) contains an account for user oracle. • Group pmxs includes user oracle. • The oracle account has the correct password. • The home directory for user oracle exists (for example, /home/oracle) and has the appropriate Oracle environmental modifications.
NOTE: Complete steps 1–4 for each $ORACLE_HOME and its databases. The remaining steps need to be completed only once. 1. Complete pre-upgrade steps. The Oracle 32-bit to 64-bit upgrade procedures may include pre-upgrade steps that need to be performed before the actual upgrade. These steps should be executed from your 32-bit Oracle service (from a server in the low IP group).
6. Create a .matrixrc file for user root. (If the root password is the same on all servers, the file needs to be created only once and can then be copied to each subsequent server.) The mxdb interface provided with MxDB-Oracle-HiAv checks the .matrixrc file for the user name and password to be used when running scripts (the root user and password are required). You can use the HP Scalable NAS Connect dialog to create bookmarks specifying the authentication information for each server.
The Add Bookmarks dialog appears next. Specify a name for the cluster and click OK. f. Add all of the remaining servers to the bookmarked cluster, including the servers in the low IP group. On the HP Scalable NAS Connect window, enter the hostname for the next server, click the As User button, and then specify the root user and password and click Add to bookmarks and Remember this password on the Authentication Parameters dialog.
9. Adjust /etc/oratab entries and make sure they reflect the new 64-bit $ORACLE_HOME and databases already upgraded. Using the configuration information that you saved earlier, recreate the Virtual Oracle Services for all databases and $ORACLE_HOMEs in the cluster. See the section “Place databases under MxDB-Oracle-HiAv control” in Chapter 3 of the MxDB-Oracle-HiAv installation and administration guide.
4. Back up the cluster configuration on the server. See Back up the cluster configuration, page 15. If you are using the HP Samba deployment kit, also save the Samba configuration files (see Samba configuration, page 16). 5. Perform a fresh installation of the operating system on the server. Oracle prerequisites can also be installed at this time. See Appendix A, page 93. 6. Install the kernels RPM corresponding to your OS. This RPM contains the HP Scalable NAS binary, source, and debug kernels.
12. Install the Performance Dashboard RPM if desired. # rpm -i /mxsperfmon-3.7.0-..rpm 13. Install the MxDB-Oracle-HiAv RPM package. # rpm -i /mxdb_oracle_ha_3.7.0-..rpm 14. Verify host name resolution for the virtual host (check DNS or /etc/hosts). 15. Reboot the server. 16. Load the HP Scalable NAS default HBA driver and verify that the server has access to the SAN storage. Run the following command to load the HBA driver: # /etc/init.d/pmxs load 17.
20. If you installed an HBA driver in the previous step, run the following command to load it: # /etc/init.d/pmxs load Then run the following command to see a list of devices and review the output to verify that the SAN is configured as you expect. # cat /proc/partitions 21. Check the mount points for the PSFS filesystems on the server (for example, u01, u02), and recreate them if necessary to match the other servers in the cluster. 22. Start HP Scalable NAS on the node.
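For reference when reviewing the device list, the cat /proc/partitions output is a fixed four-column table. A hypothetical excerpt is shown below; the device names and sizes are examples only, with sdb standing in for an expected SAN LUN:

```shell
cat /proc/partitions
# A hypothetical excerpt:
#   major minor  #blocks  name
#      8     0  71687372  sda
#      8     1    104391  sda1
#      8    16 488386584  sdb
```

If an expected LUN is missing from the listing, recheck the HBA driver and SAN zoning before continuing.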
27. Re-enable the server: mx server enable 28. Reboot the server. The server will now join the 3.7.0 cluster. 5. Redistribute Virtual Oracle Services After all of the nodes have been upgraded, reconfigure the primary and backup nodes for your Virtual Oracle Services to include the servers in the low IP group, placing the Virtual Oracle Services on their normal primary and backup servers across the new 64-bit cluster.
These articles describe migrating from 32-bit to 64-bit:
• Doc ID 62290.1: Changing between 32-bit and 64-bit Word Sizes
• Doc ID 341880.1: How to convert a 32-bit database to 64-bit database on Linux?
• Doc ID 209766.1: Memory Requirements of Databases Migrated from 32-bit to 64-bit
These articles describe Oracle RDBMS installs on RHEL or SLES:
• Doc ID 376183.1: Defining a default RPMs Installation of the RHEL OS
• Doc ID 386391.1: Defining a default RPMs Installation of the SLES OS
• Doc ID 748378.
5 Upgrade Matrix Server only Upgrades to HP Scalable NAS File Serving Software 3.7.0 are supported only from Matrix Server 3.5.1. If the Management Console is installed on client machines, you will also need to upgrade that software. Upgrade considerations You should be aware of the following when upgrading to the HP Scalable NAS 3.7.0 release. Order for upgrading servers Servers must be upgraded one-at-a-time during the upgrade.
NOTE: If a server is accidentally upgraded out of order, when a higher-numbered server joins the cluster, the servers still running the old version of Matrix Server will think they need to leave the cluster and will no longer send configuration and status updates. To correct this problem, stop HP Scalable NAS on the node that was upgraded out of order. Continue upgrading the other nodes before starting HP Scalable NAS on that node. Upgrade procedure To perform the upgrade, complete the following steps.
7. Install the appropriate HP Scalable NAS kernel. To install the binary kernel, run the following command: # rpm -ihv kernel-HPPS-.370..x86_64.rpm After installing the RPM, verify the bootloader. Check the file /boot/grub/menu.lst and verify that the default is set to the HP binary kernel. NOTE: If you need to compile third-party kernel modules or build a custom kernel, see Appendix B, page 99. 8. Reboot the server. 9. Install HP Scalable NAS File Serving Software 3.7.
16. This step applies only to the first node to be upgraded. Skip this step when upgrading subsequent nodes. a. Run the mximport tool to import the cluster configuration from a node running 3.5.1. The mximport tool automatically collects the necessary cluster configuration information from the 3.5.1 node and sets a flag on the node being upgraded to indicate that this is an upgraded node. The 3.5.1 node must be accessible via ssh.
19. If you installed an HBA driver in the previous step, run the following command to load it: # /etc/init.d/pmxs load Then run the following command to see a list of devices and review the output to verify that the SAN is configured as you expect. # cat /proc/partitions 20. Check the mount points for the PSFS filesystems on the server, and recreate them if necessary to match the other servers. 21. Start HP Scalable NAS on the node: # /etc/init.d/pmxs start 22.
6 Upgrade HP Clustered Gateway/HP 4400 Scalable NAS systems This chapter describes how to upgrade the following systems: • HP Clustered Gateway 3.5.1 systems running on ProLiant DL380 G5 or DL380 G4 servers • HP 4400 Scalable NAS systems running HP Clustered Filesystem 3.5.1 Order for upgrading servers Servers must be upgraded one-at-a-time during the upgrade. The server with the numerically highest primary IP address must be upgraded first.
Pre-upgrade: MxFS migration package Install the MxFS migration package The MxFS migration package should be installed on all nodes in the cluster before upgrading to 3.7.0. The package includes tools that package and save the MxReg registry information on each node. The MxFS migration package is provided in the RPM mxfs-migration-3.5.1-..rpm, which is included in the product distribution and is also available as a stand-alone package in your download area on www.hp.com.
NOTE: If you do not specify a PSFS filesystem, mximport uses the default location, which is the local /var/opt/polyserve/run (or /var/opt/hpcfs/run) directory on the server. Although the mximport tool copies much of the cluster configuration, it does not copy the MxReg data. It simply creates a structure for this data that will be used by the remaining servers as they are upgraded. Migration actions on the second and subsequent servers The migration package includes a tool called mxfs_upgrade_prep.
Upgrade to the 3.7.0 release Upgrade the first node The server with the highest IP address should be upgraded first. Use this procedure to upgrade the server. NOTE: Before starting the upgrade, back up any customizations, supporting application settings, scripts (Samba scripts, event notifier scripts, and so on), and local changes made to cluster configuration files. HP also recommends backing up all PSFS filesystems. 1. Disable the server on the Management Console.
NOTE: The Quick Restore installs the Linux Device Mapper MPIO RPMs and the HP Device Mapper Multipath Enablement Kit and also configures the necessary fc_pcitable entries. It is not necessary to update the file manually. 8. Update entries in the /etc/hosts file. 9. Load the HBA driver and verify that the server has access to the SAN storage. Run the following command to load the driver: # /etc/init.d/pmxs load 10. Recreate all mount points that existed prior to the upgrade.
NOTE: If you do not specify a location on a PSFS filesystem, mximport will use the default location, which is /var/opt/hpcfs/run on the local server. After the migration script has been run on the last node being upgraded to 3.7.0, you will need to copy the mxreg_export.xml file from /var/opt/hpcfs/run on that server to the first server to be upgraded.

12. Start HP Scalable NAS on the upgraded server. Run this command:

# /etc/init.d/pmxs start

13. Re-enable the server.
NOTE: When the migration tool has been run on the last node to be upgraded, note the following:

• If the MxReg data was exported to the local /var/opt/hpcfs/run directory on the servers, copy the mxreg_export.xml file from that directory on the last server migrated with mxfs_upgrade_prep.sh to the /var/opt/hpcfs/run directory on the first node that was upgraded to 3.7.0.

• If you specified a shared path when you ran the mximport tool, it is not necessary to copy the file to the server running 3.7.0.
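The copy step above can be sketched as follows. Local directories stand in for the two servers so the example is self-contained; on a real cluster you would use scp between nodes, and the host name shown in the comment is hypothetical:

```shell
# Illustration of copying mxreg_export.xml from the last-migrated
# server to the first upgraded node (local directories as stand-ins).
LAST=/tmp/last-node/var/opt/hpcfs/run
FIRST=/tmp/first-node/var/opt/hpcfs/run
mkdir -p "$LAST" "$FIRST"

# Simulate the file produced by the migration tool.
echo '<mxreg/>' > "$LAST/mxreg_export.xml"

# Real-cluster form (hypothetical host name):
#   scp /var/opt/hpcfs/run/mxreg_export.xml root@first-node:/var/opt/hpcfs/run/
cp "$LAST/mxreg_export.xml" "$FIRST/"

ls "$FIRST"
```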
8. The Quick Restore provides two versions of the fc_pcitable file, one for ProLiant DL380 G4 servers and one for ProLiant DL380 G5 servers; the default is the G5 version. If this server is a ProLiant DL380 G4, rename the files as follows:

# mv /etc/opt/hpcfs/fc_pcitable /etc/opt/hpcfs/fc_pcitable-G5
# mv /etc/opt/hpcfs/fc_pcitable-G4 /etc/opt/hpcfs/fc_pcitable

9. Update the entries in the /etc/hosts file.

10. Load the HBA driver and verify that the server has access to the SAN storage.
Post-upgrade tasks

When all nodes have been upgraded to 3.7.0 and the cluster is running, complete the following procedures. You may also want to take some of the actions described in Chapter 7, page 81.

License alert

After a node is upgraded, a “License is invalid” alert appears even though the correct license file is present in /etc/opt/hpcfs/licenses. This is normal and can be remedied by issuing the mxconsole command and opening the Configure Cluster window.
7 Post-upgrade steps

After upgrading to the HP Scalable NAS File Serving Software 3.7.0 release, you may need to do the following:

• Add local changes from 3.5.1 configuration files to 3.7.0 configuration files, page 81.
• Verify Linux Device Mapper MPIO operations, page 82.
• Replace membership partitions that are too small, page 82.
• Upgrade filesystems for small files (optional), page 83.
• Configure firewalls (optional), page 84.
• Determine whether the SizingActions script should be used, page 87.
IMPORTANT: Do not copy the 3.5.1 versions of the files over the 3.7.0 versions. Doing this will cause the loss of configuration information needed by HP Scalable NAS 3.7.0. Instead, merge the changes from the 3.5.1 files into the 3.7.0 versions of the files. Be sure to examine the 3.5.1 files carefully to ensure that the changes are valid for the 3.7.0 release. When modifying a 3.7.0 file, you will need to make the same changes to the file on each server in the cluster.
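Reviewing the differences before merging helps avoid losing either the 3.5.1 local changes or the new 3.7.0 settings. The sketch below demonstrates the approach with invented file names and settings; substitute the actual configuration files from your backup:

```shell
# Demo of reviewing changes before a manual merge (file names and
# settings are illustrative, not actual product configuration files).
printf 'setting=default\nlocal_tweak=yes\n' > /tmp/example-3.5.1.conf
printf 'setting=default\nnew_3.7.0_option=1\n' > /tmp/example-3.7.0.conf

# diff exits nonzero when files differ, so allow that here.
diff -u /tmp/example-3.5.1.conf /tmp/example-3.7.0.conf || true
```

Lines removed in the diff (prefixed with -) are your 3.5.1 local changes; carry them into the 3.7.0 file by hand only after confirming they are still valid for the new release.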
membership partitions” in the HP Scalable NAS File Serving Software administration guide for more information.

Upgrade filesystems for small files (optional)

The 3.7.0 release includes a performance enhancement for small files on PSFS filesystems. This feature is enabled by default in PSFS filesystems created on HP Scalable NAS 3.7.0. PSFS filesystems created on earlier releases must be upgraded to enable the small files performance enhancement.
Configure firewalls (optional)

It is important to configure firewalls appropriately. Following are some considerations:

• For RHEL5, the default OS installation configures firewall rules that prevent the correct operation of HP Scalable NAS. HP recommends that you either select the “No firewall rules” option, or select “Custom” and then ensure that the service ports required by HP Scalable NAS are open.
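If you choose the “Custom” route, the rules might look like the following fragment. This is only an illustration of iptables-style rules using ports from the tables that follow; it is not an HP-supplied rule set, and you should adapt it to the firewall tool in use at your site:

```
# Illustrative firewall rules fragment (iptables syntax assumed)
-A INPUT -p tcp --dport 9050 -j ACCEPT        # Management Console
-A INPUT -p tcp --dport 7659 -j ACCEPT        # Group Communications clients
-A INPUT -p udp --dport 7659:7661 -j ACCEPT   # Group Communications
```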
Port   Transport Type   Description
7659   TCP              Group Communications client connections
7659   UDP              Group Communications multicast and unicast messages
7660   UDP              Group Communications control token
7661   UDP              Group Communications administration and statistics
8649   TCP              Perfmon
8649   UDP              Perfmon
8651   TCP              Perfmon
8651   UDP              Perfmon
8652   TCP              Perfmon
8652   UDP              Perfmon
8940   UDP              PanPulse network health detector
9050   TCP              HP Scalable NAS Management Console
9060   TCP              DLM control and statistics c
Port   Transport Type   Description
861    TCP              mountd
861    UDP              mountd
863    TCP              rquotad
863    UDP              rquotad
865    TCP              statd
865    UDP              statd
866    TCP              statd
866    UDP              statd
892    TCP              rpc.mountd
892    UDP              rpc.mountd
2049   TCP              rpc.nfsd
2049   UDP              rpc.nfsd
4045   TCP              lockd
4045   UDP              lockd

Samba/CIFS network port numbers

Samba/CIFS uses the following port numbers.
Port   Transport Type   Description
139    UDP              NetBIOS Sessions Service
445    TCP              Microsoft Directory Service
445    UDP              Microsoft Directory Service

Determine whether the SizingActions script should be used

HP Scalable NAS includes a script called SizingActions that configures certain operating system parameters to improve system performance, particularly in a file serving environment. The changes improve network throughput and make better use of system memory.
3. Reboot the node to ensure that the SizingActions parameters are cleared from the system. The SizingActions script will now be inactive at system start.

Restore the Samba configuration

Use the following procedure to restore the Samba configuration used in the 3.5.1 release.

1. Mount the filesystem that will be used for Samba shares.

2. Install Samba on all servers in the cluster:

rpm -ivh samba-client-3-3.0.30-35x86_64.rpm
rpm -ivh samba3-3.0.30-35x86_64.
6. Copy Samba configuration files to all other nodes. Run the following HP Scalable NAS script:

/opt/hpcfs/tools/smbcfg_dist

This script copies the following files to the other cluster nodes:

• smb.conf
• smb.conf.
• smb.default
• smbpasswd
• smbusers
• smb.fstab
• lmhosts

Use scp to copy any other configuration files to the /etc/samba directory on the other nodes. The Samba virtual host should now be up and running.
Recreate notifiers

If you previously used notifier scripts, you will need to recreate the notifiers. HP Scalable NAS File Serving Software 3.7.0 provides three event notifier services:

• SNMP Notifier Service. This service sends SNMP notifications, or traps, to the configured SNMP targets when the selected events occur.
• Email Notifier Service. This service sends email to specified addresses when the selected events occur.
• Script Notifier Service.
• Role-Based Security Control. You can create roles, define the operations that are allowed or denied for each role, and add specific user or group accounts to the roles.

• Event management system. The event notifier services can be configured to send an SNMP trap, to send email, or to run a script when specific events occur. The new event viewer shows the HP Scalable NAS event messages currently in the event log.
A Install the operating system

Installation steps

Before installing HP Scalable NAS, you will need to complete these steps:

1. Install a supported version of the operating system.

2. Build a custom kernel if needed. See Appendix B, page 99.

NOTE: If you will be using an HP Scalable NAS binary kernel, it should be installed as specified in the upgrade procedure.

3. Determine whether the HBA driver should be loaded either during the initial booting of the kernel or when HP Scalable NAS is started.

4.
• For RHEL5, the default OS installation configures firewall rules that prevent the correct operation of HP Scalable NAS. HP recommends that you either select the “No firewall rules” option, or select “Custom” and then ensure that the service ports required by HP Scalable NAS are open. See the HP Scalable NAS File Serving Software administration guide for more information about these ports.
The installation creates the directory structure /opt/polyserve/lib/kernels. RPMs for the binary, source, and debug kernels are in the kernels directory. Then build the kernel using the appropriate source kernel RPMs from the kernels RPM that you just installed. See Appendix B, page 99 for more information. 3.
SAN boot disk With certain storage arrays, the boot disk can be on the SAN. In this case, the HBA driver must be loaded with the kernel so that the boot disk can be located. (You may need to take steps to ensure that the appropriate HBA driver is loaded. See your vendor documentation for more information.) You will need to use non-fabric fencing with this configuration.
nsswitch.conf specifies that the hosts file will be examined first (see the nsswitch.conf(5) man page). Other mechanisms, apart from /etc/hosts, can also be used, depending on your site's network configuration.

• Add psfs to the prune filesystem list. In RHEL5, the list starts with PRUNEFS and is contained in the file /etc/updatedb.conf. In SLES10, the list starts with UPDATE_PRUNEFS and is contained in the file /etc/sysconfig/locate. The following example shows an updated list on an RHEL5 system.
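A sketch of making such an update on an RHEL5-style system follows. It operates on a copy of the file so it can be run safely; the initial filesystem list is an assumption, and on a real system you would edit /etc/updatedb.conf in place:

```shell
# Append psfs to the PRUNEFS list in a copy of updatedb.conf.
conf=/tmp/updatedb.conf

# Simulated starting contents (the real file's list will differ).
printf 'PRUNEFS="nfs sysfs tmpfs"\n' > "$conf"

# Add psfs inside the existing quoted list.
sed -i 's/^PRUNEFS="\(.*\)"/PRUNEFS="\1 psfs"/' "$conf"

grep '^PRUNEFS' "$conf"
```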
B Build a custom kernel

This appendix contains the following procedures for both RHEL5 and SLES10:

• Compile a third-party kernel module
• Extract HP Scalable NAS kernel patches
• Rebuild the entire kernel from source

RHEL5

Compile a third-party kernel module

In general, third-party kernel modules are compiled using a skeletal kernel source tree and pre-computed symbols for the specific kernel configuration they will be loaded into.
will both point to /usr/src/kernels/2.6.18-92.el5-HPPS-x86_64. The environment is now ready to support compilation of third-party kernel modules. Refer to the documentation provided with the module for installation, configuration, and build process information.

Extract HP Scalable NAS kernel patches

Compiling third-party software using the installed kernel build environment, as described in the previous section, is a preferred technique.
2. Create a patched kernel source tree. Run the following command:

rpmbuild -bp /usr/src/redhat/SPECS/kernel-2.6.spec

This step populates the directory /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.arch.

3. Rebuild the kernel using the standard Linux kernel configuration and build process.
SLES10

Compile a third-party kernel module

In general, third-party kernel modules are compiled using a skeletal kernel source tree and pre-computed symbols for the specific kernel configuration they will be loaded into. To set up this expected build environment, you will need to install the following RPMs:

• The kernel binary RPM: kernel-HPPS-2.6.16.60-0.21.370..x86_64.rpm
• The kernel architecture-specific source RPM: kernel-source-2.6.16.60-0.21.370..x86_64.
there may be cases where you must extract the HP Scalable NAS kernel patches so they can be integrated into your custom build environment. To begin, open the architecture-independent kernel source RPM:

mkdir tmp
rpm2cpio kernel-source-2.6.16.60-0.21.370..src.rpm | (cd tmp; cpio -id)

The HP Scalable NAS patches are all in tmp/patches.hpps.tar.bz2. When you integrate the patches into your kernel build source, be sure to add their names to the series.conf file.

hppsversion.
C Transition to Linux Device Mapper MPIO

The following information describes how to transition to Linux Device Mapper MPIO if you are currently using either the QLogic failover feature or the Matrix Server/HP Clustered File System mxmpio software for multipath support. Transition procedures from other third-party MPIO solutions are not currently available.

NOTE: This procedure does not apply to HP 4400 Scalable NAS systems or HP X5500 Storage Gateway for Linux systems.

The 3.7.
• device-mapper-1.02.13-6.14 or later
• readline_devel_5.1-24.19 or later

Configure Linux Device Mapper MPIO

Download the HP Device Mapper Multipath Enablement Kit and configure it as described below. (Do not use the directions provided with the kit.) Complete the following steps on each server:

1. Download the HP Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays 4.1.0 package. The package is available at the following location:

http://www.hp.com/go/devicemapper

2. Log in as root.
8. Either reboot the server or run the following commands:

a. Stop HP Scalable NAS:
# /etc/init.d/pmxs stop

b. Unload the HP Scalable NAS modules:
# /etc/init.d/pmxs unload

c. Start multipath services:
# /etc/init.d/multipathd start

d. Restart HP Scalable NAS:
# /etc/init.d/pmxs start

Update the fc_pcitable file

Update the entry for your FibreChannel Adapter device driver as described below.
QLogic HBA drivers

Locate the line for your FibreChannel Adapter device driver in the pcitable file and then add the following options inside the double quotes:

ql2xmaxqdepth=16 qlport_down_retry=10 ql2xloginretrycount=30 ql2xfailover=0 ql2xlbType=1 ql2xautorestore=0xa0 ConfigRequired=0

If you had previously enabled QLogic failover in the file, be sure to set the ql2xfailover option to 0. Also remove the comment character (#) if it appears at the beginning of the line.
daemon $DAEMON
RETVAL=$?
[ $RETVAL -eq 0 ] && touch $lockdir/$prog
echo
}

Unloading drivers

When Linux Device Mapper MPIO is configured on the cluster, it is necessary to remove the multipath devices before using the pmxs unload command, which unloads the cluster service and HBA drivers.

To remove the multipath devices, first stop HP Scalable NAS:

/etc/init.d/pmxs stop

Next, run this command:

dmsetup remove_all

Then unload the cluster service and HBA drivers:

/etc/init.