HP StorageWorks HP Scalable NAS File Serving Software installation guide
HP Scalable NAS 3.7.0
Legal and notice information © Copyright 2006, 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

About this guide
    Intended audience
    HP technical support
    Subscription service
    HP websites
Installation procedure
    1. Install the operating system
        Additional required OS packages
    2. Build the kernel (optional)
    3. HBA drivers and HP Scalable NAS
        Samba/CIFS network port numbers
    4. Verify downloaded RPMs
    5. Install the kernels RPM
    6. Install the HP Scalable NAS kernel
    7. Install third-party MPIO software (optional)
    8.
About this guide This guide provides information about installing the HP Scalable NAS File Serving Software and FS Option for Linux software-only products. Intended audience This guide is intended for administrators who will be performing the cluster installation. HP technical support For worldwide technical support information, see the HP support website: http://www.hp.
HP websites For additional information, see the following HP websites: • http://www.hp.com • http://www.hp.com/go/scalablenas • http://www.hp.com/go/storage • http://www.hp.com/service_locator • http://www.hp.com/support/manuals Documentation feedback HP welcomes your feedback. To make comments and suggestions about product documentation, please send a message to storagedocsFeedback@hp.com. All submissions become the property of HP.
1 Configuration information HP is continually expanding its supported hardware and operating system configurations. Check the compatibility matrix for the latest compatibility information: http://h18006.www1.hp.com/products/storage/software/polyserve/support/compatibility.pdf IMPORTANT: This document applies to customers installing the HP Scalable NAS software-only product.
Hardware configuration limits:
• Servers: two to 16 servers.
• Network interface cards: up to four network interfaces per server.
• Fibre Channel host bus adapters: four FC ports per server can be connected to the cluster SAN. Other FC ports can be connected to non-cluster SANs.
• Fibre Channel switches: two levels of cascading switches.
• Fibre Channel storage subsystems: up to 508 LUNs.
Supported HBA drivers The Host Bus Adapter vendors frequently release HBA drivers for Linux.
The minimum requirements for cluster servers are as follows: • AMD Opteron or Intel EM64T servers running a supported 64-bit operating system and kernel. • 10 GB of disk space on the installation drive for HP Scalable NAS and its log and runtime files. • Ethernet 10/100/1000 port. When configuring servers, you should be aware of the following: • All servers in the cluster must be on the same subnet. • Servers running the Management Console must have a windowing environment installed.
Cluster SAN configuration guidelines Following are guidelines for configuring the cluster SAN to be used with HP Scalable NAS: • For all Fibre Channel switches, it is best practice to place each HBA port and its storage ports in a separate zone. No other initiator HBA port should be present in this zone. IMPORTANT: If the cluster configuration includes an HP Fibre Channel Virtual Connect Module, the guideline above must be implemented.
I/O scheduler policies The Linux I/O schedulers (also called elevators) attempt to sort and issue disk I/O requests according to specific priority policies. Testing has shown that the deadline policy results in the best performance for the PSFS filesystem. The deadline policy is the default in the SLES10 kernel. For RHEL5, the default policy is cfq. In the HP RHEL5 binary kernels and kernel sources provided by HP, the I/O scheduler policy has been set to deadline.
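For custom-built kernels where the default policy remains cfq, the scheduler can also be selected with the standard Linux `elevator=` boot parameter. A sketch of a grub.conf kernel line follows; the kernel image name and root device are placeholders, not values from this guide:

```
# /boot/grub/grub.conf (fragment) -- elevator=deadline selects the
# deadline policy at boot; the image name and root= value are examples
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline
```

The policy in effect for a given disk can be checked at runtime by reading /sys/block/&lt;device&gt;/queue/scheduler, which shows the active policy in brackets.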
2 Install HP Scalable NAS and FS Option for Linux This chapter describes how to perform a new installation of HP Scalable NAS and, optionally, FS Option for Linux. Supported operating systems and kernels HP Scalable NAS and FS Option for Linux 3.7.0 are supported on both Red Hat Enterprise Linux 5 and SuSE Linux Enterprise Server 10. The following operating system versions and kernels are supported.
• Red Hat Enterprise Linux 5 Update 2, 64-bit: kernel 2.6.18-92.
source and debug kernels for that operating system. This guide describes how to install the kernel RPM and the binary and source RPMs. For RHEL5, the following binary and source kernel RPMs are provided: • kernel-HPPS-2.6.18-92.el5.370...rpm (the binary kernel) • kernel-2.6.18-92.el5.370..src.rpm (the architecture-independent source kernel) • kernel-HPPS-devel-2.6.18-92.el5.370...
• mxconsole_3.7.0..msi. The Management Console and mx utility in Microsoft Windows format. • pmxs-quota-tools-3.13-..rpm. Linux quota commands modified for use on the PSFS filesystem. • pmxs-quota-tools-3.13-.src.rpm. The source for the Linux quota commands modified for use on the PSFS filesystem. • mxsperfmon-3.7.0-..rpm. The Performance Dashboard software.
Action / Description:
• Configure firewalls (optional): Ensure that firewalls are configured to allow HP Scalable NAS to operate correctly.
• Verify downloaded RPMs: If you downloaded the HP Scalable NAS software, verify the signature for each RPM.
• Install the kernels RPM: This RPM includes the HP Scalable NAS binary, source, and debug kernels.
• Install the HP Scalable NAS kernel.
Action / Description:
• Run the mxcheck utility on each server: This utility verifies that the server's configuration meets the requirements for HP Scalable NAS.
• Set an HP Scalable NAS parameter (optional): This step is needed only if your SAN configuration includes a FalconStor device.
• Install snapshot software (optional): This step is needed only if you will be using the hardware snapshot feature.
• Configure the cluster.
Create LUNs or disk partitions for membership partitions HP Scalable NAS uses a set of membership partitions to control access to the SAN and to store the device naming database, which includes the global device names that HP Scalable NAS assigns to the SAN disks placed under its control. The membership partitions must be placed on the SAN, not on local storage. HP Scalable NAS can use either one or three membership partitions.
Configure Fibre Channel switches for the cluster NOTE: For all Fibre Channel switches, it is best practice to place each HBA port and its storage ports in a separate zone. No other initiator HBA port should be present in this zone.
• On Brocade switches only, run the snmpMibCapSet command on the switch. Change the famib setting to yes and accept the default values for the other settings. • If the servers are connected to switches in multiple fabrics, the physical ports on each switch must be assigned to unique domain IDs. A different domain ID must be used on each fabric (any given domain ID can exist on only one fabric in the SAN).
Port / Transport / Description:
• 2301 TCP: Array Configuration Utility
• 2301 UDP: Array Configuration Utility
• 2381 TCP: Array Configuration Utility
• 2381 UDP: Array Configuration Utility
• 6771 TCP: HTTPS connection from the HP Scalable NAS Management Console (fixed; IANA registration has been applied for)
Internal network port numbers The following network port numbers are used for internal, server-to-server communication.
Port / Transport / Description:
• 9050 TCP: HP Scalable NAS Management Console
• 9060 TCP: DLM control and statistics connections
• 9060 UDP: DLM point-to-point messages
• 9065 TCP: MSM control and statistics connections
• 9065 UDP: MSM point-to-point messages
NFS network port numbers The NFS network port numbers need to be specified explicitly in the /etc/sysconfig/nfs file to ensure that they are consistent across reboots. HP recommends that you use the following ports.
Port / Transport / Description:
• 2049 UDP: rpc.nfsd
• 4045 TCP: lockd
• 4045 UDP: lockd
Samba/CIFS network port numbers Samba/CIFS uses the following port numbers.
• 137 TCP: NetBIOS SMB Service
• 137 UDP: NetBIOS SMB Service
• 138 TCP: NetBIOS Datagram Service
• 138 UDP: NetBIOS Datagram Service
• 139 TCP: NetBIOS Sessions Service
• 139 UDP: NetBIOS Sessions Service
• 445 TCP: Microsoft Directory Service
• 445 UDP: Microsoft Directory Service
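To keep the NFS lockd ports above consistent across reboots, they can be pinned in /etc/sysconfig/nfs. A minimal sketch follows; LOCKD_TCPPORT and LOCKD_UDPPORT are the standard RHEL5 variable names, which is an assumption to verify against your distribution's NFS init scripts:

```
# /etc/sysconfig/nfs (fragment) -- pins lockd to the recommended ports
LOCKD_TCPPORT=4045
LOCKD_UDPPORT=4045
```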
5. Install the kernels RPM The pmxs--kernels-3.7.0-..rpm package includes RPMs for the binary kernel, the source kernels, and the debug kernels. OS-specific versions of the kernels RPM are included in the HP Scalable NAS product distribution. (See Supported operating systems and kernels, page 15, for a complete list of the kernels.) To install the kernels RPM, run this command: # rpm -i pmxs--kernels-3.7.0-..
• If you will be using Linux Device Mapper Multipath, it should be installed and configured after installing HP Scalable NAS. See Appendix C, page 67, for more information. • If you will be using other third-party MPIO software, install it now according to the product documentation. After the MPIO software is installed, verify that it can see the LUNs on the storage array. Also check the HP PolyServe Software Manuals web page for any articles about your MPIO software.
10. Install FS Option for Linux Install the FS Option for Linux support RPM from the product CD or the location where you have downloaded the software. # rpm -i /mxfs--support-3.7.0-..rpm Install FS Option for Linux from the product CD or the location where you have downloaded the software. # rpm -i /mxfs-3.7.0-..rpm When FS Option for Linux is installed, it copies the existing /etc/exports file to /etc/exports.
For more information, see the section “Host Bus Adapters (HBAs)” in the HP Scalable NAS File Serving Software administration guide. Do not start HP Scalable NAS after installing the driver. If you need to enable the failover feature provided with the QLogic HBA driver, see the section “Other MPIO support” in the HP Scalable NAS File Serving Software administration guide. NOTE: The /etc/hba.conf file must point to the correct hbaapi library. 13. Reboot and verify the HBA configuration Reboot the server.
14. Verify the SAN configuration This step verifies that the SAN devices are configured appropriately and can be viewed from the servers that will be in the cluster. You will need to perform this step on each server. If an HBA driver has not already been loaded, run the following command to load the driver: # /etc/init.d/pmxs load Next, run the following command to see a list of devices. # cat /proc/partitions Review the output to verify that the SAN is configured as you expect.
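When reviewing /proc/partitions output, it can help to strip the header and keep only the device names. The sketch below shows the parsing against a sample of the file's format (the two header lines, then one row per block device); the device names and sizes are illustrative:

```shell
# /proc/partitions has a column-header line followed by a blank line;
# NR > 2 skips both and $4 is the device name column.
sample='major minor  #blocks  name

   8     0  71687325 sda
   8    16 143374650 sdb'
echo "$sample" | awk 'NR > 2 { print $4 }'
# prints:
# sda
# sdb
```

The same awk filter can be run directly against /proc/partitions on a live server.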
# /opt/hpcfs/sbin/exportfssync -t Currently running kernel supports MxFS features If the command does not report this message, either the HP Scalable NAS binary kernel has not been installed, a custom kernel has not been compiled with the patches, the recompiled kernel has not been booted, or nfsd has not been loaded. 16. Run the mxcheck utility This utility should be run on each server. It verifies that the server’s configuration meets the requirements for running HP Scalable NAS.
On the last line, remove the # sign preceding psd_round2_delay and replace -1 with the number of seconds to wait before the psd driver retries the I/O. The recommended value is 45 seconds. psd_round2_delay 45 18. Install hardware snapshot software (optional) Hardware snapshots are supported on HP MSA2000, EVA, and XP storage arrays. Hardware snapshots are also supported on Engenio storage arrays.
XP firmware and software, or to learn more about configuring XP RAID Manager on your servers, contact your HP representative. Engenio storage arrays To take hardware snapshots on Engenio storage arrays, the latest version of SANtricity Storage Manager client software must be installed on all servers in the cluster. Also, the latest version of firmware must be installed on your storage array controllers. To locate this software and firmware, contact your Engenio representative. 19.
Next, enter your user name/password and then click Yes when you are asked whether you want to configure HP Scalable NAS. The Configure Cluster window then appears. You will need to specify information on the tabs in this order: General Settings, SAN & Fencing, Storage Settings, Cluster-Wide Configuration. General settings tab This tab asks for general information needed for cluster operations.
Enter a name or description for this cluster. The cluster name or description appears on the title bar of the HP Scalable NAS Management Console. The name or description can contain up to 80 characters. If you will be using a third-party manager, the name/description will be sent to the manager to help identify the source of SNMP traps. License. HP Scalable NAS can be used with either a temporary or a permanent license. The license is provided in a separate license file.
Notes regarding fencing Before configuring fencing, you should be aware of the following: • If you will be using the Virtual Connect Fibre Channel Module on the HP c-Class BladeSystem, you must configure web management-based fencing. Switch-based fencing cannot be used with these modules. • If you will be using IPMI interfaces with web management-based fencing, you will need to change the password from the factory default. The fencing feature will not work correctly if the password is not changed.
the hostname or IP address of the first FC switch. Repeat this procedure to specify the remaining FC switches, including cascading switches. SNMP Community String. The default SNMP community string for HP Scalable NAS is private. If you want to use a custom community string, enter the appropriate value here. The SNMP community string must be set to the same value on HP Scalable NAS and on the SAN switches configured above.
Remote Management Controller Vendor. Select the vendor for your Remote Management Controllers. For an IBM BladeCenter, also specify the Blade slot. If you will be using IPMI as the fencing method, you should be aware that only one IPMI session can be active at a time. HP Scalable NAS will fail to fence a server if another IPMI session is already active on that server at the time that the fencing attempt is made. Remote Management Controller ID.
• Hostname suffix. Specify the common suffix to append to each server name to determine the associated Remote Management Controller name. For example, if your server names are server1 and server2 and their Remote Management Controllers are server1-iLO and server2-iLO, enter -iLO as the suffix. • IP Delta. Specify the delta to add to each server’s IP address to determine the IP addresses of the associated Remote Management Controllers. For example, if your servers are 1.255.200.12 and 1.255.200.
• Vendor and type selections apply to all servers. This option is enabled by default. Disable the option if your Remote Management Controllers are from different vendors or if, in the case of IBM Remote Management Controllers, some are associated with IBM BladeCenter servers and others are not. • Login shared by all servers. Check this option if all servers in the cluster will be sharing the login account that you specified on the Remote Management Controller tab.
SAN Switches. Specify the hostnames or IP addresses of the Fibre Channel switches that are directly connected to the nodes in the cluster. Click Add, and then specify the hostname or IP address of the first FC switch. Repeat this procedure to specify the remaining FC switches, including cascading switches. SNMP Community String. The default SNMP community string for HP Scalable NAS is private. If you want to use a custom community string, enter the appropriate value here.
Membership Partitions. HP Scalable NAS uses a set of membership partitions to control access to the SAN and to store the device naming database, which includes the global device names that HP Scalable NAS assigns to the SAN disks placed under its control. You will need to select the LUNs or disk partitions that should be used as membership partitions. NOTE: LUNs must already be partitioned as described earlier under Create luns or disk partitions for membership partitions, page 20.
NOTE: When selecting partitions for use as membership partitions, be sure that they do not contain any needed data. When the membership partitions are created, any existing data will be erased. Snapshot configuration. HP Scalable NAS provides support for taking hardware snapshots of PSFS filesystems. (The filesystems must be located on storage arrays supported for snapshots.) If you want to use this capability, you will need to configure the snapshot method.
For MSA2000 arrays, you will be asked for the hostnames or IP addresses of the storage array controllers and also the username and password used to access the controllers. For XP arrays, you will be asked for the local and remote instance numbers of your XP RAID Manager configuration.
For Engenio arrays, you will be asked for the hostnames or IP addresses and password for your storage array controllers. Apply the configuration. When you have completed your entries on the Storage Settings tab, click Apply (at the bottom of the Configure Cluster window). You will then see a message stating the operation will erase all of the data on the membership partitions. Click Yes to continue. The configuration is then saved on the server that you are using to connect to the Management Console.
Scalable NAS on that server. If you configured Web Based Management Fencing, answer No. Otherwise, answer Yes. Go to the Cluster-Wide Configuration tab. Cluster-Wide Configuration tab This tab is used to export the cluster configuration to the other servers that will be in the cluster. It can also be used to start or stop HP Scalable NAS on specific servers and to test the fencing configuration. Select the servers to be configured.
Repeat this procedure to add the remaining servers to the cluster. Export the configuration. Click Select All to select all of the servers appearing in the Address column. Then click Export. The Last Operation Progress column will display status messages as the configuration is exported to each server. If you are using Web Management-based fencing, you may be asked for additional information about each server.
IMPORTANT: If you installed the Performance Dashboard, alert messages referring to adminfs, the administrative filesystem, will appear on the Management Console when HP Scalable NAS starts on the servers. This filesystem is required by the Performance Dashboard and the HP Scalable NAS replication feature. See “Create the administrative filesystem” in the HP Scalable NAS File Serving Software administration guide for information about setting up the filesystem.
Scalable NAS features such as clustered PSFS filesystems, the administrative filesystem, NFS file services, Samba/CIFS file services, hardware snapshots, cluster security, replication, the Performance Dashboard, and the event notification system. IMPORTANT: The replication feature has performance implications for the cluster.
additional changes are made to tune the operating system for the hardware provided with those systems. The SizingActions script is run when HP Scalable NAS starts up. The script does not determine whether the system parameters it adjusts have been modified from their default values by a user on the system. This can be an issue if, for example, you are running an application that requires system parameters such as rmem_max or wmem_max to be modified, typically in the /etc/sysctl.conf file.
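Application-specific values of this kind are normally recorded in /etc/sysctl.conf so they can be reapplied. A sketch of such an entry follows; the net.core keys are the standard Linux socket-buffer parameters, and the values are purely illustrative, not recommendations from this guide:

```
# /etc/sysctl.conf (fragment) -- illustrative values only
net.core.rmem_max = 262144
net.core.wmem_max = 262144
```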
• 64 MB of memory. • SuSE Linux Enterprise Server Version 7, 8, 9, or 10; Red Hat Linux 7.2 or 7.3 (Server or Workstation installation); Red Hat Advanced Server 2.1; Red Hat Enterprise Linux AS/ES 2.1; Red Hat Enterprise Linux AS/ES 3.0; Red Hat Enterprise Linux AS/ES 4.0, Red Hat Enterprise Linux 5. • On Red Hat systems, the “compat-libstdc++” package must be installed. • A windowing environment must be installed and configured.
3 Remove HP Scalable NAS software Remove HP Scalable NAS NOTE: If you need to uninstall HP Scalable NAS before upgrading to a later version of the product, use the directions in the HP Scalable NAS File Serving Software upgrade guide. The software should be uninstalled from a location outside of the HP Scalable NAS directory structure. Before removing the software, you will need to stop HP Scalable NAS. To do this, run the following script: # /etc/init.
Remove the Management Console Use this command to remove the Management Console on Linux: # rpm -e mxconsole To remove the Management Console on Windows, select Start > Settings > Control Panel > Add/Remove Programs and remove the application. Remove FS Option for Linux NOTE: If you need to uninstall FS Option for Linux before upgrading to a later version of the product, use the directions in the HP Scalable NAS File Serving Software upgrade guide.
A Install the operating system Installation steps Before installing HP Scalable NAS, you will need to complete these steps: 1. Install a supported version of the operating system. 2. Build a custom kernel if needed. See Appendix B, page 61. NOTE: If you will be using a HP Scalable NAS binary kernel, it should be installed as specified in the installation procedure in Chapter 2. 3.
Additional required OS packages On RHEL5 systems, HP Scalable NAS requires that the following packages be installed. Most of the packages are included in the “default” server installation. • lm_sensors-2.10.0-3.1.x86_64.rpm • net-snmp-5.3.1-24.el5.x86_64.rpm • samba-3.0.28-0.el5.8.x86_64.rpm NOTE: Samba releases later than 3.0.32 are not supported with HP Scalable NAS. • ecryptfs-utils-44 (or greater) • libnl-1.0-18.4.x86_64.rpm For SLES10 systems, no additional packages need to be installed. 2. Build the kernel (optional)
common HP Scalable NAS configurations. You may need to take certain steps to ensure that the HBA driver is booted at the correct point. HBA provided with HP Scalable NAS If you will be using an HBA driver provided with HP Scalable NAS, the HBA drivers should not be loaded during the initial boot of the kernel. Instead, when HP Scalable NAS is started, it will load its own HBA driver. The HP binary kernel is preconfigured to support this operation; you do not need to take any further action.
on the server. (The installation procedure in Chapter 2 specifies when to install the driver.) 4. Modify system files You may need to modify the following files on each server: • Edit the /etc/hosts file. The operating system places both localhost and the server name on the 127.0.0.1 entry in the /etc/hosts file. For example: 127.0.0.1 localhost.localdomain localhost HP Scalable NAS requires that the server name appear on a separate line with its real IP address, as in the following example.
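A sketch of the required /etc/hosts layout follows; the 10.x address and the server1 names are placeholders, not values from this guide:

```
127.0.0.1    localhost.localdomain localhost
10.1.1.101   server1.mydomain.com server1
```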
To avoid this problem, you will need to modify the mount command specified in the script to enable the root filesystem to be remounted via its device path. Locate the following line in the /etc/init.d/halt script:

mount | awk '{ print $3 }' | while read line; do

On this line, change $3 to $1:

mount | awk '{ print $1 }' | while read line; do
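The effect of the field change can be seen on a sample line of `mount` output: field 3 is the mount point, while field 1 is the device path that the remount needs.

```shell
# One line of typical `mount` output, fields separated by whitespace:
#   device  on  mountpoint  type  fstype  (options)
line="/dev/sda1 on / type ext3 (rw)"
echo "$line" | awk '{ print $3 }'   # prints "/"         (mount point)
echo "$line" | awk '{ print $1 }'   # prints "/dev/sda1" (device path)
```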
B Build a custom kernel This appendix contains the following procedures for both RHEL5 and SLES10: • Compile a third-party kernel module • Extract HP Scalable NAS kernel patches • Rebuild the entire kernel from source RHEL5 Compile a third-party kernel module In general, third-party kernel modules are compiled using a skeletal kernel source tree and pre-computed symbols for the specific kernel configuration they will be loaded into.
will both point to /usr/src/kernels/2.6.18-92.el5-HPPS-x86_64. The environment is now ready to support compilation of third-party kernel modules. Refer to the documentation provided with the module for installation, configuration and build process information. Extract HP Scalable NAS kernel patches Compiling third-party software using our installed kernel build environment, as described in the previous section, is a preferred technique.
2. Create a patched kernel source tree. Run the following command: rpmbuild -bp /usr/src/redhat/SPECS/kernel-2.6.spec This step populates the directory /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.. 3. Rebuild the kernel using the standard Linux kernel configuration and build process.
SLES10 Compile a third-party kernel module In general, third-party kernel modules are compiled using a skeletal kernel source tree and pre-computed symbols for the specific kernel configuration they will be loaded into. To set up this expected build environment, you will need to install the following RPMs: • The kernel binary RPM: kernel-HPPS-2.6.16.60-0.21.370...rpm • The kernel architecture-specific source RPM: kernel-source-2.6.16.60-0.21.370...
there may be cases where you must extract the HP Scalable NAS kernel patches so they can be integrated into your custom build environment. To begin, open the architecture-independent kernel source RPM:

mkdir tmp
rpm2cpio kernel-source-2.6.16.60-0.21.370..src.rpm | (cd tmp; cpio -id)

The HP Scalable NAS patches are all in tmp/patches.hpps.tar.bz2. When you integrate the patches into your kernel build source, be sure to add their names to the series.conf file. hppsversion.
C Configure Linux Device Mapper MPIO Linux Device Mapper Multipath can be used to provide multipath support for HP Scalable NAS. To use Device Mapper Multipath, you will need to complete the following steps: • Install the Device Mapper Multipath tools provided with the operating system. • Configure Device Mapper Multipath. Install Device Mapper Multipath tools Install the following Device Mapper Multipath RPMs on each server. The RPMs are provided with your OS distribution.
Complete the following steps on each server: 1. Download the HP Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays 4.1.0 package. The package is available at the following location: http://www.hp.com/go/devicemapper 2. Log in as root. 3. Copy the tar package to a temporary directory such as /tmp/HPDMmultipath. 4. Unbundle the package: # cd /tmp/HPDMmultipath # tar -xvzf HPDMmultipath-4.1.0.tar.gz # cd HPDMmultipath-4.1.0 5. Install the package: # ./INSTALL 6.
Format of the fc_pcitable file The /etc/opt/hpcfs/fc_pcitable file contains entries only for the drivers installed on your system. By default, the entries in the file are commented out, as indicated by the comment character (#). The file is used only if you add a new entry to the file or modify a default entry (by removing the comment character and then changing the appropriate values).
Emulex HBA drivers Locate the line for your FibreChannel Adapter device driver in the pcitable file and then add the following options inside the double quotes: lpfc_nodev_tmo=28 lpfc_lun_queue_depth=16 lpfc_discovery_threads=32 Also remove the comment character (#) if it appears at the beginning of the line. For example: 0x10df 0xfe00 lpfc lpfc-8.2.0.
Then unload the cluster service and HBA drivers: /etc/init.d/pmxs unload If the multipath devices are not removed before running the pmxs unload command, the HBA drivers will not be unloaded because the multipath devices are associated with storage on those adapters.
D Configure the cluster from the command line HP Scalable NAS provides mx commands that can be used to create the initial cluster configuration. These commands are equivalent to the Configure Cluster graphical user interface described in Chapter 2 and can be used in configuration scripts. Be sure to review the description of the Configure Cluster window (see “19. Configure the cluster” on page 33) to become familiar with the actions performed by the mx commands.
IMPORTANT: The mx config check command can be used between the steps of the sequence to validate the configuration. It is especially important to run the command before issuing the start and export commands. Be sure to correct any issues reported by the mx config check command before continuing with the sequence.
Now start HP Scalable NAS on the servers: mx --matrix nodeA server start The initial configuration of the cluster is complete. Sample configuration script The following example shows how a script can be used to configure the cluster. Although this is a bash script, the same ideas apply to other scripting methods. Note the following in the sample script: • The values for the mx commands are specified in a file named cluster.conf.
# Cluster Details
# Start Node
MATRIX="nodeA"
# Other Nodes
NODES="nodeB nodeC nodeD"
# Fibre Channel Switch Information
SWITCHES="99.10.180.253"
# Membership Partitions
MP="6-6005-08B3-0090-A860-22BE-8098-CC0A-0041/1 6-6005-08B3-0090-A860-22BE-8098-CC0A-0041/2 6-6005-08B3-0090-A860-22BE-8098-CC0A-0041/3"
# License File
LICENSE="permanent.
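The scripting pattern described above can be sketched as follows: the configuration values live in cluster.conf, and the script sources that file and loops over the node list. The variable names mirror the sample file; the mx command is only echoed here rather than executed:

```shell
# Write a minimal cluster.conf (values mirror the sample above).
cat > cluster.conf <<'EOF'
MATRIX="nodeA"
NODES="nodeB nodeC nodeD"
EOF

# Source the settings and build one mx invocation per node.
. ./cluster.conf
for node in $NODES; do
    echo "mx --matrix $MATRIX server add $node"
done
```

In a real script the echo would be replaced with the actual mx calls shown earlier, with `mx config check` run between steps.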
cluster, including cascading switches. If you are using an MPIO configuration, be sure to configure all of the switches. The default SNMP community string for HP Scalable NAS is private. If you want to use a custom community string, include the --community option. The SNMP community string must be set to the same value on HP Scalable NAS and the SAN switches.
Specify the hostname for the Remote Management Controller associated with this server. You will need to use this method if your Remote Management Controllers are from different vendors. This method must also be used for IBM BladeCenter servers. --hostsuffix Specify the common suffix to append to each server name to determine the associated Remote Management Controller name.
For hpmsa2000, specify the following: --controllerA The IP address of controller A. --controllerB The IP address of controller B. --username The user name required to access the controllers. --passwd The password required to access the controllers. For hpxp, specify the following: --instanceL The local instance (a number from 0 to 127). --instanceR The remote instance (a number from 0 to 127).