HP StorageWorks HP Scalable NAS File Serving Software command reference guide HP Scalable NAS 3.
Legal and notice information © Copyright 2004, 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

About this guide ............................................................ 7
    Intended audience
    HP technical support
    Subscription service
    HP websites
View membership partitions and their status
    Active and inactive membership partitions
    Export configuration changes
    mprepair options
mx – cluster command-line interface
psfssuspend – suspend a PSFS filesystem
psfsunpack – unpack a PSFS filesystem image
psvctl – manage dynamic volumes
quota – report quota information for a user
repquota – report quota information for a filesystem
Group servers .............................................................. 87
mx syntax .................................................................. 88
Class syntax ............................................................... 89
mx account – account management commands ................................... 90
mx alert – cluster alert commands
About this guide This guide provides information about commands and utilities provided with the HP Scalable NAS, FS Option for Linux, and MxDB-Oracle-HiAv software. Intended audience This guide is intended for system administrators managing HP Scalable NAS clusters. HP technical support For worldwide technical support information, see the HP support website: http://www.hp.
HP websites For additional information, see the following HP websites: • http://www.hp.com • http://www.hp.com/go/scalablenas • http://www.hp.com/go/storage • http://www.hp.com/service_locator • http://www.hp.com/support/manuals Documentation feedback HP welcomes your feedback. To make comments and suggestions about product documentation, please send a message to storagedocsFeedback@hp.com. All submissions become the property of HP.
1 Functional cross reference The following sections list commands that are useful for administrative and diagnostic purposes.
Function / Command

Filesystem
    Create a filesystem: mkpsfs, mx fs create
    Destroy a filesystem: destroypsfs
    Label a filesystem: psfslabel
    List volumes available for filesystem: mx fs showcreateopt
    Mount a filesystem: mx fs mount
    Recreate a filesystem: mx fs recreate
    Report filesystem information: psfsinfo
    Report volume information: sandiskinfo
    Resume a suspended filesystem: psfsresume
    Resize a filesystem: resizepsfs
    Suspend a filesystem for backups: psfssuspend
    Unmount a filesystem: mx fs unmount

Membership partitions
    Create or repair: mprepair, mx config mp
    Configure: mx config mp
    Restore membership partition data: mpimport
    Save membership partition data: mpdump

Monitors
    Device monitor, manage: mx device
    Service monitor, manage: mx service

MxDB-Oracle-HiAv
    Manage MxDB-Oracle-HiAv operations: mxdb

NFS
    Enable or disable NLM: mxnlmconfig
    Export groups, manage: mx exportgroup
    Save FS Option for Linux configuration: mx exportgroup dump
    Virtual NFS Service, manage: mx vnfs

Replication
    Manage replication components: rplmonitor
    Manage the replication state: rplcontrol
    Report replication status: rplstatus

Role-Based Security
    Manage OS accounts belonging to management roles: mx account
    Manage roles for cluster operations: mx role

SAN
    Disks, display information: sandiskinfo
    Disks, import into cluster: mx disk import
    Disks, remove from cluster: mx disk deport
    Disks, show status: mx disk status
    Dynamic volumes, display information: sandiskinfo
    Dynamic volumes, manage
    Verify fencing configuration: mxfence

Snapshots
    Create snapshot: mx snapshot create
    Configure snapshot methods: mx config snapshot
    Destroy snapshot: mx snapshot destroy
    Snapshot options, display: mx snapshot showcreateopt

Users
    Accounts, manage role assignments: mx account
    Quotas, manage: edquota, mx quota
    Quotas, report for users: quota
    Roles, assign to accounts: mx role

Virtual hosts
    Manage virtual hosts: mx vhost

Volumes
    Back up a dynamic volume: mx dynvolume export
    Manage dynamic volumes
Diagnostic commands

Function / Command

Cluster
    Alert messages, display: mx matrix alert status
    Cluster requirements, verify on server: mxcheck
    Restore configuration from dump file: mx --file

Fencing
    Mark server that cannot be fenced as “down”: mx server markdown
    Test server-based fencing: wmtest
    Unfence ports on FC switches: PSANcfg
    Verify fencing configuration: mxfence

Filesystem
    Check and repair a filesystem: psfsck
    Restore quota data: psfsrq

Log files
    Collect logs for Technical Support: mxcollect

NFS
    Enable or disable NLM: mxnlmconfig

Replication
    Check replication status: rplstatus
    Convert binary log files into a readable format: rpl_create_hr
    Convert the history file into a readable format: rplctldump
    Manage the replication state: rplcontrol

SAN
    FC logins, display: PSANinfo
    FC switch, unfence ports: PSANcfg
    SAN disk information, display: sandiskinfo
    SAN ownership locks, display: mxsanlk
    Server access to SAN, check: mxsancheck

Servers
    Mark server as down: mx server markdown

Volumes
    Recover a dynamic volume: mx dynvolume import
2 Cluster commands HP Scalable NAS File Serving Software includes several commands that can be helpful for administrators managing an HP Scalable NAS cluster. Other commands provide diagnostic information and should be used only under the direction of HP personnel. Certain other commands are used internally and should not be run directly.
dlmdebug – debug DLM problems Synopsis /opt/hpcfs/tools/dlmdebug Description This utility should be run only at the request of HP personnel. edquota – edit user and group quotas Synopsis /opt/hpcfs/sbin/edquota Description This command is based on the Linux edquota command but has been modified to work with PSFS filesystems as well as the standard Linux filesystem types. The command is provided on the HP Scalable NAS quota tools RPM. There are no changes to the syntax or operation of the command.
Description This command should be run only at the request of HP personnel. gcstat – print grpcommd statistics Synopsis /opt/hpcfs/tools/gcstat Description This command should be run only at the request of HP personnel. get_fenceidentity – get fencing information Synopsis /opt/hpcfs/sbin/get_fenceidentity Description This utility retrieves the fence identification information for the system on which it is run. The utility is used internally during cluster configuration and should not be run manually.
Description This command should be run only at the request of HP personnel. log_collect – obtain log files Synopsis /opt/hpcfs/tools/log_collect Description This command is used internally by the mxcollect utility and should not be run directly. mcs – manipulate the cluster log This utility provides several commands that are used internally by HP Scalable NAS; however, the following commands may be useful when administering a cluster.
mcs select – display events from the cluster event log Synopsis /opt/hpcfs/tools/mcs select [-b] [-c] [-h [<count>] [--count]] [-t [<count>]] [<field> ...] [with <filter>] Description This command can be used to display events from the cluster event log on the local server. The options are: -b Display the output in XML format. -c Do not display column headings in the output. -h [<count>] Display the specified number of events, starting at the beginning of the log.
categoryid The ID assigned to a category. eventid The ID assigned to the event. eventtime The time at which the event occurred on the generating node. location The IP address of the node where the event occurred. message The text provided with the logged event. postedtime The time the event was stored on the local node. processid The process ID of the process logging the event. severity The severity level such as Alert or Critical. source The component that generated the message.
Event fields. The filter event fields are: postedtime The time the event was stored on the local node. The time must be specified as YYYY-MM-DDTHH:NN:SS. The year (YYYY) is the only required element. The month (MM), day (DD), hour (HH), minute (NN) and second (SS) must be two digits in length and can include a leading zero (for example, 2007-11-12T08:01:59). If a time is specified, the month and day must also be specified. If a time or date element is not specified, it is assumed to be zero.
> Test if a filter event field is greater than the specified value. Syntactical elements. Expressions can be enclosed in parentheses “( )” and can contain AND and OR operations, which use the syntax && and || respectively. AND and OR operations can be used only to connect filter event subtypes, filter event fields, and parenthesized statements. Logical negation is also allowed using the ! character.
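As an illustrative sketch (not part of the product), the postedtime format rules above can be expressed as a small validator. The function name and the tuple return form are assumptions, and this sketch accepts a time only when it is complete and the month and day are present:

```python
import re

# Sketch of the postedtime rules: the year is required; the month, day,
# and a complete HH:NN:SS time are optional two-digit elements; any
# unspecified element is assumed to be zero.
def normalize_postedtime(value):
    m = re.fullmatch(
        r"(\d{4})(?:-(\d{2})(?:-(\d{2})(?:T(\d{2}):(\d{2}):(\d{2}))?)?)?",
        value,
    )
    if m is None:
        raise ValueError("not a valid postedtime: %r" % value)
    return tuple(int(g) if g is not None else 0 for g in m.groups())
```

For example, normalize_postedtime("2007") yields (2007, 0, 0, 0, 0, 0), while a time without a month and day is rejected.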
Description The mkpsfs command creates a PSFS filesystem on the specified device, which must be imported into the cluster. <device> is a psd or psv device and is specified as follows: • For a psd device partition, the device is specified as /dev/psd/psdXXXpYY, where XXX is the drive number and YY is the partition number. As an example, /dev/psd/psd6p4 specifies partition 4 on disk psd6. • For a non-partitioned psd device, the device is specified as /dev/psd/psdXXX, where XXX is the drive number.
Block Size    Maximum Filesystem Size
4KB           16TB
8KB           32TB
16KB          64TB
32KB          128TB

disable-fzbm Create the filesystem without Full Zone Bit Maps (FZBMs). The FZBM on-disk filesystem format reduces the amount of data that the filesystem needs to read when allocating a block. It is particularly useful for speeding up allocation times on large, relatively full filesystems. enable-quotas Enables quotas on the filesystem.
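The block-size table above is consistent with a 32-bit block index (maximum filesystem size = block size × 2^32 addressable blocks). This arithmetic sketch reproduces the table; the 32-bit-index interpretation is an inference from the numbers, not a documented implementation detail:

```python
# Reproduce the block-size table, assuming a 32-bit block index.
KIB, TIB = 2 ** 10, 2 ** 40

def max_fs_size_tib(block_size_kib):
    # maximum filesystem size = block size * 2**32 addressable blocks
    return block_size_kib * KIB * 2 ** 32 // TIB

for bs in (4, 8, 16, 32):
    print("%dKB -> %dTB" % (bs, max_fs_size_tib(bs)))
```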
mpdump – save membership partition data Synopsis
/opt/hpcfs/lib/mpdump
/opt/hpcfs/lib/mpdump [-v] -F -X
/opt/hpcfs/lib/mpdump [-v] -f <file> -x <file>
Description The mpdump utility backs up the membership partition data to a file and/or the screen. The command can also be used to back up the mxds datastore. When mpdump is run with no options, the data is output to the screen. The options are: -F Send the data to the default membership partition backup file, /var/opt/hpcfs/run/MP.backup.
mpimport – restore membership partition data Synopsis /opt/hpcfs/lib/mpimport Description The mpimport utility can be used to import disks or dynamic volumes into an existing SCL database. (Either -F or -f is required to import a dynamic volume.) The utility can also be used to deport disks or dynamic volumes from the SCL database, to replace a specific UID with a different UID, and to restore the mxds datastore on the membership partitions.
mpimport -p <psdname> --local <diskname> Import the disk indicated by the specified local diskname and assign psdname to it. mpimport [-s] [-M] -F [<psdname> | <psvname> ...] Import the specified psd or psv devices. If no devices are specified, import all disks and dynamic volumes listed in the default mpdump backup file. If -s is specified, “strict” importing is done; only those disks and dynamic volumes that can be imported using the psdname indicated in inputfile will be imported.
mprepair – repair membership partitions Synopsis /opt/hpcfs/lib/mprepair Description The mprepair utility can be used to repair any problems if a failure causes servers to have inconsistent views of the membership partitions. NOTE: HP Scalable NAS cannot be running when you use mprepair. To stop the cluster, issue the command # /etc/init.d/pmxs stop on each node. Membership partition file Each server in the cluster has a membership partition file, which is called the “local MP list.” Each
disk containing a membership partition also has its own list of the membership partitions. Under normal operations, these lists should all match. The output from --get_current_mps contains a record for each membership partition. Following is a sample record.

20:00:00:04:cf:13:33:12::0/1 OK 8001Kb active

The first field contains the disk UUID followed by a slash and the partition number (partition 1 in the above example). The second field reports the status of the membership partition.
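As a hypothetical helper (the function and dictionary keys are illustrative, not part of the product), the record format above can be parsed like this:

```python
# Parse a membership partition record of the form shown above:
#   <disk-UUID>/<partition> <status> <size> <active|inactive>
def parse_mp_record(line):
    device, status, size, state = line.split()
    uuid, _, partition = device.rpartition("/")
    return {"uuid": uuid, "partition": int(partition),
            "status": status, "size": size, "state": state}
```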
Active and inactive membership partitions A membership partition can be either active or inactive. (This attribute is reported in the last field of the record displayed by the mprepair --get_current_mps command.) The current membership partitions should all be active. If there are old membership partitions in the cluster, you may want to either remove them or mark them as inactive.
mprepair --display_mplists The output shows the local membership partition list on the server where you are running mprepair. It then compares this list with the lists located on the disks containing the membership partitions. The output also includes the device database records for the disks containing the membership partitions. Following is an example.
<uuid> is the UUID for the device and <partition> is the number of the partition on the device. NOTE: If you resilver from a partition that has a status of RESILVER, the operation may initialize partitions that are not currently membership partitions; any existing data on those partitions will be overwritten. Use the --display_mplists option to see the membership partition lists for the current membership partitions.
unlikely; however, if HP Scalable NAS cannot be started on any server in the cluster, you can use the following command to determine whether all membership partitions have a valid Cluster-ID. mprepair --sync-clusterids The command displays the Cluster-IDs found in each membership partition and flags those partitions containing an invalid ID. You can then specify whether you want the command to repair the partitions having a mismatched Cluster-ID.
The options are: -t <file> and/or -h <file> Place the output in a text or HTML file. A “-” implies standard output. -l Log output to /var/opt/hpcfs/mxcheck. (Does indexing and suppresses the default text report.) -i Regenerate index.html in /var/opt/hpcfs/mxcheck. (Suppresses default sequence execution and report.) -r Remove all but the last 40 logged reports. (Does indexing, suppresses default text report. Without -l, suppresses default sequence.) -p Prompt for user input when necessary.
Run mxcollect You will need to run mxcollect on each node. The command is in the following directory and can be run from that location: /opt/hpcfs/lib/apache/cgi-bin/pmxs/mxcollect You can also run the command from the HP Scalable NAS web server. Start the web server as follows and specify your authentication credentials if asked for them. https://<server>:6771/cgi-bin/pmxs On the web server, click mxcollect to run the utility.
NOTE: The version of mxcollect provided in earlier Matrix Server/HP Clustered File System releases is still available in the /opt/hpcfs/tools directory. This version is deprecated and does not collect all of the information gathered by the new version of mxcollect.
mxdb – perform MxDB-Oracle-HiAv operations Synopsis /opt/hpcfs/mxdb_oracle_ha/bin/mxdb Description This command is used to invoke the MxDB-Oracle-HiAv GUI. It also includes options that enable Database Administrators to perform the same operations available in the GUI. The options are: -c Start the MxDB-Oracle-HiAv GUI. -d <virtual-oracle-service> This argument is required. It specifies the Virtual Oracle Service on which the command should be performed.
Clear events raised on a Virtual Oracle Service. Should a node crash, HP Scalable NAS will raise events so that the DBA can take action before returning the Virtual Oracle Service to the failed node. mxfence – verify fencing module configuration Synopsis /opt/hpcfs/sbin/mxfence Description The mxfence utility can be used to verify that HP Scalable NAS has the information needed to fence a server.
NOTE: The NLM locking protocol is enabled by default on HP 4000 Scalable NAS systems. It is disabled by default on the FS Option for Linux software-only product and on HP X5500 Storage Gateway for Linux/HP Scalable NAS Clustered Gateway systems. When the feature is enabled, the contents of the NFSD RPC reply cache are written out to a file when a virtualized NFS server (vhost) is removed from the node.
mxfs_upgrade_prep.sh – save MxReg registry information Synopsis /opt/polyserve/tools/mxfs_upgrade_prep.sh /opt/hpcfs/tools/mxfs_upgrade_prep.sh Description This script is used during upgrades to HP Scalable NAS 3.7.0 to save the MxReg registry information stored on each node running MxFS-Linux/FS Option for Linux 3.5.1. The saved information is later imported into the mxds datastore used by HP Scalable NAS 3.7.0. For more information, see the HP Scalable NAS File Serving Software upgrade guide.
When you invoke mxinit to start HP Scalable NAS, by default it continues running and monitors processes. If you do not want mxinit to monitor processes, invoke it with the -M (or --no-monitor) option. It will then exit after it completes the options you specified. Typically, you should use the pmxs script to start or stop HP Scalable NAS. However, if you want to see verbose output during the start or stop operation, you can run mxinit manually with the --verbose option.
Explicitly tell mxinit to monitor processes. This is the default when mxinit is invoked to start HP Scalable NAS. -M, --no-monitor Explicitly tell mxinit not to monitor processes. --hba-status Display the state of the Fibre Channel host bus adapter drivers. --status Display the status of HP Scalable NAS processes and modules. Following is an example.
Description NOTE: HP recommends that Device Mapper Multipath, the Linux-based MPIO solution, be used in HP Scalable NAS clusters. However, HP Scalable NAS supports other MPIO solutions, including the mxmpio command described here. See the HP Scalable NAS File Serving Software administration guide for more information. HP Scalable NAS uses multipath I/O (MPIO) to eliminate single points of failure. A cluster can include multiple Fibre Channel switches, multiple FC ports per server, and multiported SAN disks.
Set the active target on the specified device. mpiostat [-l] [<device> ...] List the number of transient errors for each target and show the number of failovers and fatal errors for each device. mpioload [-l] [interval [count]] [<device> ...] Show the load for each target (SCSI command I/Os) and the total for the PSD device (block layer I/Os), the number of failovers, and fatal errors for each device. iostat [-u] [interval [count]] [<device> ...] Show general I/O statistics for each device.
You can use the following command to specify either a particular HBA or a PSD device. HP Scalable NAS will then fail over the I/O to the path that includes the specified device. In the command, PSD-device is specified by the base name of the device path, such as psd2p1 (not /dev/psd/psd2p1). # mxmpio active <target> <PSD-device> target can be one of the following values: I A numerical index on the PSD device target array (0..). M,m A decimal major/minor number identifying the host adapter.
Now use the mxmpio command to change the path for psd2p1 to target 0:

# /opt/hpcfs/sbin/mxmpio active 0 psd2p1

To verify the change, run the mxmpio status -l command again. In the following output, device psd2p1 is now active on target 0.

# /opt/hpcfs/sbin/mxmpio status -l
MPIO Failover is globally enabled
         Failover   Timeout   Targets
psd1     enabled    30000     0. (41:50)  1. (08:90)
psd1p1   enabled    10000     0. (41:51)  1. (08:91)
psd1p2   enabled    30000     0. (41:52)  1. (08:92)
psd2     enabled    30000     0. (41:10)  1.
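A monitoring script could pick these lines apart as follows. This parser is a sketch based only on the sample output above; the function name is illustrative and the exact output format may vary between releases:

```python
import re

# Split one "mxmpio status -l" data line into the device name, failover
# setting, timeout in milliseconds, and the numbered target entries in
# parentheses.
def parse_status_line(line):
    device, failover, timeout, rest = line.split(None, 3)
    targets = {int(i): t
               for i, t in re.findall(r"(\d+)\.\s*\(([^)]*)\)", rest)}
    return device, failover, int(timeout), targets
```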
Set the timeout value The default timeout period for PSD devices is 30 seconds. If you need to modify this value for a particular PSD device, use the following command. value is in milliseconds; however, the smallest unit is 10 milliseconds. A value of zero disables timeouts. # mxmpio timeout <value> [<PSD-device> ...] Show number of transient errors The mpiostat command lists the number of transient errors for each target and shows the number of failovers and fatal errors for each device.
interval is the number of seconds between samplings. The default is one second. count is the number of samples to make; the default is to sample indefinitely. The information displayed for each interval includes the number of I/Os queued (total block and raw), minimum and maximum latency, count of I/Os, and average latency. The statistics are organized by I/O, with only actively used sizes shown. Latencies are in milliseconds. The minimum and maximum latency is reset every interval.
10. Number of MP failovers 11. Number of MP fatal errors 12... Per-target I/O statistics in tuples, or groups of two numbers. (The number of targets is indicated in field 4.) Each tuple consists of the following fields for each target: • SCSI I/Os queued • Transient failures Note that the “SCSI I/Os queued” numbers are for the underlying disk, not the partition. PSD devices that share the same underlying disk will share the same numbers here. 12+$4*2... I/O statistics in quads, or groups of four numbers.
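The offsets above can be made concrete with a sketch. Only field 4 (the target count) and the tuple/quad offsets are taken from the description; the sample record in the test is invented for illustration:

```python
# Split a stat record (a list of whitespace-separated fields; 1-based in
# the text, 0-based here): fixed fields 1-11, then one
# (SCSI I/Os queued, transient failures) pair per target starting at
# field 12, then four-number I/O statistics groups at 12 + 2*ntargets.
def split_stat_record(fields):
    ntargets = int(fields[3])            # field 4: number of targets
    head = fields[:11]                   # fields 1 through 11
    pos = 11                             # 0-based index of field 12
    pairs = [(int(fields[pos + 2 * i]), int(fields[pos + 2 * i + 1]))
             for i in range(ntargets)]
    rest = fields[pos + 2 * ntargets:]   # I/O statistics quads
    quads = [tuple(int(x) for x in rest[i:i + 4])
             for i in range(0, len(rest), 4)]
    return head, pairs, quads
```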
By default, the NLM locking protocol is disabled in the FS Option for Linux software-only product. If necessary, NLM can be enabled; however, you should be aware of the following caveat: • File locks granted by the NFS server are cluster-coherent. When a failover occurs, the locks are released by the original server and the client automatically reclaims them on the new server (the backup node).
mxsanconf – configure FC switches Synopsis /opt/hpcfs/sbin/mxsanconf <switch> ... Description When a cluster is configured to use fabric-based fencing, mxconfig runs the mxsanconf command on each node to configure the list of Fibre Channel switches that will be managed by HP Scalable NAS. The command creates or updates the files /etc/opt/hpcfs/psSAN.cfg and /var/opt/hpcfs/FCswitches. <switch> is either the name or IP address of a switch to be managed.
Following is some sample output. The command was issued on host 99.10.30.3. The SDMP administrator is the administrator for the cluster to which the host belongs. There are three membership partitions.

# mxsanlk
This host: 99.10.30.3
This host’s SDMP administrator: 99.10.30.
The host on which mxsanlk was run is trying to acquire the SANlock but the SDMP process responsible for the SANlock is unresponsive. locked, sdmp process hung The host on which mxsanlk was run held the SANlock but the SDMP process responsible for the SANlock is now unresponsive. lock is corrupt, will repair This transitional state occurs after the SDMP has detected that the SANlock has been corrupted but before it has repaired the SANlock.
Description This command is typically run by mxconfig and should be run manually only at the request of HP personnel. pmxs – start or stop HP Scalable NAS or view status Synopsis
/etc/init.d/pmxs start
/etc/init.d/pmxs stop
/etc/init.d/pmxs restart
/etc/init.d/pmxs status
Description HP Scalable NAS runs on each server in the cluster. When a server is booted to run-levels 3 or 5, HP Scalable NAS is started automatically by the script /etc/init.d/pmxs.
The -l command adds the specified HBA port to the list of local ports; -L removes the specified port. The mxsanconf command invokes PSANcfg with these options; they should not be run directly. -u switch ... Unfence all local ports on the specified FC switches. -c community_string Set the snmp community string. -h Print a usage message.
Port 1 : oper
Port 2 : oper
Port 3 : oper
Port 4 : oper
Port 5 : oper
Port 6 : oper
Port 7 : oper
Poll time: 0.
• For a psv device, the device is specified as /dev/psv/psvXXX, where XXX is the volume number. For example, /dev/psv/psv1. You do not need to specify the full path name. A name such as psd6p4 or psv1 will work. When psfsck is running in check mode (the default action), it will attempt to fix any corruptions that can be repaired without --rebuild-tree.
Tell psfsck to place information about any corruption it finds into the specified logfile instead of sending it to stderr. --no-modify, -n Check the filesystem in read-only mode. Prevents psfsck from replaying the journal and/or fixing any corruption. If errors are found, it is strongly recommended that you run psfsck again in check mode, without the --no-modify option, before running with the --rebuild-tree option. The --no-modify option cannot be specified in addition to --rebuild-tree or --rebuild-sb.
--set-gdq <value>[T|G|M|K] Set the default quota for groups on the specified filesystem. The modifiers are the same as for the --set-udq option. (The default is rounded down to the nearest filesystem block.) -e enable-smallfiles Enable the small files performance enhancement feature on filesystems created before the 3.7.0 release. Turning on this feature will not improve the read time of pre-3.7.0 files, but should improve the read performance of any new small files that are created on the filesystem.
The psfsdq and psfsrq commands should be run in conjunction with the standard filesystem backup utilities, as those utilities do not save the quota limits set on the filesystem. psfsinfo – report filesystem information Synopsis /opt/hpcfs/tools/psfsinfo [--feature <feature>] [--version] [--blocksize] [--verbose] <device> ... Description The psfsinfo command reports information about the filesystem.
0 – enabled 1 – not enabled 2 – could not open or read disk 3 – a bad argument was specified --version Display the version of the on-disk filesystem format. --blocksize Display only the block size used by the filesystem. --verbose, -v Enable verbose messages. psfslabel – label a PSFS filesystem Synopsis /opt/hpcfs/bin/psfslabel “<label>” <device>
• For a psv device, the device is specified as /dev/psv/psvXXX, where XXX is the volume number. For example, /dev/psv/psv1. You do not need to specify the full path name. A name such as psd6p4 or psv1 will work. The options are: --enable-quotas Build the necessary quota infrastructure on the specified filesystem. The psfsquota utility then examines the existing files and stores current allocations for each user and group owning a file on the filesystem.
When you have completed your work with the suspended filesystem, use the psfsresume utility to resume the filesystem. Issue the psfsresume command from the server where you executed psfssuspend. You must be user root. NOTE: If an attempt to mount the copied filesystem fails with an “FSID conflict” error, run the following command as user root.
Description The psfssema semaphore utility provides a simple synchronization mechanism for managing cluster-wide file locks. This utility can be used in shell scripts on different nodes of a cluster and takes advantage of the PSFS filesystem and its internode communication abilities. For example, you might want to use cluster-wide file locking in a Start or Stop script for a service or device monitor.
to use the raw device (/dev/rpsd/...) to ensure that all blocks copied are up-to-date. The filesystem is essentially unusable while it is suspended; however, applications that can tolerate extended waits for I/O do not need to be terminated. The psfsresume utility restores a suspended filesystem. The psfssuspend and psfsresume utilities affect the specified filesystem on all servers where it is mounted; however, the utilities should be executed on only one server in the cluster.
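The suspend, copy-from-raw-device, resume sequence described above might look like this in a backup script. The device name, image path, and the /opt/hpcfs/bin locations for psfssuspend and psfsresume are assumptions based on other commands in this chapter; the commands are only constructed here, not executed:

```python
# Build the command lines for a suspend -> raw copy -> resume backup.
# Run psfssuspend and psfsresume on the same server, as user root.
def backup_commands(device_base, image_path):
    return [
        ["/opt/hpcfs/bin/psfssuspend", "/dev/psd/" + device_base],
        # copy from the raw device so all copied blocks are up to date
        ["dd", "if=/dev/rpsd/" + device_base,
         "of=" + image_path, "bs=1M"],
        ["/opt/hpcfs/bin/psfsresume", "/dev/psd/" + device_base],
    ]
```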
Description This command is based on the Linux quota command but has been modified to work with PSFS filesystems as well as the standard Linux filesystem types. The command is provided on the HP Scalable NAS quota tools RPM. There are no changes to the syntax or operation of the command. See the Linux quota man page for details about using the command.
You do not need to specify a full path name. A name such as psd6p4 or psv1 will work. This program does not change the size of the partition containing the filesystem. Instead, you will need to use a utility specific to your RAID subsystem to modify the size of the partition. You will need to deport the disk containing the filesystem before you modify the partitions. Be sure to back up your data before using this program. The options are: -q Do not print anything but error messages.
The options are: -i Import the specified configuration file into the mxds datastore. -e Export the configuration file from the mxds datastore. If the configuration file does not exist, it will be created (you can call the file whatever you want). If you specify a path such as /tmp/rplconfig, the file will be written to that location. If only a filename is specified, the file will be written to the current directory.
individual nodes. For example, replication can be running on the cluster but not be running on a particular node. Use the rplstatus command to view the replication state. The states are:

State   Description
1       Configured. A replication configuration file has been imported and replication can be started.
2       Running. The replication program rplmonitor is running.
3       Waiting.
Kill worker threads that do replication work. Replication activity will stop, but rplmonitor remains alive. The replication state must be 2 (running) or 3 (waiting). This option changes the replication state to 4 (stopped). -c 3 | -s configchange Notify that the replication configuration has changed. This command is sent automatically as needed and does not need to be invoked manually. -c 4 | -s exit Force rplmonitor to exit, which stops worker threads if they are running.
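For scripts that act on the numeric replication state, a simple mapping can help. States 1 through 3 come from the table above and state 4 (stopped) from the rplcontrol description; the names and function are illustrative:

```python
# Numeric replication states reported by rplstatus / set by rplcontrol.
REPLICATION_STATES = {
    1: "configured",  # configuration imported; replication can start
    2: "running",     # rplmonitor is running
    3: "waiting",
    4: "stopped",     # worker threads stopped
}

def describe_state(code):
    return REPLICATION_STATES.get(code, "unknown")
```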
rplctldump – convert the history file Synopsis /opt/hpcfs/bin/rplctldump [-o <output_file>] history_file_path Description rplmonitor generates a binary replication history file that records information about the most recent replication intervals. The rplctldump command converts the history file into a readable format. The -o option specifies the name of the file that will contain the converted output. If this option is not specified, rplctldump will generate a name for the output file.
-D Reset the keys to the shipped default. -i path The path to a private key created by the user (such as /tmp/id_dsa). The matching public key is assumed to be in the same location as the private key, but with a .pub suffix (such as /tmp/id_dsa.pub). -v Create a host key for a virtual host on the destination cluster. This option should be run only on the nodes of the destination cluster.
rplmonitor includes debug flags that can be used to generate debug messages. These flags are intended for diagnostic purposes only and should not be used under normal conditions. The flags are: -v or --verbose Print debug messages on the console. -s or --verbosesentinel Print sentinel-related debug messages on the console. -w or --verbosewatch Pass the verbose flag to rplwatch threads spawned by rplmonitor. -t or --verbosetransport Pass the verbose flag to the transport process created by rplwatch.
/opt/hpcfs/bin/rplstatus
Cluster replication status
    2 (running)
Node replication status
    99.30.31.3: 2 (running)
    99.30.31.4: 2 (running)
    99.30.31.5: 2 (running)
    99.30.31.6: 2 (running)
Replication sentinel
    preferred sentinel=99.30.31.6
    current sentinel=99.30.31.3

If replication is not running, the output will state that the current sentinel is not defined. The command provides the following options for use in scripts. Only one option can be specified at a time.
rplwatch – watch PSFS filesystems and directories in the replication set Synopsis /opt/hpcfs/bin/rplwatch Description This process watches the PSFS filesystems and directories that are included in the replication set specified in the configuration file. (There is a separate instance of the process for each directory.)
The options are: -i Display information for imported disks (the default). -u Display information for unimported disks. -v Display available volumes. -f Display PSFS filesystem volumes. -a Display all information; for -v, display all known volumes. -l Additionally display host-local device name. -r Additionally display local device route information. -U Display output in the format used by the Management Console. This option is used internally by HP Scalable NAS and does not produce human-readable output.
Show local device information The -l option displays the local device name for each disk, as well as the default disk information. When combined with -u, it displays local device names for unimported disks.
Show available subdevices The --subdevices option lists subdevices that are available for use in constructing a dynamic volume.
Lists dynamic volumes that are currently unimported. --importable-volumes Lists unimported dynamic volumes that can be imported into the cluster. --unimportable-volumes Lists unimported dynamic volumes that cannot be imported into the cluster. setquota – set quotas Synopsis /opt/hpcfs/sbin/setquota Description This command is based on the Linux setquota command but has been modified to work with PSFS filesystems as well as the standard Linux filesystem types.
spctl – dump the SanPulse trace buffer Synopsis /opt/hpcfs/tools/spctl -l Description This command should be run only at the request of HP personnel. spdebug – obtain SanPulse debug information Synopsis /opt/hpcfs/tools/spdebug Description This command should be run only at the request of HP personnel. spstat – show cluster state information Synopsis /opt/hpcfs/tools/spstat Description This command should be run only at the request of HP personnel.
wmtest – test server-based fencing Synopsis /opt/hpcfs/tools/wmtest [verbose] Description This command can be run to verify that the server management interface works with HP Scalable NAS server-based fencing. The command tests only the fencing agents. Valid brands: demo, dell, hp, ibm, ipmi Valid blades: 1-14; 0 for non-BladeCenter Valid commands: status, on, off Do not run this command from the server that you are trying to power on, power off, or reset.
Cluster commands
3 mx commands The mx utility provides a command-line interface for administering a cluster and monitoring its operation. The matrixrc file HP Scalable NAS can use an optional, external configuration file named .matrixrc to provide authentication information for cluster connections. If the file is configured, it will be used when you connect to a cluster through either the HP Scalable NAS Connect window or the mx command.
• The fourth field, default, specifies that this server will be connected to by default if a server name is not specified on the command line. Specifying a default server is optional. Blank lines and lines beginning with a # character are ignored. Notes regarding the .matrixrc file When working with the .matrixrc file, you should be aware of the following: • When editing the .matrixrc file by hand, you need to put quotation marks around user names or passwords that contain spaces.
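For instance, a .matrixrc entry whose password contains a space must be quoted. A minimal sketch (the server name and credentials below are made up for illustration, and the file is written to the current directory rather than its real location):

```shell
# Hypothetical .matrixrc entry with a password containing a space.
# The password must be enclosed in quotation marks.
cat > .matrixrc <<'EOF'
srv9 root "pass phrase" default
EOF

# Restrict access, since the file holds a password in clear text.
chmod 600 .matrixrc
cat .matrixrc
```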
To connect to a different server, include the --matrix option and specify the server name on the command line. For example, the following command connects to server acme1 as user admin using the password secret1:

mx --matrix acme1 server status

Use wildcards You can use wildcards in the .matrixrc file to match machine names:

srv* root secret1
srv3 root secret1 default

In the following command, --matrix srv8 matches the wildcard.
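The wildcard behaves like a shell glob, so the match can be sketched locally (illustrative only; this is not part of the mx tool):

```shell
# Check whether a server name matches the srv* wildcard entry,
# the way a shell glob pattern would.
name=srv8
case "$name" in
    srv*) matched=yes ;;
    *)    matched=no ;;
esac
echo "$matched"   # prints: yes
```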
try the other servers in the list. If “default” is omitted, the mx command will attempt to connect to the servers in the order that they are specified in the list.

# production cluster
prod root secret1 default {
    srv1
    srv2
    srv3 root secret1 default
    srv4 root secret2
}

mx syntax The mx utility has the following syntax:

mx [mx_options] class command [command_options]

The mx_options affect an entire mx command session. The options are: --help Displays a command summary.
--user Specifies the user to be logged in. --password Specifies the user’s password. Class syntax The mx utility can manipulate the following classes of cluster objects. Specify --help to see a short command synopsis for each class.
Class      Cluster Object
server     Server
service    Service monitor
snapshot   Snapshot
vhost      Virtual host
vnfs       Virtual NFS Service

To specify a command affecting a class, use this syntax: mx class command [command_options] For example, the following command displays the status of servers that are currently up: mx server status --up mx account – account management commands Use the following mx account commands to manage user and group accounts that belong to management roles.
--type Whether the account is for a user or group, or is unknown. The default is GROUP. listroles—List the role memberships of an account mx account listroles [--form ] [--type ] [--effective] [--noHeaders] [--csv] [--showBorder] [] This command lists the roles to which the account belongs. To show information for the current user, omit the account parameter.
mx alert – cluster alert commands Use the following command to view HP Scalable NAS alerts. status—Display all outstanding alerts mx alert status [--severity ] [--noHeaders] [--csv] [--showborder] The options are: [--severity ] Filters the alerts according to the specified alert level. The levels are: INFO, WARNING, ERROR, and CRITICAL. If you specify more than one alert level, use commas to separate the levels. [--noHeaders] Do not display column headers in the output.
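The same severity filtering can be reproduced on saved output. In this sketch, the CSV sample is hypothetical (the real column layout of mx alert status --csv may differ):

```shell
# Hypothetical saved output of: mx alert status --csv
# (columns assumed: severity,server,message)
cat > alerts.csv <<'EOF'
ERROR,srv2,fencing test failed
INFO,srv1,service started
WARNING,srv3,filesystem nearly full
EOF

# Keep only ERROR and WARNING rows, mimicking --severity ERROR,WARNING.
awk -F, '$1 == "ERROR" || $1 == "WARNING"' alerts.csv
```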
rename—Rename an application mx application rename status—Show status for an application mx application status [--severity OK|WARNING|ERROR] [ ...] mx config – cluster configuration commands Use the following commands to configure the cluster. NOTE: If you are performing the initial configuration of a cluster, the mx config, mx config mp, and mx config snapshot commands must be entered in a specific order.
Command      Description
santype      Set the SAN storage type
secret       Set the cluster secret
license      Set the license key
testfencing  Test the fencing configuration
webfencing   Configure the web-based fencing module

check—Check the cluster configuration mx config check This command reports whether cluster components are configured or unconfigured. It does not verify that components are configured correctly. To see the state of each component, use commands such as mx config list and mx config mp list.
The default SNMP community string for HP Scalable NAS is private. If you want to use a custom community string, use the --community option to enter the appropriate value. The SNMP community string must be set to the same value on HP Scalable NAS and on the SAN switches. import—Import the cluster configuration mx config import This command can be used when the cluster is either online or offline.
The current administrative traffic protocol (either Multicast or Unicast). Multicast is the default. [--santype] The current storage type (either Fibre Channel or iSCSI). [--servers] The servers currently in the cluster. [--snapshots] The snapshots currently in the cluster. [--status] The current status of the cluster (STARTING, RUNNING, STOPPING, or STOPPED). [--switches] The Fibre Channel switches currently configured in the cluster.
This command can be used only when the cluster is offline. Specify Multicast or Unicast as appropriate. santest—Test the switch configuration mx config santest --santype [fc|iscsi] ... This command can be used only when the cluster is offline. is either fc for Fibre Channel or iscsi for iSCSI. santype—Set the SAN storage type mx config santype [fc|iscsi] This command can be used only when the cluster is offline.
--hostsuffix The common suffix to append to each server name to determine the associated Remote Management Controller name. For example, if your server names are server1 and server2 and their Remote Management Controllers are server1-iLO and server2-iLO, specify -iLO as the suffix. --ipdelta The delta to add to each server’s IP address to determine the IP addresses of the associated Remote Management Controllers. For example, if your servers are 1.255.200.12 and 1.255.200.
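The --ipdelta arithmetic can be sketched as follows (the addresses and delta are example values, and the assumption that the delta applies to the final octet is mine):

```shell
# Compute a Remote Management Controller address from a server address
# and an --ipdelta value (assumption: the delta is added to the last octet).
server_ip="1.255.200.12"
ipdelta=10

rmc_ip="${server_ip%.*}.$(( ${server_ip##*.} + ipdelta ))"
echo "$rmc_ip"   # prints: 1.255.200.22
```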
NOTE: The administrative filesystem is preconfigured on HP StorageWorks 4400 Scalable NAS systems. IMPORTANT: Do not use psfssuspend to suspend the administrative filesystem. Doing this can cause issues with the cluster. Determine the size for the administrative filesystem The minimum size for the administrative filesystem is 10GB; however, HP recommends that the filesystem be at least 50GB. If the replication feature is used, you may need to create a larger filesystem.
Create or extend the administrative filesystem The administrative filesystem must be created manually. Before creating the filesystem, obtain the UUIDs and partition numbers for the volumes or disks that will be used for the filesystem. The disk or disks used for the administrative filesystem do not need to be imported into the cluster and can be either partitioned or unpartitioned. The sandiskinfo -a command lists the disk UUIDs and partitions for all disks imported into the cluster.
Allow disks that contain existing volume information to be reused. (The existing data is destroyed.) The command creates a dynamic volume containing one or more subdevices. You can extend the volume later with additional subdevices if needed. Striping is not configured on the volume. The administrative filesystem is mounted at /_adminfs on all nodes existing in the cluster at the time the filesystem is created. The filesystem appears on the Management Console in the same manner as other PSFS filesystems.
Command Description dump Dump the membership partition configuration list List the current membership partitions in a running or stopped cluster list_avail_disks List disks that can be used for membership partitions list_avail_partitions List partitions that can be used for membership partitions repair Repair a membership partition set Add or replace membership partitions In general, these commands can be used when HP Scalable NAS is online or offline.
Returns the size of the LUNs containing the membership partitions. Without this option, the output reports the size of the membership partitions (the “used” size). The “used” size should be the same for all three membership partitions, but the physical size can vary and will always be the same as or larger than the “used” size. [--noHeaders] Do not display column headers in the output. [--csv] Display the output in comma-separated value format. [--showborder] Display borders in the output.
This command can be used to resilver a corrupt membership partition. is the disk UUID of the membership partition to be repaired. The --reuse option allows disks that contain existing volume information to be reused. (The existing data is destroyed.) The --reuse option is available only when the cluster is offline.
add—add a new snapshot method configuration mx config snapshot add --method [--options ] The options are: --method The supported types are hpmsa2000, hpxp, hpeva, and engenio. [--options ] For hpmsa2000, specify the following: --controllerA The IP address of controller A. --controllerB The IP address of controller B. --username The user name required to access the controllers.
The password for the storage array controller. The mx config snapshot showtype command also lists the options available for your snapshot method. delete—delete a snapshot method configuration mx config snapshot delete --method The options are: --method The supported types are hpmsa2000, hpxp, hpeva, and engenio. For hpeva, use the --hostname option to specify the hostname for the management appliance.
For hpmsa2000 and engenio, use the --controllerA option to specify the hostname or IP address for controllerA on the storage array. For hpeva, use the --hostname option to specify the hostname for the management appliance. For hpxp, use either the --instanceL or --instanceR option. mx device – device monitor commands Use the following commands to configure device monitors or to display their status.
The maximum amount of time to wait for a probe of the device to complete. For DISK and SHARED_FILESYSTEM device monitors, the default is five seconds. For CUSTOM device monitors, the default is 60 seconds. [--frequency ] The interval at which the monitor probes the device. For DISK and SHARED_FILESYSTEM device monitors, the default is 30 seconds. For CUSTOM device monitors, the default is 60 seconds. [--probeSeverity nofailover|autorecover|noautorecover] The failover behavior for the monitor.
The amount of time to wait for the Recovery script to complete. [--startScript