HP StorageWorks Clustered File System 3.6
Legal and notice information © Copyright 1999-2008 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 HP Technical Support
   HP Storage Website ..... 1
   HP NAS Services Website ..... 2
2 Quick Start Checklist
   Cluster Configuration Steps ..... 3
3 Introduction to HP Clustered File System
   Product Features
Other Considerations
Tested Configuration Limits
Volume and Filesystem Limits
Cluster Management Applications
User Authentication
   Authentication Considerations
Enable a Server
Change the IP Address for a Server
Move a Server to Another FibreChannel Port
Move a Server to Another Cluster
HP Clustered File System License File
   Upgrade the License File
Disk Information ..... 72
Options for Dynamic Volumes ..... 75
8 Configure Dynamic Volumes
   Overview
   Basic and Dynamic Volumes
   Types of Dynamic Volumes
View Drive Letter or Path Assignments
Remove Drive Letter or Path Assignments
Set Permissions for Filesystems Accessed via Mount Points
8.3 Short File Names and Name Tunneling
   Using the Extended Character Set
   Determine Status of 8.3 SFN on a Filesystem
Alternate Data Streams (ADS)
12 Configure Security Features
   Role-Based Security
   Add a New Role
   Allow or Deny Rights
   Assign Accounts to a Role
   View Effective Rights
Script Requirements ..... 170
Script Variables ..... 170
14 Cluster Operations on the Applications Tab
   Applications Overview
   Create Applications
   The Applications Tab
Network Issues
Processes Are Not Running
Graphs Are Not Displayed for a Node
Filesystems Are Not Displayed on the Dashboard
Missing or Incomplete Volume Objects in Windows Perfmon
   Reset the Perfmon Configuration
Types of Device Monitors
Device Monitors and Failover
Device Monitor Activeness Policy
Add or Modify a Device Monitor
Advanced Settings for Device Monitors
   Probe Severity
Server Cannot Be Fenced
Server Cannot Be Located
Storage
   Online Insertion of New Storage
   Recommendations for Storage Capacity Upgrades
Online Replacement of a FibreChannel Switch
1 HP Technical Support

Telephone numbers for worldwide technical support are listed on the following HP website: http://www.hp.com/support. From this website, select the country of origin. For example, the North American technical support number is 800-633-3600.

NOTE: For continuous quality improvement, calls may be recorded or monitored.
HP NAS Services Website

The HP NAS Services site allows you to choose from convenient HP Care Pack Services packages or implement a custom support solution delivered by HP ProLiant Storage Server specialists and/or our certified service partners. For more information, see us at http://www.hp.com/hps/storage/ns_nas.html.
2 Quick Start Checklist

The following checklist is intended for new installations of HP Clustered File System and includes typical steps to configure the cluster.

Cluster Configuration Steps

The following checklist assumes that the installation and configuration steps described in the HP StorageWorks Clustered File System Setup Guide have been completed.

Review administrative considerations and restrictions.
Create dynamic volumes. Dynamic volumes can include multiple disks and are used for PSFS filesystems. See “Create a Dynamic Volume” on page 80.

Create PSFS filesystems. Select the dynamic volume to be used for the filesystem and configure the appropriate options such as block size and disk quotas. See “Create a Filesystem” on page 101.
Prepare for cluster security

Create administrative roles (optional). Create roles that allow or deny permission to perform cluster operations and assign users and groups to the roles. See “Role-Based Security” on page 142.

Review the audit log feature. HP Clustered File System provides an audit trail of operations that change the configuration or state of the cluster. See “HP Clustered File System Audit Trail” on page 154.
Configure application monitoring as necessary

Configure virtual hosts. Virtual hosts provide failover protection for servers and network services. If you will be monitoring other applications, create virtual hosts as necessary. See “Add or Modify a Virtual Host” on page 207.

Configure service monitors. HP Clustered File System provides built-in service monitors such as HTTP and TCP and also allows you to create your own custom monitors.
3 Introduction to HP Clustered File System

HP StorageWorks Clustered File System provides a cluster structure for managing a group of network servers and a Storage Area Network (SAN) as a single entity.

Product Features

HP Clustered File System includes the following features:

• Fully distributed data-sharing environment. The PSFS filesystem enables all servers in the cluster to directly access shared data stored on a SAN.
• Cluster-wide administration. The HP CFS Management Console (a Java-based graphical user interface) and the corresponding command-line interface enable you to configure and manage the entire cluster either remotely or from any server in the cluster.

• Failover support for network applications.
The cluster includes these components:

Servers. Each server must be running HP Clustered File System.

Public LANs. A cluster can include up to four network interfaces per server. Each network interface can be configured to support multiple virtual hosts, which provide failover protection for Web, e-mail, file transfer, and other TCP/IP-based applications.

Administrative Network.
writes to a PSFS filesystem automatically obtain the appropriate locks from the DLM, ensuring filesystem coherency.

grpcommd. Manages HP Clustered File System group communications across the cluster.

mxds. Manages the mxds datastore.

mxlogd. Manages global error and event messages. The messages are written to the HP Clustered File System event log on each server.

PanPulse.
Volume Manager

The HP Clustered File System Volume Manager can be used to create dynamic volumes consisting of disk partitions that have been imported into the cluster. Dynamic volumes can be configured to use either concatenation or striping. A single PSFS filesystem can be placed on a dynamic volume. The Volume Manager can also be used to extend a dynamic volume and the filesystem located on that volume.
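The difference between concatenation and striping can be pictured as a logical-to-physical block mapping. The sketch below is an illustration of the two layouts only, not the Volume Manager's actual on-disk format; the subdevice sizes and stripe width are hypothetical:

```python
# Map a logical block number to (subdevice index, block on that subdevice).
# Simplified model -- real stripe sizes and volume metadata will differ.

def concat_map(lblock, sizes):
    """Concatenation: fill each subdevice completely, in order."""
    for dev, size in enumerate(sizes):
        if lblock < size:
            return (dev, lblock)
        lblock -= size
    raise ValueError("logical block beyond end of volume")

def stripe_map(lblock, ndevs, stripe_blocks):
    """Striping: rotate stripe-sized chunks across the subdevices."""
    chunk = lblock // stripe_blocks     # which stripe-sized chunk
    offset = lblock % stripe_blocks     # offset within that chunk
    dev = chunk % ndevs                 # round-robin across subdevices
    row = chunk // ndevs                # stripe row on that subdevice
    return (dev, row * stripe_blocks + offset)
```

With two 100-block subdevices, `concat_map(150, [100, 100])` lands on the second subdevice, while `stripe_map(64, 2, 64)` shows the second chunk of a striped volume moving to the second subdevice.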
HP Clustered File System Databases

HP Clustered File System uses the following databases to store cluster information:

• Shared Memory Data Store (SMDS). The SANPulse process stores filesystem status information in this database. The database consists of sp_status files that are located in %SystemDrive%\Program Files\Hewlett-Packard\HP Clustered File System\conf on each server. These files should not be changed.

• Device database.
If any of these health checks fail, HP Clustered File System can transfer the virtual host to a backup server and the network traffic will continue. After creating virtual hosts, you will need to configure your network applications to recognize them. When clients want to access a network application, they use the virtual host address instead of the address of the server where the application is running.
Event Notifier Services

HP Clustered File System provides event notifier services that can be configured to respond when certain cluster events occur. The services can send an SNMP trap to an SNMP trap forwarding target, send email to certain addresses, or run a script. Each service can be configured with the specific events that should trigger a response from the service.
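The per-service event filtering described above can be sketched as a small dispatcher. This is an illustrative model only; the class, event names, and actions are hypothetical and are not HP Clustered File System's real notifier API:

```python
# Each notifier service reacts only to the events it was configured with.

class NotifierService:
    def __init__(self, name, trigger_events, action):
        self.name = name
        self.trigger_events = set(trigger_events)
        self.action = action    # e.g. send a trap, send mail, or run a script

    def handle(self, event):
        if event in self.trigger_events:
            return self.action(event)
        return None

log = []
snmp = NotifierService("snmp", {"SERVER_DOWN"},
                       lambda e: log.append(("trap", e)))
mail = NotifierService("email", {"SERVER_DOWN", "FS_FULL"},
                       lambda e: log.append(("mail", e)))

for event in ("SERVER_DOWN", "FS_FULL"):
    for svc in (snmp, mail):
        svc.handle(event)
# SERVER_DOWN triggers both services; FS_FULL triggers only email.
```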
Single FC Port, Single FC Switch, Single Fabric

This is the simplest configuration. Each server has a single FibreChannel port connected to the FibreChannel switch. The SAN includes two RAID arrays. In this configuration, multiported SAN disks can protect against a port failure, but not a switch failure.
Single FC Port, Dual FC Switch, Single Fabric

In this example, the fabric includes two FibreChannel switches. Servers 1–3 are connected to the first FC switch; servers 4–6 are connected to the second switch. The FC switches are connected to two RAID arrays, which contain multiported disks. If a switch fails, the servers connected to the other switch will survive and access to storage will be maintained.
iSCSI Configuration

This example shows an iSCSI configuration. The Microsoft iSCSI initiator is installed on each server. Ideally, a separate network should be used for connections to the iSCSI storage arrays.
4 Cluster Administration

HP StorageWorks Clustered File System can be administered either with the HP CFS Management Console or from the command line.

Administrative Considerations and Restrictions

You should be aware of the following when managing HP Clustered File System.

Network Hostname Resolution

Normal operation of the cluster depends on a reliable network hostname resolution service. If the hostname lookup facility becomes unreliable, this can cause reliability problems for the running cluster.
• If one of these hostnames has already been referenced unsuccessfully, the DNS resolver cache may need to be flushed with “ipconfig /flushdns” (see Microsoft Knowledge Base article 320845).

• Certain Microsoft Knowledge Base articles caution that in the case of Exchange SMTP, and possibly other applications, the use of the hosts file can interfere with mail flow (see Microsoft Knowledge Base article 296215).
may result in filesystem corruption. For example, the connection must not be moved from one switch port to another, and a new FibreChannel connection for the server must not be established while HP Clustered File System is running on the server.

• If servers from multiple clusters can access the SAN via a shared FC fabric, avoid importing the same disk into more than one cluster.
• Active Directory users and groups should be used in filesystem ACLs. Do not use local users and groups because they are meaningless to other nodes in the cluster.

• HP Clustered File System nodes should not be used as domain controllers because the two services will compete for resources, resulting in decreased performance.

• The DNS servers used by Active Directory and HP Clustered File System should not reside on HP Clustered File System nodes.
Tested Configuration Limits

HP has tested HP Clustered File System configurations up to the following limits:

• 16 servers per cluster
• 256 imported LUNs per cluster for FC configurations; for iSCSI configurations, the maximum number of connections for the iSCSI initiator
• 128 filesystems per cluster on 32-bit systems; 256 filesystems on 64-bit systems
• 2048 filesystem mounts per cluster
• 128 virtual hosts per cluster
• 128 service and/or device monitors per cluster
• For both dynamic volumes and basic volumes, the maximum number of files/inodes is limited only by the available space. If zero-length files are created, the upper bound is similar to the maximum block count, which is about 2^32.

Cluster Management Applications

HP Clustered File System provides two applications to manage the cluster: the HP CFS Management Console, and mx, the corresponding command-line interface.
• If a .matrixrc file exists, the user credentials specified in the file for the selected server are used.

• If there is not a .matrixrc file or the file does not include user credentials, the credentials provided by single sign-on semantics are used.

• If single sign-on fails, the user is prompted for a user name and password.
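The credential lookup order above — a .matrixrc entry, then single sign-on, then an interactive prompt — amounts to a simple fallback chain. The sketch below models that order; the function and field names are illustrative, not the console's real internals:

```python
# Resolve credentials in the documented order:
# 1. entry in .matrixrc, 2. single sign-on, 3. prompt the user.

def resolve_credentials(server, matrixrc, sso, prompt):
    entry = matrixrc.get(server)
    if entry and "user" in entry:       # credentials stored in .matrixrc
        return entry
    creds = sso()                       # single sign-on semantics
    if creds:
        return creds
    return prompt()                     # fall back to asking the user

creds = resolve_credentials(
    "cfs1",
    {"cfs1": {"user": "admin", "password": "secret"}},
    sso=lambda: None,
    prompt=lambda: {"user": "typed", "password": "typed"},
)
```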
Start the Management Console

To start the Management Console, select Start > Programs > HP StorageWorks CFS > HP CFS Management Console. The HP Clustered File System Connect window asks for connection information and then starts the HP Management Console or opens the Configure Cluster window.

Connect to: Type a cluster or server name or select a name from the dropdown list. When you connect to a server or cluster, it is added to the dropdown list.
Authentication Parameters and Bookmarks

When you connect to a cluster, you can optionally supply a user name and password. When you click the “As User” button, the Authentication Parameters dialog asks for this information.

User: Type the name of the user who will be accessing the cluster.

Password: Type the user’s password. If you do not want to be prompted for the password again, click the “Remember this password” checkbox.
the server name, the username, and password. The password is automatically encrypted. If you do not check “Remember this password,” only the server and user names are added to the file.

NOTE: The default location for the file is %userprofile%\.matrixrc on the server.

Manage Bookmarks

The Bookmarks display lists the cluster connections that are configured in the .matrixrc file.
• Delete. If a cluster is selected, this option removes the bookmark for that cluster. If a server is selected, the option removes just that server from the bookmark.

• Rename. If a cluster is selected, this option allows you to rename that cluster. If a server is selected, you can replace that server with a different server in the cluster. After typing the new name, press Enter.

• Set Default.
server that has been bookmarked. The resulting list of bookmarks matches the list of servers in the cluster to which the connected server belongs. You can then select the cluster, or any server in the cluster, for the connection. If the connection attempt fails, HP Clustered File System will try to make the connection via another bookmarked server in that cluster.

Update an Existing .matrixrc File to Use New Features

If your .
When you invoke the HP Management Console or mx commands, by default the application checks the current software version on the server to which it is being connected and then downloads the software only if that version is not already in the local cache. If for some reason the software version running on the server cannot be identified, the applications use the latest version in the cache.
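That download decision can be summarized in a few lines. This is a hedged sketch of the described behavior, not the console's implementation; note it compares version strings naively, which is enough for illustration:

```python
# Fetch console software only when the server's version is not cached;
# if the server's version cannot be identified, use the newest cached copy.

def pick_console(server_version, cache):
    """cache maps version strings to cached software paths."""
    if server_version is None:              # version unknown
        if not cache:
            raise RuntimeError("no cached console software available")
        return cache[max(cache)]            # latest version in the cache
    if server_version in cache:
        return cache[server_version]        # already downloaded
    return "download:" + server_version    # fetch from the server
```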
Servers Tab

This tab lists the entire configuration of each server configured in the cluster, including the network interfaces on the server, any virtual hosts associated with those interfaces, any device monitors created on the server, and any PSFS filesystems mounted on the server.
Virtual Hosts Tab

The Virtual Hosts tab shows all virtual hosts in the cluster. For each virtual host, the window lists the network interfaces on which the virtual host is configured, any service monitors configured on that virtual host, and any device monitors associated with that virtual host.
Applications Tab

This view shows the application monitors configured in the cluster and provides the ability to manage and monitor them from a single screen. The tab uses a table format, with a column for each server in the cluster. The application monitors appear in the rows of the table. You can reorder the information on this tab or limit the information that is displayed.
Filesystems Tab

The Filesystems tab shows all PSFS filesystems in the cluster.
Cluster Alerts

The Alerts section at the bottom of the HP CFS Management Console window lists errors that have occurred in cluster operations. Double-click on an alert message to see all of the information about the alert. For alerts affecting cluster components such as servers or monitors, you can double-click in the Source column to highlight the source of the error on the main Management Console window.
If you receive an alert telling you to reboot a server, the message will remain in the Alerts section until either HP Clustered File System is restarted on the rebooted server or the server is removed from the cluster. To view the current Alerts from the command line, use the mx alert status command.

HP Clustered File System Operations

Many HP Clustered File System operations can be run in the background.
%SystemDrive%\Program Files\Hewlett-Packard\HP Clustered File System\lib

• An SNMP event notifier service that can send HP Clustered File System events as SNMP traps.

To enable the HP Clustered File System SNMP extension agent, the Microsoft SNMP service must be installed and configured on all nodes in the cluster. This service is included in the Microsoft Windows 2003 distribution; however, it is not automatically installed.
View Installed Software

The Installed Software window lists the operating system and HP software that are currently installed on each server in the cluster. To see this window, select Help > Installed Software. Use the following command at the Command Prompt to see the software installed on specific servers:

mx server listsoftware

If the operating system uses the 64-bit architecture, x64 will be specified in the output.
Start HP Clustered File System

To start HP Clustered File System on a particular server, use one of these methods:

• Open the Configure Cluster window (select Tools > Configure on the HP CFS Management Console or click Configure on the HP Clustered File System Connect dialog) and go to the Cluster-Wide Configuration tab. Select the server and then click Start Service. This method enables the service if it is disabled.

• Run the mx server start command.
File System Connect dialog) and go to the Cluster-Wide Configuration tab. Select the server and then click Stop Service.

• Run the mx server stop command.

• Issue the command net stop matrixserver at the Command Prompt.

• Use the Microsoft Management Console Services snap-in. (One way to access the snap-in is: Start > Control Panel > Administrative Tools > Services.) On the snap-in, stop the HP StorageWorks CFS service.
(For more information about the reserved mount points, see “Differences Between HP Clustered File System and Microsoft Utilities for Volumes and Filesystems” on page 98.)

Back Up and Restore Membership Partitions

The membership partitions contain three databases that need to be backed up:

• The device database, which contains information about imported disks.

• The volume database, which contains information about dynamic volumes.
To restore the device database and volume database to the membership partitions, use the mpimport -f command. The input file is typically conf\MP.backup. For more information about mpdump and mpimport, see the HP StorageWorks Clustered File System Command Reference.
firewall, it may be necessary to change the firewall rules to allow traffic for this port.

Port   Transport Type   Description
6771   TCP              HTTPS connection from the Management Console (fixed, IANA registration has been applied for)

Internal Network Port Numbers

The following network port numbers are used for internal, server-to-server communication.
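When diagnosing a blocked console connection, a generic TCP probe of the port can confirm whether the firewall rules allow traffic. This is ordinary Python socket code, not an HP tool; the hostname is a placeholder:

```python
# Probe a TCP port (e.g. 6771 for the console's HTTPS connection).

import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("cfs1.example.com", 6771)` returning False from a management workstation, while True from inside the cluster network, points at a firewall rule rather than the service itself.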
5 Configure Servers

Before adding a server to a cluster, verify the following:

• The server is connected to the SAN if it will be accessing PSFS filesystems.

• The server is configured as a fully networked host supporting the services to be monitored. For example, if you want HP Clustered File System to provide failover protection for your Web service, the appropriate Web server software must be installed and configured on the servers.
2. Start the HP Management Console on one node (select Start > Programs > HP StorageWorks CFS > HP CFS Management Console). On the HP Clustered File System Connect window, specify the server, click the Connect button, and select Configure. If you are prompted for the user name and password, specify the appropriate values.

3. Select the Storage Configuration tab on the Configure Cluster window. In the SAN Switches section of the tab, click the Add button to configure the new switch.

4.
a. Start the HP Management Console on the new server (select Start > Programs > HP StorageWorks CFS > HP CFS Management Console). On the HP Clustered File System Connect window, specify the server, click the Connect button, and select Configure. If you are prompted for the user name and password, specify the appropriate values.

b. When the Configure Cluster window appears, click Import. Then, on the Import window, type the IP address or DNS name of the server from which you want to import the configuration.

4.
• Use the HP Management Console to change drive letter assignments. Note that the change will take place on all nodes and may affect applications.

• Use Windows Disk Manager to change the assignments. If you are using Windows 2000 Terminal Services to make the change, you will need to log out and then log back in before you can use the reassigned drive letters.
Server Severity to determine whether it is possible to fail back virtual hosts to that server automatically. ClusterPulse also considers each virtual host’s failback policy, which specifies whether it should fail back or remain on the backup server. (See “Virtual Hosts and Failover” on page 212 for more information.)

The Server Severity can be configured on each server. The settings are:

AUTORECOVER. This is the default value for Server Severity.
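The failback decision above combines two settings: the server's Severity and the virtual host's failback policy. A minimal sketch of that conjunction follows; the policy value "AUTOFAILBACK" is a hypothetical name introduced for illustration (only AUTORECOVER appears in the text):

```python
# A virtual host fails back automatically only when the recovered server's
# Severity permits it AND the virtual host's own policy says to fail back.

def should_fail_back(server_severity, vhost_policy):
    if server_severity != "AUTORECOVER":      # server not eligible for failback
        return False
    return vhost_policy == "AUTOFAILBACK"     # policy name is hypothetical
```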
• If the server is hosting other HP Clustered File System applications, disable the server and wait for the applications to move to other servers.

Servers can be deleted on the Cluster-Wide Configuration tab on the Configure Cluster window. Select the server and then click Remove Server. To delete servers from the command line, use this command:

mx server delete ...
2. Change the IP address of server S2. We will now identify the server as S2a.

3. Start HP Clustered File System on server S2a. The server joins the cluster, which now consists of servers S1, S2, S3, and S2a. Server S2 is down and S1, S2a, and S3 are up.

4. Delete server S2 from the cluster. This step will remove references to the server.

5. Update virtual hosts and any other cluster entities that used server S2 to now include S2a.
3. Select the server in the Address column and then click Export. The Last Operation Progress column will display status messages as the configuration is exported to the server.

4. Start HP Clustered File System on the server. The server will still be selected in the Address column. Click Start Service to start HP Clustered File System. A status message will appear in the Last Operation Progress column.
Upgrade One Server and Export

This procedure requires that HP Clustered File System be stopped on all servers. Execute the procedure on one server in the cluster.

1. On one server, start the Management Console. Enter the IP address of the server on the HP Clustered File System Connect window, and click the Configure button.

NOTE: If there is a .matrixrc file on the system running mxconsole, you will see a Disconnect dialog instead of the Connection Parameters window.
Supported HP Clustered File System Features

HP Clustered File System provides device monitors, service monitors, and notifiers. The license agreement for each server determines which features are supported on that server. You can use the Display Features option on the HP CFS Management Console to determine the supported features for a particular server. Select the server on the Servers window, right-click, and select View Features.
Migrate Existing Servers to HP Clustered File System

In HP Clustered File System, the names of your servers should be different from the names of the virtual hosts they support. A virtual host can then respond regardless of the state of any one of the servers. In some cases, the name of an existing server may have been published as a network host before HP Clustered File System was configured.
HP Clustered File System provides failover protection for this configuration. Without HP Clustered File System, requests are simply alternated between the servers. If a server goes down, requests to that server do not connect. To configure for round-robin load balancing with HP Clustered File System, you define virtual hosts as addresses in the A records on the DNS. Then use HP Clustered File System to associate primary and backup servers with that virtual host.
The DNS server is configured for round robin using the following A records:

Address         Time to Live   Record Service Type   IP Address
www.acmd.com.   60             IN A                  10.1.1.1
www.acmd.com.   60             IN A                  10.1.1.2

Address: The virtual hostnames that customers use to send requests to your site. (The period following the “.com” in the address is required.)

Time to Live: The number of seconds an address can be cached by intermediate DNS servers for load balancing.
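Round-robin DNS works because the name server rotates the order of the A records on successive queries, so consecutive clients connect to different virtual host addresses. A simplified model of that rotation, using the example records above:

```python
# Minimal model of round-robin DNS: each query returns the records in a
# rotated order; clients typically use the first address in the answer.

from collections import deque

class RoundRobinDNS:
    def __init__(self, name, addresses):
        self.records = {name: deque(addresses)}

    def resolve(self, name):
        recs = self.records[name]
        answer = list(recs)      # current order for this query
        recs.rotate(-1)          # rotate for the next query
        return answer

dns = RoundRobinDNS("www.acmd.com.", ["10.1.1.1", "10.1.1.2"])
first = dns.resolve("www.acmd.com.")[0]     # one client lands on 10.1.1.1
second = dns.resolve("www.acmd.com.")[0]    # the next lands on 10.1.1.2
```

Without failover protection, a down address still appears in the rotation and those requests fail; the virtual hosts described above are what keep each advertised address answering.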
6 Configure Network Interfaces

When you add a server to the cluster, HP Clustered File System determines whether each network interface on that server meets the following conditions:

• The network interface is up and running.
• The network interface is multicast-capable.
• 802.3x Ethernet flow control is not used.
• Each network interface card (NIC) is on a separate network.

Network interfaces meeting these conditions are automatically configured into the cluster.
specify the networks that you prefer to use for the administrative traffic. For performance reasons, we recommend that these networks be isolated from the networks used by external clients to access the cluster.

When HP Clustered File System is started, the PanPulse process selects the administrative network from the available networks. When a new server joins the cluster, the PanPulse process on that server tries to use the established administrative network.
configuration file, the Servers window may not match your current network configuration exactly.) Each network interface is labeled “Hosting Enabled” or “Hosting Disabled,” which indicates whether it can be used for virtual hosts.

The Management Console uses the following icons to represent the status of each network interface.

The network interface allows administrative traffic. A green checkmark indicates the current administrative network.
When the PanPulse process locates another network that all servers in the cluster can access, all of the servers fail over the administrative network to that network. The process looks for another network in this order:

• Networks that allow administrative traffic.

• Networks that discourage administrative traffic.
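The selection order can be sketched as a preference-ranked search over the networks every server can reach. This is an illustrative model only; PanPulse's actual selection logic is internal to the product:

```python
# Pick a replacement administrative network: it must be reachable by every
# server, and "allow" networks are preferred over "discourage" networks.

def pick_admin_network(networks, servers):
    """networks: list of (name, policy, reachable_servers) tuples."""
    rank = {"allow": 0, "discourage": 1}    # "exclude" never qualifies
    candidates = [
        (rank[policy], name)
        for name, policy, reachable in networks
        if policy in rank and set(servers) <= set(reachable)
    ]
    return min(candidates)[1] if candidates else None
```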
To allow or discourage administrative traffic on a network interface, select that network interface on the Servers window, right-click, and then select either “Allow Admin. Traffic,” “Discourage Admin. Traffic,” or “Exclude Admin Traffic” as appropriate. The setting is applied to all interfaces within the same subnet on all servers of the cluster.

On the command line, issue the appropriate mx netif command:

mx netif allowadmintraffic ...
• To add a network interface, select the server for that interface on the Servers window, right-click, and select Add Network Interface.

• To modify an existing network interface, select that interface, right-click, and select Properties. The network interface must be down; you cannot modify an “up” network interface.

Server: The name or IP address of the server that will include the new network interface.

IP: Type the IP address for the network interface.
mx netif update --netmask [--adminTraffic ]
Remove a Network Interface
This option can be useful when performing off-line configuration of a server. To remove a network interface, select that interface on the Servers window, right-click, and then select Delete. You cannot delete a network interface that is up.
7 Configure the SAN
SAN configuration includes the following:
• Import SAN disks into the cluster.
• Deport SAN disks from the cluster.
• Display information about SAN disks.
Overview
SAN Configuration Requirements
Be sure that your SAN configuration meets the requirements specified in the HP StorageWorks Clustered File System Setup Guide.
Storage Control Layer Module
The Storage Control Layer (SCL) module manages shared SAN devices.
access the device. Although the identifiers (such as psd2 or psd2p6) appear on certain HP CFS Management Console windows, they are generally only needed for internal use by HP Clustered File System.
Device Identifiers and GPT Disks
When the SCL assigns device identifiers to the partitions on GPT disks, it skips the first partition because that partition cannot be used by HP Clustered File System.
The higher-numbered partitions will continue to work correctly; however, you should be aware of the following:
• A new volume cannot include subdevices having partition numbers above 31. Existing volumes cannot be extended to include the higher-numbered partitions.
• You will not be able to take a hardware snapshot of partitions with numbers above 31.
• A single Alert will be issued if any disks or volumes in the cluster contain unsupported partitions.
This issue occurs because the Windows partition table causes space to be reserved at the start of the LUN, which can cause a misalignment with the array’s storage. If your storage array is affected by this issue, the simplest way to avoid the situation is to create an unused partition at the start of the LUN and then ensure that the second partition starts on an aligned boundary.
• Disks containing an active membership partition can be imported; however, that partition cannot be used for a filesystem. Before importing the disk, you can run mprepair to inactivate the membership partition (see “The mprepair Utility” on page 274). You will then be able to use the partition when you import the disk into the cluster.
To determine the uuid for a disk, run the following command, which prints the uuid, the size, and a vendor string for each unimported SAN disk.
mx disk status
You can also use the Disk Info window to import a disk.
Deport SAN Disks
Deporting a disk removes it from cluster control. You cannot deport a disk that contains a membership partition. To deport a disk from the HP CFS Management Console, select Storage > Disk > Deport or click the Deport icon on the toolbar.
Local Disk Information
The Disk Info window displays disk information from the viewpoint of the local server. It can be used to match the disk names appearing in the Microsoft Disk Management utility (the Local Name) with the disk names that HP Clustered File System uses (the PSD Name). You can also use this window to import or deport SAN disks.
NOTE: Because the first partition on GPT disks cannot be used by HP Clustered File System, that partition is skipped when HP Clustered File System assigns device identifiers to the partitions. The first identifier, psdXp1, is assigned to partition 2, the second identifier, psdXp2, is assigned to partition 3, and so on.
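The off-by-one mapping described in the note can be expressed as a small sketch. This is purely illustrative of the numbering rule; the function name is hypothetical.

```python
def psd_identifier(disk_number, partition_number):
    """Map a GPT partition number to its psd device identifier.

    Partition 1 on a GPT disk is reserved and receives no identifier,
    so psdXp1 -> partition 2, psdXp2 -> partition 3, and so on.
    """
    if partition_number < 2:
        raise ValueError("partition 1 on a GPT disk is not assigned an identifier")
    return f"psd{disk_number}p{partition_number - 1}"

print(psd_identifier(2, 2))  # psd2p1 (first identifier -> partition 2)
print(psd_identifier(2, 7))  # psd2p6
```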
The window shows the following information for each PSFS filesystem:
• The label assigned to the filesystem.
• The mount point or drive letter assigned to the filesystem. Click in the cell to see the mount point/drive letter for each server on which the filesystem is configured.
• The volume used for the filesystem. Click in the cell to see the properties for the filesystem.
• The number of CIFS shares.
The options are:
-i Display information for imported disks (the default).
-u Display information for unimported disks.
-v Display available volumes.
-f Display PSFS filesystem volumes.
-a Display all information; for -v, display all known volumes.
-l Additionally display host-local device name.
-r Additionally display local device route information.
-U Display output in the format used by the HP Management Console.
Show Local Device Information
The -l option displays the local device name for each disk, as well as the default disk information. When combined with -u, it displays local device names for unimported disks.
sandiskinfo -al
Disk: \\.\Global\psd1 (Membership Disk)
Uid: 20:00:00:04:cf:13:38:18::0
SAN info: fcswitch5:7
Vendor: SEAGATE
Capacity: 34733M
Local Device Paths: \\.
Disk=20:00:00:04:cf:13:38:18::0 partition=08 type=(unknown)
Volume: \\.\Global\psd2p4 Size: 9220M
Disk=20:00:00:04:cf:13:38:3a::0 partition=04 type=(unknown)
When combined with -a, the -v option lists all volumes, including those used for PSFS filesystems and membership partitions.
Options for Dynamic Volumes
The following sandiskinfo options apply only to dynamic volumes.
Dynamic Volume: psv2 Size: 490M Stripe=32K/optimal
Subdevice: 20:00:00:04:cf:13:38:18::0/7 Size: 490M psd1p7
Dynamic Volume: psv3 Size: 490M Stripe=8K/optimal
Subdevice: 20:00:00:04:cf:13:38:18::0/10 Size: 490M psd1p10
Display Unimported Dynamic Volumes
The following options can be used to display information about unimported dynamic volumes:
--unimported-volumes Lists dynamic volumes that are currently unimported.
8 Configure Dynamic Volumes
HP Clustered File System includes a CFS Volume Manager that you can use to create, extend, recreate, or delete dynamic volumes, if you have purchased the separate license. Dynamic volumes allow large filesystems to span multiple disks, LUNs, or storage arrays. Dynamic volumes can be deported from the cluster and later imported back into the original cluster or into another cluster.
Overview
Basic and Dynamic Volumes
Volumes are used to store PSFS filesystems.
Types of Dynamic Volumes
HP Clustered File System supports two types of dynamic volumes: striped and concatenated. The volume type determines how data is written to the volume.
• Striping. When a dynamic volume is created with striping enabled, a specific amount of data (called the stripe size) is written to each subdevice in turn. For example, a dynamic volume could include three subdevices and a stripe size of 64 KB. The first 64 KB of data is written to the first subdevice, the next 64 KB to the second subdevice, the next 64 KB to the third subdevice, and the pattern then repeats starting again at the first subdevice.
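The round-robin layout of a striped volume can be sketched as a mapping from a logical byte offset to a subdevice and an offset within it. This is a conceptual illustration, not the CFS Volume Manager's actual on-disk arithmetic.

```python
def locate(offset, stripe_size, nsubdevs):
    """Map a logical byte offset in a striped volume to
    (subdevice index, offset within that subdevice)."""
    chunk = offset // stripe_size        # which stripe-size chunk
    subdev = chunk % nsubdevs            # round-robin across subdevices
    row = chunk // nsubdevs              # chunks already on that subdevice
    return subdev, row * stripe_size + offset % stripe_size

KB = 1024
# Three subdevices, 64 KB stripe size: bytes 0-64K land on subdevice 0,
# 64K-128K on subdevice 1, 128K-192K on subdevice 2, then back to 0.
print(locate(0, 64 * KB, 3))        # (0, 0)
print(locate(70 * KB, 64 * KB, 3))  # (1, 6144)
print(locate(200 * KB, 64 * KB, 3)) # (0, 73728)
```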
Destroying a dynamic volume removes the volume signature from each subdevice associated with the volume, freeing the subdevices for use in other dynamic volumes or filesystems.
Configuration Limits
The configuration limits for dynamic volumes are as follows:
• A maximum size of 32 TB for a dynamic volume on 64-bit operating systems.
• A maximum size of 16 TB for a dynamic volume on 32-bit operating systems.
• A maximum of 128 dynamic volumes per cluster.
Create a Dynamic Volume
When you create a dynamic volume, you will need to select the subdevices to be included in the volume. If the volume will be striped, you will also need to select a stripe size. Optionally, HP Clustered File System can also create a filesystem that will be placed on the dynamic volume.
If you are creating a filesystem, you can also set various filesystem options. Click the Options button to see the Filesystem Options dialog, which allows you to select the block size for the filesystem and to configure quotas. (See “Filesystem Options” on page 103 for details about this dialog.)
Available Subdevices: The display includes all imported subdevices that are not currently in use by another imported volume and that do not have a filesystem in place.
To create a dynamic volume from the command line, use this command. You can use either spaces or commas to separate the subdevice names.
mx dynvolume create [--stripesize <4KB-64MB>]
The following command lists the available subdevices:
mx dynvolume showcreateopt
Dynamic Volume Properties
To see the configuration for a dynamic volume, select Storage > Dynamic Volume > Volume Properties and then choose the volume that you want to view.
The Stripe State reported in the “Dynamic Volume Properties” section will be one of the following:
• Unstriped. The volume is concatenated and striping is not in effect.
• Optimal. The volume has only one stripeset that includes all subdevices. Each subdevice is written to in turn.
• Suboptimal. The volume has been extended and includes more than one stripeset. The subdevices in the first stripeset will be completely filled before writes to the next stripeset begin.
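The suboptimal fill order can be sketched as follows: allocation exhausts the first stripeset before moving to the next. This is an illustrative model under the assumption that stripesets are filled strictly in creation order.

```python
# Conceptual sketch of suboptimal allocation across stripesets:
# the first stripeset is completely filled before writes move on.
def stripeset_for_offset(offset, stripeset_sizes):
    """Return (stripeset index, offset within that stripeset).

    stripeset_sizes: capacities in bytes, in creation order.
    """
    for i, capacity in enumerate(stripeset_sizes):
        if offset < capacity:
            return i, offset
        offset -= capacity
    raise ValueError("offset beyond volume size")

MB = 1024 * 1024
# A 100 MB original stripeset later extended with a 50 MB one.
print(stripeset_for_offset(10 * MB, [100 * MB, 50 * MB]))   # (0, 10485760)
print(stripeset_for_offset(120 * MB, [100 * MB, 50 * MB]))  # (1, 20971520)
```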
The volume information appears near the end of the output, in the section labeled “Membership Partition Volume Database.” The information includes the stripe size and the subdevices making up each stripeset.
Extend a Dynamic Volume
The Extend Volume option allows you to add subdevices to an existing dynamic volume. When you extend the volume on which a filesystem is mounted, you can optionally increase the size of the filesystem to fill the size of the volume.
Dynamic Volume Properties: The current properties of this dynamic volume.
Filesystem Properties: The properties for the filesystem located on this dynamic volume.
Available Subdevices: Select the additional subdevices to be added to the dynamic volume. The list includes all available subdevices on imported disks, including subdevices belonging to unimported volumes. Use the arrow keys to reorder those subdevices if necessary.
NOTE: If you selected a subdevice that is associated with an unimported volume, you will see a message reporting that the subdevice contains a volume signature. The message asks whether you want to destroy the affected unimported dynamic volume and reuse this subdevice for the volume you are extending. Be sure that you do not need the unimported dynamic volume before doing this.
mx dynvolume destroy
Recreate a Dynamic Volume
Occasionally you may want to recreate a dynamic volume. For example, you might want to implement striping on a concatenated volume or, if a striped dynamic volume has been extended, you might want to recreate the volume to place all of the subdevices in the same stripe set.
You can change or reorder the subdevices used for the volume and enable striping if desired. To recreate a volume from the command line, you will first need to use the dynvolume destroy command and then run the dynvolume create command.
When you convert a basic volume to a dynamic volume, the new dynamic volume will contain only the original subdevice; you can use the Extend Volume option to add other subdevices to the dynamic volume.
NOTE: The new dynamic volume is unstriped. It is not possible to add striping to a converted dynamic volume. If you want to use striping, you will need to recreate the volume.
Dynamic Volume Recovery
The Dynamic Volume Recovery feature provides the ability to rebuild a dynamic volume from the LUNs originally in the volume. This feature can be used for purposes such as the following:
• Move dynamic volumes from one cluster to another. Deport the dynamic volumes on the original cluster and then import them on the new cluster.
• Recover dynamic volumes from mirrored LUNs for disaster recovery purposes.
Select the dynamic volumes that you want to deport and click the Deport icon in the toolbar. To deport dynamic volumes from the command line, use this command:
mx dynvolume deport ...
Import a Dynamic Volume
When a dynamic volume is imported, the unimported LUNs associated with the volume will be imported and the psv binding, which HP Clustered File System uses to control access to the dynamic volume, will be created.
Select the dynamic volumes that you want to import and click the Import icon in the toolbar. To import dynamic volumes from the command line, first use the following command to list the dynamic volumes that can be imported:
mx dynvolume list --importable
Locate the entry for the volume that you want to import; the volume name appears in the first column of the output. Then use the following command to import the volume, specifying the volume name.
Duplicate. The volume cannot be reassembled because more than one physical device matched a logical subdevice specification. Potential causes of this problem are:
• Both sides of a mirror were exposed (that is, lunmasked) to the cluster.
• One of the devices is a snapclone of the other.
• One of the devices is a disk copy or block-level backup/copy of the other.
Truncated.
The mx dynvolume create, mx dynvolume extend, and mx fs create commands include the --reuse option, which causes the operation to proceed even though the specified subdevice may already be in use by another dynamic volume. The operation will destroy the volume previously using the subdevice.
9 Configure PSFS Filesystems
HP StorageWorks Clustered File System provides the PSFS filesystem. This direct-access shared filesystem enables multiple servers to concurrently read and write data stored on shared SAN storage devices. A journaling filesystem, PSFS provides live crash recovery.
The PSFS filesystem does not migrate processes from one server to another. If you want processes to be spread across servers, you will need to take the appropriate actions.
Journaling Filesystem
When you initiate certain filesystem operations such as creating, opening, or moving a file or modifying its size, the filesystem writes the metadata, or structural information, for that event to a transaction journal. The filesystem then performs the operation.
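The write-ahead pattern described above can be sketched in a few lines. This is a minimal conceptual model, not the PSFS on-disk journal format; the record structure and names are hypothetical.

```python
# Minimal write-ahead journal sketch: a metadata record is logged
# before the operation is performed, so crash recovery can replay
# committed records instead of scanning the whole filesystem.
class Journal:
    def __init__(self):
        self.records = []

    def log(self, op):
        """Append an uncommitted record; return its index."""
        self.records.append({"op": op, "committed": False})
        return len(self.records) - 1

    def commit(self, idx):
        self.records[idx]["committed"] = True

def replay(journal, state):
    """After a crash, re-apply committed records; discard the rest."""
    for rec in journal.records:
        if rec["committed"]:
            name, value = rec["op"]
            state[name] = value
    return state

j = Journal()
i = j.log(("file_a", "created"))
j.commit(i)
j.log(("file_b", "created"))  # crash before this record commits
print(replay(j, {}))          # {'file_a': 'created'}
```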
Filesystem Management and Integrity
HP Clustered File System uses the SANPulse process to manage PSFS filesystems. SANPulse performs the following tasks.
• Coordinates filesystem mounts, unmounts, and crash recovery operations.
• Checks for cluster partitioning, which can occur when cluster network communications are lost but the affected servers can still access the SAN.
Disk Quotas
Disk quotas are enabled or disabled at the filesystem level. When quotas are enabled, the filesystem performs quota accounting to track the disk use of each user having an assigned disk quota. When you create a filesystem and enable quotas, you can also set options including the default hard and soft limits for users on the filesystem. A hard limit specifies the maximum amount of disk space in the filesystem that can be used by files owned by the user.
The Windows operating system and Windows Disk Management utilities are not fully aware of PSFS filesystems or HP Clustered File System dynamic volumes. Although these Microsoft utilities can be useful for troubleshooting issues, they cannot display status from the perspective of an HP Clustered File System volume or filesystem.
Dynamic Volumes
Dynamic volumes created with the HP Clustered File System Volume Manager are not the same as Microsoft dynamic volumes.
NOTE: Be careful not to accidentally back up PSFS filesystems multiple times by using both your site-defined drive letter/mount point assignments and the reserved mount points. Filter out the reserved mount points from backup jobs, and instead use your own site-defined assignments.
• You do not have to assign the same mount points or drive letters to each filesystem on each node. When you use the HP Management Console to assign a drive letter/mount point, the assignment applies to every node. However, you can use the mx fs assign command or the Windows LDM, mountvol.exe, or diskpart.exe commands to assign drive letters/mount points uniquely on each node.
• You can assign multiple mount points to the same filesystem if necessary.
NOTE: If the new filesystem needs to use 8.3 short file names and name tunneling, use the psfsformat command to create the filesystem, as it includes options to enable those features. If the HP CFS Management Console or mx filesystem command is used to create the filesystem, you will need to enable 8.3 support later on as described under “8.3 Short File Names and Name Tunneling” on page 116.
NOTE: Although the “Disk Management” pane of the Microsoft Management Console (MMC) limits volume labels on non-NTFS filesystems to 11 characters, the HP Clustered File System tools allow PSFS filesystem labels to be up to 32 characters long.
Available Volumes: This part of the window lists the basic or dynamic volumes that are currently unused. Subdevices belonging to unimported volumes that are on imported disks are also included.
The Quotas tab allows you to specify whether disk quotas should be enabled on the filesystem. You can enable or disable quotas on a filesystem at any time. (See “Enable or Disable Quotas” on page 128.) When you enable quotas, you can also set default hard and soft quotas and select other quota parameters. To enable quotas on the filesystem, check the “Enable quotas” checkbox.
You can then set default hard and soft quotas for users on that filesystem. If you do not want a default limit, click “Unlimited,” which is the default. To assign a limit, click “Limit” and then specify the appropriate size in either kilobytes, megabytes, gigabytes, or terabytes. The defaults are rounded down to the nearest filesystem block.
NOTE: The default user quotas apply to all users who do not have an individual quota assigned.
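The rounding rule above is simple integer arithmetic, sketched here for clarity. The function name is illustrative, not an HP API.

```python
def round_quota_to_block(limit_bytes, block_size):
    """Round a quota limit down to the nearest filesystem block,
    as the default quotas are before being stored."""
    return (limit_bytes // block_size) * block_size

# With an 8 KB block size, a limit that is not block-aligned is
# rounded down to the previous block boundary.
print(round_quota_to_block(100_000, 8192))  # 98304 (12 blocks)
print(round_quota_to_block(98_304, 8192))   # 98304 (already aligned)
```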
There are two options:
• Static default quota. The default limits are explicitly assigned to the user. Subsequent changes to the default values for the filesystem do not affect the quota limits for the user. This is the default, and matches the NTFS policy.
• Dynamic default quota. No explicit default limits are assigned to the user. Instead, the effective limits applied to the user are the default values for the filesystem at the time of each operation.
If you want to change this option after the filesystem is created, you will need to either disable quotas and then re-enable them with the changed option, or recreate the filesystem.
Recreate a Filesystem
If you want to reformat a filesystem, select the filesystem on the Filesystems window, right-click, and select Recreate Filesystem.
Create a Filesystem from the Command Line
To create a filesystem, use the psfsformat or mx commands.
The psfsformat Command
Use this syntax:
psfsformat [-fq] [-l
The -o option has the following parameters:
• blocksize=# Specify the block size (either 4096 or 8192) for the filesystem.
• disable-fzbm Create the filesystem without Full Zone Bit Maps (FZBMs). The FZBM on-disk filesystem format reduces the amount of data that the filesystem needs to read when allocating a block. It is particularly useful for speeding up allocation times on large, relatively full filesystems.
• softdefault= [T|G|M|K] Set the default soft quota on the filesystem. The optional modifiers are the same as default.
• static-default or dynamic-default With static-default, quota limits for new users are copied from the default quota values set for the filesystem. With dynamic-default, quota limits for new users are linked from the default quota values set for the filesystem.
• [--blocksize [4K|8K]] The block size for the filesystem.
• [--reuse] This option applies only to mx fs create. Reuse a psd device. If you will be creating a filesystem on a psd device that is associated with an unimported volume, the --reuse option must be used to tell the command to reuse the device. Without this option, the attempt to create the filesystem will fail because the device contains a volume signature.
• [--defaultQuotaType ] With staticdq, the default, quota limits for new users are copied from the default quota values set for the filesystem. With dynamicdq, the quota limits are linked from the default quota values. If the default values are changed, the user’s quota limits are also changed.
• [--sparseFileAccounting ] How quota accounting for sparse files is managed.
Specify Mount Path: Mount paths are also known as NTFS junctions, and allow you to mount another volume in an empty directory on an NTFS volume. Type the complete pathname (for example, C:\data1). HP Clustered File System can create the mount path if it does not already exist on each server that will mount the filesystem. If necessary, check “Create the directory if it does not exist.”
You can also view this information from the command line. To see the assignments for a filesystem, use this command:
mx fs queryassignments
To see the assignments on specific servers, use this command:
mx fs getdriveletter --server
Remove Drive Letter or Path Assignments
If you no longer want to associate a filesystem with a particular drive letter or mount path, you can remove the assignment.
Cacls.exe utility can set permissions only on the mount point folder. Permissions applied to the mount point folder do not apply to the underlying root directory of the mounted volume. This is by Microsoft design. It is possible, however, to set permissions on the root directory of a mounted volume using either the Cacls.exe utility or Windows Explorer.
Cacls.exe. Use the cacls /m command-line switch to apply permissions.
8.3 Short File Names and Name Tunneling
By default, PSFS filesystems do not support the creation of 8.3 short file names (SFN) and name tunneling; however, support for these features can be enabled on specific filesystems. These features should be enabled only if you have a specific need for them, as the use of 8.3 files causes degradation in filesystem performance. The degradation is proportional to the number of 8.3 files created. If 8.
When 8.3 support is enabled, applications requiring 8.3 short file names and name tunneling will work correctly. Filesystems enabled for 8.3 support cannot be mounted on HP Clustered File System versions earlier than 3.6.1, even if the 8.3 support is later disabled. 8.3 support can be disabled on a filesystem by using the following command. Before running the command, be sure that the volume is not in use.
psfscheck -e disable8dot3
When 8.
DefaultUsrQuota: 0
DefaultGrpQuota: 0
DefaultSoftUsrQuota: 0
enable8dot3 = 1
allowextchar = 0
Features: FZBM QUOTA SPARSE_FILES EIGHT_THREE_PRIMED ADS
Run-time flags: PSFS_RT_QUOTA_NO_ENFORCE PSFS_RT_QUOTA_STATIC_DEFAULT
The following psfscheck command reports whether 8.3 support is enabled or disabled. Because the command unmounts the filesystem, it should not be run when the volume is in use.
Run-time flags: PSFS_RT_QUOTA_NO_ENFORCE PSFS_RT_QUOTA_STATIC_DEFAULT
View or Change Filesystem Properties
To see information about a specific filesystem, select that filesystem, right-click, and select Properties.
Label: This field specifies the label that is assigned to the filesystem. If the filesystem does not have a label, the field will be blank. You can change the label if necessary.
Extend a Mounted Filesystem
If the Volume allocation display shows that there is space remaining on the volume, you can use the “Extend Filesystem” option on the Properties window to increase the size of the PSFS filesystem to the maximum size of the volume. When you click on the Extend Filesystem button, you will see a warning such as the following. When you click Yes, HP Clustered File System will extend the filesystem to use all of the available space.
Quotas Tab
The Quotas tab allows you to enable or disable quotas on the filesystem, to set the default hard and soft limits, and to configure other quota options. See “Filesystem Options” on page 103 for more information about the quota options.
View Filesystem Status from the Command Line
You can use the following mx command to see status information.
mx fs status [--verbose] [--standard|--snapshots]
The command lists the status of each filesystem. The --verbose option also displays the FS type (always PSFS), the size of the filesystem in KB, and the UUID of the parent disk. The --standard option shows only standard filesystems; the --snapshots option shows only snapshots.
Select the filesystem on the Management Console, right-click, and select Extend Volume. HP Clustered File System then determines whether the disk contains space that can be used to extend the volume or partition. On the Extend Basic Volume window, specify the amount of space that should be added to the filesystem. The “Before” section reports the current disk space information for the filesystem and the disk.
NOTE: If you used the Windows Disk Manager utility to assign drive letters or mount paths for the filesystem, you will need to reassign them on each node after the resize operation is complete. If the drive letters or mount paths were assigned via the HP Management Console, they will still be correct.
The psfssuspend command prevents modifications to the filesystem and forces any changed blocks associated with the filesystem to disk. The command performs these actions on all servers that have mounted the filesystem and then returns successfully. Any process attempting to modify a suspended filesystem will block until the filesystem is resumed. These blocked processes may hold resources, thereby causing other processes to block waiting on these resources.
For a complete description of the options, see the HP StorageWorks Clustered File System Command Reference.
Perform a Filesystem Check
If a filesystem is not unmounted cleanly, the journal will be replayed the next time the filesystem is mounted to restore consistency. You should seldom need to check the filesystem. However, if a filesystem was corrupted by a hardware or software failure, you can repair it with the psfscheck utility.
For more information about the check, click the Details button. If psfscheck locates errors that need to be repaired, it will display a message telling you to run the utility from the command line. For more information, see the HP StorageWorks Clustered File System Command Reference Guide.
10 Manage Disk Quotas
The PSFS filesystem supports disk quotas, which limit the amount of disk space on a filesystem that can be used for an individual user’s files.
Hard and Soft Filesystem Limits
The PSFS filesystem supports both hard and soft filesystem quotas. A hard quota specifies the maximum amount of disk space on a particular filesystem that can be used by files owned by the user.
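The hard/soft distinction can be sketched as a simple admission check: a hard limit blocks the write outright, while exceeding a soft limit allows the write but produces a warning. This is an illustrative model with hypothetical names, not the PSFS enforcement code.

```python
# Conceptual hard vs. soft quota check for a user's write request.
def check_write(used, requested, soft, hard):
    """Return the disposition of a write of `requested` bytes by a
    user currently holding `used` bytes on the filesystem."""
    new_total = used + requested
    if hard and new_total > hard:
        return "denied"            # hard limit cannot be exceeded
    if soft and new_total > soft:
        return "allowed-warning"   # over soft limit: permit but warn
    return "allowed"

MB = 1024 * 1024
print(check_write(90 * MB, 20 * MB, soft=100 * MB, hard=120 * MB))   # allowed-warning
print(check_write(110 * MB, 20 * MB, soft=100 * MB, hard=120 * MB))  # denied
print(check_write(10 * MB, 20 * MB, soft=100 * MB, hard=120 * MB))   # allowed
```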
When you create a PSFS filesystem, you can specify whether quotas should be enabled and you can set quota options on the filesystem. (See “Create a Filesystem” on page 101.) Quotas can also be enabled or disabled on an existing filesystem, using either the HP CFS Management Console or HP Clustered File System commands. The filesystem will be unmounted briefly during the enable/disable operation.
Check or uncheck “Enable quotas” as appropriate. If you are enabling quotas, you can set the default hard and soft quotas for users on that filesystem. To do this, click on “Limit” and then specify the appropriate size in either kilobytes, megabytes, gigabytes, or terabytes. The default is rounded down to the nearest filesystem block. (If you do not want a default limit, click “Unlimited.”) The default quotas apply to all users who do not have individual quotas.
Manage User Quotas
The mx quota command can be used to manage user quotas from the command line. See the HP StorageWorks Clustered File System Command Reference Guide for details about this command. You can also use Microsoft Windows features such as the following to manage user quotas. Refer to the Windows documentation for more information about these features.
Quota GUI. The Windows Quota GUI can be accessed from Microsoft Windows Explorer.
The Quota Entries window. This window can be accessed via Microsoft Windows Explorer. Display the Properties for the filesystem, select the Quota tab, and then click the Quota Entries button. When using the Quota Entries window, you should be aware of the following:
• The “Amount Used” column includes PSFS metadata as well as the space required for the user data in each user’s files. The space used may be different than it would be on another type of filesystem.
Back Up and Restore Quotas
The psfsdq and psfsrq commands can be used to back up and restore the quota information stored on the PSFS filesystem. These commands should be run in conjunction with standard filesystem backup utilities, as those utilities do not save the quota limits set on the filesystem.
NOTE: We recommend that you use the psfsdq and psfsrq commands instead of the Import and Export options on the Quota Entries window.
Examples
The following command saves the quota information for the filesystem located on device psd1p5.
psfsdq -f psd1p5.quotadata psd1p5
The next command restores the data to the filesystem:
psfsrq -f psd1p5.
11 Manage Hardware Snapshots
HP Clustered File System provides support for taking hardware array-based snapshots of PSFS filesystems. The snapshots provide a point-in-time image of a PSFS filesystem. Users or the Administrator can then use the Microsoft Shadow Copies of Shared Folders feature to recover individual files or whole volumes from the appropriate snapshot image. The subdevices containing the PSFS filesystems must reside on one or more storage arrays that are supported for snapshots.
CommandView EVA software must be installed on your Management Appliance. Be sure that your versions of SSSU and CommandView EVA are consistent. The SSSU utility must be renamed to %Program Files%\Hewlett-Packard\SANworks\Element Manager for StorageWorks HSV\Bridge\sssu.exe.
Engenio Storage Arrays
To take hardware snapshots on Engenio storage arrays, a supported version of SANtricity Storage Manager client software must be installed on all servers in the cluster.
Chapter 11: Manage Hardware Snapshots 137 HP EVA Array-Based Snapshots The following dialog appears. Label. The label is used to identify the snapshot on the Management Console. Share as Shadow Copy of Shared Folder. Check this box if you want users to be able to use the snapshot as a shadow copy. HP EVA Options. Snapshots initially consume storage space only to store pointers to the data in the source filesystem, growing in size when source filesystem data is changed.
Chapter 11: Manage Hardware Snapshots 138 Engenio Snapshots The dialog asks for the following information: Label. The label is used to identify the snapshot on the Management Console. Share as Shadow Copy of Shared Folder. Check this box if you want users to be able to use the snapshot as a shadow copy. Engenio Options. The first time a snapshot is taken of a particular filesystem, the snapshot process creates a repository on disk that stores pointers to the data in the source filesystem.
Chapter 11: Manage Hardware Snapshots 139 Snapshots appear on the Management Console beneath the entry for the filesystem, while snapclones appear as a separate filesystem. Each snapshot or snapclone is assigned an HP Clustered File System psd or psv device name. In the following example, the first two filesystem entries are snapclones. The next entry is a regular filesystem, and is followed by snapshots of the filesystem.
Chapter 11: Manage Hardware Snapshots 140 Delete a Snapshot Storage arrays typically limit the number of snapshots that can be taken of a specific filesystem. Before taking an additional snapshot, you will need to delete an existing snapshot. Also, if you want to destroy a filesystem, you will first need to delete all snapshots of that filesystem. To delete a snapshot, select the snapshot on the Management Console, right-click, and select Delete (or use the equivalent option on the Edit > Filesystem menu).
Chapter 11: Manage Hardware Snapshots 141 To unassign a drive letter or path, type the following: mx fs unassign Using Shadow Copies of Shared Folders When you take a snapshot of a PSFS filesystem, you can specify that it should be shared as a shadow copy. The snapshot provides a point-in-time version of the filesystem.
12 Configure Security Features HP Clustered File System provides the following security features: • Role-Based Security. By default, the machine’s local Administrators group has full cluster rights and can perform all HP Clustered File System operations. You can use the Role-Based Security feature to create roles that allow or deny other users and groups the ability to perform specific cluster operations.
Chapter 12: Configure Security Features 143 creating and modifying filesystems. The deny status overrides the allow status. HP Clustered File System provides a built-in System Administrator role that includes all members of the machine local Administrators group. This group has permission to perform all cluster operations.
Chapter 12: Configure Security Features 144 Add a New Role To define a new role, click Add to display the Role Properties window. Name: Type a name for the new role. Role names cannot include the forward slash character (/). Enabled: By default, the role will be enabled when it is created. To disable the role, remove the checkmark. Resource: Use this pane to specify the rights that will apply to the new role.
Chapter 12: Configure Security Features 145 • Setup. Manipulate settings that affect the entire cluster configuration, including membership partitions, licensing, snapshot configuration, fencing configuration, servers, notification settings, and security roles. The Event Notification, Security, and Servers resources are subsets of this resource. – Event Notification. Configure event notification settings. Create affects the ability to enable or disable notifiers.
Chapter 12: Configure Security Features 146 – Custom. Manipulate virtual hosts, service monitors, and device monitors. Create affects the ability to create new application objects. Modify affects the ability to change existing application objects, including adding new servers to an object and rehosting objects. Delete affects the ability to delete application objects. – File Serving. Manipulate FS Option for Windows application objects, including Virtual File Services and Cluster File Shares.
Chapter 12: Configure Security Features 147 Assign Rights Using a Template The Role-Based Security Control Panel includes templates for a Cluster Administrator, File Serving Administrator, Read-Only Operator, SQL Database Administrator, and Storage Administrator. You can use these templates to simplify creating a role. Click Apply Template to see the available templates.
Chapter 12: Configure Security Features 148 Click Add to assign accounts to the role. The Enter an Account dialog then asks for the user or group to be added. Enter an account to add. Type the name or ID for the user or group. Type. Specify whether you are adding a user account or a group account.
Chapter 12: Configure Security Features 149 Form. Specify whether you entered a name or an ID for the account. Tips for Specifying Accounts When specifying accounts for a role, you should be aware of the following: • HP Clustered File System uses the contents of the access token created when you logged into the cluster to determine user and group identities. • To simplify Role-Based Security administration, specify groups instead of users wherever possible.
Chapter 12: Configure Security Features 150 (NetBIOS-domain\username, DNS-name\username, or isolated names without domains) will fail if the user account name contains more than 20 characters. This restriction does not apply to group account names. View Effective Rights The My Rights tab on the Role-Based Security Control Panel lists the effective rights that you have on the cluster. Effective rights are the sum of the rights provided by all of the roles to which you belong.
Chapter 12: Configure Security Features 151 Other Role-Based Security Procedures Export or Import Roles The import and export features can be used if you will be configuring a new cluster and want to use the Role-Based Security settings that you have configured on the existing cluster. Click the Export button to save the current settings to the file of your choice. (The default location is your home directory.) The file is written in XML format.
Chapter 12: Configure Security Features 152 When configuring the new cluster, click Import to import the file containing the Role-Based Security settings. The imported settings will replace any current Role-Based Security settings. To import or export Role-Based Security settings from the command line, use these commands:
mx role export [--permissionOnly]
mx role import [--permissionOnly]
The --permissionOnly option omits the list of role members from the import or export.
Chapter 12: Configure Security Features 153 Properties window. Select the role on the Role-Based Security Control Panel and click Edit to display the Role Properties window. Delete a Role When a role is deleted from the cluster configuration, the accounts belonging to the role will automatically lose their membership in that role. Roles are deleted on the Role Properties window. Select the role on the Role-Based Security Control Panel and click Edit to display the Role Properties window.
Chapter 12: Configure Security Features 154 Remove Roles From an Account Use this command: mx account removerole --form --type ... The --form option specifies whether you are entering the name or ID of the account (NAME is the default). The --type option specifies whether the account is for a user or group or is unknown (GROUP is the default).
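The defaults described above (NAME for --form and GROUP for --type) can be sketched as a small command builder. The argument ordering shown here is an assumption for illustration; consult the HP StorageWorks Clustered File System Command Reference for the authoritative syntax.

```python
# Sketch of the documented defaults for mx account removerole:
# --form defaults to NAME and --type defaults to GROUP.
# The exact placement of arguments is illustrative, not authoritative.

def removerole_cmd(account, roles, form="NAME", type_="GROUP"):
    return (["mx", "account", "removerole",
             "--form", form, "--type", type_, account] + list(roles))

if __name__ == "__main__":
    # Hypothetical account and role names.
    print(" ".join(removerole_cmd("editors", ["Backup Operator"])))
```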
13 Configure Event Notifiers and View Events HP Clustered File System generates an event message when an error condition or failure occurs or when the status of the cluster changes. To provide an audit trail of cluster operations, a message is also generated when a user requests and is granted or denied authorization to perform a task. Event messages are logged and can be viewed either with the Cluster Event Viewer provided with the HP Management Console or with command-line tools.
Chapter 13: Configure Event Notifiers and View Events 156 When an event message is generated, it is written immediately to the Windows event log on the server where the condition occurred. The message is also sent to the HP Clustered File System mxlogd process, which takes these actions: • Sends the message to the event notifier services configured on the server.
Chapter 13: Configure Event Notifiers and View Events 157 • Email Notifier Service. This service sends email to specified addresses when the selected events occur. • Script Notifier Service. This service allows you to specify a script that will be triggered when selected events occur. If you configured notifiers in previous releases of HP Clustered File System, you can use this service to recreate the notifiers. You will need to configure the services that you want to use with your cluster.
Chapter 13: Configure Event Notifiers and View Events 158 Events to Display. Then, on the following dialog, specify the maximum number of events to display on the Event Viewer. You can save the event listing to a file by clicking Save As on the toolbar or by selecting Viewer > Save As. View Event Details To view all of the information for a particular event, double-click that event on the Event Viewer. The Event Properties window shows the information.
Chapter 13: Configure Event Notifiers and View Events 159 Filter the Event Output The Event Viewer includes three filters that can be used to limit the events that are displayed: • Search All. This filter allows you to enter text to be matched. The Event Viewer will show only those events that include the text in any of the event fields. • Severity. This filter allows you to select one or more severity levels. The Event Viewer will display only the events having the specified severity levels. • Timestamp.
Chapter 13: Configure Event Notifiers and View Events 160 --timestamp Filter by a particular time range expressed as . --noHeaders Do not display column headers in the output. --csv Display the output in comma-separated value format. --showborder Display borders in the output. The mcs select command can also be used to view the cluster log. See the HP StorageWorks Clustered File System Command Reference for more information about this command.
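The three filters above can be expressed as a simple predicate pipeline. This sketch applies Search All, Severity, and Timestamp filtering to event records; the dictionary field names are assumptions for illustration, not the product's internal representation.

```python
# Sketch of the Event Viewer's three filters applied to event records.
# Field names ("severity", "timestamp", etc.) are illustrative.

def filter_events(events, text=None, severities=None, start=None, end=None):
    out = []
    for ev in events:
        # Search All: match the text against every field of the event.
        if text and not any(text.lower() in str(v).lower()
                            for v in ev.values()):
            continue
        # Severity: keep only events with the selected severity levels.
        if severities and ev["severity"] not in severities:
            continue
        # Timestamp: keep only events inside the requested time range.
        if start is not None and ev["timestamp"] < start:
            continue
        if end is not None and ev["timestamp"] > end:
            continue
        out.append(ev)
    return out
```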
Chapter 13: Configure Event Notifiers and View Events 161 Severity will be set to “Info.” The IDs assigned to the messages will be in the range 39000-39999. To insert a message, use this command at the Command Prompt: mx matrix log You can also use the mcs log command to add a log message as described in the HP StorageWorks Clustered File System Command Reference.
Chapter 13: Configure Event Notifiers and View Events 162 The Event Definition tab provides a Search All filter that lists messages matching the specified term. You can also select one or more severity levels to be matched. Use the following command to view all event messages from the command line: mx eventnotifier list Select Events for a Notifier Service The Event Definition tab can be used to select the events that should trigger the appropriate notifier services.
Chapter 13: Configure Event Notifiers and View Events 163 To add or remove notifier events from the command line, use these commands. If you do not specify a service, the events will be added or removed from all services. You can specify individual event IDs or a range of IDs to be added. Use commas to separate the values, for example: 100, 300-400,555.
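The ID syntax described above (individual IDs and ranges separated by commas, for example 100, 300-400, 555) can be expanded as follows; this is a minimal sketch of the parsing, not the product's own implementation.

```python
# Sketch: expand an event-ID specification such as "100, 300-400,555"
# into a sorted list of individual event IDs.

def parse_event_ids(spec):
    ids = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-", 1))
            ids.update(range(lo, hi + 1))
        elif part:
            ids.add(int(part))
    return sorted(ids)
```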
Chapter 13: Configure Event Notifiers and View Events 164 Target. Enter either the hostname or the IP address of the SNMP trap forwarding target. The trap-forwarding destination port is the IANA registered port for snmptrap (162/udp). Community. Enter the community string that is used to access the target. The default is public. Disable the SNMP trap forwarding service. This checkbox can be used to enable or disable the service as necessary.
Chapter 13: Configure Event Notifiers and View Events 165 From Email address. Type the email address that will be specified as the sender of the notification emails. If this option is not included, the server name will be used as the sender. To Email address. Type the email addresses to which event notifier email should be sent. If multiple addresses will be specified, use semicolons to separate the addresses. Subject line.
Chapter 13: Configure Event Notifiers and View Events 166 Omit cluster description. By default, the cluster description assigned to the cluster appears in the source address for the email. For example, if the cluster description is Cluster X and the --from is clust2@company.com, the source address for the email will be Cluster X . When the Omit cluster description option is checked, the cluster description does not appear in the source address.
Chapter 13: Configure Event Notifiers and View Events 167 Script. Enter the full path of the script to be run when events configured to trigger the script occur. If the script does not reside on a shared filesystem, ensure that it is replicated to the specified location on all servers. Disable the Script notifier service. This checkbox can be used to enable or disable the service as necessary. To test the service, click Send Test Message.
Chapter 13: Configure Event Notifiers and View Events 168 Test Notifier Services The configuration tabs for the SNMP, email, and script notification services contain a Test button that verifies that events can be sent to the configured notification service. An error will be reported if the service is disabled or is not configured. Enable or Disable a Notifier Service The tabs used to configure the notifier services also have a checkbox to disable the services.
Chapter 13: Configure Event Notifiers and View Events 169 Import or Export the Notifier Event Settings The import and export features can be used if you will be configuring a new cluster and want to use the notifier event settings that you have configured on the existing cluster. Click the Export Definitions button to save the current settings to the file of your choice. (The default is your home directory.) The exported settings include all of the event notifiers and their event definition assignments.
Chapter 13: Configure Event Notifiers and View Events 170 Script Requirements For the script to work properly, the following requirements must be met: • The script or program must be accessible from each node in the cluster. It is recommended that an identical copy of the script or program be placed on local storage on each node to ensure that it will always be available. • The script must be able to be executed on each node.
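A notifier script that satisfies the requirements above might begin by reading the event XML from standard input. In this sketch the element names (event, id, severity, message) are assumptions for illustration; the authoritative schema is the input-XML example in this chapter.

```python
# Sketch of a script-notifier handler that parses the event XML
# delivered on standard input. Element names are illustrative
# assumptions, not the documented schema.
import sys
import xml.etree.ElementTree as ET

def handle_event(xml_text):
    root = ET.fromstring(xml_text)
    return {"id": root.findtext("id"),
            "severity": root.findtext("severity"),
            "message": root.findtext("message")}

if __name__ == "__main__":
    event = handle_event(sys.stdin.read())
    print("event %(id)s [%(severity)s]: %(message)s" % event)
```

As the requirements state, an identical copy of such a script should be placed on local storage on each node so that it is always available.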
Chapter 13: Configure Event Notifiers and View Events Following is an example of the input XML (as with the above, not all elements are required for each event):
14 Cluster Operations on the Applications Tab The Applications tab on the Management Console shows all HP Clustered File System applications, virtual hosts, service monitors, and device monitors configured in the cluster and enables you to manage and monitor them from a single screen. Applications Overview An application provides a way to group associated cluster resources (virtual hosts, service monitors, and device monitors) so that they can be treated as a unit.
Chapter 14: Cluster Operations on the Applications Tab 173 a device monitor, the application will use the same name as the device monitor. The Applications Tab The Management Console lists applications and their associated resources (virtual hosts, service and device monitors, CIFS virtual servers) on the Applications tab. The applications and resources appear in the rows of the table. (Double-click on a resource to see its properties.
Chapter 14: Cluster Operations on the Applications Tab 174 The cells indicate whether a resource is deployed on a particular server, as well as the current status of the resource. If a cell is empty, the resource is not deployed on that server. The icons used on the Applications tab report the status of the servers, applications, and resources. The following icons are used in the server columns to indicate the status of applications and resources.
Chapter 14: Cluster Operations on the Applications Tab 175 The application icon and its corresponding status indicate the state of the least healthy resource in the application. If OK appears in the status column, all clients can access the application. If the status is Error or Warning, at least one resource in the application has that status. The possible states for the application are:
Status: OK. Meaning: Clients can access the application.
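The "least healthy resource" rule above can be sketched as a worst-of aggregation. The severity ordering used here (OK below Warning below Error) follows the text; the data shapes are illustrative.

```python
# Sketch: an application's displayed status is the status of its least
# healthy resource. Ordering OK < Warning < Error per the description.

_RANK = {"OK": 0, "Warning": 1, "Error": 2}

def application_status(resource_statuses):
    """Return the worst status among an application's resources."""
    return max(resource_statuses, key=lambda s: _RANK[s])
```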
Chapter 14: Cluster Operations on the Applications Tab 176 Filter the Applications Display You can use filters to limit the information appearing on the Application tab. For example, you may want to see only a certain type of monitor, or only monitors that are down or disabled. You can use filters to do this. To add a filter, click the “New Filter” tab and then configure the filter.
Chapter 14: Cluster Operations on the Applications Tab Name: Specify a name for this filter. On the Type tab shown above, select the types of virtual hosts, service monitors, and device monitors that you want to see. Click on the State tab to select specific states that you are interested in viewing. (The Applications tab will be updated immediately.
Chapter 14: Cluster Operations on the Applications Tab 178 Click OK to close the filter. The filter then appears as a separate tab and will be available to you when you connect to any cluster. (Filters are stored on a per-user basis in the registry.) To modify an existing filter, select that filter, right-click, and select Edit Filter. To remove a filter, select the filter, right-click, and select Delete Filter.
Chapter 14: Cluster Operations on the Applications Tab 179 When you reach a cell that accepts drops, the cursor will change to an arrow. The following drag and drop operations are allowed. Applications These operations are allowed only for applications that include at most only one virtual host. • Assign an application to a server. Drag the application from the Name column to the empty cell for the server.
Chapter 14: Cluster Operations on the Applications Tab 180 • Switch the primary and backup servers (or two backup servers) for a virtual host. Drag the virtual host from one server cell to the cell for the other server. If the virtual host is active, this operation can disconnect existing applications that depend on the virtual host. When the operation is complete, the ordering for failover will be switched. • Remove a virtual host from a server.
Chapter 14: Cluster Operations on the Applications Tab 181 reordered as necessary. If the monitor was multi-active, it will remain active on any other servers on which it is configured. Menu Operations Applications The following operations affect all entities associated with an HP Clustered File System application. These operations can also be performed from the command line, as described in the HP StorageWorks Clustered File System Command Reference Guide.
Chapter 14: Cluster Operations on the Applications Tab 182 • Add a service monitor. • Enable or disable the virtual host. • View or change the properties for the virtual host. • Delete the virtual host. To perform these procedures, left-click on the cell for the virtual host (click in the Name column). Then right-click and select the appropriate operation from the menu. See “Configure Virtual Hosts” on page 204 for more information about these procedures.
15 Performance Monitoring The MxS_Perfmon*.msi package provided with HP Clustered File System provides performance information for the cluster, individual servers, and PSFS filesystems. (See the HP Clustered File System Setup Guide for information about installing this package.) The MxS_Perfmon package includes two components: the Performance Dashboard and the MxS Perfmon extension.
Chapter 15: Performance Monitoring 184 The dashboard opens in the browser with a full view of the cluster to which the connected node belongs (the Cluster Report). You will need to authenticate to the dashboard by entering the fully qualified NTLM (DOMAIN\User) or UPN (user@FQDN) credentials. Performance Views The Performance Dashboard provides the following metrics for the entire cluster: • Cluster Report.
Chapter 15: Performance Monitoring 185 The dashboard is updated every five minutes. You can also click Get Fresh Data to update the display. NOTE: The averages are calculated locally. If the local data has gaps during a period when other nodes recorded substantially different values (for example, a spike), those values are not considered when averages are computed on the local node. Any resulting variations are local and, in most cases, occur only near the boundaries of the gaps.
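The local-averaging behavior described in the note can be sketched as follows: samples missing from the local data are skipped rather than treated as zero, so a spike recorded only on other nodes does not enter this node's average. This is an illustration of the described behavior, not the dashboard's actual code.

```python
# Sketch: average a node's samples, skipping holes (None) so gaps in
# the local data do not drag the mean toward zero.

def local_average(samples):
    present = [s for s in samples if s is not None]
    return sum(present) / len(present) if present else None
```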
Chapter 15: Performance Monitoring 186 Physical View Click Physical View on the Cluster Report to see details about the configuration of the nodes in the cluster. Verbosity level. This selection controls the amount of information that is reported for each node. Columns. This feature is unused. Click Full View to return to the Cluster Report. Filesystem Aggregate View Click Filesystem - Aggregate View on the Cluster Report to open the display.
Chapter 15: Performance Monitoring 187 Filesystem Detail View Click Filesystem - Detail View on the Filesystems View to see the metrics for a particular filesystem across the cluster and on each individual node. On the Filesystem Detail View, select the filesystem that you want to monitor. By default, the view reports filesystem metrics for the last hour. Select a different interval from the “Last” drag-down menu, or click Get Fresh Data to update the display.
Chapter 15: Performance Monitoring 188 Metrics View The Metrics View reports processor, memory, network, and other system metrics on each node in the cluster. Click Metrics View on the Cluster Report to open the display. Use the drag-down menus at the top of the display to select the metric to viewed, the time interval, and the order in which the graphs should be displayed.
Chapter 15: Performance Monitoring 189 Host View The Host View provides details about the configuration of the node and reports performance metrics for the selected time period. Last. Select the time interval to be displayed. The choices are the last hour, day, week, month, or year. The bottom portion of the Host View shows the remaining performance metrics for the node.
Chapter 15: Performance Monitoring Node View The Node View lists the hardware and operating system used on the node and reports when the node was last booted and how long it has been up.
Chapter 15: Performance Monitoring 191 Click Physical View to go to the Physical View for the entire cluster. Host-Specific Filesystem View This view shows filesystem throughput and filesystem I/O on the node. To open the view, click Host-specific Filesystem View on the Host Report. You can select up to five filesystems to monitor. If you need to monitor more than five filesystems, open another browser window.
Chapter 15: Performance Monitoring 192 Using the Windows Performance Tool To access the Microsoft Windows Performance tool, open the Control Panel and then select Administrative Tools > Performance. You will need to add the HP Clustered File System objects to the display. On the Performance window, click the Add button (+) and then select the HP Clustered File System objects that you want to monitor.
Chapter 15: Performance Monitoring 193 Use the System Monitor to view the counters or use the Performance Logs and Alerts feature to configure logs and alerts as appropriate for your site. Volume Objects The Perfmon utility can display counters for up to 64 filesystems. If 64 or fewer PSFS filesystems are configured in the cluster, the Add Counters dialog will list a separate object for each filesystem (for example, MxS$Volume$psv4).
Chapter 15: Performance Monitoring 194 The Add Counters dialog will now display individual volume objects for the filesystems listed in the vol_whitelist.conf file. If more than 64 filesystems are listed in the file, the dialog will display objects for the first 63 filesystems in the file. It will also display the object MxS$Volume$_PRUNE_vol_whitelist.conf to notify you that the file contains more filesystems than Perfmon can display.
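The display limit described above can be sketched as follows: Perfmon shows at most 64 volume objects, and with a longer whitelist the first 63 filesystems appear plus the _PRUNE_ marker object. The object naming follows the examples in the text; the logic is an illustration of the described behavior, not the extension's code.

```python
# Sketch of the Perfmon volume-object limit: at most 64 objects are
# shown. Beyond that, the first 63 filesystems are listed plus a
# marker object indicating the whitelist was pruned.

MAX_OBJECTS = 64
PRUNE_OBJECT = "MxS$Volume$_PRUNE_vol_whitelist.conf"

def visible_volume_objects(volumes):
    if len(volumes) <= MAX_OBJECTS:
        return ["MxS$Volume$" + v for v in volumes]
    return (["MxS$Volume$" + v for v in volumes[:MAX_OBJECTS - 1]]
            + [PRUNE_OBJECT])
```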
Chapter 15: Performance Monitoring 195
Description | Cluster Metric | Node Metric
Average CPU load (five minutes) | NA | Proc Queue Length (5 min)
Average CPU load (15 minutes) | NA | Proc Queue Length (15 min)
Average committed physical memory utilization (%) | % Committed Memory in Use | % Committed Memory in Use
Average swap memory utilization (%) | % Paging File Usage | % Paging File Usage
CIFS transfer rate (MB/sec) | NA | CIFS Server Bytes/sec
CIFS operations/sec | NA | CIFS Server Operations/sec
Number of
Chapter 15: Performance Monitoring 196
Description | Cluster Metric | Node Metric
Filesystem throughput | Filesystem Throughput | Filesystem Throughput
Filesystem I/O operations | Filesystem I/O Ops | Filesystem I/O Ops
% Disk usage (maximum) | % Disk Usage (Max) | % Disk Usage (Max)
Number of processes | NA | Number of Processes
Total available disk space | NA | Total Available Disk Space
Total disk space | NA | Total Disk Space
Cluster Counters The cluster counters provided for the Microsoft Windows perfmon utility are described below.
Chapter 15: Performance Monitoring 197 • Avg % Paging File Usage The average percent usage of the paging file in the cluster. If the counter approaches 90 percent, consider resizing the paging file to accommodate system needs. • Total CIFS Server Bytes/sec The total Bytes of data the cluster is sending and receiving from the network. This counter relates strictly to the Server service and is a measure of how busy the cluster is.
Chapter 15: Performance Monitoring 198 • % Paging File Usage The percent usage of the paging file in the server. If the counter approaches 90 percent, consider resizing the paging file to accommodate system needs. • CIFS Server Bytes/sec The total Bytes of data the server is sending and receiving from the network. This counter relates strictly to the Server service and is a measure of how busy the server is. • CIFS Server Sessions The total number of active sessions on the server.
Chapter 15: Performance Monitoring 199 changes. To do this, run the following command once on any of the cluster nodes. (The key will automatically be applied to any new nodes added to the cluster.) \tools\mxdstool.exe set int32 /cluster/perfmon/noreload_perfext 1 To re-enable the automatic reload feature, run the following command: \tools\mxdstool.
Chapter 15: Performance Monitoring 200 To re-enable the Performance Dashboard and the Perfmon extension on all nodes, set the value to 1: mxdstool set int32 /cluster/perfmon enable 1 You can also disable the Performance Monitor Service by renaming the following registry key: HKLM\System\CurrentControlSet\Services\mxperfext\Performance For example: HKLM\System\CurrentControlSet\Services\mxperfext\Performance_unused Restore the original registry key to enable the service.
Chapter 15: Performance Monitoring 201 Troubleshooting The following tips may be useful if you experience issues with the Performance Monitor Service. Network Issues Verify that multicast is enabled on all networks. If multicast is not supported or available among the cluster nodes, take these steps on all nodes in the cluster: 1. Add the following Windows registry entry to prevent the automatic regeneration of the file \perfmon\conf\gmond_comm.conf.
Chapter 15: Performance Monitoring 202 processes but will eventually give up if the processes do not start. If after a period of time the processes do not appear to be running, ensure that HP Clustered File System is running and then issue the following command on the node to check the status of the processes: mxperfsrv /status If the processes are not running, restart the Performance Monitor Service on the node. See “Restart Performance Monitoring” on page 199.
Chapter 15: Performance Monitoring 203 • The MxS$ performance objects do not appear in the Add Counter dialog. Determine whether the Last Counter registry value under the HKLM\system\CurrentControlSet\services\mxperfext\Performance registry key appears without one or more of the following values: Last Help, First Counter, First Help, Object List. If so, delete the Last Counter registry value and then run the update command. cscript /nologo \perfmon\bin\mxperf_reload.
16 Configure Virtual Hosts HP StorageWorks Clustered File System uses virtual hosts to provide failover protection for servers and network applications. Overview A virtual host is a hostname/IP address configured on a set of network interfaces. Each interface must be located on a different server. The first network interface configured is the primary interface for the virtual host. The server providing this interface is the primary server.
Chapter 16: Configure Virtual Hosts 205 Cluster Health and Virtual Host Failover To ensure the availability of a virtual host, HP Clustered File System monitors the health of the administrative network, the active network interface, and the underlying server. If you have created service or device monitors, those monitors periodically check the health of the specified services or devices.
Chapter 16: Configure Virtual Hosts 206 The failover operation to another network interface has minimal impact on clients. For example, if clients were downloading Web pages during the failover, they would receive a “transfer interrupted” message and could simply reload the Web page. If they were reading Web pages, they would not notice any interruption. If the active network interface fails, only the virtual hosts associated with that interface are failed over.
Chapter 16: Configure Virtual Hosts 207 Add or Modify a Virtual Host To add or update a virtual host from the HP CFS Management Console, select the appropriate option: • To add a new virtual host, select Cluster > Add > Add Virtual Host or click the V-Host icon on the toolbar. Then configure the virtual host on the Add Virtual Host window. • To update an existing virtual host, select that virtual host on either the Server or Virtual Hosts window, right-click, and select Properties.
Chapter 16: Configure Virtual Hosts 208 select an existing application name, or leave this field blank. However, if you do not assign a name, HP Clustered File System will use the IP address for the virtual host as the application name. Always active: If you check this box, upon server failure, the virtual host will move to an active server even if all associated service and device monitors are inactive or down.
Chapter 16: Configure Virtual Hosts 209 Network Interfaces: When the “All Servers” box is checked, the virtual host will be configured on all servers having an interface on the network you select for this virtual host. When you add another server to the cluster, the virtual host will automatically be configured on that server. This option can be useful with administrative applications. Available/Members: The Available column lists all network interfaces that are available for this virtual host.
Chapter 16: Configure Virtual Hosts 210 Configure Applications for Virtual Hosts After creating virtual hosts, you will need to configure your network applications to recognize them. For example, if you are using a Web server, you may need to edit its configuration files to recognize and respond to the virtual hosts. By default, FTP responds to any virtual host request it receives.
Chapter 16: Configure Virtual Hosts 211 Rehost a Virtual Host You can use the Rehost option to modify the configuration of a virtual host. For example, you might want to change the primary for the virtual host or reorder the backups. To use this option, select the virtual host, right-click, and then select Rehost. The Virtual Host Rehost window then appears. When you make your changes and click OK, you will see a message warning that this action may cause a disruption of service.
Change the Virtual IP Address for a Virtual Host

When you change the virtual IP address of a virtual host, you will also need to update your name server and to configure applications to recognize the new virtual IP address. The order in which you perform these tasks depends on your application and the requirements of your site. You can use mx commands to change the virtual IP address of a virtual host. Complete these steps:
1.
When certain events occur on the server where a virtual host is located, the ClusterPulse process will attempt to fail over the virtual host to another server configured for that virtual host. For example, if the server goes down, ClusterPulse will check the health of the other servers and then determine the best location for the virtual host.
• The PanPulse process controls whether a network interface is marked up or down. When PanPulse determines that an interface currently hosting a virtual host is down, ClusterPulse will begin searching for another server on which to locate the virtual host.
3. ClusterPulse narrows the list to those servers without inactive, down, or disabled HP Clustered File System device monitors.
Specify Failover/Failback Behavior

The Probe Severity setting allows you to specify whether a failure of the service or device monitor probe should cause the virtual host to fail over. For example, you could configure a gateway device monitor to watch a router. The device monitor probe might occasionally time out because of heavy network traffic to the router; however, the router is still functioning.
• For service monitors, you can assign a priority to each monitor (the Service Priority setting). If ClusterPulse cannot locate an interface where all services are “up” on the underlying server, it selects an interface where the highest priority service is “up” on the underlying server.
• After the virtual host fails over to node 2, a service monitor probe fails on that node. Now both nodes have a down service monitor. Failback does not occur because the servers are equally healthy. If the failed service is then restored on node 1, that node will now be healthier than node 2 and failback will occur. (Note that if the virtual host policy was AUTOFAILBACK, failback would occur when the probe failed on node 2 because both servers were equally healthy.)
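The failback decision described above can be sketched as a small model. This is a conceptual illustration only, not HP Clustered File System source code; it treats a node as "healthier" when it has fewer down monitors, matching the scenarios in the example.

```python
# Toy model of virtual host failback: compare the health of the
# primary node and the node currently hosting the virtual host.
# Health here is simply the count of down monitors on each node.

def should_fail_back(primary_down_monitors, current_down_monitors, policy):
    """Return True if the virtual host should move back to the primary."""
    if policy == "AUTOFAILBACK":
        # Fails back whenever the primary is at least as healthy
        # as the current node (ties go to the primary).
        return primary_down_monitors <= current_down_monitors
    # NOFAILBACK: only a strictly healthier primary triggers failback.
    return primary_down_monitors < current_down_monitors
```

Under NOFAILBACK, equally healthy nodes leave the virtual host where it is; restoring the failed service on node 1 makes the primary strictly healthier and failback occurs, as in the example above.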
17 Configure Service Monitors

Service monitors are typically used to monitor a network service such as HTTP or FTP. If a service monitor indicates that a network service is not functioning properly on the primary server, HP Clustered File System can transfer the network traffic to a backup server that also provides that network service.

Overview

Before creating a service monitor for a particular service, you will need to configure that service on your servers.
severity, Start scripts, and Stop scripts) are consistent across all servers configured for a virtual host.

Service Monitors and Failover

If a monitored service fails, HP Clustered File System attempts to relocate any virtual hosts associated with the service monitor to a network interface on a healthier server.
FTP Service Monitor

By default the FTP service monitor probes TCP port 21 of the virtual host address. You can change this port number to the port number configured for your FTP server. The default frequency of the probe is every 30 seconds. The default time that the service monitor waits for a probe to complete is five seconds. The probe function attempts to connect to port 21 and expects to read an initial message from the FTP server.
service if it is not already started. When the service monitor instance becomes inactive, the monitor stops the NT service if the probe type for the monitor is set to Single-Probe. When you configure the monitor, you will need to indicate whether dependent services of the NT service should also be started and stopped.
TCP Service Monitor

The generic TCP service monitor defaults to TCP port 0. You should set the port to the listening port of your server software. The default frequency of the probe is every 30 seconds. The default time that the service monitor waits for a probe to complete is five seconds. Because the service monitor cannot know what to expect from the TCP port connection, it simply attempts to connect to the specified port.
Add or Modify a Service Monitor

Adding a service monitor configures HP Clustered File System monitoring only. It does not configure the service itself.
Monitor Type: Select the type of service that you want to monitor.

Timeout: The maximum amount of time that the monitor_agent process will wait for a probe to complete. For most monitors, the default timeout interval is five seconds. You can use the default setting or specify a new timeout interval.

Frequency: The interval of time, in seconds, at which the monitor probes the designated service.
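The Timeout setting can be pictured as a deadline wrapped around each probe: a probe that returns in time reports the service up or down, while a probe that runs past the deadline is reported as a timeout. The sketch below illustrates this semantics; it is an illustration, not the monitor_agent implementation.

```python
import concurrent.futures

def run_probe(probe, timeout):
    """Run probe() with a deadline; return 'UP', 'DOWN', or 'TIMEOUT'.

    probe is any callable returning True (service healthy) or False.
    A probe that does not finish within `timeout` seconds is reported
    as a timeout rather than a definitive up/down result.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(probe)
        try:
            return "UP" if future.result(timeout=timeout) else "DOWN"
        except concurrent.futures.TimeoutError:
            return "TIMEOUT"
```

A scheduler would then invoke `run_probe` once per Frequency interval for each configured monitor.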
To add or update a service monitor from the command line, use this command:

mx service add|update [--type DNS|FTP|HTTP|HTTPS|IMAP4|NNTP|NTSERVICE|POP3|SMTP|TCP|CUSTOM] [--timeout <timeout>] [--frequency <frequency>] [<advanced-setting>] ...

NOTE: The --type option cannot be used with the mx service update command. See “Advanced Settings for Service Monitors” for information about the other arguments that can be specified for service monitors.
Service Monitor Policy

The Policy tab lets you specify the failover behavior of the service monitor and set its service priority.

Timeout and Failure Severity

This setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a probe of a monitored service fails.
monitored resource is not critical, but is important enough that you want to keep a record of its health.

AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When the service is recovered on the original node, failback occurs according to the virtual host’s failback policy.

NOAUTORECOVER. The virtual host fails over when a monitor probe fails and the monitor is disabled on the original node, preventing automatic failback.
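The difference between AUTORECOVER and NOAUTORECOVER can be summarized in a small model: both severities fail the virtual host over on a probe failure, but NOAUTORECOVER additionally disables the monitor on the original node, which blocks automatic failback. This is a conceptual sketch, not HP CFS internals.

```python
# Model of the two probe-severity settings named above. A probe
# failure always triggers failover; whether the monitor is left
# enabled on the original node determines if failback can happen.

def on_probe_failure(severity):
    """Return the effects of a probe failure for the given severity."""
    return {
        "failover": True,
        # NOAUTORECOVER disables the monitor on the original node.
        "monitor_disabled": severity == "NOAUTORECOVER",
    }

def can_fail_back(severity, vhost_policy):
    """Failback requires an enabled monitor (AUTORECOVER) and a
    virtual host policy that permits automatic failback."""
    return severity == "AUTORECOVER" and vhost_policy == "AUTOFAILBACK"
```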
Probe Type

Service monitors can be configured to be either single-probe or multi-probe. A multi-probe monitor performs the probe function on each node where the monitor is configured, regardless of whether the monitor instance is active or inactive. This is the default for the built-in monitors. Single-probe monitors perform the probe function only on the node where the monitor instance is active.
Scripts

Service monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows:

Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the service.
Start script. Runs as a service is becoming active on a server.
Stop script. Runs as a service is becoming inactive on a server.
without considering this to be an error. In both of these cases, the script should exit with a zero exit status. This behavior is necessary because HP Clustered File System runs the Start and Stop scripts to establish the desired start/stop activity, even though the service may actually have been started by something other than HP Clustered File System before ClusterPulse was started.
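The idempotency requirement above can be illustrated with a minimal Start script: it must exit 0 even when the service is already running. The flag file and service name below are hypothetical stand-ins for a real service check and start command.

```python
import os

# Sketch of an idempotent Start script. A real script would query the
# actual service (e.g. a process or NT service) instead of a flag file.
RUNNING_FLAG = "/tmp/myservice.running"   # hypothetical state marker

def start_service():
    """Start the service; return 0 (success) even if already running."""
    if os.path.exists(RUNNING_FLAG):
        return 0                      # already running -- not an error
    open(RUNNING_FLAG, "w").close()   # "start" the pretend service
    return 0

# A real script would end with: sys.exit(start_service())
```

A Stop script follows the mirror-image pattern: removing the marker (stopping the service) succeeds with exit 0 even when the service is already stopped.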
If you want to reverse this order, preface the Stop script with the prefix [post] on the Scripts tab.

Event Severity

If a Start or Stop script fails or times out, a monitor event is created on the node where the failure or timeout occurred. Configuration errors can also cause this behavior. You can view these events on the HP CFS Management Console and clear them from the Console or command line after you have fixed the problems that caused them.
3. The Start script is run on the server where the virtual host is becoming active.

PARALLEL. The strict ordering sequence for Stop and Start scripts is not enforced. The scripts run in parallel across the cluster as a virtual host is in transition. The PARALLEL configuration can speed up failover time for services that do not depend on strict ordering of Start and Stop scripts.
UP or DOWN as appropriate. If the service is UP, the monitor will report UP Active (disabled). To disable a service monitor, select it on the Management Console, right-click, and select Disable. To disable a service monitor from the command line, use this command:

mx service disable <service> ...

Enable a Previously Disabled Service Monitor

From the Management Console, select the service monitor to be enabled, right-click, and select Enable.
18 Configure Device Monitors

HP StorageWorks Clustered File System provides built-in device monitors that can be used to watch local disks, gateway devices, or an NT service, or to monitor access to a SAN disk partition containing a PSFS filesystem. You can also create custom device monitors.

Overview

A device monitor is configured on one or more servers in the cluster. Depending on the type of monitor, it can be active on all servers on which it is configured, or on only one server.
Type                Default Timeout   Default Frequency   Other Parameters
SHARED_FILESYSTEM   5 seconds         30 seconds          Filesystem, filename
CUSTOM              60 seconds        60 seconds          User probe script

Activity Types for Device Monitors

The activity type specifies where the device monitor can be active. The activity type can be one of the following:
• Single-Active. The monitor is active on only one of the selected servers.
GATEWAY Device Monitor

When certain network failures occur, the servers in a cluster can lose communication with each other. This situation can result in a partition, or split, of the cluster. For example, in a two-server cluster, each server would assume that it remained in the cluster and that the other server was down. The gateway device monitor detects the network failure and prevents the cluster from partitioning.
The monitor probe queries the status of the NT service. If the status is SERVICE_RUNNING, the service status remains Up. If the status does not indicate that the NT service is running, the service status is set to Down. The NTSERVICE monitor is also available as a service monitor. When deciding whether to create a service monitor or a device monitor, consider the effect that you want the monitor to have on the cluster.
Custom Device Monitor

A CUSTOM device monitor can be used if the built-in device types are not sufficient for your needs. Custom device monitors can be particularly useful when integrating HP Clustered File System with a custom application. When you create a CUSTOM monitor, you will need to supply the probe script. In the script, probe commands should determine the health of the device as necessary.
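As a sketch of such a probe script, the example below checks whether a test file can be written on the monitored filesystem. Everything here is hypothetical: the mount point, the test-file name, and the exit-status convention (0 for up, nonzero for down) are assumptions for illustration, not a documented HP CFS contract.

```python
import os

def probe(mount_point):
    """Report device health by writing a scratch file on the
    monitored filesystem: 0 = up, 1 = down (hypothetical convention)."""
    test_file = os.path.join(mount_point, ".cfs_probe")
    try:
        with open(test_file, "w") as f:
            f.write("probe")      # exercise a real write to the device
        os.remove(test_file)      # clean up the scratch file
        return 0
    except OSError:
        return 1                  # write failed -- device unhealthy

# A real probe script would end with: sys.exit(probe("/mnt/psfs1"))
```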
The device monitor activeness policy decision is made as follows:
1. If the device monitor on a specific server is disabled, then the device monitor will not be made active on that server.
2. ClusterPulse considers the list of servers that are both up and enabled and that are configured for the device monitor.
Add or Modify a Device Monitor

Select the appropriate option from the HP CFS Management Console:
• To add a new device monitor, select the server to be associated with the monitor from the Servers window, right-click, and select Add Device Monitor (or click the Device icon on the toolbar). Then configure the device monitor on the New Device Monitor window.
Device Type: Select the appropriate device type (DISK, GATEWAY, NTSERVICE, SHARED_FILESYSTEM, or CUSTOM). See “Overview” on page 234 for a description of these monitors.

Frequency and Timeout: These fields are set to the default values for the type of device you have selected. Change them as needed.

Additional parameters: Depending on the type of monitor you are creating, you will be asked for an additional parameter.
• DISK monitor.
dotted-decimal IP address of the hostname for the server, and <name> is the name assigned to the SHARED_FILESYSTEM device monitor.
• CUSTOM monitor. Specify the pathname to the probe script to be used with the monitor.

The following example shows a device monitor created on the server svr1. To add a device monitor from the command line, use this command:

mx device add --servers <server>,<server>,...
Probe Severity

The Probe Severity tab lets you specify the failover behavior of the monitor. The Probe Severity setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a monitored device fails.
monitored resource is not critical, but is important enough that you want to keep a record of its health.

AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When device access is recovered on the original node, failback occurs according to the virtual host’s failback policy.

NOAUTORECOVER. The virtual host fails over when a monitor probe fails and the monitor is disabled on the original node, preventing automatic failback.
Custom Scripts

The Scripts tab lets you configure custom Recovery, Start, and Stop scripts for a device monitor. Device monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows:

Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the device.
Start script. Runs as a device is becoming active on a server.
Stop script.
must be robust enough to run when the device is already stopped, without considering this to be an error. In both of these cases, the script should exit with a zero exit status. This behavior is necessary because HP Clustered File System runs the Start and Stop scripts to establish the desired start/stop activity, even though the device may actually have been started by something other than HP Clustered File System before the ClusterPulse process was started.
If you want to reverse this order, preface the Stop script with the prefix [post] on the Scripts tab.

Event Severity

If a Start or Stop script fails or times out, a monitor event is created on the node where the failure or timeout occurred. Configuration errors can also cause this behavior. You can view these events on the HP CFS Management Console and clear them from the Console or command line after you have fixed the problems that caused them.
2. ClusterPulse waits for all Stop scripts to complete.
3. The Start script is run on the server where the virtual host or shared device is becoming active.

PARALLEL. The strict ordering sequence for Stop and Start scripts is not enforced. The scripts run in parallel across the cluster as a shared device or virtual host is in transition.
When a device monitor detects a failure, HP Clustered File System attempts to fail over the active virtual hosts associated with that monitor. By default, all virtual hosts on the servers used with the device monitor are dependent on the device monitor. However, you can specify that only certain virtual hosts be dependent on the device monitor.
Probe Type. The servers on which the monitor probe will occur. Select Single-Probe to conduct the probe only on the server where the monitor is active. Select Multi-Probe to conduct the probe on all servers configured for the monitor.

Activity Type. Where the monitor can be active. The options are:
• Single-Active. The monitor is active on only one of the selected servers.
Available Servers/Selected Servers. The type of the device monitor affects whether the monitor should be configured on one or multiple servers.
• A GATEWAY monitor is multi-active and can be configured on multiple servers.
• For SHARED_FILESYSTEM monitors, you should select the servers that mount the monitored filesystem and are running the applications that access data from that filesystem.
Enable a Device Monitor

From the Management Console, select the device monitor to be enabled, right-click, and select Enable. To enable a device monitor from the command line, use this command:

mx device enable <device> ...

Clear Device Monitor Error Condition

To clear an error from a device monitor, select that monitor, right-click, and select Clear Last Error.
19 Advanced Monitor Topics

The topics described here provide technical details about HP Clustered File System operations. This information is not required to use HP Clustered File System in typical configurations; however, it may be useful if you want to design custom scripts and monitors, to integrate HP Clustered File System with custom applications, or to diagnose complex configuration problems.
The following examples show state transitions for a service monitor that uses the default values for autorecovery, priority, and serial script ordering. Start and Stop scripts are also defined for the monitor. The virtual host associated with the monitor has a primary interface and two backup interfaces. The first example shows the state transitions that occur at startup from an unknown state. At i1, all instances of the monitor have completed stopping.
When a failure occurs on the Primary, the virtual host needs to fail over to a backup. HP Clustered File System now looks for the best location for the virtual host. Because the probe status on the first backup is “down,” HP Clustered File System chooses the second backup, where the probe status is “up.” At i5 in the following example, the probe fails on the Primary. At i6, the virtual host is deconfigured on the Primary.
Custom Device Monitors

A custom device monitor is associated with a list of servers and a list of virtual hosts configured on those servers. A custom device monitor can be active on only one server at a time. On each server, the monitor uses a probe mechanism to determine whether the service is active. The probe mechanism is in one of the following states on each server: Up, Down, Unknown, Timeout. A custom device monitor also has an activity status on each server.
[State-transition diagram: Vhost status, service probe status, service monitor activity, device probe status, and device monitor activity on the Primary, First Backup, and Second Backup servers over time, starting at t1.]
Integrate Custom Applications

There are many ways to integrate custom applications with HP Clustered File System:
• Use service monitors or device monitors to monitor the application
• Use a predefined monitor or your own user-defined monitor
• Use Start, Stop, and Recovery scripts

Following are some examples of these strategies.
Built-In Monitor or User-Defined Monitor?

To decide whether to use a built-in monitor or a user-defined monitor, first determine whether a built-in monitor is available for the service you want to monitor and then consider the degree of content verification that you need.
This script connects to port 2468, sends a string specified by the protocol, and determines whether it has received an expected response. You distribute this script to the same location on all servers on virtual host vh1, and then create a custom service monitor that uses that script. This provides not only verification of the connection, but a degree of content verification.
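A probe script of the kind just described might look like the sketch below. The request string "PING" and expected response "PONG" are placeholders, since the text does not specify the custom protocol; substitute whatever your application's protocol actually defines. The success/failure exit convention is likewise an assumption for illustration.

```python
import socket

def probe(host, port=2468, timeout=5.0):
    """Connect to the custom service, send a protocol request, and
    verify that the response begins with the expected content."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(b"PING\r\n")              # protocol-defined request
            return s.recv(128).startswith(b"PONG")  # expected response
    except OSError:
        return False    # connect, send, or receive failed

# A real probe script would end with something like:
#   sys.exit(0 if probe("vh1") else 1)
```

Checking the response body, rather than only the connection, is what provides the extra degree of content verification mentioned above.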
• MX_SERVER=IP address The primary address of the server that calls the script. The address is specified in dotted decimal format.
• MX_TYPE=(SERVICE|DEVICE) Whether the script is for a service or device monitor.
• MX_VHOST=IP address The IP address of the virtual host. The address is specified in dotted decimal format. (Applies only to service monitors.)
• MX_PORT=Port or name The port or name of the service monitor. (Applies only to service monitors.)
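A custom script can branch on these environment variables, for example to behave differently for service and device monitors. The sketch below illustrates the idea; the IP addresses and port in the example values are made up.

```python
import os

def describe_monitor(env=os.environ):
    """Build a description of the monitor invoking this script,
    based on the MX_* environment variables documented above."""
    kind = env.get("MX_TYPE", "UNKNOWN")
    if kind == "SERVICE":
        # MX_VHOST and MX_PORT are set only for service monitors.
        return "service %s on vhost %s" % (env.get("MX_PORT"),
                                           env.get("MX_VHOST"))
    return "device monitor on server %s" % env.get("MX_SERVER")
```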
20 SAN Maintenance

The following information and procedures apply to SANs used with HP StorageWorks Clustered File System.

Server Access to the SAN

When a server is either added to the cluster or rebooted, HP Clustered File System needs to take some administrative actions to make the server a full member of the cluster with access to the shared filesystems on the SAN. During this time, the HP CFS Management Console reports the message “Joining cluster” for the server.
• Repeated I/O errors when the server tries to write to a PSFS journal. The server then loses access to the affected filesystem. When the disk experiencing the I/O errors is fixed, the server will automatically regain access to the filesystem.

The HP CFS Management Console typically displays an alert message when a server loses access to the SAN.
before making the partition table changes and then reenabling access afterwards. If you should later need to repartition a disk containing a membership partition, you will need to stop HP Clustered File System before you change the layout. While the cluster is stopped, you will not be able to access other disks in the cluster. You will also need to take one of the above steps to force the servers in the cluster to recognize the changes.
mxsanlk
This host: 10.10.30.3
This host’s SDMP administrator: 10.10.30.1

Membership Partition    SANlock State
--------------------    -------------
psd1p1                  held by SDMP administrator
psd2p1                  held by SDMP administrator
psd3p3                  held by SDMP administrator

Any of these messages can appear in the “SANlock State” column.
• held by SDMP administrator The SANlock was most recently held by the SDMP administrator of the cluster to which the host where mxsanlk was run belongs.
• trying to lock, not yet committed by owner The SANlock is either not held or has not yet been committed by its holder. The host on which mxsanlk was run is trying to acquire the SANlock.
• unlocked, trying to lock The SANlock does not appear to be held. The host on which mxsanlk was run is trying to acquire the SANlock.
• unlocked The SANlock does not appear to be held. If a host holds the SANlock, it has not yet committed its hold.
• locked (lock is corrupt, will repair) The host on which mxsanlk was run holds the lock. The SANlock was corrupted but will be repaired.

If a membership partition cannot be accessed, use the mx config mp set command or the mprepair utility to correct the problem. Depending on the status of the SDMP process, when you invoke mxsanlk you may see one of the following messages:

Checking for SDMP activity, please wait...
Still trying...

The SDMP is inactive at this host.
Online Operations

When HP Clustered File System is running, the Add, Repair, and Replace options on the Storage Settings tab and the mx config mp set and repair commands can be used only in the following circumstances:
• A disk containing a membership partition is out-of-service. Use the replace option or mx config mp set to move the partition to another disk.
• You need to move one or more membership partitions to different storage.
Membership Partition States

The Storage Settings tab reports the state of each membership partition. The possible states are:
• OK. The membership partition is functioning correctly.
• FENCED. The server has been fenced and cannot access the SAN. Start HP Clustered File System if it is not running or reboot the server.
• NOT_FOUND. HP Clustered File System cannot find the device containing the membership partition. Check the device for hardware problems.
Check the device for hardware problems. If the issue cannot be resolved, replace the membership partition.
• RESILVER. The membership partition is not up-to-date. HP Clustered File System will resilver the membership partition automatically. You can resilver the partition manually if desired.
• CORRUPT. The membership partition is not valid. Resilver the partition.
• CID_MISMATCH. The Cluster-ID is out-of-sync among the membership partitions and must be reset.
When you select a partition and click Replace, you will see a confirmation message describing the replace operation. A message also appears when the replace operation is complete.

NOTE: The Replace option on the Storage Settings tab is available only when HP Clustered File System is running.
All of the available partitions on that disk or LUN then appear in the bottom of the window. Select one of these partitions and click Add. (The minimum size for a membership partition is 1 GB.) Repeat this procedure to select one more membership partition. We recommend that the partitions be on different disks. When selecting partitions for use as membership partitions, be sure that they do not contain any needed data.
The mx config mp Commands

The mx config mp set and repair commands can be used while HP Clustered File System is either online or offline; however, only the operations listed under “Online Operations” on page 268 can be performed while HP Clustered File System is running. For other operations, HP Clustered File System must be offline on all nodes in the cluster.
--reuse Allow disks that contain existing volume information to be reused. (The existing data is destroyed.)

Repair a Membership Partition

This command resilvers the specified membership partition.

mx config mp repair <partition> [--reuse]

The --reuse option allows disks that contain existing volume information to be reused. (The existing data is destroyed.) This option is available only when the cluster is offline.
be in the Active state. The mprepair utility can be used to repair any problems if a failure causes servers to have inconsistent views of the membership partitions.
another SAN component. When the problem is repaired, the status should return to OK.

CORRUPT. The partition is not valid. You will need to resilver the partition. This step copies the membership data from a valid membership partition to the corrupted partition.

NOTE: The membership partition may have become corrupt because it was used by another application. Before resilvering, verify that it is okay to overwrite any existing data on the partition.

RESILVER.
Export Configuration Changes

When you change the membership partition configuration with mprepair, it updates the membership list on the local server. It also updates the lists on the disks containing the membership partitions specified in the local MP file. After making changes with mprepair, you will need to export the configuration to the other servers in the cluster.
Disk records:
Recid 1:   20:00:00:04:cf:13:33:12::0 psd1 (switch=fcswitch5)
Recid 258: 20:00:00:04:cf:13:3c:92::0 psd2 (switch=fcswitch5)

Host registry entries:
Host ID: 10.10.30.4 fencestatus=0
  SAN Loc:10:00:00:00:c9:2d:27:7d::0 idstatus=0
Host ID: 10.10.30.3 fencestatus=0
  SAN Loc:10:00:00:00:c9:2d:27:78::0 idstatus=0

Search the SAN for Membership Partitions.
The resilver operation synchronizes all other membership partitions and the local membership partition list.

Repair a Membership Partition. This command resilvers the specified membership partition.

mprepair --repair <device>:<partition> [--force]

<device>:<partition> indicates the membership partition to be resilvered. <device> is the UUID for the device and <partition> is the number of the partition on the device.
server in the cluster, you can use the following command to determine whether all membership partitions have a valid Cluster-ID.

mprepair --sync-clusterids

The command displays the Cluster-IDs found in each membership partition and flags those partitions containing an invalid ID. You can then specify whether you want the command to repair the partitions having a mismatched Cluster-ID.
8. Enable HP Clustered File System and the psd driver:
mxservice -install
psdcoinst -install
9. Reboot the server to return the psd driver to the driver stack.
10. When the system is rebooted, HP Clustered File System will still be disabled in the Windows Services Control Panel. Re-enable it for Automatic startup if desired.
11. Start HP Clustered File System (or wait until the next reboot).
cluster (for example, because the server has crashed) and you cannot reboot the server. Run the command from a server that is communicating with the cluster, not from the non-responsive server. If none of the servers are responsive, try to execute the command from a client using the Microsoft psexec utility.
• Be sure to verify that the server is physically down or physically disconnected from the shared storage before running the mx server markdown command. Filesystem corruption can occur if the server is not actually down and can access the shared storage.
• If the server is up but is physically disconnected from the shared storage when the mx server markdown command is run, the server must be rebooted before it is reconnected to shared storage.
Also consult your FC switch documentation or the FC switch vendor. If the switch appears to be operating properly, contact HP Support.

Storage

Online Insertion of New Storage

HP Clustered File System supports online insertion (OLI) of new storage, provided that OLI support is present for your combination of storage device, SAN fabric, HBA vendor-supplied device driver, and the associated HBA vendor-supplied libhbaapi.
Membership Partition Timeout

The membership partition timeout should be increased to 120 seconds (120000ms). This value is set in the registry. Complete the following steps:
1. Start regedit and navigate to the following registry key:
[HKEY_LOCAL_MACHINE\SOFTWARE\PolyServe\MatrixServer\mxservice\Started Processes\sanpulse]
2. Double-click ProgramArguments and, on the Edit String dialog, enter -o sdmp_io_timeout=120000 as the Value data.
Remove the comment character (#) at the beginning of the line and set the psd_timeout value to 180:

psd_timeout 180

Restart the Nodes

After changing the timeouts on all nodes, stop and restart HP Clustered File System on each node. The stop/restart can be performed on one node at a time.

After the Storage Capacity Upgrade

When the disk upgrade is complete, restore the original scl.conf file and remove the registry value that you added.
• The FC connectors must be reinserted in the same location on the new switch. For example, the FC connector that was plugged into port 1 on the original switch must be plugged into port 1 on the new switch.

If these conditions are not met, you will not be able to perform online replacement of the switch. Instead, you will need to stop the cluster, replace the switch, and use mxconfig to reconfigure the new switch into the cluster.
10. Clear any stale zone configuration on the new switch with the cfgClear command.
11. Save the clean configuration with the cfgSave command.
12. Configure the new switch. If you saved the original configuration with the configUpload command, use the configDownload command to restore it. Otherwise, use the configure command. (You may need to consult your site's SAN administrator or your Brocade representative for the correct configuration information.)
13.
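On a Brocade switch CLI, steps 10 through 12 might look like the following session sketch. The prompt shown is illustrative; configDownload prompts interactively for the FTP host, user name, and file name of the configuration previously saved with configUpload, and those values are site-specific.

```
switch:admin> cfgClear          (remove stale zoning from the replacement switch)
switch:admin> cfgSave           (commit the cleared configuration)
switch:admin> configDownload    (restore the configuration captured earlier
                                 with configUpload; answer the prompts with
                                 your site's FTP host, user, and file name)
```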
are available elsewhere but might conveniently be captured here. One way to record the information is to capture the output of a CLI session. The following commands show the type of data that might be useful:
• show ip ethernet for the IP address
• show switch for the fabric operating mode
• show zoning for the zone configuration
3. After the original switch has been powered down, power up the new switch and set the IP address to the old switch's address to allow EWS access.
21 Other Cluster Maintenance

This chapter describes how to perform the following activities:
• Collect HP Clustered File System log files with mxcollect
• Check the server configuration
• Disable a server for maintenance
• Troubleshoot a cluster
• Troubleshoot service and device monitors

Collect Log Files with mxcollect

The mxcollect utility collects error event logs that can be useful for diagnosing technical issues with HP Clustered File System.
You will then see a command window that says "Collecting files." The information collected from that node is written to the file:

mxcollect_machinename_yyyymmdd_hhmmss_default.zip

This file is placed in the folder %SystemDrive%\Program Files\Hewlett-Packard\HP Clustered File System\conf\mxcollect.

Upload mxcollect Files to HP Support

After running mxcollect, you can upload the resulting files to HP Support. Contact HP Support for more information.
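When locating a collection after the fact, the archive name can be reconstructed from the node name and the time of the run. The following is only an illustration of the naming pattern quoted above — mxcollect itself generates the name, and the timestamp format here is inferred from that pattern:

```shell
# Rebuild the expected mxcollect archive name for the current node.
# Pattern from the text: mxcollect_<machine>_<yyyymmdd>_<hhmmss>_default.zip
machine=$(hostname)
stamp=$(date +%Y%m%d_%H%M%S)
archive="mxcollect_${machine}_${stamp}_default.zip"
echo "$archive"
```

This is useful mainly for scripting a search of the conf\mxcollect folder for a particular node's collections.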
1. Disable the server. (Choose the server from the Servers window on the HP CFS Management Console, right-click, and select Disable.) This step causes the virtual host to fail over to a backup network interface on another server.
2.
HTTP server is monitored by an FTP monitor, the HTTP server is considered down. Also check the following:
1. Verify that the server is connected to the network.
2. Verify that the network devices and interfaces are properly configured on the server.
3. Ensure that the ClusterPulse process and the service monitor agent (monitor_agent) are running on the server.
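Step 3 can be spot-checked from a command prompt on the server. Note that the exact Windows service and process names are assumptions here, not values documented in this guide; confirm them against your installation before relying on the output:

```
rem Hypothetical check: confirm the cluster processes are present.
rem "ClusterPulse" and "monitor_agent" are assumed names.
tasklist | findstr /i "clusterpulse monitor_agent"
sc query ClusterPulse
```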
Troubleshoot Monitor Problems

You may encounter the following problems with service and device monitors.

Monitor Status

If the monitor status is not reported as Up, check the last error message string and the last event message string that monitor_agent returned to HP Clustered File System for any service or device monitor on any server in the cluster. The error or event message provides more status information.
expected status choices. This could occur if the Management Console is out of date and does not support the version of HP Clustered File System running on the server.

"Event" Status

The "Event" status is displayed when monitor_agent encounters an error while executing the probe, Start, Stop, or Recovery scripts. The status of the monitor may be "Up" even though an event has been reported.
transition for a monitor. This indicates an internal error and should be reported to HP Support. The event is written to the event log. To view the error, select the monitor on the Management Console, right-click, and select View Last Error.
• Starting
• Stopping
• Inactive
• Active

The activity status is not an error condition; it represents the activity of scripts associated with the monitor. However, if the activity status continues to have a value other than Active or Inactive, there may be a script problem that requires attention. Active status indicates that the probe script will be executed at the probe frequency.
A Management Console Icons

The Management Console uses the following icons.

HP Clustered File System Entities

The following icons represent the HP Clustered File System entities. If an entity is disabled, the color of the icon becomes less intense.
Additional icons are added to the entity icon to indicate the status of the entity. The following example shows the status icons for the server entity. The status icons are the same for all entities and have the following meanings.

Monitor Probe Status

The following icons indicate the status of service monitor and device monitor probes. If the monitor is disabled, the color of the icons is less intense.
On the Applications tab, virtual hosts and single-active monitors use the following icons to indicate the primary and backups. Multi-active monitors use the same icons but do not include the primary or backup indication.

Management Console Alerts

The Management Console uses the following icons to indicate the severity of the messages that appear in the Alert window.