HP StorageWorks Clustered File System 3.6
Legal and notice information

© Copyright 1999-2008 Hewlett-Packard Development Company, L.P.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 HP Technical Support
    HP Storage Website
    HP NAS Services Website
2 Quick Start Checklist
    Cluster Configuration Steps
3 Introduction to HP Clustered File System
    Product Features
Other Considerations
Tested Configuration Limits
Volume and Filesystem Limits
User Authentication
    Authentication Considerations
Start the Management Console
Move a Server to Another Cluster
HP Clustered File System License File
    Upgrade the License File
    Refresh the License File
Supported HP Clustered File System Features
Limit the Servers That Can Join a Cluster
8 Configure Dynamic Volumes
    Overview
    Basic and Dynamic Volumes
    Types of Dynamic Volumes
    Dynamic Volume Names
    Volume Signature
Volume Tab
Features Tab
Quotas Tab
View Filesystem Status from the Command Line
Extend a Basic Volume and Its Filesystem
Suspend a Filesystem for Backups
Export or Import Roles
Enable or Disable a Role
Modify a Role
Rename a Role
Delete a Role
The Applications Tab
Application States
Filter the Applications Display
Using the Applications Tab
“Drag and Drop” Operations
Menu Operations
Types of Device Monitors
Device Monitors and Failover
Device Monitor Activeness Policy
Add or Modify a Device Monitor
Advanced Settings for Device Monitors
Probe Severity
Server Cannot Be Fenced
Server Cannot Be Located
Online Insertion of New Storage
Online Replacement of a FibreChannel Switch
    Replace a Brocade FC Switch
    Replace a McDATA FC Switch
1 HP Technical Support

Telephone numbers for worldwide technical support are listed on the following HP website: http://www.hp.com/support. From this website, select the country of origin. For example, the North American technical support number is 800-633-3600.

NOTE: For continuous quality improvement, calls may be recorded or monitored.

HP NAS Services Website

The HP NAS Services site allows you to choose from convenient HP Care Pack Services packages or implement a custom support solution delivered by HP ProLiant Storage Server specialists and/or our certified service partners. For more information, see us at http://www.hp.com/hps/storage/ns_nas.html. For the latest documentation, go to http://www.hp.com/support/manuals.
2 Quick Start Checklist

The following checklist is intended for new installations of HP Clustered File System and includes typical steps to configure the cluster.

Cluster Configuration Steps

The following checklist assumes that the installation and configuration steps described in the HP StorageWorks Clustered File System Installation Guide or Setup Guide, depending on your product, have been completed.

• Review administrative considerations and restrictions.
• Create dynamic volumes. Dynamic volumes can include multiple disks and are used for PSFS filesystems. See “Create a Dynamic Volume” on page 78.
• Create PSFS filesystems. Select the dynamic volume to be used for the filesystem and configure the appropriate options such as block size and disk quotas. See “Create a Filesystem” on page 99.

Prepare for cluster security:

• Create administrative roles (optional). Create roles that allow or deny permission to perform cluster operations and assign users and groups to the roles. See “Role-Based Security” on page 135.
• Review the audit log feature. HP Clustered File System provides an audit trail of operations that change the configuration or state of the cluster. See “HP Clustered File System Audit Trail” on page 148.

Configure application monitoring as necessary:

• Configure virtual hosts. Virtual hosts provide failover protection for servers and network services. If you will be monitoring other applications, create virtual hosts as necessary. See “Add or Modify a Virtual Host” on page 180.
• Configure service monitors. HP Clustered File System provides built-in service monitors such as HTTP and TCP and also allows you to create your own custom monitors.
3 Introduction to HP Clustered File System

HP StorageWorks Clustered File System provides a cluster structure for managing a group of network servers and a Storage Area Network (SAN) as a single entity.

Product Features

HP Clustered File System includes the following features:

• Fully distributed data-sharing environment. The PSFS filesystem enables all servers in the cluster to directly access shared data stored on a SAN.
• Cluster-wide administration. The HP CFS Management Console (a Java-based graphical user interface) and the corresponding command-line interface enable you to configure and manage the entire cluster either remotely or from any server in the cluster.
• Failover support for network applications.

The cluster includes these components:

Servers. Each server must be running HP Clustered File System.

Public LANs. A cluster can include up to four network interfaces per server. Each network interface can be configured to support multiple virtual hosts, which provide failover protection for Web, e-mail, file transfer, and other TCP/IP-based applications.

Administrative Network.

writes to a PSFS filesystem automatically obtain the appropriate locks from the DLM, ensuring filesystem coherency.

grpcommd. Manages HP Clustered File System group communications across the cluster.

mxds. Manages the mxds datastore.

mxlogd. Manages global error and event messages. The messages are written to the HP Clustered File System event log on each server.

PanPulse.

Volume Manager

The HP Clustered File System Volume Manager can be used to create dynamic volumes consisting of disk partitions that have been imported into the cluster. Dynamic volumes can be configured to use either concatenation or striping. A single PSFS filesystem can be placed on a dynamic volume. The Volume Manager can also be used to extend a dynamic volume and the filesystem located on that volume.

HP Clustered File System Databases

HP Clustered File System uses the following databases to store cluster information:

• Shared Memory Data Store (SMDS). The SANPulse process stores filesystem status information in this database. The database consists of sp_status files that are located in %SystemDrive%\Program Files\Hewlett-Packard\HP Clustered File System\conf on each server. These files should not be changed.
• Device database.

If any of these health checks fail, HP Clustered File System can transfer the virtual host to a backup server and the network traffic will continue. After creating virtual hosts, you will need to configure your network applications to recognize them. When clients want to access a network application, they use the virtual host address instead of the address of the server where the application is running.
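The failover behavior described above can be sketched as follows. This is an illustrative model only, not HP Clustered File System source code: the virtual host stays on its primary server while that server passes its health checks, and otherwise moves to the first healthy backup in configuration order.

```python
# Hypothetical sketch of virtual-host failover selection (illustrative only).
def failover_target(primary, backups, healthy):
    """Return the server that should host the virtual host.

    primary: name of the primary server
    backups: backup servers in configuration order
    healthy: set of servers currently passing health checks
    """
    for server in [primary] + list(backups):
        if server in healthy:
            return server
    return None  # no eligible server remains; the virtual host goes down

# Example: the primary "s1" has failed its health checks, so the first
# healthy backup is selected to take over the virtual host address.
target = failover_target("s1", ["s2", "s3"], healthy={"s2", "s3"})
```

Because clients address the virtual host rather than a physical server, this reassignment is transparent to them.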
SNMP Service

The HP Clustered File System SNMP service provides tools that can be used to retrieve all cluster-wide state and status information. The service includes the following:

• An SNMP extension agent that is used by the Microsoft SNMP service. (The Microsoft SNMP service must be installed and configured on the servers in the cluster.)
• MIBs that can be loaded into the MIB browser tool provided with your Network Management Station (NMS).

Supported Configurations

HP Clustered File System supports multiple FibreChannel switches configured as a single fabric and multiported SAN disks. iSCSI arrays are also supported. The following diagrams show some sample cluster configurations using these components.

Single FC Port, Single FC Switch, Single Fabric

This is the simplest configuration. Each server has a single FibreChannel port connected to the FibreChannel switch.

Single FC Port, Dual FC Switch, Single Fabric

In this example, the fabric includes two FibreChannel switches. Servers 1–3 are connected to the first FC switch; servers 4–6 are connected to the second switch. The FC switches are connected to two RAID arrays, which contain multiported disks. If a switch fails, the servers connected to the other switch will survive and access to storage will be maintained.

iSCSI Configuration

This example shows an iSCSI configuration. The Microsoft iSCSI initiator is installed on each server. Ideally, a separate network should be used for connections to the iSCSI storage arrays.
4 Cluster Administration

HP StorageWorks Clustered File System can be administered either with the HP CFS Management Console or from the command line.

Administrative Considerations and Restrictions

You should be aware of the following when managing HP Clustered File System.

Network Hostname Resolution

Normal operation of the cluster depends on a reliable network hostname resolution service. If the hostname lookup facility becomes unreliable, this can cause reliability problems for the running cluster.

• If one of these hostnames has already been referenced unsuccessfully, the DNS resolver cache may need to be flushed with “ipconfig /flushdns” (see Microsoft Knowledge Base article 320845).
• Certain Microsoft Knowledge Base articles caution that in the case of Exchange SMTP, and possibly other applications, the use of the hosts file can interfere with mail flow (see Microsoft Knowledge Base article 296215).
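Because cluster operation depends on reliable name resolution, it can be worth verifying that every cluster hostname resolves before relying on it. The sketch below is illustrative only (it is not an HP Clustered File System tool) and uses the standard resolver of whatever host it runs on:

```python
# Illustrative check: report cluster hostnames that fail to resolve.
import socket

def resolves(hostname):
    """Return True if the local resolver can turn the name into an address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; add your cluster's server names here.
cluster_names = ["localhost"]
unresolved = [h for h in cluster_names if not resolves(h)]
```

Any names reported in `unresolved` should be fixed in DNS (or the hosts file, subject to the cautions above) before they cause problems for the running cluster.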
may result in filesystem corruption. For example, the connection must not be moved from one switch port to another, and a new FibreChannel connection for the server must not be established while HP Clustered File System is running on the server.

• If servers from multiple clusters can access the SAN via a shared FC fabric, avoid importing the same disk into more than one cluster.
• Active Directory users and groups should be used in filesystem ACLs. Do not use local users and groups because they are meaningless to other nodes in the cluster.
• HP Clustered File System nodes should not be used as domain controllers because the two services will compete for resources, resulting in decreased performance.
• The DNS servers used by Active Directory and HP Clustered File System should not reside on HP Clustered File System nodes.

Tested Configuration Limits

HP has tested HP Clustered File System configurations up to the following limits:

• 16 servers per cluster
• 256 imported LUNs per cluster for FC fabric configurations; for iSCSI configurations, the maximum number of connections for the iSCSI initiator
• 128 filesystems per cluster on 32-bit systems; 256 filesystems on 64-bit systems
• 2048 filesystem mounts per cluster
• 128 virtual hosts per cluster
• 128 service and/or device monitors per cluster

Volume and Filesystem Limits

files are created, the upper bound is similar to the maximum block count, which is about 2^32.

User Authentication

HP Clustered File System can be managed via the HP Management Console GUI or from the command line. The HP Clustered File System mx command provides command-line equivalents of Management Console operations. HP Clustered File System also provides other commands to perform various cluster operations.
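A block count bounded near 2^32 means the practical size ceiling scales with the block size chosen when the filesystem is created. The back-of-the-envelope sketch below assumes size = block size × block count; it is an illustration of the arithmetic, not an official sizing formula:

```python
# Back-of-the-envelope sketch: maximum filesystem size for a given block
# size, assuming the block count is bounded at about 2**32 as stated above.
MAX_BLOCKS = 2 ** 32

def max_fs_bytes(block_size):
    """Approximate upper bound on filesystem size in bytes."""
    return block_size * MAX_BLOCKS

# With 4 KB blocks the bound works out to 2**44 bytes, i.e. about 16 TB.
limit = max_fs_bytes(4096)
```

Doubling the block size doubles the bound, at the cost of coarser allocation granularity.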
Authentication Considerations

You should be aware of the following recommendations and guidelines:

• We recommend that single sign-on be used to authenticate users. When users connect to the HP Management Console, they can use the “As User” feature to log in as another user if necessary. On the command line, the Windows runas command can be used to become an administrative user before running the HP Management Console or cluster commands.

Connect to: Type a cluster or server name or select a name from the dropdown list. When you connect to a server or cluster, it is added to the dropdown list. Click the Clear History button to delete the list. (Saved bookmarks will remain.)

By default, the Connect window logs you onto the cluster using your OS user credentials. If you want to log on as another user, click the “As User” button.

User: Type the name of the user who will be accessing the cluster.

Password: Type the user’s password. If you do not want to be prompted for the password again, click the “Remember this password” checkbox. (For the password to be saved, you will also need to create a bookmark.)

Add to bookmarks: Click this checkbox to create a bookmark for this connection.
Manage Bookmarks

The Bookmarks display lists the cluster connections that are configured in the .matrixrc file. Click the Bookmarks button on the HP Clustered File System Connect window to display the current list of bookmarks and the available options. You can connect to any of the servers or clusters in the list. Double-click on the server or cluster, or select it and then click on either Connect or Configure. The bookmark options are:

• Add.
• Set Default. If you set a server as the default, HP Clustered File System will first attempt to use that server to connect to the cluster. If the server is not available, HP Clustered File System will start at the top of the list of servers and attempt to connect to them in turn until it reaches an available server. If you have several clusters in the Bookmarks list, you can set one of them to be the default for connections when the HP Management Console is started.

Update an Existing .matrixrc File to Use New Features

If your .matrixrc file was used in an HP Clustered File System release earlier than 3.4.0 and has single servers configured, you will need to create a bookmark entry for the cluster in order to use the “synchronize bookmarks” feature. To do this, take one of these steps:

• Click the Add button on the HP Clustered File System Connect window to add a new bookmark.

machine. The local machine then uses the software from the cache whenever possible. When you invoke the HP Management Console or mx commands, by default the application checks the current software version on the server to which it is being connected and then downloads the software only if that version is not already in the local cache.
Console Icons” on page 269 describes the icons used to represent cluster entries and their status. The tabs on the Console window show different views of the cluster.

Servers Tab

This tab lists the entire configuration of each server configured in the cluster, including the network interfaces on the server, any virtual hosts associated with those interfaces, any device monitors created on the server, and any PSFS filesystems mounted on the server.

Virtual Hosts Tab

The Virtual Hosts tab shows all virtual hosts in the cluster. For each virtual host, the window lists the network interfaces on which the virtual host is configured, any service monitors configured on that virtual host, and any device monitors associated with that virtual host.

Applications Tab

This view shows the application monitors configured in the cluster and provides the ability to manage and monitor them from a single screen. The tab uses a table format, with a column for each server in the cluster. The application monitors appear in the rows of the table. You can reorder the information on this tab or limit the information that is displayed.

Filesystems Tab

The Filesystems tab shows all PSFS filesystems in the cluster.

Cluster Alerts

The Alerts section at the bottom of the HP CFS Management Console window lists errors that have occurred in cluster operations. Double-click on an alert message to see all of the information about the alert. For alerts affecting cluster components such as servers or monitors, you can double-click in the Source column to highlight the source of the error on the main Management Console window.

If you receive an alert telling you to reboot a server, the message will remain in the Alerts section until either HP Clustered File System is restarted on the rebooted server or the server is removed from the cluster. To view the current Alerts from the command line, use the mx alert status command.

HP Clustered File System Operations

Many HP Clustered File System operations can be run in the background.
Use the following command at the Command Prompt to see the software installed on specific servers:

mx server listsoftware

If the operating system uses the 64-bit architecture, x64 will be specified in the output. Otherwise, the architecture is assumed to be 32-bit.

Start HP Clustered File System

By default, HP Clustered File System starts automatically when the system is booted. This feature is controlled by the Startup dialog.

When HP Clustered File System is started, the mxcheck utility is run to verify that the server meets the configuration requirements needed for HP Clustered File System. Output from the utility appears on the screen and is also written to the Application Log section of the Event Viewer.

Stop HP Clustered File System

Before stopping HP Clustered File System, be sure to shut down all applications that are accessing PSFS filesystems.

NOTE: Be careful not to accidentally back up PSFS filesystems multiple times by using both your site-defined drive letter/mount point assignments and the reserved mount points created by HP Clustered File System. Filter out the reserved mount points from backup jobs, and instead use your own site-defined assignments.

If the -X option was used to back up the mxds datastore, use the mpimport -X command to restore the datastore to the membership partitions. If the -x option was used for the backup, use mpimport -x to restore the datastore. To restore the device database and volume database to the membership partitions, use the mpimport -f command. The input file is typically conf\MP.backup.

Port | Transport Type | Description
9050 | TCP | Proprietary connection from the Management Console (configurable, as described below)
9070 | TCP | HTTP connection from the Management Console (fixed; IANA registration has been applied for)
9071 | TCP | HTTPS connection from the Management Console (fixed; IANA registration has been applied for)

Internal Network Port Numbers

The following network port numbers are used for internal, server-to-server communication.
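When diagnosing Management Console connectivity, a quick TCP reachability check of the console ports listed above can rule out firewall problems. The sketch below is illustrative only (it is not an HP Clustered File System utility), and the host name passed in is a placeholder for one of your own servers:

```python
# Illustrative sketch: test whether the Management Console ports in the
# table above accept TCP connections on a given server.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port numbers taken from the table above.
CONSOLE_PORTS = [9050, 9070, 9071]
# Example usage (host name is a placeholder for one of your servers):
# status = {p: port_open("cluster-server", p) for p in CONSOLE_PORTS}
```

A port that reports closed from a management workstation but open locally on the server usually points at an intervening firewall rule.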
5 Configure Servers

Before adding a server to a cluster, verify the following:

• The server is connected to the SAN if it will be accessing PSFS filesystems.
• The server is configured as a fully networked host supporting the services to be monitored. For example, if you want HP Clustered File System to provide failover protection for your Web service, the appropriate Web server software must be installed and configured on the servers.

2. Start the HP Management Console on one node (select Start > Programs >). On the HP Clustered File System Connect window, specify the server, click the Connect button, and select Configure. If you are prompted for the user name and password, specify the appropriate values.

3. Select the Storage Configuration tab on the Configure Cluster window. In the SAN Switches section of the tab, click the Add button to configure the new switch.

4.

a. Start the HP Management Console on the new server (select Start > Programs >). On the HP Clustered File System Connect window, specify the server, click the Connect button, and select Configure. If you are prompted for the user name and password, specify the appropriate values.

b. When the Configure Cluster window appears, click Import. Then, on the Import window, type the IP address or DNS name of the server from which you want to import the configuration.

4.

• Use the HP Management Console to change drive letter assignments. Note that the change will take place on all nodes and may affect applications.
• Use Windows Disk Manager to change the assignments. If you are using Windows 2000 Terminal Services to make the change, you will need to log out and then log back in before you can use the reassigned drive letters.
Server Severity to determine whether it is possible to fail back virtual hosts to that server automatically. ClusterPulse also considers each virtual host’s failback policy, which specifies whether it should fail back or remain on the backup server. (See “Virtual Hosts and Failover” on page 185 for more information.)

The Server Severity can be configured on each server. The settings are:

AUTORECOVER. This is the default value for Server Severity.

• If the server is hosting other HP Clustered File System applications, disable the server and wait for the applications to move to other servers.

Servers can be deleted on the Cluster-Wide Configuration tab on the Configure Cluster window. Select the server and then click Remove Server. To delete servers from the command line, use this command:

mx server delete ...

2. Change the IP address of server S2. We will now identify the server as S2a.

3. Start HP Clustered File System on server S2a. The server joins the cluster, which now consists of servers S1, S2, S3, and S2a. Server S2 is down and S1, S2a, and S3 are up.

4. Delete server S2 from the cluster. This step will remove references to the server.

5. Update virtual hosts and any other cluster entities that used server S2 to now include S2a.

3. Select the server in the Address column and then click Export. The Last Operation Progress column will display status messages as the configuration is exported to the server.

4. Start HP Clustered File System on the server. The server will still be selected in the Address column. Click Start Service to start HP Clustered File System. A status message will appear in the Last Operation Progress column.

Upgrade One Server and Export

This procedure requires that HP Clustered File System be stopped on all servers. Execute the procedure on one server in the cluster.

1. On one server, start the Management Console. Enter the IP address of the server on the HP Clustered File System Connect window, and click the Configure button.

NOTE: If there is a .matrixrc file on the system running mxconsole, you will see a Disconnect dialog instead of the Connection Parameters window.
Supported HP Clustered File System Features

HP Clustered File System provides device monitors, service monitors, and notifiers. The license agreement for each server determines which features are supported on that server. You can use the Display Features option on the HP CFS Management Console to determine the supported features for a particular server. Select the server on the Servers window, right-click, and select View Features.

Migrate Existing Servers to HP Clustered File System

In HP Clustered File System, the names of your servers should be different from the names of the virtual hosts they support. A virtual host can then respond regardless of the state of any one of the servers. In some cases, the name of an existing server may have been published as a network host before HP Clustered File System was configured.

HP Clustered File System provides failover protection for this configuration. Without HP Clustered File System, requests are simply alternated between the servers. If a server goes down, requests to that server do not connect. To configure for round-robin load balancing with HP Clustered File System, you define virtual hosts as addresses in the A records on the DNS. Then use HP Clustered File System to associate primary and backup servers with that virtual host.

The DNS server is configured for round robin using the following A records:

Address | Time to Live | Record Service | Type | IP Address
www.acmd.com. | 60 | IN | A | 10.1.1.1
www.acmd.com. | 60 | IN | A | 10.1.1.2

Address: The virtual hostnames that customers use to send requests to your site. (The period following the “.com” in the address is required.)

Time to Live: The number of seconds an address can be cached by intermediate DNS servers for load balancing.
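The round-robin behavior produced by those two A records can be modeled as a simple rotation: the DNS server hands out the addresses in turn, so successive lookups alternate between the virtual-host addresses. This sketch is illustrative only and uses the addresses from the table above:

```python
# Sketch of round-robin DNS behavior (illustrative only): successive
# lookups rotate through the A records configured for the name.
from itertools import cycle

A_RECORDS = ["10.1.1.1", "10.1.1.2"]
rotation = cycle(A_RECORDS)

# Four successive lookups alternate between the two addresses.
answers = [next(rotation) for _ in range(4)]
```

Because each address is a virtual host rather than a physical server, HP Clustered File System can move a failed address to a backup server, so clients whose cached lookup points at the failed address still connect.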
6 Configure Network Interfaces

When you add a server to the cluster, HP Clustered File System determines whether each network interface on that server meets the following conditions:

• The network interface is up and running.
• The network interface is multicast-capable.
• 802.3x Ethernet flow control is not used.
• Each network interface card (NIC) is on a separate network.

Network interfaces meeting these conditions are automatically configured into the cluster.

specify the networks that you prefer to use for the administrative traffic. For performance reasons, we recommend that these networks be isolated from the networks used by external clients to access the cluster.

When HP Clustered File System is started, the PanPulse process selects the administrative network from the available networks. When a new server joins the cluster, the PanPulse process on that server tries to use the established administrative network.

configuration file, the Servers window may not match your current network configuration exactly.)

Each network interface is labeled “Hosting Enabled” or “Hosting Disabled,” which indicates whether it can be used for virtual hosts. The Management Console uses the following icons to represent the status of each network interface.

The network interface allows administrative traffic. A green checkmark indicates the current administrative network.

When the PanPulse process locates another network that all servers in the cluster can access, all of the servers fail over the administrative network to that network. The process looks for another network in this order:

• Networks that allow administrative traffic.
• Networks that discourage administrative traffic.
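The selection order above can be sketched as a two-tier preference. This is a hypothetical model of the policy (not PanPulse source code): networks that allow administrative traffic are preferred, networks that merely discourage it are a fallback, and excluded networks are never chosen.

```python
# Hypothetical sketch of the administrative-network selection order
# described above (illustrative only, not PanPulse source code).
def pick_admin_network(networks):
    """networks: list of (name, policy) pairs, where policy is one of
    "allow", "discourage", or "exclude". Returns the chosen network name."""
    for wanted in ("allow", "discourage"):
        for name, policy in networks:
            if policy == wanted:
                return name
    return None  # no usable network: excluded networks are never selected

# "lan2" wins because it allows administrative traffic, even though
# "lan1" (discourage) appears earlier in the list.
net = pick_admin_network([("lan0", "exclude"), ("lan1", "discourage"),
                          ("lan2", "allow")])
```

In practice the real selection also requires that every server in the cluster can reach the candidate network, as noted above.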
To allow or discourage administrative traffic on a network interface, select that network interface on the Servers window, right-click, and then select either “Allow Admin. Traffic,” “Discourage Admin. Traffic,” or “Exclude Admin. Traffic” as appropriate. The setting is applied to all interfaces within the same subnet on all servers of the cluster. On the command line, issue the appropriate mx netif command:

mx netif allowadmintraffic ...

• To add a network interface, select the server for that interface on the Servers window, right-click, and select Add Network Interface.
• To modify an existing network interface, select that interface, right-click, and select Properties. The network interface must be down; you cannot modify an “up” network interface.

Server: The name or IP address of the server that will include the new network interface.

IP: Type the IP address for the network interface.

mx netif update --netmask [--adminTraffic ]

Remove a Network Interface

This option can be useful when performing off-line configuration of a server. To remove a network interface, select that interface on the Servers window, right-click, and then select Delete. You cannot delete a network interface that is up.
7 Configure the SAN SAN configuration includes the following: • Import SAN disks into the cluster. • Deport SAN disks from the cluster. • Display information about SAN disks. Overview SAN Configuration Requirements Be sure that your SAN configuration meets the requirements specified in the HP StorageWorks Clustered File System Installation Guide or Setup Guide, depending on your product. Storage Control Layer Module The Storage Control Layer (SCL) module manages shared SAN devices.
As part of managing shared SAN devices, the SCL also gives each disk a globally unique device identifier that all servers in the cluster use to access the device. Although the identifiers (such as psd2 or psd2p6) appear on certain HP CFS Management Console windows, they are generally only needed for internal use by HP Clustered File System.
may have created dynamic volumes using those higher-numbered partitions. The higher-numbered partitions will continue to work correctly; however, you should be aware of the following: • A new volume cannot include subdevices having partition numbers above 31. Existing volumes cannot be extended to include the higher-numbered partitions. • You will not be able to take a hardware snapshot of partitions with numbers above 31.
I/O operations. The exact alignment characteristics vary by manufacturer and model; consult your storage vendor for alignment recommendations. This issue occurs because the Windows partition table causes space to be reserved at the start of the LUN, which can cause a misalignment with the array’s storage.
Containing More Than 31 Partitions” on page 63 for more information.) • Disks containing an active membership partition can be imported; however, that partition cannot be used for a filesystem. Before importing the disk, you can run mprepair to inactivate the membership partition (see “The mprepair Utility” on page 247). You will then be able to use the partition when you import the disk into the cluster.
To determine the uuid for a disk, run the following command, which prints the uuid, the size, and a vendor string for each unimported SAN disk. mx disk status You can also use the Disk Info window to import a disk. Deport SAN Disks Deporting a disk removes it from cluster control. You cannot deport a disk that contains a membership partition. To deport a disk from the HP CFS Management Console, select Storage > Disk > Deport or click the Deport icon on the toolbar.
Local Disk Information The Disk Info window displays disk information from the viewpoint of the local server. It can be used to match the disk names appearing in the Microsoft Disk Management utility (the Local Name) with the disk names that HP Clustered File System uses (the PSD Name). You can also use this window to import or deport SAN disks.
NOTE: Because the first partition on GPT disks cannot be used by HP Clustered File System, that partition is skipped when HP Clustered File System assigns device identifiers to the partitions. The first identifier, psdXp1, is assigned to partition 2, the second identifier, psdXp2, is assigned to partition 3, and so on.
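The identifier skew in the note above can be expressed as a small mapping function. This sketch is illustrative only; the helper name is hypothetical and the product computes these names internally.

```python
# Sketch of the GPT identifier assignment described above: the first GPT
# partition is skipped, so identifier psdXpN corresponds to on-disk
# partition N+1. For non-GPT disks the numbering is assumed one-to-one.

def psd_identifier(disk, partition, gpt=True):
    """Return the psdXpN device name for an on-disk partition number."""
    index = partition - 1 if gpt else partition
    if index < 1:
        raise ValueError("this partition is not assigned an identifier")
    return f"psd{disk}p{index}"

print(psd_identifier(1, 2))  # GPT partition 2 receives the first identifier
print(psd_identifier(1, 3))  # GPT partition 3 receives the second identifier
```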
The window shows the following information for each PSFS filesystem: • The label assigned to the filesystem. • The mount point or drive letter assigned to the filesystem. Click in the cell to see the mount point/drive letter for each server on which the filesystem is configured. • The volume used for the filesystem. Click in the cell to see the properties for the filesystem. • The number of CIFS shares.
The options are:
-i Display information for imported disks (the default).
-u Display information for unimported disks.
-v Display available volumes.
-f Display PSFS filesystem volumes.
-a Display all information; for -v, display all known volumes.
-l Additionally display host-local device name.
-r Additionally display local device route information.
-U Display output in the format used by the HP Management Console.
Show Local Device Information The -l option displays the local device name for each disk, as well as the default disk information. When combined with -u, it displays local device names for unimported disks. sandiskinfo -al Disk: \\.\Global\psd1 (Membership Disk) Uid: 20:00:00:04:cf:13:38:18::0 SAN info: fcswitch5:7 Vendor: SEAGATE Capacity: 34733M Local Device Paths: \\.
Disk=20:00:00:04:cf:13:38:18::0 partition=08 type=(unknown) Volume: \\.\Global\psd2p4 Size: 9220M Disk=20:00:00:04:cf:13:38:3a::0 partition=04 type=(unknown) When combined with -a, the -v option lists all volumes, including those used for PSFS filesystems and membership partitions. Options for Dynamic Volumes The following sandiskinfo options apply only to dynamic volumes.
Dynamic Volume: psv2 Size: 490M Stripe=32K/optimal Subdevice: 20:00:00:04:cf:13:38:18::0/7 Size: 490M psd1p7 Dynamic Volume: psv3 Size: 490M Stripe=8K/optimal Subdevice: 20:00:00:04:cf:13:38:18::0/10 Size: 490M psd1p10 Display Unimported Dynamic Volumes The following options can be used to display information about unimported dynamic volumes: --unimported-volumes Lists dynamic volumes that are currently unimported.
8 Configure Dynamic Volumes If you have purchased the separate license, you can use the CFS Volume Manager included with HP Clustered File System to create, extend, recreate, or delete dynamic volumes. Dynamic volumes allow large filesystems to span multiple disks, LUNs, or storage arrays. Dynamic volumes can be deported from the cluster and later imported back into the original cluster or into another cluster. Overview Basic and Dynamic Volumes Volumes are used to store PSFS filesystems.
Types of Dynamic Volumes HP Clustered File System supports two types of dynamic volumes: striped and concatenated. The volume type determines how data is written to the volume. • Striping. When a dynamic volume is created with striping enabled, a specific amount of data (called the stripe size) is written to each subdevice in turn. For example, a dynamic volume could include three subdevices and a stripe size of 64 KB.
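The round-robin placement described above can be sketched with simple arithmetic: with stripe size S and k subdevices, the byte at offset B lands on subdevice (B ÷ S) mod k. This is an illustrative model of striping in general, not the product's internal layout code.

```python
# Sketch of striped data placement using the example above: three
# subdevices and a 64 KB stripe size. Each 64 KB chunk is written to the
# next subdevice in turn.

STRIPE = 64 * 1024      # 64 KB stripe size
SUBDEVICES = 3          # number of subdevices in the volume

def subdevice_for_offset(offset, stripe=STRIPE, ndevs=SUBDEVICES):
    """Return the index of the subdevice holding the given byte offset."""
    return (offset // stripe) % ndevs

# The first 64 KB goes to subdevice 0, the next to subdevice 1, and so on,
# wrapping back to subdevice 0 after the third chunk.
print([subdevice_for_offset(i * STRIPE) for i in range(5)])  # [0, 1, 2, 0, 1]
```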
Destroying a dynamic volume removes the volume signature from each subdevice associated with the volume, freeing the subdevices for use in other dynamic volumes or filesystems. Configuration Limits The configuration limits for dynamic volumes are as follows: • A maximum size of 16 TB for a dynamic volume. • A maximum of 128 dynamic volumes per cluster. • A maximum of 128 subdevices per dynamic volume.
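The limits listed above can be checked up front when planning a volume. The following sketch validates a proposed set of subdevices against the per-volume limits; the function name is illustrative and this is not a product API.

```python
# Hedged sketch that checks a proposed dynamic volume against the documented
# per-volume limits: at most 16 TB total size and at most 128 subdevices.

MAX_VOLUME_BYTES = 16 * 1024**4   # 16 TB
MAX_SUBDEVICES = 128

def validate_volume(subdevice_sizes):
    """subdevice_sizes: list of subdevice sizes in bytes."""
    if len(subdevice_sizes) > MAX_SUBDEVICES:
        return "too many subdevices"
    if sum(subdevice_sizes) > MAX_VOLUME_BYTES:
        return "volume exceeds 16 TB"
    return "ok"

print(validate_volume([1024**4] * 8))    # 8 x 1 TB subdevices -> ok
print(validate_volume([1024**4] * 17))   # 17 TB total -> volume exceeds 16 TB
```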
Create a Dynamic Volume When you create a dynamic volume, you will need to select the subdevices to be included in the volume. If the volume will be striped, you will also need to select a stripe size. Optionally, HP Clustered File System can also create a filesystem that will be placed on the dynamic volume.
If you are creating a filesystem, you can also set various filesystem options. Click the Options button to see the Filesystem Options dialog, which allows you to select the block size for the filesystem and to configure quotas. (See “Filesystem Options” on page 101 for details about this dialog.) Available Subdevices: The display includes all imported subdevices that are not currently in use by another imported volume and that do not have a filesystem in place.
To create a dynamic volume from the command line, use this command. You can use either spaces or commas to separate the subdevice names. mx dynvolume create [--stripesize <4KB-64MB>] The following command lists the available subdevices: mx dynvolume showcreateopt Dynamic Volume Properties To see the configuration for a dynamic volume, select Storage > Dynamic Volume > Volume Properties and then choose the volume that you want to view.
The Stripe State reported in the “Dynamic Volume Properties” section will be one of the following: • Unstriped. The volume is concatenated and striping is not in effect. • Optimal. The volume has only one stripeset that includes all subdevices. Each subdevice is written to in turn. • Suboptimal. The volume has been extended and includes more than one stripeset. The subdevices in the first stripeset will be completely filled before writes to the next stripeset begin.
View Stripeset Information To see the contents of a stripeset, run mpdump.exe with no options from the Command Prompt. The command is in the directory Program Files\Hewlett-Packard\HP Clustered File System\bin on the drive where you installed HP Clustered File System. Following is some sample output. Current Product MP Version: 2 Membership Partition Version: 2 Membership Partitions: 10:00:00:50:13:b3:41:66::63/2 (ONLINE) . . .
To extend a dynamic volume on the Management Console, select Storage > Dynamic Volume > Extend Volume and then choose the volume that you want to extend. If a filesystem is on the volume, the Extend Dynamic Volume window shows information for both the dynamic volume and the filesystem. Dynamic Volume Properties: The current properties of this dynamic volume. Filesystem Properties: The properties for the filesystem located on this dynamic volume.
imported disks, including subdevices belonging to unimported volumes. Use the arrow keys to reorder those subdevices if necessary. Extend Filesystem: To increase the size of the filesystem to match the size of the extended volume, click this checkbox. When you click OK, the dynamic volume will be extended. NOTE: If you selected a subdevice that is associated with an unimported volume, you will see a message reporting that the subdevice contains a volume signature.
To delete a dynamic volume from the command line, use the following command: mx dynvolume destroy Recreate a Dynamic Volume Occasionally you may want to recreate a dynamic volume. For example, you might want to implement striping on a concatenated volume or, if a striped dynamic volume has been extended, you might want to recreate the volume to place all of the subdevices in the same stripeset.
You can change or reorder the subdevices used for the volume and enable striping if desired. To recreate a volume from the command line, you will first need to use the dynvolume destroy command and then run the dynvolume create command.
to a dynamic volume. The new dynamic volume will contain only the original subdevice; you can use the Extend Volume option to add other subdevices to the dynamic volume. NOTE: The new dynamic volume is unstriped. It is not possible to add striping to a converted dynamic volume. If you want to use striping, you will need to recreate the volume.
Dynamic Volume Recovery The Dynamic Volume Recovery feature provides the ability to rebuild a dynamic volume from the LUNs originally in the volume. This feature can be used for purposes such as the following: • Move dynamic volumes from one cluster to another. Deport the dynamic volumes on the original cluster and then import them on the new cluster. • Recover dynamic volumes from mirrored LUNs for disaster recovery purposes.
Select the dynamic volumes that you want to deport and click the Deport icon in the toolbar. To deport dynamic volumes from the command line, use this command: mx dynvolume deport ... Import a Dynamic Volume When a dynamic volume is imported, the unimported LUNs associated with the volume will be imported and the psv binding, which HP Clustered File System uses to control access to the dynamic volume, will be created.
Select the dynamic volumes that you want to import and click the Import icon in the toolbar. To import dynamic volumes from the command line, first use the following command to list the dynamic volumes that can be imported: mx dynvolume list --importable Locate the entry for the volume that you want to import; the volume name appears in the first column of the output. Then use the following command to import the volume, specifying the volume name.
Duplicate. The volume cannot be reassembled because more than one physical device matched a logical subdevice specification. Potential causes of this problem are: • Both sides of a mirror were exposed (that is, lunmasked) to the cluster. • One of the devices is a snapclone of the other. • One of the devices is a disk copy or block-level backup/copy of the other. Truncated.
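The Duplicate case above amounts to a uniqueness check when matching logical subdevice identifiers against the physical devices visible to the cluster. The sketch below illustrates that check; the function, state names, and UUID strings are hypothetical, not the product's actual recovery logic.

```python
# Illustrative sketch of the reassembly check described above: a volume can
# be rebuilt only if each logical subdevice UUID matches exactly one visible
# physical device. Two matches (e.g. both sides of a mirror, or a snapclone)
# yield the Duplicate state; zero matches means a device is missing.

from collections import Counter

def recovery_state(logical_uuids, physical_uuids):
    counts = Counter(physical_uuids)
    for uuid in logical_uuids:
        if counts[uuid] > 1:
            return "Duplicate"   # more than one physical device matched
        if counts[uuid] == 0:
            return "Missing"     # no physical device matched
    return "OK"

logical = ["uuid-a", "uuid-b"]
print(recovery_state(logical, ["uuid-a", "uuid-b"]))            # OK
print(recovery_state(logical, ["uuid-a", "uuid-a", "uuid-b"]))  # Duplicate
```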
The mx dynvolume create, mx dynvolume extend, and mx fs create commands include the --reuse option, which causes the operation to proceed even though the specified subdevice may already be in use by another dynamic volume. The operation will destroy the volume previously using the subdevice.
9 Configure PSFS Filesystems HP StorageWorks Clustered File System provides the PSFS filesystem. This direct-access shared filesystem enables multiple servers to concurrently read and write data stored on shared SAN storage devices. A journaling filesystem, PSFS provides live crash recovery.
The PSFS filesystem does not migrate processes from one server to another. If you want processes to be spread across servers, you will need to take the appropriate actions. Journaling Filesystem When you initiate certain filesystem operations such as creating, opening, or moving a file or modifying its size, the filesystem writes the metadata, or structural information, for that event to a transaction journal. The filesystem then performs the operation.
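The write-ahead pattern described above, recording the metadata for an operation before performing it, is what makes crash recovery possible: the journal can be replayed to restore consistency. The following is a minimal sketch of that pattern, with illustrative names; it does not represent the PSFS on-disk journal format.

```python
# Minimal write-ahead journaling sketch: each operation's metadata is
# appended to the journal before the operation is applied, so replaying the
# journal reproduces a consistent state after a crash.

journal = []   # the transaction journal (intent log)
state = {}     # the "on-disk" metadata state

def do_op(key, value):
    journal.append((key, value))   # 1. record the intent in the journal
    state[key] = value             # 2. perform the operation

def replay(entries):
    """Rebuild state by re-applying every logged metadata update."""
    recovered = {}
    for key, value in entries:
        recovered[key] = value
    return recovered

do_op("fileA", "created")
do_op("fileB", "resized")
assert replay(journal) == state    # replay restores the same state
print("journal replay consistent")
```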
Filesystem Management and Integrity HP Clustered File System uses the SANPulse process to manage PSFS filesystems. SANPulse performs the following tasks. • Coordinates filesystem mounts, unmounts, and crash recovery operations. • Checks for cluster partitioning, which can occur when cluster network communications are lost but the affected servers can still access the SAN.
Disk Quotas Disk quotas are enabled or disabled at the filesystem level. When quotas are enabled, the filesystem performs quota accounting to track the disk use of each user having an assigned disk quota. When you create a filesystem and enable quotas, you can also set options including the default hard and soft limits for users on the filesystem. A hard limit specifies the maximum amount of disk space in the filesystem that can be used by files owned by the user.
The Windows operating system and Windows Disk Management utilities are not fully aware of PSFS filesystems or HP Clustered File System dynamic volumes. Although these Microsoft utilities can be useful for troubleshooting issues, they cannot display status from the perspective of an HP Clustered File System volume or filesystem. Dynamic Volumes Dynamic volumes created with the HP Clustered File System Volume Manager are not the same as Microsoft dynamic volumes.
NOTE: Be careful not to accidentally back up PSFS filesystems multiple times by using both your site-defined drive letter/mount point assignments and the reserved mount points. Filter out the reserved mount points from backup jobs, and instead use your own site-defined assignments.
• You do not have to assign the same mount points or drive letters to each filesystem on each node. When you use the HP Management Console to assign a drive letter/mount point, the assignment applies to every node. However, you can use the mx fs assign command or the Windows LDM, mountvol.exe, or diskpart.exe commands to assign drive letters/mount points uniquely on each node. • You can assign multiple mount points to the same filesystem if necessary.
Create a Filesystem from the Management Console To create a filesystem, select Cluster > Add > Add Filesystem on the HP CFS Management Console, or click the Filesystem icon on the toolbar. Label: Type a label that identifies the filesystem.
NOTE: The Create a Filesystem window identifies volumes by their HP Clustered File System names such as psd1p2. To match these names to their local Windows names, open the Disk Info window (select the server on the Servers tab, right-click, and then select View Local Disk Info).
The Quotas tab allows you to specify whether disk quotas should be enabled on the filesystem. You can enable or disable quotas on a filesystem at any time. (See “Enable or Disable Quotas” on page 121.) When you enable quotas, you can also set default hard and soft quotas and select other quota parameters. To enable quotas on the filesystem, check the “Enable quotas” checkbox. You can then set default hard and soft quotas for users on that filesystem.
window allows you to specify whether hard limits should be enforced. You can also specify the type of logging that you want to use. (By default, hard limits are not enforced and logging is not performed.) The Quota Assignment Policy tab lets you select a default quota for new users who do not have an explicit quota limit. The users inherit the default setting the first time that they own a file on the filesystem.
There are two options: • Static default quota. The default limits are explicitly assigned to the user. Subsequent changes to the default values for the filesystem do not affect the quota limits for the user. This is the default, and matches the NTFS policy. • Dynamic default quota. No explicit default limits are assigned to the user. Instead, the effective limits applied to the user are the default values for the filesystem at the time of each operation.
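The difference between the two policies above can be demonstrated with a small simulation: under a static default the filesystem's default limit is copied to the user at first use, while under a dynamic default the user always sees the current filesystem default. The class and method names below are illustrative, not a product API.

```python
# Sketch contrasting static and dynamic default quota assignment. With
# "static", the default is copied to the user the first time the user owns
# a file, so later changes to the filesystem default do not affect them.
# With "dynamic", the current filesystem default applies at each operation.

class Filesystem:
    def __init__(self, default_limit, policy):
        self.default_limit = default_limit
        self.policy = policy          # "static" or "dynamic"
        self.user_limits = {}

    def effective_limit(self, user):
        if self.policy == "static":
            # copy the current default on first use, then keep it
            self.user_limits.setdefault(user, self.default_limit)
            return self.user_limits[user]
        return self.user_limits.get(user, self.default_limit)

fs = Filesystem(100, "static")
print(fs.effective_limit("alice"))   # 100, copied to alice
fs.default_limit = 200               # a later change to the default ...
print(fs.effective_limit("alice"))   # ... does not affect alice: still 100

fs2 = Filesystem(100, "dynamic")
fs2.effective_limit("bob")
fs2.default_limit = 200
print(fs2.effective_limit("bob"))    # bob tracks the current default: 200
```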
If you want to change this option after the filesystem is created, you will need to either disable quotas and then re-enable them with the changed option, or recreate the filesystem. Recreate a Filesystem If you want to reformat a filesystem, select the filesystem on the Filesystems window, right-click, and select Recreate Filesystem.
Create a Filesystem from the Command Line To create a filesystem, use the psfsformat or mx commands. The psfsformat Command Use this syntax: psfsformat [-fq] [-l
The -o option has the following parameters: • blocksize=# Specify the block size (either 4096 or 8192) for the filesystem. • disable-fzbm Create the filesystem without Full Zone Bit Maps (FZBMs). The FZBM on-disk filesystem format reduces the amount of data that the filesystem needs to read when allocating a block. It is particularly useful for speeding up allocation times on large, relatively full filesystems. • enable-quotas Enable quotas on the filesystem.
• logsoftlimit or nologsoftlimit Whether file operations that result in exceeding a user’s soft limit are logged in the system event log. nologsoftlimit is the default. • enforcehardlimit or noenforcehardlimit Whether file operations that will result in exceeding a user’s hard limit are denied or allowed. noenforcehardlimit is the default.
• [--defaultUserHardLimit ] The default hard limit on the filesystem. unlimited specifies that there is no default. The optional size modifiers specify that the size is in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T). If a modifier is not specified, the size will be calculated in bytes. (The default is rounded down to the nearest filesystem block.) • [--defaultUserSoftLimit ] The default soft limit.
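The size-modifier convention above (K/M/G/T suffixes, bare numbers in bytes, the keyword unlimited, and rounding down to the nearest filesystem block) can be sketched as a small parser. This is an illustrative reading of the convention, not the product's actual argument parser.

```python
# Sketch of the size-limit convention described above: an optional K/M/G/T
# modifier, bytes when no modifier is given, "unlimited" for no limit, and
# the result rounded down to the nearest filesystem block (4096 assumed here).

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_limit(text, block_size=4096):
    if text == "unlimited":
        return None                    # no limit
    if text[-1].upper() in UNITS:
        size = int(text[:-1]) * UNITS[text[-1].upper()]
    else:
        size = int(text)               # no modifier: size in bytes
    return (size // block_size) * block_size   # round down to a block

print(parse_limit("10M"))      # 10485760
print(parse_limit("10000"))    # 8192, rounded down to a 4 KB block boundary
print(parse_limit("unlimited"))
```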
Drive Letters and Mount Paths To provide access to a PSFS filesystem, you will need to associate it with a drive letter or a mount path. Assign Drive Letters or Paths To assign a drive letter or mount path, select the filesystem on the Filesystems tab on the HP CFS Management Console, right-click, and select Assign Path. The assignment is made on all servers in the cluster.
You can also assign a drive letter or path from the command line: mx fs assignpath --path [--createdir] The --createdir option creates the mount path if it does not already exist on each server. If a server was out of the cluster while the drive assignment was made or you add a new server to the cluster, you can use the above command to add the drive assignment to that server.
Remove Drive Letter or Path Assignments If you no longer want to associate a filesystem with a particular drive letter or mount path, you can remove the assignment. Before doing this, be sure that applications are not currently accessing the filesystem via the drive letter or mount path. To remove a drive letter or path assignment, select the filesystem on the Filesystems tab, right-click, and then select Unassign Paths.
Extend a Mounted Filesystem If the Volume allocation display shows that there is space remaining on the volume, you can use the “Extend Filesystem” option on the Properties window to increase the size of the PSFS filesystem to the maximum size of the volume. When you click on the Extend Filesystem button, you will see a warning such as the following. When you click Yes, HP Clustered File System will extend the filesystem to use all of the available space.
Features Tab The Features tab shows whether Full Zone Bit Maps (FZBM) or quotas are enabled on the filesystem. Quotas Tab The Quotas tab allows you to enable or disable quotas on the filesystem, to set the default hard and soft limits, and to configure other quota options. See “Filesystem Options” on page 101 for more information about the quota options.
View Filesystem Status from the Command Line You can use the following mx command to see status information. mx fs status [--verbose] [--standard|--snapshots] The command lists the status of each filesystem. The --verbose option also displays the FS type (always PSFS), the size of the filesystem in KB, and the UUID of the parent disk. The --standard option shows only standard filesystems; the --snapshots option shows only snapshots.
Extend a Basic Volume and Its Filesystem The HP Management Console provides an option to increase the size of a PSFS filesystem and the basic volume, or partition, on which it is located. NOTE: This option cannot be used to extend filesystems on disks containing an HP Clustered File System membership partition. Select the filesystem on the Management Console, right-click, and select Extend Volume.
filesystems on the disk that will be deported. Users will not be able to access these filesystems until the resize operation is complete. When you click OK on the Confirm Extend window, HP Clustered File System deports the disk, resizes the filesystem partition by the specified size, reimports the disk, and then expands the filesystem to fill the additional space in the partition.
The next example uses a mount path: psfssuspend c:\psfs_mount\ The psfssuspend command prevents modifications to the filesystem and forces any changed blocks associated with the filesystem to disk. The command performs these actions on all servers that have mounted the filesystem and then returns successfully. Any process attempting to modify a suspended filesystem will block until the filesystem is resumed.
The device can be specified in several ways: • By the drive letter, such as X: • By the mount point (junction), such as C:\san\vol2 • By the psd or psv name, such as psd2p2 or psv3 Perform a Filesystem Check If a filesystem is not unmounted cleanly, the journal will be replayed the next time the filesystem is mounted to restore consistency. You should seldom need to check the filesystem.
For more information about the check, click the Details button. If psfscheck locates errors that need to be repaired, it will display a message telling you to run the utility from the command line. For more information, see the HP StorageWorks Clustered File System Command Reference Guide.
10 Manage Disk Quotas The PSFS filesystem supports disk quotas, which limit the amount of disk space on a filesystem that can be used for an individual user’s files. Hard and Soft Filesystem Limits The PSFS filesystem supports both hard and soft filesystem quotas. A hard quota specifies the maximum amount of disk space on a particular filesystem that can be used by files owned by the user.
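The hard/soft distinction above can be sketched as a simple admission check: a write that would exceed the hard limit is denied (when enforcement is on, since noenforcehardlimit is the default), while a write that only exceeds the soft limit is allowed but flagged. The function and return values below are illustrative, not the product's enforcement code.

```python
# Sketch of hard vs. soft quota semantics: exceeding the hard limit denies
# the operation when enforcement is enabled; exceeding only the soft limit
# produces a warning but the operation succeeds.

def check_write(used, request, soft, hard, enforce_hard=True):
    """Return 'denied', 'warn', or 'ok' for a write of `request` bytes by a
    user currently using `used` bytes. A limit of None means unlimited."""
    new_total = used + request
    if enforce_hard and hard is not None and new_total > hard:
        return "denied"
    if soft is not None and new_total > soft:
        return "warn"        # allowed, but the user is over the soft limit
    return "ok"

print(check_write(used=90, request=5, soft=80, hard=100))    # warn
print(check_write(used=90, request=20, soft=80, hard=100))   # denied
print(check_write(used=10, request=5, soft=80, hard=100))    # ok
```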
When you create a PSFS filesystem, you can specify whether quotas should be enabled and you can set quota options on the filesystem. (See “Create a Filesystem” on page 99.) Quotas can also be enabled or disabled on an existing filesystem, using either the HP CFS Management Console or HP Clustered File System commands. The filesystem will be unmounted briefly during the enable/disable operation.
Check or uncheck “Enable quotas” as appropriate. If you are enabling quotas, you can set the default hard and soft quotas for users on that filesystem. To do this, click on “Limit” and then specify the appropriate size in either kilobytes, megabytes, gigabytes, or terabytes. The default is rounded down to the nearest filesystem block. (If you do not want a default limit, click “Unlimited.”) The default quotas apply to all users who do not have individual quotas.
Manage User Quotas The mx quota command can be used to manage user quotas from the command line. See the HP StorageWorks Clustered File System Command Reference Guide for details about this command. You can also use Microsoft Windows features such as the following to manage user quotas. Refer to the Windows documentation for more information about these features. Quota GUI. The Windows Quota GUI can be accessed from Microsoft Windows Explorer.
The Quota Entries window. This window can be accessed via Microsoft Windows Explorer. Display the Properties for the filesystem, select the Quota tab, and then click the Quota Entries button. When using the Quota Entries window, you should be aware of the following: • The “Amount Used” column includes PSFS metadata as well as the space required for the user data in each user’s files. The space used may be different than it would be on another type of filesystem.
Back Up and Restore Quotas The psfsdq and psfsrq commands can be used to back up and restore the quota information stored on the PSFS filesystem. These commands should be run in conjunction with standard filesystem backup utilities, as those utilities do not save the quota limits set on the filesystem. NOTE: We recommend that you use the psfsdq and psfsrq commands instead of the Import and Export options on the Quota Entries window.
Examples The following command saves the quota information for the filesystem located on device psd1p5. psfsdq -f psd1p5.quotadata psd1p5 The next command restores the data to the filesystem: psfsrq -f psd1p5.
11 Manage Hardware Snapshots HP Clustered File System provides support for taking hardware array-based snapshots of PSFS filesystems. The snapshots provide a point-in-time image of a PSFS filesystem. Users or the Administrator can then use the Microsoft Shadow Copies of Shared Folders feature to recover individual files or whole volumes from the appropriate snapshot image. The subdevices containing the PSFS filesystems must reside on one or more storage arrays that are supported for snapshots.
CommandView EVA software must be installed on your Management Appliance. Be sure that your versions of SSSU and CommandView EVA are consistent. The SSSU utility must be renamed to %Program Files%\Hewlett-Packard\SANworks\Element Manager for StorageWorks HSV\Bridge\sssu.exe. Engenio Storage Arrays To take hardware snapshots on Engenio storage arrays, a supported version of SANtricity Storage Manager client software must be installed on all servers in the cluster.
HP EVA Array-Based Snapshots The following dialog appears. Label. The label is used to identify the snapshot on the Management Console. Share as Shadow Copy of Shared Folder. Check this box if you want users to be able to use the snapshot as a shadow copy. HP EVA Options. Snapshots initially consume storage space only to store pointers to the data in the source filesystem, growing in size when source filesystem data is changed.
Engenio Snapshots The dialog asks for the following information: Label. The label is used to identify the snapshot on the Management Console. Share as Shadow Copy of Shared Folder. Check this box if you want users to be able to use the snapshot as a shadow copy. Engenio Options. The first time a snapshot is taken of a particular filesystem, the snapshot process creates a repository on disk that stores pointers to the data in the source filesystem.
Snapshots appear on the Management Console beneath the entry for the filesystem, while snapclones appear as a separate filesystem. Each snapshot or snapclone is assigned an HP Clustered File System psd or psv device name. In the following example, the first two filesystem entries are snapclones. The next entry is a regular filesystem, and is followed by snapshots of the filesystem.
Delete a Snapshot Storage arrays typically limit the number of snapshots that can be taken of a specific filesystem. Before taking an additional snapshot, you will need to delete an existing snapshot. Also, if you want to destroy a filesystem, you will first need to delete all snapshots of that filesystem. To delete a snapshot, select the snapshot on the Management Console, right-click, and select Delete.
To unassign a drive letter or path, type the following: mx fs unassign Using Shadow Copies of Shared Folders When you take a snapshot of a PSFS filesystem, you can specify that it should be shared as a shadow copy. The snapshot provides a point-in-time version of the filesystem.
12 Configure Security Features HP Clustered File System provides the following security features: • Role-Based Security. By default, the machine’s local Administrators group has full cluster rights and can perform all HP Clustered File System operations. You can use the Role-Based Security feature to create roles that allow or deny other users and groups the ability to perform specific cluster operations.
creating and modifying filesystems. The deny status overrides the allow status. HP Clustered File System provides a built-in System Administrator role that includes all members of the machine local Administrators group. This group has permission to perform all cluster operations.
Add a New Role To define a new role, click Add to display the Role Properties window. Name: Type a name for the new role. Role names cannot include the forward slash character (/). Enabled: By default, the role will be enabled when it is created. To disable the role, remove the checkmark. Resource: Use this pane to specify the rights that will apply to the new role.
Chapter 12: Configure Security Features 138 • Setup. Manipulate settings that affect the entire cluster configuration, including membership partitions, licensing, snapshot configuration, fencing configuration, servers, notification settings, and security roles. The Event Notification, Security, and Servers resources are subsets of this resource. – Event Notification. Configure event notification settings. Create affects the ability to enable or disable notifiers.
Chapter 12: Configure Security Features 139 – Custom. Manipulate virtual hosts, service monitors, and device monitors. Create affects the ability to create new application objects. Modify affects the ability to change existing application objects, including adding new servers to an object and rehosting objects. Delete affects the ability to delete application objects. – File Serving. Manipulate MxFS for CIFS application objects, including Virtual File Services and Cluster File Shares.
Chapter 12: Configure Security Features 140 Assign Rights Manually Rights can be assigned at different resource levels. If rights are applied to the top-level resource, they apply to all cluster resources. Rights assigned to a second level resource such as Storage apply just to the resources nested in that resource. For example, rights assigned to the Storage resource apply to both the Volumes and Filesystems resources. You can also assign rights to the lowest-level resources.
Chapter 12: Configure Security Features 141 When you select a template, the rights appropriate to that role will be marked with a checkmark to allow the right or an X to deny the right. You can adjust the rights as necessary. Then go to the Members tab to assign group or user accounts to the role. Assign Accounts to a Role The Members tab on the Role Properties window shows the user and group accounts that belong to the role.
Chapter 12: Configure Security Features 142 Click Add to assign accounts to the role. The Enter an Account dialog then asks for the user or group to be added. Enter an account to add. Type the name or ID for the user or group. Type. Specify whether you are adding a user account or a group account.
Chapter 12: Configure Security Features 143 Form. Specify whether you entered a name or an ID for the account. Tips for Specifying Accounts When specifying accounts for a role, you should be aware of the following: • HP Clustered File System uses the contents of the access token created when you logged into the cluster to determine user and group identities. • To simplify Role-Based Security administration, specify groups instead of users wherever possible.
Chapter 12: Configure Security Features 144 (NetBIOS-domain\username, DNS-name\username, or isolated names without domains) will fail if the user account name contains more than 20 characters. This restriction does not apply to group account names. View Effective Rights The My Rights tab on the Role-Based Security Control Panel lists the effective rights that you have on the cluster. Effective rights are the sum of the rights provided by all of the roles to which you belong.
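The union-with-deny-override rule described above (rights accumulate across roles, and a deny in any role overrides an allow) can be sketched in Python. This is an illustrative model, not part of HP Clustered File System; the role structure shown is an assumption.

```python
# Illustrative sketch: effective rights as the union of rights from all
# roles, with "deny" overriding "allow" as described for Role-Based Security.

def effective_rights(roles):
    """roles: list of dicts like {"allow": {...}, "deny": {...}} (assumed shape)."""
    allowed, denied = set(), set()
    for role in roles:
        allowed |= role.get("allow", set())
        denied |= role.get("deny", set())
    # A right denied by any role is denied overall.
    return allowed - denied

admin = {"allow": {"Setup", "Storage", "Applications"}}
operator = {"allow": {"Applications"}, "deny": {"Setup"}}
print(sorted(effective_rights([admin, operator])))  # ['Applications', 'Storage']
```

Note how "Setup", although allowed by the first role, is removed because the second role denies it.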
Other Role-Based Security Procedures Export or Import Roles The import and export features can be used if you will be configuring a new cluster and want to use the Role-Based Security settings that you have configured on the existing cluster. Click the Export button to save the current settings to the file of your choice. (The default location is your home directory.) The file is written in XML format.
When configuring the new cluster, click Import to import the file containing the Role-Based Security settings. The imported settings will replace any current Role-Based Security settings. To import or export Role-Based Security settings from the command line, use these commands: mx role export [--permissionOnly] mx role import [--permissionOnly] The --permissionOnly option omits the list of role members from the import or export.
Properties window. Select the role on the Role-Based Security Control Panel and click Edit to display the Role Properties window. Delete a Role When a role is deleted from the cluster configuration, the accounts belonging to the role will automatically lose their membership in that role. To delete a role, select it on the Role-Based Security Control Panel and click Delete.
Remove Roles From an Account Use this command: mx account removerole --form --type ... The --form option specifies whether you are entering the name or ID of the account (NAME is the default). The --type option specifies whether the account is for a user or group or is unknown (GROUP is the default).
13 Configure Event Notifiers and View Events HP Clustered File System generates an event message when an error condition or failure occurs or when the status of the cluster changes. To provide an audit trail of cluster operations, a message is also generated when a user requests and is granted or denied authorization to perform a task. Event messages are logged and can be viewed either with the Cluster Event Viewer provided with the HP Management Console or with command-line tools.
When an event message is generated, it is written immediately to the Windows event log on the server where the condition occurred. The message is also sent to the HP Clustered File System mxlogd process, which takes these actions: • Sends the message to the event notifier services configured on the server.
The Microsoft SNMP service is required. See “Install and Configure the Microsoft SNMP Service” on page 151 for more information. • Email Notifier Service. This service sends email to specified addresses when the selected events occur. • Script Notifier Service. This service allows you to specify a script that will be triggered when selected events occur.
After the Microsoft SNMP service is installed, you will need to specify a community string. The public community string is adequate, as the string is used only for read-only operations. To add a community string, complete the following steps. 1. Open the Control Panel and select Administrative Tools. 2. Double-click Services. On the Services window, locate the SNMP Service, right-click, and select Properties. 3.
The title bar shows the last time that the Event Viewer was updated. Click Refresh to update the display. By default, the Event Viewer shows the last 1000 messages in the cluster log. To display a different number of messages, select Viewer > Max Events to Display. Then, on the following dialog, specify the maximum number of events to display on the Event Viewer.
Filter the Event Output The Event Viewer includes three filters that can be used to limit the events that are displayed: • Search All. This filter allows you to enter text to be matched. The Event Viewer will show only those events that include the text in any of the event fields. • Severity. This filter allows you to select one or more severity levels. The Event Viewer will display only the events having the specified severity levels. • Timestamp.
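The behavior of the three filters can be modeled with a short Python sketch. The event field names and filter semantics here are illustrative assumptions, not the actual Event Viewer internals.

```python
# Illustrative model of the Event Viewer filters: Search All (text in any
# field), Severity (selected levels only), and Timestamp (cutoff).
from datetime import datetime

def filter_events(events, text=None, severities=None, since=None):
    out = []
    for ev in events:
        if text and not any(text.lower() in str(v).lower() for v in ev.values()):
            continue  # Search All: match text in any event field
        if severities and ev["severity"] not in severities:
            continue  # Severity: keep only the selected levels
        if since and ev["timestamp"] < since:
            continue  # Timestamp: keep events at or after the cutoff
        out.append(ev)
    return out

events = [
    {"severity": "ERROR", "message": "disk offline",
     "timestamp": datetime(2008, 1, 2)},
    {"severity": "INFO", "message": "server up",
     "timestamp": datetime(2008, 1, 3)},
]
print(len(filter_events(events, severities={"ERROR"})))  # 1
```

Passing several filters at once narrows the result further, since each filter discards non-matching events in turn.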
View Events from the Command Prompt HP Clustered File System provides commands that can be used to view the cluster log on a particular server and to view outstanding alerts.
remaining options determine the format of the output. --noHeaders omits column headers, --csv prints output in comma-separated format, and --showborder displays borders in the output. Windows Event Viewer You can view the cluster events written to the Windows event log by using the Windows Event Viewer. Select Start > Programs > Administration Tools > Event Viewer, and then click on Matrix Server to see the log messages.
The Control Panel opens on the Event Definition tab, which lists all events defined in the HP Clustered File System event catalog. The Event Definition tab provides a Search All filter that lists messages matching the specified term. You can also select one or more severity levels to be matched.
• Using the checkboxes, check or uncheck the appropriate messages for each service. • Select a message row, right-click, and then set or clear that message for the appropriate notifier services. To add or remove notifier events from the command line, use these commands. If you do not specify a service, the events will be added or removed from all services. You can specify individual event IDs or a range of IDs to be added.
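As a sketch of how a range of event IDs expands into individual IDs, the helper below is illustrative only; the comma/dash syntax is an assumption for the example, not the documented mx syntax.

```python
# Illustrative helper: expand an event-ID specification such as
# "100,105-107" into individual IDs, mirroring the ability to pass
# single IDs or a range of IDs when adding notifier events.

def expand_event_ids(spec):
    ids = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            ids.extend(range(int(lo), int(hi) + 1))  # inclusive range
        else:
            ids.append(int(part))
    return ids

print(expand_event_ids("100,105-107"))  # [100, 105, 106, 107]
```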
To define a new SNMP trap forwarding target, click Add New Target. Target. Enter either the hostname or the IP address of the SNMP trap forwarding target. The trap-forwarding destination port is the IANA registered port for snmptrap (162/udp). Community. Enter the community string that is used to access the target. The default is public. Disable the SNMP trap forwarding service.
You can change the information for an existing target by selecting it from the SNMP Trap Forwarding Table and then clicking Edit Target. To remove a target, select it from the table and click Remove Target.
To Email address. Type the email addresses to which event notifier email should be sent. If multiple addresses will be specified, use semicolons to separate the addresses. Subject line. Select the amount of information that will appear in the Subject line of the email. The options are short, medium (the default), and long: • Short. Includes only the event severity indicator, for example: Subject:[ERROR] • Medium.
mx eventnotifier configureemail --to --smtpserver [--from ] [--subject ] [--omitdesc] [--smtpport ] [--smtpuser ] [--smtppass ] Configure the Script Notifier Service To configure the script notifier service, select the Script Notification Settings tab. This service runs a script when an event configured for the service occurs.
For more information about notifier scripts, see “Using Custom Notifier Scripts” on page 164. View Configurations from the Command Line The following command can be used to view the events configured for one or more notifier services: mx eventnotifier viewconfig [--snmp] [--email] [--script] With no options, the command displays the configured events for all of the notifier services.
To restore event settings from the command line, use this command: mx eventnotifier restoreevents [--snmp] [--email] [--script] Import or Export the Notifier Event Settings The import and export features can be used if you will be configuring a new cluster and want to use the notifier event settings that you have configured on the existing cluster. Click the Export Definitions button to save the current settings to the file of your choice.
• Event details are placed into a set of environment variables for access by the custom script or program. • Event details, formatted in XML, are passed to the standard input (stdin) of the script or program. Script Requirements For the script to work properly, the following requirements must be met: • The script or program must be accessible from each node in the cluster.
The following variables can be set for an event. (Not all of the variables are required for each event.)
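A minimal custom notifier script might parse the XML from stdin and read the environment variables, as sketched below. The environment variable name MX_EVENT_SEVERITY and the event/message element names are assumptions for illustration; the real names come from the HP CFS event catalog.

```python
# Sketch of a custom notifier script. Per the text, event details arrive
# both as environment variables and as XML on stdin. All names below are
# assumed for illustration only.
import xml.etree.ElementTree as ET

def handle_event(xml_text, env):
    """Build a one-line summary from the event XML and the environment."""
    root = ET.fromstring(xml_text)
    severity = env.get("MX_EVENT_SEVERITY", "UNKNOWN")  # assumed variable name
    message = root.findtext("message", default="")      # assumed element name
    return "[%s] %s" % (severity, message)

# A real notifier script would import sys and os and run:
#   print(handle_event(sys.stdin.read(), os.environ))
print(handle_event("<event><message>disk offline</message></event>",
                   {"MX_EVENT_SEVERITY": "ERROR"}))  # [ERROR] disk offline
```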
14 Cluster Operations on the Applications Tab The Applications tab on the Management Console shows all HP Clustered File System applications, virtual hosts, service monitors, and device monitors configured in the cluster and enables you to manage and monitor them from a single screen. Applications Overview An application provides a way to group associated cluster resources (virtual hosts, service monitors, and device monitors) so that they can be treated as a unit.
a device monitor, the application will use the same name as the device monitor. The Applications Tab The Management Console lists applications and their associated resources (virtual hosts, service and device monitors, CIFS virtual servers) on the Applications tab. The applications and resources appear in the rows of the table. (Double-click on a resource to see its properties.)
The cells indicate whether a resource is deployed on a particular server, as well as the current status of the resource. If a cell is empty, the resource is not deployed on that server. The icons used on the Applications tab report the status of the servers, applications, and resources. The following icons are used in the server columns to indicate the status of applications and resources.
The possible states for the application, each shown with its own status icon, are: • OK. Clients can access the application. • Warning. Clients can access the application but not from the primary node. • Error. Clients cannot access the application. In the following example, the status for most of the applications is OK because clients are accessing the application through the primary server. However, the status of application 99.11.14.
Filter the Applications Display You can use filters to limit the information appearing on the Applications tab. For example, you may want to see only a certain type of monitor, or only monitors that are down or disabled. To add a filter, click the “New Filter” tab and then configure the filter. Name: Specify a name for this filter.
Click OK to close the filter. The filter then appears as a separate tab and will be available to you when you connect to any cluster. (Filters are stored per user under the registry key.) To modify an existing filter, select that filter, right-click, and select Edit Filter. To remove a filter, select the filter, right-click, and select Delete Filter.
When you reach a cell that accepts drops, the cursor will change to an arrow. The following drag and drop operations are allowed. Applications These operations are allowed only for applications that include at most one virtual host. • Assign an application to a server. Drag the application from the Name column to the empty cell for the server.
• Switch the primary and backup servers (or two backup servers) for a virtual host. Drag the virtual host from one server cell to the cell for the other server. If the virtual host is active, this operation can disconnect existing applications that depend on the virtual host. When the operation is complete, the ordering for failover will be switched. • Remove a virtual host from a server.
reordered as necessary. If the monitor was multi-active, it will remain active on any other servers on which it is configured. Menu Operations Applications The following operations affect all entities associated with an HP Clustered File System application. These operations can also be performed from the command line, as described in the HP StorageWorks Clustered File System Command Reference Guide.
• Add a service monitor. • Enable or disable the virtual host. • View or change the properties for the virtual host. • Delete the virtual host. To perform these procedures, left-click on the cell for the virtual host (click in the Name column). Then right-click and select the appropriate operation from the menu. See Chapter 9, “Configure Virtual Hosts” on page 177 for more information about these procedures.
15 Configure Virtual Hosts HP StorageWorks Clustered File System uses virtual hosts to provide failover protection for servers and network applications. Overview A virtual host is a hostname/IP address configured on a set of network interfaces. Each interface must be located on a different server. The first network interface configured is the primary interface for the virtual host. The server providing this interface is the primary server.
Cluster Health and Virtual Host Failover To ensure the availability of a virtual host, HP Clustered File System monitors the health of the administrative network, the active network interface, and the underlying server. If you have created service or device monitors, those monitors periodically check the health of the specified services or devices.
The failover operation to another network interface has minimal impact on clients. For example, if clients were downloading Web pages during the failover, they would receive a “transfer interrupted” message and could simply reload the Web page. If they were reading Web pages, they would not notice any interruption. If the active network interface fails, only the virtual hosts associated with that interface are failed over.
Add or Modify a Virtual Host To add or update a virtual host from the HP CFS Management Console, select the appropriate option: • To add a new virtual host, select Cluster > Add > Add Virtual Host or click the V-Host icon on the toolbar. Then configure the virtual host on the Add Virtual Host window. • To update an existing virtual host, select that virtual host on either the Server or Virtual Hosts window, right-click, and select Properties.
select an existing application name, or leave this field blank. However, if you do not assign a name, HP Clustered File System will use the IP address for the virtual host as the application name. Always active: If you check this box, upon server failure, the virtual host will move to an active server even if all associated service and device monitors are inactive or down.
Network Interfaces: When the “All Servers” box is checked, the virtual host will be configured on all servers having an interface on the network you select for this virtual host. When you add another server to the cluster, the virtual host will automatically be configured on that server. This option can be useful with administrative applications. Available/Members: The Available column lists all network interfaces that are available for this virtual host.
Configure Applications for Virtual Hosts After creating virtual hosts, you will need to configure your network applications to recognize them. For example, if you are using a Web server, you may need to edit its configuration files to recognize and respond to the virtual hosts. By default, FTP responds to any virtual host request it receives.
Rehost a Virtual Host You can use the Rehost option to modify the configuration of a virtual host. For example, you might want to change the primary for the virtual host or reorder the backups. To use this option, select the virtual host, right-click, and then select Rehost. The Virtual Host Rehost window then appears. When you make your changes and click OK, you will see a message warning that this action may cause a disruption of service.
Change the Virtual IP Address for a Virtual Host When you change the virtual IP address of a virtual host, you will also need to update your name server and to configure applications to recognize the new virtual IP address. The order in which you perform these tasks is dependent on your application and the requirements of your site. You can use mx commands to change the virtual IP address of a virtual host. Complete these steps: 1.
When certain events occur on the server where a virtual host is located, the ClusterPulse process will attempt to fail over the virtual host to another server configured for that virtual host. For example, if the server goes down, ClusterPulse will check the health of the other servers and then determine the best location for the virtual host.
• The PanPulse process controls whether a network interface is marked up or down. When PanPulse determines that an interface currently hosting a virtual host is down, ClusterPulse will begin searching for another server on which to locate the virtual host. 3. ClusterPulse narrows the list to those servers without inactive, down, or disabled HP Clustered File System device monitors.
Specify Failover/Failback Behavior The Probe Severity setting allows you to specify whether a failure of the service or device monitor probe should cause the virtual host to fail over. For example, you could configure a gateway device monitor to watch a router. The device monitor probe might occasionally time out because of heavy network traffic to the router; however, the router is still functioning.
• For service monitors, you can assign a priority to each monitor (the Service Priority setting). If ClusterPulse cannot locate an interface where all services are “up” on the underlying server, it selects an interface where the highest priority service is “up” on the underlying server.
• After the virtual host fails over to node 2, a service monitor probe fails on that node. Now both nodes have a down service monitor. Failback does not occur because the servers are equally healthy. If the failed service is then restored on node 1, that node will now be healthier than node 2 and failback will occur. (Note that if the virtual host policy was AUTOFAILBACK, failback would occur when the probe failed on node 2 because both servers were equally healthy.)
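The healthiness comparison in this example can be sketched as follows. This is a simplified model of the NOFAILBACK decision (move only to a strictly healthier server), not the actual ClusterPulse algorithm.

```python
# Simplified model: a server's health is measured here by its count of
# down monitors; the virtual host moves only when another server is
# strictly healthier than the current one.

def preferred_server(current, candidates, down_counts):
    """candidates: server names; down_counts: name -> number of down monitors."""
    best = min(candidates, key=lambda s: down_counts[s])
    if down_counts[best] < down_counts[current]:
        return best       # another server is strictly healthier: fail back
    return current        # equally healthy: stay put (NOFAILBACK behavior)

# Both nodes have one down monitor: no failback.
print(preferred_server("node2", ["node1", "node2"], {"node1": 1, "node2": 1}))
# Service restored on node1: node1 is now healthier, so failback occurs.
print(preferred_server("node2", ["node1", "node2"], {"node1": 0, "node2": 1}))
```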
16 Configure Service Monitors Service monitors are typically used to monitor a network service such as HTTP or FTP. If a service monitor indicates that a network service is not functioning properly on the primary server, HP Clustered File System can transfer the network traffic to a backup server that also provides that network service. Overview Before creating a service monitor for a particular service, you will need to configure that service on your servers.
severity, Start scripts, and Stop scripts) are consistent across all servers configured for a virtual host. Service Monitors and Failover If a monitored service fails, HP Clustered File System attempts to relocate any virtual hosts associated with the service monitor to a network interface on a healthier server.
FTP Service Monitor By default, the FTP service monitor probes TCP port 21 of the virtual host address. You can change this port number to the port number configured for your FTP server. The default frequency of the probe is every 30 seconds. The default time that the service monitor waits for a probe to complete is five seconds. The probe function attempts to connect to port 21 and expects to read an initial message from the FTP server.
service if it is not already started. When the service monitor instance becomes inactive, the monitor stops the NT service if the probe type for the monitor is set to Single-Probe. When you configure the monitor, you will need to indicate whether dependent services of the NT service should also be started and stopped.
TCP Service Monitor The generic TCP service monitor defaults to TCP port 0. You should set the port to the listening port of your server software. The default frequency of the probe is every 30 seconds. The default time that the service monitor waits for a probe to complete is five seconds. Because the service monitor cannot know what to expect from the TCP port connection, it simply attempts to connect to the specified port.
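What the generic TCP probe does, attempting a connection within the timeout and reporting up or down, can be sketched as follows. The real monitor_agent probe is internal to HP CFS; this only mimics the behavior described above.

```python
# Sketch of a generic TCP probe: connect to the configured port within
# the timeout and report the service up or down based on the result.
import socket

def tcp_probe(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # connection accepted: service is UP
    except OSError:
        return False      # refused or timed out: service is DOWN
```

For example, `tcp_probe("127.0.0.1", 80, 5.0)` reports whether anything is listening on local port 80 within five seconds.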
Add or Modify a Service Monitor Adding a service monitor configures HP Clustered File System monitoring only. It does not configure the service itself.
Monitor Type: Select the type of service that you want to monitor. Timeout: The maximum amount of time that the monitor_agent process will wait for a probe to complete. For most monitors, the default timeout interval is five seconds. You can use the default setting or specify a new timeout interval. Frequency: The interval of time, in seconds, at which the monitor probes the designated service.
To add or update a service monitor from the command line, use this command: mx service add|update [--type DNS|FTP|HTTP|HTTPS|IMAP4|NNTP| NTSERVICE|POP3|SMTP|TCP|CUSTOM] [--timeout ] [--frequency ] [] ... NOTE: The --type option cannot be used with the mx service update command. See “Advanced Settings for Service Monitors” for information about the other arguments that can be specified for service monitors.
Service Monitor Policy The Policy tab lets you specify the failover behavior of the service monitor and set its service priority. Timeout and Failure Severity This setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a probe of a monitored service fails.
monitored resource is not critical, but is important enough that you want to keep a record of its health. AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When the service is recovered on the original node, failback occurs according to the virtual host’s failback policy. NOAUTORECOVER. The virtual host fails over when a monitor probe fails and the monitor is disabled on the original node, preventing automatic failback.
Probe Type Service monitors can be configured to be either single-probe or multi-probe. A multi-probe monitor performs the probe function on each node where the monitor is configured, regardless of whether the monitor instance is active or inactive. This is the default for the built-in monitors. Single-probe monitors perform the probe function only on the node where the monitor instance is active.
Scripts Service monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows: Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the service. Start script. Runs as a service is becoming active on a server. Stop script. Runs as a service is becoming inactive on a server.
without considering this to be an error. In both of these cases, the script should exit with a zero exit status. This behavior is necessary because HP Clustered File System runs the Start and Stop scripts to establish the desired start/stop activity, even though the service may actually have been started by something other than HP Clustered File System before ClusterPulse was started.
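An idempotent Start script that follows this rule, exiting 0 when the service is already running, might look like the sketch below. The service-checking hooks are placeholders, not real HP CFS or Windows service APIs.

```python
# Sketch of an idempotent Start script: per the text, the script must
# exit 0 if the service is already running, because HP CFS runs Start and
# Stop scripts to establish the desired state rather than assuming the
# current one. The is_running/start callables are illustrative hooks.

def start_service(is_running, start):
    """Return the Start script's exit status: 0 on success or when the
    service is already running, 1 on a genuine start failure."""
    if is_running():
        return 0          # already started outside HP CFS: not an error
    try:
        start()
        return 0
    except Exception:
        return 1          # genuine start failure

# A real Start script would end with: sys.exit(start_service(check, launch))
print(start_service(lambda: True, lambda: None))  # already running -> 0
```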
If you want to reverse this order, preface the Stop script with the prefix [post] on the Scripts tab. Event Severity If a Start or Stop script fails or times out, a monitor event is created on the node where the failure or timeout occurred. Configuration errors can also cause this behavior. You can view these events on the HP CFS Management Console and clear them from the Console or command line after you have fixed the problems that caused them.
3. The Start script is run on the server where the virtual host is becoming active. PARALLEL. The strict ordering sequence for Stop and Start scripts is not enforced. The scripts run in parallel across the cluster as a virtual host is in transition. The PARALLEL configuration can speed up failover time for services that do not depend on strict ordering of Start and Stop scripts.
UP or DOWN as appropriate. If the service is UP, the monitor will report UP Active (disabled). To disable a service monitor, select it on the Management Console, right-click, and select Disable. To disable a service monitor from the command line, use this command: mx service disable ... Enable a Previously Disabled Service Monitor From the Management Console, select the service monitor to be enabled, right-click, and select Enable.
17 Configure Device Monitors HP StorageWorks Clustered File System provides built-in device monitors that can be used to watch local disks, gateway devices, or an NT service, or to monitor access to a SAN disk partition containing a PSFS filesystem. You can also create custom device monitors. Overview A device monitor is configured on one or more servers in the cluster. Depending on the type of monitor, it can be active on all servers on which it is configured, or on only one server.
• SHARED_FILESYSTEM. Default timeout: 5 seconds; default frequency: 30 seconds; other parameters: filesystem, filename. • CUSTOM. Default timeout: 60 seconds; default frequency: 60 seconds; other parameter: user probe script. Activity Types for Device Monitors The activity type specifies where the device monitor can be active. The activity type can be one of the following: • Single-Active. The monitor is active on only one of the selected servers.
GATEWAY Device Monitor When certain network failures occur, the servers in a cluster can lose communication with each other. This situation can result in a partition, or split, of the cluster. For example, in a two-server cluster, each server would assume that it remained in the cluster and that the other server was down. The gateway device monitor detects the network failure and prevents the cluster from partitioning.
The monitor probe queries the status of the NT service. If the status is SERVICE_RUNNING, the service status remains Up. If the status does not indicate that the NT service is running, the service status is set to Down. The NTSERVICE monitor is also available as a service monitor. When deciding whether to create a service monitor or a device monitor, consider the effect that you want the monitor to have on the cluster.
Custom Device Monitor A CUSTOM device monitor can be used if the built-in device types are not sufficient for your needs. Custom device monitors can be particularly useful when integrating HP Clustered File System with a custom application. When you create a CUSTOM monitor, you will need to supply the probe script. In the script, probe commands should determine the health of the device as necessary.
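A CUSTOM probe script following the usual convention of exit status 0 for healthy and nonzero otherwise might look like this sketch. Both the convention and the path check are assumptions for illustration; the document does not specify the probe's exit-status contract.

```python
# Sketch of a CUSTOM device monitor probe script. Assumed convention:
# the probe exits 0 when the device is healthy and nonzero otherwise.
# The path check is an arbitrary example of a health test.
import os

def probe(path):
    """Return the probe's exit status: 0 if healthy, 1 if not."""
    healthy = os.path.isdir(path) and os.access(path, os.R_OK)
    return 0 if healthy else 1

# A real probe script would end with: sys.exit(probe(mount_point))
print(probe(os.getcwd()))  # the current directory is readable -> 0
```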
Chapter 17: Configure Device Monitors 212 The device monitor activeness policy decision is made as follows: 1. If the device monitor on a specific server is disabled, then the device monitor will not be made active on that server. 2. ClusterPulse considers the list of servers that are both up and enabled and that are configured for the device monitor.
Chapter 17: Configure Device Monitors 213 Add or Modify a Device Monitor Select the appropriate option from the HP CFS Management Console: • To add a new device monitor, select the server to be associated with the monitor from the Servers window, right-click, and select Add Device Monitor (or click the Device icon on the toolbar). Then configure the device monitor on the New Device Monitor window.
Chapter 17: Configure Device Monitors 214 Device Type: Select the appropriate device type (DISK, GATEWAY, NTSERVICE, SHARED_FILESYSTEM, or CUSTOM). See “Overview” on page 207 for a description of these monitors. Frequency and Timeout: These fields are set to the default values for the type of device you have selected. Change them as needed. Additional parameters: Depending on the type of monitor you are creating, you will be asked for an additional parameter. • DISK monitor.
Chapter 17: Configure Device Monitors 215 decimal IP address of the hostname for the server, and <name> is the name assigned to the SHARED_FILESYSTEM device monitor.
• CUSTOM monitor. Specify the pathname to the probe script to be used with the monitor.
The following example shows a device monitor created on the server svr1. To add a device monitor from the command line, use this command:
mx device add --servers <server>,<server>,...
Chapter 17: Configure Device Monitors 216 Probe Severity The Probe Severity tab lets you specify the failover behavior of the monitor. The Probe Severity setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a monitored device fails.
Chapter 17: Configure Device Monitors 217 monitored resource is not critical, but is important enough that you want to keep a record of its health. AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When device access is recovered on the original node, failback occurs according to the virtual host’s failback policy. NOAUTORECOVER. The virtual host fails over when a monitor probe fails and the monitor is disabled on the original node, preventing automatic failback.
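The behavior of the two severities described above can be summarized in a short sketch; the action names and the monitor dictionary are illustrative, not HP CFS API calls:

```python
def on_probe_failure(severity, monitor):
    """Sketch of the failover behavior the text describes. With AUTORECOVER
    (the default) the virtual host fails over and may fail back per the
    virtual host's failback policy; with NOAUTORECOVER it fails over AND the
    monitor is disabled on the original node, blocking automatic failback."""
    actions = []
    if severity in ("AUTORECOVER", "NOAUTORECOVER"):
        actions.append("failover")               # virtual host moves to a backup
    if severity == "NOAUTORECOVER":
        monitor["disabled_on_original"] = True   # prevents automatic failback
        actions.append("disable")
    return actions
```

The key operational difference is the `disable` step: after a NOAUTORECOVER failover, an administrator must re-enable the monitor on the original node before the virtual host can return there.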
Chapter 17: Configure Device Monitors 218 Custom Scripts The Scripts tab lets you configure custom Recovery, Start, and Stop scripts for a device monitor. Device monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows: Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the device. Start script. Runs as a device is becoming active on a server. Stop script. Runs as a device is becoming inactive on a server. A Start script must be able to run when the device is already started, and a Stop script
Chapter 17: Configure Device Monitors 219 must be robust enough to run when the device is already stopped, without considering this to be an error. In both of these cases, the script should exit with a zero exit status. This behavior is necessary because HP Clustered File System runs the Start and Stop scripts to establish the desired start/stop activity, even though the device may actually have been started by something other than HP Clustered File System before the ClusterPulse process was started.
Chapter 17: Configure Device Monitors 220 If you want to reverse this order, preface the Stop script with the prefix [post] on the Scripts tab. Event Severity If a Start or Stop script fails or times out, a monitor event is created on the node where the failure or timeout occurred. Configuration errors can also cause this behavior. You can view these events on the HP CFS Management Console and clear them from the Console or command line after you have fixed the problems that caused them.
Chapter 17: Configure Device Monitors 221 2. ClusterPulse waits for all Stop scripts to complete. 3. The Start script is run on the server where the virtual host or shared device is becoming active. PARALLEL. The strict ordering sequence for Stop and Start scripts is not enforced. The scripts run in parallel across the cluster as a shared device or virtual host is in transition.
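The SERIAL sequencing can be sketched as follows; scripts are modeled as callables, and the cross-server execution that ClusterPulse performs is simplified to a single process for illustration:

```python
def run_transition(stop_scripts, start_script, ordering="SERIAL"):
    """SERIAL (the default): every Stop script runs and completes before the
    Start script runs on the server becoming active. PARALLEL: no ordering
    is enforced; here that is simplified to running everything in sequence,
    since real parallelism across servers can't be shown in a sketch.
    Returns the scripts' results in execution order."""
    ran = []
    if ordering == "SERIAL":
        for stop in stop_scripts:      # steps 1-2: all Stops run and finish
            ran.append(stop())
        ran.append(start_script())     # step 3: Start runs only afterwards
    else:                              # PARALLEL: no sequencing guarantee
        for script in stop_scripts + [start_script]:
            ran.append(script())
    return ran
```

The guarantee worth noting is only in the SERIAL branch: the Start script can assume that every Stop script has already completed.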
Chapter 17: Configure Device Monitors 222 When a device monitor detects a failure, HP Clustered File System attempts to fail over the active virtual hosts associated with that monitor. By default, all virtual hosts on the servers used with the device monitor are dependent on the device monitor. However, you can specify that only certain virtual hosts be dependent on the device monitor.
Chapter 17: Configure Device Monitors 223 Probe Type. The servers on which the monitor probe will occur. Select Single-Probe to conduct the probe only on the server where the monitor is active. Select Multi-Probe to conduct the probe on all servers configured for the monitor. Activity Type. Where the monitor can be active. The options are: • Single-Active. The monitor is active on only one of the selected servers.
Chapter 17: Configure Device Monitors 224 Available Servers/Selected Servers. The type of the device monitor affects whether the monitor should be configured on one or multiple servers. • A GATEWAY monitor is multi-active and can be configured on multiple servers. • For SHARED_FILESYSTEM monitors, you should select the servers that mount the monitored filesystem and are running the applications that access data from that filesystem.
Chapter 17: Configure Device Monitors 225 Enable a Device Monitor From the Management Console, select the device monitor to be enabled, right-click, and select Enable. To enable a device monitor from the command line, use this command: mx device enable <device> ... Clear Device Monitor Error Condition To clear an error from a device monitor, select that monitor, right-click, and select Clear Last Error.
18 Advanced Monitor Topics The topics described here provide technical details about HP Clustered File System operations. This information is not required to use HP Clustered File System in typical configurations; however, it may be useful if you want to design custom scripts and monitors, to integrate HP Clustered File System with custom applications, or to diagnose complex configuration problems.
Chapter 18: Advanced Monitor Topics 227 The following examples show state transitions for a service monitor that uses the default values for autorecovery, priority, and serial script ordering. Start and Stop scripts are also defined for the monitor. The virtual host associated with the monitor has a primary interface and two backup interfaces. The first example shows the state transitions that occur at startup from an unknown state. At i1, all instances of the monitor have completed stopping.
Chapter 18: Advanced Monitor Topics 228 When a failure occurs on the Primary, the virtual host needs to fail over to a backup. HP Clustered File System now looks for the best location for the virtual host. Because the probe status on the first backup is “down,” HP Clustered File System chooses the second backup, where the probe status is “up.” At i5 in the following example, the probe fails on the Primary. At i6, the virtual host is deconfigured on the Primary.
Chapter 18: Advanced Monitor Topics 229 Custom Device Monitors A custom device monitor is associated with a list of servers and a list of virtual hosts configured on those servers. A custom device monitor can be active on only one server at a time. On each server, the monitor uses a probe mechanism to determine whether the service is active. The probe mechanism is in one of the following states on each server: Up, Down, Unknown, Timeout. A custom device monitor also has an activity status on each server.
Chapter 18: Advanced Monitor Topics 230 [State-transition table: for the Primary, First Backup, and Second Backup servers, the table tracks the virtual host status, service probe status, service monitor activity, device probe status, and device monitor activity at each time step (t1, ...), with values such as active, inactive, starting, stopping, up, down, undefined, and unknown.]
Chapter 18: Advanced Monitor Topics 231 Integrate Custom Applications There are many ways to integrate custom applications with HP Clustered File System: • Use service monitors or device monitors to monitor the application • Use a predefined monitor or your own user-defined monitor • Use Start, Stop, and Recovery scripts Following are some examples of these strategies.
Chapter 18: Advanced Monitor Topics 232 Built-In Monitor or User-Defined Monitor? To decide whether to use a built-in monitor or a user-defined monitor, first determine whether a built-in monitor is available for the service you want to monitor and then consider the degree of content verification that you need.
Chapter 18: Advanced Monitor Topics 233 This script connects to port 2468, sends a string specified by the protocol, and determines whether it has received an expected response. You distribute this script to the same location on all servers on virtual host vh1, and then create a custom service monitor that uses that script. This provides not only verification of the connection, but a degree of content verification.
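The script itself is not reproduced in the guide. A sketch of such a probe might look like this in Python; the request string, the expected reply, and the 0-up/nonzero-down exit convention are assumptions for illustration, not the actual protocol of any particular service:

```python
import socket

def probe_service(host, port, request=b"PING\n", expected=b"PONG", timeout=5.0):
    """Connect to the service, send the protocol string, and verify that the
    reply begins with the expected bytes. Returns 0 for 'up' and 1 for
    'down'. Substitute your application's real request/response strings."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(request)
            reply = sock.recv(256)
            return 0 if reply.startswith(expected) else 1
    except OSError:
        return 1   # connection refused, timed out, or reset: service is down
```

Checking the reply content, not just the connection, is what distinguishes this from the built-in TCP monitor: a hung process that still accepts connections would be reported as down.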
Chapter 18: Advanced Monitor Topics 234
• MX_SERVER=IP address The primary address of the server that calls the script. The address is specified in dotted decimal format.
• MX_TYPE=(SERVICE|DEVICE) Whether the script is for a service or device monitor.
• MX_VHOST=IP address The IP address of the virtual host. The address is specified in dotted decimal format. (Applies only to service monitors.)
• MX_PORT=Port or name The port or name of the service monitor. (Applies only to service monitors.)
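A script can pick these variables up from its environment as sketched below; the helper and its field names are ours, and MX_VHOST/MX_PORT may be absent when the script runs for a device monitor:

```python
import os

def monitor_context(environ=None):
    """Collect the HP CFS monitor environment variables listed in the text.
    Missing variables come back as None, which is the expected case for
    MX_VHOST and MX_PORT in a device-monitor script."""
    env = os.environ if environ is None else environ
    return {
        "server": env.get("MX_SERVER"),   # dotted-decimal server address
        "type":   env.get("MX_TYPE"),     # SERVICE or DEVICE
        "vhost":  env.get("MX_VHOST"),    # service monitors only
        "port":   env.get("MX_PORT"),     # service monitors only
    }
```

Accepting an explicit `environ` dict keeps the helper testable without modifying the real process environment.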
19 SAN Maintenance The following information and procedures apply to SANs used with HP StorageWorks Clustered File System. Server Access to the SAN When a server is either added to the cluster or rebooted, HP Clustered File System needs to take some administrative actions to make the server a full member of the cluster with access to the shared filesystems on the SAN. During this time, the HP CFS Management Console reports the message “Joining cluster” for the server.
Chapter 19: SAN Maintenance 236 • Repeated I/O errors when the server tries to write to a PSFS journal. The server then loses access to the affected filesystem. When the disk experiencing the I/O errors is fixed, the server will automatically regain access to the filesystem. The HP CFS Management Console typically displays an alert message when a server loses access to the SAN. (See Appendix B for more information about these messages.)
Chapter 19: SAN Maintenance 237 This can be done either by rebooting the servers after you make the partition table changes, or by manually disabling access to the disks before making the partition table changes and then reenabling access afterwards. If you should later need to repartition a disk containing a membership partition, you will need to stop HP Clustered File System before you change the layout. While the cluster is stopped, you will not be able to access other disks in the cluster.
Chapter 19: SAN Maintenance 238
mxsanlk
This host: 10.10.30.3
This host’s SDMP administrator: 10.10.30.1

Membership Partition    SANlock State
--------------------    -------------
psd1p1                  held by SDMP administrator
psd2p1                  held by SDMP administrator
psd3p3                  held by SDMP administrator

Any of these messages can appear in the “SANlock State” column.
• held by SDMP administrator The SANlock was most recently held by the SDMP administrator of the cluster to which the host where mxsanlk was run belongs.
Chapter 19: SAN Maintenance 239 • trying to lock, not yet committed by owner The SANlock is either not held or has not yet been committed by its holder. The host on which mxsanlk was run is trying to acquire the SANlock. • unlocked, trying to lock The SANlock does not appear to be held. The host on which mxsanlk was run is trying to acquire the SANlock. • unlocked The SANlock does not appear to be held. If a host holds the SANlock, it has not yet committed its hold.
Chapter 19: SAN Maintenance 240 • locked (lock is corrupt, will repair) The host on which mxsanlk was run holds the lock. The SANlock was corrupted but will be repaired. If a membership partition cannot be accessed, use the mx config mp set command or the mprepair utility to correct the problem. Depending on the status of the SDMP process, when you invoke mxsanlk you may see one of the following messages: Checking for SDMP activity, please wait... Still trying... The SDMP is inactive at this host.
Chapter 19: SAN Maintenance 241 Online Operations When HP Clustered File System is running, the Add, Repair, and Replace options on the Storage Settings tab and the mx config mp set and repair commands can be used only in the following circumstances: • A disk containing a membership partition is out-of-service. Use the replace option or mx config mp set to move the partition to another disk. • You need to move one or more membership partitions to different storage.
Chapter 19: SAN Maintenance 242 Membership Partition States The Storage Settings tab reports the state of each membership partition. The possible states are: • OK. The membership partition is functioning correctly. • FENCED. The server has been fenced and cannot access the SAN. Start HP Clustered File System if it is not running or reboot the server. • NOT_FOUND. HP Clustered File System cannot find the device containing the membership partition. Check the device for hardware problems.
Chapter 19: SAN Maintenance 243 Check the device for hardware problems. If the issue cannot be resolved, replace the membership partition. • RESILVER. The membership partition is not up-to-date. HP Clustered File System will resilver the membership partition automatically. You can resilver the partition manually if desired. • CORRUPT. The membership partition is not valid. Resilver the partition. • CID_MISMATCH. The Cluster-ID is out-of-sync among the membership partitions and must be reset.
Chapter 19: SAN Maintenance 244 When you select a partition and click Replace, you will see a confirmation message describing the replace operation. A message also appears when the replace operation is complete. NOTE: The Replace option on the Storage Settings tab is available only when HP Clustered File System is running.
Chapter 19: SAN Maintenance 245 All of the available partitions on that disk or LUN then appear in the bottom of the window. Select one of these partitions and click Add. (The minimum size for a membership partition is 1 GB.) Repeat this procedure to select one more membership partition. We recommend that the partitions be on different disks. When selecting partitions for use as membership partitions, be sure that they do not contain any needed data.
Chapter 19: SAN Maintenance 246 The mx config mp Commands The mx config mp set and repair commands can be used while HP Clustered File System is either online or offline; however, only the operations listed under “Online Operations” on page 241 can be performed while HP Clustered File System is running. For other operations, HP Clustered File System must be offline on all nodes in the cluster.
Chapter 19: SAN Maintenance 247 --reuse Allow disks that contain existing volume information to be reused. (The existing data is destroyed.) Repair a Membership Partition This command resilvers the specified membership partition. mx config mp repair [--reuse] The --reuse option allows disks that contain existing volume information to be reused. (The existing data is destroyed.) This option is available only when the cluster is offline.
Chapter 19: SAN Maintenance 248 be in the Active state. The mprepair utility can be used to repair any problems if a failure causes servers to have inconsistent views of the membership partitions.
Chapter 19: SAN Maintenance 249 another SAN component. When the problem is repaired, the status should return to OK. CORRUPT. The partition is not valid. You will need to resilver the partition. This step copies the membership data from a valid membership partition to the corrupted partition. NOTE: The membership partition may have become corrupt because it was used by another application. Before resilvering, verify that it is okay to overwrite any existing data on the partition. RESILVER.
Chapter 19: SAN Maintenance 250 Export Configuration Changes When you change the membership partition configuration with mprepair, it updates the membership list on the local server. It also updates the lists on the disks containing the membership partitions specified in the local MP file. After making changes with mprepair, you will need to export the configuration to the other servers in the cluster.
Chapter 19: SAN Maintenance 251
Disk records:
Recid 1:   20:00:00:04:cf:13:33:12::0 psd1
Recid 258: 20:00:00:04:cf:13:3c:92::0 psd2
Host registry entries:
Host ID: 10.10.30.4 fencestatus=0 idstatus=0
  SAN Loc:10:00:00:00:c9:2d:27:7d::0 (switch=fcswitch5)
Host ID: 10.10.30.3 fencestatus=0 idstatus=0
  SAN Loc:10:00:00:00:c9:2d:27:78::0 (switch=fcswitch5)
Search the SAN for Membership Partitions.
Chapter 19: SAN Maintenance 252 The resilver operation synchronizes all other membership partitions and the local membership partition list. Repair a Membership Partition. This command resilvers the specified membership partition. mprepair --repair [--force] UID/PART# UID/PART# indicates the membership partition to be resilvered. UID is the UID for the device and PART# is the number of the partition on the device. The membership partition is resilvered from a known valid membership partition.
Chapter 19: SAN Maintenance 253 server in the cluster, you can use the following command to determine whether all membership partitions have a valid Cluster-ID. mprepair --sync-clusterids The command displays the Cluster-IDs found in each membership partition and flags those partitions containing an invalid ID. You can then specify whether you want the command to repair the partitions having a mismatched Cluster-ID.
Chapter 19: SAN Maintenance 254
8. Enable HP Clustered File System and the psd driver:
mxservice -install
psdcoinst -install
9. Reboot the server to return the psd driver to the driver stack.
10. When the system is rebooted, HP Clustered File System will still be disabled in the Windows Services Control Panel. Re-enable it for Automatic startup if desired.
11. Start HP Clustered File System (or wait until the next reboot).
Chapter 19: SAN Maintenance 255 cluster (for example, because the server has crashed) and you cannot reboot the server. Run the command from a server that is communicating with the cluster, not from the non-responsive server. If none of the servers are responsive, try to execute the command from a client using the Microsoft psexec utility.
Chapter 19: SAN Maintenance 256 • Be sure to verify that the server is physically down or physically disconnected from the shared storage before running the mx server markdown command. Filesystem corruption can occur if the server is not actually down and can access the shared storage. • If the server is up but is physically disconnected from the shared storage when the mx server markdown command is run, the server must be rebooted before it is reconnected to shared storage.
Chapter 19: SAN Maintenance 257 Also consult your FC switch documentation or the FC switch vendor. If the switch appears to be operating properly, contact HP Support. Online Insertion of New Storage HP Clustered File System supports online insertion (OLI) of new storage, provided that OLI support is present for your combination of storage device, SAN fabric, HBA vendor-supplied device driver, and the associated HBA vendor-supplied libhbaapi. (Check with your vendors to determine whether OLI is supported.
Chapter 19: SAN Maintenance 258 • HP Clustered File System must be stopped on any servers that are connected only to the switch to be replaced. If these conditions are not met, you will not be able to perform online replacement of the switch. Instead, you will need to stop the cluster, replace the switch, and use mxconfig to reconfigure the new switch into the cluster. Consult your switch documentation for the appropriate replacement procedure, keeping in mind that the above requirements must be met.
Chapter 19: SAN Maintenance 259 10. Clear any stale zone configuration on the new switch with the cfgClear command. 11. Save the clean configuration with the cfgSave command. 12. Configure the new switch. If you saved the original configuration with the configUpload command, use the configDownload command to restore it. Otherwise, use the configure command. (You may need to consult your site’s SAN administrator or your Brocade representative for the correct configuration information.) 13.
Chapter 19: SAN Maintenance 260 are available elsewhere but might conveniently be captured here. One way to record the information is to capture the output of a CLI session. The following commands show the type of data that might be useful:
• show ip ethernet for the IP address.
• show switch for the fabric operating mode.
• show zoning for the zone configuration.
3. After the original switch has been powered down, power up the new switch and set the IP address to the old switch's address to allow EWS access.
20 Other Cluster Maintenance This chapter describes how to perform the following activities: • Collect HP Clustered File System log files with mxcollect • Check the server configuration • Disable a server for maintenance • Troubleshoot a cluster • Troubleshoot service and device monitors Collect Log Files with mxcollect The mxcollect utility collects error event logs that can be useful for diagnosing technical issues with HP Clustered File System.
Chapter 20: Other Cluster Maintenance 262 You will then see a command window that says “Collecting files.” The information collected from that node is written to the file mxcollect_machinename_yyyymmdd_hhmmss_default.zip. This file is placed in the folder %SystemDrive%\Program Files\Hewlett-Packard\HP Clustered File System\conf\mxcollect. Upload mxcollect Files to HP Support After running mxcollect, you can upload the resulting files to HP Support. Contact HP Support for more information.
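The output file-name pattern can be illustrated with a small sketch; the helper itself is hypothetical (mxcollect constructs the name internally), but the pattern matches the one described above:

```python
from datetime import datetime

def mxcollect_filename(machine, when=None, suffix="default"):
    """Build a name in the mxcollect_<machine>_<yyyymmdd>_<hhmmss>_<suffix>.zip
    pattern that mxcollect uses for its output archives."""
    when = when or datetime.now()
    return "mxcollect_{}_{}_{}_{}.zip".format(
        machine, when.strftime("%Y%m%d"), when.strftime("%H%M%S"), suffix)
```

Because the timestamp is embedded in the name, repeated collections on the same node never overwrite each other, which matters when gathering logs across several incidents for HP Support.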
Chapter 20: Other Cluster Maintenance 263 1. Disable the server. (Choose the server from the Servers window on the HP CFS Management Console, right-click, and select Disable.) This step causes the virtual host to fail over to a backup network interface on another server. 2.
Chapter 20: Other Cluster Maintenance 264 HTTP server is monitored by an FTP monitor, the HTTP server is considered down. Also check the following: 1. Verify that the server is connected to the network. 2. Verify that the network devices and interfaces are properly configured on the server. 3. Ensure that the ClusterPulse process and the service monitor agent (monitor_agent) are running on the server.
Chapter 20: Other Cluster Maintenance 265 Troubleshoot Monitor Problems You may encounter the following problems with service and device monitors. Monitor Status If the monitor status is not reported as Up, check the last error message string and the last event message string that monitor_agent returned to HP Clustered File System for any service or device monitor on any server in the cluster. The error or event message provides more status information.
Chapter 20: Other Cluster Maintenance 266 expected status choices. This could occur if the Management Console is out of date and does not support the version of HP Clustered File System running on the server. “Event” Status The “Event” status is displayed when monitor_agent encounters an error while executing the probe, Start, Stop, or Recovery scripts. The status of the monitor may be “Up” even though an event has been reported.
Chapter 20: Other Cluster Maintenance 267 transition for a monitor. This indicates an internal error and should be reported to HP Support. The event is written into the event log. To view the error, select the monitor on the Management Console, right-click, and select View Last Error.
Chapter 20: Other Cluster Maintenance 268 • Starting • Stopping • Inactive • Active The activity status is not an error condition; it represents the activity of scripts associated with the monitor. However, if the activity status continues to have a value other than Active or Inactive, there may be a script problem that requires attention. Active status indicates that the probe script will be executed at the probe frequency.
A Management Console Icons The Management Console uses the following icons. HP Clustered File System Entities The following icons represent the HP Clustered File System entities. If an entity is disabled, the color of the icon becomes less intense.
Appendix A: Management Console Icons 270 Additional icons are added to the entity icon to indicate the status of the entity. The following example shows the status icons for the server entity. The status icons are the same for all entities and have the following meanings. Monitor Probe Status The following icons indicate the status of service monitor and device monitor probes. If the monitor is disabled, the color of the icons is less intense.
Appendix A: Management Console Icons 271 On the Applications tab, virtual hosts and single-active monitors use the following icons to indicate the primary and backups. Multi-active monitors use the same icons but do not include the primary or backup indication. Management Console Alerts The Management Console uses the following icons to indicate the severity of the messages that appear in the Alert window.
Index A accounts assign to role 141 administrative network allow, discourage or exclude traffic 58 defined 9 failover 57 network topology 56 requirements for 55 select 56 alerts Alerts pane on Management Console 35 display error on Management Console 35 display on Management Console 35 icons on Management Console 271 Applications tab drag and drop operations 172 filter display, filter, on Applications tab 171 icons 169 manage monitors 176 manage resources 172 menu operations 175 modify display 168 rehost vi
Index ClusterPulse defined 9 failover 185 configuration back up 38 device monitor 213 network interface 55 PSFS filesystems 93 SAN disks 62 server 45 service monitor 196 system design guidelines 14 virtual host 180 configurations, supported 15 Connect window authentication parameters 25 bookmarks 26 Clear History button 25 Connect button 25 custom monitors device 211 environment variables for scripts 233 service 195 D device database defined 12 membership partitions 63 device monitor activeness policy 211
Index error messages PSFS filesystem 236 event log audit trail 148 Events Viewer 152 Windows Event Viewer 156 Event Notification Control Panel 156 event notifier services add events 157 configure 156 custom notifier scripts 164 email notifier service 160 enable or disable 163 import or export event settings 164 remove events 157 restore event settings to defaults 163 script notifier service 162 SNMP notifier service 158 Event Viewer 152 events add to event notifier service 157 alert messages 35 device moni
Index getting help 1 GPT disks 63 grpcommd process 10 H Host Bus Adapter (HBA) change 253 host registry, clear 252 HP NAS services website 2 storage website 1 technical support 1 HTTP service monitor 193 HTTPS service monitor 193 I Installed Software viewer 36 iSCSI configuration 17 L 275 inactivate 252 inactive 249 options on Storage Settings tab 241 repair 243 replace 243 resilver 243, 251 memory, server 14 Microsoft SNMP service, install 151 Microsoft SNMP service, install and configure 151 mount p
Index 276 O assign rights manually 140 assign rights with a template 140 delete 147 enable or disable 146 export or import 145 modiry 146 rename 146 view from command line 147 view rights 144 OLI, storage 257 online repair 241 P PanPulse process administrative network 56 defined 10 partitions on SAN disks requirement for importing 65 ports, network external 40 internal 40 primary server 13 probe severity, failover 190 psd driver 10 PSFS filesystem.
Index server configuration delete 46 disable 47 DNS load balancing 52 enable 47 modify properties 45 server registry 94 service monitor activity status 267 applications, integrate with 232 custom starting/stopping actions 203 defined 13 troubleshooting 265 service monitor configuration add or update 196 advanced settings probe severity 198 scripts 201 delete 205 disable 205 enable 206 service monitor types custom 195 FTP 193 HTTP 193 HTTPS 193 NTSERVICE 193 SMTP 194 TCP 195 shared disks import 65 SHARED_FI
Index guidelines 179 policy for failback 181 rehost via Applications tab 184 virtual host configuration add or update 180 delete 184 volume database 12 volumes dynamic volume recovery 88 import 89 importable 90 unimportable 90 unimported 90 volumes, basic or dynamic 75 278