HP StorageWorks Clustered File System 3.6.0 Windows Storage Server Edition Administration Guide (403103-005, January 2008)
Table Of Contents
- Contents
- HP Technical Support
- Quick Start Checklist
- Introduction to HP Clustered File System
- Cluster Administration
- Administrative Considerations and Restrictions
- Tested Configuration Limits
- Volume and Filesystem Limits
- User Authentication
- Start the Management Console
- Cluster Management Applications
- The HP CFS Management Console
- View Installed Software
- Start HP Clustered File System
- Stop HP Clustered File System
- Back Up and Restore the Cluster Configuration
- HP Clustered File System Network Port Numbers
- Configure Servers
- Configure Network Interfaces
- Configure the SAN
- Configure Dynamic Volumes
- Configure PSFS Filesystems
- Manage Disk Quotas
- Manage Hardware Snapshots
- Configure Security Features
- Configure Event Notifiers and View Events
- Overview
- Install and Configure the Microsoft SNMP Service
- Cluster Event Viewer
- Configure Event Notifier Services
- Select Events for a Notifier Service
- Configure the SNMP Notifier Service
- Configure the Email Notifier Service
- Configure the Script Notifier Service
- View Configurations from the Command Line
- Test Notifier Services
- Enable or Disable a Notifier Service
- Restore Notifier Event Settings to Default Values
- Import or Export the Notifier Event Settings
- Using Custom Notifier Scripts
- Cluster Operations on the Applications Tab
- Configure Virtual Hosts
- Configure Service Monitors
- Configure Device Monitors
- Advanced Monitor Topics
- SAN Maintenance
- Other Cluster Maintenance
- Management Console Icons
- Index
Chapter 15: Configure Virtual Hosts
• After the virtual host fails over to node 2, a service monitor probe fails
on that node. Both nodes now have a down service monitor, so failback
does not occur: the servers are equally healthy. If the failed service is
then restored on node 1, that node becomes healthier than node 2 and
failback occurs. (Note that if the virtual host policy were
AUTOFAILBACK, failback would occur as soon as the probe failed on
node 2, because both servers would then be equally healthy.)
• After the virtual host fails over to node 2, all service monitor probes
fail on that node. Node 1, with one down monitor, is now healthier
than node 2, with three down monitors, and failback will occur.
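The health comparison in the scenarios above can be sketched in a few lines. This is an illustrative model only, assuming health is measured by the count of down monitors; `should_fail_back` and its arguments are hypothetical names, not HP CFS internals.

```python
# Hypothetical sketch of the failback decision described above.
# Health is modeled simply as the number of down monitors per node.
def should_fail_back(policy, down_on_primary, down_on_backup):
    """Return True if the virtual host should fail back to the primary.

    With AUTOFAILBACK, failback happens when the primary is at least as
    healthy as the backup; otherwise the primary must be strictly
    healthier (fewer down monitors).
    """
    if policy == "AUTOFAILBACK":
        return down_on_primary <= down_on_backup
    return down_on_primary < down_on_backup

# Scenario 1: one down monitor on each node -- equally healthy, no failback.
print(should_fail_back("NOFAILBACK", 1, 1))   # False
# Service restored on node 1 -- node 1 is now healthier, failback occurs.
print(should_fail_back("NOFAILBACK", 0, 1))   # True
# With AUTOFAILBACK, equal health is already enough to fail back.
print(should_fail_back("AUTOFAILBACK", 1, 1))  # True
```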
Failback Policy and Monitor Probe Severity

The following table shows how the virtual host failback policy interacts
with the probe severity setting for service and device monitors.

| Virtual Host Policy | Monitor Probe Severity | Behavior When Probe Reports DOWN |
|---------------------|------------------------|----------------------------------|
| AUTOFAILBACK | NOFAILOVER | Failover does not occur. |
| AUTOFAILBACK | AUTORECOVER | Failover occurs. When service is restored, failback occurs. |
| AUTOFAILBACK | NOAUTORECOVER | Failover occurs and the monitor is disabled on the original server. When the monitor is reenabled, failback occurs. |
| NOFAILBACK | NOFAILOVER | Failover does not occur. |
| NOFAILBACK | AUTORECOVER | Failover occurs. The virtual host remains on the backup server until a "healthier" server is available. |
| NOFAILBACK | NOAUTORECOVER | Failover occurs and the monitor is disabled on the original server. The virtual host remains on the backup server until a "healthier" server is available. |
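The policy/severity combinations above can also be written out as a simple lookup. This is a sketch for reference only; the dictionary keys and behavior strings are assumptions for illustration, not HP CFS configuration syntax.

```python
# Illustrative encoding of the failback-policy table. The strings are
# paraphrases of the documented behavior, not product output.
BEHAVIOR = {
    ("AUTOFAILBACK", "NOFAILOVER"):    "no failover",
    ("AUTOFAILBACK", "AUTORECOVER"):   "failover; failback when service is restored",
    ("AUTOFAILBACK", "NOAUTORECOVER"): "failover; monitor disabled; failback when monitor is reenabled",
    ("NOFAILBACK",   "NOFAILOVER"):    "no failover",
    ("NOFAILBACK",   "AUTORECOVER"):   "failover; stays on backup until a healthier server is available",
    ("NOFAILBACK",   "NOAUTORECOVER"): "failover; monitor disabled; stays on backup until a healthier server is available",
}

# Look up what happens when a probe reports DOWN for a given combination.
print(BEHAVIOR[("AUTOFAILBACK", "AUTORECOVER")])
```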