Dell™ Failover Clusters With Microsoft® Windows Server® 2003 Software Installation and Troubleshooting Guide

www.dell.com | support.dell.com
Notes, Notices, and Cautions

NOTE: A NOTE indicates important information that helps you make better use of your computer.

NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.

Information in this document is subject to change without notice.
© 2008 Dell Inc. All rights reserved.
Contents

1 Introduction
   Virtual Servers and Resource Groups
   Quorum Resource
   Cluster Solution
   Supported Cluster Configurations
   Cluster Components and Requirements
      Operating System
      Cluster Nodes
      Cluster Storage

   Installing the Storage Connection Ports and Drivers
   Installing and Configuring the Shared Storage System
      Assigning Drive Letters and Mount Points
      Configuring Hard Drive Letters When Using Multiple Shared Storage Systems
      Formatting and Assigning Drive Letters and Volume Labels to the Disks
   Configuring Your Failover Cluster

4 Understanding Your Failover Cluster
   Cluster Objects
   Cluster Networks
      Preventing Network Failure
   Network Interfaces
   Forming a New Cluster
   Cluster Resources

   Removing Nodes From Clusters Running Microsoft Windows Server 2003
   Running chkdsk /f on a Quorum Resource
   Recovering From a Corrupt Quorum Disk
   Changing the MSCS Account Password in Windows Server 2003
   Reformatting a Cluster Disk

6 Upgrading to a Cluster Configuration
   Before You Begin
Introduction Clustering uses specific hardware and software to join multiple systems together to function as a single system and provide an automatic failover solution. If one of the clustered systems (also known as cluster nodes, or nodes) fails, resources running on the failed system are moved (or failed over) to one or more systems in the cluster by the Microsoft® Cluster Service (MSCS) software. MSCS is the failover software component in specific versions of the Windows® operating system.
Quorum Resource A single shared disk, which is designated as the quorum resource, maintains the configuration data (including all the changes that have been applied to a cluster database) necessary for recovery when a node fails.
Cluster Components and Requirements

Your cluster requires the following components:
• Operating System
• Cluster nodes (servers)
• Cluster Storage

Operating System

Table 1-1 provides an overview of the supported operating systems. See your operating system documentation for a complete list of features.

NOTE: Some of the core services are common to all the operating systems.

Table 1-1.
Cluster Nodes

Table 1-2 lists the hardware requirements for the cluster nodes.

Table 1-2. Cluster Node Requirements

Cluster nodes: Two to eight Dell PowerEdge™ systems running the Windows Server 2003 operating system.

RAM: At least 256 MB of RAM installed on each cluster node for Windows Server 2003, Enterprise Edition or Windows Server 2003 R2, Enterprise Edition.
Table 1-2. Cluster Node Requirements (continued)

iSCSI Initiator and NICs for iSCSI Access: For clusters with iSCSI storage, install the Microsoft iSCSI Software Initiator (including the iSCSI port driver and Initiator Service) on each cluster node. Two iSCSI NICs or Gigabit Ethernet NIC ports per node. NICs with a TCP/IP Off-load Engine (TOE) or iSCSI Off-load capability may also be used for iSCSI traffic.
Other Documents You May Need CAUTION: The safety information that is shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document. NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Preparing Your Systems for Clustering CAUTION: Only trained service technicians are authorized to remove and access any of the components inside the system. See the safety information shipped with your system for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.
5 Configure each server node as a member server in the same Windows Active Directory Domain. NOTE: It may also be possible to have cluster nodes serve as Domain controllers. For more information, see “Selecting a Domain Model”. 6 Establish the physical storage topology and any required storage network settings to provide connectivity between the storage array and the servers that will be configured as cluster nodes. Configure the storage system(s) as described in your storage system documentation.
Installation Overview This section provides installation overview procedures for configuring a cluster running the Microsoft® Windows Server® 2003 operating system. NOTE: Storage management software may vary and use different terms than those in this guide to refer to similar entities. For example, the terms "LUN" and "Virtual Disk" are often used interchangeably to designate an individual RAID volume that is provided to the cluster nodes by the storage array.
6 Install or update the storage connection drivers. For more information on connecting your cluster nodes to a shared storage array, see "Preparing Your Systems for Clustering" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide that corresponds to your storage array. For more information on the corresponding supported adapters and driver versions, see Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.
Selecting a Domain Model

On a cluster running the Microsoft Windows operating system, all nodes must belong to a common domain or directory model. The following configurations are supported:
• All nodes are member servers in an Active Directory® domain.
• All nodes are domain controllers in an Active Directory domain.
• At least one node is a domain controller in an Active Directory domain and the remaining nodes are member servers.
Installing and Configuring the Microsoft Windows Operating System NOTE: Windows standby mode and hibernation mode are not supported in cluster configurations. Do not enable either mode. 1 Ensure that the cluster configuration meets the requirements listed in "Cluster Configuration Overview." 2 Cable the hardware. NOTE: Do not connect the nodes to the shared storage systems yet.
8 Reboot node 1. 9 From node 1, write the disk signature and then partition, format, and assign drive letters and volume labels to the hard drives in the storage system using the Windows Disk Management application. For more information, see "Preparing Your Systems for Clustering" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com. 10 On node 1, verify disk access and functionality on all shared disks.
Configuring Windows Networking You must configure the public and private networks in each node before you install MSCS. The following subsections introduce you to some procedures necessary for the networking prerequisites. Assigning Static IP Addresses to Cluster Resources and Components A static IP address is an Internet address that a network administrator assigns exclusively to a system or a resource. The address assignment remains in effect until it is changed by the network administrator.
Table 2-1. Applications and Hardware Requiring IP Address Assignments (continued) Application/Hardware Description Cluster node network adapters For cluster operation, two network adapters are required: one for the public network (LAN/WAN) and another for the private network (sharing heartbeat information between the nodes).
Table 2-2. Examples of IP Address Assignments (continued)

Usage: Private network static IP address cluster interconnect (for node-to-node communications)
Cluster Node 1: 10.0.0.1
Cluster Node 2: 10.0.0.2

Usage: Private network subnet mask
Cluster Node 1: 255.255.255.0
Cluster Node 2: 255.255.255.0

NOTE: Do not configure Default Gateway, NetBIOS, WINS, and DNS on the private network. If you are running Windows Server 2003, disable NetBIOS on the private network.
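The subnet rule in the note above (private and public networks on different subnets) can be sanity-checked with a short script. The private addresses come from Table 2-2; the public addresses here are hypothetical placeholders, since the public half of the table is not shown:

```python
import ipaddress

# Private-network addresses from Table 2-2; public addresses are
# hypothetical placeholders for illustration only.
private = [ipaddress.ip_interface("10.0.0.1/24"),
           ipaddress.ip_interface("10.0.0.2/24")]
public = [ipaddress.ip_interface("192.168.1.101/24"),
          ipaddress.ip_interface("192.168.1.102/24")]

# Both nodes' private NICs must share one subnet...
assert private[0].network == private[1].network
# ...and the private subnet must differ from the public subnet,
# or MSCS setup prompts you to configure only one network.
assert private[0].network != public[0].network
print("subnet layout OK")
```

If the second assertion fails in your environment, revisit the static IP plan before installing MSCS.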
Setting the Network Interface Binding Order for Clusters Running Windows Server 2003 1 Click the Start button, select Control Panel, and double-click Network Connections. 2 Click the Advanced menu, and then click Advanced Settings. The Advanced Settings window appears. 3 In the Adapters and Bindings tab, ensure that the Public connection is at the top of the list and followed by the Private connection. To change the connection order: a Click Public or Private.
Configuring the Internet Connection Firewall The Windows Server 2003 operating system includes an enhanced Internet Connection Firewall that can be configured to block incoming network traffic to a PowerEdge system. To prevent the Internet Connection Firewall from disrupting cluster communications, additional configuration settings are required for PowerEdge systems that are configured as cluster nodes in an MSCS cluster.
Installing and Configuring the Shared Storage System

The shared storage array consists of disk volumes that are used in your cluster. The management software for each supported shared storage array provides a way to create disk volumes and assign these volumes to all the nodes in your cluster.
To assign drive letters, create mount points, and format the disks on the shared storage system: 1 Turn off the remaining node(s) and open Disk Management on node 1. 2 Allow Windows to enter a signature on all new physical or logical drives. NOTE: Do not create dynamic disks on your hard drives. 3 Locate the icon for the first unnamed, unformatted drive on the shared storage system. 4 Right-click the icon and select Create from the submenu.
To create a mount point: a Click Add. b Click Mount in the following empty NTFS folder. c Type the path to an empty folder on an NTFS volume, or click Browse to locate it. d Click OK. e Go to step 9. 9 Click Yes to confirm the changes. 10 Right-click the drive icon again and select Format from the submenu. 11 Under Volume Label, enter a descriptive name for the new volume; for example, Disk_Z or Email_Data. 12 In the dialog box, change the file system to NTFS, select Quick Format, and click Start.
Configuring Hard Drive Letters When Using Multiple Shared Storage Systems Before installing MSCS, ensure that both nodes have the same view of the shared storage systems. Because each node has access to hard drives that are in a common storage array, each node must have identical drive letters assigned to each hard drive. Your cluster can access more than 22 volumes using volume mount points in Windows Server 2003. NOTE: Drive letters A through D are reserved for the local system.
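The 22-volume figure above follows directly from the reserved letters: with A through D set aside for the local system, only E through Z remain for shared disks, which is why volume mount points are needed to go beyond that count. A quick check:

```python
import string

reserved = set("ABCD")  # A-D are reserved for the local system per the guide
available = [c for c in string.ascii_uppercase if c not in reserved]
print(len(available), "shared-drive letters:", "".join(available))
# 22 shared-drive letters: EFGHIJKLMNOPQRSTUVWXYZ
```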
c Assign the drive letters for the drives. This procedure allows Windows to mount the volumes.
d Reassign the drive letter, if necessary. To reassign the drive letter:
  • With the mouse pointer on the same icon, right-click and select Change Drive Letter and Path from the submenu.
  • Click Edit, select the letter you want to assign the drive (for example, Z), and then click OK.
  • Click Yes to confirm the changes.
e Power down the node.
Configuring Microsoft Cluster Service (MSCS) With Windows Server 2003 The cluster setup files are automatically installed on the system disk. To create a new cluster: 1 Click the Start button, select Programs→Administrative Tools→Cluster Administrator. 2 From the File menu, select Open Connection. 3 In the Action box of the Open Connection to Cluster, select Create new cluster. The New Server Cluster Wizard window appears. 4 Click Next to continue.
Adding Cluster Nodes Using the Advanced Configuration Option If you are adding additional nodes to the cluster using the Add Nodes wizard and the nodes are not configured with identical internal storage devices, the wizard may generate one or more errors while checking cluster feasibility in the Analyzing Configuration menu. If this situation occurs, select Advanced Configuration Option in the Add Nodes wizard to add the nodes to the cluster.
13 In the Password field of the Cluster Service Account menu, type the password for the account used to run the Cluster Service, and click Next. The Proposed Cluster Configuration menu appears with a summary with the configuration settings for your cluster. 14 Click Next to continue. The new systems (hosts) are added to the cluster. When completed, Tasks completed appears in the Adding Nodes to the Cluster menu. NOTE: This process may take several minutes to complete. 15 Click Next to continue.
Creating a LUN for the Quorum Resource It is recommended that you create a separate LUN—approximately 1 GB in size—for the quorum resource. When you create the LUN for the quorum resource: • Format the LUN with NTFS. • Use the LUN exclusively for your quorum logs. • Do not store any application data or user data on the quorum resource. • To easily identify the quorum resource, it is recommended that you assign the drive letter "Q" to the quorum resource.
Verifying MSCS Operation After you install MSCS, verify that the service is operating properly. If you selected Cluster Service when you installed the operating system, see "Obtaining More Information" on page 34. If you did not select Cluster Service when you installed the operating system: 1 Click the Start button and select Programs→Administrative Tools, and then select Services. 2 In the Services window, verify the following: • In the Name column, Cluster Service appears.
Installing Your Cluster Management Software This section provides information on configuring and administering your cluster using Microsoft® Cluster Administrator. Microsoft provides Cluster Administrator as a built-in tool for cluster management. Microsoft Cluster Administrator Cluster Administrator is Microsoft’s tool for configuring and administering a cluster. The following procedures describe how to run Cluster Administrator locally on a cluster node and how to install the tool on a remote console.
To install Cluster Administrator and the Windows Administration Tools package on a remote console: 1 Select a system that you wish to configure as the remote console. 2 Identify the operating system that is currently running on the selected system.
Understanding Your Failover Cluster Cluster Objects Cluster objects are the physical and logical units managed by a cluster.
Node-to-Node Communication If a network is configured for public (client) access only, the Cluster Service will not use the network for internal node-to-node communications. If all of the networks that are configured for private (or mixed) communication fail, the nodes cannot exchange information and one or more nodes will terminate MSCS and temporarily stop participating in the cluster.
When MSCS is configured on a node, the administrator chooses whether that node forms its own cluster or joins an existing cluster. When MSCS is started, the node searches for other active nodes on networks that are enabled for internal cluster communications. Forming a New Cluster MSCS maintains a current copy of the cluster database on all active nodes. If a node cannot join a cluster, the node attempts to gain control of the quorum resource and form a cluster.
• Check the online state of the resource by configuring the Looks Alive (general check of the resource) and Is Alive (detailed check of the resource) polling intervals in MSCS. • Specify the time requirement for resolving a resource in a pending state (Online Pending or Offline Pending) before MSCS places the resource in Offline or Failed status. • Set specific resource parameters.
Setting Advanced Resource Properties By using the Advanced tab in the Properties dialog box, you can perform the following tasks: • Restart a resource or allow the resource to fail. See "Adjusting the Threshold and Period Values" on page 43 for more information. • Adjust the Looks Alive or Is Alive parameters. • Select the default number for the resource type. • Specify the time parameter for a resource in a pending state.
Quorum Resource Normally, the quorum resource is a common cluster resource that is accessible by all of the nodes. The quorum resource—typically a physical disk on a shared storage system—maintains data integrity, cluster unity, and cluster operations. When the cluster is formed or when the nodes fail to communicate, the quorum resource guarantees that only one set of active communicating nodes is allowed to form a cluster.
Adjusting the Threshold and Period Values

The Threshold value determines the number of attempts to restart the resource before the resource fails over. The Period value sets the window of time within which those restart attempts must occur. If MSCS exceeds the maximum number of restart attempts within the specified time period and the failed resource has not been restarted, MSCS considers the resource to be failed.
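As a sketch of the rule just described, a resource fails over only when the Threshold number of restarts falls inside one Period window. The values here are illustrative, not MSCS defaults:

```python
def should_fail_over(restart_times, threshold, period, now):
    """Return True if at least `threshold` restart attempts occurred
    within the trailing `period` seconds; MSCS then fails the resource
    over instead of restarting it again."""
    recent = [t for t in restart_times if now - t <= period]
    return len(recent) >= threshold

# Illustrative settings: Threshold of 3 restarts within a 900-second Period.
attempts = [10, 200, 850]  # seconds at which restarts occurred

print(should_fail_over(attempts, threshold=3, period=900, now=900))   # True
print(should_fail_over(attempts, threshold=3, period=900, now=2000))  # False
```

In the second call the three old restarts have aged out of the Period window, so the resource would simply be restarted again rather than failed over.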
Resource Dependencies A dependent resource requires another resource to operate. Table 4-4 describes resource dependencies. Table 4-4. Resource Dependencies Term Definition Dependent resource A resource that depends on other resources. Dependency A resource on which another resource depends. Dependency tree A series of dependency relationships or hierarchy. The following rules apply to a dependency tree: • A dependent resource and its dependencies must be in the same group.
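The dependency-tree rules above imply an ordering constraint: a resource's dependencies must come online before the resource itself. A minimal sketch using a hypothetical group (IP address, network name, physical disk, file share; the dependency edges are typical for a file-share group, not taken from this guide):

```python
from graphlib import TopologicalSorter

# Map each resource to the resources it depends on (hypothetical group).
depends_on = {
    "IP Address": [],
    "Physical Disk": [],
    "Network Name": ["IP Address"],
    "File Share": ["Network Name", "Physical Disk"],
}

# A valid online order lists every dependency before its dependent;
# taking resources offline uses the reverse of this order.
online_order = list(TopologicalSorter(depends_on).static_order())
print(online_order)
```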
4 On the File menu, point to New and click Resource. 5 In the New Resource wizard, type the appropriate information in the Name and Description fields and select the appropriate Resource type and Group for the new resource. 6 Click Next. 7 Add or remove possible owners of the resource and click Next. The New Resource window appears with Available resources and Resource dependencies selections. • To add dependencies, under Available resources, select a resource, and then click Add.
File Share Resource Type If you want to use your cluster solution as a high-availability file server, select one of the following types of file share for your resource: • Basic file share — Publishes a file folder to the network under a single name. • Share subdirectories — Publishes several network names—one for each file folder and all of its immediate subfolders. This method is an efficient way to create large numbers of related file shares on a file server.
In an active/passive configuration, one or more active cluster nodes process requests for a clustered application while the passive cluster nodes only wait for the active node(s) to fail. Table 4-5 provides a description of active/passive configuration types. Table 4-5.
Failover Policies When implementing a failover policy, configure failback if the cluster node lacks the resources (such as memory or processing power) to support cluster node failures.
N + I Failover N + I failover is an active/passive policy where dedicated passive cluster node(s) provide backup for the active cluster node(s). This solution is best for critical applications that require dedicated resources. However, backup nodes add a higher cost of ownership because they remain idle and do not provide the cluster with additional network resources. Figure 4-1 shows an example of a 6 + 2 (N + I) failover configuration with six active nodes and two passive nodes.
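The 6 + 2 arrangement in Figure 4-1 can be sketched as follows. Node and group names are hypothetical, and real MSCS placement is governed by each group's preferred-owners list rather than this simplified first-spare rule:

```python
# N + I sketch: six active nodes run resource groups; two idle spares
# (the "+ I") absorb a failed node's entire load.
active = ["node1", "node2", "node3", "node4", "node5", "node6"]
spares = ["node7", "node8"]

def fail_over(groups_on_failed_node, spares_remaining):
    """Move every group from the failed node to the first idle spare."""
    spare = spares_remaining.pop(0)
    return {group: spare for group in groups_on_failed_node}

print(fail_over(["A", "B"], spares))  # both groups land on node7
print(spares)                         # node8 remains idle for the next failure
```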
Configuring Group Affinity On N + I (active/passive) failover clusters running Windows Server 2003, some resource groups may conflict with other groups if they are running on the same node. For example, running more than one Microsoft Exchange virtual server on the same node may generate application conflicts. Use Windows Server 2003 to assign a public property (or attribute) to a dependency between groups to ensure that they fail over to similar or separate nodes. This property is called group affinity.
If you have applications that run well on two-node clusters, and you want to migrate these applications to Windows Server 2003, a failover pair is a good policy. This solution is easy to plan and administer, and applications that do not run well on the same server can easily be moved into separate failover pairs. However, in a failover pair, applications on the pair cannot tolerate two node failures. Figure 4-2 shows an example of a failover pair configuration.
resource group to fail over. In this example, node 1 owns applications A, B, and C. If node 1 fails, applications A, B, and C fail over to cluster nodes 2, 3, and 4. Configure the applications similarly on nodes 2, 3, and 4. When implementing multiway failover, configure failback to avoid performance degradation. See "Understanding Your Failover Cluster" on page 37 for more information. Figure 4-3.
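The example above (node 1 owning applications A, B, and C, which spread to nodes 2, 3, and 4 on failure) amounts to a per-application target map. A minimal sketch with the same hypothetical names:

```python
# Multiway failover sketch: each application on the failed node names
# its own failover target, so the load spreads across several survivors
# instead of piling onto a single node.
failover_target = {"A": "node2", "B": "node3", "C": "node4"}

def redistribute(apps_on_failed_node, targets):
    """Map each application from the failed node to its configured target."""
    return {app: targets[app] for app in apps_on_failed_node}

print(redistribute(["A", "B", "C"], failover_target))
# {'A': 'node2', 'B': 'node3', 'C': 'node4'}
```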
Figure 4-4. Example of a Four-Node Failover Ring Configuration (applications A, B, C, and D distributed one per node)

Failover and Failback Capabilities

Failover

When an application or cluster resource fails, MSCS detects the failure and attempts to restart the resource. If the restart fails, MSCS takes the application offline, moves the application and its resources to another node, and restarts the application on the other node. See "Setting Advanced Resource Properties" for more information.
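The four-node ring in Figure 4-4 moves a failed node's applications to the next node in a fixed order, wrapping from the last node back to the first. A minimal sketch, with hypothetical node names:

```python
# Failover-ring sketch: resources always move to the next node in a
# fixed ring order (the last node wraps around to the first).
ring = ["node1", "node2", "node3", "node4"]

def next_in_ring(failed_node):
    i = ring.index(failed_node)
    return ring[(i + 1) % len(ring)]

print(next_in_ring("node2"))  # node3
print(next_in_ring("node4"))  # node1 (wraps around)
```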
You can configure failback to occur immediately, at any given time, or not at all. To minimize the delay until the resources come back online, configure the failback time during off-peak hours. Modifying Your Failover Policy Use the following guidelines when you modify your failover policy: • Define how MSCS detects and responds to group resource failures. • Establish dependency relationships between the resources to control the order in which the resources are taken offline.
Maintaining Your Cluster Adding a Network Adapter to a Cluster Node NOTE: To perform this procedure, Microsoft® Windows Server® 2003 (including the latest service packs) and Microsoft Cluster Services (MSCS) must be installed on both nodes. 1 Move all resources from the node you are upgrading to another node. See the MSCS documentation for information about moving cluster resources to a specific node. 2 Shut down the node you are upgrading. 3 Install the additional network adapter.
7 Click OK and exit the network adapter properties. 8 Click the Start button and select Programs→Administrative Tools→Cluster Administrator. 9 Click the Network tab. 10 Verify that a new resource labeled "New Cluster Network" appears in the window. To rename the new resource, right-click the resource and enter a new name. 11 Move all cluster resources back to the original node. 12 Repeat step 2 through step 11 on each node.
Removing Nodes From Clusters Running Microsoft Windows Server 2003 1 Move all resource groups to another cluster node. 2 Click the Start button, select Programs→Administrative Tools→Cluster Administrator. 3 In Cluster Administrator, right-click the icon of the node you want to uninstall and then select Stop Cluster Service. 4 In Cluster Administrator, right-click the icon of the node you want to uninstall and then select Evict Node.
Recovering From a Corrupt Quorum Disk The quorum disk maintains the configuration data necessary for recovery when a node fails. If the quorum disk resource is unable to come online, the cluster does not start and all of the shared drives are unavailable. If this situation occurs and you must run chkdsk on the quorum disk, start the cluster manually from the command line. To start the cluster manually from a command line prompt: 1 Open a command line window.
Changing the MSCS Account Password in Windows Server 2003

To change the service account password for all nodes running Microsoft Windows Server 2003, type the following at a command line prompt:

Cluster /cluster:[cluster_name] /changepass

where cluster_name is the name of your cluster.

For help changing the password, type:

cluster /changepass /help

NOTE: Windows Server 2003 does not accept blank passwords for MSCS accounts.
10 On the Windows desktop, right-click the My Computer icon and select Manage. The Computer Management window appears. 11 In the Computer Management left pane, click Disk Management. The physical disk information appears in the right pane. 12 Right-click the disk you want to reformat and select Format. Disk Management reformats the disk. 13 In the File menu, select Exit. 14 In the "Looks Alive" poll interval box, select Use value from resource type and click OK.
Upgrading to a Cluster Configuration Before You Begin Before you upgrade your non-clustered system to a cluster solution: • Back up your data. • Verify that your hardware and storage systems meet the minimum system requirements for a cluster as described in "System Requirements" section of Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
Completing the Upgrade After installing the required hardware and network adapter upgrades, set up and cable the system hardware. NOTE: You may need to reconfigure your switch or storage groups so that both nodes in the cluster can access their logical unit numbers (LUNs). The final phase for upgrading to a cluster solution is to install and configure Windows Server 2003 with MSCS.
Troubleshooting

This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting

Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Table A-1. General Cluster Troubleshooting (continued)

Probable Cause: You are using a Dell PowerVault MD3000 or MD3000i storage array, and the Host Group or Host-to-Virtual Disk Mappings is not correctly created.
Corrective Action: Verify the following:
• Host Group is created and the cluster nodes are added to the Host Group.
• Host-to-Virtual Disk Mapping is created and the virtual disks are assigned to the Host Group containing the cluster nodes.
Table A-1. General Cluster Troubleshooting (continued)

Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable Cause: The node-to-node network has failed due to a cabling or hardware failure, or long delays in node-to-node communications may be normal.
Corrective Action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected to the correct NICs.
Table A-1. General Cluster Troubleshooting (continued)

Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started, a cluster has not been formed on the system, or the system has just been booted and services are still starting.
Corrective Action: Verify that the Cluster Service is running and that a cluster has been formed.
Table A-1. General Cluster Troubleshooting (continued)

Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable Cause: The TCP/IP configuration is incorrect.
Corrective Action: The node-to-node network and public network must be assigned static IP addresses on different subnets. See "Assigning Static IP Addresses to Cluster Resources and Components" for information about assigning the network IPs.
Table A-1. General Cluster Troubleshooting (continued)

Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by MSCS and the clustered applications or services. See Microsoft Knowledge Base article KB883398 at the Microsoft Support website at support.microsoft.com for more information.
Table A-1. General Cluster Troubleshooting (continued)

Problem: Cluster Services may not operate correctly on a cluster running Windows Server 2003 when the Internet Firewall is enabled.
Probable Cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective Action: Perform the following steps:
1 On the Windows desktop, right-click My Computer and click Manage.
2 In the Computer Management window, double-click Services.
3 In the Services window, double-click Cluster Services.
Table A-1. General Cluster Troubleshooting (continued)

Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by the MSCS and the clustered applications or services.
Table A-1. General Cluster Troubleshooting (continued)

Problem: You are using a Dell PowerVault MD3000 or MD3000i storage array and one of the following occurs:
Probable Cause: The snapshot virtual disk has been erroneously mapped to the node that does not own the source disk.
Corrective Action: Unmap the snapshot virtual disk from the node not owning the source disk, then assign it to the node that owns the source disk.