Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Getting Started
Preparing Your Systems for Clustering
Cabling Your Cluster Hardware
Maintaining Your Cluster
Using MSCS
Troubleshooting
Cluster Data Sheet
Abbreviations and Acronyms

Notes, Notices, and Cautions

NOTE: A NOTE indicates important information that helps you make better use of your computer.

NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
Getting Started
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Intended Audience
Obtaining Technical Assistance
Overview of NAS Clusters
NAS Cluster Features
NAS Cluster Components
Minimum System Requirements
Other Documents You May Need

This guide provides information for installing, configuring, and troubleshooting a Dell™ PowerVault™ network attached storage (NAS) system's hardware and software components in a cluster configuration.
Intended Audience

This guide addresses two audience levels:

Users and system installers who will perform general setup, cabling, and configuration of the PowerVault NAS Cluster components

Trained service technicians who will perform more extensive installations, such as firmware upgrades and installation of required expansion cards

Obtaining More Information

See "Obtaining Technical Assistance" and "Overview of NAS Clusters" for a general description of PowerVault NAS SCSI clusters and clustering technology.
both systems

Storage systems — One to four PowerVault 21xS or 22xS storage systems

Each cluster node is configured with software and network resources that enable it to interact with the other node to provide a mutual redundancy of operation and application program processing. Because the systems interact in this way, they appear as a single system to the network clients. As an integrated system, the PowerVault NAS Cluster is designed to dynamically handle most hardware failures and prevent downtime.
Primary Domain Controller (PDC) NOTE: If another domain controller is not available on the network, you can configure a NAS cluster node as a domain controller for the NAS cluster. However, client systems outside of the NAS cluster cannot be included as members of the NAS cluster domain.
The following subsections describe the components that are common to the PowerVault NAS cluster, as well as the components that are specific to each cluster system. Table 1-3 lists the common components that are used in a PowerVault NAS cluster.

Table 1-3. Cluster Components

Component: NAS systems
Description: Two identical PowerVault 770N or 775N NAS systems in a homogeneous pair with the Windows Storage Server 2003, Enterprise Edition operating system installed in each system.
Cluster Platform Guide.

Crossover cable: One Ethernet crossover cable for the node-to-node cluster interconnect (private network).

Keyboard and monitor: A keyboard and monitor are required for troubleshooting the cluster nodes.

RAID Controllers

Table 1-5 lists the Dell PowerEdge™ Expandable RAID controllers (PERC) that are used to connect the PowerVault 770N and 775N systems to external PowerVault storage systems. See the PERC documentation included with your system for a complete list of features.
Figure 1-2. PowerVault 775N Cluster Solution

Minimum System Requirements

If you are installing a new PowerVault NAS SCSI cluster or upgrading an existing system to a PowerVault NAS SCSI cluster, review the previous subsections to ensure that your hardware components meet the minimum system requirements listed in the following section.
PowerVault NAS Cluster Minimum System Requirements

PowerVault NAS SCSI cluster configurations require the following hardware and software components:

Cluster nodes
Cluster storage
Cluster interconnects (private network)
Client network connections (public network)
Operating system and storage management software

Cluster Nodes

Table 1-6 lists the hardware requirements for the cluster nodes.

Table 1-6.
Table 1-7 provides the minimum requirements for the shared storage system(s). Table 1-7.
The cluster connections to the public network (for client access of cluster resources) require one or more identical network adapters supported by the system for each cluster node. Configure this network in mixed mode (All Communications) so that it can carry the cluster heartbeat between the cluster nodes if the private network fails for any reason.

Other Documents You May Need

The System Information Guide provides important safety and regulatory information.
Preparing Your Systems for Clustering
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Before You Begin
Installation Overview
Selecting a Domain Model
Configuring Windows Networking
Assigning Static IP Addresses to Your Cluster Resources and Components
Installing a PERC RAID Controller
Installing and Configuring the Shared Storage System
Installing a PowerVault 770N NAS Cluster Minimum Configuration
Installing a PowerVault 775N NAS Cluster Minimum Configuration
See "Installing and Configuring the Shared Storage System" for more information. 4. Cable the system hardware for clustering. See "Cabling Your Cluster Hardware" for more information. 5. Configure the storage system(s) as described in your storage system documentation. 6. Configure the PERC cards as described in your PERC card documentation. 7. Configure RAID for the internal SCSI hard drives, configure the hard drives using the controller's BIOS utility or Dell OpenManage™ Array Manager.
7. Verify cluster functionality. Ensure that:

Your cluster components are communicating properly with each other.

MSCS is started.

See "Verifying Cluster Functionality" for more information.

8. Verify cluster resource availability. Use Cluster Administrator to check the running state of each resource group. See "Verifying Cluster Resource Availability" for more information.
before and after you install MSCS. If the IP assignments are not set up correctly, the cluster nodes may not be able to communicate with the domain. See "Troubleshooting" for more information. PowerVault NAS SCSI cluster configurations running the Windows operating system require static IP addresses assigned to hardware and software applications in your cluster, as listed in Table 2-1. Table 2-1.
to communicate with the domain and the Cluster Configuration Wizard may not allow you to configure all of your networks. See "Troubleshooting" for more information on troubleshooting problems. NOTE: Additional fault tolerance for the LAN segments can be achieved by using network adapters that support adapter teaming or by having multiple LAN segments. Do not use fault tolerant network adapters for the cluster interconnect, as these network adapters require a dedicated link between the cluster nodes.
2. At the prompt, type:

ipconfig /all

3. Press <Enter>. All known IP addresses for each local server appear on the screen.

4. Issue the ping command from each remote system. Ensure that each local server responds to the ping command.

Installing a PERC RAID Controller

You can install a PERC controller in your PowerVault NAS systems to manage your external storage systems. When you install a RAID controller in your system, install the controller in the correct PCI slot.
12. Repeat this procedure for cluster node 2.
NOTE: Only the FORCED JOINED JP8 jumper contains a jumper plug. The Dell-installed default for jumpers JP1, JP2, JP6, and JP7 is a noncluster operation (default configuration), as shown in Figure 2-1. 2. Move the jumper plug to connect the two pins of the FORCED JOINED JP8 jumper. 3. Repeat step 1 and step 2 for the second SEMM. 4. Install the two SEMMs in the PowerVault 21xS storage system.
Figure 2-3.
Split-Bus Module

Your system supports three SCSI bus modes controlled by the split-bus module:

Joined-bus mode
Split-bus mode
Cluster mode

These modes are controlled by the position of the bus configuration switch when the system is turned on. Figure 2-4 illustrates the switch position for each mode.

Figure 2-4.
The only difference between cluster mode and joined-bus mode is the SCSI ID occupied by the enclosure services processor. When cluster mode is detected, the processor SCSI ID changes from 6 to 15, allowing a second initiator to occupy SCSI ID 6. Because the processor now occupies SCSI ID 15, that ID is no longer available for a hard drive, leaving 13 available hard drives in cluster mode. You must therefore remove the hard drive at SCSI ID 15 from the enclosure when using the enclosure in cluster mode.
Mode: Joined-bus mode
Position of Bus Configuration Switch: Up
Function: LVD termination on the split-bus module is disabled, electrically joining the two SCSI buses to form one contiguous bus. In this mode, neither the split-bus nor the cluster LED indicators on the front of the enclosure are illuminated.

Mode: Split-bus mode
Position of Bus Configuration Switch: Center
Function: LVD termination on the split-bus module is enabled and the two buses are electrically isolated, resulting in two seven-drive SCSI buses.
Figure 2-6. Important System Warning

The warning message appears on the screen immediately after you activate the PERC BIOS configuration utility during the system's POST, and when you attempt to perform a data-destructive operation in the Dell™ PowerEdge™ RAID Console utility. Examples of data-destructive operations include clearing the configuration of the logical drives or changing the RAID level of your shared hard drives.
Setting the SCSI Host Adapter IDs After you enable cluster mode on the PERC card, you have the option to change the SCSI ID for both of the adapter's channels. For each shared SCSI bus (a connection from a channel on one system's PERC card to the shared storage enclosure to a channel on the second system's PERC card), you must have unique SCSI IDs for each controller. The default SCSI ID for the PERC is ID 7. Thus, the SCSI ID for one of the system's PERC cards must be configured to ID 6.
After the virtual disks are created, write the disk signature, assign drive letters to the virtual disks, and then format the drives as NTFS drives. Format the drives and assign drive letters from only one cluster node.

NOTICE: Accessing the hard drives from multiple cluster nodes may corrupt the file system.

Assigning Drive Letters

NOTICE: If the disk letters are manually assigned from the second node, the shared disks are simultaneously accessible from both nodes.
b. Click OK. c. Go to step 9. To create a mount point: a. Click Add. b. Click Mount in the following empty NTFS folder. c. Type the path to an empty folder on an NTFS volume, or click Browse to locate it. d. Click OK. e. Go to step 9. 9. Click Yes to confirm the changes. 10. Right-click the drive icon again and select Format from the submenu. 11. Under Volume Label, enter a descriptive name for the new volume; for example, Disk_Z or Email_Data. 12.
NAS systems 2003, Enterprise Edition operating system
Operating system: Windows Storage Server 2003, Enterprise Edition
RAID controller: One supported PERC installed in both systems
Shared storage systems: One PowerVault 21xS or 22xS storage system with at least nine hard drives reserved for the cluster
Private network cabling: One crossover cable (not included) attached to a Fast Ethernet network adapter in both systems, OR one standard cable (not included) attached to a Gigabit Ethernet network adapter
Installing a PowerVault 775N NAS Cluster Minimum Configuration

The following cluster components are required for a minimum system cluster configuration using the PowerVault 775N NAS Cluster. Table 2-5 provides the hardware requirements for a PowerVault 775N NAS cluster minimum configuration. Figure 2-8 shows a minimum system configuration for a PowerVault 775N NAS Cluster. See "Minimum System Requirements" for more information.

Table 2-5.
Configuring the Shared Disks

This section provides the steps for performing the following procedures:

Creating the quorum resource
Configuring the shared disk for the quorum disk
Configuring the shared disks for the data disks
Configuring the hot spare

Creating the Quorum Resource

When you install Windows Storage Server 2003, Enterprise Edition in your cluster, the software installation wizard automatically selects the quorum resource (or quorum disk), which you can modify later using Cluster Administrator.
The quorum resource is typically a hard drive in the shared storage system that serves the following purposes in a PowerVault NAS Cluster configuration:

Acts as an arbiter between the cluster nodes to ensure that the specific data necessary for system recovery is maintained consistently across the cluster nodes

Logs the recovery data sent by the cluster node

Only one cluster node can control the quorum resource at one time.
NOTE: After you create the virtual disk and the virtual disk is initialized by the PERC 3 controller, you must reboot the system.

4. Write a signature on the new disk.

5. Using the new disk, create a volume, assign a drive letter, and format the disk in NTFS. See your Array Manager documentation for information about configuring the shared disk.

Configuring the Shared Disks for the Data Disk(s)

1. Open Array Manager.

2. Locate three or more hard drives of the same size in the external storage system(s).
Installing and Configuring MSCS MSCS is an integrated service in the Windows Storage Server 2003, Enterprise Edition operating system. MSCS performs the basic cluster functionality, which includes membership, communication, and failover management. When MSCS is installed properly, the service starts on each node and responds automatically if one of the nodes fails or goes offline. To provide application failover for the cluster, the MSCS software must be installed on both cluster nodes.
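Once MSCS is installed, a quick way to confirm that the Cluster Service is present and running on a node is to query the service from a command prompt. This is a minimal sketch; clussvc is the service name used by MSCS:

rem Display the installed state and current status of the Cluster Service.
sc query clussvc
rem Start the service manually if it is installed but not yet running.
net start clussvc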
5. Follow the procedures in the wizard, and then click Finish. 6. Add the second node to the cluster. a. Turn on the remaining node. b. Click the Start button, select Programs→ Administrative Tools, and double-click Cluster Administrator. c. From the File menu, select Open Connection. d. In the Action box of the Open Connection to Cluster, select Add nodes to cluster. e.
10. In the Add Nodes window, click Next. 11. In the Analyzing Configuration menu, Cluster Administrator analyzes the cluster configuration. If Cluster Administrator discovers a problem with the cluster configuration, a warning icon appears in Checking cluster feasibility. Click the plus (+) sign to review any warnings, if needed. 12. Click Next to continue. 13. In the Password field of the Cluster Service Account menu, type the password for the account used to run MSCS, and click Next.
1. Start Cluster Administrator on the monitoring node.

2. Click the Start button and select Programs→ Administrative Tools (Common)→ Cluster Administrator.

3. Open a connection to the cluster and observe the running state of each resource group. If a group has failed, one or more of its resources might be offline.

Configuring and Managing the Cluster Using Cluster Administrator

Cluster Administrator is Microsoft's tool for configuring and managing a cluster.
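In addition to the Cluster Administrator GUI, the cluster.exe command-line utility installed with MSCS can be used to spot-check the same state. The following sketch assumes it is run on one of the cluster nodes; each command lists the objects with their current status:

rem List the cluster nodes and their states (Up, Down, Paused, Joining).
cluster node
rem List the resource groups and the node that currently owns each group.
cluster group
rem List the individual resources and their online/offline state.
cluster res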
For example, if a cluster has two volumes and each node owns one of the volumes, a typical scenario in an active/active configuration (where virtual servers are running on each node) would be:

Node 1 owns Volume G.
Node 2 owns Volume H.

In this configuration, the administrator must use the PowerVault NAS Manager to connect to node 1 to configure the Directory Quota settings for Volume G, and then connect to node 2 to configure the Directory Quota settings for Volume H.
Managing Shadow Copies You must use the Dell PowerVault NAS Manager to manage your shadow copies. Using Cluster Administrator or cluster.exe to manage shadow copies in a cluster is not supported. See the Dell PowerVault 77xN NAS Systems Administrator Guide for more information on managing shadow copies using NAS Manager.
3. Select Cluster Administrator.

Creating a System State Backup

A system state backup of your proven cluster configuration can help speed your recovery efforts in the event that you need to replace a cluster node. Therefore, you should create a system state backup after you have completed installing, configuring, and testing your PowerVault NAS Cluster and after you make any changes to the configuration.
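One way to capture such a backup from a command prompt is with the Windows Backup utility (ntbackup). The following is a minimal sketch; the job name and destination path are examples only, and the destination should be a local volume with enough free space:

rem Back up the system state of this cluster node to a .bkf file.
ntbackup backup systemstate /J "Node1 system state" /F "C:\backup\node1-systemstate.bkf"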
Cabling Your Cluster Hardware
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Cabling the NAS SCSI Cluster Solution
Cabling Your Public and Private Networks
Cabling the Mouse, Keyboard, and Monitor
Power

Cabling the NAS SCSI Cluster Solution

Dell™ PowerVault™ NAS SCSI cluster configurations require cabling for the storage systems, cluster interconnects, client network connections, and power connections.
When performing the following procedures, reference the appropriate figures according to the type of NAS systems that are installed in your cluster. 1. Locate two SCSI cables containing a 68-pin connector (for the PowerVault storage systems) and an ultra high density connector interface (UHDCI) connector (for the PERC controllers). 2. Ensure that the SCSI cables are long enough to connect your PowerVault storage systems to your PowerVault NAS systems. 3.
8. Ensure that the PERC card is installed in the same PCI slot in both PowerVault NAS systems. 9. On the first SCSI cable, connect the UHDCI connector to the PERC channel 1 connector on cluster node 1. See Figure 3-3 and Figure 3-4 for PowerVault 770N NAS cluster configurations. See Figure 3-5 and Figure 3-6 for PowerVault 775N NAS cluster configurations. Figure 3-3. Cabling a Clustered PowerVault 770N NAS System to One PowerVault 21xS Storage System. Figure 3-4.
Figure 3-5.
Figure 3-6. Cabling a Clustered PowerVault 775N NAS System to One PowerVault 22xS Storage System 10. Tighten and secure the retaining screws on the SCSI connectors. 11. On the second cable, connect the UHDCI connector to the PERC channel 1 connector on cluster node 2. 12. Tighten and secure the retaining screws on the SCSI connectors. NOTE: If the PowerVault 22xS storage system is disconnected from the cluster, it must be reconnected to the same channel on the same PERC card for proper operation.
Connecting the cluster to two PowerVault storage systems is similar to connecting the cluster to a single PowerVault storage system. Connect PERC card channel 0 in each node to the back of the first storage system. Repeat the process for channel 1 on the PERC card in each node using a second PowerVault storage system. With dual storage systems connected to a single PERC card, mirroring disk drives from one storage system to another is supported through RAID 1 and 1+0.
CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown. Figure 3-9.
Figure 3-10. Cabling Two PowerVault 22xS Storage Systems to a PowerVault 775N NAS SCSI Cluster Cabling Three or Four PowerVault 22xS Storage Systems to a NAS SCSI Cluster To connect the cluster to three or four PowerVault 22xS storage systems, repeat the process described in the preceding section for a second controller. NOTICE: If you have dual storage systems that are attached to a second controller, Dell supports disk mirroring between channels on the second controller.
Cabling Your Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 3-2.

Table 3-2. Network Connections

Network Connection: Public network
Description: All connections to the client LAN.

Network Connection: Private network
Description: A dedicated connection for sharing cluster health and status information between the cluster nodes.

At least one public network must be configured for Mixed mode for private network failover.
Installing redundant network adapters provides your cluster with a failover connection to the public network. If the primary network adapter or a switch port fails, your cluster will be able to access the public network through the secondary network adapter until the faulty network adapter or switch port is repaired. Using Dual-Port Network Adapters for Your Private Network You can configure your cluster to use the public network as a failover for private network communications.
circuit. CAUTION: For operation in Europe, the NAS SCSI cluster requires two circuits rated in excess of the combined load of the attached systems. Refer to the ratings marked on the back of each cluster component when determining the total system's electrical load. See your system and storage system documentation for more information about the specific power requirements for your cluster system's components.
CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.

NOTE: For high availability, Dell recommends that you use redundant power supplies as shown in Figure 3-12.

Figure 3-13.
CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.

NOTE: For high availability, Dell recommends that you use redundant power supplies as shown in Figure 3-13.

Figure 3-14.
CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.

NOTE: For high availability, Dell recommends that you use redundant power supplies as shown in Figure 3-14.

Figure 3-15.
Maintaining Your Cluster
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Adding a Network Adapter to a Cluster Node
Changing the IP Address of a Cluster Node on the Same IP Subnet
Removing a Node Using Cluster Administrator
Running chkdsk /f on a Quorum Disk
Recovering From a Corrupt Quorum Disk
Replacing a Cluster-Enabled Dell PERC Card
Reinstalling an Existing Cluster Node
Changing the Cluster Service Account Password in Windows Storage Server 2003
e. Assign a unique static IP address, subnet mask, and gateway.

4. Ensure that the network ID portion of the new network adapter's IP address is different from that of the other adapter. For example, if the first network adapter in the node had an address of 192.168.1.101 with a subnet mask of 255.255.255.0, you might enter the following IP address and subnet mask for the second network adapter:

IP address: 192.168.2.102
Subnet mask: 255.255.255.0

5. Click OK and exit network adapter properties.

6.
Removing a Node Using Cluster Administrator

1. Take all resource groups offline or move them to another cluster node.

2. Click the Start button, select Programs→ Administrative Tools, and then double-click Cluster Administrator.

3. In Cluster Administrator, right-click the icon of the node you want to uninstall and then select Stop Cluster Service.

4. In Cluster Administrator, right-click the icon of the node you want to uninstall and then select Evict Node.
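The same steps can also be performed from a command prompt with cluster.exe and the net command. This is a sketch only; NODE1, NODE2, and "Disk Group 1" are example names that you would replace with your own node and group names:

rem Move a resource group to the node that will remain in the cluster.
cluster group "Disk Group 1" /moveto:NODE1
rem On the node being removed, stop the Cluster Service.
net stop clussvc
rem Evict the stopped node from the cluster.
cluster node NODE2 /evict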
To start the cluster manually from a command prompt:

1. Open a command prompt window.

2. Change to the cluster folder directory by typing the following:

cd \windows\cluster

3. Start the cluster in manual mode (on one node only) with no quorum logging by typing the following:

clussvc -debug -noquorumlogging

Cluster Service starts.

4. Run chkdsk /f on the disk designated as the quorum resource. To run the chkdsk /f utility:

a. Open a second command prompt window.

b. Type:

chkdsk /f

5.
3. Disconnect the failed PERC card's cable from the shared storage system. NOTICE: If you replace your PERC card, ensure that you enable cluster mode on the replacement PERC card before you connect the SCSI cables to the shared storage system. See "Enabling the Cluster Mode Using the PERC Card" for more information. 4. Replace the failed PERC card in the system without reconnecting the cable. 5. Power on the system with the replaced PERC card and run the BIOS configuration utility.
d. Close Cluster Administrator.

3. Shut down the cluster node you are replacing and disconnect the network, power, and SCSI cables.

4. Ensure that the following hardware and software components are installed in the replacement node:

PERC card
Network adapter drivers
Windows Storage Server 2003, Enterprise Edition operating system

5. On the remaining node, identify the SCSI ID on the system's PERC card. See your PERC card documentation for information about identifying the SCSI ID.

6.
b. Verify that the configuration that is being displayed includes the existing configuration on the disks.

c. Select Yes to save the disk configuration, and exit the configuration utility.

d. Configure the SCSI ID so that it differs from the SCSI ID on the remaining node. See your PERC documentation for more information on verifying and changing the SCSI ID.
c. Evict the remaining node from the cluster. d. Close Cluster Administrator. 3. Shut down the evicted node and disconnect the power, network, and SCSI cables. 4. Perform any servicing or repairs to your evicted node as needed. 5. Reconnect the power and network cables to the evicted node. NOTICE: Do not connect the SCSI cables from the storage system to the evicted node in this step. 6. Turn on the evicted node.
12. Rejoin the node to the domain.

13. Start Cluster Administrator on the remaining node and perform the following steps:

a. Join the node to the cluster.

b. Move the necessary resources to the evicted node. If the evicted node was your active node, you must manually fail over the resources to that node.

14. Open the Windows Event Viewer and check for any errors.
where 6000000 equals 6000000 milliseconds (or 100 minutes). 9. Click Apply. 10. On the Windows desktop, right-click My Computer and select Manage. The Computer Management window appears. 11. In the Computer Management left window pane, click Disk Management. The physical disk information appears in the right window pane. 12. Right-click the disk you want to reformat and select Format. Disk Management reformats the disk. 13. In the File menu, select Exit. 14.
8. Restart node 1. 9. On node 1, use Cluster Administrator to add a new group (for example Disk Group n:). 10. Select possible owners, but do not bring the group online yet. 11. Add a new resource (for example, Disk z:). 12. Select Physical Disk for the type of resource, and assign it to the new group you just created. 13. Select possible owners, and select the drive letter that you assigned to the new array. 14. Bring the new group that you just added online. 15.
5. In the Arrays directory, select PERC Subsystem 1→ x (Cluster)→ (Channel 0) or (Channel 1). where x indicates the number associated with the controller on the system. Select the channel (0 or 1) to which the enclosure is attached. 6. If you downloaded the EMM firmware to a diskette, ensure that the diskette is inserted. 7. Right-click the enclosure icon for the desired channel, and select Download Firmware.
Using MSCS
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

Cluster Objects
Cluster Networks
Network Interfaces
Cluster Nodes
Groups
Cluster Resources
File Share Resources
Failover and Failback

This section provides information about Microsoft® Cluster Service (MSCS).
A network that carries internal cluster communication

A public network that provides client systems with access to cluster application services

A public-and-private network that both carries internal cluster communication and connects client systems to cluster application services

Neither a public nor a private network; carries traffic unrelated to cluster operation

Preventing Network Failure

MSCS uses all available private and public-and-private networks for internal communication.
All nodes in the cluster are grouped under a common cluster name, which is used when accessing and managing the cluster. Table 5-1 defines various states of a node that can occur in cluster operation.

Table 5-1. Node States and Definitions

State: Down
Definition: The node is not actively participating in cluster operations.

State: Joining
Definition: The node is in the process of becoming an active participant in the cluster operations.
the group's resources and can be modified by an Administrator. To maximize the processing power of a cluster, establish at least as many groups as there are nodes in the cluster.

Cluster Resources

A cluster resource is any physical or logical component that has the following characteristics:

Can be brought online and taken offline
Can be managed in a server cluster
Can be hosted (owned) by only one node at a time

To manage resources, MSCS communicates with a resource DLL through a Resource Monitor.
tabs. Properties of a cluster object should not be updated on multiple nodes simultaneously. See the MSCS online documentation for more information.

Resource Dependencies

Groups function properly only if resource dependencies are configured correctly. MSCS uses the dependencies list when bringing resources online and offline.
Resource Parameters The Parameters tab in the Properties dialog box is available for most resources. Table 5-3 lists each resource and its configurable parameters. Table 5-3.
an existing cluster, MSCS can retrieve the data from the other active nodes. However, when a node forms a cluster, no other node is available. MSCS uses the quorum disk's recovery logs to update the node's cluster database, thereby maintaining the correct version of the cluster database and ensuring that the cluster is intact. For example, if node 1 fails, node 2 continues to operate, writing changes to the cluster database. Before you can restart node 1, node 2 fails.
When you configure the Retry Period On Failure property, consider the following guidelines:

Select a unit value of minutes, rather than milliseconds (the default value is milliseconds).

Select a value that is greater than or equal to the value of the resource's restart period property. This rule is enforced by MSCS.

NOTE: Do not adjust the Retry Period On Failure settings unless instructed by technical support.

Resource Dependencies

A dependent resource requires—or depends on—another resource to operate.
5. In the New Resource wizard, type the appropriate information in Name and Description, and click the appropriate information in Resource type and Group. 6. Click Next. 7. Add or remove possible owners of the resource, and then click Next. The New Resource window appears with Available resources and Resource dependencies selections. 8. To add dependencies, under Available resources, click a resource, and then click Add. 9.
7. Click the Start button and select Programs→ Administrative Tools→ Cluster Administrator.

8. In the Cluster Administrator left window pane, ensure that a physical disk resource exists in the cluster.

9. In the Cluster Administrator left or right window pane, right-click and select New→ Resource.

10. In the New Resource window, perform the following steps:

a. In the Name field, type a name for the new share.

b. In the Description field, type a description of the new share (if required).

c.
Deleting a File Share

1. Click the Start button and select Programs→ Administrative Tools→ Cluster Administrator.

2. In the Cluster Administrator window console tree, click the Resources folder.

3. In the right window pane, right-click the file share you want to remove and select Delete.

NOTE: When you delete a resource, Cluster Administrator automatically deletes all the resources that have a dependency on the deleted resource.
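If you prefer the command line, a file share resource can also be taken offline and deleted with the cluster.exe utility. In this sketch, "Share1" is an example resource name; substitute the name of your own file share resource:

rem Take the file share resource offline before deleting it.
cluster res "Share1" /offline
rem Delete the resource from the cluster configuration.
cluster res "Share1" /delete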
Enabling Cluster NFS File Share Capabilities

After you add a node to the cluster, enable the NFS file sharing capabilities by performing the following steps.

NOTE: Perform this procedure on one cluster node after you configure the cluster.

1. Open a command prompt.

2. At the prompt, change to the following directory:

c:\dell\util\cluster

3. In the cluster directory, run the NFSShareEnable.bat file.

Failover and Failback

This section provides information about the failover and failback capabilities of MSCS.
The group's resources are taken offline. The resources in the group are taken offline by MSCS in the order determined by the group's dependency hierarchy: dependent resources first, followed by the resources on which they depend. For example, if an application depends on a Physical Disk resource, MSCS takes the application offline first, allowing the application to write changes to the disk before the disk is taken offline. The resource is taken offline.
cluster node has been restarted and rejoins the cluster, MSCS will bring the running application and its resources offline, move them from the failover cluster node to the original cluster node, and then restart the application. This process of returning the resources to their original cluster node is called failback. You can configure failback to occur immediately, at any given time, or not at all.
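Failback behavior is controlled through the group's properties. As a hedged illustration, the group common properties can also be viewed or set with cluster.exe; the group name below is an example, and the AutoFailbackType property name and values (0 prevents failback, 1 allows it) are assumptions you should verify against the MSCS documentation before changing anything:

rem Display the current properties of the group, including its failback settings.
cluster group "Disk Group 1" /prop
rem Allow the group to fail back to its preferred node.
cluster group "Disk Group 1" /prop AutoFailbackType=1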
Troubleshooting
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

This appendix provides troubleshooting information for Dell™ PowerVault™ NAS SCSI cluster configurations. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1.
time to join the cluster.

due to a cabling or hardware failure. Long delays in node-to-node communications may be normal.

Problem: You are prompted to configure one network instead of two during MSCS installation.

Corrective Action: Verify that the node-to-node interconnection and the public network are connected to the correct network adapters. Verify that the nodes can communicate with each other by running the ping command from each node to the other node. Try both the host name and IP address when using the ping command.
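For example, from a command prompt on each node you might run the following; the node name and IP address shown are placeholders for your own values:

rem Test name resolution and basic connectivity to the other node.
ping node2
rem Test connectivity by IP address in case name resolution is the problem.
ping 192.168.1.102
rem Review the local adapter configuration if the ping commands fail.
ipconfig /all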
Problem: The Create NFS Share option does not exist.

Probable Cause: The Enable NFS Share utility is not installed on one of the cluster nodes.

Corrective Action: Run the Enable NFS File Share utility. See "Enabling Cluster NFS File Share Capabilities" for more information.
Cluster Data Sheet
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

PowerVault SCSI Cluster Solution Data Sheet

The cluster data sheets on the following pages are provided for the system installer to record pertinent information about Dell™ PowerVault™ SCSI cluster configurations.
Additional Node 1 network adapter(s) Node 2, network adapter 1 Node 2, network adapter 2 Additional Node 2 network adapter(s) System Storage 1 Storage 2 Storage 3 Storage 4 SCSI ID Node 1, PERC Node 2, PERC Node 1, PERC Node 2, PERC PowerVault Storage System Description of Installed Items (Drive letters, RAID types, applications/data) Storage 1 Storage 2 Storage 3 Storage 4 Component Storage 1 Storage 2 Storage 3 Storage 4 Service Tag PCI Adapter Slot Installed Number (PERC, network adapter, an
PCI slot 8
PCI slot 9
PCI slot 10
PCI slot 11
Abbreviations and Acronyms
Dell™ PowerVault™ NAS Systems SCSI Cluster Installation and Troubleshooting Guide

A: ampere(s)
API: Application Programming Interface
AC: alternating current
ACM: advanced cooling module
BBS: Bulletin Board Service
BDC: backup domain controller
BIOS: basic input/output system
bps: bits per second
BTU: British thermal unit
C: Celsius
CIFS: Common Internet File System
cm: centimeter(s)
DC: direct current
DFS: distributed file system
DHCP: dynamic host configuration protocol
DLL: dynamic link library
DNS: domain naming system
ESD: electrostatic discharge
EMM: enclosure management module
ERP: enterprise resource planning
F: Fahrenheit
FC: Fibre Channel
FCAL: Fibre Channel arbitrated loop
ft: feet
FTP: file transfer protocol
g: gram(s)
GB: gigabyte
Gb: gigabit
Gb/s: gigabits per second
GUI: graphical user interface
HBA: host bus adapter
HSSDC: high-speed serial data connector
HVD: high-voltage differential
Hz: hertz
ID: identification
IIS: Internet Information Server
I/O: input/output
IP: Internet Protocol
K: kilo- (1024)
lb: pound(s)
LAN: local area network
LED: light-emitting diode
LS: loop resiliency circuit/SCSI enclosure services
LVD: low-voltage differential
m: meter
MB: megabyte(s)
MB/sec: megabyte(s) per second
MHz: megahertz
MMC: Microsoft® Management Console
MSCS: Microsoft Cluster Service
MSDTC: Microsoft Distributed Transaction Coordinator
NAS: network attached storage
NIS: Network Information Service
NFS: network file system
NTFS: NT File System
NVRAM: nonvolatile random-access memory
PAE: physical address extension
PCB: printed circuit board
PDC: primary domain controller
PDU: power distribution unit
PERC: PowerEdge™ Expandable RAID Controller
PERC 3/DC: PowerEdge Expandable RAID controller 3/dual channel
PERC 4/DC: PowerEdge Expandable RAID controller 4/dual channel
PCI: Peripheral Component Interconnect
POST: power-on self-test
RAID: redundant array of independent disks
RAM: random access memory
rpm: revolutions per minute
SAF-TE: SCSI accessed fault-tolerant enclosures
SCSI: small computer system interface
sec: second(s)
SEMM: SCSI expander management modules
SES: SCSI enclosure services
SMB: Server Message Block
SMP: symmetric multiprocessing
SNMP: Simple Network Management Protocol
SQL: Structured Query Language
TCP/IP: Transmission Control Protocol/Internet Protocol
UHDCI: ultra high-density connector interface
UPS: uninterruptible power supply
V: volt(s)
VHDCI: very high-density connector interface
WINS: Windows Internet Naming Service