Book.book Page 1 Wednesday, July 14, 2010 8:10 PM

Dell PowerEdge Systems
Oracle Database on Microsoft Windows Server x64
Storage and Network Guide
Version 4.4

www.dell.com | support.dell.com
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

____________________

Information in this publication is subject to change without notice.
© 2009–2010 Dell Inc. All rights reserved.
Contents

1 Overview
    Required Documentation for Deploying the Dell Oracle Database
    Terminology Used in This Document
    Getting Help
        Dell Support
        Oracle Support

2 Fibre Channel Cluster Setup

3 SAS Cluster Setup for the Dell PowerVault MD3000

4 iSCSI Cluster Setup for the Dell PowerVault MD3000i and PowerVault MD1000 Expansion Enclosures
    Cabling Your iSCSI Storage System

5 iSCSI Cluster Setup for the Dell EqualLogic PS Series Storage Systems
    Cabling Dell EqualLogic iSCSI Storage System

6 Configuring Network and Storage for Oracle RAC Database
    Configuring the Public and Private Networks
    Configuring Host Access to Volumes
    Configuring Microsoft iSCSI Initiator
    Verifying the Storage Assignment to the Nodes
    Preparing the Disks for Oracle Clusterware, Database, and Backup
        Enabling the Automount Option for the Shared Disks
        Preparing the OCR and Voting Disks for Clusterware on Windows Server 2003
1 Overview

The Storage and Networking Guide for Oracle Database on Microsoft Windows applies to:

• Oracle Database 10g R2 Enterprise Edition on Microsoft Windows Server 2003 R2 Standard or Enterprise x64 Edition, or Windows Server 2008 SP2 Enterprise or Standard x64 Edition.

• Oracle Database 10g R2 Standard Edition on Windows Server 2003 R2 SP2 Standard x64 Edition or Windows Server 2008 SP2 Standard x64 Edition.
Terminology Used in This Document

This document uses the terms logical unit number (LUN) and virtual disk interchangeably; they are synonymous. The term LUN is commonly used in a Dell/EMC Fibre Channel storage system environment, and virtual disk is commonly used in a Dell PowerVault SAS or iSCSI (Dell PowerVault MD3000 and Dell PowerVault MD3000i with Dell PowerVault MD1000 expansion) storage environment.
Operating System and Hardware Installation Guide and the Dell PowerEdge Systems Oracle Database on Microsoft Windows Server x64 Troubleshooting Guide for your system.

• Dell Enterprise Training and Certification is now available; see dell.com/training for more information. This training service may not be offered in all locations.
2 Fibre Channel Cluster Setup

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.

After a Dell Managed Services representative completes the setup of your Fibre Channel cluster, verify the hardware connections and the hardware and software configurations as described in this section.
Figure 2-1. Hardware Connections for a SAN-Attached Fibre Channel Cluster
(The figure shows client systems on the LAN/WAN, Gigabit Ethernet switches for the private network, Dell PowerEdge systems running Oracle Database, Dell/EMC Fibre Channel switches forming the SAN, and Dell/EMC Fibre Channel storage systems. Cabling legend: CAT 5e/6 to the public NIC, CAT 5e/6 to the copper Gigabit NIC, fiber optic cables, and additional fiber optic cables.)

Table 2-1. Fibre Channel Hardware Interconnections
Table 2-1. Fibre Channel Hardware Interconnections (continued)

Cluster Component: Dell/EMC Fibre Channel storage system
Connections:
• Two CAT 5e or CAT 6 cables connected to the LAN.
• One to four fiber optic cable connections to each Fibre Channel switch.
Figure 2-2. Cabling in a Dell/EMC SAN-Attached Fibre Channel Cluster
(The figure shows two HBA ports for node 1, two HBA ports for node 2, Fibre Channel switches sw0 and sw1, and storage processors SP-A and SP-B of the Dell/EMC CX4-480 Fibre Channel storage system.)

To configure your Oracle cluster storage system in a four-port, SAN-attached configuration (see Figure 2-2):

1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0.
2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
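The point of this cabling pattern is redundancy: each storage processor should reach both Fibre Channel switches, so no single switch failure isolates an SP. The sketch below checks that property for a connection plan. The SP-B entries are assumed by symmetry with the SP-A steps above (the document's remaining steps are not shown here), and the helper itself is illustrative, not part of the Dell procedure.

```python
# Hypothetical check: does the SAN cabling plan in Figure 2-2 give every
# storage processor (SP) a path through both Fibre Channel switches?
connections = [
    ("SP-A", 0, "sw0"),  # step 1 above
    ("SP-A", 1, "sw1"),  # step 2 above
    ("SP-B", 0, "sw0"),  # assumed by symmetry with SP-A
    ("SP-B", 1, "sw1"),  # assumed by symmetry with SP-A
]

def redundant(conns):
    """Return True if each SP in the plan reaches both switches."""
    paths = {}
    for sp, _port, switch in conns:
        paths.setdefault(sp, set()).add(switch)
    return all(switches == {"sw0", "sw1"} for switches in paths.values())

print(redundant(connections))  # → True
```

A plan that cabled an SP to only one switch would print False, flagging a single point of failure before any hardware is touched.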
3 SAS Cluster Setup for the Dell PowerVault MD3000

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.

This section provides information and procedures to configure your Dell PowerEdge systems and PowerVault MD3000 hardware and software to function in an Oracle Real Application Clusters (RAC) environment.
Figure 3-1. Cabling the Serial-Attached SCSI (SAS) Cluster and Dell PowerVault MD3000
(The figure shows the public network LAN/WAN, the PowerEdge systems, and the PowerVault MD3000 storage system, connected with CAT 5e/6 copper Gigabit NIC cables and SAS cables.)

Table 3-1. SAS Cluster Hardware Interconnections

Cluster Component: PowerEdge system node
Connections:
• One CAT 5e/6 cable from the public NIC to the local area network (LAN).
Table 3-1. SAS Cluster Hardware Interconnections (continued)

Cluster Component: PowerVault MD3000
Connections:
• Two CAT 5e/6 cables connected to a LAN (one from each storage processor module).
• Two SAS connections to each PowerEdge system node using a SAS 5/E controller. See "Cabling Your SAS Storage System" on page 17.

Cluster Component: Gigabit Ethernet switch
Connections:
• One CAT 5e/6 connection to the private Gigabit NIC on each PowerEdge system.
Figure 3-2.
4 iSCSI Cluster Setup for the Dell PowerVault MD3000i and PowerVault MD1000 Expansion Enclosures

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.
Table 4-1. iSCSI Hardware Interconnections

Cluster Component: One Dell PowerVault MD3000i storage system
Connections:
• Two CAT 5e/6 cables connected to the LAN (one from each storage processor module) for the management interface.
• Two CAT 5e/6 cables per storage processor for the iSCSI interconnect.

NOTE: For additional information on the Dell PowerVault MD3000i system, see your Dell PowerVault MD3000i setup documentation.
Setting Up the iSCSI Cluster With the Dell PowerVault MD3000i Storage System and Dell PowerVault MD1000 Expansion Enclosures

Cabling Your iSCSI Storage System

Direct-attached iSCSI clusters are limited to two nodes only.

Figure 4-1.
5 (Optional) Connect two SAS cables from the two MD3000i out ports to the two In ports of the first Dell PowerVault MD1000 expansion enclosure.

6 (Optional) Connect two SAS cables from the two MD1000 out ports to the In-0 ports of the second Dell PowerVault MD1000 expansion enclosure.

NOTE: For information on configuring the PowerVault MD1000 expansion enclosure, see the Dell PowerVault MD3000 Storage System documentation available at support.dell.com.
4 Connect one CAT 5e/6 cable from a port (iSCSI HBA or NIC) of node 2 to the port of network switch 2.

5 Connect one CAT 5e/6 cable from a port of switch 1 to the In-0 port of RAID controller 0 in the Dell PowerVault MD3000i storage enclosure.

6 Connect one CAT 5e/6 cable from the other port of switch 1 to the In-0 port of RAID controller 1 in the Dell PowerVault MD3000i storage enclosure.
5 iSCSI Cluster Setup for the Dell EqualLogic PS Series Storage Systems

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.
Figure 5-1. Recommended Network Configuration
(The figure shows trunk links between Dell PowerConnect 54xx Gigabit Ethernet switches for the iSCSI storage area network, and the rear view of a Dell EqualLogic iSCSI storage array: operations panel, power supply and cooling modules 0 and 1, and control modules 0 and 1.)

Figure 5-2 is an architecture overview of a sample Oracle RAC configuration with three PS5000XV arrays.

Table 5-1.
Figure 5-2.
As illustrated in Figure 5-2, the group named oracle-group includes three PS5000XV members:

• oracle-member01
• oracle-member02
• oracle-member03

When a member is initialized, it can be configured with RAID 10, RAID 5, or RAID 50. For more information on how to initialize an EqualLogic array, see the Dell EqualLogic User's Guide. A PS Series storage group can be segregated into multiple tiers or pools.
6 Configuring Network and Storage for Oracle RAC Database

This section provides information about:

• Configuring the public and private networks.
• Verifying the storage configuration.
• Configuring the shared storage for Oracle Clusterware and the Oracle Database.

NOTE: Oracle RAC requires an ordered list of procedures. To configure networking and storage in a minimal amount of time, perform the procedures listed in this chapter in order.
Table 6-1. NIC Port Assignments

NIC Port   Three Ports Available      Four Ports Available
1          Public IP and virtual IP   Public IP
2          Private IP (NIC team)      Private IP (NIC team)
3          Private IP (NIC team)      Private IP (NIC team)
4          Not applicable             Virtual IP

Configuring and Teaming the Private Network

Before you deploy the cluster, assign a private IP address and host name to each cluster node.
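Table 6-1 can be read as a simple lookup from the number of available NIC ports to each port's role. The helper below is a hypothetical restatement of the table for illustration, not part of the Dell procedure:

```python
# Hypothetical restatement of Table 6-1: NIC port roles for three- and
# four-port configurations. Raises for port counts the table does not cover.
def nic_roles(ports_available):
    if ports_available == 3:
        return {1: "Public IP and virtual IP",
                2: "Private IP (NIC team)",
                3: "Private IP (NIC team)"}
    if ports_available == 4:
        return {1: "Public IP",
                2: "Private IP (NIC team)",
                3: "Private IP (NIC team)",
                4: "Virtual IP"}
    raise ValueError("Table 6-1 covers only three- or four-port configurations")

print(nic_roles(4)[4])  # → Virtual IP
```

Note the one behavioral difference the table encodes: with only three ports, the public and virtual IPs share port 1; with four ports, the virtual IP moves to its own port.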
Configuring NIC Teaming for Your Private Network Adapters

NOTE: The TCP Offload Engine (TOE) functionality of a TOE-capable NIC is not supported in this solution.

To configure NIC teaming for your private network adapters:

1 On node 1, identify the two network adapters that are used for NIC teaming.
2 Connect an Ethernet cable from each selected network adapter to the private network switch.
3 If node 1 is configured with Broadcom NICs, go to step 4.
4 If node 1 is configured with Broadcom NICs, configure NIC teaming by performing the following steps. If not, go to step 5.

a Click Start→Programs→Broadcom→Broadcom Advanced Control Suite 3. The Broadcom Advanced Control Suite 3 window is displayed.

b Highlight Team Management, click Teams, and select Create a Team. The Broadcom Teaming Wizard window is displayed.

c Click Next.
Configuring the IP Addresses for Your Public and Private Network Adapters

NOTE: The TOE functionality of a TOE-capable NIC is not supported in this solution.

To configure the IP addresses for your public and private network adapters:

1 Update the adapter's network interface name, if required. Otherwise, go to step 3.

a On node 1, click Start and navigate to Settings→Control Panel→Network Connections.
f On the Properties window, click Close.

g Repeat step a through step f on the private NIC team.

NOTE: The private NIC team does not require a default gateway address or a DNS server entry.

3 Ensure that the public and private network adapters appear in the appropriate order for access by network services.

a On the Windows desktop, click Start→Settings→Control Panel→Network Connections.
IP Address       Node Name
155.16.170.201   rac1-vip
155.16.170.202   rac2-vip

NOTE: Registering the private IP addresses with the DNS server is not required because the private network IP addresses are not accessible from the public network.

5 Repeat step 1 to step 4 on the remaining nodes.

6 Ensure that the cluster nodes can communicate with the public and private networks.

a On node 1, open a command prompt window.
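The table above maps each virtual IP address to its node name. As a sketch of that mapping, the snippet below generates hosts-file style lines from a node dictionary; only the two -vip addresses come from the document's table, and the helper name and layout are illustrative assumptions:

```python
# Sketch: generate hosts-file style entries for the cluster addresses.
# Only the -vip addresses below appear in the document's table; the
# function name and formatting are illustrative assumptions.
nodes = {
    "rac1": {"vip": "155.16.170.201"},
    "rac2": {"vip": "155.16.170.202"},
}

def hosts_entries(nodes):
    """Return one 'IP<TAB>name-role' line per registered address."""
    lines = []
    for name, addrs in sorted(nodes.items()):
        for role, ip in sorted(addrs.items()):
            lines.append(f"{ip}\t{name}-{role}")
    return lines

for line in hosts_entries(nodes):
    print(line)
```

The same structure extends naturally to the public and private addresses each node also carries, though as the note above says, only the public and virtual addresses belong in DNS.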
Installing the Host-Based Software Required for Storage

If you are installing Dell/EMC Fibre Channel storage, see the Dell/EMC documentation that came with your system to install the EMC Naviagent software.

If you are installing a Dell PowerVault storage system, see the Dell PowerVault documentation that came with your system to install the Modular Disk Storage Manager (MDSM) software from the Dell PowerVault Resource media.
Installing the Multi-Path Driver Software for the EqualLogic iSCSI Storage Array

For more information, see "Installing and Configuring Dell EqualLogic Host Integration Tool (HIT) Kit" on page 39.

Verifying Multi-Path Driver Functionality

To verify the multi-path driver functionality:

1 Right-click My Computer and select Manage.
2 Expand Storage and click Disk Management. One disk is displayed for each LUN assigned in the storage system.
Automatic data placement and automatic load balancing occur within a pool, based on the overall workload of the storage hardware resources within the pool.

Table 6-4.
Configuring iSCSI Networks

It is recommended that the host network interfaces for iSCSI traffic are configured to use Flow Control and Jumbo Frames for optimal performance. To set Flow Control and Jumbo Frames:

1 Select Start→Settings→Network Connections.
2 Highlight the iSCSI network interface, and right-click Properties.
3 Click Configure.
4 Click Advanced.
5 Highlight Jumbo Packet, and set its value to 9014 bytes.
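The 9014-byte value in step 5 is not arbitrary: some adapters count the Ethernet frame header in their Jumbo Packet setting, so 14 bytes of header sit on top of the usual 9000-byte jumbo payload. A quick check of the arithmetic:

```python
# 9014 = 9000-byte jumbo payload + 14-byte Ethernet header
# (6-byte destination MAC + 6-byte source MAC + 2-byte EtherType).
JUMBO_PAYLOAD = 9000         # bytes of payload (the usual "jumbo frame" MTU)
ETHERNET_HEADER = 6 + 6 + 2  # bytes of Ethernet framing
print(JUMBO_PAYLOAD + ETHERNET_HEADER)  # → 9014
```

Other adapters expose the payload-only value (9000) instead, which is why the exact number you enter depends on the NIC driver's convention.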
8 On the System Restart Required window, select Yes, I want to restart my computer now, and click OK.

9 When the server restarts, a Remote Setup Wizard window is displayed.

10 Select Configure MPIO settings for this computer, then click Next.

11 Move the iSCSI network subnets under Subnets included for MPIO. Move all other network subnets under Subnets excluded from MPIO. Select Default load balancing policy (Least Queue Depth). Click Finish.
f Enter the CHAP password defined in the EqualLogic storage in the Target secret box.

g Click OK.

7 On the Log On to Target window, click OK.

8 On the Targets tab of the iSCSI Initiator Properties window, the status of the logged-on volume should be Connected.

9 Repeat step 3 to step 8 to log on to the same volume from each of the other iSCSI initiator IP addresses.

10 Repeat step 3 to step 9 to log on to all other volumes created for the database.
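Steps 9 and 10 amount to a cross product: every database volume gets one log-on per iSCSI initiator IP address, so the number of sessions is the product of the two counts. The sketch below makes that explicit; the volume names and addresses are purely illustrative assumptions, not from the document:

```python
# Sketch of steps 9-10: one iSCSI log-on per (volume, initiator IP) pair.
# Volume names and IP addresses here are illustrative assumptions.
volumes = ["ocr-vote", "data1", "fra1"]
initiator_ips = ["10.10.10.1", "10.10.10.2"]

sessions = [(vol, ip) for vol in volumes for ip in initiator_ips]
print(len(sessions))  # → 6 log-ons: 3 volumes x 2 initiator addresses
```

This is why adding a volume or an initiator NIC multiplies, rather than adds to, the number of log-ons you must perform on each node.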
6 On the Disk Management window, verify that four disks appear. The disks should be similar in size to each other and to the LUNs/virtual disks that are assigned to the nodes in the storage system.

7 Repeat step 1 to step 6 on the remaining nodes.
Enabling the Automount Option for the Shared Disks

To enable the Automount option for the shared disks:

1 On node 1, click Start and select Run.
2 In the Run field, type cmd and click OK.
3 At the command prompt, type diskpart and press <Enter>.
4 At the DISKPART command prompt, type automount enable and press <Enter>. The following message is displayed: Automatic mounting of new volumes enabled.
The disk partition area you selected in step 3 is configured as an extended partition.

8 Repeat step 3 to step 7 on all shared disks that are assigned to the cluster nodes.

9 Create a logical drive for the OCR disk.

a On the partition area of the disk identified for the OCR and voting disk (2 GB LUN/virtual disk), right-click the free space and select New Logical Drive. The Welcome to the New Partition Wizard is displayed.

b Click Next.
f In the Format Partition window, select Do not format this partition and click Next.

g Click Finish.

h Repeat step a to step g to create two additional voting disk partitions.

NOTE: If you are using a redundant voting disk and OCR, repeat the steps outlined in step 9 and step 10 for the redundant voting disk and OCR.
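Steps 8 through 10 build, on the 2 GB clusterware LUN, an extended partition holding one OCR logical drive and three voting-disk logical drives, none of them formatted. A hypothetical diskpart-script generator mirroring that layout is sketched below; the disk number and partition size are assumptions, since the document does not state them:

```python
# Hypothetical generator for a diskpart script matching steps 8-10 above:
# one extended partition, then 1 OCR + 3 voting-disk logical drives.
# The disk number and size_mb defaults are illustrative assumptions.
def clusterware_diskpart_script(disk=1, size_mb=300):
    lines = [f"select disk {disk}", "create partition extended"]
    for _ in range(1 + 3):  # 1 OCR logical drive + 3 voting-disk drives
        lines.append(f"create partition logical size={size_mb}")
    lines.append("exit")
    return "\n".join(lines)

print(clusterware_diskpart_script())
```

Such a script could in principle be fed to diskpart with its /s script option, but the interactive Disk Management wizard steps above are the procedure the document actually describes.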
Preparing the Database Disk and Flash Recovery Area for Database Storage With OCFS

NOTE: When using Automatic Storage Management (ASM), the ASM data disk group should be larger than your database (multiple LUNs), and the ASM Flash Recovery Area disk group should be at least twice the size of your data disk group.
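The sizing rule in the note above reduces to simple arithmetic: the Flash Recovery Area disk group must be at least twice the data disk group. A minimal sketch, where the function name and example sizes are assumptions for illustration:

```python
# Sketch of the ASM sizing note above: the Flash Recovery Area disk group
# should be at least 2x the size of the data disk group.
def min_fra_size_gb(data_disk_group_gb):
    return 2 * data_disk_group_gb

print(min_fra_size_gb(200))  # → 400 (GB of FRA for a 200 GB data disk group)
```

In practice this means provisioning roughly three times the data disk group's capacity in total LUNs before creating the ASM disk groups.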
Preparing the Database Disk and Flash Recovery Area for Database Storage With ASM

NOTE: If you are creating the logical drives that are used to create the OCFS storage disk, ignore the following steps and follow the procedures in "Preparing the Database Disk and Flash Recovery Area for Database Storage With OCFS" on page 46.

To create logical drives that are used to create ASM disk storage:

1 Create one logical drive for the database.
Removing the Assigned Drive Letters

To remove the assigned drive letters:

1 On the Windows desktop for each node, right-click My Computer and select Manage.
2 On the Computer Management window, expand Storage and click Disk Management.