Dell™ PowerEdge™ Systems Dell Oracle Database 10g R2 Enterprise Edition on Microsoft® Windows Server® 2003 R2 with SP2, Standard or Enterprise x64 Edition Deployment Guide Version 4.
Notes and Notices

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved. Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Contents

Terminology Used in this Document
Software and Hardware Requirements
    Minimum Software Requirements
    Minimum Hardware Requirements
Installing and Configuring the Operating System
    Installing the Operating System Using the Deployment CD/DVDs
    Verifying the Temporary Directory Paths
Downloading the Latest Oracle Patches
Configuring the Listener
Creating the Seed Database
Installing Oracle RAC 10g R2 Using ASM
    Installing Oracle Clusterware Version 10.2.0.1
    Installing Oracle 10g Database With Real Application Clusters 10.2.0.1
    Creating the Seed Database
Obtaining and Using Open Source Files
Index
This document provides information for installing, configuring, reinstalling, and using your Oracle Database 10g R2 software following Dell’s Supported Configurations for Oracle. Use this document in conjunction with the Dell Deployment CD to install your software. If you install your operating system using only the operating system CDs, the steps in this document may not be applicable.
virtual disk is commonly used in a Direct-attached SAS (Dell MD3000/MD3000i and Dell MD3000/MD3000i with MD1000 expansion) storage environment. Software and Hardware Requirements The following sections describe the minimum software and hardware requirements for Dell’s Supported Configurations for Oracle. Minimum Software Requirements Table 1-1 lists the minimum software requirements. NOTE: Your Dell configuration includes a 30-day trial license of Oracle software.
Table 1-2. Minimum Hardware Requirements - Fibre Channel Cluster Configurations

Hardware Component: Dell™ PowerEdge™ system (up to eight nodes using Automatic Storage Management (ASM) or Oracle Cluster File System (OCFS))
Configuration: Intel® Xeon® processor family; 1 GB of RAM; two 73-GB hard drives connected to an internal RAID controller.
NOTE: Dell recommends two 73-GB hard drives (RAID 1) connected to an internal RAID controller based on your system.
Table 1-2. Minimum Hardware Requirements - Fibre Channel Cluster Configurations (continued)

Hardware Component: Gigabit Ethernet switch (two required)
Configuration: See dell.com/10g for information on supported configurations. For Fibre Channel, see the Dell | EMC system documentation for more details.
6 In the Select Language screen, select English. 7 On the Software License Agreement page, click Accept. The Systems Build and Update Utility home page appears. 8 From the Dell Systems Build and Update Utility home page, click Server OS Installation. The Server OS Installation screen appears. The Server Operating System Installation (SOI) module in the Dell™ Systems Build and Update Utility enables you to install Dell-supported operating systems on your Dell systems.
Enter OS Information: h Enter the appropriate User Name, Organization, and Product ID. i Enter all other necessary information. j Click Install SNMP (default). NOTE: If you have the Dell OpenManage CD and want to install it during your OS install, select Install Server Administrator. The Server Administrator can be installed anytime after the OS is installed. Installation Summary: k Click Eject CD/DVD Automatically (default).
CAUTION: Do not leave the administrator password blank. NOTE: To configure the public network properly, the computer name and public host name must be identical. NOTE: Record the logon password that you created in this step. You will need this information in step 14. When the installation procedure completes, the Welcome to Windows window appears. 12 Shut down the system, reconnect all external storage devices, and restart the system.
22 Run install_drivers.bat NOTE: This procedure may take several minutes to complete. 23 Press any key to continue. 24 If your current system is a Dell PowerEdge Server (M600, M605, M805 or M905), see Table 1-3 on page 14 for information on manually installing the HBA drivers. Otherwise, skip to step 25. 25 Check the logs to verify that all drivers were installed correctly. NOTE: Log information can be found at C:\Dell_Resource_CD\logs 26 When installation is complete, remove the CD from the CD drive.
%SystemDrive%\Temp where %SystemDrive% is the user’s local drive. 4 Repeat all steps in this section for all nodes in the cluster. Verifying Cluster Hardware and Software Configurations Before you begin the cluster setup, ensure that you have the minimum hardware installed as shown in Table 1-2. This section provides setup information for hardware and software cluster configurations.
Figure 1-1. Hardware Connections for a SAN-Attached Fibre Channel Cluster
(The figure shows the public network; Gb Ethernet switches for the private network; PowerEdge systems running the Oracle database; Dell | EMC Fibre Channel switches for the SAN; CAT 5e/6 copper Gigabit NIC cabling; fiber optic cables plus additional fiber optic cables; and a Dell | EMC CX3-10c, CX3-20, CX3-20F, CX3-40, CX3-40F, CX3-80, CX4-120, CX4-240, CX4-480, CX4-960, or AX4-5F Fibre Channel storage system.)

Table 1-5. Fibre Channel Hardware Interconnections
Table 1-5. Fibre Channel Hardware Interconnections (continued)

Cluster Component: Dell|EMC Fibre Channel storage system
Connections: Two CAT 5e/6 cables connected to the LAN (one from each storage processor); one to four optical connections to each Fibre Channel switch in a SAN-attached configuration. See "Cabling Your Dell|EMC Fibre Channel Storage" on page 17 for more information.
Figure 1-2. Cabling in a Dell|EMC SAN-Attached Fibre Channel Cluster
(The figure shows cluster node 1 and cluster node 2, each with two HBA ports, connected to SP-A (storage processor A) and SP-B (storage processor B) of a CX3-20 storage system.)

Use the following procedure to configure your Oracle cluster storage system in a four-port, SAN-attached configuration. 1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0. 2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
Configuring Networking and Storage for Oracle RAC 10g R2 This section provides the following information about network and storage configuration: • Configuring the public and private networks. • Verifying the storage configuration. • Configuring the shared storage for Oracle Clusterware and the Oracle Database. NOTE: Oracle RAC 10g R2 is a complex database configuration that requires an ordered list of procedures.
Configuring and Teaming the Private Network Before you deploy the cluster, assign a private IP address and host name to each cluster node. This procedure ensures that the nodes can communicate with each other through the private interface. Table 1-7 provides an example of a network configuration for a two-node cluster. NOTE: This example assumes all the IP addresses are registered in the hosts file of all cluster nodes. NOTE: The two bonded NIC ports for a private network should be on separate PCI buses.
d Right-click the Intel NIC, which is identified for NIC teaming and select Properties. e Click the Teaming tab. f Select Team with other Adapters and then select New Team. g Specify a name for NIC team and click Next. h In the Select the adapters to include in this team box, select the remaining network adapters that you identified for NIC teaming and click Next. i In the Select a team mode list box, select Adaptive Load Balancing. j Click Finish to complete the teaming.
Including this adapter in a team will disrupt the system management features. Click Yes to proceed. g Click Next. h In the Designating Standby Member window, select Do not configure a Standby Member and click Next. i In the Configuring Live Link window, select No and click Next. j In the Creating/Modifying a VLAN window, select Skip Manage VLAN and click Next. k In the last window, click Preview to verify the NIC team and the adapters.
2 Configure the IP addresses. NOTE: You must set a default gateway for your public interface; otherwise, the Clusterware installation may fail. a Click Start→Settings→Control Panel→Network Connections→ Public→Properties. b Double-click Internet Protocol (TCP/IP). c Click Use the following IP address, enter the required IP address, default gateway address, and the DNS server IP address, and click OK. d In the Public Properties window, select Show icon in notification area when connected.
4 On all nodes, add the public, private, and virtual IP addresses and host names to the %SystemRoot%\system32\drivers\etc\hosts file. NOTE: Add the public and virtual IP addresses to the hosts file only if they are not registered with the DNS server. For example, the following entries use the adapter IPs and host names shown in Table 1-7:

IP Address       Node Name
155.16.170.1     rac1
155.16.170.2     rac2
10.10.10.1       rac1-priv
10.10.10.2       rac2-priv
155.16.170.201   rac1-vip
155.16.170.202   rac2-vip
Installing the Host-Based Software Needed for Storage To install the EMC Naviagent software, use the EMC software that came with your Dell|EMC system and follow the procedures in your Dell|EMC documentation. Verifying the Storage Assignment to the Nodes 1 On the Windows desktop, right-click My Computer and select Manage. 2 In the Computer Management window, click Device Manager. 3 Expand Disk drives.
Installing PowerPath for Dell|EMC Systems 1 On node 1, install EMC® PowerPath®. NOTE: For more information, see the EMC PowerPath documentation that came with your Dell|EMC storage system. 2 When the installation procedure is complete, restart your system. 3 Repeat step 1 and step 2 on the remaining nodes. Verifying Multi-Path Driver Functionality 1 Right-click My Computer and select Manage. 2 Expand Storage and click Disk Management. One disk appears for each LUN assigned in the storage.
2 In the Run field, enter cmd and click OK. 3 At the command prompt, enter diskpart. 4 At the DISKPART command prompt, enter automount enable. The following message appears: Automatic mounting of new volumes enabled. 5 At the DISKPART command prompt, enter exit. 6 Close the command prompt. 7 Repeat step 1 through step 6 on each of the remaining nodes. Preparing the OCR and Voting Disks for Clusterware 1 On the Windows desktop, right-click My Computer and select Manage.
The Welcome to the New Partition Wizard appears. b Click Next. c In the Select Partition Type window, select Logical drive and click Next. d In the Specify Partition Size window, enter 120 in the Partition size in MB field and click Next. e In the Assign Drive Letter or Path window, select Do not assign a drive letter or drive path and click Next. f In the Format Partition window, select Do not format this partition and click Next. g Click Finish.
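The wizard steps above can also be scripted. The following diskpart script is a sketch only: the disk number (1) and the 120 MB size are assumptions matching the OCR partitions described in this section, so substitute the values for your system before running it.

```
rem ocr_partitions.txt -- illustrative only; run as: diskpart /s ocr_partitions.txt
rem Assumes disk 1 is the shared disk prepared for OCR; adjust for your system.
select disk 1
create partition extended
create partition logical size=120
create partition logical size=120
```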
Preparing the Database Disk and Flash Recovery Area for Database Storage With OCFS This section provides information for creating logical drives that will be used to create the Oracle Cluster File System (OCFS) storage disk. NOTE: When using ASM storage management, the ASM data disk group should be larger than your database (multiple LUNs) and the ASM Flash Recovery Area disk group should be at least twice the size of your data disk group.
Preparing the Database Disk and Flash Recovery Area for Database Storage With ASM This section provides information about creating logical drives that will be used to create ASM disk storage. ASM disk storage consists of one or more disk groups that can span multiple disks.
3 If you find any drive letters assigned to the drives that you created in "Preparing the OCR and Voting Disks for Clusterware" on page 27, perform the following steps: a Right-click the logical drive and select Change Drive Letter and Paths. b In the Change Drive Letter and Paths window, select the drive letter and click Remove. c In the Confirm window, click Yes. d Repeat step a through step c for the remaining logical drives on the storage partition.
2 In the Oracle Clusterware - Autorun window, click Install/Deinstall Products. The Oracle Universal Installer (OUI) starts and the Welcome screen appears. 3 Click Next. 4 In the Specify Home Details window, accept the default settings and click Next. NOTE: Record the OraCR10g_home (CRS Home) path because you will need this information later. 5 In the Product Specification Prerequisite Checks window, make sure all the checks are completed successfully and then click Next.
c In the Specify Disk Configuration window, select Place OCR (Primary) on this partition and click OK. d Select the second partition and click Edit. e In the Specify Disk Configuration window, select Place OCR (Mirror) on this partition and click OK. 11 In the Cluster Configuration Storage window, perform the following steps for the voting disk: a Locate the three 50 MB partitions created in the procedure "Preparing the OCR and Voting Disks for Clusterware" on page 27.
e Use the pull-down menu of the Assign Drive Letter option to assign a drive letter to the partition. f Click OK. 14 In the Cluster Configuration Storage window, click Next. 15 Ignore the warning messages and click OK. 16 In the Summary window, click Install to start the installation procedure. The Install window appears, displaying an installation progress bar. The Configuration Assistant window appears and the OUI runs a series of configuration tools. The End of Installation window appears.
where %CD-ROM drive% is the drive letter of your CD drive. 2 In the Oracle Database 10g - Autorun window, click Install/Deinstall Products. The OUI starts and the Welcome screen appears. 3 Click Next. 4 In the Select Installation Type window, select Enterprise Edition and click Next. 5 In the Specify Home Details window under Destination, verify the following: • In the Name field, the Oracle database home name is OraDb10g_home1.
NOTE: You must perform the procedures as listed in the window before proceeding to the next step. 12 After completing the required procedures as listed in the End of Installation window, click Exit. 13 In the Exit Window, click Yes. Installing Oracle 10g R2 Patchset 10.2.0.4 1 Ensure that only 10.2.0.1 Clusterware and 10.2.0.1 Database binaries are installed on your system and that the seed database is not created yet. 2 Download the patchset 10.2.0.4 from the Oracle Metalink website at metalink.oracle.
%SystemDrive%\Oracle_patch\setup.exe where %SystemDrive% is the drive on which you unzipped the Oracle patchset. 2 In the Welcome screen, click Next. 3 In the Specify home details window, select the name OraCr10g_home from the drop-down list and click Next. 4 In the Specify Hardware Cluster Installation Mode window, click Next. 5 In the Product-Specific Prerequisite Checks window, click Next. 6 In the Summary window, click Install.
3 In the Specify Home Details window, select the name OraDb10g_home1 from the drop-down list and click Next. 4 In the Specify Hardware Cluster Installation Mode window, click Next. 5 In the Product-Specific Prerequisite Checks window, click Next. 6 In the Oracle Configuration Manager Registration window, click Next. 7 In the Summary window, click Install. 8 In the End of Installation window, perform all the steps listed in the Summary window.
5 In the Listener Configuration, Listener window, select Add and click Next. 6 In the Listener Configuration, Listener Name window in the Listener name field, accept the default setting and click Next. 7 In the Listener Configuration, Select Protocols window, in the Selected protocols field, select TCP and click Next. 8 In the Listener Configuration, TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
9 In the Database Credentials window, click Use the Same Password for All Accounts, enter a new password in the appropriate fields, and click Next. NOTE: Record your new password for later use in database administration. 10 In the Storage Options window, select Cluster File System and click Next. 11 In the Database File Locations window, select the location for storing database files: a Select Use Common Location for All Database Files. b Click Browse.
b Click OK. 16 Click Next. 17 In the Database Content window, accept the default values and click Next. 18 In the Database Services window, click Next. 19 In the Initialization Parameters window, click Next. 20 In the Database Storage window, click Next. 21 In the Creation Options window, accept the default values, and click Finish. 22 In the Summary window, click OK. The Database Configuration Assistant window appears, and the Oracle software creates the database.
Installing Oracle Clusterware Version 10.2.0.1 1 On node 1, insert the Oracle Clusterware CD into the CD drive. The OUI starts and the Welcome screen appears. If the Welcome screen does not appear: a Click Start→Run. b In the Run field, enter the following and click OK: %CD drive%\autorun\autorun.exe where %CD drive% is the drive letter of your CD drive. 2 In the Oracle Clusterware window, click Install/Deinstall Products. 3 In the Welcome screen, click Next.
a Locate the two 120 MB partitions that you created in the subsection "Preparing the OCR and Voting Disks for Clusterware" on page 27. b Select the first partition and click Edit. c In the Specify Disk Configuration window, select Place OCR (Primary) on this partition and click OK. d Select the second partition and click Edit. e In the Specify Disk Configuration window, select Place OCR (Mirror) on this partition and click OK.
1 Insert the Oracle Database 10g Release 2 CD into the CD drive. The OUI starts and the Welcome screen appears. If the Welcome screen does not appear: a Click Start→Run. b In the Run field, enter: %CD drive%\autorun\autorun.exe where %CD drive% is the drive letter of your CD drive. 2 Click OK to continue. The OUI starts and the Welcome window appears. 3 Click Next. 4 In the Select Installation Type window, click Enterprise Edition and click Next.
NOTE: You should perform the steps as listed in the window before proceeding with the next step. 12 Click Exit. Installing Patchset 10.2.0.4 NOTE: Perform the following patchset installation steps only when the 10.2.0.1 Clusterware and 10.2.0.1 Database binaries are installed on your system and the seed database has not yet been created. 1 Download the patchset 10.2.0.4 from the Oracle Metalink website located at metalink.oracle.com. 2 Unzip the patchset to the following location %SystemDrive%.
5 In the Summary window, click Install. 6 At the End of installation window, perform all the steps listed in the Summary window except step 1. 7 At the End of installation screen, click Exit and then click Yes to exit from the OUI. Installing Patchset 10.2.0.4 for Oracle 10g Database NOTE: Complete the following steps before creating a listener and a seed database. Ensure that all the Oracle services are running.
Configuring the Listener This section contains procedures to configure the listener, which is required to establish a remote client connection to a database. Perform the following steps on node 1: 1 Click Start→Run and enter netca. 2 Click OK. 3 In the Real Application Clusters Configuration window, select Cluster configuration and click Next. 4 In the Real Application Clusters Active Nodes window, select Select All nodes and click Next. 5 In the Welcome window, select Listener configuration and click Next.
EVM appears healthy NOTE: If the output indicated above does not appear, enter crsctl start crs. c Close the cmd window by entering exit. 2 On node 1, click Start→Run. 3 In the Run field, enter the following and click OK: dbca The Database Configuration Assistant starts. 4 In the Welcome window, select Oracle Real Application Clusters database and click Next. 5 In the Operations window, click Create a Database and click Next. 6 In the Node Selection window, click Select All and click Next.
13 In the Database Configuration Assistant window, click OK. The ASM Creation window appears, and the ASM Instance is created. NOTE: If the warning message Failed to retrieve network listener resources appears, click Yes to allow DBCA to create the appropriate listener resources. 14 In the ASM Disk Groups window, click Create New. 15 In the Create Disk Group window, enter the information for the database files. a In the Disk Group Name field, enter a name for the new disk group. For example, DATABASE.
e In the Generate stamps with this prefix field, enter FLASH, and click Next. f In the Stamp disks window, click Next. g Click Finish to save your settings. h Select the check boxes next to the available disks and click OK. The ASM Disk Group Window appears, indicating that the software is creating the disk group. When completed, the FLASH disk group appears in the Disk Group Name column.
26 In the Summary window, click OK. The Database Configuration Assistant window appears, and the Oracle software creates the database. NOTE: This procedure may take several minutes to complete. When completed, the Database Configuration Assistant window provides database configuration information. 27 Record the information in the Database Configuration Assistant window for future database administration. 28 Click Exit. The Start Cluster Database window appears and the cluster database starts.
3 In the Welcome screen, click Next. 4 In the Specify Home Details window, accept the default settings and click Next. NOTE: Record the OraCR10g_home (CRS Home) path because you will need this information later. 5 In the Product-Specific Prerequisite Checks window, click Next. 6 In the Specify Cluster Configuration window, perform the following steps: a Verify the public, private, and virtual Host names for the primary node.
12 Click Next. 13 Ignore the warning messages and click OK. 14 In the Summary window, click Install to start the installation procedure. NOTE: If a failure occurs in the Configuration Assistant window, perform the following steps and see "Troubleshooting" on page 64 and "Working Around Clusterware Installation Failure" on page 64. The Install window appears, displaying an installation progress bar. The Configuration Assistant window appears and the OUI runs a series of configuration tools.
• In the Path field, the complete Oracle home path is %SystemDrive%\oracle\product\10.2.0\db_1 where %SystemDrive% is the user’s local drive. NOTE: Record the path for later use. NOTE: The Oracle home path must be different from the Oracle home path that you selected in the Oracle Clusterware installation procedure. You cannot install the Oracle Database 10g R2 Standard x64 Edition with RAC and Clusterware in the same home directory. 6 Click Next.
%SystemDrive%:\%CRS_HOME%\bin> srvctl stop nodeapps -n where %SystemDrive% is the user’s local drive. 2 Stop all the Oracle services on all the nodes. 3 Click Start→Programs→Administrator Tools→Services. 4 Locate all Oracle services and stop them on both nodes. Installing the Patchset NOTE: You must install the patchset software from the node where the Oracle RAC 10g R2 software was installed. If this is not the node where you are running the OUI, exit and install the patchset from that node.
3 In the Specify home details window, select the name OraDb10g_home1 from the drop-down list to install the patchset to the Oracle home, and click Next. 4 In the Specify Hardware Cluster Installation Mode window, select Local Installation and click Next. 5 In the Summary window, click Install. During the installation, the following error message may appear: Error in writing to file oci.dll. To work around this issue, perform the following steps: a Cancel the patchset installation.
7 In the Listener Configuration Select Protocols window, select TCP in the Selected protocols field and click Next. 8 In the Listener Configuration TCP/IP Protocol window, select Use the standard port number of 1521 and click Next. 9 In the Listener Configuration More Listeners window, select No and click Next. 10 In the Listener Configuration Done window, click Next. 11 In the Welcome window, click Finish.
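For reference, the listener configuration that NETCA writes typically resembles the following listener.ora sketch. The listener name, host names, and address order are assumptions based on the example node names used in this guide; the file generated on your nodes may differ.

```
# listener.ora (illustrative sketch; the file NETCA generates differs per node)
LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
    )
  )
```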
9 In the Database Credentials window, click Use the Same Password for All Accounts, enter a new password in the appropriate fields, and click Next. NOTE: Record your new password for later use in database administration. 10 In the Storage Options window, select Automatic Storage Management (ASM) and click Next. 11 In the Create ASM Instance window, perform the following steps: a In the SYS password field, enter a new password in the appropriate fields. b Click Next.
16 In the Create Disk Group window, enter the following information for the Flash Recovery Area. a In the Disk Group Name field, enter a name for the new disk group. For example, FLASH. b In the Redundancy box, select External. c Click Stamp disks. d In the Select disks screen, select the disk which you plan to use for the Flash Recovery Area. Note that the Status is marked as Candidate device. e In the Generate stamps with this prefix field, enter FLASH, and click Next.
g In the Edit Archive Mode Parameters window, ensure that the path listed under the Archive Log Destinations is as follows: +FLASH/, where FLASH is the Flash Recovery Area disk group name that you specified in step a of step 17. h Click Next. 20 In the Database Content window, click Next. 21 In the Database Services window, click Next. 22 In the Initialization Parameters window, click Next. 23 In the Database Storage window, click Next. 24 In the Creation Options window, click Finish.
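For reference, the archive destination chosen in the Edit Archive Mode Parameters window corresponds to an initialization parameter similar to the following sketch. FLASH is the example disk group name used in this guide, and the exact parameter name and value that DBCA sets on your system may differ.

```
# Illustrative init.ora fragment; not generated verbatim by DBCA
LOG_ARCHIVE_DEST_1 = 'LOCATION=+FLASH/'
```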
Make sure that you can execute the following command from each of the existing nodes of your cluster where the host_name is the public network name of the new node: NET USE \\host_name\C$ You have the required administrative privileges on each node if the operating system responds with: Command completed successfully. NOTE: If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes.
7 Execute the following command to identify the node names and node numbers that are currently in use: CRS home\bin\olsnodes -n 8 Execute the crssetup.exe command using the next available node names and node numbers to add CRS information for the new nodes. For example: crssetup.
6 Execute the VIPCA utility from the bin subdirectory of the Oracle home using the -nodelist option with the following syntax that identifies the complete set of nodes that are now part of your RAC database beginning with Node1 and ending with NodeN: vipca -nodelist Node1,Node2,Node3,...NodeN 7 Add a listener to the new node only by running the Net Configuration Assistant (NetCA). After completing the procedures in the previous section, the new nodes are defined at the cluster database layer.
11 Review the information on the Summary dialog and click OK. The DBCA displays a progress dialog showing the DBCA performing the instance addition operation. When the DBCA completes the instance addition operation, the DBCA displays a dialog asking whether you want to perform another operation. 12 Click No and exit the DBCA, or click Yes to perform another operation.
If this occurs, perform the following steps to work around the error. These steps are detailed in Metalink Note ID 338924.1. This generally occurs if the Public interface is configured with an IP address in the networks 10.0.0.0/8, 172.16.0.0/16 or 192.168.1.0/24. 1 Click Start→Run. 2 In the Run field, enter the following and click OK: %SystemDrive%\Oracle\product\10.2.0\crs\bin\vipca where %SystemDrive% is the user’s local drive.
6 In the Welcome window, click Cancel. 7 When prompted, click Cancel, and then click Yes. Deleting Oracle Services 1 On node 1, launch the Services console. a Click Start→Run. b In the Run field, enter the following, and click OK: services.msc The Services window appears. 2 Identify and delete any remaining Oracle services. To delete a service: a Click Start→Run. b In the Run field, enter cmd and click OK.
where %SystemDrive% is the user’s local drive. The Oracle Symbolic Link Exporter (ExportSYMLinks) exports the symbolic links to the SYMMAP.TBL file in your current directory. d At the command prompt, enter the following: notepad SYMMAP.TBL 2 Ensure that OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3 appear in the file.
5 Launch the Oracle GUI Object Manager. At the command prompt, enter the following: %SystemDrive%\ora_bin_utils\GUIOracleOBJManager.exe where %SystemDrive% is the user’s local drive. The Oracle Object Manager window appears. 6 Delete the symlinks for the OCR (OCRCFG and OCRMIRRORCFG) and the voting disks (Votedsk1, Votedsk2, and Votedsk3). a Select OCRCFG, OCRMIRRORCFG, Votedsk1, Votedsk2, and Votedsk3. b Click Options and select Commit.
11 Repeat the procedures "Preparing the Disks for Oracle Clusterware" on page 26 and "Removing the Assigned Drive Letters" on page 30 to recreate your logical partitions and the procedure "Installing Oracle RAC 10g R2 Using OCFS" on page 31 to re-install Oracle RAC for OCFS, or "Installing Oracle RAC 10g R2 Using ASM" on page 41 to re-install Oracle RAC for ASM. Additional Troubleshooting This section provides recommended actions for additional problems that you may encounter.
• Turn off Spanning Tree on the switch. • Enable Port Fast Learning (or equivalent, which may be called something different depending on the brand of switch) on the ports of the switch to which your teamed NICs are attached. • Use Broadcom’s LiveLink feature by right-clicking the team, choosing Enable LiveLink, and following the instructions in the window.
c Clean the storage devices. See "Uninstalling Oracle Clusterware" on page 65 for more information. • PROBLEM: The Configuration Assistant fails to install successfully. – CAUSE: One or more storage devices need to be reformatted. – RESOLUTION: Perform the following procedures: a Uninstall Oracle Clusterware using OUI. b Uninstall any remaining Oracle services. c Clean the storage devices. See "Uninstalling Oracle Clusterware" on page 65 for more information.
n Verify the following: • The storage system is functioning properly. • All fiber-optic cables are connected and secure. • The cluster node can access the shared storage disks. See "Installing the Host-Based Software Needed for Storage" on page 25 and "Verifying Multi-Path Driver Functionality" on page 26. o Repeat step a through step n and reset each Oracle service back to its original setting. System Blue Screen • PROBLEM: The cluster nodes generate a blue screen.
crsctl set css misscount n where n is a value greater than 120. d Restart node 1 and log on as administrator. e Restart each of the other nodes and log on as administrator. Storage • PROBLEM: Disks appear as unreachable. – CAUSE: On the Windows desktop, when you right-click My Computer, select Computer Management, and then click Disk Management, the disks appear unreachable.
Next, ensure that the fiber optic cables connected to the cluster nodes and storage system are installed correctly. See "Cabling Your Dell|EMC Fibre Channel Storage" on page 17 for more information. VIPCA • PROBLEM: The VIPCA configuration fails. – CAUSE: The name of the public network adapter interface (or of the network interface assigned for the VIP, in the case of four network interfaces) is not identical on both cluster nodes.
Getting Help Dell Support For detailed information about using your system, see the documentation that came with your system components. For white papers, Dell Supported Configurations, and general information, visit dell.com/10g. For Dell technical support for your hardware and operating system software and to download the latest updates for your system, visit the Dell Support website at support.dell.com. Information about contacting Dell is provided in your system Installation and Troubleshooting Guide.
Index

C
cluster
  fibre channel, 9, 16
Clusterware
  installing, 42, 52
  preparing disks, 26
  uninstalling, 66

D
database disk, 29
disks
  database, 29
  flash recovery, 27
  voting, 26

E
EMC
  Naviagent, 25
  PowerPath, 8, 14

F
fibre channel
  Dell|EMC, 17
  SAN-attached, 16
  setting up, 16
flash recovery
  area, 29
  disks, 27

H
hardware
  connections, 16-17
  requirements, 9
help, 76
  Dell support, 76
  Oracle support, 76

I
IP addresses
  configuring, 22

L
listener
  configuring, 39, 47, 57

M
Multi-Path, 26
  driver, 26

N
Naviagent, 25
network
  configuring, 19
NIC
  port assignments, 20

O
OCFS, 29
  creating seed database, 40
  installing Oracle using, 31
OCR disk, 26
Oracle
  preparing disks for Clusterware, 26
Oracle Database 10g
  configuring, 52
  deploying, 52
OUI
  running, 66

P
partitions
  creating, 27
patches
  downloading, 39
  installing, 37
patchset
  installing, 45, 55
PowerPath
  installing, 26

S
seed database
  creating, 40
storage
  configuring, 19

T
TOE, 20

V
voting disk, 27
  creating logical drive, 28

W
Windows
  confi