Lnx_Stnwt.book Page 1 Wednesday, August 4, 2010 11:58 AM

Dell PowerEdge Systems
Oracle Database on Enterprise Linux x86_64
Storage and Network Guide
Version 1.
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Information in this publication is subject to change without notice.
© 2009–2010 Dell Inc. All rights reserved.
Contents

1 Overview
   Required Documentation for Deploying the Dell|Oracle 11g R2 Database
   Terminology Used in This Document
   Getting Help
      Dell Support
      Oracle Support
2 Configuring Your Network
3 Setting Up a Fibre Channel Cluster
   Hardware Connections for a Fibre Channel Cluster
   Cabling Your Fibre Channel Storage System
4 Setting Up a SAS Cluster for the Dell PowerVault MD3000 and MD1000 Expansion Enclosures
   Verifying and Upgrading the Firmware
   Installing the SAS 5/E Adapter Driver
   Performing the Post Installation Tasks
5 Setting Up an iSCSI Cluster for the Dell PowerVault MD3000i and MD1000 Storage Enclosures
   Setting Up the Hardware
   Installing Host-based Software Needed for Storage
   Verifying and Upgrading the Firmware
   Post Installation Tasks
6 Setting Up an iSCSI Cluster for the Dell|EqualLogic PS Series Storage System
7 Configuring Database Storage on the Host
   Using the fdisk Utility to Adjust a Disk Partition
   Configuring Shared Storage for Clusterware, Database, and Recovery Files in an RAC Environment
Index
1 Overview

This document provides a generalized guide to configuring the network and storage requirements for running the Dell|Oracle database on a system installed with the Red Hat Enterprise Linux or Oracle Enterprise Linux operating system. It applies to Oracle Database 11g R2 running on Red Hat Enterprise Linux 5.5 AS x86_64 or Oracle Enterprise Linux 5.5 AS x86_64.
Terminology Used in This Document

• This document uses the terms logical unit number (LUN) and virtual disk. These terms are synonymous and can be used interchangeably. The term LUN is commonly used in a Dell/EMC Fibre Channel storage system environment and virtual disk is commonly used in a Dell PowerVault SAS and iSCSI (Dell PowerVault MD3000 and Dell PowerVault MD3000i with Dell PowerVault MD1000 expansion) storage environment.
2 Configuring Your Network

This section provides information about configuring the public and private cluster network.

NOTE: Each node in a network requires a unique public and private Internet protocol (IP) address. An additional public IP address is required to serve as the virtual IP address for client connections and connection failover. Therefore, a total of three IP addresses are required for each node.
NOTE: Ensure that the Gateway address is configured for the public network interface. If the Gateway address is not configured, the grid installation may fail.

DEVICE=eth0
ONBOOT=yes
IPADDR=
NETMASK=
BOOTPROTO=static
HWADDR=
SLAVE=no
GATEWAY=

3 Edit the /etc/sysconfig/network file, and, if necessary, replace localhost.localdomain with the qualified public node name.
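As a filled-in sketch of the template above, a completed ifcfg-eth0 might look like the following. Every address and the MAC shown here are placeholder values, not settings from this guide; substitute the values assigned to your public network.

```shell
# Sketch of a completed /etc/sysconfig/network-scripts/ifcfg-eth0.
# All addresses and the MAC below are placeholders; use the values
# allocated for your public network interface.
DEVICE=eth0
ONBOOT=yes
IPADDR=10.10.10.11
NETMASK=255.255.255.0
BOOTPROTO=static
HWADDR=00:14:22:aa:bb:cc
SLAVE=no
GATEWAY=10.10.10.1
```

Because the file is plain shell variable assignments, it can be sourced to inspect the values it would apply.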
3 In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file. For example, using sample network parameters, the file appears as:

DEVICE=bond0
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=6 miimon=100 max_bonds=2"
BOOTPROTO=none

DEVICE=bondn is the name required for the bond, where n specifies the bond number. IPADDR is the private IP address.
Setting Up User Equivalence

Configuring ssh

To configure ssh:
1 On the primary node, log in as root.
2 Go to the sshsetup folder under the Grid binary folder and run the following sshUserSetup.sh script:

sh sshUserSetup.sh -hosts "host1 host2" -user grid -advanced
sh sshUserSetup.sh -hosts "host1 host2" -user oracle -advanced

where host1 and host2 are the cluster node names.
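After sshUserSetup.sh completes, passwordless ssh should work in both directions for each user. As a sketch (the host names and users are placeholders from the example above), the verification commands for each node can be assembled like this:

```shell
#!/bin/sh
# Sketch: build the passwordless-ssh verification commands to run after
# sshUserSetup.sh. BatchMode=yes makes ssh fail instead of prompting, so
# a password prompt shows up as an error. Host names are placeholders.
verify_cmds() {
    user=$1; shift
    for host in "$@"; do
        echo "ssh -o BatchMode=yes $user@$host date"
    done
}

verify_cmds grid host1 host2
```

Run the printed commands from each node; every one should return the remote date without prompting for a password.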
Table 2-2 describes the different interfaces, IP address settings, and the resolutions in a cluster.

Table 2-2.
Configuring a DNS Client

To configure a DNS client:
1 Add host entries in the /etc/hosts file. On each node, modify lines in the /etc/hosts file by typing:

127.0.0.1 localhost.localdomain localhost

2 On all nodes in the cluster, edit the /etc/resolv.conf file.
For a Cluster Using DNS

To set up an Oracle 11g R2 RAC using Oracle DNS (without GNS), ensure that you have:
1 At least two interfaces configured on each node, one for the private IP address and one for the public IP address.
2 A SCAN NAME configured on the DNS for Round Robin resolution to three addresses (recommended) or at least one address. The SCAN addresses must be on the same subnet as the virtual IP addresses and public IP addresses.
Configuring a DNS Client

To configure the changes required on the cluster nodes for name resolution:
1 Add host entries in the /etc/hosts file. On each node, modify lines in the /etc/hosts file by typing:

127.0.0.1 localhost.localdomain localhost
Prerequisites for Enabling IPMI

Each cluster node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5 or later and configured for remote control using LAN.

NOTE: It is recommended that you use a dedicated management network (DRAC port) for IPMI.

The Linux rpm required for ipmitool is OpenIPMI-tools-2.0.16-7.el5_4.1.x86_64.rpm.

Configuring the Open IPMI Driver
1 Log in as root.
Configuring BMC Using IPMItool

Use the following example to configure BMC using ipmitool version 2.0:
1 Log in as root.
2 Verify that ipmitool is communicating with the BMC using the IPMI driver. Use the following commands to check for the device ID in the output:

# ipmitool bmc info
Device ID : 32
Device Revision : 0
Firmware Revision : 0.20
IPMI Version : 2.
4 Configure IP address settings for IPMI using one of the following procedures:
• Using dynamic IP addressing—Dynamic IP addressing is the default assumed by Oracle Universal Installer. It is recommended that you select this option so that nodes can be added to or removed from the cluster more easily, as address settings can be assigned automatically.
Access Available : call-in / callback
Link Authentication : disabled
IPMI Messaging : disabled
Privilege Level : NO ACCESS
. . .

c Assign the desired administrator user name and password and enable messaging for the identified slot. Also set the privilege level for that slot when accessed over LAN (channel 1) to ADMIN (level 4).
IP Address Source : DHCP Address [or Static Address]
IP Address : 192.168.0.55
Subnet Mask : 255.255.255.0
MAC Address : 00:14:22:23:fa:f9
SNMP Community String : public
IP Header : TTL=0x40 Flags=0x40 Precedence=…
Default Gateway IP : 192.168.0.1
Default Gateway MAC : 00:00:00:00:00:00
. . .
6 Verify that BMC is accessible and controllable from a remote node in your cluster using the bmc info command. For example, if node2-ipmi is the network host name assigned to the BMC for node2, then to verify the BMC on node2 from node1, enter the following command on node1:

$ ipmitool -H node2-ipmi -U bmcuser -P password bmc info

where bmcuser is the administrator account and password is the password.
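In a cluster with more than two nodes, the same check is repeated against every other node's BMC. As a sketch (the -ipmi host-name suffix and the bmcuser account follow the example above and are assumptions about your naming scheme), the per-node check command can be assembled like this:

```shell
#!/bin/sh
# Sketch: assemble the remote BMC verification command from step 6 for a
# given node. The "-ipmi" host-name suffix and the bmcuser account are
# assumptions taken from the example in the text; substitute your own.
bmc_check_cmd() {
    node=$1; user=$2
    echo "ipmitool -H ${node}-ipmi -U $user -P <password> bmc info"
}

bmc_check_cmd node2 bmcuser
```

Run the printed command (with the real password in place of <password>) from each of the other cluster nodes.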
3 Setting Up a Fibre Channel Cluster

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.

This section helps you to verify the hardware connections, and the hardware and software configurations of the Fibre Channel cluster.
Table 3-1 lists the Fibre Channel hardware connections depicted in Figure 3-1 and summarizes the cluster connections.

Table 3-1.
Configuring SAN-Attached Fibre Channel

To configure your nodes in a four-port SAN-attached configuration:
1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0.
2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
3 Connect one optical cable from SP-A port 2 to Fibre Channel switch 0.
4 Connect one optical cable from SP-A port 3 to Fibre Channel switch 1.
Figure 3-3.
4 Setting Up a SAS Cluster for the Dell PowerVault MD3000 and MD1000 Expansion Enclosures

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.
Figure 4-1. Cabling the Serial-Attached SCSI (SAS) Cluster and the Dell PowerVault MD3000 Storage Enclosure
(Figure legend: client systems, private network, DNS and DHCP server, LAN/WAN, PowerEdge systems, PowerVault MD3000 storage system, two PowerVault MD1000 expansion enclosures; CAT 5e/6 public NIC cables, CAT 5e/6 copper Gigabit NIC cables, fibre optic cables.)

Table 4-1.
Table 4-1. SAS Cluster Hardware Interconnections (continued)

Cluster Component: PowerVault MD3000 storage enclosure
Connections:
• Two CAT 5e/6 cables connected to LAN (one from each storage processor module)
• Two SAS connections to each PowerEdge system node through the SAS 5/E cables

NOTE: For more information on the PowerVault MD3000 storage enclosure interconnection, see "Setting Up the Hardware" on page 31.
6 If applicable, connect two SAS cables from the Out ports of the first PowerVault MD1000 expansion enclosure to the In-0 ports of the second PowerVault MD1000 expansion enclosure.

NOTE: For information on configuring the PowerVault MD1000 expansion enclosure, see the PowerVault MD1000 storage system documentation at support.dell.com/manuals.

Figure 4-2.
Verifying and Upgrading the Firmware

To verify and upgrade the firmware:
1 Discover the direct-attached storage of the host system using the MDSM software that is installed on the host system.
Setting Up an iSCSI Cluster for the Dell PowerVault MD32xx and MD12xx Storage Enclosures

Setting Up the Hardware

For assistance in setting up your PowerVault MD32xx and PowerVault MD12xx expansion enclosure, see the PowerVault documentation at support.dell.com/manuals.
NOTE: To install the software on a Windows or Linux system, you must have administrative or root privileges.

The PowerVault MD3200 Series resource media offers the following three installation methods:
• Graphical Installation (Recommended)—This is the recommended installation procedure for most users. The installer presents a graphical wizard-driven interface that allows customization of which components are installed.
Console Installation

NOTE: Console installation only applies to Linux systems that are not running a graphical environment.

The autorun script in the root of the resource media detects when there is no graphical environment running and automatically starts the installer in a text-based mode. This mode provides the same options as graphical installation.

Silent Installation

This option allows you to install the software in an unattended mode.
Post Installation Tasks

Before using the PowerVault MD3200 Series storage array for the first time, complete a number of initial configuration tasks in the order shown. These tasks are performed using the MD Storage Manager (MDSM) software.
To verify storage array discovery:
1 Check the hardware and connections for possible problems. For specific procedures on troubleshooting interface problems, see the Owner's Manual.
2 Verify that the array is on the local subnetwork. If it is not, click the New link to manually add it.
3 Verify that the status of each storage array is Optimal.
5 Setting Up an iSCSI Cluster for the Dell PowerVault MD3000i and MD1000 Storage Enclosures

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.
Table 5-1. iSCSI Hardware Interconnections (continued)

Cluster Component: PowerVault MD3000i storage system
Connections:
• Two CAT 5e/6 cables connected to LAN (one from each storage processor module) for the management interface
• Two CAT 5e/6 cables per storage processor for iSCSI interconnect

NOTE: For additional information on the PowerVault MD3000i storage enclosure, see the PowerVault MD3000i documentation at support.dell.com/manuals.
To configure your nodes in a direct-attached configuration, see Figure 5-1:
1 Connect one CAT 5e/6 cable from a port (iSCSI HBA or NIC) of node 1 to the In-0 port of RAID controller 0 in the PowerVault MD3000i storage enclosure.
2 Connect one CAT 5e/6 cable from the other port (iSCSI HBA or NIC) of node 1 to the In-0 port of RAID controller 1 in the PowerVault MD3000i storage enclosure.
Figure 5-2.
To configure your nodes in a switched configuration, see Figure 5-2:
1 Connect one CAT 5e/6 cable from a port (iSCSI HBA or NIC) of node 1 to a port on network switch 1.
2 Connect one CAT 5e/6 cable from a port (iSCSI HBA or NIC) of node 1 to a port on network switch 2.
3 Connect one CAT 5e/6 cable from a port (iSCSI HBA or NIC) of node 2 to a port on network switch 1.
Installing Host-based Software Needed for Storage

To install the necessary host-based storage software for the PowerVault MD3000i storage system, use the Dell PowerVault Resource media that came with your PowerVault MD3000i storage system. Follow the procedures in the PowerVault MD3000i storage enclosure documentation at support.dell.com/manuals.
Setting Up an iSCSI Cluster for the Dell PowerVault MD32xxi and MD12xx Storage Enclosures

Setting Up the Hardware

For assistance in setting up or installing your PowerVault MD32xxi and PowerVault MD12xx expansion enclosure, see the PowerVault documentation at support.dell.com/manuals.
• Console Installation—This installation procedure is useful for Linux users who do not want to install an X Window environment on their supported Linux platform.
• Silent Installation—This installation procedure is useful for users who prefer to create scripted installations.

Graphical Installation (Recommended)

To complete graphical installation:
1 Close all other programs before installing any new software.
2 Insert the resource media.
Console Installation

NOTE: Console installation only applies to Linux systems that are not running a graphical environment.

The autorun script in the root of the resource media detects when there is no graphical environment running and automatically starts the installer in a text-based mode. This mode provides the same options as graphical installation with the exception of the MDCU-specific options.
6 Setting Up an iSCSI Cluster for the Dell|EqualLogic PS Series Storage System

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.

EqualLogic Terminology

The EqualLogic PS series storage array includes storage virtualization technology.
NOTE: It is recommended to use two Gigabit Ethernet switches. In the event of a switch failure in a single Ethernet switch environment, all hosts lose access to the storage until the switch is physically replaced and the configuration restored. In such a configuration, there must be multiple ports with link aggregation providing the inter-switch, or trunk, connection.
Figure 6-2.
An EqualLogic PS-series storage group can be segregated into multiple tiers or pools. Tiered storage provides administrators with greater control over how disk resources are allocated. At any one time, a member can be assigned to only one pool. It is easy to assign a member to a pool and to move a member between pools with no impact to data availability.
Table 6-1 shows a sample volume configuration.

Table 6-1.
Configuring the iSCSI Networks

It is recommended to configure the host network interfaces for iSCSI traffic to use Flow Control and Jumbo Frames for optimal performance. Use the ethtool utility to configure Flow Control.
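As a sketch of applying both settings (the interface names eth2/eth3 and the MTU of 9000 are assumptions; substitute your iSCSI NICs and verify that your switches also have jumbo frames and flow control enabled), the tuning might look like this. The script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch: enable flow control and jumbo frames on the iSCSI NICs.
# Interface names (eth2, eth3) and MTU 9000 are assumptions; adjust for
# your environment. DRY_RUN=1 (the default) prints the commands instead
# of executing them; set DRY_RUN=0 to apply them as root.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$1"
    else
        eval "$1"
    fi
}

apply_iscsi_tuning() {
    iface=$1
    # Enable RX/TX flow control with ethtool, then raise the MTU for
    # jumbo frames; append MTU=9000 to the ifcfg file so it persists.
    run "ethtool -A $iface rx on tx on"
    run "ip link set $iface mtu 9000"
    run "echo MTU=9000 >> /etc/sysconfig/network-scripts/ifcfg-$iface"
}

for nic in eth2 eth3; do
    apply_iscsi_tuning "$nic"
done
```

The dry-run default makes it safe to review the exact commands before running them on a production node.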
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:3348411 errors:0 dropped:0 overruns:0 frame:0
TX packets:2703578 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10647052076 (9.9 GiB) TX bytes:11209177325 (10.4 GiB)
Memory:d5ee0000-d5f00000

Configuring Host Access to Volumes

This section provides information about configuring host access to iSCSI volumes using the iscsiadm tool.
5 Create an interface for each network interface on the host used for iSCSI traffic:

iscsiadm -m iface -I iface_name --op=new

where iface_name is the name assigned to the interface.

iscsiadm -m iface -I iface_name --op=update -n iface.hwaddress -v hardware_address

where hardware_address is the hardware address of the interface obtained in step 4.
--interface=iface_name3 --interface=iface_name4

where group_ip_address is the IP address of the EqualLogic storage group, and iface_name1, iface_name2, iface_name3, iface_name4, and so on, are the network interfaces (as defined in step 5) on the host that are used for iSCSI traffic.

For example, the following command discovers four volumes at group IP address 10.16.7.
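The discovery command described above takes one --interface flag per host NIC so that a session is discovered through each path. As a sketch (the group IP and interface names are placeholders matching the examples in this section), the command can be assembled like this:

```shell
#!/bin/sh
# Sketch: build the iscsiadm sendtargets discovery command for an
# EqualLogic group. The group IP and the interface names are placeholders;
# use the interfaces created with `iscsiadm -m iface` in step 5.
build_discovery_cmd() {
    group_ip=$1; shift
    cmd="iscsiadm -m discovery -t st -p ${group_ip}:3260"
    # One --interface flag per host NIC used for iSCSI traffic.
    for ifname in "$@"; do
        cmd="$cmd --interface=$ifname"
    done
    echo "$cmd"
}

build_discovery_cmd 10.16.7.100 eth0-iface eth1-iface
```

Printing the command first makes it easy to confirm every iSCSI interface is included before running it as root.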
Iface Name: eth1-iface
Target: iqn.2001-05.com.equallogic:0-8a0906-93ee59d02-674f999767d4942e-mdi-data1
Portal: 10.16.7.100:3260,1
Iface Name: eth0-iface
Iface Name: eth1-iface
Target: iqn.2001-05.com.equallogic:0-8a0906-95ce59d02-2e0f999767f4942e-mdi-data2
Portal: 10.16.7.100:3260,1
Iface Name: eth0-iface
Iface Name: eth1-iface
Target: iqn.2001-05.com.equallogic:0-8a0906-97be59d02-d7ef99976814942e-mdi-fra1
Portal: 10.16.7.
Logging in to [iface: eth0-iface, target: iqn.2001-05.com.equallogic:0-8a0906-95ce59d02-2e0f999767f4942e-mdi-data2, portal: 10.16.7.100,3260]
Logging in to [iface: eth0-iface, target: iqn.2001-05.com.equallogic:0-8a0906-93ee59d02-674f999767d4942e-mdi-data1, portal: 10.16.7.100,3260]
Logging in to [iface: eth0-iface, target: iqn.2001-05.com.equallogic:0-8a0906-97be59d02-d7ef99976814942e-mdi-fra1, portal: 10.16.7.
Logging in to [iface: eth1-iface, target: iqn.2001-05.com.equallogic:0-8a0906-95ce59d02-2e0f999767f4942e-mdi-data2, portal: 10.16.7.100,3260]
Logging in to [iface: eth1-iface, target: iqn.2001-05.com.equallogic:0-8a0906-93ee59d02-674f999767d4942e-mdi-data1, portal: 10.16.7.100,3260]
Logging in to [iface: eth1-iface, target: iqn.2001-05.com.equallogic:0-8a0906-97be59d02-d7ef99976814942e-mdi-fra1, portal: 10.16.7.
Configuring Device Mapper Multipath to Volumes

1 Run the /sbin/scsi_id command against the devices created for Oracle to obtain their unique device identifiers:

/sbin/scsi_id -gus /block/

For example:
# scsi_id -gus /block/sda

2 Uncomment the following section in /etc/multipath.conf.
4 Add the following section in /etc/multipath.conf. The WWID is obtained from step 1. Ensure the alias names are consistent on all hosts in the cluster.

multipaths {
    multipath {
        wwid WWID_of_volume1
        alias alias_of_volume1
    }
    multipath {
        wwid WWID_of_volume2
        alias alias_of_volume2
    }
    (Add a multipath subsection for each additional volume.)
}

The following sample includes configurations of four volumes.
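Because the multipaths section is one stanza per volume and the aliases must match on every node, it can help to generate the stanzas from a single list of WWID/alias pairs. A minimal sketch (the second WWID below is a placeholder; the first is the fra1 sample value used later in this section):

```shell
#!/bin/sh
# Sketch: emit a multipaths{} section for /etc/multipath.conf from
# WWID/alias pairs. Use the WWIDs returned by scsi_id in step 1 and keep
# the alias names identical on every host in the cluster.
emit_multipaths() {
    echo "multipaths {"
    while [ $# -ge 2 ]; do
        printf '    multipath {\n        wwid %s\n        alias %s\n    }\n' "$1" "$2"
        shift 2
    done
    echo "}"
}

emit_multipaths \
    36090a028d059be972e9414689799efd7 fra1 \
    36090a028d059ee970000000000000000 data1
```

Generating the section once and copying it to all nodes avoids the alias mismatches that step 4 warns about.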
5 Restart the multipath daemon and verify that the alias names are displayed in the multipath -ll output:

service multipathd restart
chkconfig multipathd on
multipath -ll

For example:

fra1 (36090a028d059be972e9414689799efd7) dm-13 EQLOGIC,100E-00
[size=5.
6 Verify that the /dev/mapper/* devices are created. These device names must be used to access and interact with the multipath devices in the subsequent sections.
7 Configuring Database Storage on the Host

WARNING: Before you begin any of the procedures in this section, read the safety information that shipped with your system. For additional best practices information, see dell.com/regulatory_compliance.

Oracle Real Application Clusters (RAC) requires an ordered list of procedures.
Table 7-1.
5 In the /proc/partitions file, ensure that:
• All PowerPath pseudo devices appear in the file with similar device names across all nodes. For example: /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
• In the case of the PowerVault MD3000, MD3000i, or the EqualLogic storage array, all the virtual disks or volumes appear in the file with similar device names across all nodes.
Adjusting Disk Partitions for Systems Running the Linux Operating System

CAUTION: In a system running the Linux operating system, align the partition table before data is written to the LUN/virtual disk. Aligning the partition rewrites the partition map, which destroys all data on the LUN/virtual disk.

Example: fdisk Utility Arguments

The following example indicates the arguments for the fdisk utility.
The system displays the following message:

The number of cylinders for this disk is set to 8782. There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
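Partition alignment of this kind typically means starting the first partition on a 64 KiB boundary rather than at the legacy fdisk default of sector 63. As a sketch of the arithmetic (the 512-byte sector size and 64 KiB alignment target are assumptions; 128 sectors × 512 bytes = 64 KiB):

```shell
#!/bin/sh
# Sketch: round a partition start sector up to a 64 KiB boundary,
# assuming 512-byte sectors (so the boundary is every 128 sectors).
# Both the sector size and the alignment target are assumptions; match
# them to your storage array's stripe/segment size.
SECTOR_BYTES=512
ALIGN_BYTES=65536   # 64 KiB alignment target

aligned_start_sector() {
    start=$1
    align_sectors=$((ALIGN_BYTES / SECTOR_BYTES))
    # Round up to the next multiple of align_sectors.
    echo $(( (start + align_sectors - 1) / align_sectors * align_sectors ))
}

aligned_start_sector 63    # legacy fdisk default start; rounds up to 128
```

In fdisk, the aligned value is applied in expert mode with the b (move beginning of data) command before writing the partition table.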
Default user to own the driver interface [ ]: grid
Default group to own the driver interface [ ]: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y

3 Perform this step only if the RAC configuration uses shared storage and a Linux Device Mapper Multipath driver.
a Set the ORACLEASM_SCANORDER parameter in /etc/sysconfig/oracleasm to dm.
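As a sketch of applying the scan-order change non-interactively, the edit can be scripted with sed. The script below edits a temporary copy so it is safe to run anywhere; on a real node you would edit /etc/sysconfig/oracleasm in place as root. The companion ORACLEASM_SCANEXCLUDE="sd" setting is an assumption commonly paired with ORACLEASM_SCANORDER="dm" so that ASMLib scans only the device-mapper devices and skips the underlying sd paths; confirm it against your ASMLib documentation.

```shell
#!/bin/sh
# Sketch: point ASMLib at device-mapper devices first by setting
# ORACLEASM_SCANORDER (and, as an assumed companion setting,
# ORACLEASM_SCANEXCLUDE) in a copy of /etc/sysconfig/oracleasm.
conf=$(mktemp)
printf 'ORACLEASM_SCANORDER=""\nORACLEASM_SCANEXCLUDE=""\n' > "$conf"

# Scan dm devices first; exclude plain sd paths (assumption, see above).
sed -i 's/^ORACLEASM_SCANORDER=.*/ORACLEASM_SCANORDER="dm"/' "$conf"
sed -i 's/^ORACLEASM_SCANEXCLUDE=.*/ORACLEASM_SCANEXCLUDE="sd"/' "$conf"

cat "$conf"
```

After editing the real file, restart the oracleasm service so the new scan order takes effect.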