Dell EMC SC Series: Oracle Best Practices

Abstract
Best practices, configuration options, and sizing guidelines for Dell EMC™ SC Series storage in Fibre Channel environments when deploying Oracle®.
Revisions
Date          Description
January 2012  Initial release
April 2012    Content and format change
July 2013     Added ORION information
April 2015    Content and format change
July 2017     Major rewrite. Made document agnostic with respect to Dell EMC SC Series arrays. Format changes.
August 2018   Content changes for preallocation
May 2020      Changed setting of port down

Acknowledgments
Author: Mark Tomczik

The information in this publication is provided “as is.” Dell Inc.
Table of contents
Revisions ... 2
Acknowledgments ... 2
Table of contents ... 3
3.17 Data Progression pressure reports ... 33
3.18 Volume distribution reports ... 34
3.19 Storage profiles ... 35
6.2 ASMLib ... 77
6.2.1 ASMLib and PowerPath ... 79
6.2.2 ASMLib and Linux DM-Multipath ... 80
Executive summary
Managing and monitoring database storage, capacity planning, and data classification for storage are some of the daily activities and challenges of database administrators (DBAs). These activities also have impacts on the database environment, performance, data tiering, and data archiving. With traditional storage systems, DBAs have limited ability to accomplish these activities in an effective and efficient manner, especially from a storage perspective.
1 Introduction
When designing the physical layer of a database, DBAs must consider many storage configuration options. In most Oracle deployments, storage configuration should provide redundancies to avoid downtime for such events as component failures, maintenance, and upgrades. It should also provide ease of management, performance, and capacity to meet or exceed business requirements.
Dell SC220 enclosure with MLC and SLC drives
Read-intensive drives (MLCs) provide greater capacity and lower cost than write-intensive drives (SLCs), but SLCs have greater endurance than MLCs. This makes SLCs optimal for workloads with heavy write characteristics, and MLCs optimal for workloads with heavy read characteristics. An SC Series hybrid array can be deployed with a combination of SSDs and HDDs, with each media type in its own storage tier.
2 Fibre Channel connectivity
SC Series arrays have been tested to work with Dell, Emulex®, and QLogic® HBA cards. They also support simultaneous transport protocols, including Fibre Channel (FC), iSCSI, and FCoE. Although this document was created for FC environments with QLogic HBAs, much of it should also apply to iSCSI and FCoE.
An example of soft zoning with an SC8000 is illustrated in Figure 2.
When soft zoning with multiple controllers in virtual port mode, create the following zones in both fabrics:
• A zone that includes half the physical ports from both controllers in one fabric, and a zone that includes the remaining ports from both controllers in the other fabric. For example: one zone could have ports 1 and 2 from both controllers, and the other zone could have ports 3 and 4 from both controllers.
Four zones (one for port 2 from each HBA) in fabric 2

Zones: Two dual-port server HBAs and two SC Series controllers with quad front-end ports

FC zone  Fabric  Server HBA  Server HBA port  SC Series controller  SC Series controller ports
1        1       1           1                1                     1, 2 or 1, 3
1        1       1           1                2                     1, 2 or 1, 3
2        1       2           1                1                     1, 2 or 1, 3
2        1       2           1                2                     1, 2 or 1, 3
3        2       1           2                1                     3, 4 or 2, 4
3        2       1           2                2                     3, 4 or 2, 4
4        2       2           2                1                     3, 4 or 2, 4
4        2       2           2                2                     3, 4 or 2, 4
2.1.2 Hard (port) zoning
Hard zoning is based on defining specific ports in the zone. Because the zone is based on ports, if the server or SC Series array is moved to a different port or switch, the fabric will require an update. This can cause issues with the manageability or supportability of the fabric. Therefore, Dell EMC does not recommend hard zoning with SC Series arrays.
QLogic HBA BIOS settings

Adapter Settings:
• Host Adapter BIOS: Enable
• Connection Options: QLE25xx and earlier: 1 (point-to-point only); QLE26xx and later: default

Advanced Adapter Settings / Selectable Boot Settings:
• Each HBA port has two paths to the boot volume. The WWN for each path should be selected, except when installing and configuring Dell EMC PowerPath™.
2.2.2 Server FC HBA driver settings: timeouts and queue depth
Configure the link-down timeout and, if necessary, the queue depth in Linux after backing up the original QLogic adapter configuration file. The timeout value determines how long the server waits after losing connectivity before it destroys the connection.
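The QLogic qla2xxx driver exposes these as module parameters; the exact names and defaults vary by driver release, so treat the following as a sketch (the values shown are illustrative assumptions, not Dell EMC-certified settings):

```
# Hypothetical /etc/modprobe.d/qla2xxx.conf -- back up the original file first.
# qlport_down_retry: seconds to wait before failing I/O on a downed port.
# ql2xmaxqdepth: per-LUN queue depth; tune only after workload testing.
options qla2xxx qlport_down_retry=5 ql2xmaxqdepth=32
```

After editing the file, rebuild the initramfs (for example, with dracut -f) and reboot so the driver loads with the new values.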
3 SC Series array
Storage administrators must make complex decisions daily on storage configuration, usage, and planning. For example, when creating a volume, the question may be asked: will it be sized appropriately? If all volumes are oversized in a traditional array, there is the added issue of overprovisioning the array. SC Series storage addresses these complex decisions with a robust set of features that provide an easy-to-manage storage solution.
SC Series array 3.2 Benefits of deploying Oracle on SC Series storage Some of the benefits of deploying Oracle databases on SC Series storage are listed in Table 6.
The actions taken in virtual port mode when a controller or controller port failure occurs are shown in Table 7 (Failover mechanics with virtual ports).
SC Series array SC Series fault domain 2 and physical/virtual port assignments 3.6 Redundancy for SC Series front-end connections The following types of redundancy are available: Storage controller redundancy: The ports on an offline storage controller move to the remaining available storage controller. Storage controller port redundancy: I/O activity on a failed port moves to another available port in the same fault domain (providing virtual port mode is enabled).
Figure 9 shows a server object in DSM with multiple initiators: the first and second dual-port HBAs assigned to DSM server object r730xd-1.
An important DSM server object attribute is Operating System (Figure 10). By setting Operating System to the type of OS intended to be installed on the physical server, SC Series storage implements a set of specific OS rules to govern the automated process of mapping volumes to the server object.
SC Series array 3.8 Disk pools and disk folders In most configurations, regardless of the disk’s performance characteristics, all disks should be assigned to a single disk folder to create one virtual pool of storage. The virtual pool of storage is referred to as the pagepool and it corresponds to a disk folder in the SC Series system. The default disk folder is called Assigned. Assigned disk folder There are some cases where multiple disk folders (multiple pagepools) may be needed.
SC Series array 3.9 Storage types When creating the Assigned disk folder, the SC Series system requires a storage type be defined and set for the folder (Figure 14). A storage type defines the type of Tier Redundancy applied to the available storage tiers, and the Datapage Size. Default values for Tier Redundancy and Datapage Size are heuristically generated and are appropriate for most deployments. Contact Dell Support for advice should a change from the default value be desired.
SC Series array Displaying defined storage types Assigned disk folder configured with redundant 512 KB pages 3.10 Data page The size of a data page is defined by the storage type and is the space taken from a disk folder and allocated to a volume when space is requested. In a default SC configuration, space is allocated from the Assigned disk folder.
SC Series array 3.11 Tiered storage All disk space within the pagepool is allocated into at least one, and up to three storage tiers. A tier defines the type of storage media used to save data. When only two types of disks are used in an SC Series array, the array automatically creates two tiers of storage. The fastest disks are placed in tier 1 (T1), and higher capacity, cost-efficient disks with lower performance metrics are assigned to tier 3 (T3).
SC Series array Multiple storage tiers 3.12 Tier redundancy and RAID Data within tiers is protected by redundancy through the implementation of RAID technology. RAID requirements for each disk tier are based on the type, size, and number of disks in the tier, and will result in either single or dual redundancy of the data on a volume. In rare cases, redundancy for a tier can be disabled by using RAID 0, but caution is advised. For RAID 0 usage, contact Dell Support for advice.
SC Series array Tier redundancy and RAID types Tier redundancy Description Non-redundant SC Series arrays will use RAID 0 in all classes, in all tiers. Data is striped but provides no redundancy. If one disk fails, all data is lost. Dell EMC does not recommend using non-redundant (RAID 0) storage unless data has been backed up elsewhere, and then only in specific cases after a thorough evaluation of business requirements. Contact Dell Support for advice.
SC Series array Storage profiles define tier redundancy, or the RAID level used to protect data on a volume. In most Oracle environments, using the default tier redundancy provides appropriate levels of data protection, good I/O performance, and storage conservation for all types of database applications. Therefore, Dell EMC recommends using the default tier redundancy and evaluating its suitability before attempting to change it.
Note: A RAID rebalance should not be performed unless sufficient free disk space is available within the assigned disk folder, and it should only be done when most appropriate for application requirements.
RAID stripes in SC Series arrays
SC Series arrays store the most active data on RAID 10 and the least active data on RAID 5 or RAID 6. Distributing data across more drives is marginally less efficient but decreases vulnerability. Conversely, distributing data across fewer drives is more efficient but marginally increases vulnerability. To view RAID stripe widths and efficiencies, in DSM, right-click the array, select Edit Settings, and click Storage.
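The efficiency figures shown there follow directly from the stripe geometry: a RAID 5-N stripe carries one parity segment per N disks, and a RAID 6-N stripe carries two. A quick arithmetic check of the standard SC Series stripe widths:

```shell
# RAID efficiency = data segments / total segments in one stripe.
# RAID 5-N has N-1 data segments; RAID 6-N has N-2.
awk 'BEGIN {
    printf "RAID 5-5:  %.1f%%\n", (5-1)/5*100
    printf "RAID 5-9:  %.1f%%\n", (9-1)/9*100
    printf "RAID 6-6:  %.1f%%\n", (6-2)/6*100
    printf "RAID 6-10: %.1f%%\n", (10-2)/10*100
}'
```

The wider stripes (5-9, 6-10) are the more space-efficient choices, at the cost of a larger fault set per stripe.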
The RAID Stripe Width drop-down fields show the available stripe widths for RAID 5 and RAID 6.
RAID efficiencies of stripe widths

3.15 RAID penalty
Depending on the RAID level implemented, data and parity information may be striped between multiple disks. Before any write operation is considered complete, the parity must be calculated for the data and written to disk.
Table 11 lists the RAID penalty and description for each SC Series RAID level.
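As a back-of-the-envelope illustration (the workload numbers below are assumptions, not measurements from any array), the write penalty converts front-end IOPS into the back-end IOPS the disks must actually service:

```shell
# 10,000 front-end IOPS, 70% reads, on RAID 5 (write penalty of 4).
# Reads pass through unchanged; each write costs PENALTY back-end I/Os.
FRONT_IOPS=10000
READ_PCT=70
PENALTY=4
READS=$(( FRONT_IOPS * READ_PCT / 100 ))
WRITES=$(( FRONT_IOPS - READS ))
BACKEND=$(( READS + WRITES * PENALTY ))
echo "back-end IOPS: $BACKEND"   # 7000 + 3000*4 = 19000
```

The same workload on RAID 10 (penalty of 2) would need only 13,000 back-end IOPS, which is why the most write-active data belongs on RAID 10.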
By default, Data Progression runs every 24 hours at 7 PM system time. This schedule start time can be changed to avoid resource contention between heavy I/O activity produced by databases and the activity generated by Data Progression cycles. A maximum elapsed time for Data Progression cycles can also be set. It is recommended to consult with Dell Support to assess the appropriateness of a change.
SC Series array 3.17 Data Progression pressure reports A tier can become full through normal usage, by data movement from Data Progression cycles, or from frequent database snapshots with long retention periods. When a tier becomes full, the SC Series array writes data to the next lower tier which can cause performance issues because of the RAID penalty. Therefore, Dell EMC recommends using Data Progression pressure reports to monitor disk usage.
Specific date period for pressure report
For more information on Data Progression pressure reports, see the Dell Storage Manager Administrator’s Guide available on the Dell Support website.

3.18 Volume distribution reports
Dell EMC recommends reviewing volume metrics using the subtabs Volumes, Volume Growth, and Storage Chart after selecting a volume in DSM (see Figure 26).
Volume distribution reports
These metrics help determine volume usage and growth.
SC Series array Volume growth report Volume growth chart 3.19 Storage profiles Dell Fluid Data™ storage automatically migrates data (Data Progression) to the optimal storage tier based on a set of predefined or custom policies called storage profiles.
Depending on the type of storage media used in SC Series arrays, storage profiles will vary. The default standard storage profiles are described in Table 13.
Default standard profiles in SC Series all-flash arrays

Flash Optimized with Progression (Tier 1 to All Tiers)
  Initial write tier: 1
  Writeable: T1 RAID 10, RAID 10-DM
  Snapshots: T2 RAID 5-9, RAID 6-10; T3 RAID 5-9, RAID 6-10
  Progression: to all tiers

Write Intensive (Tier 1)
  Initial write tier: 1
  Writeable: T1 RAID 10, RAID 10-DM
  Snapshots: T1 RAID 10, RAID 10-DM
  Progression: no

Flash Only with Progression (Tier 1 to Tier 2)
  Initial write tier: 1
  Writeable: T1 RAID 10, RAID 10-DM
  Snapshots: T2
Oracle, or infrequent large Oracle data loads without overloading T1 SSDs. This is also referred to as the cost-optimized profile. It requires a Data Progression license. Because SSDs are automatically assigned to T1, profiles that include T1 allow volumes to use SSD storage.
To display Storage Profiles in the navigation tree, or to be able to create custom storage profiles, perform the following:
1. Right-click the SC Series array and select Edit Settings.
2. Select Preferences.
3. Select the check box Allow Storage Profile Selection.
Note: Once the ability to view and create Storage Profiles has been enabled, it cannot be disabled.
SC Series array Once the consistent snapshot is made, the database can be taken out of BEGIN BACKUP mode. If the entire database (datafiles, system, sysaux, control files, redo log files, etc.) resides on a single volume and is opened, a standard snapshot of that volume can be made, but the database must still be placed in BEGIN BACKUP mode prior to the creation of the snapshot.
Note: Consistent snapshot profiles provide write-consistency and data integrity across all SC Series volumes in the snapshot. They do not provide write-consistency and data integrity at the application level. For that, Oracle BEGIN/END BACKUP must be used with consistent snapshot profiles. For information on the process of creating a consistent snapshot and using BEGIN/END BACKUP, see section 3.22. Expiration times can be set on snapshots.
Once a snapshot is created, view volumes can be created from it and presented to a DSM server object. The server can then use the view volume as it would any other volume. Once the view volume is created, no additional space is allocated to it as long as no writes are made to the view volume. See lun 1 View 1 in Figure 33.
Dell EMC recommends creating snapshots under these conditions:
• Immediately before a database goes live, or before and after an upgrade, major change, repair, or any major maintenance operation
• Once per day to allow Data Progression to move age-appropriate data more efficiently and effectively to other storage types and tiers
• On a schedule that satisfies appropriate business requirements for recovery point objective (RPO) and recovery time objective (RTO)
SC Series array To assign a volume to a snapshot profile, see section 3.25. To assign snapshot profiles to a volume after the volume has been created, right-click the volume from the navigation tree and select Set Snapshot Profiles.
SC Series array To create a snapshot of an Oracle database, see section 3.22. To create a snapshot of a non-Oracle database volume, follow these steps: 1. Make sure the volume belongs to a snapshot profile. 2. In DSM, select the Storage view and select the SC Series array containing the desired volume. 3. Select the Storage tab, right-click the desired volume, and select Create Snapshot. 4. Follow the remaining instructions in the Create Snapshot wizard.
3.22 Consistent snapshot profiles and Oracle
Consistent snapshots are recommended in Oracle environments and should be considered under the following conditions:
• Immediately before the database goes live, or before and after an Oracle upgrade, major change, repair, or any major maintenance operation
• Once per day to allow Data Progression to move age-appropriate data more efficiently and effectively to other storage types and tiers
Consistent snapshot profile containing ASM volumes for one database
47 Dell EMC SC Series: Oracle Best Practices | CML1114
SC Series array 3.23 Using Data Progression and snapshots In order to utilize Data Progression effectively, snapshots must be used. The frequency of when the snapshots are created will be dependent on each environment, but a good place to start would be to create them at least once a week on all volumes.
Read-ahead and write cache global settings
For information on setting cache for a volume, see section 3.27.

3.25 Creating volumes (LUNs) in DSM
When creating LUNs for an Oracle database, it is suggested to create an even number of volumes that are distributed evenly between SC Series controllers in a dual-controller array. The even LUN distribution across controllers distributes I/O across both controllers. To create a volume, use the Create Volume wizard in DSM.
The maximum size of a volume and total storage is dependent on the version of Oracle:
- If the COMPATIBLE.ASM and COMPATIBLE.RDBMS ASM disk group attributes are set to 12.1 or greater:
- If the COMPATIBLE.ASM or COMPATIBLE.RDBMS ASM disk group attribute is set to 11.1 or 11.
SC Series array 8. In most Oracle deployments, the database volumes should be removed from the Daily profile schedule. However, before removing them, assess the change. 9. Uncheck Daily and select the consistent snapshot profile for the database. Select OK. 10. To present the volume to a server, select the Change option that corresponds to Server. 11. Select the server and click OK.
12. Select Advanced Mapping.
13. In Oracle deployments, most SC Series volumes should be presented to database servers using multiple paths to avoid a single point of failure should a device path fail. Leave the settings in the Restrict Mapping Paths and Configure Multipathing sections of the Advanced Mapping wizard at their defaults to ensure all appropriate mappings are used and the volume mappings are balanced between both controllers.
14.
17. If multiple pagepools are defined, select the storage type associated with the pagepool from which space should be allocated for the volume.
18. As appropriate, choose the options to create the volume as a replication, Live Volume, or to preallocate storage, and click OK.
After a volume has been created and presented to the SC server object, a device scan must be performed on the physical server before the server can use the volume.
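On Linux, that scan is typically issued through the SCSI host scan interface in sysfs. The sketch below only prints the command for each named host so it can be reviewed before running it as root; the host names are illustrative (list the real ones with ls /sys/class/scsi_host):

```shell
# Print the rescan command for each SCSI host given as an argument.
# Run the emitted lines as root on the database server.
print_rescan_cmds() {
    for h in "$@"; do
        printf "echo '- - -' > /sys/class/scsi_host/%s/scan\n" "$h"
    done
}
print_rescan_cmds host1 host2
```

The rescan-scsi-bus.sh script from the sg3_utils package performs the same rescan with additional checks and is often preferred on production servers.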
SC Series array 3.26 Preallocating storage The SC Series array can preallocate storage to a volume when the volume is initially presented to an SC Series server object. This preallocation can be performed if the Preallocate Storage option is enabled system-wide in the SC Series array and selected during SC Series volume creation.
SC Series array 3. Select the Storage tab and in the main navigation tree, right-click the SC Series array. 4. Select Edit Settings. 5. Select Preferences. 6. Under Storage > Storage Type, select the check box, Allow Preallocate Storage Selection, and click OK. After the feature has been enabled, a volume can be preallocated by selecting check box Preallocate Storage after the volume has been presented to a server object and selecting the remaining steps from the Create Volume wizard.
SC Series array Volume metrics after preallocation To see the progress of preallocation, perform the following steps to view the SC Series background processes: 1. Select the Storage view. 2. From the Storage navigation tree, select an SC Series array.
3. Select the Storage tab > the array object folder > Background Processes.

3.27 Volume cache settings
Dell EMC recommends disabling write cache for arrays containing flash. Disabling read-ahead cache in all-flash arrays is application-specific and may or may not improve performance.
3.28 Data reduction with deduplication and compression
As the demand for storage capacity increases, so does the need for data reduction through deduplication and compression. Consider, for example, database data that are candidates for archival, or that are retained in the database for long periods of time and undergo infrequent changes.
Server multipathing 4 Server multipathing An I/O path generally consists of an initiator port, fabric port, target port, and LUN. Each permutation of this I/O path is considered an independent path. Dynamic multipathing/failover tools aggregate these independent paths into a single logical path. This abstraction provides I/O load balancing across the HBAs, as well as nondisruptive failovers on I/O path failures.
Server multipathing PowerPath maintains persistent mappings between pseudo-devices and their corresponding back-end LUNs and records the mapping information in configuration files residing in directory /etc. With PowerPath 4.x and higher, mapping configuration exists in several files: /etc/emcp_devicesDB.dat /etc/emcp_devicesDB.idx /etc/powermt_custom.
In RAC environments, there may be occasions where PowerPath pseudo devices are not consistent across nodes. If consistent device names are desired, Dell EMC’s utility emcpadm can be used to rename the devices.
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid 36000d3100003d0000000000000000241
        alias oraasm-crs1
    }
    multipath {
        wwid 36000d3100003d0000000000000000242
        alias oraasm-data1
    }
    multipath {
        wwid 36000d3100003d0000000000000000243
        alias oraasm-data2
    }
    multipath {
        wwid 36000d3100003d0000000000000000244
        alias oraasm-fra1
    }
    multipath {
        wwid 36000d3100003d0000000000000000245
        alias oraasm-fra2
    }
}
DM-Multipath devices for ASM
Once /etc/multipath.conf has been updated, reload DM-Multipath:
/etc/init.d/multipathd reload
Dell EMC recommends using a naming convention for all DM-Multipath pseudo-device aliases that provides easy device identification and management.
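As a sketch of that convention, a stanza for each WWID/alias pair can be generated and pasted into the multipaths { } section of /etc/multipath.conf (the WWID shown is taken from the example above; the helper name is hypothetical):

```shell
# Emit one multipath { } stanza for a WWID/alias pair.
mk_stanza() {
    printf 'multipath {\n    wwid %s\n    alias %s\n}\n' "$1" "$2"
}
mk_stanza 36000d3100003d0000000000000000241 oraasm-crs1
```

Generating the stanzas this way keeps the oraasm-* prefix consistent across all database volumes, which simplifies UDEV rules and ASM discovery strings later.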
5 Sizing an SC Series array for Oracle
Many of the recommended settings for configuring SC Series arrays for Oracle are mentioned in section 3. This section covers additional information on sizing an SC Series array for Oracle deployments. In a balanced system, all components from processors to disks are orchestrated to work together to deliver the maximum possible I/O performance metrics.
Sizing an SC Series array for Oracle IOPS than larger-capacity, slower spinning drives. Since SSDs have no mechanical parts, and are best suited for random I/O, consider using SSDs for best performance in OLTP workloads. Data warehouses are designed to accommodate ad-hoc queries, OLAP, DSS, and ETL processing. Their workloads generally have large sequential reads. Storage solutions servicing workloads of this type are predominantly sized based on I/O bandwidth or throughput and not capacity or IOPS.
Dell recommends factoring in the RAID penalty when determining the number of disks in an SC Series array. If this is not considered, the SC Series array will be undersized.
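Continuing with illustrative numbers (a hypothetical 19,000 back-end IOPS requirement and an assumed 200 IOPS per 15K drive; use the vendor's figure for the actual drive model), ceiling division gives the minimum spindle count:

```shell
# Illustrative sizing only -- both inputs are assumptions.
BACKEND_IOPS=19000
IOPS_PER_DISK=200
# Ceiling division: (a + b - 1) / b
DISKS=$(( (BACKEND_IOPS + IOPS_PER_DISK - 1) / IOPS_PER_DISK ))
echo "disks required: $DISKS"
```

Note that this estimate covers IOPS only; the larger of the IOPS-driven and capacity-driven disk counts determines the actual configuration.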
Sizing an SC Series array for Oracle should be used with each volume owned by a different controller. The test will verify if all I/O paths are fully functional. If the resulting throughput matches the expected throughput for the components in the I/O path, the paths are set up correctly. Caution should be exercised should the test be run on a live system as the test could cause significant performance issues.
all paths may not be achievable, and therefore the test may not verify that all paths are functioning and yield the I/O potential of the array. Dell EMC recommends repeating this test and validating the process on the production server after go-live to establish a benchmark of initial performance metrics. Once a design can deliver the expected throughput requirement, additional disks can be added to the storage solution to meet capacity requirements.
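One common way to drive such a multi-volume sequential-read test is fio. The job file below is a hypothetical example: the device paths, block size, and runtime are assumptions to adapt, and it should only be pointed at idle test volumes, never at devices holding live data:

```ini
; seq-read.fio -- run with: fio seq-read.fio (as root)
; Sequential 1 MiB reads against two volumes, one per controller.
[global]
rw=read
bs=1M
direct=1
ioengine=libaio
runtime=60
time_based=1
group_reporting=1

[vol1]
filename=/dev/mapper/oraasm-data1

[vol2]
filename=/dev/mapper/oraasm-data2
```

Comparing the reported aggregate bandwidth with the expected throughput of the components in the I/O path shows whether all paths are configured correctly.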
Sizing an SC Series array for Oracle 5.8 Disk drives and RAID recommendations for Oracle Table 16 shows some permutations of disk drive types and RAID mixes for Oracle implementations. The information is provided only for illustration and not as a set of rules that govern the actual selection of disk drives and RAID for Oracle file types. Business needs will drive the selection and placement process.
SC Series array and Oracle storage management 6 SC Series array and Oracle storage management Two Oracle features to consider for managing disks are Oracle Managed Files and ASM, a feature introduced with Oracle 10g. Without these features, a database administrator must manage database files in native file systems or as raw devices. This could lead to managing hundreds or even thousands of files.
SC Series array and Oracle storage management For improved I/O bandwidth, create disk groups with an even number of LUNs having the same performance characteristics and capacity, with LUNs evenly distributed between the dual SC controllers. Under normal conditions, the LUNs should also belong to the same SC storage type, SC storage profile, and have the same SC volume characteristics.
SC Series array and Oracle storage management ASM requires its own dedicated instance for the purpose of managing disk groups. When Oracle RAC is deployed, an ASM instance must exist on each node in the RAC cluster. 6.1.1 ASM instance initialization parameter ASM_DISKSTRING An important ASM instance initialization parameter is ASM_DISKSTRING.
When using UDEV in a RAC environment, it is recommended that ASM_DISKSTRING be set to the location and names of the shared device on all nodes.
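For example, the parameter can be set from SQL*Plus on the ASM instance. The discovery path shown is an assumption matching the DM-Multipath alias convention used elsewhere in this document; adjust it to the site's shared device names, and confirm the same names exist on every node:

```sql
-- Illustrative only: run as SYSASM on the ASM instance.
-- SCOPE=BOTH requires an spfile; the glob must match the shared devices.
ALTER SYSTEM SET asm_diskstring = '/dev/mapper/oraasm-*' SCOPE=BOTH;
```

A discovery string that is too broad slows disk discovery and can surface non-ASM devices, so restricting it to the alias prefix is generally preferable to the default.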
Therefore, although it is not a requirement to use the same device name for shared block devices across all nodes of a cluster, it may be beneficial to use them. Some reasons to consider using consistent names are:
• Ensure the voting disk can be accessed on all nodes.
• It is possible to use a different ASM_DISKSTRING per node, but it is not a commonly used practice.

6.1.3
SC Series array and Oracle storage management # fdisk /dev/mapper/oraasm-data1 WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
If [deadline] appears in the output, the default I/O scheduler is deadline. If the I/O scheduler is not set to deadline, use UDEV to set it.
cat /etc/udev/rules.d/60-oracle-schedulers.rules
ACTION=="add|change", KERNEL=="dm*", ENV{DM_NAME}=="oraasm*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
Note: The rule begins with ACTION and occupies one line in the rule file.
ORA-01078: failure in processing system parameters
ORA-15081: failed to submit an I/O operation to a disk
ORA-27091: unable to queue I/O
ORA-17507: I/O request size 512 is not a multiple of logical block size
These issues can be mitigated by applying all necessary Oracle patches and configuring ASMLib as necessary, using advanced cookbooks to manually build the Oracle environment, or avoiding using ASMLib. See section 6.2 for more information.
SC Series array and Oracle storage management If manually editing /etc/sysconfig/oracleasm, make sure the link to /etc/sysconfig/oracleasm_dev_oracleasm is not broken. # ls -ltr /etc/sysconfig/oracleasm* lrwxrwxrwx. 1 root root 24 Jan 19 13:34 /etc/sysconfig/oracleasm -> oracleasm_dev_oracleasm -rw-r--r--. 1 root root 978 Mar 3 14:13 /etc/sysconfig/oracleasm-_dev_oracleasm After ASMLib is configured, ASMLib can be enabled and started: # /etc/init.
SC Series array and Oracle storage management 6.2.1 ASMLib and PowerPath After a partition is created on a PowerPath pseudo-device and the partition table updated, the partitioned pseudo-device is visible: # fdisk /dev/emcpowerb # partprobe /dev/emcpowerb # ls -ltr /dev | grep emcpowerb1 brw-rw---1 root disk 120, 97 Mar 9 10:58 emcpowerb1 To use the partitioned pseudo-device with ASMLib, use the following process: 1.
# grep emcpowerb1 /proc/partitions
120 97 52428784 emcpowerb1
ASM should also have access to all SCSI paths of the PowerPath partitioned pseudo-device:
# /etc/init.d/oracleasm querydisk -p DATA1
Disk "DATA1" is a valid ASM disk
/dev/sdc1: LABEL="DATA1" TYPE="oracleasm"
/dev/sdi1: LABEL="DATA1" TYPE="oracleasm"
/dev/sdac1: LABEL="DATA1" TYPE="oracleasm"
/dev/sdaf1: LABEL="DATA1" TYPE="oracleasm"
/dev/emcpowerb1: LABEL="DATA1" TYPE="oracleasm"
4.
SC Series array and Oracle storage management 2. Now use ASMLib to stamp the partitioned pseudo-device as an ASM disk: If the environment is a RAC environment, perform this step on only one of the nodes. # /etc/init.d/oracleasm createdisk data1 /dev/mapper/oraasm-data1p1 Marking disk "data1" as an ASM disk: [ OK ] Create ASM disks from partitioned DM-Multipath devices.
SC Series array and Oracle storage management 6.3 ASMFD configuration ASMFD is a feature available on Linux starting with Oracle 12c Release 1 (12.1.0.2.0) and is installed by default with Oracle Grid Infrastructure. ASMFD resides in the I/O path of the Oracle ASM disks. For ASMFD usage with Oracle 11g Release 2 and Oracle 12c Release 2, see information on My Oracle Support.
ASMFD records ASM_DISKSTRING and its value in /etc/afd.conf. ASMFD uses the value similarly to the ORACLEASM_SCANORDER directive of the ASMLib driver module: it instructs ASMFD to scan only the devices identified by the value.
# cat /etc/afd.conf
afd_diskstring='AFD:*'
When using multipath devices with ASMFD, ASMFD must be configured to use only pseudo-devices. For additional information on ASMFD, see My Oracle Support and appendix B.1.
5. To verify that the ASM disk is using the pseudo-device, compare the major and minor device numbers of the ASM disk to the major and minor device numbers of the pseudo-device. If the major and minor numbers match, the correct device is used.

# ls -ltr /dev | grep emcpowerg1
brw-rw---- 1 root disk 120, 97 Mar  9 14:42 emcpowerg1
# grep emcpowerg1 /proc/partitions
 120       97   52428784 emcpowerg1
5. To verify that the ASM disk is using the pseudo-device, compare the major and minor device numbers of the ASM disk to the major and minor device numbers of the pseudo-device. If the major and minor numbers match, the correct device is used.

# ls -ltr /dev/oracleafd/disks/DATA1
brw-rw---- 1 grid oinstall 252, 9 Apr 14 12:44 /dev/oracleafd/disks/DATA1
# ls -ltr /dev/mapper/oraasm-data1p1
lrwxrwxrwx 1 root root 7 Apr 14 12:43 /dev/mapper/oraasm-data1p1 -> ..
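The comparison above can also be scripted; a minimal sketch, assuming Linux coreutils stat (the two paths below are placeholders to be replaced with the ASM disk and the pseudo-device being verified):

```shell
# Compare major:minor numbers of two device paths.
# Placeholders: substitute e.g. /dev/oracleafd/disks/DATA1 and
# /dev/mapper/oraasm-data1p1 for the two paths.
dev_a=/dev/null
dev_b=/dev/null
a=$(stat -Lc '%t:%T' "$dev_a")   # -L follows symlinks; %t/%T print hex major/minor
b=$(stat -Lc '%t:%T' "$dev_b")
if [ "$a" = "$b" ]; then
  echo "match: $a"
else
  echo "mismatch: $a vs $b"
fi
```

If the two values match, the ASM disk and the pseudo-device refer to the same underlying block device.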
6.4.1 UDEV and PowerPath
This section provides UDEV examples for PowerPath pseudo-devices intended for ASM. For more information on using Dell EMC PowerPath with Oracle ASM, see Using Oracle Database 10g's Automatic Storage Management with EMC Storage Technology, a joint engineering white paper authored by Dell EMC and Oracle.
UDEV rules need to be added to the kernel and activated:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change

After the rule set becomes active, device persistence is set on the unpartitioned pseudo-devices:

# ls -ltr /dev | grep emcpower[bcfgk]
brw-rw---- 1 grid oinstall 120, 160 Apr ...
brw-rw---- 1 grid oinstall 120,  16 Apr ...
brw-rw---- 1 grid oinstall 120,  96 Apr ...
brw-rw---- 1 grid oinstall 120,  32 Apr ...
brw-rw---- 1 grid oinstall 120,  80 Apr ...
To change the disk discovery path in any of the appropriate Oracle GUI tools (such as runInstaller, config.sh, and asmca), select Change Discovery Path. Then, change the default value to the appropriate value for the environment.

6.4.1.2 UDEV with partitioned PowerPath pseudo-devices
The procedure for using partitioned pseudo-devices with UDEV is very similar to using unpartitioned pseudo-devices.
If a single UDEV rule is not granular enough for the environment, a set of UDEV rules can be constructed to identify each candidate pseudo-device targeted for ASM. There are a number of ways to accomplish this. The example shown in Figure 58 uses the UUID of the pseudo-devices.

# cat /etc/udev/rules.d/99-oracle-asm-devices.
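The contents of the rule file belong to Figure 58, which did not survive extraction here. As a minimal sketch only, one such per-device rule might look like the following; the UUID value and the scsi_id path are assumptions for illustration and must be replaced with values from the environment:

```
# Sketch: one rule per candidate pseudo-device; the RESULT UUID is a placeholder
KERNEL=="emcpowerb1", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/emcpowerb", \
  RESULT=="36000d3100000000000000000000000b1", \
  OWNER="grid", GROUP="oinstall", MODE="0660"
```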
After the rule set becomes active, device persistence is set on the desired partitioned pseudo-devices:

# ls -ltr /dev/ | grep emcpower
brw-rw---- 1 root disk     120, 160 Apr ...
brw-rw---- 1 root disk     120,  96 Apr ...
brw-rw---- 1 root disk     120,  64 Apr ...
brw-rw---- 1 root disk     120,  32 Apr ...
brw-rw---- 1 root disk     120,  16 Apr ...
brw-rw---- 1 grid oinstall 120,  65 Apr ...
brw-rw---- 1 grid oinstall 120,  97 Apr ...
brw-rw---- 1 grid oinstall 120, 161 Apr ...
brw-rw---- 1 grid oinstall 120,  33 Apr ...
brw-rw---- 1 grid oinstall ...
To change the disk discovery path in any of the Oracle GUI tools (such as runInstaller, config.sh, and asmca), select Change Discovery Path. Then change the default value to the appropriate values for the environment.

6.4.2 UDEV and DM-Multipath
This section provides UDEV examples for DM-Multipath pseudo-devices intended for ASM. The snippet from /etc/multipath.conf in Figure 46 is used for the examples in this section.
If a single UDEV rule is too generic for the environment, a UDEV rule can be constructed for each candidate pseudo-device targeted for ASM. The example shown in Figure 60 uses the UUID of the pseudo-devices shown in Figure 59.
After the rules are active, verify that the kernel device-mapper devices (dm-) for the DM-Multipath pseudo-device aliases have the owner, group, and privileges defined appropriately:

[root]# ls -ltr /dev | grep 'dm-[01346]$'
brw-rw---- 1 grid oinstall 252, 1 May 16 12:49 dm-1
brw-rw---- 1 grid oinstall 252, 0 May 16 12:49 dm-0
brw-rw---- 1 grid oinstall 252, 3 May 16 12:49 dm-3
brw-rw---- 1 grid oinstall 252, 4 May 16 12:49 dm-4
brw-rw---- 1 grid oinstall 252, 6 May 16 12:49 dm-6
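An alternative to per-device rules is a single rule keyed on the multipath alias prefix; a hedged sketch, assuming the oraasm- prefix from Figure 46 (the rule file name is illustrative):

```
# /etc/udev/rules.d/12-dm-permissions.rules (name illustrative)
# Match device-mapper devices whose multipath alias begins with "oraasm-"
ENV{DM_NAME}=="oraasm-*", OWNER="grid", GROUP="oinstall", MODE="0660"
```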
6.4.2.2 UDEV with partitioned DM-Multipath pseudo-devices
Assuming a prefix and suffix were used when naming the DM-Multipath device alias (see Figure 46), create a single primary partition on the DM-Multipath device and update the Linux partition table with the new partition:

# fdisk /dev/mapper/oraasm-data1
# partprobe /dev/mapper/oraasm-data1
# /etc/init.
# cat /etc/udev/rules.d/99-oracle-asm-devices.
After the rules are active, verify that the kernel device-mapper devices (dm-) for the DM-Multipath pseudo-device aliases have the owner, group, and privileges defined appropriately:

# ls -ltr /dev/mapper/oraasm*p1
lrwxrwxrwx 1 root root 8 Apr 14 15:17 ...
lrwxrwxrwx 1 root root 8 Apr 14 15:17 ...
lrwxrwxrwx 1 root root 8 Apr 14 15:17 ...
lrwxrwxrwx 1 root root 7 Apr 14 15:17 ...
lrwxrwxrwx 1 root root 7 Apr 14 15:17 ...
# ls -ltr /dev
brw-rw---- ...
brw-rw---- ...
brw-rw---- ...
brw-rw---- ...
brw-rw---- ...
To change the disk discovery path in any of the Oracle GUI tools (such as runInstaller, config.sh, and asmca), select Change Discovery Path. Then change the default value to the appropriate value for the environment.
ASM provides three different types of redundancy that can be used within each ASM disk group:

Normal redundancy: Provides two-way mirroring in ASM and requires two ASM failure groups. This is the default.

High redundancy: Provides three-way mirroring in ASM and requires three ASM failure groups.

External redundancy: Provides no mirroring in ASM.
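As a sketch, the normal and external redundancy cases might be created as follows in SQL*Plus; the disk paths, disk group names, and failure group names are illustrative assumptions patterned after the oraasm- alias examples used elsewhere in this document:

```sql
-- Normal redundancy: two-way mirroring across two failure groups (paths illustrative)
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/mapper/oraasm-data1p1'
  FAILGROUP fg2 DISK '/dev/mapper/oraasm-data2p1';

-- External redundancy: no ASM mirroring; protection is left to the SC Series RAID
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/oraasm-fra1p1';
```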
Dell EMC recommends that a disk group that contains database data or indexes be created with an even number of ASM disks with the same capacity and performance characteristics, and where each ASM disk within the same disk group is active on a different controller. This allows both controllers to participate in servicing the I/O requests from the originating ASM disk group, and it allows ASM to stripe.
6.5.2 ASM diskgroups with ASMLib and pseudo devices
When using ASMLib with pseudo-devices from either EMC PowerPath or native DM-Multipath, disk groups should reference ASMLib-managed disks (Figure 67) rather than pseudo-devices:

$ /etc/init.
SQL> select GROUP_NUMBER, DISK_NUMBER, … from v$asm_disk;

Grp Dsk Mount    Mode
Num Num Status   Status   Name    FailGrp  Label   Path
--- --- -------- -------- ------- -------- ------- ----------
  0   2 CLOSED   ONLINE                    FRA1    ORCL:FRA1
  0   3 CLOSED   ONLINE                    FRA2    ORCL:FRA2
  1   0 CACHED   ONLINE   CRS1    CRS1     CRS1    ORCL:CRS1
  2   0 CACHED   ONLINE   DATA1   DATA1    DATA1   ORCL:DATA1
  2   1 CACHED   ONLINE   DATA2   DATA2    DATA2   ORCL:DATA2

Disk group 2 and 0 were created with a
If using asmca to create ASM disk groups, select the check boxes associated with the appropriate disks for the disk group:
Then, when creating the disk group, if the disk string (asmcmd: dsk string; SQL*Plus: DISK clause) identifies a subset of the discovered disks, that subset of disks will be added to the disk group. The following will add all disks having a name starting with DATA from the discovery path to the disk group.

$ cat dg_data_config-ASMLib-disks.xml

SQL> select GROUP_NUMBER, DISK_NUMBER, … from v$asm_disk;

Grp Dsk Mount    Mode
Num Num Status   Status   Name    FailGrp  Label   Path
--- --- -------- -------- ------- -------- ------- ----------
  0   2 CLOSED   ONLINE                    FRA1    AFD:FRA1
  0   3 CLOSED   ONLINE                    FRA2    AFD:FRA2
  1   0 CACHED   ONLINE   CRS1    CRS1     CRS1    AFD:CRS1
  2   0 CACHED   ONLINE   DATA1   DATA1    DATA1   AFD:DATA1
  2   1 CACHED   ONLINE   DATA2   DATA2    DATA2   AFD:DATA2

Disk group 2 and 0 were created with a subset of d
If using asmca to create ASM disk groups, select the check boxes associated with the appropriate disks for the disk group:

6.5.4 ASM diskgroups with PowerPath and UDEV
This section refers to partitioned pseudo-devices. If unpartitioned devices are used, simply remove any reference to the partition-indicator value from the remainder of this section.
When identifying ASM disks during the creation of an ASM disk group, specify a disk string that identifies a subset of the disks returned by the ASM disk discovery process. Any disk that matches the specified search string will be added to the disk group. This can lead to ASM disks not having the same order in the ASM disk group as indicated by the SC volume name.
If SC Series volume names have an ordering that needs to be maintained within the disk group, Dell EMC recommends the disk string uniquely identify each pseudo-device rather than a subset of pseudo-devices, and then name each disk separately to ensure the correct disks are used. Figure 79 shows how to resolve the ordering issue:

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/emcpowerb1' NAME DATA_0000
     , '/dev/emcpowerc1' NAME DATA_0001
  ATTRIBUTE 'compatible.
6.5.5 ASM diskgroups with DM-Multipath and UDEV
This section refers to partitioned logical DM-Multipath aliased devices. If unpartitioned devices are used, simply remove any reference to the partition-indicator value from the remainder of this section. The ASM instance initialization parameter ASM_DISKSTRING is used by ASM to discover all candidate ASM disks that could be added to a disk group.
When identifying ASM disks during the creation of an ASM disk group, specify a disk string that identifies a subset of the disks returned by the ASM disk discovery process. Any disk that matches the specified search string will be added to the disk group. This can lead to ASM disks not having the same order in the ASM disk group as indicated by the SC volume name.
If SC Series volume names have an ordering that needs to be maintained within the disk group, Dell EMC recommends the disk string uniquely identify each pseudo-device rather than a subset of pseudo-devices, and then name each disk separately to ensure the correct disks are used.
When identifying the ASM disks during the creation of a disk group in ASM, use the absolute path of the partitioned pseudo-device name that represents the single logical path of the ASM partitioned device: /dev/mapper/

Do not use the /dev/sd or /dev/dm- names that make up the logical device, because dm and sd device names are not consistent across reboots or across servers in a RAC cluster.
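Following that guidance, a hedged sketch of a disk group created from DM-Multipath aliases (the alias names and the compatible.asm value are assumptions for illustration, patterned after Figure 79):

```sql
-- Each disk is identified by its /dev/mapper alias, never by /dev/sd* or /dev/dm-*
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/oraasm-data1p1' NAME DATA_0000
     , '/dev/mapper/oraasm-data2p1' NAME DATA_0001
  ATTRIBUTE 'compatible.asm' = '12.1';
```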
At this point, DSM displays the amount of space allocated to the SC Series volume equal to the sum of the size specified in the CREATE TABLESPACE command and any additional space needed for the RAID level of the storage tier. Since the tablespace is defined to be 10 GB and RAID 10 redundancy is used, a total of 20 GB of physical disk space is allocated from the SC Series pagepool to this volume (see Disk Usage in Figure 83).
Released ASM space is not deallocated without calling ASRU.

To instruct SC Series arrays to release the space, ASRU must be called to write zeros to the space once occupied by tablespace TESTTS.

$ /u01/app/11.2.0/grid/perl/bin/perl ./ASRU.pl TESTTS
Checking the system ...done
Calculating the sizes of the disks ...done
Writing the data to a file ...done
Resizing the disks ...done
Calculating the sizes of the disks ...done
/u01/app/11.2.
If it is necessary to deallocate thinly provisioned storage, Oracle recommends using ASMFD or ASRU, with a preference towards ASMFD, provided their usage has been evaluated and any side effects from using them are understood. For information on installing ASRU, and on its configuration, issues, or usage exceptions, see the information on the My Oracle Support site.
6.10 Cooked filesystems with spinning media
When using cooked file systems, consider the following file placements as a starting point.

Storage profile: High Priority (Tier 1)

Tier  Drive type  Writeable data  Snapshot data  Progression
1     FC/SAS 15K  RAID 10         RAID 5, 6      N/A

Storage profile: Recommended (all tiers)
Tier  Drive type           Writeable data  Snapshot data  Progression
1     SSD write-intensive  RAID 10         N/A            N/A
2     SSD read-intensive   N/A             RAID 5, 6      N/A
3     FC/SAS 15K           N/A             RAID 5, 6      RAID 5, 6

6.12 Direct I/O and async I/O
Oracle recommends using both direct I/O and async I/O. For information on enabling direct and async I/O in the Oracle environment, refer to Oracle documentation.

6.13 Raw devices and Oracle
Starting with Oracle 11g, Oracle began the process of desupporting raw storage devices.
7 Conclusion
SC Series arrays provide a cost-effective storage solution regardless of the type of media used within the array. This becomes more apparent with SC Series arrays using flash, which can outperform high-performance arrays of 15K spinning disks for the same cost. When adding Oracle ASM to the configuration, implementing and maintaining an Oracle deployment is even easier.
A Oracle performance testing with ORION
The ORION (ORacle I/O Numbers) calibration tool is similar to the Iometer tool developed by Intel® Corporation and to dd on Linux. Like Iometer and dd, ORION runs I/O performance tests; however, it is tailored to measure Oracle RDBMS workloads using the same software stack that Oracle uses. It is not required that Oracle or even a database be installed in order to use ORION.
Note: Tests that include a write component can destroy data. Never add a volume that contains important data to the source LUN file (iotest.lun). Rather, create new volumes for testing purposes. Check to be sure Oracle software has access to all the test volumes listed in iotest.lun. If proper access is not granted, ORION will fail. Since the ORION test is dependent on async I/O, ensure the platform is capable of async I/O.
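A hedged sketch of building the LUN list file follows; the device names are placeholders, and only dedicated test volumes should ever be listed:

```shell
# Build the ORION LUN list file, one test volume per line.
# The device names below are placeholders: never list volumes holding real data,
# since write tests destroy their contents.
cat > iotest.lun <<'EOF'
/dev/emcpowerb1
/dev/emcpowerc1
EOF

# Confirm every listed volume is present in the file
wc -l < iotest.lun
```

Verify that the Oracle software owner can open each listed device before starting the test.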
When the test is complete, the results will exist in the test output files listed in Table 19.

ORION output files
File                  Description
iotest_summary.txt    Recap of the input parameters plus latency, MBPS, and IOPS results
iotest_mbps.csv       Performance results for large I/Os, in MBPS
iotest_iops.csv       Performance results for small I/Os, in IOPS
iotest_lat.csv        Latency of small I/Os
iotest_tradeoff.csv   Large MBPS / small IOPS combinations possible with minimal latency
iotest_trace.
Parameter    Description
-write       Percentage of writes (default = 0); a typical percentage would be 'write 20'.
             Note: Write tests destroy data on volumes.
-cache_size  Size in MB of an array cache.
An elaborate custom workload test setting in ORION is shown in this example:

orion -run advanced -testname iotest -num_disks 5 \
      -simulate raid0 -stripe 1024 -write 20 -type seq \
      -matrix row -num_large 0

To force ORION (12c) to write non-zeros, use the following steps:

base64 /dev/urandom | head -c 100000000 > file.txt
nohup /u01/app/oracle/product/11.2.
B Technical support and resources
Dell.com/support is focused on meeting customer needs with proven services and support. Storage technical documents and videos provide expertise that helps to ensure customer success on Dell EMC storage platforms.
Other recommended publications (with My Oracle Support Doc IDs):

• Oracle Support Doc 1077784.1: Can I create an 11.2 disk over the 2 TB limit
• Lun Size And Performance Impact With Asm (Doc ID 373242.1)
• Oracle Support Doc 1601759.1: Oracle Linux 5 — Filesystem & I/O Type Supportability
• Oracle Database — Filesystem & I/O Type Supportability on Oracle Linux 6
• Oracle Support Doc 1487957.