Oracle/HP Best Practices Guide for HP IO Accelerators
Part Number 647093-001
December 2010 (First Edition)
© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents

Overview
  Introduction
  Single-instance and RAC overview
  Using IO Accelerators with Oracle
Acronyms and abbreviations
Overview

Introduction

This document explains best practices for integrating HP IO Accelerator technology in Oracle environments.
For more information on using flash cache with Oracle databases, see "Using the IO Accelerator with flash cache (on page 22)."

Setting up filesystems

Oracle can host its data files on any standard filesystem built on IO Accelerators. Typical configurations aggregate multiple drives using RAID or LVM for performance or redundancy. See the IO Accelerator User Guide for your operating system for additional recommendations on filesystems, RAID setup, and filesystem tuning.
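As a minimal sketch, assuming a single IO Accelerator presented as /dev/fioa and an ext3 filesystem (both assumptions; substitute your own device name and filesystem choice):

# Create the filesystem (this destroys any existing data on the device)
mkfs.ext3 /dev/fioa
# Mount it where the Oracle data files will live
mkdir -p /u02/oradata
mount /dev/fioa /u02/oradata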
Redundancy architectures

Redundancy best practices

When a failover system is not available to provide failure protection, all storage devices that are not highly available, including IO Accelerators, must be mirrored to provide redundancy. Local redundancy is less critical for designs that include both off-server replication and a hot standby failover server ready to take over; however, local redundancy might still be used in some environments.
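Outside of ASM, a local mirror can also be built with Linux software RAID; a sketch assuming two IO Accelerators, /dev/fioa and /dev/fiob (hypothetical device names; the command destroys any data on both):

# Mirror the two devices into a single RAID-1 block device
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/fioa /dev/fiob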
• /dev/fiob to /dev/fiod
• /dev/fioe to /dev/fiog
• /dev/fiof to /dev/fioh

The following command mirrors four devices (fioa through fiod) to create the +DATADG disk group.

ASM SQL> CREATE DISKGROUP DATADG NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/dev/fioa' NAME diska1,
'/dev/fiob' NAME diska2,
FAILGROUP failure_group_2 DISK
'/dev/fioc' NAME diskb1,
'/dev/fiod' NAME diskb2;

The following command mirrors across four devices (/dev/fio[e,f,g,h]) to create the LOGDG disk group.
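The statement follows the same pattern as the DATADG example; a sketch with NORMAL redundancy across the four devices (disk and failure-group names are illustrative):

ASM SQL> CREATE DISKGROUP LOGDG NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/dev/fioe' NAME diska1,
'/dev/fiof' NAME diska2,
FAILGROUP failure_group_2 DISK
'/dev/fiog' NAME diskb1,
'/dev/fioh' NAME diskb2;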
This command mirrors across three devices (/dev/fio[g,h,i]) to create the LOGDG disk group with HIGH redundancy.

ASM SQL> CREATE DISKGROUP LOGDG HIGH REDUNDANCY
FAILGROUP failure_group_1 DISK
'/dev/fiog' NAME diska1,
FAILGROUP failure_group_2 DISK
'/dev/fioh' NAME diskb1,
FAILGROUP failure_group_3 DISK
'/dev/fioi' NAME diskc1;

ASM mirroring with a read-preferred device

ASM enables you to select a device in a mirrored disk group that services all of the reads, while writes continue to go to every device in the disk group.
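The read preference is set on the ASM instance with the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter; a sketch assuming the LOGDG disk group above (the failure-group name is illustrative):

ASM SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'LOGDG.failure_group_1';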
Administrators can use either manual or automatic failover to a Data Guard standby database to maintain high availability for mission-critical applications. Oracle Data Guard provides a primary/secondary solution (only one host can read and update the database at a time) rather than the primary/primary solution provided by RAC (each server can read and update simultaneously).
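Redo transport to the standby is configured through the LOG_ARCHIVE_DEST_n initialization parameters; a minimal sketch, assuming a standby reachable through an Oracle Net alias named standby_db (an illustrative name):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';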
If you change the current archival process of a destination (for example, from the ARCn process to the LGWR process), archival processing does not change until the next log switch occurs.

Data Guard recommended reading

For more information on Oracle 11g Data Guard Concepts and Administration, see the following references:
• Additional information on Data Guard (http://download.oracle.com/docs/cd/E14072_01/server.112/e10700/title.htm)
• Data Guard Overview (http://www.oracle.
Generic performance tuning

Write performance and steady state

Solid state storage solutions often have poor write performance. This condition is caused by the characteristics of the underlying storage media, NAND flash. As more data is written to a solid state device, the device must defragment previously written data to continue operating. This process is called grooming.
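Because of grooming, benchmark numbers taken on a freshly formatted device can overstate long-term write performance. One common approach is to precondition the device by writing it end to end before measuring; a sketch assuming a scratch device /dev/fiob (a hypothetical name; the command destroys all data on it):

# Sequentially fill the device so later measurements reflect steady-state behavior
# WARNING: overwrites everything on /dev/fiob
dd if=/dev/zero of=/dev/fiob bs=1M oflag=direct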
Discovering your application I/O profile and selecting an architecture

Using Oracle tools

Understanding how your database uses its storage resources is the first step in selecting an appropriate architecture and applying any needed performance tuning. You can identify where the I/O bottlenecks are in your system, and then choose from a list of recommended architectures to help eliminate them.

Oracle Enterprise Manager

Determining what causes an I/O bottleneck in Oracle can be challenging.
5. Specify a beginning and ending snapshot ID for the workload repository report. For example, enter 150 for begin_snap and 160 for end_snap.
6. Enter a report name, or accept the default report name. For example, enter awrrpt_1_150_160 (the default) for report_name. If a text report was specified, the report is generated as awrrpt_1_150_160.txt.

Simple analysis of an AWR report

Even if no I/O bottleneck is found on your system, you can still improve performance by reducing I/O times.
When the top wait events in the AWR report are db file sequential read or db file scattered read, this indicates I/O waits. Also important with these wait events is how long they take, on average. A sequential I/O (db file sequential read) typically takes 1-5 milliseconds. A random read (db file scattered read) can take 10-15 milliseconds.
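The same wait events can also be checked directly from SQL*Plus without generating a full AWR report; a sketch (the division yields the average wait in milliseconds):

SQL> SELECT event, total_waits,
  time_waited_micro / GREATEST(total_waits, 1) / 1000 AS avg_wait_ms
  FROM v$system_event
  WHERE event IN ('db file sequential read', 'db file scattered read');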
Single-instance performance architectures

Hosting all data areas on the IO Accelerator

This configuration is likely to be the highest performing of all the proposed solutions. It is also likely to be the simplest to deploy, depending on your data protection needs. To create this configuration:

1. Place a filesystem on the IO Accelerator, or add the IO Accelerator devices to ASM in a configuration that includes the appropriate level of protection for your data.
chmod 775 /u02/oradata

2. Log on as the oracle user:
su - oracle

3. Create the subdirectory for the TESTDB database under /u02/oradata:
mkdir -p /u02/oradata/TESTDB

The IO Accelerator is configured as an EXT3 filesystem and is ready for use by the Oracle TESTDB database.

4. As the oracle user, move the TEMP tablespace to the IO Accelerator.

Migrating the TEMP tablespace
The TEMP tablespace cannot be moved during the mount stage by using the ALTER DATABASE RENAME FILE command. A workaround to this issue is to create a new temp tablespace:

1. Create a new temporary tablespace called TEMP2 (the tempfile path shown is an example; adjust it to your environment):
CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u02/oradata/TESTDB/temp2_01.dbf' SIZE 10240M AUTOEXTEND ON;

2. Assign the default TEMP tablespace for the database to the TEMP2 tablespace:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

3. Drop the old TEMP tablespace:
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
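To confirm that the database is now using the new tempfile on the IO Accelerator, query dba_temp_files (a quick sketch):

SQL> SELECT tablespace_name, file_name, bytes/1048576 AS size_mb FROM dba_temp_files;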
3. Create the subdirectory for the TESTDB database in the /u02/oradata directory:
mkdir -p /u02/oradata/TESTDB

The IO Accelerator is now configured as a filesystem and ready for use by the Oracle REDO log storage.

4. As the oracle user, move the REDO log members to the IO Accelerator.

5. Log on to sqlplus as sysdba:
export ORACLE_SID=TESTDB
sqlplus / as sysdba

The datafiles that make up the existing REDO logs cannot be renamed or relocated while the database is open.
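Before running the relocation script, the current groups and members can be listed so you know what is being replaced (a sketch):

SQL> SELECT group#, member FROM v$logfile ORDER BY group#;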
    execute immediate stmt;
  else
    stmt := 'alter database add logfile thread ' || rlcRec.thr ||
      ' ''/u02/oradata/TESTDB/redo0' || rlcRec.grp || '.log'' size ' ||
      rlcRec.bytes_k || 'K';
    execute immediate stmt;
    begin
      stmt := 'alter database drop logfile group ' || rlcRec.grp;
      dbms_output.put_line(stmt);
1. Log on to the TESTDB database as sysdba:
export ORACLE_SID=TESTDB
sqlplus / as sysdba

NOTE: The datafiles that make up the existing REDO logs cannot be renamed or relocated while the database is open. Instead, you must create new REDO logs in the new location, and then drop the old REDO logs.

To relocate the REDO logs, create new REDO log groups using the new IO Accelerator diskgroup location, and then drop the old REDO log groups.

2.
    end if;
  end loop;
end;
/

When the script completes, the database REDO logs reside in the IO Accelerator location. Because ASM is used, the old REDO log datafiles are automatically deleted when the corresponding REDO log group is dropped.

3. Use the following SQL query to display the new location of the REDO logs:

select a.group# "Group Nbr",
  b.thread# "Thread Nbr",
  substr(a.member,1,65) "Member",
  b.bytes/1048576 "Size (MB)",
  b.sequence# "Seq Nbr",
  b.archived "Archived",
  b.status "Status"
from v$logfile a, v$log b
where a.group# = b.group#;
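If dropping one of the old groups fails because it is still CURRENT or ACTIVE, forcing a log switch and a checkpoint normally lets the group go INACTIVE so the drop can be retried:

SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;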
Setting up flash cache

Database Smart Flash Cache is an optional memory component in Oracle 11g Release 2 that you can add if your database is running on Solaris or on Oracle Enterprise Linux. It extends the SGA-resident buffer cache by providing a second-level cache for database blocks, with the SGA buffer cache acting as level 1. It can improve response time and overall throughput.
Size the flash cache between two times and 10 times the size of SGA_TARGET. Using 80% of the size of SGA_TARGET instead of the full size is sufficient for this calculation.

Tuning memory for the flash cache

For each database block moved from the buffer cache to the flash cache, a small amount of metadata about the block is kept in the buffer cache. For a single-instance database, the metadata consumes approximately 100 bytes per block. For an Oracle RAC database, it is closer to 200 bytes.
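The cache itself is enabled through two initialization parameters, db_flash_cache_file and db_flash_cache_size; a minimal sketch, assuming the IO Accelerator is exposed as /dev/fioa and a 120 GB cache (both illustrative values):

db_flash_cache_file='/dev/fioa'
db_flash_cache_size=120G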
db_flash_cache_size=xxG (this value depends on the size of the SSD device)

3. Stop and then restart the Oracle database.

To use flash cache, create or update schema objects with the flash_cache storage clause specified. For example:

SQL> create table TB_TEST01 storage (flash_cache keep) as select * from all_objects;

Table created.
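To confirm that the cache is active after the restart, check the parameters and the flash cache statistics from SQL*Plus (a sketch):

SQL> show parameter db_flash_cache
SQL> select name, value from v$sysstat where name like '%flash cache%';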
Oracle RAC performance architectures

SRP and iSER targets

SRP and iSER are RDMA-capable block protocols that can be carried over RDMA-capable networks such as InfiniBand. IO Accelerators are a good match for use with SRP/iSER because of the high performance they can deliver over the wire. By setting up two or three separate SRP or iSER target servers and using ASM's built-in mirroring capabilities (NORMAL or HIGH redundancy), you can build a fault-tolerant, high-performance RAC cluster.
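As a sketch of the ASM layer of such a cluster, assume each target server exports one IO Accelerator that appears on the RAC nodes as /dev/sdb and /dev/sdc (hypothetical device names). Placing each target in its own failure group keeps the mirrored copies on separate servers:

ASM SQL> CREATE DISKGROUP RACDATA NORMAL REDUNDANCY
FAILGROUP target_server_1 DISK '/dev/sdb' NAME tgt1disk1
FAILGROUP target_server_2 DISK '/dev/sdc' NAME tgt2disk1;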
Acronyms and abbreviations

ADDM
automatic database diagnostic monitor

ASM
Automatic Storage Management

AWR
automatic workload repository

CPU
central processing unit

DBWR
database writer

EM
Enterprise Manager

HTML
hypertext markup language

I/O
input/output

iSCSI
Internet Small Computer System Interface

iSER
iSCSI Extensions for RDMA

LGWR
log writer

LRU
least recently used

LVM
Logical Volume Manager

OEL
Oracle Enterprise Linux

OLTP
online transaction processing

OMF
Oracle Managed Files

RAC
Real Application Clusters

RAID
redundant array of inexpensive (or independent) disks

RAM
random access memory

RDMA
Remote Direct Memory Access

SCSI
Small Computer System Interface

SGA
system global area

SRP
SCSI RDMA Protocol