After the Oracle database had been running on the SUT for a period of time with the default HP-UX kernel
setting dbc_max_pct=50%, some variability appeared in the TPM results.
To remove this, the SUT was rebooted between test runs, which eliminated the variation between
measurements, and the kernel parameter dbc_max_pct was reduced to 5% so that the HP-UX buffer cache
could not grow large enough to put memory pressure on Oracle and skew the results.
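As a rough illustration of how such a setting can be verified before each run (this check is an assumption,
not part of the original test harness), the tunable can be queried through the HP-UX kctune interface,
for example:

    import subprocess

    # Hypothetical pre-run check: confirm that the HP-UX buffer-cache limit is
    # still at the reduced value before starting a measurement.  On HP-UX 11i
    # the tunable can be queried with kctune (kmtune on older releases).
    EXPECTED_DBC_MAX_PCT = "5"

    def dbc_max_pct_ok() -> bool:
        out = subprocess.run(["kctune", "dbc_max_pct"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("dbc_max_pct"):
                # Assumes the current value is the second field of that line.
                return line.split()[1] == EXPECTED_DBC_MAX_PCT
        return False

    if __name__ == "__main__":
        if not dbc_max_pct_ok():
            raise SystemExit("dbc_max_pct is not 5 - retune and reboot before the run")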
Measurements were taken throughout each test run using Oracle STATSPACK and HP tools to collect
statistics. The following statistics are of particular interest:
• TPM, which measures the throughput efficiency of the system.
• CPU utilization, to see how the system behaves and how much idle time remains under a given
workload. The objective is to run a workload that leaves the lowest possible percentage of unused
CPU time, which is the sum of % wait I/O and % idle (see the sketch after this list).
• I/O rate, which tracks TPM.
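To make this definition concrete, the sketch below (Python, illustrative only) computes the unused CPU
percentage from a sar-style u/s/w/i breakdown; the two sample values are taken from runs 514 and 518 in
Figure B-2.

    # Illustration of the definition above: the CPU headroom left under a
    # workload is the sum of the %wio and %idle components of a u/s/w/i
    # (usr/sys/wio/idle) breakdown.
    def unused_cpu(uswi: str) -> int:
        """Return %wio + %idle from a 'usr/sys/wio/idle' percentage string."""
        usr, sys_, wio, idle = (int(x) for x in uswi.split("/"))
        return wio + idle

    print(unused_cpu("25/5/0/70"))   # run 514, 20 clients  -> 70
    print(unused_cpu("82/14/4/0"))   # run 518, 100 clients -> 4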
Each measurement was run twice to ensure consistency in the results. To collate results, another
program was written to extract the measured statistics from each run and tabulate them in a set of
HTML pages. A sample of the output generated for a series of runs is shown in Figure B-2.
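The collator program itself is not reproduced in this paper; the following Python sketch only illustrates
the idea, using a hypothetical run-directory layout and a hypothetical field pattern, and shows how a
statistic could be pulled from each run's result files and tabulated as HTML:

    import re
    from pathlib import Path

    # Sketch of a results collator: the directory layout ("<run>/qualify.out")
    # and the tpmC pattern below are assumptions, not the formats used in the
    # actual tests.
    RUNS = ["514", "515", "516", "517", "518"]        # run numbers as in Figure B-2
    TPM_RE = re.compile(r"tpmC\s*[:=]\s*([\d.]+)")

    def collect(run: str) -> dict:
        text = Path(run, "qualify.out").read_text()
        match = TPM_RE.search(text)
        return {"run": run, "tpmC": match.group(1) if match else "n/a"}

    def to_html(rows: list) -> str:
        runs = "".join(f"<th>{r['run']}</th>" for r in rows)
        tpms = "".join(f"<td>{r['tpmC']}</td>" for r in rows)
        return (f"<table><tr><th>RUN #</th>{runs}</tr>"
                f"<tr><th>tpmC</th>{tpms}</tr></table>")

    if __name__ == "__main__":
        Path("results.html").write_text(to_html([collect(r) for r in RUNS]))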
Figure B-2 Sample output from results collator program
Oracle 9.2.0.7 / Raw / LVM

RUN #                                              514             515             516             517             518
Results Files                              qualify.out     qualify.out     qualify.out     qualify.out     qualify.out
                                             TPCC.GORA       TPCC.GORA       TPCC.GORA       TPCC.GORA       TPCC.GORA
                                              TPCC.SAR        TPCC.SAR        TPCC.SAR        TPCC.SAR        TPCC.SAR
Clients                                             20              40              60              80             100
tpmC                                          23661.57        36859.87        45371.00        48693.77        49382.60
CPU Utilization (u/s/w/i)%                   25/5/0/70       50/9/0/42      69/12/19/0       79/13/8/0       82/14/4/0
Physical Reads/s                              3,060.61        4,644.49        5,782.50        6,175.93        6,353.81
Physical Writes/s                             3,163.16        4,973.00        6,573.13        6,954.93        6,968.91
Physical I/O Rate                                 6223            9617           12355           13129           13321
Log File Sync Avg Wait (ms)                          1               3               4               7              11
db file sequential read Avg Wait (ms)                5               7               8               9              10
Log File Parallel Write Avg Wait (ms)                0               1               1               2               2

Response Times (s): 90th Percentile / Average / Maximum
New Order                               0.06/0.04/0.30  0.08/0.04/0.33  0.09/0.05/0.38  0.10/0.06/0.60  0.12/0.08/3.25
Payment                                 0.02/0.01/0.25  0.03/0.01/0.29  0.03/0.02/0.30  0.04/0.02/0.41  0.05/0.03/1.80
Order-Status                            0.04/0.02/0.27  0.05/0.03/0.27  0.06/0.03/0.28  0.06/0.04/0.30  0.07/0.05/0.31
Delivery                                0.05/0.03/0.24  0.06/0.04/0.37  0.08/0.05/0.51  0.09/0.06/0.40  0.11/0.08/0.60
Stock-Level                             0.09/0.03/0.43  0.10/0.03/0.56  0.12/0.04/0.58  0.14/0.04/0.73  0.17/0.06/1.08
Deferred-Delivery                       0.05/0.03/0.24  0.06/0.04/0.37  0.08/0.05/0.35  0.09/0.06/0.40  0.11/0.08/0.60
For example, run 514 used a client workload of 20 with the raw/LVM I/O configuration, resulting
in a measured average TPM of 23,662 and an average I/O rate of 6,223 I/Os per second.
After performing these experiments, it was possible to graph the data to compare the five I/O
subsystem configurations.
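As an illustration of that step (not the actual graphs used for the comparison), the sketch below plots
tpmC against client count for the raw/LVM data in Figure B-2; comparing the five configurations would
simply add one curve per configuration.

    import matplotlib.pyplot as plt

    # tpmC versus client count for Oracle 9.2.0.7 / Raw / LVM (Figure B-2).
    clients = [20, 40, 60, 80, 100]
    tpmc_raw_lvm = [23661.57, 36859.87, 45371.00, 48693.77, 49382.60]

    plt.plot(clients, tpmc_raw_lvm, marker="o", label="Oracle 9.2.0.7 / Raw / LVM")
    plt.xlabel("Clients")
    plt.ylabel("tpmC")
    plt.legend()
    plt.savefig("tpm_vs_clients.png")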