Running Oracle OLTP workloads in HP Integrity VM 4.3

Oracle OLTP testing methodology
(Used during this performance characterization)
For this effort, we ran a series of Oracle workloads inside an Integrity VM guest. We then booted the
Integrity BL860c i2 Server Blade from the VM's boot disk and ran the same Oracle workloads natively on the blade.
The differences in Oracle transactions per second (TPS) and CPU utilization between the VM and the native blade
were compared to arrive at a rough estimate of the virtualization overhead associated with a given workload. The
TPS numbers were taken from the Automatic Workload Repository (AWR) reports collected during each workload run.
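The overhead estimate described above amounts to simple arithmetic on the two TPS figures. As a sketch (the TPS values below are hypothetical placeholders, not measured results from this paper):

```shell
# Hypothetical TPS figures taken from AWR reports (placeholders, not real data)
native_tps=1000
vm_tps=950

# Virtualization overhead expressed as a percentage of native throughput
overhead=$(awk -v n="$native_tps" -v v="$vm_tps" \
    'BEGIN { printf "%.1f", (n - v) / n * 100 }')
echo "Estimated virtualization overhead: ${overhead}%"
```

With these placeholder numbers the estimate works out to 5.0%.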
Generation of different-sized “clean” databases
An empty database instance is created using the Database Configuration Assistant (DBCA). DBCA creates the default
datafiles, the redo log groups, the archive logs, and other miscellaneous files. The Datagenerator utility (part of the
Swingbench suite) is used to populate the dataset based on a specified scaling factor.
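The two steps might look like the following sketch, assuming typical DBCA silent-mode flags and a generic Datagenerator invocation (the exact Datagenerator options vary by Swingbench version, so the flags shown are illustrative):

```shell
# Create an empty instance with DBCA in silent mode (flags assumed typical)
dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbName soe -sid soe

# Populate the schema with Datagenerator; the configuration file name and
# scale option are illustrative -- consult the Swingbench documentation
datagenerator -c soe_config.xml -scale 100
```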
As stated previously, the scaling factor used by Datagenerator was adjusted throughout this benchmarking effort to
create different-sized datasets. By using datasets of varying sizes, we can control the amount of physical disk I/O
performed during the Swingbench run. In other words, smaller datasets initially require physical disk I/O as the
dataset is accessed and cached in the Oracle SGA, but require less physical disk I/O as the run progresses since
many of the data block requests are satisfied from the SGA. Larger datasets, especially those significantly larger than
the specified database memory target (40 GB in this benchmark), require more physical disk I/O throughout the run
because a smaller percentage of the overall dataset can fit in the SGA.
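The effect of dataset size on caching can be illustrated with a quick calculation; the 40 GB memory target is from this benchmark, while the 200 GB dataset size is a hypothetical example:

```shell
memory_target_gb=40     # database memory target used in this benchmark
dataset_gb=200          # hypothetical large dataset

# Rough upper bound on the share of the dataset that can sit in the SGA
pct=$(awk -v m="$memory_target_gb" -v d="$dataset_gb" \
    'BEGIN { printf "%.0f", m / d * 100 }')
echo "At most ~${pct}% of the dataset fits in memory"
```

For a dataset five times the memory target, at most roughly 20% of the data blocks can be cached, so most block requests must be satisfied by physical disk I/O.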
Once Datagenerator finishes populating the database, the database is shut down and saved into the pristine
filesystem (i.e. /oracle/pristine/<dataset size>). This is considered a “clean” copy of the database.
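Saving the clean copy might look like the following sketch; the pristine filesystem path is from the text above, while the datafile source path is an assumption:

```shell
# Shut the instance down cleanly before copying (sketch)
sqlplus / as sysdba <<EOF
shutdown immediate
EOF

# Preserve a pristine copy of the datafiles; the source path is assumed
cp -pr /oracle/oradata /oracle/pristine/<dataset size>
```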
Workload run
Prior to each run, a “clean” copy of a database is restored, chosen according to the amount of physical disk I/O
to be exercised. A Swingbench workload profile is then selected according to the desired level of CPU
utilization. For each combination of database size and workload profile, the following steps are taken:
1. Boot the Integrity BL860c i2 Server Blade as a VM host.
2. Start the Integrity VM guest.
3. Run the workload against the Integrity VM guest.
4. Boot the Integrity BL860c i2 Server Blade using the VM guest’s boot disk as the boot target. This ensures the native
blade test uses the exact same boot environment, filesystem layout, kernel parameters, and Oracle configuration as
the VM guest.
5. Run the workload against the Integrity BL860c i2 Server Blade.
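Steps 3 and 5 each amount to restoring the clean database and driving it with Swingbench. As a sketch, assuming the charbench driver and a hypothetical profile file (user count and run time are illustrative):

```shell
# Restore the clean copy for the chosen dataset size (paths assumed)
cp -pr /oracle/pristine/<dataset size>/. /oracle/oradata/

# Drive the workload; the config file, user count, and run time are illustrative
charbench -c soe_profile.xml -uc 100 -rt 0:30
```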
This method of booting the native blade using the VM guest boot disk can only be accomplished if the guest boot disk
and the Oracle data storage are presented as whole disks. The agile storage addressing built into HP-UX 11i v3
retains the device special files between Integrity VM test runs and physical Integrity server test runs.
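Because persistent device special files in HP-UX 11i v3 are bound to the LUN rather than the hardware path, the same /dev/disk entries should appear whether the disks are seen by the guest or by the native blade. This can be checked with ioscan, for example:

```shell
# List persistent DSFs and their legacy equivalents (HP-UX 11i v3)
ioscan -m dsf

# Show agile-view disk devices with their persistent DSF names
ioscan -fnNC disk
```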
Note:
The “native” mode tests are never run on the Integrity VM host, as that system is
specifically optimized and tuned for running Integrity VM guests, not application
workloads.
Because the Integrity BL860c i2 Server Blade has more physical memory than the VM guest (96 GB vs. 64 GB), it was
necessary to limit the blade’s visible memory during the native-mode test runs for a fair comparison against the VM
guest. To accomplish this, the Integrity BL860c i2 Server Blade was booted with the M boot loader option during the
native-mode runs. For example:
hpux> boot -M65536
The M option specifies the amount of memory, in MB, visible to the OS during the current boot. When the M option is
used, the system chooses which physical memory to allocate to the OS.
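The 65536 in the example above is simply the VM guest's 64 GB of memory expressed in MB:

```shell
guest_mem_gb=64
mb=$((guest_mem_gb * 1024))
echo "boot -M${mb}"   # prints: boot -M65536
```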