Running Oracle OLTP workloads in HP Integrity VM 4.3

Oracle best practices with Integrity VM
(Observed during this performance characterization)
Optimizing Oracle in an HP-UX environment requires several best practices that apply regardless of whether Oracle is
running in an Integrity VM guest or on a native Integrity server. Those best practices are outside the scope of this paper.
However, there are several best practices specific to running an Oracle workload inside an Integrity VM guest.
Integrity VM host best practices
Set the base page size to 64 KB. Doing so allows more memory to be allocated for VM use and improves performance.
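On HP-UX 11i v3, the base page size is controlled through the base_pagesize kernel tunable. As a sketch (the tunable is not dynamic, so the change takes effect at the next reboot of the VM host):

```shell
# Inspect the current base page size (in KB) on the VM host
kctune base_pagesize

# Set the base page size to 64 KB; takes effect at the next reboot
kctune base_pagesize=64
```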
Do not run any workload on the Integrity VM host. The VM host should only be used to run Integrity VM guests.
Tune the Integrity VM host specifically to run Integrity VM guests. Refer to the tuning recommendations in the
“System sizing guidelines for Integrity Virtual Machines deployment” white paper.
Run the appropriate Integrity VM version based on the version of the Oracle database server used in the VM guest.
See the Oracle support website for version information.
Configure the VM to use AVIO storage and AVIO networking adapters.
Install the latest AVIO host drivers. These are available from the HP Software Download site:
http://software.hp.com. AVIO drivers offer improved performance over VIO. Also, VIO support is deprecated in
Integrity VM 4.3 and may be obsolete in a future release.
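For example, AVIO adapters are selected with the avio_stor and avio_lan resource types when adding guest devices. In this sketch, the guest name and the vswitch name vsw0 are placeholders:

```shell
# Add an AVIO storage device and an AVIO network adapter to a guest;
# <guest> and the vswitch name vsw0 are illustrative placeholders
hpvmmodify -P <guest> \
    -a disk:avio_stor::disk:/dev/rdisk/disk10 \
    -a network:avio_lan::vswitch:vsw0
```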
Integrity VM guest best practices
When creating VM guests, configure as many vHBAs as virtual CPUs for each guest. This distributes storage
interrupts evenly across the available VM guest CPUs. This step is required only when running Integrity
VM versions prior to 4.3, because v4.3 automatically distributes storage interrupts across the available virtual CPUs.
However, this step has no adverse effect on v4.3 deployments.
When running Integrity VM versions prior to 4.3, vHBAs are created by varying the bus and device numbers when
configuring VM guest storage devices. Unique bus and device numbers need to be specified only for enough
vHBAs to match the number of virtual cores in the given VM guest.
For example, consider a guest with three virtual CPU cores and five virtual disks mapped directly to physical disks (for
example, disks 20 through 24). The five virtual disks can be specified to hpvmcreate(1M) or hpvmmodify(1M) as follows:
# hpvmmodify -P <guest> \
-a disk:avio_stor:1,0,:disk:/dev/rdisk/disk20 \
-a disk:avio_stor:2,0,:disk:/dev/rdisk/disk21 \
-a disk:avio_stor:3,0,:disk:/dev/rdisk/disk22 \
-a disk:avio_stor::disk:/dev/rdisk/disk23 \
-a disk:avio_stor::disk:/dev/rdisk/disk24
Only three unique ‹bus, device› pairs need to be specified as there are three virtual CPU cores. Any remaining storage
devices may be added using the “::” bus/device syntax and their bus/device assignments will be automatically generated.
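After the guest is modified, the resulting device configuration, including any automatically generated bus/device assignments, can be reviewed with hpvmstatus:

```shell
# Display the guest's device configuration, including the
# bus/device assignments for each virtual disk
hpvmstatus -P <guest> -d
```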
Use whole disk backing stores, that is, map virtual disks directly to whole physical disks. In these benchmark efforts, whole
disk backing stores were found to perform better than LVM logical volume backing stores or file-based backing stores.
Install the Integrity VM guest kit (HPVM guest, available in the OE). The guest kit contains components
needed to optimize guest performance and behavior.
Install the latest AVIO guest drivers. These are available from the HP Software Download site:
http://software.hp.com. AVIO drivers offer improved performance over VIO. Also, VIO support is deprecated in
Integrity VM 4.3 and may be obsolete in a future release.
Oracle best practices
Optimizing Oracle in an HP-UX environment involves several best practices that apply regardless of whether
Oracle is running in an Integrity VM guest or on a standalone Integrity server. These best practices are documented
elsewhere and are outside the scope of this paper.