HP Enterprise Cluster Master Toolkit User Guide (5900-2131, December 2011)

1. Create the volume group with the two PVs, incorporating the two physical paths for each
(choosing hh to be the next hexadecimal number that is available on the system, after the
volume groups that are already configured).
# pvcreate -f /dev/rdsk/c9t0d1
# pvcreate -f /dev/rdsk/c9t0d2
# mkdir /dev/vgora_asm
# mknod /dev/vgora_asm/group c 64 0xhh0000
# vgcreate /dev/vgora_asm /dev/dsk/c9t0d1
# vgextend /dev/vgora_asm /dev/dsk/c9t0d2
# vgextend /dev/vgora_asm /dev/dsk/c10t0d1
# vgextend /dev/vgora_asm /dev/dsk/c10t0d2
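The mknod command above follows the usual HP-UX LVM convention: the chosen volume group number hh occupies the top byte of the group file's 0xhh0000 minor number. A minimal sketch of how hh slots into the command, assuming (purely for illustration) that the next free number on the system turned out to be 0a:

```shell
# Sketch only: "hh" is whatever the next free VG number is on your system;
# 0a below is an assumed example value, not a recommendation.
HH=0a
echo "mknod /dev/vgora_asm/group c 64 0x${HH}0000"
```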
2. For each of the two PVs, create a corresponding LV:
- Create an LV of zero length.
- Mark the LV as contiguous.
- Extend each LV to the maximum size possible on that PV (the number of extents available in a PV can be determined via vgdisplay -v <vgname>).
- Configure LV timeouts, based on the PV timeout and the number of physical paths, as described in the previous section. If a PV timeout has been explicitly set, its value can be displayed via pvdisplay -v. If not, pvdisplay shows a value of "default", indicating that the timeout is determined by the underlying disk driver. For SCSI devices in HP-UX 11i v2, the default timeout is 30 seconds.
- Null out the initial part of each LV's user data area so that ASM accepts the LV as an ASM disk group member. Note that this zeroes the LV's user data area, not the LVM metadata; what is being cleared is the ASM metadata residing at the start of the LV.
# lvcreate -n lvol1 vgora_asm
# lvcreate -n lvol2 vgora_asm
# lvchange -C y /dev/vgora_asm/lvol1
# lvchange -C y /dev/vgora_asm/lvol2
# Assume vgdisplay shows each PV has 2900 extents in our example
# lvextend -l 2900 /dev/vgora_asm/lvol1 /dev/dsk/c9t0d1
# lvextend -l 2900 /dev/vgora_asm/lvol2 /dev/dsk/c9t0d2
# Assume a PV timeout of 30 seconds.
# There are 2 paths to each PV, so the LV timeout value is 60 seconds
# lvchange -t 60 /dev/vgora_asm/lvol1
# lvchange -t 60 /dev/vgora_asm/lvol2
# dd if=/dev/zero of=/dev/vgora_asm/rlvol1 bs=8192 count=12800
# dd if=/dev/zero of=/dev/vgora_asm/rlvol2 bs=8192 count=12800
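The timeout and dd values used above follow from simple arithmetic, sketched here with the example's assumed numbers (a 30-second PV timeout, two physical paths per PV, and dd parameters of bs=8192 with count=12800, i.e. 100 MB zeroed at the start of each LV):

```shell
# Assumed example values from the steps above; substitute your own
# pvdisplay timeout and path count.
PV_TIMEOUT=30    # seconds, from pvdisplay (or the disk driver's default)
NUM_PATHS=2      # physical paths to each PV
echo "LV timeout: $((PV_TIMEOUT * NUM_PATHS)) seconds"
BS=8192; COUNT=12800             # dd block size and count
echo "Zeroed per LV: $((BS * COUNT / 1024 / 1024)) MB"
```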
3. Export the volume group across the Serviceguard cluster and mark it as exclusive, as specified
by Serviceguard documentation. Assign the right set of ownerships and access rights to the
raw logical volumes on each node as required by Oracle (oracle:dba and 0660, respectively).
The raw logical volume device names can now be used as disk group members when configuring ASM disk groups with the Oracle database management utilities. The disk groups can be configured in several ways, for example through the dbca database creation wizard or through sqlplus.
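As a sketch of the per-node permission setup described in step 3 (device names taken from the example above; the printed commands would be run as root on every cluster node), the required commands could be generated like this:

```shell
# Sketch: print the ownership/permission commands Oracle requires on each
# raw LV device node (names follow the example volume group above).
for lv in rlvol1 rlvol2; do
    echo "chown oracle:dba /dev/vgora_asm/${lv}"
    echo "chmod 0660 /dev/vgora_asm/${lv}"
done
```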
The same command sequence can be used for adding new disks to an existing volume group
that is being used by ASM to store one or more database instances.