
Disk Access Name and Disk Media Name
When VxVM recognizes a disk device for the first time (either after the installation of VxVM/CVM or
when new devices are attached to the system), the disk device is configured in the internal DMP and
VxVM configuration database. At this time the device files (referred to as DMP meta nodes in the VxVM
documentation) are also created. They are located in “/dev/vx/dmp/rdsk” and
“/dev/vx/dmp/dsk”.
These device files are persistent across reboots by default. Since they are used to “access” the disk
device, the VxVM documentation refers to them as “disk access name”, “device name” or simply as
“DEVICE”, as shown below. Each cluster node builds its own configuration information and device
files. Thus, the disk access names and device files differ across cluster nodes, independent of
the configured disk naming scheme (EBN or OSN).
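As a simple illustration, the DMP device files can be listed directly in the directories mentioned
above. The device names shown here are taken from the Figure 4 example and will differ on other
nodes and systems:
# List the persistent DMP device files (disk access names) of the local node.
# The names below are illustrative only; each cluster node builds its own set.
$ ls /dev/vx/dmp/dsk
DC1-XP12k1_0  DC1-XP12k1_3  DC1-XP12k1_5  DC1-XP12k1_7
DC2-EVA8k1_2  DC2-EVA8k1_4  DC2-EVA8k1_6  DC2-EVA8k1_7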
Figure 4 shows the disk access names in the “DEVICE” column of the “vxdisk list” output and in
the “ASSOC” column of the “vxprint” output.
At disk group creation time, VxVM assigns a “disk media name” to each VxVM disk in a disk group.
VxVM either assigns a name of the form diskgroup##, or uses the current “disk access name” as the
“disk media name”. The disk media name can be changed to a more meaningful string with the
“vxedit -g <disk group> rename <old name> <new name>” command.
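For example, using the shared disk group and one disk media name from Figure 4, a disk could be
given a more descriptive, location-based media name. The new name DC1_disk01 below is only an
illustrative choice; since dgEDC is a shared (CVM) disk group, the command is run on the CVM
master node:
# Rename the disk media name of one disk in the shared disk group dgEDC.
# “DC1_disk01” is an arbitrary example name, chosen for illustration only.
$ vxedit -g dgEDC rename DC1-XP12k1_2 DC1_disk01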
Since CVM configuration changes within a cluster are performed on the CVM master node, the disk access
name and the disk media name are typically identical on the node that was the CVM master at the time
the disk group was created. On all other cluster nodes these two names usually do not match.
Figure 4 shows the disk media names in the “DISK” column of the “vxdisk list” output
and in the “NAME” column of the “vxprint” output for rows with “dm” in the “TY” column.
Figure 4 shows an example of different disk access names and disk media names as observed on a
CVM slave node. The output on the CVM master is not shown, since both columns would show the
same names.
Figure 4. vxdisk and vxprint output on CVM slave node
$ vxdisk list
DEVICE          TYPE            DISK            GROUP        STATUS
DC1-XP12k1_0    auto:cdsdisk    DC1-XP12k1_2    dgEDC        online shared
DC1-XP12k1_3    auto:cdsdisk    DC1-XP12k1_3    dgEDC        online shared
DC1-XP12k1_5    auto:cdsdisk    DC1-XP12k1_4    dgEDC        online shared
DC1-XP12k1_7    auto:cdsdisk    DC1-XP12k1_5    dgEDC        online shared
DC2-EVA8k1_2    auto:cdsdisk    DC2-EVA8k1_6    dgEDC        online shared
DC2-EVA8k1_4    auto:cdsdisk    DC2-EVA8k1_2    dgEDC        online shared
DC2-EVA8k1_6    auto:cdsdisk    DC2-EVA8k1_3    dgEDC        online shared
DC2-EVA8k1_7    auto:cdsdisk    DC2-EVA8k1_7    dgEDC        online shared

$ vxprint
Disk group: dgEDC

TY NAME            ASSOC           KSTATE   LENGTH    PLOFFS   STATE    TUTIL0  PUTIL0
dg dgEDC           dgEDC           -        -         -        -        -       -
dm DC1-XP12k1_2    DC1-XP12k1_0    -        31241088  -        -        -       -
dm DC1-XP12k1_3    DC1-XP12k1_3    -        31241088  -        -        -       -
dm DC1-XP12k1_4    DC1-XP12k1_5    -        2400768   -        -        -       -
dm DC1-XP12k1_5    DC1-XP12k1_7    -        2400768   -        -        -       -
dm DC2-EVA8k1_2    DC2-EVA8k1_4    -        4176768   -        -        -       -
dm DC2-EVA8k1_3    DC2-EVA8k1_6    -        4176768   -        -        -       -
dm DC2-EVA8k1_6    DC2-EVA8k1_2    -        33536896  -        -        -       -
dm DC2-EVA8k1_7    DC2-EVA8k1_7    -        33536896  -        -        -       -
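Which node currently acts as CVM master can be verified on any cluster node with the
“vxdctl -c mode” command. The output below is only a sketch: the exact wording and the node name
depend on the installed VxVM version and the local cluster configuration.
# Show the CVM role of the local node and the name of the current master.
$ vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: node1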