Veritas Volume Manager 5.1 SP1 Administrator's Guide HP-UX 11i v3 HP Part Number: 5900-1506 Published: April 2011 Edition: 1.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Technical Support ........................................................ 4
Chapter 1   Understanding Veritas Volume Manager ........................ 19
Chapter 2   Provisioning new usable storage ............................. 73
Chapter 3   Administering disks ......................................... 77
Chapter 4   Administering Dynamic Multi-Pathing ......................... 141
Chapter 6   Creating and administering disk groups ...................... 207
Chapter 7   Creating and administering subdisks and plexes .............. 271
Chapter 13  Administering cluster functionality (CVM)
Chapter 14  Administering sites and remote mirrors
Chapter 1

Understanding Veritas Volume Manager

This chapter includes the following topics:

■ About Veritas Volume Manager
■ VxVM and the operating system
■ How VxVM handles storage management
■ Volume layouts in VxVM
■ Online relayout
■ Volume resynchronization
■ Dirty region logging
■ Volume snapshots
■ FastResync
■ Hot-relocation
■ Volume sets

About Veritas Volume Manager

Veritas™ Volume Manager (VxVM) by Symantec is a storage management subsystem that allows you to manage physical disks as logical devices called volumes.
20 Understanding Veritas Volume Manager VxVM and the operating system VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments. By supporting the Redundant Array of Independent Disks (RAID) model, VxVM can be configured to protect against disk and hardware failure, and to increase I/O throughput. Additionally, VxVM provides features that enhance fault tolerance and fast recovery from disk failure or storage array failure.
Understanding Veritas Volume Manager How VxVM handles storage management vxrelocd The hot-relocation daemon monitors VxVM for events that affect redundancy, and performs hot-relocation to restore redundancy. If thin provision disks are configured in the system, then the storage space of a deleted volume is reclaimed by this daemon as configured by the policy. How data is stored Several methods are used to store data on physical disks.
22 Understanding Veritas Volume Manager How VxVM handles storage management the disk. The physical disk device name varies with the computer system you use. Not all parameters are used on all systems. Figure 1-1 shows how a physical disk and device name (devname) are illustrated in this document.
Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-2 shows how VxVM represents the disks in a disk array as several volumes to the operating system.
24 Understanding Veritas Volume Manager How VxVM handles storage management Device discovery Device discovery is the term used to describe the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays.
Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-3 Example configuration for disk enclosures connected via a fibre channel switch Host c1 Fibre Channel switch Disk enclosures enc0 enc1 enc2 In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on.
26 Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-4 shows a High Availability (HA) configuration where redundant-loop access to storage is implemented by connecting independent controllers on the host to separate switches with independent paths to the enclosures.
Understanding Veritas Volume Manager How VxVM handles storage management To take account of fault domains when configuring data redundancy, you can control how mirrored volumes are laid out across enclosures. See “Mirroring across targets, controllers or enclosures” on page 319. Virtual objects VxVM uses multiple virtualization layers to provide distinct functionality and reduce physical limitations. Virtual objects in VxVM include the following: ■ Disk groups See “Disk groups” on page 29.
28 Understanding Veritas Volume Manager How VxVM handles storage management See “Displaying volume information” on page 336. See the vxprint(1M) manual page. Combining virtual objects in VxVM VxVM virtual objects are combined to build volumes. The virtual objects contained in volumes are VM disks, disk groups, subdisks, and plexes.
Figure 1-5 Connection between objects in VxVM

[The figure shows a disk group containing two volumes: vol01, which has a single plex (vol01-01) built from subdisk disk01-01, and vol02, which has two plexes (vol02-01 and vol02-02) built from subdisks disk02-01 and disk03-01. The subdisks are carved from VM disks disk01, disk02, and disk03, which map to physical disks devname1, devname2, and devname3.]

The disk group contains three VM disks which are used to create two volumes.
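You can display this object hierarchy from the command line. The following is a minimal sketch, assuming a disk group named mydg; the -h option lists the plexes and subdisks under each volume, and -t selects the tabular record format:

# vxprint -g mydg -ht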
See “VM disks” on page 30. In releases before VxVM 4.0, the default disk group was rootdg (the root disk group). For VxVM to function, the rootdg disk group had to exist and it had to contain at least one disk. This requirement no longer exists, and VxVM can work without any disk groups configured (although you must set up at least one disk group before you can create any volumes or other VxVM objects).
Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-6 disk01 VM disk example VM disk Physical disk devname Subdisks A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.
32 Understanding Veritas Volume Manager How VxVM handles storage management Example of three subdisks assigned to one VM Disk Figure 1-8 disk01-01 disk01-02 disk01-01 disk01-02 disk01-03 disk01-03 Subdisks VM disk with three subdisks disk01 Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks. Plexes VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks.
Understanding Veritas Volume Manager How VxVM handles storage management Volumes A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume. Due to its virtual nature, a volume is not restricted to a particular disk or a specific area of a disk.
34 Understanding Veritas Volume Manager Volume layouts in VxVM Figure 1-11 Example of a volume with two plexes vol06 vol06-01 vol06-02 vol06-01 vol06-02 disk01-01 disk02-01 Volume with two plexes Plexes Each plex of the mirror contains a complete copy of the volume data. The volume vol06 has the following characteristics: ■ It contains two plexes named vol06-01 and vol06-02. ■ Each plex contains one subdisk. ■ Each subdisk is allocated from a different VM disk (disk01 and disk02).
Understanding Veritas Volume Manager Volume layouts in VxVM physical disk. The combination of a volume layout and the physical disks therefore determines the storage service available from a given virtual device. Layered volumes A layered volume is constructed by mapping its subdisks to underlying volumes. The subdisks in the underlying volumes must map to VM disks, and hence to attached physical storage.
36 Understanding Veritas Volume Manager Volume layouts in VxVM See “Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)” on page 43. ■ RAID-5 (striping with parity) See “RAID-5 (striping with parity)” on page 44. Concatenation, spanning, and carving Concatenation maps data in a linear manner onto one or more subdisks in a plex. To access all of the data in a concatenated plex sequentially, data is first accessed in the first subdisk from the beginning to the end.
Understanding Veritas Volume Manager Volume layouts in VxVM The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are contiguous on the plex, but actually come from two distinct subdisks on the same physical disk. The remaining free space in the subdisk, disk01-02, on VM disk, disk01, can be put to other uses. You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
38 Understanding Veritas Volume Manager Volume layouts in VxVM Striping (RAID-0) Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
Understanding Veritas Volume Manager Volume layouts in VxVM Figure 1-14 Striping across three columns Column 0 Column 1 Column 2 Stripe 1 stripe unit 1 stripe unit 2 stripe unit 3 Stripe 2 stripe unit 4 stripe unit 5 stripe unit 6 Subdisk 1 Subdisk 2 Subdisk 3 Plex A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Figure 1-15 Example of a striped plex with one subdisk per column

[The figure shows stripe units su1 through su6 distributed across three columns. Each column is a single subdisk (disk01-01, disk02-01, or disk03-01) on its own VM disk (disk01, disk02, or disk03) and physical disk (devname1, devname2, or devname3). Stripe units su1 and su4 are in column 0, su2 and su5 in column 1, and su3 and su6 in column 2.]

There is one column per physical disk.

Figure 1-16 Example of a striped plex with concatenated subdisks per column

[The figure shows the same six stripe units distributed across three columns, where each column is built from one or more concatenated subdisks (disk01-01; disk02-01 and disk02-02; disk03-01, disk03-02, and disk03-03) drawn from VM disks disk01, disk02, and disk03.]
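A striped volume can be created in a single vxassist invocation by specifying the layout, the number of columns, and the stripe unit size. A minimal sketch, assuming a disk group named mydg with at least three disks available; the volume name and attribute values are illustrative:

# vxassist -g mydg make stripevol 5g layout=stripe ncol=3 stripeunit=64k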
42 Understanding Veritas Volume Manager Volume layouts in VxVM Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable.
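A mirrored volume with two plexes can be created as follows. This is a minimal sketch, assuming a disk group named mydg with at least two disks available; the volume name and size are illustrative:

# vxassist -g mydg make mirrorvol 5g layout=mirror nmirror=2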
Understanding Veritas Volume Manager Volume layouts in VxVM Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10) VxVM supports the combination of striping above mirroring. This combined layout is called a striped-mirror layout. Putting mirroring below striping mirrors each column of the stripe. If there are multiple subdisks per column, each subdisk can be mirrored individually instead of each column. A striped-mirror volume is an example of a layered volume. See “Layered volumes” on page 49.
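A striped-mirror volume can be created in one step by specifying the stripe-mirror layout; vxassist builds the underlying layered volumes automatically. A sketch, assuming a disk group named mydg with enough disks for three mirrored columns (the name and size are illustrative):

# vxassist -g mydg make strmirvol 5g layout=stripe-mirror ncol=3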
Figure 1-19 How the failure of a single disk affects mirrored-stripe and striped-mirror volumes

[The figure contrasts the two layouts: in a mirrored-stripe volume, the failure of a single disk detaches an entire striped plex, leaving the volume with no redundancy; in a striped-mirror volume, the failure of a single disk only removes redundancy from one mirror, leaving the volume with partial redundancy.]

When the disk is replaced, the entire plex must be brought up to date.
Understanding Veritas Volume Manager Volume layouts in VxVM volume is reflected in all copies. If a portion of a mirrored volume fails, the system continues to use the other copies of the data. RAID-5 provides data redundancy by using parity. Parity is a calculated value used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is calculated by doing an exclusive OR (XOR) procedure on the data. The resulting parity is then written to the volume.
row is the minimal number of disks necessary to support the full width of a parity stripe. Figure 1-21 shows the row and column arrangement of a traditional RAID-5 array.

Figure 1-21 Traditional RAID-5 array

[The figure shows two rows (Row 0 and Row 1) across four columns (Column 0 through Column 3), with stripes 1, 2, and 3 laid across the columns of each row.]

This traditional array structure supports growth by adding more rows per column.

Figure 1-22 Veritas Volume Manager RAID-5 array

[The figure shows stripes 1 and 2 laid across four columns (Column 0 through Column 3), where each column is made up of subdisks (SD).]

VxVM allows each column of a RAID-5 plex to consist of a different number of subdisks. The subdisks in a given column can be derived from different physical disks. Additional subdisks can be added to the columns as necessary.
48 Understanding Veritas Volume Manager Volume layouts in VxVM unit is located in the next stripe, shifted left one column from the previous parity stripe unit location. If there are more stripes than columns, the parity stripe unit placement begins in the rightmost column again. Figure 1-23 shows a left-symmetric parity layout with five disks (one per column).
Understanding Veritas Volume Manager Volume layouts in VxVM 49 RAID-5 logging Logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device such as a volume on disk or in non-volatile RAM. The new data and parity are then written to the disks. Without logging, it is possible for data not involved in any active writes to be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail.
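A RAID-5 volume with a log can be created in a single vxassist invocation; vxassist adds a RAID-5 log by default. A minimal sketch, assuming a disk group named mydg with at least four disks available (three data columns plus the log); the volume name and size are illustrative:

# vxassist -g mydg make r5vol 10g layout=raid5 nlog=1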
50 Understanding Veritas Volume Manager Volume layouts in VxVM Figure 1-25 shows a typical striped-mirror layered volume where each column is represented by a subdisk that is built from an underlying mirrored volume.
Creating striped-mirrors: See “Creating a striped-mirror volume” on page 319. See the vxassist(1M) manual page.

Creating concatenated-mirrors: See “Creating a concatenated-mirror volume” on page 312. See the vxassist(1M) manual page.

Online relayout: See “Online relayout” on page 51. See the vxassist(1M) manual page. See the vxrelayout(1M) manual page.

Moving RAID-5 subdisks: See the vxsd(1M) manual page.
52 Understanding Veritas Volume Manager Online relayout For example, if a striped layout with a 128KB stripe unit size is not providing optimal performance, you can use relayout to change the stripe unit size. File systems mounted on the volumes do not need to be unmounted to achieve this transformation provided that the file system (such as Veritas File System) supports online shrink and grow operations.
Understanding Veritas Volume Manager Online relayout Additional permanent disk space may be required for the destination volumes, depending on the type of relayout that you are performing. This may happen, for example, if you change the number of columns in a striped volume. Figure 1-26 shows how decreasing the number of columns can require disks to be added to a volume.
54 Understanding Veritas Volume Manager Online relayout Figure 1-28 Example of relayout of a concatenated volume to a RAID-5 volume Concatenated volume RAID-5 volume Note that adding parity increases the overall storage space that the volume requires. ■ Change the number of columns in a volume. Figure 1-29 shows an example of changing the number of columns.
Understanding Veritas Volume Manager Online relayout Limitations of online relayout Note the following limitations of online relayout: ■ Log plexes cannot be transformed. ■ Volume snapshots cannot be taken when there is an online relayout operation running on the volume. ■ Online relayout cannot create a non-layered mirrored volume in a single step. It always creates a layered mirrored volume even if you specify a non-layered mirrored layout, such as mirror-stripe or mirror-concat.
56 Understanding Veritas Volume Manager Volume resynchronization You can determine the transformation direction by using the vxrelayout status volume command. These transformations are protected against I/O failures if there is sufficient redundancy and space to move the data. Transformations and volume length Some layout transformations can cause the volume length to increase or decrease. If either of these conditions occurs, online relayout uses the vxresize command to shrink or grow a file system.
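For example, the stripe unit size change described above can be performed online with vxassist relayout. A sketch, assuming a striped volume vol01 in disk group mydg; the new attribute values are illustrative:

# vxassist -g mydg relayout vol01 stripeunit=32k ncol=3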
Understanding Veritas Volume Manager Dirty region logging Dirty flags VxVM records when a volume is first written to and marks it as dirty. When a volume is closed by all processes or stopped cleanly by the administrator, and all writes have been completed, VxVM removes the dirty flag for the volume. Only volumes that are marked dirty require resynchronization. Resynchronization process The process of resynchronization depends on the type of volume.
58 Understanding Veritas Volume Manager Dirty region logging If DRL is not used and a system failure occurs, all mirrors of the volumes must be restored to a consistent state. Restoration is done by copying the full contents of the volume between its mirrors. This process can be lengthy and I/O intensive. Note: DRL adds a small I/O overhead for most write access patterns. This overhead is reduced by using SmartSync.
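DRL is typically enabled when a mirrored volume is created, by adding a log plex at creation time. The following is a minimal sketch, assuming a disk group named mydg; the volume name and size are illustrative:

# vxassist -g mydg make datavol 10g layout=mirror logtype=drl nlog=1

An existing mirrored volume can be given a DRL log in the same way with vxassist addlog and the logtype=drl attribute.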
Understanding Veritas Volume Manager Dirty region logging See “Preparing a volume for DRL and instant snapshots” on page 359. SmartSync recovery accelerator The SmartSync feature of Veritas Volume Manager increases the availability of mirrored volumes by only resynchronizing changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.) SmartSync reduces the time required to restore consistency, freeing more I/O bandwidth for business-critical applications.
60 Understanding Veritas Volume Manager Volume snapshots Redo log volume configuration A redo log is a log of changes to the database data. Because the database does not maintain changes to the redo logs, it cannot provide information about which sections require resilvering. Redo logs are also written sequentially, and since traditional dirty region logs are most useful with randomly-written data, they are of minimal use for reducing recovery time for redo logs.
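A sketch of checking and controlling a relayout from the command line, assuming a volume vol01 in disk group mydg:

# vxrelayout -g mydg status vol01
# vxrelayout -g mydg -o bg start vol01
# vxrelayout -g mydg reverse vol01

The status keyword reports the direction and progress of the transformation, start resumes an interrupted relayout (here in the background), and reverse backs out an incomplete relayout.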
Figure 1-31 Volume snapshot as a point-in-time image of a volume

[The figure shows a timeline: at time T1 only the original volume exists; at T2 a snapshot volume is created; at T3 the original volume changes but the snapshot volume retains the image taken at time T2; at T4 the snapshot volume is updated by resynchronizing it from the original volume.]

Even though the contents of the original volume can change, the snapshot volume preserves the contents of the original volume as they existed at the time that the snapshot was taken.
resynchronization is required on reattaching the snapshot plex. In later releases, the snapshot model was enhanced to allow snapshot volumes to contain more than a single plex, reattachment of a subset of a snapshot volume’s plexes, and persistence of FastResync across system reboots or cluster restarts. See “FastResync” on page 63. Release 4.0 of VxVM introduced full-sized instant snapshots and space-optimized instant snapshots, which offer advantages over traditional third-mirror snapshots, such as immediate availability and easier configuration and administration.
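The traditional third-mirror snapshot procedure can be sketched from the command line as follows, assuming a volume vol01 in disk group mydg; the snapshot volume name is illustrative:

# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snapvol

The snapstart operation attaches and synchronizes a new snapshot plex, which can take considerable time for a large volume; the snapshot operation then breaks the plex off as the separate volume snapvol.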
Understanding Veritas Volume Manager FastResync Table 1-1 Comparison of snapshot features for supported snapshot types (continued) Snapshot feature Full-sized instant (vxsnap) Space-optimized Break-off instant (vxassist or (vxsnap) vxsnap) Can be moved into separate disk group Yes from original volume No Yes Can be turned into an independent volume Yes No Yes FastResync ability persists across system reboots or cluster restarts Yes Yes Yes Synchronization can be controlled Yes No No Can
64 Understanding Veritas Volume Manager FastResync See “Enabling FastResync on a volume” on page 373. FastResync enhancements FastResync provides the following enhancements to VxVM: Faster mirror resynchronization FastResync optimizes mirror resynchronization by keeping track of updates to stored data that have been missed by a mirror.
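FastResync can be enabled from the command line. The following is a sketch, assuming a volume vol01 in disk group mydg; vxvol turns on non-persistent FastResync, while vxsnap prepare adds the DCO and DCO volume that persistent FastResync requires:

# vxvol -g mydg set fastresync=on vol01
# vxsnap -g mydg prepare vol01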
Understanding Veritas Volume Manager FastResync How non-persistent FastResync works with snapshots The snapshot feature of VxVM takes advantage of FastResync change tracking to record updates to the original volume after a snapshot plex is created. After a snapshot is taken, the snapback option is used to reattach the snapshot plex.
66 Understanding Veritas Volume Manager FastResync Version 0 DCO volume layout In earlier releases of VxVM, the DCO object only managed information about the FastResync maps. These maps track writes to the original volume and to each of up to 32 snapshot volumes since the last snapshot operation. Each plex of the DCO volume on disk holds 33 maps, each of which is 4 blocks in size by default. Persistent FastResync uses the maps in a version 0 DCO volume on disk to implement change tracking.
Understanding Veritas Volume Manager FastResync and by the number of per-volume maps. Both the region size and the number of per-volume maps in a DCO volume may be configured when a volume is prepared for use with snapshots. The region size must be a power of 2 and be greater than or equal to 16KB.
68 Understanding Veritas Volume Manager FastResync Associated with the volume are a DCO object and a DCO volume with two plexes. To create a traditional third-mirror snapshot or an instant (copy-on-write) snapshot, the vxassist snapstart or vxsnap make operation respectively is performed on the volume. Figure 1-33 shows how a snapshot plex is set up in the volume, and how a disabled DCO plex is associated with it.
Understanding Veritas Volume Manager FastResync Figure 1-34 Mirrored volume and snapshot volume after completion of a snapshot operation Mirrored volume Data plex Data plex Data change object Snap object DCO volume DCO log plex DCO log plex Snapshot volume Data plex Data change object Snap object DCO volume DCO log plex The DCO volume contains the single DCO plex that was associated with the snapshot plex.
70 Understanding Veritas Volume Manager FastResync volume. If the volumes are in different disk groups, the command must be run separately on each volume. ■ For full-sized instant snapshots, the vxsnap reattach operation is run to return all of the plexes of the snapshot volume to the original volume. ■ For full-sized instant snapshots, the vxsnap dis or vxsnap split operations are run on a volume to break the association between the original volume and the snapshot volume.
Understanding Veritas Volume Manager Hot-relocation ■ Persistent FastResync is supported for RAID-5 volumes, but this prevents the use of the relayout or resize operations on the volume while a DCO is associated with it. ■ Neither non-persistent nor persistent FastResync can be used to resynchronize mirrors after a system crash. Dirty region logging (DRL), which can coexist with FastResync, should be used for this purpose. In VxVM 4.
Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them accessible again.
Chapter 2 Provisioning new usable storage This chapter includes the following topics: ■ Provisioning new usable storage ■ Growing the existing storage by adding a new LUN ■ Growing the existing storage by growing the LUN Provisioning new usable storage The following procedure describes how to provision new usable storage. To provision new usable storage 1 Set up the LUN. See the documentation for your storage array for information about how to create, mask, and bind the LUN.
74 Provisioning new usable storage Growing the existing storage by adding a new LUN # vxdg -g dg1 adddisk 3PARDATA0_1 4 Create the volume on the LUN: # vxassist -b -g dg1 make vol1 100g 3PARDATA0_1 5 Create a file system on the volume: # mkfs -F vxfs /dev/vx/rdsk/dg1/vol1 6 Create a mount point on the file system: # mkdir mount1 7 Mount the file system: # mount -F vxfs /dev/vx/dsk/dg1/vol1 /mount1 Growing the existing storage by adding a new LUN The following procedure describes how to grow the e
Provisioning new usable storage Growing the existing storage by growing the LUN To grow the existing storage by growing a LUN 1 Grow the existing LUN. See the documentation for your storage array for information about how to create, mask, and bind the LUN. 2 Make VxVM aware of the new LUN size.
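A minimal sketch of the typical commands for these steps, reusing the disk group dg1, disk 3PARDATA0_1, and volume vol1 from the previous procedure; the new 200g length is illustrative:

# vxdisk -g dg1 resize 3PARDATA0_1
# vxresize -b -F vxfs -g dg1 vol1 200g

The vxdisk resize command updates the VxVM disk headers to reflect the new LUN size, and vxresize then grows the volume and its file system together.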
76 Provisioning new usable storage Growing the existing storage by growing the LUN
Chapter 3 Administering disks This chapter includes the following topics: ■ About disk management ■ Disk devices ■ Discovering and configuring newly added disk devices ■ Disks under VxVM control ■ Changing the disk-naming scheme ■ About the Array Volume Identifier (AVID) attribute ■ Discovering the association between enclosure-based disk names and OS-based disk names ■ About disk installation and formatting ■ Displaying or changing default disk layout attributes ■ Adding a disk to VxVM
78 Administering disks About disk management ■ Removing and replacing disks ■ Enabling a disk ■ Taking a disk offline ■ Renaming a disk ■ Reserving disks About disk management Veritas Volume Manager (VxVM) allows you to place LUNs and disks under VxVM control, to initialize disks, and to remove and replace disks. Note: Most VxVM commands require superuser or equivalent privileges.
Administering disks Disk devices In HP-UX 11i v3, the persistent (agile) forms of such devices are located in the /dev/disk and /dev/rdisk directories. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories. VxVM uses the device names to create metadevices in the /dev/vx/[r]dmp directories.
80 Administering disks Disk devices By default, VxVM uses enclosure-based naming. You can change the disk-naming scheme if required. See “Changing the disk-naming scheme” on page 99. Operating system-based naming Under operating system-based naming, all disk devices except fabric mode disks are displayed either using the legacy c#t#d# format or the persistent disk## format.
Administering disks Disk devices ■ Disks in the OTHER_DISKS category (disks that are not multi-pathed by DMP) are named using the c#t#d# format or the disk## format. By default, enclosure-based names are persistent, so they do not change after reboot. If a CVM cluster is symmetric, each node in the cluster accesses the same set of disks. Enclosure-based names provide a consistent naming system so that the device names are the same on each node.
82 Administering disks Disk devices A disk’s type identifies how VxVM accesses a disk, and how it manages the disk’s private and public regions. The following disk access types are used by VxVM: auto When the vxconfigd daemon is started, VxVM obtains a list of known disk device addresses from the operating system and configures disk access records for them automatically. nopriv There is no private region (only a public region for allocating subdisks).
Administering disks Discovering and configuring newly added disk devices By default, all auto-configured disks are formatted as cdsdisk disks when they are initialized for use with VxVM. You can change the default format by using the vxdiskadm(1M) command to update the /etc/default/vxdisk defaults file. See “Displaying or changing default disk layout attributes” on page 109. See the vxdisk(1M) manual page.
84 Administering disks Discovering and configuring newly added disk devices The vxdisk scandisks command rescans the devices in the OS device tree and triggers a DMP reconfiguration. You can specify parameters to vxdisk scandisks to implement partial device discovery.
Administering disks Discovering and configuring newly added disk devices Discovering disks and dynamically adding disk arrays DMP uses array support libraries (ASLs) to provide array-specific support for multi-pathing. An array support library (ASL) is a dynamically loadable shared library (plug-in for DDL). The ASL implements hardware-specific logic to discover device attributes during device discovery.
86 Administering disks Discovering and configuring newly added disk devices See “Adding unsupported disk arrays to the DISKS category” on page 94. Disks in JBODs that do not fall into any supported category, and which are not capable of being multipathed by DMP are placed in the OTHER_DISKS category. Adding support for a new disk array You can dynamically add support for a new type of disk array. The support comes in the form of Array Support Libraries (ASLs) that are developed by Symantec.
Administering disks Discovering and configuring newly added disk devices Third-party driver coexistence The third-party driver (TPD) coexistence feature of VxVM allows I/O that is controlled by some third-party multi-pathing drivers to bypass DMP while retaining the monitoring capabilities of DMP. If a suitable ASL is available and installed, devices that use TPDs can be discovered without requiring you to set up a specification file, or to run a special command.
88 Administering disks Discovering and configuring newly added disk devices Listing all the devices including iSCSI You can display the hierarchy of all the devices discovered by DDL, including iSCSI devices. To list all the devices including iSCSI ◆ Type the following command: # vxddladm list The following is a sample output: HBA c2 (20:00:00:E0:8B:19:77:BE) Port c2_p0 (50:0A:09:80:85:84:9D:84) Target c2_p0_t0 (50:0A:09:81:85:84:9D:84) LUN c2t0d0 . . . HBA c3 (iqn.1986-03.com.sun:01:0003ba8ed1b5.
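For example, the following commands restrict discovery to newly added devices, to fabric devices, or to a named list of devices (the device names are illustrative):

# vxdisk scandisks new
# vxdisk scandisks fabric
# vxdisk scandisks device=c1t1d0,c2t2d0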
Administering disks Discovering and configuring newly added disk devices 89 Listing the ports configured on a Host Bus Adapter You can obtain information about all the ports configured on an HBA. The display includes the following information: HBA-ID The parent HBA. State Whether the device is Online or Offline. Address The hardware address.
90 Administering disks Discovering and configuring newly added disk devices To list the targets configured from a Host Bus Adapter or port ◆ You can filter based on a HBA or port, using the following command: # vxddladm list targets [hba=hba_name|port=port_name] For example, to obtain the targets configured from the specified HBA: # vxddladm list targets hba=c2 TgtID Alias HBA-ID State Address ----------------------------------------------------------------c2_p0_t0 c2 Online 50:0A:09:80:85:84:9D:84 Lis
Administering disks Discovering and configuring newly added disk devices Getting or setting the iSCSI operational parameters DDL provides an interface to set and display certain parameters that affect the performance of the iSCSI device path. However, the underlying OS framework must support the ability to set these values. The vxddladm set command returns an error if the OS support is not available.
92 Administering disks Discovering and configuring newly added disk devices To get the iSCSI operational parameters on the initiator for a specific iSCSI target ◆ Type the following commands: # vxddladm getiscsi target=tgt-id {all | parameter} You can use this command to obtain all the iSCSI operational parameters.
Excluding support for a disk array library

To exclude support for a disk array library
◆ Type the following command:
# vxddladm excludearray libname=libvxenc.sl
This example excludes support for disk arrays that depend on the library libvxenc.sl.
94 Administering disks Discovering and configuring newly added disk devices Adding unsupported disk arrays to the DISKS category Disk arrays should be added as JBOD devices if no ASL is available for the array. JBODs are assumed to be Active/Active (A/A) unless otherwise specified. If a suitable ASL is not available, an A/A-A, A/P or A/PF array must be claimed as an Active/Passive (A/P) JBOD to prevent path delays and I/O failures. If a JBOD is ALUA-compliant, it is added as an ALUA array.
Administering disks Discovering and configuring newly added disk devices 4 Enter the following command to add a new JBOD category: # vxddladm addjbod vid=vendorid [pid=productid] \ [serialnum=opcode/pagecode/offset/length] [cabinetnum=opcode/pagecode/offset/length] policy={aa|ap}] where vendorid and productid are the VID and PID values that you found from the previous step. For example, vendorid might be FUJITSU, IBM, or SEAGATE.
To get the iSCSI operational parameters on the initiator for a specific iSCSI target
◆ Type the following command:
# vxddladm getiscsi target=tgt-id {all | parameter}
You can use this command to obtain all the iSCSI operational parameters.
Administering disks Discovering and configuring newly added disk devices Foreign devices DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multi-pathing or RAM disk capabilities. For these devices it may be preferable to use the multi-pathing capability that is provided by the third-party drivers for some arrays rather than using Dynamic Multi-Pathing (DMP).
98 Administering disks Disks under VxVM control ■ Foreign devices, such as HP-UX native multi-pathing metanodes, do not have enclosures, controllers or DMP nodes that can be administered using VxVM commands. An error message is displayed if you attempt to use the vxddladm or vxdmpadm commands to administer such devices while HP-UX native multi-pathing is configured. ■ The I/O Fencing and Cluster File System features are not supported for foreign devices.
Administering disks Changing the disk-naming scheme For example, use the following commands to destroy the file system and initialize the disk: # dd if=/dev/zero of=/dev/dsk/diskname bs=1024k count=50 # vxdisk scandisks # vxdisk -f init diskname ■ If the disk was previously in use by the LVM subsystem, you can preserve existing data while still letting VxVM take control of the disk. This is accomplished using conversion. With conversion, the virtual layout of the data is fully converted to VxVM control.
100 Administering disks Changing the disk-naming scheme If operating system-based naming is selected, all VxVM commands that list DMP node devices will display device names according to the mode that is specified. Table 3-2 Modes to display device names for all VxVM commands Mode Format of output from VxVM command default The same format is used as in the input to the command (if this can be determined). Otherwise, legacy names are used. This is the default mode.
Administering disks Changing the disk-naming scheme To change the disk-naming scheme ◆ Select Change the disk naming scheme from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. When prompted, enter y to change the naming scheme. For operating system based naming, you are asked to select between default, legacy or new device names. Alternatively, you can change the naming scheme from the command line.
102 Administering disks Changing the disk-naming scheme Examples of using vxddladm to change the naming scheme The following examples illustrate the use of vxddladm set namingscheme command: # vxddladm set namingscheme=osn mode=default NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME ====================================================== c1t65d0 ENABLED Disk 2 2 0 Disk # vxdmpadm getlungroup dmpnodename=disk25 NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME ===============================================
Administering disks Changing the disk-naming scheme # vxdmpadm getlungroup dmpnodename=Disk_11 NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME =============================================================== Disk_11 ENABLED Disk 2 2 0 Disk Displaying the disk-naming scheme VxVM disk naming can be operating-system based naming or enclosure-based naming. This command displays whether the VxVM disk naming scheme is currently set.
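To display the current scheme, type the following command (the output shown is a representative sketch):

# vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE    USE_AVID
===========================================================
Enclosure Based     Yes            Yes          Yes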
104 Administering disks Changing the disk-naming scheme To regenerate persistent device names ◆ To regenerate the persistent names repository, use the following command: # vxddladm [-c] assign names The -c option clears all user-specified names and replaces them with autogenerated names. If the -c option is not specified, existing user-specified names are maintained, but OS-based and enclosure-based names are regenerated. The disk names now correspond to the new path names.
Administering disks Changing the disk-naming scheme You do not need to perform either procedure if the devices on which any simple or nopriv disks are present are not automatically configured by VxVM (for example, non-standard disk devices such as ramdisks). The disk access records for simple disks are either persistent or non-persistent. The disk access record for a persistent simple disk is stored in the disk’s private region.
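A sketch of the general command-line syntax; the optional attributes control whether names persist across reboots, whether they are forced to lowercase, and whether the Array Volume ID is used in enclosure-based names:

# vxddladm set namingscheme={ebn|osn} [persistence={yes|no}] \
    [use_avid={yes|no}] [lowercase={yes|no}]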
106 Administering disks About the Array Volume Identifier (AVID) attribute To remove the error state for persistent simple or nopriv disks in the boot disk group 1 Use vxdiskadm to change back to c#t#d# naming.
Administering disks Discovering the association between enclosure-based disk names and OS-based disk names 107 Identifier (AVID) as an index in the DMP metanode name. The DMP metanode name is in the format enclosureID_AVID. The VxVM utilities such as vxdisk list display the DMP metanode name, which includes the AVID property. Use the AVID to correlate the DMP metanode name to the LUN displayed in the array management interface (GUI or CLI) .
108 Administering disks About disk installation and formatting To discover the association between enclosure-based disk names and OS-based disk names ◆ To discover the operating system-based names that are associated with a given enclosure-based disk name, use either of the following commands: # vxdisk list enclosure-based_name # vxdmpadm getsubpaths dmpnodename=enclosure-based_name For example, to find the physical device that is associated with disk ENC0_21, the appropriate commands would be: # vxdisk
Administering disks Displaying or changing default disk layout attributes Displaying or changing default disk layout attributes To display or change the default values for initializing the layout of disks ◆ Select Change/display the default disk layout from the vxdiskadm main menu. For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file, /etc/default/vxdisk. See the vxdisk(1M) manual page.
110 Administering disks Adding a disk to VxVM c3t0d0 c3t1d0 c3t2d0 c3t3d0 If you enter list at the prompt, the vxdiskadm program displays a list of the disks available to the system: The phrase online invalid in the STATUS line indicates that a disk has yet to be added or initialized for VxVM control. Disks that are listed as online with a disk name and disk group are already under VxVM control. Enter the device name or pattern of the disks that you want to initialize at the prompt and press Return.
Administering disks Adding a disk to VxVM If the new disk group may be moved between different operating system platforms, enter y. Otherwise, enter n.
112 Administering disks Adding a disk to VxVM list of enclosure names Enter site tag for disks on enclosure enclosure name [,q,?] site_name 12 If one or more disks already contains a file system, vxdiskadm asks if you are sure that you want to destroy it. Enter y to confirm this: The following disk device appears to contain a currently unmounted file system.
Administering disks Adding a disk to VxVM 15 If you choose not to use the default disk names, vxdiskadm prompts you to enter the disk name. Enter disk name for c21t2d6 [,q,?] (default: dg201) 16 At the following prompt, indicate whether you want to continue to initialize more disks (y) or return to the vxdiskadm main menu (n): Add or initialize other disks? [y,n,q,?] (default: n) The default layout for disks can be changed. See “Displaying or changing default disk layout attributes” on page 109.
If you enter list at the prompt, the vxdiskadm program displays a list of the disks available to the system:

c3t0d0 c3t1d0 c3t2d0 c3t3d0

The phrase online invalid in the STATUS line indicates that a disk has yet to be added or initialized for VxVM control. Disks that are listed as online with a disk name and disk group are already under VxVM control. Enter the device name or pattern of the disks that you want to initialize at the prompt and press Return.
Administering disks Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks Normally, VxVM does not start volumes that are formed entirely from plexes with volatile subdisks. That is because there is no plex that is guaranteed to contain the most recent volume contents. Some RAM disks are used in situations where all volume contents are recreated after reboot.
116 Administering disks Veritas Volume Manager co-existence with Oracle Automatic Storage Management (ASM) disks indicates that the disk must be cleaned up before the disk can be initialized for use with VxVM. To remove a FORMER ASM disk from ASM control for use with VxVM ◆ Clean the disk with the dd command to remove all ASM identification information on it.
Administering disks Rootability To check if a particular disk is under ASM control ◆ Use the vxisasm utility to check if a particular disk is under ASM control.
118 Administering disks Rootability VxVM root disk volume restrictions Volumes on a bootable VxVM root disk have the following configuration restrictions: ■ All volumes on the root disk must be in the disk group that you choose to be the bootdg disk group. ■ The names of the volumes with entries in the LIF LABEL record must be standvol, rootvol, swapvol, and dumpvol (if present).
Administering disks Rootability Warning: If you mirror only selected volumes on the root disk and use spanning or striping to enhance performance, these mirrors are not bootable. See “Setting up a VxVM root disk and mirror” on page 119. Booting root volumes Note: At boot time, the system firmware provides you with a short time period during which you can manually override the automatic boot process and select an alternate boot device.
120 Administering disks Rootability disk to the new VxVM root disk, optionally creates a mirror of the VxVM root disk on another specified physical disk, and make the VxVM root disk and its mirror (if any) bootable by HP-UX. Note: Operations involving setting up a root image, creating a mirror, and restoring the root image are not supported on the LVM version 2 volume groups. Only create a VxVM root disk if you also intend to mirror it.
Administering disks Rootability # /etc/vx/bin/vxcp_lvmroot -R 30 -v -b c0t4d0 # /etc/vx/bin/vxrootmir -v -b c1t1d0 The disk name assigned to the VxVM root disk mirror also uses the format rootdisk## with ## set to the next available number. The target disk for a mirror that is added using the vxrootmir command must be large enough to accommodate the volumes from the VxVM root disk.
122 Administering disks Rootability This example shows how to create an LVM root disk on physical disk c0t1d0 after removing the existing LVM root disk configuration from that disk. # /etc/vx/bin/vxdestroy_lvmroot -v c0t1d0 # /etc/vx/bin/vxres_lvmroot -v -b c0t1d0 The -b option to vxres_lvmroot sets c0t1d0 as the primary boot device. As these operations can take some time, the verbose option, -v, is specified to indicate how far the operation has progressed. See the vxres_lvmroot (1M) manual page.
Administering disks Rootability Adding persistent dump volumes to a VxVM rootable system A persistent dump volume is used when creating crash dumps, which are eventually saved in the /var/adm/crash directory. A maximum of ten VxVM volumes can be configured as persistent dump volumes. Persistent dump volumes should be created and configured only on the current boot disk.
disk to the new VxVM root disk, optionally creates a mirror of the VxVM root disk on another specified physical disk, and makes the VxVM root disk and its mirror (if any) bootable by HP-UX.
Note: Operations involving setting up a root image, creating a mirror, and restoring the root image are not supported on LVM version 2 volume groups.
Only create a VxVM root disk if you also intend to mirror it.
Administering disks Controlling Powerfail Timeout To display information about an individual disk ◆ Type the following command: # vxdisk [-v] list diskname The -v option causes the command to additionally list all tags and tag values that are defined for the disk. Without this option, no tags are displayed. Displaying disk information with vxdiskadm Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status.
126 Administering disks Controlling Powerfail Timeout See the pfto(7) man page. VxVM uses this mechanism in its Powerfail Timeout (PFTO) feature. You can specify a timeout value for individual VxVM disks using the vxdisk command. If the PFTO setting for a disk I/O is enabled, the underlying driver returns an error without retrying the I/O if the disk timer (PFTO) expires and the I/O does not return from the disk.
Administering disks Removing disks The output shows the pftostate field, which indicates whether PFTO is enabled or disabled. The timeout field shows the PFTO timeout value. timeout: 30 pftostate: disabled The output shows: Device: devicetag: ... timeout: pftostate: ...
To remove a persistent dump volume
1 Run the following command to remove a VxVM volume that is being used as a dump volume from the crash dump configuration:
# crashconf -ds /dev/vx/dsk/bootdg/dumpvol
In this example, the dump volume is named dumpvol in the boot disk group.
2 Display the new crash dump configuration:
# crashconf -v
You can now remove the volume if required.
Administering disks Removing disks Removing a disk with subdisks You can remove a disk on which some subdisks are defined. For example, you can consolidate all the volumes onto one disk. If you use the vxdiskadm program to remove a disk, you can choose to move volumes off that disk. Some subdisks are not movable. A subdisk may not be movable for one of the following reasons: ■ There is not enough space on the remaining disks in the subdisks disk group.
130 Administering disks Removing a disk from VxVM control Removing a disk with no subdisks To remove a disk that contains no subdisks from its disk group ◆ Run the vxdiskadm program and select Remove a disk from the main menu, and respond to the prompts as shown in this example to remove mydg02: Enter disk name [,list,q,?] mydg02 VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg02 from group mydg.
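The PFTO value and state can be changed per disk with vxdisk. A sketch, assuming disk mydg01 in disk group mydg; the timeout value of 50 is illustrative:

# vxdisk -g mydg set mydg01 pfto=50
# vxdisk -g mydg set mydg01 pftostate=enabled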
Administering disks Removing and replacing disks track, tracks per cylinder and sectors per cylinder, same number of cylinders, and the same number of accessible cylinders. Note: You may need to run commands that are specific to the operating system or disk array before removing a physical disk. If failures are starting to occur on a disk, but the disk has not yet failed completely, you can replace the disk.
132 Administering disks Removing and replacing disks 3 When you select a disk to remove for replacement, all volumes that are affected by the operation are displayed, for example: VxVM NOTICE V-5-2-371 The following volumes will lose mirrors as a result of this operation: home src No data on these volumes will be lost. The following volumes are in use, and will be disabled as a result of this operation: mkting Any applications using these volumes will fail future accesses.
Administering disks Removing and replacing disks 4 133 At the following prompt, either select the device name of the replacement disk (from the list provided), press Return to choose the default disk, or enter none if you are going to replace the physical disk: The following devices are available as replacements: c0t1d0 You can choose one of these disks now, to replace mydg02. Select none if you do not wish to select a replacement disk.
134 Administering disks Removing and replacing disks 7 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.
Administering disks Removing and replacing disks 3 The vxdiskadm program displays the device names of the disk devices available for use as replacement disks. Your system may use a device name that differs from the examples. Enter the device name of the disk or press Return to select the default device: The following devices are available as replacements: c0t1d0 c1t1d0 You can choose one of these disks to replace mydg02. Choose "none" to initialize another disk to replace mydg02.
136 Administering disks Enabling a disk 6 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.
3 At the following prompt, indicate whether you want to enable another device (y) or return to the vxdiskadm main menu (n):
Enable another device? [y,n,q,?] (default: n)
4 After using the vxdiskadm command to replace one or more failed disks in a VxVM cluster, run the following command on all the cluster nodes:
# vxdctl enable
Then run the following command on the master node:
# vxreattach -r accessname
where accessname is the disk access name (such as c0t1d0).
138 Administering disks Renaming a disk Renaming a disk If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type. To rename a disk ◆ Type the following command: # vxedit [-g diskgroup] rename old_diskname new_diskname By default, VxVM names subdisk objects after the VM disk on which they are located.
Administering disks Reserving disks To reserve a disk ◆ Type the following command: # vxedit [-g diskgroup] set reserve=on diskname After you enter this command, the vxassist program does not allocate space from the selected disk unless that disk is specifically mentioned on the vxassist command line. For example, if mydg03 is reserved, use the following command: # vxassist [-g diskgroup] make vol03 20m mydg03 The vxassist command overrides the reservation and creates a 20 megabyte volume on mydg03.
140 Administering disks Reserving disks
Chapter 4 Administering Dynamic Multi-Pathing This chapter includes the following topics: ■ How DMP works ■ Disabling multi-pathing and making devices invisible to VxVM ■ Enabling multi-pathing and making devices visible to VxVM ■ About enabling and disabling I/O for controllers and storage processors ■ About displaying DMP database information ■ Displaying the paths to a disk ■ Setting customized names for DMP nodes ■ Administering DMP using vxdmpadm How DMP works Note: You need a full li
142 Administering Dynamic Multi-Pathing How DMP works Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array. DMP can also differentiate between different enclosures of a supported array that are connected to the same host system. See “Discovering and configuring newly added disk devices” on page 83.
Administering Dynamic Multi-Pathing How DMP works Active/Passive in explicit failover mode The appropriate command must be issued to the or non-autotrespass mode (A/P-F) array to make the LUNs fail over to the secondary path. This policy supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller.
144 Administering Dynamic Multi-Pathing How DMP works How DMP represents multiple physical paths to a disk as one node Figure 4-1 VxVM Host c1 Single DMP node c2 Mapped by DMP DMP Multiple paths Multiple paths Disk VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs. Figure 4-2 shows an example where two paths, c1t99d0 and c2t99d0, exist to a single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.
Administering Dynamic Multi-Pathing How DMP works How DMP monitors I/O on paths In older releases of VxVM, DMP had one kernel daemon (errord) that performed error processing, and another (restored) that performed path restoration activities. From release 5.0, DMP maintains a pool of kernel threads that are used to perform such tasks as error processing, path restoration, statistics collection, and SCSI request callbacks. The vxdmpadm stat command can be used to provide information about the threads.
146 Administering Dynamic Multi-Pathing How DMP works If required, the response of DMP to I/O failure on a path can be tuned for the paths to individual arrays. DMP can be configured to time out an I/O request either after a given period of time has elapsed without the request succeeding, or after a given number of retries on a path have failed. See “Configuring the response to I/O failures” on page 189.
Administering Dynamic Multi-Pathing How DMP works Load balancing By default, the DMP uses the Minimum Queue policy for load balancing across paths for Active/Active (A/A), Active/Passive (A/P), Active/Passive with explicit failover (A/P-F) and Active/Passive with group failover (A/P-G) disk arrays. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths. I/O is sent down the path which has the minimum outstanding I/Os.
148 Administering Dynamic Multi-Pathing How DMP works Migrating between DMP and HP-UX native multi-pathing Note: Migrating from one multipath driver to the other overwrites the existing Powerfail Timeout (PFTO) settings for the migrating device. It will take the default PFTO setting for the multipath driver that it is migrated to. You can use the vxddladm addforeign and vxddladm rmforeign commands to migrate a system between DMP and HP-UX native multi-pathing.
Administering Dynamic Multi-Pathing How DMP works To migrate from DMP to HP-UX native multi-pathing 1 Stop all the volumes in each disk group on the system: # vxvol -g diskgroup stopall 2 Use the following commands to initiate the migration: # vxddladm addforeign blockdir=/dev/disk chardir=/dev/rdisk # vxconfigd -kr reset For migration involving a current boot disk, use: # vxddladm -r addforeign blockdir=/dev/disk chardir=/dev/rdisk 3 Restart all the volumes in each disk group: # vxvol -g diskgroup
150 Administering Dynamic Multi-Pathing How DMP works To migrate from HP-UX native multi-pathing to DMP 1 Stop all the volumes in each disk group on the system: # vxvol -g diskgroup stopall 2 Use the following commands to initiate the migration: # vxddladm rmforeign blockdir=/dev/disk chardir=/dev/rdisk # vxconfigd -kr reset For migration involving a current boot disk, use: # vxddladm -r rmforeign blockdir=/dev/disk/ chardir=/dev/rdisk/
Administering Dynamic Multi-Pathing How DMP works 3 Restart all the volumes in each disk group: # vxvol -g diskgroup startall The output from the vxdisk list command now shows DMP metanode names according to the current naming scheme. For example, under the default or legacy naming scheme, vxdisk list displays the devices as shown.
152 Administering Dynamic Multi-Pathing Disabling multi-pathing and making devices invisible to VxVM severely degrade I/O performance (sometimes referred to as the ping-pong effect). Path failover on a single cluster node is also coordinated across the cluster so that all the nodes continue to share the same physical path. Prior to release 4.
Administering Dynamic Multi-Pathing Enabling multi-pathing and making devices visible to VxVM To disable multi-pathing and make devices invisible to VxVM 1 Run the vxdiskadm command, and select Prevent multipathing/Suppress devices from VxVM’s view from the main menu. You are prompted to confirm whether you want to continue. 2 Select the operation you want to perform from the following options: Option 1 Suppresses all paths through the specified controller from the view of VxVM.
154 Administering Dynamic Multi-Pathing About enabling and disabling I/O for controllers and storage processors To enable multi-pathing and make devices visible to VxVM 1 Run the vxdiskadm command, and select Allow multipathing/Unsuppress devices from VxVM’s view from the main menu. You are prompted to confirm whether you want to continue. 2 Select the operation you want to perform from the following options: Option 1 Unsuppresses all paths through the specified controller from the view of VxVM.
Administering Dynamic Multi-Pathing About displaying DMP database information array port resulted in all primary paths being disabled, DMP will failover to active secondary paths and I/O will continue on them. After the operation is over, you can use vxdmpadm to re-enable the paths through the controllers. See “Disabling I/O for paths, controllers or array ports” on page 187. See “Enabling I/O for paths, controllers or array ports” on page 188.
156 Administering Dynamic Multi-Pathing Displaying the paths to a disk To display the multi-pathing information on a system ◆ Use the vxdisk path command to display the relationships between the device paths, disk access names, disk media names and disk groups on a system as shown here: # vxdisk path SUBPATH c1t0d0 c4t0d0 c1t1d0 c4t1d0 . . .
Administering Dynamic Multi-Pathing Displaying the paths to a disk To view multi-pathing information for a particular metadevice 1 Use the following command: # vxdisk list devicename For example, to view multi-pathing information for c1t0d3, use the following command: # vxdisk list c1t0d3 The output from the vxdisk list command displays the multi-pathing information, as shown in the following example: Device: c1t0d3 devicetag: c1t0d3 type: simple hostid: system01 . . .
158 Administering Dynamic Multi-Pathing Setting customized names for DMP nodes 2 Alternately, you can use the following command to view multi-pathing information: # vxdmpadm getsubpaths dmpnodename=devicename For example, to view multi-pathing information for emc_clariion0_17, use the following command: # vxdmpadm getsubpaths dmpnodename=emc_clariion0_17 Typical output from the vxdmpadm getsubpaths command is as follows: NAME STATE[A] PATH-TYPE[M] CTLR-NAME ENCLR-TYPE ENCLR-NAME ATTRS =================
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm To assign DMP nodes from a file 1 Use the script vxgetdmpnames to get a sample file populated from the devices in your configuration. The sample file shows the format required and serves as a template to specify your customized names.
160 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm ■ Configure the I/O throttling mechanism. ■ Control the operation of the DMP path restoration thread. ■ Get or set the values of various tunables used by DMP. The following sections cover these tasks in detail along with sample output. See “Changing the values of VxVM tunables” on page 495. See “DMP tunable parameters” on page 504. See the vxdmpadm(1M) manual page.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm c2t1d2 c2t1d3 ENABLED ENABLED ACME ACME 2 2 2 2 0 0 enc0 enc0 Use the dmpnodename attribute with getdmpnode to display the DMP information for a given DMP node.
162 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm dev-attr = ###path = name path = c18t0d1 path = c26t0d1 path = c28t0d1 path = c20t0d1 path = c32t0d1 path = c24t0d1 path = c30t0d1 path = c22t0d1 state type transport ctlr hwpath aportID aportWWN attr enabled(a) primary SCSI c18 0/3/1/0.0x50001fe1500a8f081-1 enabled(a) primary SCSI c26 0/3/1/1.0x50001fe1500a8f081-1 enabled(a) primary SCSI c28 0/3/1/1.0x50001fe1500a8f091-2 enabled(a) primary SCSI c20 0/3/1/0.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm has been manually disabled using the vxdmpadm disable command is listed as disabled(m). # vxdmpadm list dmpnode dmpnodename=dmpnodename For example, the following command displays the consolidated information for the DMP node emc_clariion0_158. # vxdmpadm list dmpnode dmpnodename=emc_clariion0_158 dmpdev = emc_clariion0_19 state = enabled enclosure = emc_clariion0 cab-sno = APM00042102192 asl = libvxCLARiiON.
164 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm c11t0d10 ENABLED c11t0d11 ENABLED ACME ACME 2 2 2 2 0 0 enc1 enc1 Displaying paths controlled by a DMP node, controller, enclosure, or array port The vxdmpadm getsubpaths command lists all of the paths known to DMP. The vxdmpadm getsubpaths command also provides options to list the subpaths through a particular DMP node, controller, enclosure, or array port.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm the I/O policy is not set to singleactive, DMP can use a group of paths (all primary or all secondary) for I/O, which are shown as ENABLED(A). See “Specifying the I/O policy” on page 180. Paths that are in the DISABLED state are not available for I/O operations. A path that was manually disabled by the system administrator displays as DISABLED(M). A path that failed displays as DISABLED.
166 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm You can use getsubpaths to obtain information about all the subpaths of an enclosure.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The other controllers are connected to disks that are in recognized DMP categories. All the controllers are in the ENABLED state which indicates that they are available for I/O operations. The state DISABLED is used to indicate that controllers are unavailable for I/O operations. The unavailability can be due to a hardware failure or due to I/O operations being disabled on that controller by using the vxdmpadm disable command.
168 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The following command lists attributes for all enclosures in a system: # vxdmpadm listenclosure all ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT ================================================================================= Disk Disk DISKS CONNECTED Disk 6 ANA0 ACME 508002000001d660 CONNECTED A/A 57 enc0 A3 60020f20000001a90000 CONNECTED A/P 30 Displaying information about array ports Use the commands in this s
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Hardware RAID types Displays what kind of Storage RAID Group the LUN belongs to Thin Provisioning Discovery and Reclamation Displays the LUN’s thin reclamation abilities Device Media Type Displays the type of media –whether SSD (solid state disk ) Storage-based Snapshot/Clone Displays whether the LUN is a SNAPSHOT or a CLONE of a PRIMARY LUN Storage-based replication Displays if the LUN is part of a replicated group across a remo
170 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The vxdisk -x attribute -p list command displays the one-line listing for the property list and the attributes.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm control by using the vxdmpadm include command. The devices can be included or excluded based on VID:PID combination, paths, controllers, or disks. You can use the bang symbol (!) to exclude or include any paths or controllers except the one specified. Note: The ! character is a special character in some shells. The following syntax shows how to escape it in a bash shell.
172 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm # vxdmpadm iostat show {all | ctlr=ctlr-name \ | dmpnodename=dmp-node \ | enclosure=enclr-name [portid=array-portid ] \ | pathname=path-name | pwwn=array-port-wwn } \ [interval=seconds [count=N]] This command displays I/O statistics for all paths (all), or for a specified controller, DMP node, enclosure, path or port ID.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm c2t121d0 c3t121d0 c2t112d0 c3t112d0 c2t96d0 c3t96d0 c2t106d0 c3t106d0 c2t113d0 c3t113d0 c2t119d0 c3t119d0 87 0 87 0 87 0 87 0 87 0 87 0 0 0 0 0 0 0 0 0 0 0 0 0 44544 0 44544 0 44544 0 44544 0 44544 0 44544 0 0 0 0 0 0 0 0 0 0 0 0 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.
174 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm # vxdmpadm iostat show pathname=c3t115d0 interval=2 count=2 PATHNAME cpu usage = 8195us per cpu memory = 4096b OPERATIONS BYTES AVG TIME(ms) READS WRITES READS WRITES READS WRITES c3t115d0 PATHNAME c3t115d0 0 0 0 0 0.00 0.00 cpu usage = 59us per cpu memory = 4096b OPERATIONS BYTES AVG TIME(ms) READS WRITES READS WRITES READS WRITES 0 0 0 0 0.00 0.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm 175 Displaying cumulative I/O statistics Use the groupby clause of the vxdmpadm iostat command to display cumulative I/O statistics listings per DMP node, controller, array port id, or host-array controller pair and enclosure. If the groupby clause is not specified, then the statistics are displayed per path.
176 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm You can also filter out entities for which all data entries are zero. This option is especially useful in a cluster environment which contains many failover devices. You can display only the statistics for the active paths.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Setting the attributes of the paths to an enclosure You can use the vxdmpadm setattr command to set the attributes of the paths to an enclosure or disk array. The attributes set for the paths are persistent and are stored in the file /etc/vx/dmppolicy.info. You can set the following attributes: active Changes a standby (failover) path to an active path.
178 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm primary Defines a path as being the primary path for a JBOD disk array. The following example specifies a primary path for a JBOD disk array: # vxdmpadm setattr path c3t10d0 \ pathtype=primary secondary Defines a path as being the secondary path for a JBOD disk array.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm To display the minimum redundancy level for a particular device, use the vxdmpadm getattr command, as follows: # vxdmpadm getattr enclosure|arrayname|arraytype \ component-name redundancy For example, to show the minimum redundancy level for the enclosure HDS9500-ALUA0: # vxdmpadm getattr enclosure HDS9500-ALUA0 redundancy ENCLR_NAME DEFAULT CURRENT ============================================= HDS9500-ALUA0 0 4 Specifying the minimum n
180 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Displaying the I/O policy To display the current and default settings of the I/O policy for an enclosure, array or array type, use the vxdmpadm getattr command.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm adaptive This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time. For example, I/O from/to a database may exhibit both long transfers (table scans) and short transfers (random look ups). The policy is also useful for a SAN environment where different paths may have different number of hops.
182 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm balanced [partitionsize=size] This policy is designed to optimize the use of caching in disk drives and RAID controllers. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware. During normal operation, the disks (or LUNs) are logically divided into a number of regions (or partitions), and I/O from/to a given region is sent on only one of the active paths.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm minimumq This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. No further configuration is possible as DMP automatically determines the path with the shortest queue. The following example sets the I/O policy to minimumq for a JBOD: # vxdmpadm setattr enclosure Disk \ iopolicy=minimumq This is the default I/O policy for all arrays.
184 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm singleactive This policy routes I/O down the single active path. This policy can be configured for A/P arrays with one active path per controller, where the other paths are used in case of failover. If configured for A/A arrays, there is no load balancing across the paths, and the alternate paths are only used to provide high availability (HA). If the current active path fails, I/O is switched to an alternate active path.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The use_all_paths attribute only applies to A/A-A arrays. For other arrays, the above command displays the message: Attribute is not applicable for this array. Example of applying load balancing in a SAN This example describes how to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches.
186 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm cpu usage = 11294us per cpu memory = 32768b OPERATIONS KBYTES PATHNAME READS WRITES READS WRITES c2t0d15 0 0 0 0 c2t1d15 0 0 0 0 c3t1d15 0 0 0 0 c3t2d15 0 0 0 0 c4t2d15 0 0 0 0 c4t3d15 0 0 0 0 c5t3d15 0 0 0 0 c5t4d15 5493 0 5493 0 AVG TIME(ms) READS WRITES 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.41 0.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm PATHNAME c2t0d15 c2t1d15 c3t1d15 c3t2d15 c4t2d15 c4t3d15 c5t3d14 c5t4d15 OPERATIONS READS WRITES 1021 0 947 0 1004 0 1027 0 1086 0 1048 0 1036 0 1021 0 READS 1021 947 1004 1027 1086 1048 1036 1021 KBYTES WRITES 0 0 0 0 0 0 0 0 AVG TIME(ms) READS WRITES 0.39 0.00 0.39 0.00 0.39 0.00 0.40 0.00 0.39 0.00 0.39 0.00 0.39 0.00 0.39 0.
188 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm # vxdmpadm disable enclosure=HDS9500V0 portid=1A # vxdmpadm disable pwwn=20:00:00:E0:8B:06:5F:19 You can use the -c option to check if there is only a single active path to the disk. If so, the disable command fails with an error message unless you use the -f option to forcibly disable the path.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The following are examples of using the command to enable I/O on an array port: # vxdmpadm enable enclosure=HDS9500V0 portid=1A # vxdmpadm enable pwwn=20:00:00:E0:8B:06:5F:19 Renaming an enclosure The vxdmpadm setattr command can be used to assign a meaningful name to an existing enclosure, for example: # vxdmpadm setattr enclosure enc0 name=GRP1 This example changes the name of an enclosure from enc0 to GRP1.
190 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The value of the argument to retrycount specifies the number of retries to be attempted before DMP reschedules the I/O request on another available path, or fails the request altogether. As an alternative to specifying a fixed number of retries, you can specify the amount of time DMP allows for handling an I/O request. If the I/O request does not succeed within that time, DMP fails the I/O request.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Configuring the I/O throttling mechanism By default, DMP is configured with I/O throttling turned off for all paths. To display the current settings for I/O throttling that are applied to the paths to an enclosure, array name or array type, use the vxdmpadm getattr command. See “Displaying recovery option values” on page 192.
192 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm The above command configures the default behavior, corresponding to recoveryoption=nothrottle. The above command also configures the default behavior for the response to I/O failures. See “Configuring the response to I/O failures” on page 189. Note: The I/O throttling settings are persistent across reboots of the system.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm # vxdmpadm getattr \ {enclosure enc-name|arrayname name|arraytype type} \ recoveryoption The following example shows the vxdmpadm getattr command being used to display the recoveryoption option values that are set on an enclosure.
194 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Configuring DMP path restoration policies DMP maintains a kernel thread that re-examines the condition of paths at a specified interval. The type of analysis that is performed on the paths depends on the checking policy that is configured. Note: The DMP path restoration thread does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm ■ check_periodic The path restoration thread performs check_all once in a given number of cycles, and check_disabled in the remainder of the cycles. This policy may lead to periodic slowing down (due to check_all) if there is a large number of paths available.
196 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm Displaying the status of the DMP path restoration thread Use the following command to display the status of the automatic path restoration kernel thread, its polling interval, and the policy that it uses to check the condition of paths: # vxdmpadm stat restored This produces output such as the following: The number of daemons running : 1 The interval of daemon: 300 The policy of daemon: check_disabled Displaying information about t
Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm is currently loaded and in use. To see detailed information for an individual module, specify the module name as the argument to the command: # vxdmpadm listapm module_name To add and configure an APM, use the following command: # vxdmpadm -a cfgapm module_name [attr1=value1 \ [attr2=value2 ...]] The optional configuration attributes and their values are specific to the APM for an array.
198 Administering Dynamic Multi-Pathing Administering DMP using vxdmpadm
Chapter 5 Online dynamic reconfiguration This chapter includes the following topics: ■ About online dynamic reconfiguration ■ Reconfiguring a LUN online that is under DMP control ■ Upgrading the array controller firmware online ■ Replacing a host bus adapter About online dynamic reconfiguration You can perform the following kinds of online dynamic reconfigurations: ■ Reconfiguring a LUN online that is under DMP control ■ Updating the array controller firmware, also known as a nondisruptive upgr
200 Online dynamic reconfiguration Reconfiguring a LUN online that is under DMP control The operations are as follows: ■ Dynamic LUN removal from an existing target ID See “Removing LUNs dynamically from an existing target ID” on page 200. ■ Dynamic new LUN addition to a new target ID See “Adding new LUNs dynamically to a new target ID” on page 202.
Online dynamic reconfiguration Reconfiguring a LUN online that is under DMP control 5 Remove the LUNs from the vxdisk list. Enter the following command on all nodes in a cluster: # vxdisk rm da-name This is a required step. If you do not perform this step, the DMP device tree shows ghost paths. 6 Clean up the HP-UX 11i v3 SCSI device tree for the devices that you removed in step 5. See “Cleaning up the operating system device tree after removing LUNs” on page 203. This step is required.
202 Online dynamic reconfiguration Reconfiguring a LUN online that is under DMP control If the answer to any of these questions is "No," return to step 4 and perform the required steps. If the answer to all of the questions is "Yes," the LUN remove operation is successful. Adding new LUNs dynamically to a new target ID In this case, a new group of LUNs is mapped to the host via multiple HBA ports. An operating system device scan is issued for the LUNs to be recognized and added to DMP control.
Online dynamic reconfiguration Reconfiguring a LUN online that is under DMP control If the answer to all of the questions is "Yes," the LUNs have been successfully added. You can now add the LUNs to a disk group, create new volumes, or grow existing volumes. If the dmp_native_support tunable is set to ON and the new LUN does not have a VxVM label or is not claimed by a TPD driver then the LUN is available for use by LVM.
204 Online dynamic reconfiguration Upgrading the array controller firmware online Upgrading the array controller firmware online Storage array subsystems need code upgrades as fixes, patches, or feature upgrades. You can perform these upgrades online when the file system is mounted and I/Os are being served to the storage. Legacy storage subsystems contain two controllers for redundancy. An online upgrade is done one controller at a time.
Online dynamic reconfiguration Replacing a host bus adapter Replacing a host bus adapter This section describes replacing an online host bus adapter (HBA) when DMP is managing multi-pathing in a Storage Foundation Cluster File System (SFCFS) cluster. The HBA World Wide Port Name (WWPN) changes when the HBA is replaced. Following are the prerequisites to replace an online host bus adapter: ■ A single node or two or more node CFS or RAC cluster. ■ I/O running on CFS file system.
206 Online dynamic reconfiguration Replacing a host bus adapter 5 Replace the HBA with a new compatible HBA of similar type in the same slot. 6 Bring the replaced HBA back into the configuration. Enter the following: # /usr/bin/olrad -q 7 Enter the following command after you replace the HBA: # /usr/bin/olrad -R slot_ID 8 Verify the success of the operation. The slot power should be ON, driver is OLAR capable, and so on.
Chapter Creating and administering disk groups This chapter includes the following topics: ■ About disk groups ■ Displaying disk group information ■ Creating a disk group ■ Adding a disk to a disk group ■ Removing a disk from a disk group ■ Moving disks between disk groups ■ Deporting a disk group ■ Importing a disk group ■ Handling of minor number conflicts ■ Moving disk groups between systems ■ Handling cloned disks with duplicated identifiers ■ Renaming a disk group ■ Handling c
208 Creating and administering disk groups About disk groups ■ Upgrading the disk group version ■ About the configuration daemon in VxVM ■ Backing up and restoring disk group configuration data ■ Using vxnotify to monitor configuration changes ■ Working with existing ISP disk groups About disk groups Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group.
Creating and administering disk groups About disk groups continues to refer to it. You can replace disks by first associating a different physical disk with the name of the disk to be replaced and then recovering any volume data that was stored on the original disk (from mirrors or backup copies). Having disk groups that contain many disks and VxVM objects causes the private region to fill.
210 Creating and administering disk groups About disk groups Specification of disk groups to commands Many VxVM commands let you specify a disk group using the -g option. For example, the following command creates a volume in the disk group, mktdg: # vxassist -g mktdg make mktvol 5g The block special device that corresponds to this volume is /dev/vx/dsk/mktdg/mktvol.
Creating and administering disk groups About disk groups ■ Use the default disk group name that is specified by the environment variable VXVM_DEFAULTDG. This variable can also be set to one of the reserved system-wide disk group names: bootdg, defaultdg, or nodg. If the variable is undefined, the following rule is applied. ■ Use the disk group that has been assigned to the system-wide default disk group alias, defaultdg. If this alias is undefined, the following rule is applied.
212 Creating and administering disk groups About disk groups If bootdg is specified as the argument to this command, the default disk group is set to be the same as the currently defined system-wide boot disk group. If nodg is specified as the argument to the vxdctl defaultdg command, the default disk group is undefined. The specified disk group is not required to exist on the system. See the vxdctl(1M) manual page. See the vxdg(1M) manual page.
Creating and administering disk groups About disk groups Table 6-1 Disk group version assignments VxVM release Introduces disk group version New features supported 5.
214 Creating and administering disk groups About disk groups Table 6-1 Disk group version assignments (continued) VxVM release Introduces disk group version New features supported Supports disk group versions 4.1 120 ■ Automatic Cluster-wide Failback for A/P arrays ■ Migration of Volumes to ISP ■ Persistent DMP Policies ■ Shared Disk Group Failure Policy 20, 30, 40, 50, 60, 70, 80, 90, 110, 120 4.
Creating and administering disk groups About disk groups Table 6-1 Disk group version assignments (continued) VxVM release Introduces disk group version New features supported 3.2, 3.5 90 ■ ■ ■ ■ ■ ■ ■ 3.1.1 80 ■ 3.1 70 ■ 3.0 60 Supports disk group versions Cluster Support 20, 30, 40, 50, 60, 70, for Oracle 80, 90 Resilvering Disk Group Move, Split and Join Device Discovery Layer (DDL) 1.
216 Creating and administering disk groups Displaying disk group information Disk group version assignments (continued) Table 6-1 VxVM release Introduces disk group version New features supported 2.0 20 ■ Supports disk group versions Dirty Region 20 Logging (DRL) ■ Disk Group Configuration Copy Limiting ■ Mirrored Volumes Logging ■ New-Style Stripes ■ RAID-5 Volumes ■ Recovery Checkpointing 1.3 15 15 1.
Creating and administering disk groups Displaying disk group information import-id: 0.1 flags: version: 160 local-activation: read-write alignment: 512 (bytes) ssb: on detach-policy: local copies: nconfig=default nlog=default config: seqno=0.
218 Creating and administering disk groups Creating a disk group newdg newdg02 oradg oradg01 c0t13d0 c0t14d0 c0t13d0 c0t14d0 0 0 4443310 4443310 - To display free space for a disk group, use the following command: # vxdg -g diskgroup free where -g diskgroup optionally specifies a disk group.
Creating and administering disk groups Adding a disk to a disk group The disk that is specified by the device name, c1t0d0, must have been previously initialized with vxdiskadd or vxdiskadm. The disk must not currently belong to a disk group. You can use the cds attribute with the vxdg init command to specify whether a new disk group is compatible with the Cross-platform Data Sharing (CDS) feature. In Veritas Volume Manager 4.
220 Creating and administering disk groups Removing a disk from a disk group You can also use the vxdiskadd command to add a disk to a disk group. Enter the following: # vxdiskadd c1t1d0 where c1t1d0 is the device name of a disk that is not currently assigned to a disk group. The command dialog is similar to that described for the vxdiskadm command. See “Adding a disk to VxVM” on page 109.
Creating and administering disk groups Moving disks between disk groups For example, to remove the disk c1t0d0 from VxVM control, enter the following: # vxdiskunsetup c1t0d0 You can remove a disk on which some subdisks of volumes are defined. For example, you can consolidate all the volumes onto one disk. If you use vxdiskadm to remove a disk, you can choose to move volumes off that disk. To do this, run vxdiskadm and select Remove a disk from the main menu.
222 Creating and administering disk groups Deporting a disk group You can also move a disk by using the vxdiskadm command. Select Remove a disk from the main menu, and then select Add or initialize a disk. To move disks and preserve the data on these disks, along with VxVM objects, such as volumes: See “Moving objects between disk groups” on page 257. Deporting a disk group Deporting a disk group disables access to a disk group that is enabled (imported) by the system.
Creating and administering disk groups Importing a disk group 6 At the following prompt, press Return to continue with the operation: Continue with operation? [y,n,q,?] (default: y) After the disk group is deported, the vxdiskadm utility displays the following message: VxVM INFO V-5-2-269 Removal of disk group newdg was successful.
224 Creating and administering disk groups Handling of minor number conflicts 3 At the following prompt, enter the name of the disk group to import (in this example, newdg): Select disk group to import [,list,q,?] (default: list) newdg When the import finishes, the vxdiskadm utility displays the following success message: VxVM INFO V-5-2-374 The import of newdg was successful.
Creating and administering disk groups Handling of minor number conflicts If a minor conflict exists when a disk group is imported, VxVM automatically assigns a new base minor to the disk group, and reminors the volumes in the disk group, based on the new base minor. You do not need to run the vxdg reminor command to resolve the minor conflicts. To avoid any conflicts between shared and private disk groups, the minor numbers are divided into shared and private pools.
226 Creating and administering disk groups Moving disk groups between systems release. In this case, disable the dynamic reminoring before you install the new VxVM package. To disable the division between shared and private minor numbers 1 Set the tunable sharedminorstart in the defaults file /etc/default/vxsf to 0 (zero). Set the following line in the /etc/default/vxsf file.
Creating and administering disk groups Moving disk groups between systems 4 Import (enable local access to) the disk group on the target system with this command: # vxdg import diskgroup Warning: All disks in the disk group must be moved to the other system. If they are not moved, the import fails. 5 By default, VxVM enables and starts any disabled volumes after the disk group is imported. See “Setting the automatic recovery of volumes” on page 224.
228 Creating and administering disk groups Moving disk groups between systems The disks may be considered invalid due to a mismatch between the host ID in their configuration copies and that stored in the /etc/vx/volboot file. To clear locks on a specific set of devices, use the following command: # vxdisk clearimport devicename ...
Creating and administering disk groups Moving disk groups between systems As using the -f option to force the import of an incomplete disk group counts as a successful import, an incomplete disk group may be imported subsequently without this option being specified. This may not be what you expect. You can also import the disk group as a shared disk group. See “Importing disk groups as shared” on page 455. These operations can also be performed using the vxdiskadm utility.
230 Creating and administering disk groups Moving disk groups between systems Note: The default policy ensures that a small number of disk groups can be merged successfully between a set of machines. However, where disk groups are merged automatically using failover mechanisms, select ranges that avoid overlap.
Creating and administering disk groups Moving disk groups between systems See the vxdg(1M) manual page. See “Handling of minor number conflicts” on page 224. Compatibility of disk groups between platforms For disk groups that support the Cross-platform Data Sharing (CDS) feature, the upper limit on the minor number range is restricted on AIX, HP-UX, Linux (with a 2.6 or later kernel) and Solaris to 65,535 to ensure portability between these operating systems. On a Linux platform with a pre-2.
232 Creating and administering disk groups Handling cloned disks with duplicated identifiers # cat /proc/sys/vxvm/vxio/vol_max_volumes 4079 See the vxdg(1M) manual page. Handling cloned disks with duplicated identifiers A disk may be copied by creating a hardware snapshot (such as an EMC BCV™ or Hitachi ShadowCopy™) or clone, by using dd or a similar command to replicate the disk, or by building a new LUN from the space that was previously used by a deleted LUN.
Creating and administering disk groups Handling cloned disks with duplicated identifiers . . . c2t64d0 c2t65d0 c2t66d0 c2t67d0 c2t68d0 auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk - - online online online udid_mismatch online udid_mismatch online udid_mismatch Writing a new UDID to a disk You can use the following command to update the unique disk identifier (UDID) for one or more disks.
234 Creating and administering disk groups Handling cloned disks with duplicated identifiers copies of one or more cloned disks exist. In this case, you can use the following command to tag all the disks in the disk group that are to be imported: # vxdisk [-g diskgroup ] settag tagname disk ... where tagname is a string of up to 128 characters, not including spaces or tabs.
Creating and administering disk groups Handling cloned disks with duplicated identifiers To check which disks in a disk group contain copies of this configuration information, use the vxdg listmeta command: # vxdg [-q] listmeta diskgroup The -q option can be specified to suppress detailed configuration information from being displayed.
236 Creating and administering disk groups Handling cloned disks with duplicated identifiers These tags can be viewed by using the vxdisk listtag command: # vxdisk listtag DEVICE NAME VALUE TagmaStore-USP0_24 TagmaStore-USP0_25 TagmaStore-USP0_28 TagmaStore-USP0_28 t2 t1 t1 t2 v2 v1 v1 v2 The following command ensures that configuration database copies and kernel log copies are maintained for all disks in the disk group mydg that are tagged as t1: # vxdg -g mydg set tagmeta=on tag=t1 nconfig=all nlo
Creating and administering disk groups Handling cloned disks with duplicated identifiers Importing cloned disks without tags In the first example, cloned disks (ShadowImage™ devices) from an Hitachi TagmaStore array will be imported. The primary (non-cloned) disks, mydg01, mydg02 and mydg03, are already imported, and the cloned disks are not tagged.
238 Creating and administering disk groups Handling cloned disks with duplicated identifiers To import only the cloned disks into the mydg disk group: # vxdg -o useclonedev=on -o updateid import mydg # vxdisk -o alldgs list DEVICE TagmaStore-USP0_3 TagmaStore-USP0_23 TagmaStore-USP0_25 TagmaStore-USP0_30 TagmaStore-USP0_31 TagmaStore-USP0_32 TYPE auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk DISK GROUP mydg03 mydg (mydg) (mydg) mydg02 mydg mydg01 mydg (mydg) STATUS onlin
Creating and administering disk groups Handling cloned disks with duplicated identifiers Importing cloned disks with tags In this example, cloned disks (BCV devices) from an EMC Symmetrix DMX array will be imported. The primary (non-cloned) disks, mydg01, mydg02 and mydg03, are already imported, and the cloned disks with the tag t1 are to be imported.
240 Creating and administering disk groups Handling cloned disks with duplicated identifiers As the cloned disk EMC0_15 is not tagged as t1, it is not imported. Note that the state of the imported cloned disks has changed from online udid_mismatch to online clone_disk. By default, the state of imported cloned disks is shown as online clone_disk.
Creating and administering disk groups Renaming a disk group As in the previous example, the cloned disk EMC0_15 is not tagged as t1, and so it is not imported.
242 Creating and administering disk groups Renaming a disk group When renaming on deport, you can specify the -h hostname option to assign a lock to an alternate host. This ensures that the disk group is automatically imported when the alternate host reboots.
Creating and administering disk groups Handling conflicting configuration copies 3 On the importing host, import and rename the rootdg disk group with this command: # vxdg -tC -n newdg import diskgroup The -t option indicates a temporary import name, and the -C option clears import locks. The -n option specifies an alternate name for the rootdg being imported so that it does not conflict with the existing rootdg. diskgroup is the disk group ID of the disk group being imported (for example, 774226267.
244 Creating and administering disk groups Handling conflicting configuration copies Figure 6-1 shows a 2-node cluster with node 0, a fibre channel switch and disk enclosure enc0 in building A, and node 1, another switch and enclosure enc1 in building B.
Creating and administering disk groups Handling conflicting configuration copies When the network links are restored, attempting to reattach the missing disks to the disk group on Node 0, or to re-import the entire disk group on either node, fails. VxVM increments the serial ID in the disk media record of each imported disk in all the disk group configuration databases on those disks, and also in the private region of each imported disk.
246 Creating and administering disk groups Handling conflicting configuration copies Figure 6-2 Example of a serial split brain condition that can be resolved automatically Partial disk group imported on host X Disk B not imported Disk A Disk B Disk A = 1 Disk B = 0 Configuration database Expected A = 1 Expected B = 0 Configuration database Expected A = 0 Expected B = 0 1. Disk A is imported on a separate host. Disk B is not imported.
Creating and administering disk groups Handling conflicting configuration copies Figure 6-3 247 Example of a true serial split brain condition that cannot be resolved automatically Partial disk group imported on host X Partial disk group imported on host Y Disk A Disk B Disk A = 1 Configuration database Expected A = 1 Expected B = 0 Disk B = 1 Configuration database Expected A = 0 Expected B = 1 1. Disks A and B are imported independently on separate hosts.
248 Creating and administering disk groups Handling conflicting configuration copies Correcting conflicting configuration information To resolve conflicting configuration information, you must decide which disk contains the correct version of the disk group configuration database. To assist you in doing this, you can run the vxsplitlines command to show the actual serial ID on each disk in the disk group and the serial ID that was expected from the configuration database.
Creating and administering disk groups Handling conflicting configuration copies 249 All the disks in the first pool have the same config copies All the disks in the second pool may not have the same config copies Number of disks in the first pool: 1 Number of disks in the second pool: 1 To import the disk group with the configuration copy from the first pool, enter the following command: # /usr/sbin/vxdg (-s) -o selectcp=1221451925.395.
250 Creating and administering disk groups Reorganizing the contents of disk groups and expected serial IDs for any disks in the disk group that are not imported at this time remain unaltered. Reorganizing the contents of disk groups There are several circumstances under which you might want to reorganize the contents of your existing disk groups: ■ To group volumes or disks differently as the needs of your organization change.
Creating and administering disk groups Reorganizing the contents of disk groups Figure 6-4 Disk group move operation Source Disk Group Target Disk Group Move Source Disk Group ■ After move Target Disk Group The split operation removes a self-contained set of VxVM objects from an imported disk group, and moves them to a newly created target disk group.
252 Creating and administering disk groups Reorganizing the contents of disk groups Figure 6-5 Disk group split operation Source disk group Disks to be split into new disk group Source disk group ■ After split New target disk group The join operation removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. Figure 6-6 shows the join operation.
Creating and administering disk groups Reorganizing the contents of disk groups Figure 6-6 Disk group join operation Source disk group Target disk group Join After join Target disk group These operations are performed on VxVM objects such as disks or top-level volumes, and include all component objects such as sub-volumes, plexes and subdisks. The objects to be moved must be self-contained, meaning that the disks that are moved must not contain any other objects that are not intended for the move.
254 Creating and administering disk groups Reorganizing the contents of disk groups Warning: Before moving volumes between disk groups, stop all applications that are accessing the volumes, and unmount all file systems that are configured on these volumes.
Creating and administering disk groups Reorganizing the contents of disk groups ■ Splitting or moving a volume into a different disk group changes the volume’s record ID. ■ The operation can only be performed on the master node of a cluster if either the source disk group or the target disk group is shared. ■ In a cluster environment, disk groups involved in a move or join must both be private or must both be shared.
256 Creating and administering disk groups Reorganizing the contents of disk groups the move. You can use the vxprint command on a volume to examine the configuration of its associated DCO volume. If you use the vxassist command to create both a volume and its DCO, or the vxsnap prepare command to add a DCO to a volume, the DCO plexes are automatically placed on different disks from the data plexes of the parent volume.
Creating and administering disk groups Reorganizing the contents of disk groups Examples of disk groups that can and cannot be split Figure 6-7 Volume data plexes Snapshot plex The disk group can be split as the DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data Split Volume DCO plexes Snapshot DCO plex Volume data plexes Snapshot plex The disk group cannot be split as the DCO plexes cannot accompany their volumes.
258 Creating and administering disk groups Reorganizing the contents of disk groups # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ... The -o expand option ensures that the objects that are actually moved include all other disks containing subdisks that are associated with the specified objects or with objects that they contain. The default behavior of vxdg when moving licensed disks in an EMC array is to perform an EMC disk compatibility check for each disk involved in the move.
Creating and administering disk groups Reorganizing the contents of disk groups dg dm dm dm dm v pl sd pl sd mydg mydg01 mydg05 mydg07 mydg08 vol1 vol1-01 mydg01-01 vol1-02 mydg05-01 mydg c0t1d0 c1t96d0 c1t99d0 c1t100d0 fsgen vol1 vol1-01 vol1 vol1-02 ENABLED ENABLED ENABLED ENABLED ENABLED 17678493 17678493 17678493 17678493 2048 3591 3591 3591 3591 0 0 ACTIVE ACTIVE ACTIVE - - - The following command moves the self-contained set of objects implied by specifying disk mydg01 from disk group mydg t
260 Creating and administering disk groups Reorganizing the contents of disk groups TY dg dm dm NAME mydg mydg07 mydg08 ASSOC mydg c1t99d0 c1t100d0 KSTATE - LENGTH 17678493 17678493 PLOFFS - STATE - TUTIL0 - PUTIL0 - The following commands would also achieve the same result: # vxdg move mydg rootdg mydg01 mydg05 # vxdg move mydg rootdg vol1 See “Moving objects between shared disk groups” on page 457.
Creating and administering disk groups Reorganizing the contents of disk groups pl vol1-02 vol1 sd rootdg05-01 vol1-02 ENABLED ENABLED 3591 3591 0 ACTIVE - - - The following command removes disks rootdg07 and rootdg08 from rootdg to form a new disk group, mydg: # vxdg -o expand split rootdg mydg rootdg07 rootdg08 By default, VxVM automatically recovers and starts the volumes following a disk group split. If you have turned off the automatic recovery feature, volumes are disabled after a split.
262 Creating and administering disk groups Reorganizing the contents of disk groups Joining disk groups To remove all VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o override|verify] join sourcedg targetdg See “Moving objects between disk groups” on page 257. Note: You cannot specify rootdg as the source disk group for a join operation. The following output from vxprint shows the contents of the disk groups rootdg and mydg.
Creating and administering disk groups Disabling a disk group # vxdg join mydg rootdg By default, VxVM automatically recovers and starts the volumes following a disk group join. If you have turned off the automatic recovery feature, volumes are disabled after a join. Use the following commands to recover and restart the volumes in the target disk group: # vxrecover -g targetdg -m [volume ...
264 Creating and administering disk groups Destroying a disk group Destroying a disk group The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization: # vxdg destroy diskgroup Warning: This command destroys all data on the disks. When a disk group is destroyed, the disks that are released can be re-used in other disk groups.
Creating and administering disk groups About the configuration daemon in VxVM Until the disk group is upgraded, it may still be deported back to the release from which it was imported. To use the features in the upgraded release, you must explicitly upgrade the existing disk groups. There is no "downgrade" facility. After you upgrade a disk group, the disk group is incompatible with earlier releases of VxVM that do not support the new version.
266 Creating and administering disk groups Backing up and restoring disk group configuration data The vxconfigd daemon reads the contents of this file to locate the disks and the configuration databases for their disk groups. The /etc/vx/darecs file is also used to store definitions of foreign devices that are not autoconfigurable. Such entries may be added by using the vxddladm addforeign command. See the vxddladm(1M) manual page.
Creating and administering disk groups Using vxnotify to monitor configuration changes Using vxnotify to monitor configuration changes You can use the vxnotify utility to display events relating to disk and configuration changes that are managed by the vxconfigd configuration daemon. If vxnotify is running on a system where the VxVM clustering feature is active, it displays events that are related to changes in the cluster state of the system on which it is running.
268 Creating and administering disk groups Working with existing ISP disk groups To determine whether a disk group is an ISP disk group ◆ Check for the presence of storage pools, using the following command: # vxprint Sample output: Disk group: mydg TY NAME ASSOC dg mydg mydg KSTATE - LENGTH - PLOFFS STATE TUTIL0 PUTIL0 ALLOC_SUP - dm mydg2 dm mydg3 ams_wms0_359 ams_wms0_360 - 4120320 4120320 - - - - st mypool dm mydg1 ams_wms0_358 - 4120320 - DATA - - - v myvol0 fsgen pl myvol0-01 myvol
Creating and administering disk groups Working with existing ISP disk groups This disk group is a ISP disk group. Dg needs to be migrated to non-ISP dg to allow any configuration changes. Please upgrade the dg to perform the migration. Note: Non-ISP or VxVM volumes in the ISP disk group are not affected. Operations that still work on ISP disk group without upgrading: ■ Setting, removing, and replacing volume tags. See “About volume administration” on page 336.
270 Creating and administering disk groups Working with existing ISP disk groups
Chapter Creating and administering subdisks and plexes This chapter includes the following topics: ■ About subdisks ■ Creating subdisks ■ Displaying subdisk information ■ Moving subdisks ■ Splitting subdisks ■ Joining subdisks ■ Associating subdisks with plexes ■ Associating log subdisks ■ Dissociating subdisks from plexes ■ Removing subdisks ■ Changing subdisk attributes ■ About plexes ■ Creating plexes ■ Creating a striped plex ■ Displaying plex information ■ Attaching and
272 Creating and administering subdisks and plexes About subdisks ■ Taking plexes offline ■ Detaching plexes ■ Reattaching plexes ■ Moving plexes ■ Copying volumes to plexes ■ Dissociating and removing plexes ■ Changing plex attributes About subdisks Subdisks are the low-level building blocks in a Veritas Volume Manager (VxVM) configuration that are required to create plexes and volumes. See “Creating a volume” on page 299. Note: Most VxVM commands require superuser or equivalent privileges.
Creating and administering subdisks and plexes Displaying subdisk information If you intend to use the new subdisk to build a volume, you must associate the subdisk with a plex. See “Associating subdisks with plexes” on page 275. Subdisks for all plex layouts (concatenated, striped, RAID-5) are created the same way. Displaying subdisk information The vxprint command displays information about VxVM objects.
274 Creating and administering subdisks and plexes Moving subdisks Moving subdisks Moving a subdisk copies the disk space contents of a subdisk onto one or more other subdisks. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisks. The old subdisk is dissociated from the plex, and the new subdisks are associated with the plex. The association is at the same offset within the plex as the source subdisk.
Creating and administering subdisks and plexes Joining subdisks For example, to split subdisk mydg03-02, with size 2000 megabytes into subdisks mydg03-02, mydg03-03, mydg03-04 and mydg03-05, each with size 500 megabytes, all in the disk group, mydg, use the following commands: # vxsd -g mydg -s 1000m split mydg03-02 mydg03-02 mydg03-04 # vxsd -g mydg -s 500m split mydg03-02 mydg03-02 mydg03-03 # vxsd -g mydg -s 500m split mydg03-04 mydg03-04 mydg03-05 Joining subdisks Joining subdisks combines two or more
276 Creating and administering subdisks and plexes Associating subdisks with plexes the plex and then associate each of the subdisks with that plex. In this example, the subdisks are associated to the plex in the order they are listed (after sd=). The disk space defined as mydg02-01 is first, mydg02-00 is second, and mydg02-02 is third. This method of associating subdisks is convenient during initial configuration. Subdisks can also be associated with a plex that already exists.
Creating and administering subdisks and plexes Associating log subdisks For example, the following command would add the subdisk, mydg11-01, to the end of column 1 of the plex, vol02-01: # vxsd -g mydg -l 1 assoc vol02-01 mydg11-01 Alternatively, to add M subdisks at the end of each of the N columns in a striped or RAID-5 volume, you can use the following form of the vxsd command: # vxsd [-g diskgroup] assoc plex subdisk1:0 ...
278 Creating and administering subdisks and plexes Dissociating subdisks from plexes To add a log subdisk to an existing plex, use the following command: # vxsd [-g diskgroup] aslog plex subdisk where subdisk is the name to be used for the log subdisk. The plex must be associated with a mirrored volume before dirty region logging takes effect.
Creating and administering subdisks and plexes Removing subdisks Removing subdisks To remove a subdisk, use the following command: # vxedit [-g diskgroup] rm subdisk For example, to remove a subdisk named mydg02-01 from the disk group, mydg, use the following command: # vxedit -g mydg rm mydg02-01 Changing subdisk attributes Warning: To avoid possible data loss, change subdisk attributes with extreme care. The vxedit command changes attributes of subdisks and other VxVM objects.
280 Creating and administering subdisks and plexes About plexes tutiln Nonpersistent (temporary) utility field(s) used to manage objects and communication between different commands and Symantec products. tutiln field attributes are not maintained on reboot. tutiln fields are organized as follows: ■ tutil0 is set by VxVM. tutil1 is set by other Symantec products such as Veritas Operations Manager (VOM) or the Veritas Enterprise Administrator (VEA) console..
Creating and administering subdisks and plexes Creating plexes See “Creating a volume” on page 299. Note: Most VxVM commands require superuser or equivalent privileges. Creating plexes Use the vxmake command to create VxVM objects, such as plexes. When creating a plex, identify the subdisks that are to be associated with it: To create a plex from existing subdisks, use the following command: # vxmake [-g diskgroup] plex plex sd=subdisk1[,subdisk2,...
282 Creating and administering subdisks and plexes Displaying plex information # vxprint [-g diskgroup] -l plex The -t option prints a single line of information about the plex. To list free plexes, use the following command: # vxprint -pt The following section describes the meaning of the various plex states that may be displayed in the STATE field of vxprint output. Plex states Plex states reflect whether or not plexes are complete and are consistent copies (mirrors) of the volume contents.
Creating and administering subdisks and plexes Displaying plex information Table 7-1 Plex states State Description ACTIVE A plex can be in the ACTIVE state in the following ways: when the volume is started and the plex fully participates in normal volume I/O (the plex contents change as the contents of the volume change) ■ when the volume is stopped as a result of a system crash and the plex is ACTIVE at the moment of the crash ■ In the latter case, a system failure can leave plex contents in an inco
284 Creating and administering subdisks and plexes Displaying plex information Table 7-1 Plex states (continued) State Description OFFLINE The vxmend off task indefinitely detaches a plex from a volume by setting the plex state to OFFLINE. Although the detached plex maintains its association with the volume, changes to the volume do not update the OFFLINE plex. The plex is not updated until the plex is put online and reattached with the vxplex att task.
Creating and administering subdisks and plexes Displaying plex information Table 7-1 Plex states (continued) State Description TEMPRM A TEMPRM plex state is similar to a TEMP state except that at the completion of the operation, the TEMPRM plex is removed. Some subdisk operations require a temporary plex. Associating a subdisk with a plex, for example, requires updating the subdisk with the volume contents before actually associating the subdisk.
286 Creating and administering subdisks and plexes Attaching and associating plexes Table 7-2 Plex condition flags (continued) Condition flag Description RECOVER A disk corresponding to one of the disk media records was replaced, or was reattached too late to prevent the plex from becoming out-of-date with respect to the volume. The plex required complete recovery from another plex in the volume to synchronize its contents.
Creating and administering subdisks and plexes Taking plexes offline For example, to attach a plex named vol01-02 to a volume named vol01 in the disk group, mydg, use the following command: # vxplex -g mydg att vol01 vol01-02 If the volume does not already exist, associate one or more plexes to the volume when you create the volume, using the following command: # vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...
288 Creating and administering subdisks and plexes Detaching plexes Detaching plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup] det plex For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command: # vxplex -g mydg det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume.
Creating and administering subdisks and plexes Reattaching plexes As when returning an OFFLINE plex to ACTIVE, this command starts to recover the contents of the plex and, after the recovery is complete, sets the plex utility state to ACTIVE.
290 Creating and administering subdisks and plexes Moving plexes To disable automatic plex attachment, remove vxattachd from the start up scripts. Disabling vxattachd disables the automatic reattachment feature for both plexes and sites. In a Cluster Volume Manager (CVM) the following considerations apply: ■ If the global detach policy is set, a storage failure from any node causes all plexes on that storage to be detached globally.
Creating and administering subdisks and plexes Copying volumes to plexes The size of the plex has several implications: ■ If the new plex is smaller or more sparse than the original plex, an incomplete copy is made of the data on the original plex. If an incomplete copy is desired, use the -o force option to vxplex. ■ If the new plex is longer or less sparse than the original plex, the data that exists on the original plex is copied onto the new plex.
292 Creating and administering subdisks and plexes Changing plex attributes are critical to the creation of a new plex to contain the same data. Before a plex is removed, you must record its configuration. See “Displaying plex information” on page 281.
Creating and administering subdisks and plexes Changing plex attributes The following example command sets the comment field, and also sets tutil2 to indicate that the subdisk is in use: # vxedit -g mydg set comment="plex comment" tutil2="u" vol01-02 To prevent a particular plex from being associated with a volume, set the putil0 field to a non-null string, as shown in the following command: # vxedit -g mydg set putil0="DO-NOT-USE" vol01-02 See the vxedit(1M) manual page.
294 Creating and administering subdisks and plexes Changing plex attributes
Chapter Creating volumes This chapter includes the following topics: ■ About volume creation ■ Types of volume layouts ■ Creating a volume ■ Using vxassist ■ Discovering the maximum size of a volume ■ Disk group alignment constraints on volumes ■ Creating a volume on any disk ■ Creating a volume on specific disks ■ Creating a mirrored volume ■ Creating a volume with a version 0 DCO volume ■ Creating a volume with a version 20 DCO volume ■ Creating a volume with dirty region logging e
296 Creating volumes About volume creation ■ Initializing and starting a volume ■ Accessing a volume ■ Using rules and persistent attributes to make volume allocation more efficient About volume creation Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Volumes are created to take advantage of the VxVM concept of virtual disks.
Creating volumes Types of volume layouts Mirrored A volume with multiple data plexes that duplicate the information contained in a volume. Although a volume can have a single data plex, at least two are required for true mirroring to provide redundancy of data. For the redundancy to be useful, each of these data plexes should contain disk space from different disks. See “Mirroring (RAID-1)” on page 41. RAID-5 A volume that uses striping to spread data and parity evenly across multiple disks in an array.
298 Creating volumes Types of volume layouts Supported volume logs and maps Veritas Volume Manager supports the use of the following types of logs and maps with volumes: ■ FastResync Maps are used to perform quick and efficient resynchronization of mirrors. See “FastResync” on page 63. These maps are supported either in memory (Non-Persistent FastResync), or on disk as part of a DCO volume (Persistent FastResync).
Creating volumes Creating a volume Creating a volume You can create volumes using an advanced approach or an assisted approach. Each method uses different tools. You may switch between the advanced and the assisted approaches at will. Note: Most VxVM commands require superuser or equivalent privileges. Advanced approach The advanced approach consists of a number of commands that typically require you to specify detailed input.
300 Creating volumes Using vxassist of the desired volume as input. Additionally, the vxassist command can modify existing volumes while automatically modifying any underlying or associated objects. The vxassist command uses default values for many volume attributes, unless you provide specific values. It does not require you to have a thorough understanding of low-level VxVM concepts, vxassist does not conflict with other VxVM commands or preclude their use.
Creating volumes Using vxassist The vxassist command takes this form: # vxassist [options] keyword volume [attributes...] where keyword selects the task to perform. The first argument after a vxassist keyword, volume, is a volume name, which is followed by a set of desired volume attributes.
302 Creating volumes Using vxassist You must create the /etc/default directory and the vxassist default file if these do not already exist on your system. The format of entries in a defaults file is a list of attribute-value pairs separated by new lines. These attribute-value pairs are the same as those specified as options on the vxassist command line. See the vxassist(1M) manual page.
Creating volumes Discovering the maximum size of a volume # by default, limit mirroring log lengths to 32Kbytes max_regionloglen=32k # use 64K as the default stripe unit size for regular volumes stripe_stwid=64k # use 16K as the default stripe unit size for RAID-5 volumes raid5_stwid=16k Using the SmartMove™ feature while attaching a plex The SmartMove™ feature reduces the time and I/O required to attach or reattach a plex to an existing VxVM volume, in the specific case where a VxVM volume has a VxF
304 Creating volumes Disk group alignment constraints on volumes # vxassist -g dgrp maxsize layout=raid5 nlog=2 You can use storage attributes if you want to restrict the disks that vxassist uses when creating volumes. See “Creating a volume on specific disks” on page 305. The maximum size of a VxVM volume that you can create is 256TB. Disk group alignment constraints on volumes Certain constraints apply to the length of volumes and to the numeric values of size attributes that apply to volumes.
Creating volumes Creating a volume on specific disks To create a concatenated, default volume, use the following form of the vxassist command: # vxassist [-b] [-g diskgroup] make volume length Specify the -b option if you want to make the volume immediately available for use. See “Initializing and starting a volume” on page 326.
306 Creating volumes Creating a volume on specific disks # vxassist -b -g mydg make volspec 5g \!ctlr:c2 This example includes only disks on controller c1 except for target t5: # vxassist -b -g mydg make volspec 5g ctlr:c1 \!target:c1t5 If you want a volume to be created using only disks from a specific disk group, use the -g option to vxassist, for example: # vxassist -g bigone -b make volmega 20g bigone10 bigone11 or alternatively, use the diskgroup attribute: # vxassist -b make volmega 20g diskgroup
Creating volumes Creating a volume on specific disks The allocation behavior of the vxassist command changes with the presence of SSD devices in a disk group. Note: If the disk group version is less than 150, the vxassist command does not honor media type of the device for making allocations. The vxassist command allows you to specify Hard Disk Drive (HDD) or SSD devices for allocation using the mediatype attribute.
308 Creating volumes Creating a volume on specific disks In order to allocate a volume on SSD devices from enclr3 enclosure, following command should be used: # vxassist -g mydg make myvol 1G enclr:enclr3 mediatype:ssd The allocation fails, if the command is specified in one of the following two ways: # vxassist -g mydg make myvol 1G enclr:enclr1 mediatype:hdd In the above case, volume myvol cannot be created as there are no HDD devices in enclr1 enclosure.
Creating volumes Creating a volume on specific disks Figure 8-1 Example of using ordered allocation to create a mirrored-stripe volume column 1 column 2 column 3 mydg01-01 mydg02-01 mydg03-01 Mirrored-stripe volume Striped plex Mirror column 1 mydg04-01 column 2 mydg05-01 column 3 mydg06-01 Striped plex For layered volumes, vxassist applies the same rules to allocate storage as for non-layered volumes.
310 Creating volumes Creating a volume on specific disks # vxassist -b -g mydg -o ordered make strmir2vol 10g \ layout=mirror-stripe ncol=2 col_switch=3g,2g \ mydg01 mydg02 mydg03 mydg04 mydg05 mydg06 mydg07 mydg08 This command allocates 3 gigabytes from mydg01 and 2 gigabytes from mydg02 to column 1, and 3 gigabytes from mydg03 and 2 gigabytes from mydg04 to column 2. The mirrors of these columns are then similarly formed from disks mydg05 through mydg08.
Creating volumes Creating a mirrored volume Example of storage allocation used to create a mirrored-stripe volume across controllers Figure 8-4 c1 c2 c3 Controllers Mirrored-stripe volume column 1 column 2 column 3 column 1 column 2 column 3 Striped plex Mirror Striped plex c4 c5 c6 Controllers There are other ways in which you can control how vxassist lays out mirrored volumes across controllers. See “Mirroring across targets, controllers or enclosures” on page 319.
312 Creating volumes Creating a mirrored volume a particular layout, you can specify layout=mirror-concat or layout=concat-mirror to implement the desired layout. To create a new mirrored volume, use the following command: # vxassist [-b] [-g diskgroup] make volume length \ layout=mirror [nmirror=number] [init=active] Specify the -b option if you want to make the volume immediately available for use. See “Initializing and starting a volume” on page 326.
Creating volumes Creating a volume with a version 0 DCO volume Specify the -b option if you want to make the volume immediately available for use. See “Initializing and starting a volume” on page 326. Creating a volume with a version 0 DCO volume If a data change object (DCO) and DCO volume are associated with a volume, this allows Persistent FastResync to be used with the volume. See “How persistent FastResync works with snapshots” on page 67.
314 Creating volumes Creating a volume with a version 0 DCO volume To create a volume with an attached version 0 DCO object and volume 1 Ensure that the disk group has been upgraded to at least version 90. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup See “Upgrading the disk group version” on page 264.
Creating volumes Creating a volume with a version 0 DCO volume 3 To enable DRL or sequential DRL logging on the newly created volume, use the following command: # vxvol [-g diskgroup] set logtype=drl|drlseq volume If you use ordered allocation when creating a mirrored volume on specified storage, you can use the optional logdisk attribute to specify on which disks dedicated log plexes should be created.
316 Creating volumes Creating a volume with a version 20 DCO volume Creating a volume with a version 20 DCO volume To create a volume with an attached version 20 DCO object and volume 1 Ensure that the disk group has been upgraded to the latest version. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the most recent version, use the following command: # vxdg upgrade diskgroup See “Upgrading the disk group version” on page 264.
Creating volumes Creating a striped volume The nlog attribute can be used to specify the number of log plexes to add. By default, one log plex is added. The loglen attribute specifies the size of the log, where each bit represents one region in the volume. For example, the size of the log would need to be 20K for a 10GB volume with a region size of 64 kilobytes.
318 Creating volumes Creating a striped volume See “Initializing and starting a volume” on page 326. For example, to create the 10-gigabyte striped volume volzebra, in the disk group, mydg, use the following command: # vxassist -b -g mydg make volzebra 10g layout=stripe This creates a striped volume with the default stripe unit size (64 kilobytes) and the default number of stripes (2). You can specify the disks on which the volumes are to be created by including the disk names on the command line.
Creating volumes Mirroring across targets, controllers or enclosures Creating a striped-mirror volume A striped-mirror volume is an example of a layered volume which stripes several underlying mirror volumes. A striped-mirror volume requires space to be available on at least as many disks in the disk group as the number of columns multiplied by the number of stripes in the volume.
320 Creating volumes Mirroring across media types (SSD and HDD) # vxassist [-b] [-g diskgroup] make volume length \ layout=layout mirror=ctlr [attributes] Note: Both paths of an active/passive array are not considered to be on different controllers when mirroring across controllers.
Creating volumes Creating a RAID-5 volume Note: mirror=mediatype is not supported. Creating a RAID-5 volume A RAID-5 volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume. Additional disks may be required for any RAID-5 logs that are created. Note: VxVM supports the creation of RAID-5 volumes in private disk groups, but not in shareable disk groups in a cluster environment.
322 Creating volumes Creating tagged volumes If you require RAID-5 logs, you must use the logdisk attribute to specify the disks to be used for the log plexes. RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated with a RAID-5 volume has a complete copy of the logging information for the volume. To support concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex.
Creating volumes Creating tagged volumes You can use the tag attribute with the vxassist make command to set a named tag and optional tag value on a volume, for example: # vxassist -b -g mydg make volmir 5g layout=mirror tag=mirvol=5g To list the tags that are associated with a volume, use this command: # vxassist [-g diskgroup] listtag volume If you do not specify a volume name, the tags of all volumes and vsets in the disk group are listed.
324 Creating volumes Creating a volume using vxmake The following command confirms that the vxfs.placement_class tag has been updated. # vxassist -g dg3 listtag TY NAME DISKGROUP TAG ========================================================= v vol4 dg3 vxfs.placement_class.tier2 Creating a volume using vxmake As an alternative to using vxassist, you can create a volume using the vxmake command to arrange existing subdisks into plexes, and then to form these plexes into a volume.
Creating volumes Creating a volume using vxmake # vxmake -g mydg plex raidplex layout=raid5 stwidth=32 \ sd=mydg00-00:0,mydg01-00:1,mydg02-00:2,mydg03-00:0, \ mydg04-00:1,mydg05-00:2 This command stacks subdisks mydg00-00 and mydg03-00 consecutively in column 0, subdisks mydg01-00 and mydg04-00 consecutively in column 1, and subdisks mydg02-00 and mydg05-00 in column 2. Offsets can also be specified to create sparse RAID-5 plexes, as for striped plexes.
326 Creating volumes Initializing and starting a volume #rty sd sd sd sd sd plex #name mydg03-01 mydg03-02 mydg04-01 mydg04-02 mydg04-03 db-01 #options disk=mydg03 offset=0 len=10000 disk=mydg03 offset=25000 len=10480 disk=mydg04 offset=0 len=8000 disk=mydg04 offset=15000 len=8000 disk=mydg04 offset=30000 len=4480 layout=STRIPE ncolumn=2 stwidth=16k sd=mydg03-01:0/0,mydg03-02:0/10000,mydg04-01:1/0, mydg04-02:1/8000,mydg04-03:1/16000 sd ramd1-01 disk=ramd1 len=640 comment="Hot spot for dbvol" plex db-02
Creating volumes Initializing and starting a volume is specified to prevent VxVM from synchronizing the empty data plexes of a new mirrored volume: # vxassist [-g diskgroup] make volume length layout=mirror \ init=active Warning: There is a very small risk of errors occurring when the init=active attribute is used.
328 Creating volumes Accessing a volume Accessing a volume As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
Creating volumes Using rules and persistent attributes to make volume allocation more efficient For example, you can create allocation rules so that a set of servers can standardize their storage tiering.
330 Creating volumes Using rules and persistent attributes to make volume allocation more efficient volume allocation which has proven too restrictive or discard it to allow a needed allocation to succeed. Rule file format When you create rules, you do not define them in the /etc/default/vxassist file. You create the rules in another file and add the path information to /etc/default/vxassist. By default, a rule file is loaded from /etc/default/vxsf_rules.
Creating volumes Using rules and persistent attributes to make volume allocation more efficient volume rule tier1 { rule=base mirror=enclosure tier=tier1 } volume rule tier2 { rule=base tier=tier2 } The following rule file contains a more complex definition which runs across several lines. volume rule appXdb_storage { description="Create storage for the database of Application X" rule=base siteconsistent=yes mirror=enclosure } By default, a rule file is loaded from /etc/default/vxsf_rules.
332 Creating volumes Using rules and persistent attributes to make volume allocation more efficient dm ibm_ds8x000_0267 ibm_ds8x000_0267 - 2027264 dm ibm_ds8x000_0268 ibm_ds8x000_0268 - 2027264 - - - - v pl sd pl sd dc v pl sd pl sd ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE ACTIVE - - - vol1 fsgen ENABLED 409600 vol1-01 vol1 ENABLED 409600 ibm_ds8x000_0266-01 vol1-01 ENABLED 409600 vol1-02 vol1 ENABLED 409600 ibm_ds8x000_0267-01 vol1-02 ENABLED 409600 vol1_dco vol1 vol1_dcl gen ENABLED 144 vol1_dcl-01 vo
Creating volumes Using rules and persistent attributes to make volume allocation more efficient ibm_ds8x000_0266 ibm_ds8x000_0268 vxmediatype vxmediatype ssd ssd The following command creates a volume, vol1, in the disk group dg3. rule1 is specified on the command line, so those attributes are also applied to vol1. # vxassist -g dg3 make vol1 100m rule=rule1 The following command shows that the volume vol1 is created off the SSD device ibm_ds8x000_0266 as specified in rule1.
334 Creating volumes Using rules and persistent attributes to make volume allocation more efficient v pl sd sd vol1 fsgen ENABLED 2301952 vol1-01 vol1 ENABLED 2301952 ibm_ds8x000_0266-01 vol1-01 ENABLED 2027264 ibm_ds8x000_0268-01 vol1-01 ENABLED 274688 0 2027264 ACTIVE ACTIVE - - -
Chapter Administering volumes This chapter includes the following topics: ■ About volume administration ■ Displaying volume information ■ Monitoring and controlling tasks ■ About SF Thin Reclamation feature ■ Reclamation of storage on thin reclamation arrays ■ Monitoring Thin Reclamation using the vxtask command ■ Using SmartMove with Thin Provisioning ■ Admin operations on an unmounted VxFS thin volume ■ Stopping a volume ■ Starting a volume ■ Resizing a volume ■ Adding a mirror to
336 Administering volumes About volume administration ■ Changing the read policy for mirrored volumes ■ Removing a volume ■ Moving volumes from a VM disk ■ Enabling FastResync on a volume ■ Performing online relayout ■ Converting between layered and non-layered volumes ■ Adding a RAID-5 log About volume administration Veritas Volume Manager (VxVM) lets you perform common maintenance tasks on volumes.
Administering volumes Displaying volume information This example produces the following output: V PL SD SV SC DC SP NAME NAME NAME NAME NAME NAME NAME v pl sd v pl sd RVG/VSET/CO VOLUME PLEX PLEX PLEX PARENTVOL SNAPVOL KSTATE KSTATE DISK VOLNAME CACHE LOGVOL DCO STATE STATE DISKOFFS NVOLLAYR DISKOFFS LENGTH LENGTH LENGTH LENGTH LENGTH READPOL LAYOUT [COL/]OFF [COL/]OFF [COL/]OFF PREFPLEX NCOL/WID DEVICE AM/NM DEVICE UTYPE MODE MODE MODE MODE pubs pubs-01 pubs mydg11-01 pubs-01 ENABLED ENABLED my
338 Administering volumes Displaying volume information See “Volume states” on page 338. Volume states Table 9-1 shows the volume states that may be displayed by VxVM commands such as vxprint. Table 9-1 Volume states Volume state Description ACTIVE The volume has been started (the kernel state is currently ENABLED) or was in use (the kernel state was ENABLED) when the machine was rebooted. If the volume is ENABLED, the state of its plexes at any moment is not certain (because the volume is in use).
Administering volumes Displaying volume information Table 9-1 Volume states (continued) Volume state Description SYNC The volume is either in read-writeback recovery mode (the kernel state is ENABLED) or was in read-writeback mode when the machine was rebooted (the kernel state is DISABLED). With read-writeback recovery, plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes.
340 Administering volumes Monitoring and controlling tasks Monitoring and controlling tasks The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The task monitor lets you monitor task progress and modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance). Note: VxVM supports this feature only for private disk groups, not for shared disk groups in a CVM environment.
Administering volumes Monitoring and controlling tasks For more information about the utilities that support task tagging, see their respective manual pages. Managing tasks with vxtask You can use the vxtask command to administer operations on VxVM tasks. Operations include listing tasks, modifying the task state (pausing, resuming, aborting) and modifying the task's progress rate. VxVM tasks represent long-term operations in progress on the system.
342 Administering volumes Monitoring and controlling tasks monitor Prints information continuously about a task or group of tasks as task information changes. This lets you track task progress. Specifying -l prints a long listing. By default, one-line listings are printed. In addition to printing task information when a task state changes, output is also generated when the task completes. When this occurs, the state of the task is printed as EXITED.
Administering volumes About SF Thin Reclamation feature # vxtask abort recovall This command causes VxVM to try to reverse the progress of the operation so far. For example, aborting an Online Relayout results in VxVM returning the volume to its original layout. See “Controlling the progress of a relayout” on page 380. About SF Thin Reclamation feature You can use the Thin Reclamation feature in the following ways: ■ Space is reclaimed automatically when a volume is deleted.
344 Administering volumes Reclamation of storage on thin reclamation arrays To identify LUNs ◆ To identify LUNs that are thin or thinrclm type the following command: # vxdisk -o thin list DEVICE SIZE(mb) hitachi_usp0_065a 10000 hitachi_usp0_065b 10000 hitachi_usp0_065c 10000 hitachi_usp0_065d 10000 . . . hitachi_usp0_0660 10000 PHYS_ALLOC(mb) 84 110 74 50 - 672 GROUP thindg TYPE thinrclm thinrclm thinrclm thinrclm thinrclm In the above output, the SIZE column shows the size of the disk.
Administering volumes Reclamation of storage on thin reclamation arrays reclaim_on_delete_wait_period The storage space that is used by the deleted volume is reclaimed after reclaim_on_delete_wait_period days. The value of the tunable can be anything between -1 to 367. The default is set to 1, which means the volume is deleted the next day. The storage is reclaimed immediately if the value is -1. The storage space is not reclaimed automatically, if the value is greater than 366.
346 Administering volumes Reclamation of storage on thin reclamation arrays To reclaim space on disk1, use the following command: # vxdisk -o full reclaim disk1 The above command reclaims unused space on disk1 that is outside of the vol1. The reclamation skips the vol1 volume, since the VxFS file system is not mounted, but it scans the rest of the disk for unused space. Example of reclamation for disk groups.
Administering volumes Monitoring Thin Reclamation using the vxtask command Note: Thin Reclamation is a slow process and may take several hours to complete, depending on the file system size. Thin Reclamation is not guaranteed to reclaim 100% of the free space. You can track the progress of the Thin Reclamation process by using the vxtask list command when using the Veritas Volume Manager (VxVM) command vxdisk reclaim. See the vxtask(1M) and vxdisk(1M) manual pages.
348 Administering volumes Using SmartMove with Thin Provisioning To monitor thin reclamation 1 To initiate thin reclamation, use the following command: # vxdisk reclaim diskgroup For example: # vxdisk reclaim dg100 2 To monitor the reclamation status, run the following command in another session: # vxtask list TASKID PTID TYPE/STATE PCT PROGRESS 171 RECLAIM/R 00.
Administering volumes Admin operations on an unmounted VxFS thin volume Admin operations on an unmounted VxFS thin volume A thin volume is a volume composed of one or more thin LUNs. If a thin volume is not mounted on a VxFS file system, any resynchronization, synchronization, or refresh operation on the volume, plex, or subdisk performs a full synchronization and allocates storage on the unused space of the volume.
350 Administering volumes Starting a volume Putting a volume in maintenance mode If all mirrors of a volume become STALE, you can place the volume in maintenance mode. Before you put the volume in maintenance mode, make sure the volume is stopped or it is in the DISABLED state. Then you can view the plexes while the volume is DETACHED and determine which plex to use for reviving the others.
Administering volumes Resizing a volume If a volume cannot be enabled, it remains in its current state. To start all DISABLED or DETACHED volumes in a disk group, enter the following: # vxvol -g diskgroup startall To start a DISABLED volume, enter the following: # vxrecover -g diskgroup -s volume ... To start all DISABLED volumes, enter the following: # vxrecover -s To prevent any recovery operations from being performed on the volumes, additionally specify the -n option to vxrecover.
352 Administering volumes Resizing a volume Resizing volumes with vxresize Use the vxresize command to resize a volume containing a file system. Although you can use other commands to resize volumes containing file systems, vxresize offers the advantage of automatically resizing certain types of file system as well as the volume. Table 9-3 shows which operations are permitted and whether you must unmount the file system before you resize it.
Administering volumes Resizing a volume Note: If you enter an incorrect volume size, do not try to stop the vxresize operation by entering Crtl-C. Let the operation complete and then rerun vxresize with the correct value. For more information about the vxresize command, see the vxresize(1M) manual page. Resizing volumes with vxassist The following modifiers are used with the vxassist command to resize a volume: growto Increases the volume size to a specified length.
354 Administering volumes Resizing a volume If you want the subdisks to be grown using contiguous disk space, and you previously performed a relayout on the volume, also specify the attribute layout=nodiskalign to the growby command .
Administering volumes Adding a mirror to a volume Note: You cannot use the vxvol set len command to increase the size of a volume unless the needed space is available in the volume's plexes. When you reduce the volume's size using the vxvol set len command, the freed space is not released into the disk group’s free space pool. If a volume is active and you reduce its length, you must force the operation using the -o force option to vxvol.
356 Administering volumes Adding a mirror to a volume # /etc/vx/bin/vxmirror -g diskgroup -a To configure VxVM to create mirrored volumes by default, use the following command: # vxmirror -d yes If you make this change, you can still make unmirrored volumes by specifying nmirror=1 as an attribute to the vxassist command.
Administering volumes Removing a mirror 5 At the prompt, press Return to make the mirror: Continue with operation? [y,n,q,?] (default: y) The vxdiskadm program displays the status of the mirroring operation, as follows: VxVM vxmirror INFO V-5-2-22 Mirror volume voltest-bk00 . . . VxVM INFO V-5-2-674 Mirroring of disk mydg01 is complete.
358 Administering volumes Adding logs and maps to volumes # vxplex [-g diskgroup] -o rm dis mirror For example, to dissociate and remove a mirror named vol01-02 from the disk group mydg, use the following command: # vxplex -g mydg -o rm dis vol01-02 This command removes the mirror vol01-02 and all associated subdisks.
Administering volumes Preparing a volume for DRL and instant snapshots See “Adding a RAID-5 log” on page 382. Preparing a volume for DRL and instant snapshots You can add a version 20 data change object (DCO) and DCO volume to an existing volume if the disk group version number is 110 or greater. You can also simultaneously create a new volume, a DCO and DCO volume, and enable DRL as long as the disk group version is 110 or greater. See “Determining the DCO version number” on page 362.
360 Administering volumes Preparing a volume for DRL and instant snapshots See “Specifying storage for version 20 DCO plexes” on page 360. The vxsnap prepare command automatically enables Persistent FastResync on the volume. Persistent FastResync is also set automatically on any snapshots that are generated from a volume on which this feature is enabled. If the volume is a RAID-5 volume, it is converted to a layered volume that can be used with instant snapshots and Persistent FastResync.
Administering volumes Preparing a volume for DRL and instant snapshots In this output, the DCO object is shown as vol1_dco, and the DCO volume as vol1_dcl with 2 plexes, vol1_dcl-01 and vol1_dcl-02. If you need to relocate DCO plexes to different disks, you can use the vxassist move command. For example, the following command moves the plexes of the DCO volume, vol1_dcl, for volume vol1 from disk03 and disk04 to disk07 and disk08. Note: The ! character is a special character in some shells.
362 Administering volumes Preparing a volume for DRL and instant snapshots Determining the DCO version number To use the instant snapshot and DRL-enabled DCO features, you must use a version 20 DCO, rather than version 0 DCO. To find out the version number of a DCO that is associated with a volume 1 Use the vxprint command on the volume to discover the name of its DCO.
Administering volumes Preparing a volume for DRL and instant snapshots Determining if DRL is enabled on a volume To determine if DRL (configured using a version 20 DCO) is enabled on a volume 1 Use the vxprint command on the volume to discover the name of its DCO.
364 Administering volumes Preparing a volume for DRL and instant snapshots Determining if DRL logging is active on a volume To determine if DRL logging (configured using a version 20 DCO) is active on a mirrored volume 1 Use the following vxprint commands to discover the name of the volume’s DCO volume: # DCONAME=`vxprint [-g diskgroup] -F%dco_name volume` # DCOVOL=`vxprint [-g diskgroup] -F%parent_vol $DCONAME` 2 Use the vxprint command on the DCO volume to find out if DRL logging is active: # vxprin
Administering volumes Adding traditional DRL logging to a mirrored volume Note: If the volume is part of a snapshot hierarchy, this command fails . Adding traditional DRL logging to a mirrored volume A traditional DRL log is configured within a DRL plex. A version 20 DCO volume cannot be used in conjunction with a DRL plex. The version 20 DCO volume layout includes space for a DRL log. See “Preparing a volume for DRL and instant snapshots” on page 359.
366 Administering volumes Upgrading existing volumes to use version 20 DCOs Removing a traditional DRL log You can use the vxassist remove log command to remove a traditional DRL log that is configured within a DRL plex. The command will not remove a DRL log that is configured within a version 20 DCO. To remove a traditional DRL log ◆ Type the following command: # vxassist [-g diskgroup] remove log volume logtype=drl [nlog=n] By default, the vxassist command removes one log.
Administering volumes Upgrading existing volumes to use version 20 DCOs To upgrade an existing disk group and the volumes that it contains 1 Upgrade the disk group that contains the volume to the latest version before performing the remainder of the procedure described in this section.
368 Administering volumes Setting tags on volumes 6 To dissociate a version 0 DCO object, DCO volume and snap objects from the volume, use the following command: # vxassist [-g diskgroup] remove log volume logtype=dco 7 To upgrade the volume, use the following command: # vxsnap [-g diskgroup] prepare volume [ndcomirs=number] \ [regionsize=size] [drl=on|sequential|off] \ [storage_attribute ...] The ndcomirs attribute specifies the number of DCO plexes that are created in the DCO volume.
Administering volumes Setting tags on volumes # vxassist [-g diskgroup] settag volume|vset tagname[=tagvalue] # vxassist [-g diskgroup] replacetag volume|vset oldtag newtag # vxassist [-g diskgroup] removetag volume|vset tagname To list the tags that are associated with a volume, use the following command: # vxassist [-g diskgroup] listtag [volume|vset] If you do not specify a volume name, all the volumes and vsets in the disk group are displayed. The acronym vt in the TY field indicates a vset.
370 Administering volumes Changing the read policy for mirrored volumes Changing the read policy for mirrored volumes VxVM offers the choice of the following read policies on the data plexes in a mirrored volume: round Reads each plex in turn in “round-robin” fashion for each nonsequential I/O detected. Sequential access causes only one plex to be accessed. This approach takes advantage of the drive or controller read-ahead caching policies.
Administering volumes Removing a volume For example, to set the policy for vol01 to read preferentially from the plex vol01-02, use the following command: # vxvol -g mydg rdpol prefer vol01 vol01-02 To set the read policy to select, use the following command: # vxvol [-g diskgroup] rdpol select volume See “Volume read policies” on page 487. Removing a volume If a volume is inactive or its contents have been archived, you may no longer need it.
372 Administering volumes Moving volumes from a VM disk Moving volumes from a VM disk Before you disable or remove a disk, you can move the data from that disk to other disks on the system that have sufficient space. To move volumes from a disk 1 From the vxdiskadm main menu, select Move volumes from a disk .
Administering volumes Enabling FastResync on a volume Enabling FastResync on a volume The recommended method for enabling FastResync on a volume with a version 20 DCO is to use the vxsnap prepare command. See “Preparing a volume for DRL and instant snapshots” on page 359. Note: To use this feature, you need a FastResync license. FastResync quickly and efficiently resynchronizes stale mirrors.
374 Administering volumes Enabling FastResync on a volume Checking whether FastResync is enabled on a volume To check whether FastResync is enabled on a volume, use the following command: # vxprint [-g diskgroup] -F%fastresync volume If FastResync is enabled, the command returns on; otherwise, it returns off.
Administering volumes Performing online relayout Performing online relayout You can use the vxassist relayout command to reconfigure the layout of a volume without taking it offline. The general form of this command is as follows: # vxassist [-b] [-g diskgroup] relayout volume [layout=layout] \ [relayout_options] If you specify the -b option, relayout of the volume is a background task. The following destination layout configurations are supported.
376 Administering volumes Performing online relayout Table 9-4 Supported relayout transformations for concatenated volumes (continued) Relayout to From concat mirror-concat No. Add a mirror instead. mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be defined. stripe Yes. The stripe width and number of columns may be defined. stripe-mirror Yes. The stripe width and number of columns may be defined.
Administering volumes Performing online relayout Table 9-6 Supported relayout transformations for mirrored-stripe volumes (continued) Relayout to From mirror-stripe mirror-concat No. Use vxassist convert after relayout to the concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes. The stripe width or number of columns must be changed.
378 Administering volumes Performing online relayout Table 9-8 Supported relayout transformations for mirrored-stripe volumes Relayout to From mirror-stripe concat Yes. concat-mirror Yes. mirror-concat No. Use vxassist convert after relayout to the concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes.
Administering volumes Performing online relayout ncol=number Specifies the number of columns. ncol=+number Specifies the number of columns to add. ncol=-number Specifies the number of columns to remove. stripeunit=size Specifies the stripe width.
380 Administering volumes Performing online relayout Viewing the status of a relayout Online relayout operations take time to perform. You can use the vxrelayout command to obtain information about the status of a relayout operation. For example, the following command: # vxrelayout -g mydg status vol04 might display output similar to the following: STRIPED, columns=5, stwidth=128--> STRIPED, columns=6, stwidth=128 Relayout running, 68.58% completed.
Administering volumes Converting between layered and non-layered volumes complete the operation. You cannot then use the original task tag to control the relayout. The -o bg option restarts the relayout in the background. You can also specify the slow and iosize option modifiers to control the speed of the relayout and the size of each region that is copied.
382 Administering volumes Adding a RAID-5 log You can use volume conversion before or after you perform an online relayout to achieve more transformations than would otherwise be possible. During relayout process, a volume may also be converted into an intermediate layout.
Administering volumes Adding a RAID-5 log Adding a RAID-5 log using vxplex You can also add a RAID-5 log using the vxplex command. For example, to attach the RAID-5 log plex r5log, to the RAID-5 volume r5vol, in the disk group mydg, use the following command: # vxplex -g mydg att r5vol r5log The attach operation can only proceed if the size of the new log is large enough to hold all the data on the stripe.
384 Administering volumes Adding a RAID-5 log Note: When you remove a log and it leaves less than two valid logs on the volume, a warning is printed and the operation is stopped. You can force the operation by specifying the -f option with vxplex or vxassist.
Chapter 10 Creating and administering volume sets This chapter includes the following topics: ■ About volume sets ■ Creating a volume set ■ Adding a volume to a volume set ■ Removing a volume from a volume set ■ Listing details of volume sets ■ Stopping and starting volume sets ■ Raw device node access to component volumes About volume sets Veritas File System (VxFS) uses volume sets to implement its Multi-Volume Support and SmartTier features.
386 Creating and administering volume sets Creating a volume set ■ The first volume (index 0) in a volume set must be larger than the sum of the total volume size divided by 4000, the size of the VxFS intent log, and 1MB. Volumes 258 MB or larger should always suffice. ■ Raw I/O from and to a volume set is not supported. ■ Raw I/O from and to the component volumes of a volume set is supported under certain conditions. See “Raw device node access to component volumes” on page 389.
Creating and administering volume sets Adding a volume to a volume set Adding a volume to a volume set Having created a volume set containing a single volume, you can use the following command to add further volumes to the volume set: # vxvset [-g diskgroup] [-f] addvol volset volume For example, to add the volume vol2, to the volume set myvset, use the following command: # vxvset -g mydg addvol myvset vol2 Warning: The -f (force) option must be specified if the volume being added, or any volume in the v
388 Creating and administering volume sets Stopping and starting volume sets # vxvset [-g diskgroup] list [volset] If the name of a volume set is not specified, the command lists the details of all volume sets in a disk group, as shown in the following example: # vxvset -g mydg list NAME set1 set2 GROUP mydg mydg NVOLS 3 2 CONTEXT - To list the details of each volume in a volume set, specify the name of the volume set as an argument to the command: # vxvset -g mydg list set1 VOLUME vol1 vol2 vol3 IN
Creating and administering volume sets Raw device node access to component volumes 389 # vxvset -g mydg stop set1 # vxvset -g mydg list set1 VOLUME vol1 vol2 vol3 INDEX 0 1 2 LENGTH 12582912 12582912 12582912 KSTATE DISABLED DISABLED DISABLED CONTEXT - LENGTH 12582912 12582912 12582912 KSTATE ENABLED ENABLED ENABLED CONTEXT - # vxvset -g mydg start set1 # vxvset -g mydg list set1 VOLUME vol1 vol2 vol3 INDEX 0 1 2 Raw device node access to component volumes To guard against accidental file system
390 Creating and administering volume sets Raw device node access to component volumes Access to the raw device nodes for the component volumes can be configured to be read-only or read-write. This mode is shared by all the raw device nodes for the component volumes of a volume set. The read-only access mode implies that any writes to the raw device will fail, however writes using the ioctl interface or by VxFS to update metadata are not prevented.
Creating and administering volume sets Raw device node access to component volumes The following example creates a volume set, myvset1, containing the volume, myvol1, in the disk group, mydg, with raw device access enabled in read-write mode: # vxvset -g mydg -o makedev=on -o compvol_access=read-write \ make myvset1 myvol1 Displaying the raw device access settings for a volume set You can use the vxprint -m command to display the current settings for a volume set.
392 Creating and administering volume sets Raw device node access to component volumes # vxvset [-g diskgroup] [-f] set \ compvol_access={read-only|read-write} vset The compvol_access attribute can be specified to the vxvset set command to change the access mode to the component volumes of a volume set. If any of the component volumes are open, the -f (force) option must be specified to set the attribute to read-only.
Chapter 11 Configuring off-host processing This chapter includes the following topics: ■ About off-host processing solutions ■ Implemention of off-host processing solutions About off-host processing solutions Off-host processing lets you implement the following activities: Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline.
394 Configuring off-host processing Implemention of off-host processing solutions Database error recovery Logic errors caused by an administrator or an application program can compromise the integrity of a database. By restoring the database table files from a snapshot copy, the database can be recovered more quickly than by full restoration from tape or other backup media. Using linked break-off snapshots makes off-host processing simpler.
Configuring off-host processing Implemention of off-host processing solutions See “Implementing decision support” on page 399. These applications use the Persistent FastResync feature of VxVM in conjunction with linked break-off snapshots. A volume snapshot represents the data that exists in a volume at a given time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
396 Configuring off-host processing Implemention of off-host processing solutions To back up a volume in a private disk group 1 On the primary host, use the following command to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume: # vxprint -g volumedg -F%instant volume If the volume can be used for instant snapshot operations, this command returns on; otherwise, it returns off.
Configuring off-host processing Implemention of off-host processing solutions 4 On the primary host, link the snapshot volume in the snapshot disk group to the data volume. Enter the following: # vxsnap -g volumedg -b addmir volume mirvol=snapvol \ mirdg=snapvoldg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete.
398 Configuring off-host processing Implemention of off-host processing solutions 10 The snapshot volume is initially disabled following the import. On the OHP host, use the following commands to recover and restart the snapshot volume: # vxrecover -g snapvoldg -m snapvol # vxvol -g snapvoldg start snapvol 11 On the OHP host, back up the snapshot volume. If you need to remount the file system in the volume to back it up, first run fsck on the volume.
Configuring off-host processing Implemention of off-host processing solutions 14 The snapshot volume is initially disabled following the import.
400 Configuring off-host processing Implemention of off-host processing solutions To set up a replica database using the table files that are configured within a volume in a private disk group 1 Use the following command on the primary host to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume: # vxprint -g volumedg -F%instant volume This command returns on if the volume can be used
Configuring off-host processing Implemention of off-host processing solutions 5 On the primary host, link the snapshot volume in the snapshot disk group to the data volume: # vxsnap -g volumedg -b addmir volume mirvol=snapvol \ mirdg=snapvoldg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete: # vxsnap -g volumedg snapwait volume mirvol=snapvol \ mirdg=snapvoldg This step sets up the snapshot volumes, and starts tracking changes to the original
402 Configuring off-host processing Implemention of off-host processing solutions 10 On the OHP host where the replica database is to be set up, use the following command to import the snapshot volume’s disk group: # vxdg import snapvoldg 11 The snapshot volume is initially disabled following the import.
Configuring off-host processing Implemention of off-host processing solutions 4 The snapshot volume is initially disabled following the import.
404 Configuring off-host processing Implemention of off-host processing solutions
Chapter 12 Administering hot-relocation This chapter includes the following topics: ■ About hot-relocation ■ How hot-relocation works ■ Configuring a system for hot-relocation ■ Displaying spare disk information ■ Marking a disk as a hot-relocation spare ■ Removing a disk from use as a hot-relocation spare ■ Excluding a disk from hot-relocation use ■ Making a disk available for hot-relocation use ■ Configuring hot-relocation to use only spare disks ■ Moving relocated subdisks ■ Modify
406 Administering hot-relocation How hot-relocation works If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled. Apparent disk failure may not be due to a fault in the physical disk media or the disk controller, but may instead be caused by a fault in an intermediate or ancillary component such as a cable, host bus adapter, or power supply.
Administering hot-relocation How hot-relocation works Disk failure This is normally detected as a result of an I/O failure from a VxVM object. VxVM attempts to correct the error. If the error cannot be corrected, VxVM tries to access configuration information in the private region of the disk. If it cannot access the private region, it considers the disk failed. Plex failure This is normally detected as a result of an uncorrectable I/O error in the plex (which affects subdisks within the plex).
408 Administering hot-relocation How hot-relocation works ■ The failing subdisks are on non-redundant volumes (that is, volumes of types other than mirrored or RAID-5). ■ There are insufficient spare disks or free disk space in the disk group. ■ The only available space is on a disk that already contains a mirror of the failing plex. ■ The only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks.
Administering hot-relocation How hot-relocation works Figure 12-1 Example of hot-relocation for a subdisk in a RAID-5 volume a Disk group contains five disks. Two RAID-5 volumes are configured across four of the disks. One spare disk is availavle for hot-relocation. mydg01 mydg02 mydg03 mydg01-01 mydg02-01 mydg03-01 mydg02-02 mydg03-02 mydg04 mydg04-01 mydg05 Spare disk b Subdisk mydg02-01 in one RAID-5 volume fails.
410 Administering hot-relocation How hot-relocation works Mail can be sent to users other than root. See “Modifying the behavior of hot-relocation” on page 422. You can determine which disk is causing the failures in the above example message by using the following command: # vxstat -g mydg -s -ff home-02 src-02 The -s option asks for information about individual subdisks, and the -ff option displays the number of failed read and write operations.
Administering hot-relocation How hot-relocation works Failures have been detected by the Veritas Volume Manager: failed disks: mydg02 failed plexes: home-02 src-02 mkting-01 failing disks: mydg02 This message shows that mydg02 was detached by a failure. When a disk is detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01 were also detached (probably because of the failure of the disk). One possible cause of the problem could be a cabling error.
412 Administering hot-relocation Configuring a system for hot-relocation Hot-relocation tries to move all subdisks from a failing drive to the same destination disk, if possible. When hot-relocation takes place, the failed subdisk is removed from the configuration database, and VxVM ensures that the disk space used by the failed subdisk is not recycled as free space.
Administering hot-relocation Marking a disk as a hot-relocation spare # vxdg [-g diskgroup] spare The following is example output: GROUP DISK DEVICE TAG OFFSET LENGTH FLAGS mydg c0t2d0 c0t2d0 0 658007 s mydg02 Here mydg02 is the only disk designated as a spare in the mydg disk group. The LENGTH field indicates how much spare space is currently available on mydg02 for relocation.
414 Administering hot-relocation Removing a disk from use as a hot-relocation spare To use vxdiskadm to designate a disk as a hot-relocation spare 1 Select Mark a disk as a spare for a disk group from the vxdiskadm main menu. 2 At the following prompt, enter a disk media name (such as mydg01): Enter disk name [,list,q,?] mydg01 The following notice is displayed when the disk has been marked as spare: VxVM NOTICE V-5-2-219 Marking of mydg01 in mydg as a spare disk is complete.
Administering hot-relocation Excluding a disk from hot-relocation use To use vxdiskadm to remove a disk from the hot-relocation pool 1 Select Turn off the spare flag on a disk from the vxdiskadm main menu. 2 At the following prompt, enter the disk media name of a spare disk (such as mydg01): Enter disk name [,list,q,?] mydg01 The following confirmation is displayed: VxVM NOTICE V-5-2-143 Disk mydg01 in mydg no longer marked as a spare disk.
416 Administering hot-relocation Making a disk available for hot-relocation use Making a disk available for hot-relocation use Free space is used automatically by hot-relocation in case spare space is not sufficient to relocate failed subdisks. You can limit this free space usage by hot-relocation by specifying which free disks should not be touched by hot-relocation. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool.
Administering hot-relocation Moving relocated subdisks Moving relocated subdisks When hot-relocation occurs, subdisks are relocated to spare disks and/or available free space within the disk group. The new subdisk locations may not provide the same performance or data layout that existed before hot-relocation took place. You can move the relocated subdisks (after hot-relocation is complete) to improve performance.
418 Administering hot-relocation Moving relocated subdisks To move the relocated subdisks using vxdiskadm 1 Select Unrelocate subdisks back to a disk from the vxdiskadm main menu. 2 This option prompts for the original disk media name first.
Administering hot-relocation Moving relocated subdisks Moving relocated subdisks using vxassist You can use the vxassist command to move and unrelocate subdisks. For example, to move the relocated subdisks on mydg05 belonging to the volume home back to mydg02, enter the following command. Note: The ! character is a special character in some shells. The following example shows how to escape it in a bash shell.
420 Administering hot-relocation Moving relocated subdisks If vxunreloc cannot replace the subdisks back to the same original offsets, a force option is available that allows you to move the subdisks to a specified disk without using the original offsets. See the vxunreloc(1M) manual page. The examples in the following sections demonstrate the use of vxunreloc. Moving hot-relocated subdisks back to their original disk Assume that mydg01 failed and all the subdisks were relocated.
Administering hot-relocation Moving relocated subdisks Assume that mydg01 failed and the subdisks were relocated and that you want to move the hot-relocated subdisks to mydg05 where some subdisks already reside.
422 Administering hot-relocation Modifying the behavior of hot-relocation The comment fields of all the subdisks on the destination disk remain marked as UNRELOC until phase 3 completes. If its execution is interrupted, vxunreloc can subsequently re-use subdisks that it created on the destination disk during a previous execution, but it does not use any data that was moved to the destination disk. If a subdisk data move fails, vxunreloc displays an error message and exits.
Administering hot-relocation Modifying the behavior of hot-relocation 1 To prevent vxrelocd starting, comment out the entry that invokes it in the startup file: # nohup vxrelocd root & 2 By default, vxrelocd sends electronic mail to root when failures are detected and relocation actions are performed.
424 Administering hot-relocation Modifying the behavior of hot-relocation
Chapter 13 Administering cluster functionality (CVM) This chapter includes the following topics: ■ Overview of clustering ■ Multiple host failover configurations ■ About the cluster functionality of VxVM ■ CVM initialization and configuration ■ Dirty region logging in cluster environments ■ Administering VxVM in cluster environments Overview of clustering Tightly-coupled cluster systems are common in the realm of enterprise-scale mission-critical data processing.
426 Administering cluster functionality (CVM) Overview of clustering Overview of cluster volume management Over the past several years, parallel applications using shared data access have become increasingly popular. Examples of commercially available applications include Oracle Real Application Clusters™ (RAC), Sybase Adaptive Server®, and Informatica Enterprise Cluster Edition.
Administering cluster functionality (CVM) Overview of clustering Figure 13-1 Example of a 4-node CVM cluster Redundant private network Node 1 (slave) Node 2 (slave) Node 3 (slave) Node 0 (master) Redundant SCSIor Fibre Channel connectivity Cluster-shareable disks Cluster-shareable disk groups To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
428 Administering cluster functionality (CVM) Overview of clustering You can run commands that configure or reconfigure VxVM objects on any node in the cluster. These tasks include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations. The first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master.
Administering cluster functionality (CVM) Overview of clustering leaves the cluster gracefully, it deports all its imported shared disk groups, but they remain imported on the surviving nodes. Reconfiguring a shared disk group is performed with the cooperation of all nodes. Configuration changes to the disk group are initiated by the master, and happen simultaneously on all nodes and the changes are identical.
430 Administering cluster functionality (CVM) Overview of clustering Table 13-1 Activation modes for shared disk groups (continued) Activation mode Description readonly (ro) The node has read access to the disk group and denies write access for all other nodes in the cluster. The node has no write access to the disk group. Attempts to activate a disk group for either of the write modes on other nodes fail. sharedread (sr) The node has read access to the disk group.
Administering cluster functionality (CVM) Overview of clustering The activation-mode is one of exclusivewrite, readonly, sharedread, sharedwrite, or off. When a shared disk group is created or imported, it is activated in the specified mode. When a node joins the cluster, all shared disk groups accessible from the node are activated in the specified mode. The activation mode of a disk group controls volume I/O from different nodes in the cluster.
432 Administering cluster functionality (CVM) Overview of clustering ■ Any failures that require a configuration change must be sent to the master node so that they can be resolved correctly. ■ As the master node resolves failures, all the slave nodes are correctly updated. This ensures that all nodes have the same view of the configuration. The practical implication of this design is that I/O failure on any node results in the configuration of all nodes being changed.
Administering cluster functionality (CVM) Overview of clustering Global detach policy The global detach policy is the traditional and default policy for all nodes on the configuration. If there is a read or write I/O failure on a slave node, the master node performs the usual I/O recovery operations to repair the failure, and, if required, the plex is detached cluster-wide. All nodes remain in the cluster and continue to perform I/O, but the redundancy of the mirrors is reduced.
434 Administering cluster functionality (CVM) Overview of clustering Table 13-3 Type of I/O failure Cluster behavior under I/O failure to a mirrored volume for different disk detach policies Local (diskdetpolicy=local) Global (diskdetpolicy=global) Failure of path to Reads fail only if no plexes remain one disk in a available to the affected node. volume for a single Writes to the volume fail. node The plex is detached, and I/O from/to the volume continues.
Administering cluster functionality (CVM) Overview of clustering Disk group failure policy The local detach policy by itself is insufficient to determine the desired behavior if the master node loses access to all disks that contain copies of the configuration database and logs. In this case, the disk group is disabled. As a result, any action that would result in an update to log/config copy will also fail from the other nodes in the cluster. In release 4.
436 Administering cluster functionality (CVM) Overview of clustering # vxdg -g diskgroup set diskdetpolicy=local dgfailpolicy=leave Effect of disk connectivity on cluster reconfiguration The detach policy, previous I/O errors, or access to disks are not considered when a new master node is chosen. When the master node leaves a cluster, the node that takes over as master of the cluster may already have seen I/O failures for one or more disks.
Administering cluster functionality (CVM) Multiple host failover configurations Multiple host failover configurations Outside the context of CVM, VxVM disk groups can be imported (made available) on only one host at any given time. When a host imports a (private) disk group, the volumes and configuration of that disk group become accessible to the host.
438 Administering cluster functionality (CVM) Multiple host failover configurations group is initially imported by Node A, but the administrator wants to access the disk group from Node B if Node A crashes. Such a failover scenario can be used to provide manual high availability to data, where the failure of one node does not prevent access to data.
Administering cluster functionality (CVM) About the cluster functionality of VxVM Association not resolved Association count is incorrect Duplicate record in configuration Configuration records are inconsistent These errors are typically reported in association with specific disk group configuration copies, but usually apply to all copies. The following is usually displayed along with the error: Disk group has no valid configuration copies See the Veritas Volume Manager Troubleshooting Guide.
440 Administering cluster functionality (CVM) CVM initialization and configuration The nodes can simultaneously access and manage a set of disks or LUNs under VxVM control. The same logical view of disk configuration and any changes to this view are available on all the nodes. When the CVM functionality is enabled, all cluster nodes can share VxVM objects such as shared disk groups. Private disk groups are supported in the same way as in a non-clustered environment.
Administering cluster functionality (CVM) CVM initialization and configuration For VxVM in a cluster environment, initialization consists of loading the cluster configuration information and joining the nodes in the cluster. The first node to join becomes the master node, and later nodes (slaves) join to the master. If two nodes join simultaneously, VxVM chooses the master. After a given node joins, that node has access to the shared disk groups and volumes.
442 Administering cluster functionality (CVM) CVM initialization and configuration master when the cluster is not handling VxVM configuration changes or cluster reconfiguration operations. The reinit keyword allows nodes to be added to or removed from a cluster without stopping the cluster. Before running this command, the cluster configuration file must have been updated with information about the supported nodes in the cluster.
Administering cluster functionality (CVM) CVM initialization and configuration Table 13-5 Node abort messages (continued) Reason Description master aborted during join Master node aborted while another node was joining the cluster. protocol version out of range Cluster protocol version mismatch or unsupported version. recovery in progress Volumes that were opened by the node are still recovering. transition to role failed Changing the role of a node to be the master failed.
444 Administering cluster functionality (CVM) CVM initialization and configuration or to abandon the transaction. Before the transaction can be committed, all of the kernels ensure that no I/O is underway, and block any I/O issued by applications until the reconfiguration is complete. The master node is responsible both for initiating the reconfiguration, and for coordinating the commitment of the transaction. The resulting configuration changes appear to occur simultaneously on all nodes.
Administering cluster functionality (CVM) CVM initialization and configuration ■ role of the node ■ network address of the node On the master node, the vxconfigd daemon sets up the shared configuration by importing shared disk groups, and informs the kernel when it is ready for the slave nodes to join the cluster. On slave nodes, the vxconfigd daemon is notified when the slave node can join the cluster.
446 Administering cluster functionality (CVM) CVM initialization and configuration configuration can fail. For example, shared disk groups listed using the vxdg list command are marked as disabled; when the rejoin completes successfully, they are marked as enabled. ■ If the vxconfigd daemon is stopped on both the master and slave nodes, the slave nodes do not display accurate configuration information until vxconfigd is restarted on the master and slave nodes, and the daemons have reconnected.
Administering cluster functionality (CVM) CVM initialization and configuration The CVM functionality of VxVM maintains global state information for each volume. This enables VxVM to determine which volumes need to be recovered when a node crashes. When a node leaves the cluster due to a crash or by some other means that is not clean, VxVM determines which volumes may have writes that have not completed and the master node resynchronizes these volumes.
448 Administering cluster functionality (CVM) Dirty region logging in cluster environments Dirty region logging in cluster environments Dirty region logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure. DRL is supported in cluster-shareable disk groups. This section provides a brief overview of how DRL behaves in a cluster environment.
Administering cluster functionality (CVM) Administering VxVM in cluster environments on all affected volumes. The recovery utilities compare a crashed node's active maps with the recovery map and make any necessary updates. Only then can the node rejoin the cluster and resume I/O to the volume (which overwrites the active map). During this time, other nodes can continue to perform I/O. VxVM tracks which nodes have crashed. If multiple node recoveries are underway in a cluster at a given time.
450 Administering cluster functionality (CVM) Administering VxVM in cluster environments Table 13-6 Cluster status messages (continued) Status message Description mode: enabled: cluster active - role not set master: mozart state: joining reconfig: master update The node has not yet been assigned a role, and is in the process of joining the cluster. mode: enabled: cluster active - SLAVE master: mozart state: joining The node is configured as a slave, and is in the process of joining the cluster.
Administering cluster functionality (CVM) Administering VxVM in cluster environments To change the CVM master manually 1 To view the current master, use one of the following commands: # vxclustadm nidmap Name system01 system02 CVM Nid 0 1 CM Nid 0 1 State Joined: Slave Joined: Master # vxdctl -c mode mode: enabled: cluster active - MASTER master: system02 In this example, the CVM master is system02.
452 Administering cluster functionality (CVM) Administering VxVM in cluster environments 3 To monitor the master switching, use the following command: # vxclustadm -v nodestate state: cluster member nodeId=0 masterId=0 neighborId=1 members[0]=0xf joiners[0]=0x0 leavers[0]=0x0 members[1]=0x0 joiners[1]=0x0 leavers[1]=0x0 reconfig_seqnum=0x9f9767 vxfen=off state: master switching in progress reconfig: vxconfigd in join In this example, the state indicates that master is being changed.
Administering cluster functionality (CVM) Administering VxVM in cluster environments a transaction is in progress. Try again In some cases, if the master switching operation is interrupted by another reconfiguration operation, the master change fails. In this case, the existing master remains the master of the cluster. After the reconfiguration is complete, reissue the vxclustadm setmaster command to change the master.
454 Administering cluster functionality (CVM) Administering VxVM in cluster environments Listing shared disk groups vxdg can be used to list information about shared disk groups. To display information for all disk groups, use the following command: # vxdg list Example output from this command is displayed here: NAME group2 group1 STATE enabled,shared enabled,shared ID 774575420.1170.teal 774222028.1090.teal Shared disk groups are designated with the flag shared.
Administering cluster functionality (CVM) Administering VxVM in cluster environments log log disk c1t0d0 copy 1 len=220 disk c1t0d0 copy 1 len=220 Note that the flags field is set to shared. The output for the same command when run on a slave is slightly different. The local-activation and cluster-actv-modes fields display the activation mode for this node and for each node in the cluster respectively.
456 Administering cluster functionality (CVM) Administering VxVM in cluster environments # vxdg -s import diskgroup where diskgroup is the disk group name or ID. On subsequent cluster restarts, the disk group is automatically imported as shared. Note that it can be necessary to deport the disk group (using the vxdg deport diskgroup command) before invoking the vxdg utility. Forcibly importing a disk group You can use the -f option to the vxdg command to import a disk group forcibly.
Administering cluster functionality (CVM) Administering VxVM in cluster environments Converting a disk group from shared to private You can convert shared disk groups on a master node or a slave node. If you run the command to convert the shared disk group on a slave node, the command is shipped to the master and executed on the master.
458 Administering cluster functionality (CVM) Administering VxVM in cluster environments You can use the vxdg join command to merge the contents of two imported disk groups. In a cluster, you can join two private disk groups on any cluster node where those disk groups are imported. If the source disk group and the target disk group are both shared, you can perform the join from a master node or a slave node.
Administering cluster functionality (CVM) Administering VxVM in cluster environments # vxdg -g diskgroup set dgfailpolicy=dgdisable|leave|requestleave The default failure policy is dgdisable. Creating volumes with exclusive open access by a node When using the vxassist command to create a volume, you can use the exclusive=on attribute to specify that the volume may only be opened by one node in the cluster at a time.
460 Administering cluster functionality (CVM) Administering VxVM in cluster environments This command produces output similar to the following: Volboot file version: 3/1 seqno: 0.
Administering cluster functionality (CVM) Administering VxVM in cluster environments Recovering volumes in shared disk groups The vxrecover utility is used to recover plexes and volumes after disk replacement. When a node leaves a cluster, it can leave some mirrors in an inconsistent state. The vxrecover utility can be used to recover such volumes. The -c option to vxrecover causes it to recover all volumes in shared disk groups.
462 Administering cluster functionality (CVM) Administering VxVM in cluster environments The statistics for all nodes are summed. For example, if node 1 performed 100 I/O operations and node 2 performed 200 I/O operations, vxstat -b displays a total of 300 I/O operations. Administering CVM from the slave node CVM requires that the master node of the cluster executes configuration commands, which change the object configuration of a CVM shared disk group.
Administering cluster functionality (CVM) Administering VxVM in cluster environments Note the following limitations for issuing CVM commands from the slave node: ■ The CVM protocol version 100 or later is required on all nodes in the cluster. See “Displaying the cluster protocol version” on page 459. ■ CVM uses the values in the defaults file on the master node when CVM executes the command.
464 Administering cluster functionality (CVM) Administering VxVM in cluster environments
Chapter 14 Administering sites and remote mirrors This chapter includes the following topics: ■ About sites and remote mirrors ■ Making an existing disk group site consistent ■ Configuring a new disk group as a Remote Mirror configuration ■ Fire drill — testing the configuration ■ Changing the site name ■ Administering the Remote Mirror configuration ■ Examples of storage allocation by specifying sites ■ Displaying site information ■ Failure and recovery scenarios About sites and remote
466 Administering sites and remote mirrors About sites and remote mirrors Figure 14-1 Example of a two-site remote mirror configuration Site A Cluster nodes Private network Metropolitan or wide area network link (Fibre Channel or DWDM) Fibre Channel switch Disk enclosures Site B Cluster nodes Fibre Channel switch Disk enclosures If a disk group is configured across the storage at the sites, and inter-site communication is disrupted, there is a possibility of a serial split brain condition aris
Administering sites and remote mirrors About sites and remote mirrors Figure 14-2 shows an example of a site-consistent volume with two plexes configured at each of two sites.
468 Administering sites and remote mirrors About sites and remote mirrors Figure 14-3 Example of a two-site configuration with remote storage only Site A Site B Cluster or standalone system Fibre Channel switch Disk enclosures Metropolitan or wide area network link (Fibre Channel or DWDM) Fibre Channel switch Disk enclosures About site-based allocation Site-based allocation policies are enforced by default in a site-configured disk group.
Administering sites and remote mirrors About sites and remote mirrors add the site to the disk group fails. If the -f option is specified, the command does not fail, but instead it sets the allsites attribute for the volume to off. Note: By default, volumes created will be mirrored when sites are configured in a disk group. Initial synchronization occurs between mirrors. Depending on the size of the volume, synchronization may take a long time.
470 Administering sites and remote mirrors About sites and remote mirrors ■ At least two sites must be configured in the disk group before site consistency is turned on. See “Making an existing disk group site consistent” on page 471. ■ All the disks in a disk group must be registered to one of the sites before you can set the siteconsistent attribute on the disk group. About site tags In a Remote Mirror configuration, each storage device in the disk group must be tagged with site information.
Administering sites and remote mirrors Making an existing disk group site consistent This command has no effect if a site name has not been set for the host. See “Changing the read policy for mirrored volumes” on page 370. Making an existing disk group site consistent The site consistency feature requires that a license enabling the site awareness feature has been purchased for all hosts at all sites that participate in the configuration.
472 Administering sites and remote mirrors Configuring a new disk group as a Remote Mirror configuration 7 Turn on site consistency for the disk group: # vxdg -g diskgroup set siteconsistent=on 8 Turn on the allsites flag for the volume which requires data replication to each site: # vxvol [-g diskgroup] set allsites=on volume 9 Turn on site consistency for each existing volume in the disk group for which site consistency is needed. You also need to attach DCOv20 if it is not attached already.
Administering sites and remote mirrors Configuring a new disk group as a Remote Mirror configuration 3 Register a site record to the disk group, for each site. # vxdg -g diskgroup [-f] addsite sitename 4 Do one of the following: ■ To tag all disks regardless of the disk group, do the following: Assign a site name to the disks or enclosures. You can set site tags at the disk level, or at the enclosure level.
474 Administering sites and remote mirrors Fire drill — testing the configuration Fire drill — testing the configuration Warning: To avoid potential loss of service or data, it is recommended that you do not use these procedures in a production environment. After validating the consistency of the volumes and disk groups at your sites, you should validate the procedures that you will use in the event of the various possible types of failure.
Administering sites and remote mirrors Changing the site name Use the following commands to reattach a site and recover the disk group: # vxdg -g diskgroup [-o overridessb] reattachsite sitename # vxrecover -g diskgroup It may be necessary to specify the -o overridessb option if a serial split-brain condition is indicated. Changing the site name You can change the site name, or tag, that is used to identify each site in a Remote Mirror configuration.
476 Administering sites and remote mirrors Administering the Remote Mirror configuration Administering the Remote Mirror configuration After the Remote Mirror site is configured, refer to the following sections for additional tasks to maintain the configuration. Configuring site tagging for disks or enclosures To set up a Remote Mirror configuration, specify to which site each storage device in the disk group belongs. Assign a site tag to one or more disks or enclosures.
Administering sites and remote mirrors Administering the Remote Mirror configuration To configure automatic site tagging for a disk group 1 Set the autotagging policy to on for the disk group. Automatic tagging is the default setting, so this step is only required if the autotagging policy was previously disabled.
478 Administering sites and remote mirrors Examples of storage allocation by specifying sites # vxvol [-g diskgroup] set siteconsistent=off volume The siteconsistent attribute and the allsites attribute must be set to off for RAID-5 volumes in a site-consistent disk group. Examples of storage allocation by specifying sites Table 14-1 shows examples of how to use sites with the vxassist command to allocate storage.
Administering sites and remote mirrors Examples of storage allocation by specifying sites Table 14-1 Examples of storage allocation by specifying sites (continued) Command Description # vxassist -g ccdg make vol 2g \ nmirror=2 site:site2 \ siteconsistent=off \ allsites=off Create a mirrored volume that is not site consistent. Both mirrors are allocated from any available storage in the disk group that is tagged as belonging to site2.
480 Administering sites and remote mirrors Displaying site information Displaying site information To display the site name for a host ◆ To determine to which site a host belongs, use the following command on the host: # vxdctl list | grep siteid siteid: building1 To display the disks or enclosures registered to a site ◆ To check which disks or enclosures are registered to a site, use the following command: # vxdisk [-g diskgroup] listtag To display the setting for automatic site tagging for a disk gr
Administering sites and remote mirrors Failure and recovery scenarios To list the site tags for a disk group ◆ To list the site tags for a disk group, use the following command: # vxdg -o tag=site listtag diskgroup Failure and recovery scenarios Table 14-2 lists the possible failure scenarios and recovery procedures for the Remote Mirror feature. Table 14-2 Failure scenarios and recovery procedures Failure scenario Recovery procedure Disruption of network link between sites.
482 Administering sites and remote mirrors Failure and recovery scenarios At the chosen site, use the following commands to reattach a site and recover the disk group: # vxdg -g diskgroup -o overridessb reattachsite sitename # vxrecover -g diskgroup In the case that the host systems are configured at a single site with only storage at the remote sites, the usual resynchronization mechanism of VxVM is used to recover the remote plexes when the storage comes back on line.
Administering sites and remote mirrors Failure and recovery scenarios Recovering from site failure If all the hosts and storage fail at a site, use the following commands to reattach the site after it comes back online, and to recover the disk group: # vxdg -g diskgroup [-o overridessb] reattachsite sitename # vxrecover -g diskgroup The -o overridessb option is only required if a serial split-brain condition is indicated.
484 Administering sites and remote mirrors Failure and recovery scenarios To send mail to other users, add the user name to the line that starts vxattachd in the /sbin/init.d/vxvm-recover startup script, and reboot the system. If you do not want a site to be recovered automatically, kill the vxattachd daemon, and prevent it from restarting. If you stop vxattachd, the automatic plex reattachment also stops.
Chapter 15 Performance monitoring and tuning This chapter includes the following topics: ■ Performance guidelines ■ RAID-5 ■ Performance monitoring ■ Tuning VxVM Performance guidelines Veritas Volume Manager (VxVM) can improve system performance by optimizing the layout of data storage on the available hardware. VxVM lets you optimize data storage performance using the following strategies: ■ Balance the I/O load among the available disk drives.
486 Performance monitoring and tuning Performance guidelines VxVM can split volumes across multiple drives. This approach gives you a finer level of granularity when you locate data. After you measure access patterns, you can adjust your decisions on where to place file systems. You can reconfigure volumes online without adversely impacting their availability. Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel.
Performance monitoring and tuning RAID-5 Combining mirroring and striping When you have multiple I/O streams, you can use mirroring and striping together to significantly improve performance. Because parallel I/O streams can operate concurrently on separate devices, striping provides better throughput. When I/O fits exactly across all stripe units in one stripe, serial access is optimized.
488 Performance monitoring and tuning Performance monitoring Figure 15-2 shows an example in which the read policy of the mirrored-stripe volume labeled HotVol is set to prefer for the striped plex PL1.
Performance monitoring and tuning Performance monitoring Best performance is usually achieved by striping and mirroring all volumes across a reasonable number of disks and mirroring between controllers, when possible. This procedure tends to even out the load between all disks, but it can make VxVM more difficult to administer. For large numbers of disks (hundreds or thousands), set up disk groups containing 10 disks, where each group is used to create a striped-mirror volume.
490 Performance monitoring and tuning Performance monitoring ■ average operation time (which reflects the total time through the VxVM interface and is not suitable for comparison against other statistics programs) These statistics are recorded for logical I/O including reads, writes, atomic copies, verified reads, verified writes, plex reads, and plex writes for each volume.
Performance monitoring and tuning Performance monitoring due to volumes being created, and also removes statistics from boot time (which are not usually of interest). After resetting the counters, allow the system to run during typical system activity. Run the application or workload of interest on the system to measure its effect. When monitoring a system that is used for multiple purposes, try not to exercise any one application more than usual.
492 Performance monitoring and tuning Performance monitoring pl sd archive-01 archive mydg03-03 archive-01 ENABLED mydg03 ACTIVE 0 20480 40960 CONCAT 0 c1t2d0 RW ENA The subdisks line (beginning sd) indicates that the volume archive is on disk mydg03. To move the volume off mydg03, use the following command. Note: The ! character is a special character in some shells. This example shows how to escape it in a bash shell.
Performance monitoring and tuning Performance monitoring If some disks appear to be excessively busy (or have particularly long read or write times), you may want to reconfigure some volumes. If there are two relatively busy volumes on a disk, move them closer together to reduce seek times on the disk. If there are too many relatively busy volumes on one disk, move them to a disk that is less busy.
494 Performance monitoring and tuning Tuning VxVM Tuning VxVM This section describes how to adjust the tunable parameters that control the system resources that are used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance. General tuning guidelines VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Performance monitoring and tuning Tuning VxVM Number of configuration copies for a disk group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance. As a general rule, reducing the number of configuration copies in a disk group speeds up initial access of the disk group, initial startup of the vxconfigd daemon, and transactions that are performed within the disk group.
496 Performance monitoring and tuning Tuning VxVM Tunable parameters for VxVM Table 15-1 lists the kernel tunable parameters for VxVM. Table 15-1 Kernel tunable parameters for VxVM Parameter Description vol_checkpt_default The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. A system failure during such operations does not require a full recovery, but can continue from the last reached checkpoint.
Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed.
498 Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_max_vol The maximum number of volumes that can be created on the system. The minimum and maximum permitted values are 1 and the maximum number of minor numbers representable on the system. The default value is 8388608. vol_maxio The maximum size of logical I/O operations that can be performed without breaking up the request.
Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_maxspecialio The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request a large I/O request be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously.
500 Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description voldrl_max_drtregs The maximum number of dirty regions that can exist on the system for non-sequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worse-case recovery time for the system following a failure. The default value is 2048.
Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description voliomem_maxpool_sz The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to this size, one for RAID-5 and one for mirrored volumes.
502 Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description voliot_iobuf_limit The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool.
Performance monitoring and tuning Tuning VxVM Table 15-1 Kernel tunable parameters for VxVM (continued) Parameter Description volpagemod_max_memsz The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata. The default value is 65536k (64MB). The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.
504 Performance monitoring and tuning Tuning VxVM DMP tunable parameters Table 15-2 shows the DMP parameters that can be tuned by using the vxdmpadm settune command. Table 15-2 DMP parameters that are tunable Parameter Description dmp_cache_open If this parameter is set to on, the first open of a device that is performed by an array support library (ASL) is cached. This caching enhances the performance of device discovery by minimizing the overhead that is caused by subsequent opens by ASLs.
Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_evm_handling Determines whether DMP listens to disk-related events from the Event Monitoring framework (EVM) on HP-UX. The default is on. If this parameter is set to on, DMP listens and reacts to events from EVM. If the parameter is set to off, DMP stops listening and reacting to events.
506 Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_log_level The level of detail that is displayed for DMP console messages. The following level values are defined: 1 — Displays all DMP log messages that existed in releases before 5.0. 2 — Displays level 1 messages plus messages that relate to path or disk addition or removal, SCSI errors, IO errors and DMP node migration.
Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_monitor_fabric Determines whether the Event Source daemon (vxesd) uses the Storage Networking Industry Association (SNIA) HBA API. This API allows DDL to improve the performance of failover by collecting information about the SAN topology and by monitoring fabric events. If this parameter is set to on, DDL uses the SNIA HBA API.
508 Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_path_age The time for which an intermittently failing path needs to be monitored as healthy before DMP again tries to schedule I/O requests on it. The default value is 300 seconds. A value of 0 prevents DMP from detecting intermittently failing paths.
Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_probe_threshold If the dmp_low_impact_probe is turned on, dmp_probe_threshold determines the number of paths to probe before deciding on changing the state of other paths in the same subpath failover group. The default value is 5. dmp_queue_depth The maximum number of queued I/O requests on a path during I/O throttling. The default value is 40.
510 Performance monitoring and tuning Tuning VxVM Table 15-2 DMP parameters that are tunable (continued) Parameter Description dmp_restore_policy The DMP restore policy, which can be set to one of the following values: ■ check_all ■ check_alternate ■ check_disabled ■ check_periodic The default value is check_disabled. The value of this tunable can also be set using the vxdmpadm start restore command. See “Configuring DMP path restoration policies” on page 194.
Performance monitoring and tuning Tuning VxVM Disabling I/O statistics collection By default, Veritas Volume Manager collects I/O statistics on all objects in the configuration. This helps you tune different parameters that depend upon the environment and workload. See “Tunable parameters for VxVM” on page 496. See “DMP tunable parameters” on page 504. After the tuning is done, you may choose to disable I/O statistics collection because it improves I/O throughput.
512 Performance monitoring and tuning Tuning VxVM To enable I/O statistics collection ◆ Enter the following command: # vxtune vol_stats_enable 1 To enable DMP I/O statistics collection ◆ Enter the following command: # vxdmpadm iostat start See the vxdmpadm(1M) manual page.
Appendix A Using Veritas Volume Manager commands This appendix includes the following topics: ■ About Veritas Volume Manager commands ■ CVM commands supported for executing on the slave node ■ Online manual pages About Veritas Volume Manager commands Most Veritas Volume Manager (VxVM) commands (excepting daemons, library commands and supporting scripts) are linked to the /usr/sbin directory from the /opt/VRTS/bin directory.
514 Using Veritas Volume Manager commands About Veritas Volume Manager commands VxVM library commands and supporting scripts are located under the /usr/lib/vxvm directory hierarchy. You can include these directories in your path if you need to use them on a regular basis. For detailed information about an individual command, refer to the appropriate manual page in the 1M section. See “Online manual pages” on page 543.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-1 Obtaining information about objects in VxVM (continued) Command Description vxdg list [diskgroup] Lists information about disk groups. See “Displaying disk group information” on page 216. Example: # vxdg list mydg vxdg -s list Lists information about shared disk groups. See “Listing shared disk groups” on page 454. Example: # vxdg -s list vxdisk -o alldgs list Lists all diskgroups on the disks.
516 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-1 Obtaining information about objects in VxVM (continued) Command Description vxprint -st [-g diskgroup] [subdisk ...] Displays information about subdisks. See “Displaying subdisk information” on page 273. Example: # vxprint -st -g mydg vxprint -pt [-g diskgroup] [plex ...] Displays information about plexes. See “Displaying plex information” on page 281.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-2 Administering disks (continued) Command Description vxedit [-g diskgroup] set \ reserve=on|off diskname Sets aside/does not set aside a disk from use in a disk group. See “Reserving disks” on page 138.
518 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-2 Administering disks (continued) Command Description vxdisk offline devicename Takes a disk offline. See “Taking a disk offline” on page 137. Example: # vxdisk offline c0t1d0 vxdisk -g diskgroup [-o full] reclaim Performs thin reclamation on a disk, enclosure, or disk group. disk|enclosure|diskgroup See “Thin Reclamation of a disk, a disk group, or an enclosure” on page 345.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-3 Creating and administering disk groups (continued) Command Description vxdg -g diskgroup listssbinfo Reports conflicting configuration information. See “Handling conflicting configuration copies” on page 243. Example: # vxdg -g mydg listssbinfo vxdg [-n newname] deport diskgroup Deports a disk group and optionally renames it. See “Deporting a disk group” on page 222.
520 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-3 Creating and administering disk groups (continued) Command Description vxdg [-o expand] listmove sourcedg \ Lists the objects potentially affected by targetdg object ... moving a disk group. See “Listing objects potentially affected by a move” on page 255. Example: # vxdg -o expand listmove \ mydg newdg myvol1 vxdg [-o expand] move sourcedg \ targetdg object ... Moves objects between disk groups.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-3 Creating and administering disk groups (continued) Command Description vxdg -g diskgroup set \ activation=ew|ro|sr|sw|off Sets the activation mode of a shared disk group in a cluster. See “Changing the activation mode on a shared disk group” on page 458. Example: # vxdg -g mysdg set \ activation=sw vxrecover -g diskgroup -sb Starts all volumes in an imported disk group.
522 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-4 Creating and administering subdisks (continued) Command Description vxsd [-g diskgroup] assoc plex \ subdisk... Associates subdisks with an existing plex. See “Associating subdisks with plexes” on page 275. Example: # vxsd -g mydg assoc home-1 \ mydg02-01 mydg02-00 \ mydg02-01 vxsd [-g diskgroup] assoc plex \ subdisk1:0 ... subdiskM:N-1 Adds subdisks to the ends of the columns in a striped or RAID-5 volume.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-4 Creating and administering subdisks (continued) Command Description vxsd [-g diskgroup] join \ sd1 sd2 ... subdisk Joins two or more subdisks. See “Joining subdisks” on page 275. Example: # vxsd -g mydg join \ mydg03-02 mydg03-03 \ mydg03-02 vxassist [-g diskgroup] move \ volume \!olddisk newdisk Relocates subdisks in a volume between disks. See “Moving relocated subdisks using vxassist” on page 419.
524 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-4 Creating and administering subdisks (continued) Command Description vxedit [-g diskgroup] rm subdisk Removes a subdisk. See “Removing subdisks” on page 279. Example: # vxedit -g mydg rm mydg02-01 vxsd [-g diskgroup] -o rm dis subdisk Dissociates and removes a subdisk from a plex. See “Dissociating subdisks from plexes” on page 278.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-5 Creating and administering plexes (continued) Command Description vxplex [-g diskgroup] att volume plex Attaches a plex to an existing volume. See “Attaching and associating plexes” on page 286. See “Reattaching plexes” on page 288. Example: # vxplex -g mydg att vol01 \ vol01-02 vxplex [-g diskgroup] det plex Detaches a plex. See “Detaching plexes” on page 288.
526 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-5 Creating and administering plexes (continued) Command Description vxplex [-g diskgroup] cp volume \ newplex Copies a volume onto a plex. See “Copying volumes to plexes” on page 291. Example: # vxplex -g mydg cp vol02 \ vol03-01 vxmend [-g diskgroup] fix clean plex Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching plexes” on page 288.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-6 Creating volumes (continued) Command Description vxassist -b [-g diskgroup] make \ volume length [layout=layout] \ [attributes] Creates a volume. See “Creating a volume on any disk” on page 304. See “Creating a volume on specific disks” on page 305.
528 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-6 Creating volumes (continued) Command Description vxassist -b [-g diskgroup] make \ Creates a striped or RAID-5 volume. volume length layout={stripe|raid5} \ See “Creating a striped volume” [stripeunit=W] [ncol=N] \ on page 317. [attributes] See “Creating a RAID-5 volume” on page 321.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-6 Creating volumes (continued) Command Description vxvol [-g diskgroup] init zero \ volume Initializes and zeros out a volume for use. See “Initializing and starting a volume” on page 326. Example: # vxvol -g mydg init zero \ myvol Table A-7 Administering volumes Command Description vxassist [-g diskgroup] mirror \ volume [attributes] Adds a mirror to a volume. See “Adding a mirror to a volume ” on page 355.
530 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxassist [-g diskgroup] \ {growto|growby} volume length Grows a volume to a specified size or by a specified amount. See “Resizing volumes with vxassist” on page 353. Example: # vxassist -g mydg growby \ myvol 10g vxassist [-g diskgroup] \ {shrinkto|shrinkby} volume length Shrinks a volume to a specified size or by a specified amount.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxsnap [-g diskgroup] make \ source=volume\ /newvol=snapvol\ [/nmirror=number] Takes a full-sized instant snapshot of a volume by breaking off plexes of the original volume. For information about creating snapshots, see the Veritas Storage Foundation Advanced Features Administrator's Guide.
532 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxmake [-g diskgroup] cache \ cache_object cachevolname=volume \ [regionsize=size] Creates a cache object for use by space-optimized instant snapshots. For information about snapshots, see the Veritas Storage Foundation Advanced Features Administrator's Guide. A cache volume must have already been created.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxsnap [-g diskgroup] refresh snapshotRefreshes a snapshot from its original volume. For information about snapshots, see the Veritas Storage Foundation Advanced Features Administrator's Guide. Example: # vxsnap -g mydg refresh \ mysnpvol vxsnap [-g diskgroup] dis snapshot Turns a snapshot into an independent volume.
534 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxassist [-g diskgroup] relayout \ volume [layout=layout] \ [relayout_options] Performs online relayout of a volume. See “Performing online relayout” on page 375.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-7 Administering volumes (continued) Command Description vxassist [-g diskgroup] remove \ volume volume Removes a volume. See “Removing a volume” on page 371. Example: # vxassist -g mydg remove \ myvol Table A-8 Monitoring and controlling tasks Command Description command [-g diskgroup] -t tasktag \ [options] [arguments] Specifies a task tag to a VxVM command. See “Specifying task tags” on page 340.
536 Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-8 Monitoring and controlling tasks (continued) Command Description vxtask pause task Suspends operation of a task. See “Using the vxtask command” on page 342. Example: # vxtask pause mytask vxtask -p [-g diskgroup] list Lists all paused tasks. See “Using the vxtask command” on page 342. Example: # vxtask -p -g mydg list vxtask resume task Resumes a paused task.
Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 Command vxdg List of CVM commands supported for executing on the slave node Supported operations 537
538 Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 Command List of CVM commands supported for executing on the slave node (continued) Supported operations vxdg -s init [cds=on|off] vxdg -T < different_versions> -s init [minor=base-minor] [cds=on|off] vxdg [-n newname] [-h new-host-id] deport vxdg [-Cfst] [-n newname] [-o clearreserve] [-o useclonedev={on|off}] [-o updateid] [-o noreonline] [-o selectcp=diskid] [-
Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 Command List of CVM commands supported for executing on the slave node (continued) Supported operations vxdg -g set attr=value ... vxassist vxassist -g [ -b ] convert volume layout= vxassist -g [ -b ] addlog volume vxassist -g [-b] mirror volume vxassist [-b]-g make volume length [layout=layout] diskname ...
540 Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 List of CVM commands supported for executing on the slave node (continued) Command Supported operations vxcache vxcache -g start cacheobject vxcache -g stop cacheobject vxcache -g att volume cacheobject vxcache -g dis cachevol vxcache -g shrinkcacheto cacheobject newlength vxcache -g shrinkcacheby cacheobject lengthchange vx
Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 List of CVM commands supported for executing on the slave node (continued) Command Supported operations vxmend vxmend -g on plex vxmend -g off plex vxmirror vxmirror -g medianame vxmirror -g -d [yes|no] vxplex vxplex -g att volume plex vxplex -g cp volume new_plex vxplex -g dis plex1 vxplex -g mv origina
542 Using Veritas Volume Manager commands CVM commands supported for executing on the slave node Table A-9 List of CVM commands supported for executing on the slave node (continued) Command Supported operations vxsnap vxsnap -g addmir volume [nmirror=N] vxsnap -g prepare volume vxsnap -g rmmir volume vxsnap -g unprepare volume vxsnap -g make snapshot_tuple [snapshot_tuple]...
Using Veritas Volume Manager commands Online manual pages Table A-9 List of CVM commands supported for executing on the slave node (continued) Command Supported operations vxvol vxvol -g set logtype=drl | drlseq volume vxvol -g start volume vxvol -g stop volume vxvol -g {startall|stopall} volume vxvol -g init enable volume vxvol -g init active volume vxvol -g maint volumename vxvol -g set len volumename vxv
Online manual pages

Section 1M — administrative commands

Table A-10 lists the manual pages in section 1M for commands that are used to administer Veritas Volume Manager.

Table A-10    Section 1M manual pages

Name              Description

dgcfgbackup       Create or update VxVM volume group configuration backup file.
dgcfgdaemon       Start the VxVM configuration backup daemon.
dgcfgrestore      Display or restore VxVM disk group configuration from backup.
vxcmdlog          Administer command logging.
vxconfigbackup    Back up disk group configuration.
vxconfigbackupd   Disk group configuration backup daemon.
vxconfigd         Veritas Volume Manager configuration daemon.
vxconfigrestore   Restore disk group configuration.
vxcp_lvmroot      Copy LVM root disk onto new Veritas Volume Manager root disk.
vxdisksetup       Configure a disk for use with Veritas Volume Manager.
vxdiskunsetup     Deconfigure a disk from use with Veritas Volume Manager.
vxdmpadm          DMP subsystem administration.
vxedit            Create, remove, and modify Veritas Volume Manager records.
vxevac            Evacuate all volumes from a disk.
vximportdg        Import a disk group into the Veritas Volume Manager configuration.
vxprint           Display records from the Veritas Volume Manager configuration.
vxr5check         Verify RAID-5 volume parity.
vxreattach        Reattach disk drives that have become accessible again.
vxrecover         Perform volume recovery operations.
vxrelayout        Convert online storage from one layout to another.
vxrelocd          Monitor Veritas Volume Manager for failure events and relocate failed subdisks.
vxtranslog        Administer transaction logging.
vxtune            Adjust Veritas Volume Replicator and Veritas Volume Manager tunables.
vxunreloc         Move a hot-relocated subdisk back to its original disk.
vxvmboot          Prepare Veritas Volume Manager volume as a root, boot, primary swap or dump volume.
vxvmconvert       Convert LVM volume groups to VxVM disk groups.
Section 7 — device driver interfaces

Table A-12    Section 7 manual pages (continued)

Name       Description

vxdmp      Dynamic Multi-Pathing device.
vxinfo     General information device.
vxio       Virtual disk device.
vxiod      I/O daemon process control device.
vxtrace    I/O tracing device.
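On HP-UX, pass the section number to man(1) to view these pages. For example:

    man 1M vxtune    # section 1M: administrative command
    man 7 vxio       # section 7: device driver interface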
Appendix B

Configuring Veritas Volume Manager

This appendix includes the following topics:

■ Setup tasks after installation
■ Unsupported disk arrays
■ Foreign devices
■ Initialization of disks and creation of disk groups
■ Guidelines for configuring storage
■ VxVM’s view of multipathed devices
■ Cluster support

Setup tasks after installation

A number of setup tasks can be performed after installing the Veritas Volume Manager (VxVM) software.
■ Designate hot-relocation spare disks in each disk group.
■ Add mirrors to volumes.
■ Configure DRL and FastResync on volumes.

The following tasks are part of ongoing maintenance:

■ Resize volumes and file systems.
■ Add more disks, create new disk groups, and create new volumes.
■ Create and maintain snapshots.
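A minimal sketch of the setup tasks above, assuming a disk group named mydg, a disk named mydg03, and a volume named vol01 (all names are hypothetical):

    # Designate a hot-relocation spare disk in the disk group
    vxedit -g mydg set spare=on mydg03

    # Add a mirror to an existing volume
    vxassist -g mydg mirror vol01

    # Add a dirty region log (DRL), then prepare the volume for FastResync
    vxassist -g mydg addlog vol01 logtype=drl
    vxsnap -g mydg prepare vol01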
Guidelines for configuring storage

To maintain system availability, data important to running and booting your system must be mirrored. The data must be preserved so it can be used in case of failure. The following are suggestions for protecting your system and data:

■ Perform regular backups to protect your data. Backups are necessary if all copies of a volume are lost or corrupted. Power surges can damage several (or all) disks on your system.
and degrades performance. Using the vxassist or vxdiskadm commands precludes this from happening.

■ For mirroring to provide the best performance improvement, at least 70 percent of physical I/O operations should be read operations. A higher percentage of read operations results in even better performance.
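As a sketch, a new volume can be created mirrored with a DRL log in a single vxassist invocation (the disk group, volume name, and size are examples):

    # Two-way mirrored 1 GB volume with a dirty region log
    vxassist -g mydg make mirvol 1g layout=mirror nmirror=2 logtype=drl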
■ Calculate stripe-unit sizes carefully. In general, a moderate stripe-unit size (for example, 64 kilobytes, which is also the default used by vxassist) is recommended.

■ If it is not feasible to set the stripe-unit size to the track size, and you do not know the application I/O pattern, use the default stripe-unit size.

■ Many modern disk drives have variable geometry.
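A sketch of making the stripe-unit size explicit when creating a striped volume (all names and sizes here are illustrative):

    # Four-column striped volume using the 64 KB default stripe unit
    vxassist -g mydg make stripevol 2g layout=stripe ncol=4 stripeunit=64k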
■ Only one RAID-5 plex can exist per RAID-5 volume (but there can be multiple log plexes).

■ The RAID-5 plex must be derived from at least three subdisks on three or more physical disks. If any log plexes exist, they must belong to disks other than those used for the RAID-5 plex.

■ RAID-5 logs can be mirrored and striped.
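A sketch that satisfies these constraints, assuming a disk group named mydg with enough suitable disks (the volume name, size, and counts are examples):

    # RAID-5 volume with four columns and two RAID-5 logs
    vxassist -g mydg make r5vol 10g layout=raid5 ncol=4 nlog=2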
■ If a given disk group spans multiple controllers and has more than one spare disk, set up the spare disks on different controllers (in case one of the controllers fails).

■ For a mirrored volume, configure the disk group so that there is at least one disk that does not already contain a mirror of the volume.
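Spare disks are designated with vxedit. A minimal sketch, assuming disks mydg02 and mydg05 are attached to different controllers (the names are hypothetical):

    # Designate spares on separate controllers in case one controller fails
    vxedit -g mydg set spare=on mydg02
    vxedit -g mydg set spare=on mydg05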
/dev/vx/rdsk/dg/vol
        character device file for volume vol in disk group dg

The pathnames include a directory named for the disk group. Use the appropriate device node to create, mount, and repair file systems, and to lay out databases that require raw partitions.

VxVM’s view of multipathed devices

You can use the vxdiskadm command to control how a device is treated by Veritas Dynamic Multi-Pathing (DMP).
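As an illustration, vxdmpadm can show how DMP has grouped the paths to a device (the DMP node name here is hypothetical):

    # List the paths that DMP manages under a single device node
    vxdmpadm getsubpaths dmpnodename=c4t0d0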
■ Use the vxdg command or the Veritas Operations Manager (VOM) to create disk groups. If you use the vxdg command, specify the -s option to create shared disk groups.

■ Use vxassist or VOM to create volumes in the disk groups.

■ If the cluster is running with only one node, bring up the other cluster nodes. Enter the vxdg list command on each node to display the shared disk groups.
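A minimal sketch of these steps, run from the master node (the disk group, disk, and volume names are hypothetical):

    # Create a shared disk group and a volume in it
    vxdg -s init shareddg shareddg01=c4t0d0
    vxassist -g shareddg make vol01 1g

    # On each node, confirm that the shared disk group is visible
    vxdg list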
Glossary

Active/Active disk arrays
        This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation.

Active/Passive disk arrays
        This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.

cluster-shareable disk group
        A disk group in which access to the disks is shared by multiple hosts (also referred to as a shared disk group).

column
        A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.

concatenation
        A layout style characterized by subdisks that are arranged sequentially and contiguously.

configuration copy
        A single copy of a configuration database.

disk access records
        Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.

disk array
        A collection of disks logically arranged into an object. Arrays tend to provide benefits such as redundancy or improved performance.

enabled path
        A path to a disk that is available for I/O.

encapsulation
        A process that converts existing partitions on a specified disk to volumes. Encapsulation is not supported on the HP-UX platform.

enclosure
        See disk enclosure.

enclosure-based naming
        See device name.

fabric mode disk
        A disk device that is accessible on a Storage Area Network (SAN) via a Fibre Channel switch.

mirror
        A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror consists of one plex of the volume with which the mirror is associated.

mirroring
        A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts.
Persistent FastResync
        A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO volume on disk.

persistent state logging
        A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging.

physical disk
        The underlying storage device, which may or may not be under VxVM control.
rootability
        The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.

secondary path
        In Active/Passive disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.

stripe unit size
        The size of each stripe unit. The default stripe unit size is 64KB. The stripe unit size is sometimes also referred to as the stripe width.

striping
        A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex.

subdisk
        A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.