Veritas Volume Manager 5.
Legal Notices
© Copyright 2008 Hewlett-Packard Development Company, L.P.
Publication Date: 2008
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Chapter 1 Understanding Veritas Volume Manager
VxVM and the operating system 19
How data is stored 19
How VxVM handles storage management 20
Physical objects—physical disks 20
Virtual objects
DCO volume versioning 68
FastResync limitations 74
Hot-relocation 75
Volume sets 75

Chapter 2 Administering disks
Disk devices
Taking a disk offline 118
Renaming a disk 119
Reserving disks 119
Displaying disk information

Chapter 3 Administering dynamic multipathing (DMP)
Displaying the status of the DMP path restoration thread 161
Displaying information about the DMP error-handling thread 162
Configuring array policy modules 162

Chapter 4 Creating and administering disk groups
Specifying a disk group to commands 167
System-wide reserved disk groups

Creating subdisks 215
Displaying subdisk information 216
Moving subdisks 217
Splitting subdisks 217
Joining subdisks

Creating a concatenated-mirror volume 249
Creating a volume with a version 0 DCO volume 250
Creating a volume with a version 20 DCO volume 252
Creating a volume with dirty region logging enabled 252
Creating a striped volume

Adding a RAID-5 log using vxplex 283
Removing a RAID-5 log 284
Resizing a volume 284
Resizing volumes using vxresize 285
Resizing volumes using vxassist

Adding a snapshot to a cascaded snapshot hierarchy 337
Refreshing an instant snapshot 337
Reattaching an instant snapshot 338
Reattaching a linked break-off snapshot volume 339
Restoring a volume from an instant snapshot 340
Dissociating an instant snapshot

Chapter 12 Administering hot-relocation
How hot-relocation works 380
Partial disk failure mail messages 383
Complete disk failure mail messages 384
How space is chosen for relocation 384
Configuring a system for hot-relocation

Converting a disk group from shared to private 424
Moving objects between disk groups 424
Splitting disk groups 424
Joining disk groups 424
Changing the activation mode on a shared disk group

Running a rule 447
Identifying configuration problems using Storage Expert 449
Recovery time 449
Disk groups 451
Disk striping

Dirty region logging guidelines 515
Striping guidelines 515
RAID-5 guidelines 516
Hot-relocation guidelines 516
Accessing volume devices
Chapter 1
Understanding Veritas Volume Manager
Veritas™ Volume Manager (VxVM) by Symantec is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A VxVM volume appears to applications and the operating system as a physical disk on which file systems, databases and other managed data objects can be configured. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments.
■ Volume snapshots
■ FastResync
■ Hot-relocation
■ Volume sets
Further information on administering Veritas Volume Manager may be found in the following documents:
■ Veritas Storage Foundation Cross-Platform Data Sharing Administrator’s Guide
Provides more information on using the Cross-platform Data Sharing (CDS) feature of Veritas Volume Manager, which allows you to move VxVM disks and objects between machines that are running under different operating systems.
VxVM and the operating system
VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How VxVM handles storage management
VxVM uses two types of objects to handle storage management: physical objects and virtual objects.
■ Physical objects—physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
■ Virtual objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
VxVM writes identification information on physical disks under VxVM control (VM disks). VxVM disks can be identified even after physical disk disconnection or system outages. VxVM can then re-form disk groups and logical objects to provide failure detection and to speed system recovery. VxVM accesses all disks as entire physical disks without partitions.
Figure 1-2 How VxVM presents the disks in a disk array as volumes to the operating system
Multipathed disk arrays
Some disk arrays provide multiple ports to access their disk devices.
Device Discovery service enables you to add support dynamically for new disk arrays. This operation, which uses a facility called the Device Discovery Layer (DDL), is achieved without the need for a reboot. This means that you can dynamically add a new disk array to a host, and run a command which scans the operating system’s device tree for all the attached disk devices, and reconfigures DMP with the new device database.
Figure 1-3 Example configuration for disk enclosures connected via a fibre channel hub or switch
In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on.
In High Availability (HA) configurations, redundant-loop access to storage can be implemented by connecting independent controllers on the host to separate hubs with independent paths to the enclosures as shown in Figure 1-4.
See “Disk device naming in VxVM” on page 78 and “Changing the disk-naming scheme” on page 91 for details of the standard and the enclosure-based naming schemes, and how to switch between them.
■ Subdisks (each representing a specific region of a disk) are combined to form plexes
■ Volumes are composed of one or more plexes
Figure 1-5 shows the connections between Veritas Volume Manager virtual objects and how they relate to physical disks. The disk group contains three VM disks which are used to create two volumes. Volume vol01 is simple and has a single plex. Volume vol02 is a mirrored volume with two plexes.
Veritas Volume Manager, such as data change objects (DCOs), and cache objects, to provide extended functionality. These objects are discussed later in this chapter.
Disk groups
A disk group is a collection of disks that share a common configuration, and which are managed by VxVM (see “VM disks” on page 28).
Figure 1-6 VM disk example (VM disk disk01 on physical disk devname)
Subdisks
A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.
Figure 1-8 Example of three subdisks (disk01-01, disk01-02 and disk01-03) assigned to one VM disk, disk01
Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks. VxVM release 3.0 or higher supports the concept of layered volumes in which subdisks can contain volumes.
You can organize data on subdisks to form a plex by using the following methods:
■ concatenation
■ striping (RAID-0)
■ mirroring (RAID-1)
■ striping with parity (RAID-5)
Concatenation, striping (RAID-0), mirroring (RAID-1) and RAID-5 are described in “Volume layouts in VxVM” on page 34.
Note: You can use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes. These volumes are very similar to the traditional VxVM volumes that are described in this chapter. However, there are significant differences between the functionality of the two types of volume that prevent them from being used interchangeably.
In Figure 1-11, a volume, vol06, with two data plexes is mirrored. Each plex of the mirror contains a complete copy of the volume data.
Figure 1-11 Example of a volume with two plexes (volume vol06 with plexes vol06-01 and vol06-02)
Volume vol06 has the following characteristics:
■ It contains two plexes named vol06-01 and vol06-02.
■ Each plex contains one subdisk.
Volume layouts in VxVM
A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to subdisks. The volume presents a virtual device interface that is exposed to other applications for data access. These logical building blocks re-map the volume address space through which I/O is re-directed at run-time.
Layout methods
Data in virtual objects is organized to create volumes by using the following layout methods:
■ Concatenation and spanning
■ Striping (RAID-0)
■ Mirroring (RAID-1)
■ Striping plus mirroring (mirrored-stripe or RAID-0+1)
■ Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)
■ RAID-5 (striping with parity)
The following sections describe each layout method.
Figure 1-12 Example of concatenation
You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
Figure 1-13 Example of spanning
Caution: Spanning a plex across multiple disks increases the chance that a disk failure results in failure of the assigned volume.
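As an illustrative sketch of the layouts just described (the disk group mydg and the volume name concatvol are hypothetical; the supported attributes are listed in the vxassist(1M) manual page), a concatenated volume might be created as follows:
# vxassist -g mydg make concatvol 5g layout=concat
If no single disk in the disk group has 5GB of contiguous free space, vxassist concatenates subdisks from more than one region, spanning disks where necessary.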
Striping (RAID-0)
Note: You need a full license to use this feature.
Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
Figure 1-14 Striping across three columns (stripe units su1 through su6 arranged in stripes 1 and 2 across subdisks 1, 2 and 3 of a plex; SU = stripe unit)
A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Figure 1-15 shows a striped plex with three equal sized, single-subdisk columns. There is one column per physical disk. This example shows three subdisks that occupy all of the space on the VM disks. It is also possible for each subdisk in a striped plex to occupy only a portion of the VM disk, which leaves free space for other disk management tasks.
Figure 1-15 Striped plex with three equal-sized, single-subdisk columns
Figure 1-16 Example of a striped plex with concatenated subdisks per column
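As a hedged sketch of how a striped layout of the kind shown in these figures might be created (the disk group mydg, volume name stripevol, three columns and 64KB stripe unit are all hypothetical choices; see the vxassist(1M) manual page):
# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k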
Mirroring (RAID-1)
Note: You need a full license to use this feature with disks other than the root disk.
Mirroring uses multiple mirrors (plexes) to duplicate the information contained in a volume. In the event of a physical disk failure, the plex on the failed disk becomes unavailable, but the system continues to operate using the unaffected mirrors.
Figure 1-17 Mirrored-stripe volume laid out on six disks (two striped plexes, each with columns 0, 1 and 2, forming a mirror)
See “Creating a mirrored-stripe volume” on page 254 for information on how to create a mirrored-stripe volume. The layout type of the data plexes in a mirror can be concatenated or striped. Even if only one is striped, the volume is still termed a mirrored-stripe volume.
Figure 1-18 Striped-mirror volume laid out on six disks (a striped plex whose columns 0, 1 and 2 are built from underlying mirrored volumes)
See “Creating a striped-mirror volume” on page 254 for information on how to create a striped-mirror volume.
Figure 1-19 How the failure of a single disk affects mirrored-stripe and striped-mirror volumes
In a mirrored-stripe volume, failure of a disk detaches the entire striped plex that contains it, leaving a mirrored-stripe volume with no redundancy. In a striped-mirror volume, failure of a disk removes redundancy only from the mirror in the affected column, leaving a striped-mirror volume with partial redundancy. Compared to mirrored-stripe volumes, striped-mirror volumes are more tolerant of disk failure, and recovery time is shorter.
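To make the comparison concrete, the following sketch shows how the two layouts might be requested (mydg and the volume names are hypothetical; the layout keywords are documented in the vxassist(1M) manual page):
# vxassist -g mydg make mirstripevol 10g layout=mirror-stripe ncol=3
# vxassist -g mydg make strmirvol 10g layout=stripe-mirror ncol=3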
Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods. Mirroring provides data redundancy by maintaining multiple complete copies of the data in a volume. Data being written to a mirrored volume is reflected in all copies. If a portion of a mirrored volume fails, the system continues to use the other copies of the data. RAID-5 provides data redundancy by using parity.
parity stripe. Figure 1-21 shows the row and column arrangement of a traditional RAID-5 array.
Figure 1-21 Traditional RAID-5 array (stripes 1, 2 and 3 laid out across rows 0 and 1 of columns 0 through 3)
This traditional array structure supports growth by adding more rows per column.
Figure 1-22 Veritas Volume Manager RAID-5 array (stripes 1 and 2 laid out across subdisks in columns 0 through 3; SD = subdisk)
Note: Mirroring of RAID-5 volumes is not supported.
See “Creating a RAID-5 volume” on page 256 for information on how to create a RAID-5 volume.
Left-symmetric layout
There are several layouts for data and parity that can be used in the setup of a RAID-5 array.
Figure 1-23 Left-symmetric layout

Stripe 1:  0   1   2   3   P0
Stripe 2:  5   6   7   P1  4
Stripe 3:  10  11  P2  8   9
Stripe 4:  15  P3  12  13  14
Stripe 5:  P4  16  17  18  19

Each row is a stripe across five columns; the numbered cells are data stripe units and P0 through P4 are parity stripe units.
For each stripe, data is organized starting to the right of the parity stripe unit. In the figure, data organization for the first stripe begins at P0 and continues to stripe units 0-3.
Note: Failure of more than one column in a RAID-5 plex detaches the volume. The volume is no longer allowed to satisfy read or write requests. Once the failed columns have been recovered, it may be necessary to recover user data from backups.
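By way of a sketch (mydg, raidvol and the column and log counts are hypothetical choices; the attributes are described in the vxassist(1M) manual page), a RAID-5 volume with an attached RAID-5 log might be created as follows:
# vxassist -g mydg make raidvol 10g layout=raid5 ncol=4 nlog=1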
Logs are associated with a RAID-5 volume by being attached as log plexes. More than one log plex can exist for each RAID-5 volume, in which case the log areas are mirrored. See “Adding a RAID-5 log” on page 283 for information on how to add a RAID-5 log to a RAID-5 volume.
Layered volumes
A layered volume is a virtual Veritas Volume Manager object that is built on top of other volumes.
Figure 1-25 Example of a striped-mirror layered volume (volume vol01 with striped plex vol01-01, whose columns 0 and 1 are the underlying mirrored volumes vop01 and vop02; the volume and striped plex are managed by the user, while the underlying volumes, their concatenated plexes, and subdisks disk04-01 through disk07-01 on the VM disks are managed by VxVM)
Figure 1-25 illustrates the structure of a typical layered volume.
plex (for example, resizing the volume, changing the column width, or adding a column). System administrators can manipulate the layered volume structure for troubleshooting or other operations (for example, to place data on specific disks). Layered volumes are used by VxVM to perform the following tasks and operations:
■ Creating striped-mirrors. (See “Creating a striped-mirror volume” on page 254, and the vxassist(1M) manual page.)
Online relayout
Note: You need a full license to use this feature.
Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5).
amount of temporary space that is required is usually 10% of the size of the volume, from a minimum of 50MB up to a maximum of 1GB. For volumes smaller than 50MB, the temporary space required is the same as the size of the volume.
(shown by the shaded area) decreases the overall storage space that the volume requires.
Figure 1-27 Example of relayout of a RAID-5 volume to a striped volume
■ Change a volume to a RAID-5 volume (add parity). See Figure 1-28 for an example. Note that adding parity (shown by the shaded area) increases the overall storage space that the volume requires.
Figure 1-30 Example of increasing the stripe width for the columns in a volume
For details of how to perform online relayout operations, see “Performing online relayout” on page 294. For information about the relayout transformations that are possible, see “Permitted relayout transformations” on page 295.
Limitations of online relayout
Note the following limitations of online relayout:
■ Log plexes cannot be transformed.
■ The number of mirrors in a mirrored volume cannot be changed using relayout.
■ Only one relayout may be applied to a volume at a time.
Transformation characteristics
Transformation of data from one layout to another involves rearrangement of data in the existing layout to the new layout. During the transformation, online relayout retains data redundancy by mirroring any temporary space used.
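As a minimal sketch of an online relayout operation (mydg and vol02 are hypothetical names; see the vxassist(1M) and vxrelayout(1M) manual pages), a volume might be converted to RAID-5 and its progress monitored as follows:
# vxassist -g mydg relayout vol02 layout=raid5 ncol=4
# vxrelayout -g mydg status vol02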
Volume resynchronization
When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
Resynchronization of data in the volume is done in the background. This allows the volume to be available for use while recovery is taking place. The process of resynchronization can impact system performance. The recovery process reduces some of this impact by spreading the recoveries to avoid stressing a specific disk or controller. For large volumes or for a large number of volumes, the resynchronization process can take time.
becomes the least recently accessed for writes. This allows writes to the same region to be written immediately to disk if the region’s log bit is set to dirty. On restarting a system after a crash, VxVM recovers only those regions of the volume that are marked as dirty in the dirty region log.
Log subdisks and plexes
DRL log subdisks store the dirty region log of a mirrored volume that has DRL enabled.
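The following is a hedged sketch of creating a mirrored volume with DRL enabled at creation time (mydg and vol03 are hypothetical names; the logging attributes are described in the vxassist(1M) manual page):
# vxassist -g mydg make vol03 10g layout=mirror nmirror=2 logtype=drl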
SmartSync recovery accelerator
The SmartSync feature of Veritas Volume Manager increases the availability of mirrored volumes by only resynchronizing changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.) SmartSync reduces the time required to restore consistency, freeing more I/O bandwidth for business-critical applications.
Redo log volume configuration
A redo log is a log of changes to the database data. Because the database does not maintain changes to the redo logs, it cannot provide information about which sections require resilvering. Redo logs are also written sequentially, and since traditional dirty region logs are most useful with randomly-written data, they are of minimal use for reducing recovery time for redo logs.
Figure 1-31 Volume snapshot as a point-in-time image of a volume
At time T1, only the original volume exists. At time T2, the snapshot volume is created. At time T3, the snapshot volume retains the image taken at time T2. At time T4, the original volume is updated, and the snapshot volume can be resynchronized from the original volume.
The traditional type of volume snapshot in VxVM is of the third-mirror break-off type.
Full-sized instant snapshots offer advantages over traditional third-mirror snapshots such as immediate availability and easier configuration and administration. You can also use the third-mirror break-off usage model with full-sized snapshots, where this is necessary for write-intensive applications. For more information, see the following sections:
■ “Full-sized instant snapshots” on page 307.
■ “Space-optimized instant snapshots” on page 309.
■ “Emulation of third-mirror break-off snapshots” on page 310.
Table 1-1 Comparison of snapshot features for supported snapshot types
The three values for each feature apply, in order, to full-sized instant (vxsnap), space-optimized instant (vxsnap), and break-off (vxassist or vxsnap) snapshots.
■ Can be reattached to original volume: Yes / No / Yes
■ Can be used to restore contents of original volume: Yes / Yes / Yes
■ Can quickly be refreshed without being reattached: Yes / Yes / No
■ Snapshot hierarchy can be split: Yes / No / No
■ Can be moved into separate disk group from original volume: Yes / … / …
snapshot is taken, it can be accessed independently of the volume from which it was taken. In a clustered VxVM environment with shared access to storage, it is possible to eliminate the resource contention and performance overhead of using a snapshot simply by accessing it from a different node. For details of how to enable FastResync on a per-volume basis, see “Enabling FastResync on a volume” on page 292.
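As an illustration (mydg and vol01 are hypothetical names; see the vxvol(1M) manual page), FastResync might be enabled on an existing volume as follows:
# vxvol -g mydg set fastresync=on vol01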
Availability (HA) environment requires the full resynchronization of a mirror when it is reattached to its parent volume.
How non-persistent FastResync works with snapshots
The snapshot feature of VxVM takes advantage of FastResync change tracking to record updates to the original volume after a snapshot plex is created. After a snapshot is taken, the snapback option is used to reattach the snapshot plex.
Version 0 DCO volume layout
In VxVM releases 3.2 and 3.5, the DCO object only managed information about the FastResync maps. These maps track writes to the original volume and to each of up to 32 snapshot volumes since the last snapshot operation. Each plex of the DCO volume on disk holds 33 maps, each of which is 4 blocks in size by default. Persistent FastResync uses the maps in a version 0 DCO volume on disk to implement change tracking.
(by default) are used either for tracking writes to snapshots, or as copymaps. The size of the DCO volume is determined by the size of the regions that are tracked, and by the number of per-volume maps. Both the region size and the number of per-volume maps in a DCO volume may be configured when a volume is prepared for use with snapshots. The region size must be a power of 2 and be greater than or equal to 16KB.
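A hedged sketch of preparing a volume for instant snapshots with a version 20 DCO volume (mydg, vol01 and the attribute values shown are hypothetical; see the vxsnap(1M) manual page):
# vxsnap -g mydg prepare vol01 ndcomirs=2 regionsize=128k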
Figure 1-32 Mirrored volume with persistent FastResync enabled (two data plexes and a data change object, with an associated DCO volume containing two DCO plexes)
To create a traditional third-mirror snapshot or an instant (copy-on-write) snapshot, the vxassist snapstart or vxsnap make operation respectively is performed on the volume. This sets up a snapshot plex in the volume and associates a disabled DCO plex with it, as shown in Figure 1-33.
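To make the traditional usage model concrete, a sketch of the third-mirror sequence follows (mydg, vol01 and snap01 are hypothetical names; see the vxassist(1M) manual page). The snapstart operation creates and synchronizes the snapshot plex, and the snapshot operation then breaks it off as an independent volume:
# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snap01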
Note: Space-optimized instant snapshots do not require additional full-sized plexes to be created. Instead, they use a storage cache that typically requires only 10% of the storage that is required by full-sized snapshots. There is a tradeoff in functionality in using space-optimized snapshots as described in “Comparison of snapshot features” on page 65. The storage cache is formed within a cache volume, and this volume is associated with a cache object.
Note: The vxsnap reattach, dis and split operations are not supported for instant space-optimized snapshots.
See “Administering volume snapshots” on page 303, and the vxsnap(1M) and vxassist(1M) manual pages for more information.
different effects on the map that FastResync uses to track changes to the original volume:
■ For a version 20 DCO volume, the size of the map is increased and the size of the region that is tracked by each bit in the map stays the same.
■ For a version 0 DCO volume, the size of the map remains the same and the region size is increased.
association. However, in such a case, you can use the vxplex snapback command with the -f (force) option to perform the snapback.
Note: This restriction only applies to traditional snapshots. It does not apply to instant snapshots.
■ Any operation that changes the layout of a replica volume can mark the FastResync change map for that snapshot “dirty” and require a full resynchronization during snapback.
and availability characteristics of the underlying volumes. For example, file system metadata could be stored on volumes with higher redundancy, and user data on volumes with better performance. For more information about creating and administering volume sets, see “Creating and administering volume sets” on page 361.
Chapter 2 Administering disks This chapter describes the operations for managing disks used by the Veritas Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks. Note: Most VxVM commands require superuser or equivalent privileges.
and /dev/rdisk directories. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories. VxVM recreates disk devices for all paths in the operating system’s hardware device tree as metadevices (DMP nodes) in the /dev/vx/dmp and /dev/vx/rdmp directories. The dynamic multipathing (DMP) feature of VxVM uses a DMP node to represent a disk that can be accessed by one or more physical paths, perhaps via different controllers.
The syntax of a legacy device name is c#t#d#, where c# represents a controller on a host bus adapter, t# is the target controller ID, and d# identifies a disk on the target controller. Fabric mode disk devices are named as follows:
■ Disks in supported disk arrays are named using the enclosure name_# format. For example, disks in the supported disk array named FirstFloor are named FirstFloor_0, FirstFloor_1, FirstFloor_2 and so on.
Private and public disk regions
Most VM disks have two regions:
private region: A small area where configuration information is stored. A disk header label, configuration records for VxVM objects (such as volumes, plexes and subdisks), and an intent log for the configuration database are stored here.
auto: When the vxconfigd daemon is started, VxVM obtains a list of known disk device addresses from the operating system and configures disk access records for them automatically. Auto-configured disks (with disk access type auto) support the following disk formats:
cdsdisk: The disk is formatted as a Cross-platform Data Sharing (CDS) disk that is suitable for moving between different operating systems.
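As an illustrative sketch (the device name c2t0d0 is hypothetical; the format attribute is described in the vxdisksetup(1M) manual page), a disk might be initialized with an explicit format as follows:
# /etc/vx/bin/vxdisksetup -i c2t0d0 format=hpdisk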
Discovering and configuring newly added disk devices
When you physically connect new disks to a host or when you zone new fibre channel devices to a host, you can use the vxdctl enable command to rebuild the volume device node directories and to update the DMP internal database to reflect the new state of the system.
Alternatively, you can specify a ! prefix character to indicate that you want to scan for all devices except those that are listed:
# vxdisk scandisks !device=c1t1d0,c2t2d0
You can also scan for devices that are connected (or not connected) to a list of logical or physical controllers.
Adding support for a new disk array
The following example illustrates how to add support for a new disk array named vrtsda to an HP-UX system using a vendor-supplied package on a mounted CD-ROM:
# swinstall -s /cdrom vrtsda
The new disk array does not need to be already connected to the system when the package is installed.
See “Changing device naming for TPD-controlled enclosures” on page 94 for information on how to change the form of TPD device names that are displayed by VxVM.
See “Displaying information about TPD-controlled devices” on page 143 for details of how to find out the TPD configuration information that is known to DMP.
This command displays the vendor ID (VID), product IDs (PIDs) for the arrays, array types (for example, A/A or A/P), and array names. The following is sample output:
# vxddladm listsupport libname=libvxfujitsu.so
ATTR_NAME    ATTR_VALUE
=================================================
LIBNAME      libvxfujitsu.so
Listing supported disks in the DISKS category
To list disks that are supported in the DISKS (JBOD) category, use the following command:
# vxddladm listjbod
Adding unsupported disk arrays to the DISKS category
Disk arrays should be added as JBOD devices if no ASL is available for the array. JBODs are assumed to be Active/Active (A/A) unless otherwise specified.
[length=serialno_length] [policy=ap]
where vendorid and productid are the VID and PID values that you found from the previous step. For example, vendorid might be FUJITSU, IBM, or SEAGATE. For Fujitsu devices, you must also specify the number of characters in the serial number as the argument to the length argument (for example, 10). If the array is of type A/A-A, A/P or A/PF, you must also specify the policy=ap attribute.
For more information, enter the command vxddladm help addjbod, or see the vxddladm(1M) and vxdmpadm(1M) manual pages.
Removing disks from the DISKS category
To remove disks from the DISKS (JBOD) category, use the vxddladm command with the rmjbod keyword.
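Putting the addjbod and rmjbod keywords together, a hedged example session might look like this (the SEAGATE vendor ID and the product ID shown are placeholders for the values reported by your array):
# vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G
# vxddladm listjbod
# vxddladm rmjbod vid=SEAGATE pid=ST318404LSUN18G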
■ Enclosure information is not available to VxVM. This can reduce the availability of any disk groups that are created using such devices.
■ EFI disks that are under the control of HP-UX native multipathing cannot be initialized as foreign disks. You must migrate the system to DMP, initialize the disk as an EFI disk, and then migrate the system back to HP-UX native multipathing. See “Migrating between DMP and HP-UX native multipathing” on page 130.
■ If the disk was previously in use by the LVM subsystem, you can preserve existing data while still letting VxVM take control of the disk. This is accomplished using conversion. With conversion, the virtual layout of the data is fully converted to VxVM control (see the Veritas Volume Manager Migration Guide).
Alternatively, you can change the naming scheme from the command line.
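For example (a sketch; the exact options are described in the vxddladm(1M) manual page), the enclosure-based or operating system-based naming scheme might be selected as follows:
# vxddladm set namingscheme=ebn
# vxddladm set namingscheme=osn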
# vxdmpadm getlungroup dmpnodename=disk25
VxVM vxdmpadm ERROR V-5-1-10910 Invalid da-name

# vxdmpadm getlungroup dmpnodename=Disk_11
NAME     STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
===============================================================
Disk_11  ENABLED  Disk        2      2     0     Disk

To find out which sort of naming is currently enabled, use the vxddladm get namingscheme command, as shown in the following example:
# vxddladm get namingscheme
NAMING_SCHEME  PERSISTENCE
Changing device naming for TPD-controlled enclosures
Note: This feature is available only if the default disk-naming scheme is set to use operating system-based naming, and the TPD-controlled enclosure does not contain fabric disks.
For disk enclosures that are controlled by third-party drivers (TPD) whose coexistence is supported by an appropriate ASL, the default behavior is to assign device names that are based on the TPD-assigned node names.
■ Persistent simple or nopriv disks in the boot disk group
■ Persistent simple or nopriv disks in non-boot disk groups
These procedures use the vxdarestore utility to handle errors in persistent simple and nopriv disks that arise from changing to the enclosure-based naming scheme.
3 If you want to use enclosure-based naming, use vxdiskadm to add a non-persistent simple disk to the bootdg disk group, change back to the enclosure-based naming scheme, and then run the following command:
# /etc/vx/bin/vxdarestore
Note: If not all the disks in bootdg go into the error state, you need only run vxdarestore to restore the disks that are in the error state and the objects that they contain.
Displaying and changing default disk layout attributes
To display or change the default values for initializing disks, select menu item 21 (Change/display the default disk layout) from the vxdiskadm main menu. For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file /etc/default/vxdisk.
disks available for use as replacement disks. More than one disk or pattern may be entered at the prompt.
3 To continue with the operation, enter y (or press Return) at the following prompt:
Here are the disks selected.
A site tag is usually applied to disk arrays or enclosures, and is not required unless you want to use the Remote Mirror feature. If you enter y to choose to add a site tag, you are prompted for the site name at step 11.
10 To continue with the operation, enter y (or press Return) at the following prompt:
The selected disks will be added to the disk group disk group name with default disk names.
vxdiskadm then proceeds to add the disks.
Adding disk device device name to disk group disk group name with disk name disk name.
Note: To bring LVM disks under VxVM control, use the Migration Utilities. See the Veritas Volume Manager Migration Guide for details.
Note: If you are adding an uninitialized disk, warning and error messages are displayed on the console during the vxdiskadd command. Ignore these messages. These messages should not appear after the disk has been fully initialized; the vxdiskadd command displays a success message when the initialization completes.
The interactive dialog for adding a disk using vxdiskadd is similar to that for vxdiskadm, described in “Adding a disk to VxVM” on page 97.
VxVM root disk volume restrictions
Volumes on a bootable VxVM root disk have the following configuration restrictions:
■ All volumes on the root disk must be in the disk group that you choose to be the bootdg disk group.
■ The names of the volumes with entries in the LIF LABEL record must be standvol, rootvol, swapvol, and dumpvol (if present).
Booting root volumes
Note: At boot time, the system firmware provides you with a short time period during which you can manually override the automatic boot process and select an alternate boot device. For information on how to boot your system from a device other than the primary or alternate boot devices, and how to change the primary and alternate boot devices, see the HP-UX documentation and the boot(1M), pdc(1M) and isl(1M) manual pages.
Note: The -b option to vxcp_lvmroot uses the setboot command to define c0t4d0 as the primary boot device. If this option is not specified, the primary boot device is not changed.
If the destination VxVM root disk is not big enough to accommodate the contents of the LVM root disk, you can use the -R option to specify a percentage by which to reduce the size of the file systems on the target disk.
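For context, a hedged sketch of the cloning command that these options modify (the target disk c0t4d0 and the 30 percent reduction are hypothetical; see the vxcp_lvmroot(1M) manual page):
# /etc/vx/bin/vxcp_lvmroot -b c0t4d0
# /etc/vx/bin/vxcp_lvmroot -b -R 30 c0t4d0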
Note: You may want to keep the LVM root disk in case you ever need a boot disk that does not depend on VxVM being present on the system. However, this may require that you update the contents of the LVM root disk in parallel with changes that you make to the VxVM root disk. See “Creating an LVM root disk from a VxVM root disk” on page 106 for a description of how to create a bootable LVM root disk from the VxVM root disk.
Adding swap volumes to a VxVM rootable system
To add a swap volume to an HP-UX system with a VxVM root disk
1 Initialize the disk that is to be used to hold the swap volume (for example, c2t5d0), and add it to the boot disk group with the disk media name “swapdisk”:
# /etc/vx/bin/vxdisksetup -i c2t5d0
# vxdg -g bootdg adddisk swapdisk=c2t5d0
2 Create a VxVM volume on swapdisk (with a size of 4 gigabytes in this example):
# vxassist -g bootdg -U swap make swapvol1 4g dm:swapdisk
Removing a persistent dump volume
Caution: The system will not boot correctly if you delete a dump volume without first removing it from the crash dump configuration.
Use this procedure to remove a dump volume from the crash dump configuration.
Any volumes on the device should only be grown after the device itself has first been grown. Otherwise, storage other than the device may be used to grow the volumes, or the volume resize may fail if no free storage is available. Resizing should only be performed on devices that preserve data. Consult the array documentation to verify that data preservation is supported and has been qualified.
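A minimal sketch of the resize operation itself (mydg, mydg04 and the new length are hypothetical; see the vxdisk(1M) manual page):
# vxdisk -g mydg resize mydg04 length=16g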
Removing disks
Note: You must disable a disk group as described in “Disabling a disk group” on page 207 before you can remove the last disk in that group. Alternatively, you can destroy the disk group as described in “Destroying a disk group” on page 208.
You can remove a disk from a system and move it to another system if the disk is failing or has failed.
Continue with operation? [y,n,q,?] (default: y)
The vxdiskadm utility removes the disk from the disk group and displays the following success message:
VxVM INFO V-5-2-268 Removal of disk mydg01 is complete.
You can now remove the disk or leave it on your system as a replacement.
Removing a disk with no subdisks
To remove a disk that contains no subdisks from its disk group, run the vxdiskadm program and select item 2 (Remove a disk) from the main menu, and respond to the prompts as shown in this example to remove mydg02:
Enter disk name [<disk>,list,q,?] mydg02
VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg02 from group mydg.
To replace a disk
1 Select menu item 3 (Remove a disk for replacement) from the vxdiskadm main menu.
2 At the following prompt, enter the name of the disk to be replaced (or enter list for a list of disks):
Remove a disk for replacement
Menu: VolumeManager/Disk/RemoveForReplace
Use this menu operation to remove a physical disk from a disk group, while retaining the disk name. This changes the state for the disk name to a removed disk.
The following devices are available as replacements:
c0t1d0
You can choose one of these disks now, to replace mydg02. Select “none” if you do not wish to select a replacement disk.
Choose a device, or select “none” [<device>,none,q,?] (default: c0t1d0)
Note: Do not choose the old disk drive as a replacement even though it appears in the selection list. If necessary, you can choose to initialize a new disk.
VxVM NOTICE V-5-2-158 Disk replacement completed successfully.
c0t1d0 c1t1d0
You can choose one of these disks to replace mydg02. Choose "none" to initialize another disk to replace mydg02.
8 After using the vxdiskadm command to replace one or more failed disks in a VxVM cluster, run the following command on all the cluster nodes:
# vxdctl enable
Then run the following command on the master node:
# vxreattach -r accessname
where accessname is the disk access name (such as c0t1d0). This initiates the recovery of all the volumes on the disks. Alternatively, halt the cluster, reboot all the cluster nodes, and restart the cluster.
vxdiskadm enables the specified device.
3 At the following prompt, indicate whether you want to enable another device (y) or return to the vxdiskadm main menu (n):
Enable another device? [y,n,q,?] (default: n)
Taking a disk offline
There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it.
Renaming a disk
If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.
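To rename a disk, a command of the following form might be used (mydg, mydg01 and mydg05 are hypothetical names; see the vxedit(1M) manual page):
# vxedit -g mydg rename mydg01 mydg05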
The vxassist command overrides the reservation and creates a 20 megabyte volume on mydg03. However, the command:
# vxassist -g mydg make vol04 20m
does not use mydg03, even if there is no free space on any other disk.
To turn off reservation of a disk, use the following command:
# vxedit [-g diskgroup] set reserve=off diskname
See the vxedit(1M) manual page for more information.
Displaying disk information with vxdiskadm
Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status. The list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk.
To display disk information
1 Start the vxdiskadm program, and select list (List disk information) from the main menu.
Controlling Powerfail Timeout
Powerfail Timeout is an attribute of a SCSI disk connected to an HP-UX host. This is used to detect and handle I/O on non-responding disks. See the pfto(7) man page. VxVM uses this mechanism in its Powerfail Timeout (pfto) feature. You can specify a timeout value for individual VxVM disks using the vxdisk command. If a disk fails to respond in the specified timeout period, the driver receives a timer interrupt.
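As a hedged sketch of setting a timeout value on a single disk (mydg, mydg01 and the 50-second value are hypothetical, and the pfto attribute is an assumption to be checked against the vxdisk(1M) manual page):
# vxdisk -g mydg set mydg01 pfto=50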
Enabling or disabling PFTO
To enable or disable PFTO on a disk, use the following command:
# vxdisk -g dg_name set disk_name pftostate={enabled|disabled}
For example, to disable PFTO on the disk c5t0d6:
# vxdisk -g testdg set c5t0d6 pftostate=disabled
To enable or disable PFTO on a disk group, use the following command:
# vxpfto -g dg_name -o pftostate={enabled|disabled}
For example, to disable PFTO on all disks in the disk group testdg:
# vxpfto -g testdg -o pftostate=disabled
Chapter 3 Administering dynamic multipathing (DMP) The dynamic multipathing (DMP) feature of Veritas Volume Manager (VxVM) provides greater availability, reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. DMP coexists with the native multipathing in HP-UX. For more information, see “DMP coexistence with HP-UX native multipathing” on page 130.
For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary and secondary controllers are each connected to a separate group of LUNs.
Figure 3-1 How DMP represents multiple physical paths to a disk as one node (multiple paths from host controllers c1 and c2 to a disk are mapped by DMP to a single DMP node in VxVM)
As described in “Enclosure-based naming” on page 23, VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs.
See “Changing the disk-naming scheme” on page 91 for details of how to change the naming scheme that VxVM uses for disk devices.
See “Discovering and configuring newly added disk devices” on page 82 for a description of how to make newly added disk hardware known to a host system.
DMP is also informed when a connection is repaired or restored, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). If required, the response of DMP to I/O failure on a path can be tuned for the paths to individual arrays.
DMP coexistence with HP-UX native multipathing
The HP-UX 11i v3 release includes support for native multipathing, which can coexist with DMP. HP-UX native multipathing creates a persistent (agile) device in the /dev/disk and /dev/rdisk directories for each disk that can be accessed by one or more physical paths. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories.
3 Restart all the volumes in each disk group:
# vxvol -g diskgroup startall
The output from the vxdisk list command now shows only HP-UX native multipathing metanode names, for example:
# vxdisk list
DEVICE   TYPE          DISK  GROUP  STATUS
disk155  auto:LVM      -     -      LVM
disk156  auto:LVM      -     -      LVM
disk224  auto:cdsdisk  -     -      online
disk225  auto:cdsdisk  -     -      online
disk226  auto:cdsdisk  -     -      online
disk227  auto:cdsdisk  -     -      online
disk228  auto:cdsdisk  -     -      online
disk229  auto:cdsdisk  -     -      online
When H
and under the new naming scheme as:
# vxdisk list
DEVICE   TYPE          DISK  GROUP  STATUS
disk155  auto:LVM      -     -      LVM
disk156  auto:LVM      -     -      LVM
disk224  auto:cdsdisk  -     -      online
disk225  auto:cdsdisk  -     -      online
disk226  auto:cdsdisk  -     -      online
disk227  auto:cdsdisk  -     -      online
disk228  auto:cdsdisk  -     -      online
disk229  auto:cdsdisk  -     -      online
See “Changing the disk-naming scheme” on page 91.
DMP in a clustered environment
Note: You need an additional license to use the cluster feature of VxVM.
Enabling or disabling controllers with shared disk groups
Prior to release 5.0, VxVM did not allow enabling or disabling of paths or controllers connected to a disk that is part of a shared Veritas Volume Manager disk group. From VxVM 5.0 onward, such operations are supported on shared DMP nodes in a cluster.
◆ Select option 1 to exclude all paths through the specified controller from the view of VxVM. These paths remain in the disabled state until the next reboot, or until the paths are re-included.
◆ Select option 2 to exclude specified paths from the view of VxVM.
◆ Select option 3 to exclude disks from the view of VxVM that match a specified Vendor ID and Product ID.
?   Display help about menu
??  Display help about the menuing system
q   Exit from menus
◆ Select option 1 to make all paths through a specified controller visible to VxVM.
◆ Select option 2 to make specified paths visible to VxVM.
◆ Select option 3 to make disks visible to VxVM that match a specified Vendor ID and Product ID.
◆ Select option 4 to remove a pathgroup definition.
Enabling and disabling I/O for controllers and storage processors
DMP allows you to turn off I/O for a controller or the array port of a storage processor so that you can perform administrative operations. This feature can be used for maintenance of HBA controllers on the host, or array ports that are attached to disk arrays supported by VxVM.
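For example (a sketch; c2 is a hypothetical controller name), I/O might be disabled for a controller before maintenance and re-enabled afterwards:
# vxdmpadm disable ctlr=c2
# vxdmpadm enable ctlr=c2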
Displaying DMP database information
You can use the vxdmpadm command to list DMP database information and perform other administrative tasks. This command allows you to list all controllers that are connected to disks, and other related information that is stored in the DMP database. You can use this information to locate system hardware, and to help you decide which controllers need to be enabled or disabled.
devicetag: c1t0d3
type:      simple
hostid:    zort
disk:      name=mydg04 id=962923652.362193.zort
timeout:   30
group:     name=mydg id=962212937.1025.zort
info:      privoffset=128
flags:     online ready private autoconfig autoimport imported
pubpaths:  block=/dev/vx/dmp/c1t0d3
privpaths: char=/dev/vx/rdmp/c1t0d3
version:   2.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Administering DMP using vxdmpadm The vxdmpadm utility is a command line administrative interface to the DMP feature of VxVM. You can use the vxdmpadm utility to perform the following tasks. ■ Retrieve the name of the DMP device corresponding to a particular path. ■ Display the members of a LUN group. ■ List all paths under a DMP device node, HBA controller or array port.
140 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The physical path is specified by argument to the nodename attribute, which must be a valid path listed in the /dev/rdsk directory.
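For example, to retrieve the name of the DMP node that controls a particular physical path (the path name here is illustrative):
# vxdmpadm getdmpnode nodename=c3t2d1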
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm For A/P arrays in which the I/O policy is set to singleactive, only one path is shown as ENABLED(A). The other paths are enabled but not available for I/O. If the I/O policy is not set to singleactive, DMP can use a group of paths (all primary or all secondary) for I/O, which are shown as ENABLED(A). See “Specifying the I/O policy” on page 147 for more information.
142 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm operations being disabled on that controller by using the vxdmpadm disable command.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm NAME ENCLR-NAME ARRAY-PORT-ID pWWN ============================================================== c2t66d0 HDS9500V0 1A 20:00:00:E0:8B:06:5F:19 Displaying information about TPD-controlled devices The third-party driver (TPD) coexistence feature allows I/O that is controlled by third-party multipathing drivers to bypass DMP while retaining the monitoring capabilities of DMP.
144 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Gathering and displaying I/O statistics You can use the vxdmpadm iostat command to gather and display I/O statistics for a specified DMP node, enclosure, path or controller.
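A typical sequence is to enable statistics gathering, display the accumulated counters, and then stop gathering; a minimal sketch:
# vxdmpadm iostat start
# vxdmpadm iostat show all
# vxdmpadm iostat stop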
                 OPERATIONS         BYTES        AVG TIME(ms)
PATHNAME     READS  WRITES    READS  WRITES      READS    WRITES
c2t115d0        87       0    44544       0   0.001200  0.000000
c3t115d0         0       0        0       0   0.000000  0.000000
c2t103d0        87       0    44544       0   0.007315  0.000000
c3t103d0         0       0        0       0   0.000000  0.000000
c2t102d0        87       0    44544       0   0.001132  0.000000
c3t102d0         0       0        0       0   0.000000  0.000000
c2t121d0        87       0    44544       0   0.000997  0.000000
c3t121d0         0       0        0       0   0.000000  0.000000
c2t112d0        87       0    44544       0   0.001559  0.000000
c3t112d0         0       0        0       0   0.000000  0.000000
c2t96d0         87       0    44544       0   ...
c3t96d0          0       0        0       0   0.000000  0.000000
c2t106d0        87       0    44544       0   ...
c3t106d0         0       0        0       0   0.000000  0.000000
c2t113d0        87       0    44544       0   ...
c3t113d0         0       0        0       0   0.000000  0.000000
c2t119d0        87       0    44544       0   ...
c3t119d0         0       0        0       0   0.000000  0.000000
c3t115d0         0       0        0       0   0.000000  0.000000

cpu usage = 59us    per cpu memory = 4096b
                 OPERATIONS         BYTES        AVG TIME(ms)
PATHNAME     READS  WRITES    READS  WRITES      READS    WRITES
c3t115d0         0       0        0       0   0.000000  0.000000

Setting the attributes of the paths to an enclosure
You can use the vxdmpadm setattr command to set the following attributes of the paths to an enclosure or disk array:
■ active
Changes a standby (failover) path to an active path.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm ■ primary Defines a path as being the primary path for an Active/Passive disk array. The following example specifies a primary path for an A/P disk array: # vxdmpadm setattr path c3t10d0 pathtype=primary ■ secondary Defines a path as being the secondary path for an Active/Passive disk array.
148 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Note: Starting with release 4.1 of VxVM, I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are persistent across reboots of the system. Do not edit this file yourself. The following policies may be set: ■ adaptive This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm You can use the size argument to the partitionsize attribute to specify the partition size.
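For example, to set the balanced I/O policy with a partition size of 4096 blocks on an enclosure (the enclosure name enc0 is hypothetical):
# vxdmpadm setattr enclosure enc0 iopolicy=balanced partitionsize=4096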
150 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm ■ minimumq This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. This is suitable for low-end disks or JBODs where a significant track cache does not exist. No further configuration is possible as DMP automatically determines the path with the shortest queue.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm # vxdmpadm setattr arrayname DISK iopolicy=singleactive Scheduling I/O on the paths of an Asymmetric Active/Active array You can specify the use_all_paths attribute in conjunction with the adaptive, balanced, minimumq, priority and round-robin I/O policies to specify whether I/O requests are to be scheduled on the secondary paths in addition to the primary paths of an Asymmetric Active/Active (A/A-A) array.
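For example, to allow I/O to be scheduled on the secondary paths as well as the primary paths of an A/A-A array (the enclosure name enc0 is hypothetical):
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin use_all_paths=yes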
# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &
By running the vxdmpadm iostat command to display the DMP statistics for the device, it can be seen that all I/O is being directed to one path, c5t4d15:
# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
...
cpu usage = 11294us    per cpu memory = 32768b
                 OPERATIONS        KBYTES        AVG TIME(ms)
PATHNAME     READS  WRITES    READS  WRITES      READS    WRITES
c2t0d15          0       0        0       0   0.000000  0.000000
c4t2d15       1086       0     1086       0   0.390424  0.000000
c4t3d15       1048       0     1048       0   0.391221  0.000000
c5t3d15       1036       0     1036       0   0.390927  0.000000
c5t4d15       1021       0     1021       0   0.392752  0.000000

The enclosure can be returned to the single active I/O policy by entering the following command:
# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive

Disabling I/O for paths, controllers or array ports
Note: From release 5.
154 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The disable operation fails if it is issued to a controller that is connected to the root disk through a single path, and there are no root disk mirrors configured on alternate paths. If such mirrors exist, the command succeeds. Enabling I/O for paths, controllers or array ports Note: This operation is not supported for controllers that are used to access disk arrays on which cluster-shareable disk groups are configured.
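For example, I/O can be re-enabled for a controller or for a specific array port; a sketch with hypothetical controller, enclosure, and port names:
# vxdmpadm enable ctlr=c2
# vxdmpadm enable enclosure=enc0 portid=1A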
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm For a system with a volume mirrored across 2 controllers on one HBA, set up the configuration as follows: 1 Disable the plex that is associated with the disk device: # /opt/VRTS/bin/vxplex -g diskgroup det plex 2 Stop I/O to all disks through one controller of the HBA: # /opt/VRTS/bin/vxdmpadm disable ctlr=first_cntlr For the other controller on the HBA, enter: # /opt/VRTS/bin/vxdmpadm -f disable ctlr=second_cntlr 3 Upgrade the
156 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Configuring the response to I/O failures By default, DMP is configured to retry a failed I/O request up to 5 times for a single path.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The following example configures time-bound recovery for the enclosure enc0, and sets the value of iotimeout to 60 seconds: # vxdmpadm setattr enclosure enc0 recoveryoption=timebound \ iotimeout=60 The next example sets a fixed-retry limit of 10 for the paths to all Active/Active arrays: # vxdmpadm setattr arraytype A/A recoveryoption=fixedretry \ retrycount=10 Specifying recoveryoption=default resets DMP to the default settings co
158 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The following example shows how to disable I/O throttling for the paths to the enclosure enc0: # vxdmpadm setattr enclosure enc0 recoveryoption=nothrottle The vxdmpadm setattr command can be used to enable I/O throttling on the paths to a specified enclosure, disk array name, or type of array: # vxdmpadm setattr \ {enclosure enc-name|arrayname name|arraytype type}\ recoveryoption=throttle {iotimeout=seconds|queuedepth=n} If the
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Displaying recoveryoption values The following example shows the vxdmpadm getattr command being used to display the recoveryoption option values that are set on an enclosure.
160 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Configuring DMP path restoration policies DMP maintains a kernel thread that re-examines the condition of paths at a specified interval. The type of analysis that is performed on the paths depends on the checking policy that is configured. Note: The DMP path restoration thread does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.
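For example, to stop the path restoration thread and restart it with a different checking policy (the interval shown is illustrative):
# vxdmpadm stop restore
# vxdmpadm start restore policy=check_disabled interval=300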
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The interval attribute must be specified for this policy. The default number of cycles between running the check_all policy is 10. The interval attribute specifies how often the path restoration thread examines the paths. For example, after stopping the path restoration thread, the polling interval can be set to 400 seconds using the following command: # vxdmpadm start restore interval=400 Note: The default interval is 300 seconds.
162 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Displaying information about the DMP error-handling thread To display information about the kernel thread that handles DMP errors, use the following command: # vxdmpadm stat errord One daemon should be shown as running. Configuring array policy modules An array policy module (APM) is a dynamically loadable kernel module that may be provided by some vendors for use in conjunction with an array.
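To see which APMs are installed, and which are currently active, a minimal sketch:
# vxdmpadm listapm all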
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Note: By default, DMP uses the most recent APM that is available. Specify the -u option instead of the -a option if you want to force DMP to use an earlier version of the APM. The current version of an APM is replaced only if it is not in use.
Chapter 4 Creating and administering disk groups This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. Note: In releases of Veritas Volume Manager (VxVM) prior to 4.0, a system installed with VxVM was configured with a default disk group, rootdg, that had to contain at least one disk.
166 Creating and administering disk groups As system administrator, you can create additional disk groups to arrange your system’s disks for different purposes. Many systems do not use more than one disk group, unless they have a large number of disks. Disks can be initialized, reserved, and added to disk groups at any time. Disks need not be added to disk groups until the disks are needed to create VxVM objects. When a disk is added to a disk group, it is given a name (for example, mydg02).
Creating and administering disk groups Specifying a disk group to commands Specifying a disk group to commands Note: Most VxVM commands require superuser or equivalent privileges. Many VxVM commands allow you to specify a disk group using the -g option.
168 Creating and administering disk groups Specifying a disk group to commands Rules for determining the default disk group It is recommended that you use the -g option to specify a disk group to VxVM commands that accept this option. If you do not specify the disk group, VxVM applies rules in the following order until it determines a disk group name: ■ Use the default disk group name that is specified by the environment variable VXVM_DEFAULTDG.
Creating and administering disk groups Displaying disk group information If bootdg is specified as the argument to this command, the default disk group is set to be the same as the currently defined system-wide boot disk group. If nodg is specified as the argument to the vxdctl defaultdg command, the default disk group is undefined. Note: The specified diskgroup need not currently exist on the system. See the vxdctl(1M) and vxdg(1M) manual pages for more information.
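For example, to set a default disk group and then display the current setting (the disk group name mydg is hypothetical):
# vxdctl defaultdg mydg
# vxdg defaultdg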
flags:   online ready private autoconfig autoimport imported
diskid:  963504891.1070.bass
dgname:  newdg
dgid:    963504895.1075.bass
hostid:  bass
info:    privoffset=128

Displaying free space in a disk group
Before you add volumes and file systems to your system, make sure you have enough free disk space to meet your needs.
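The free space in a disk group can be displayed with the vxdg free command; for example, for a single (hypothetical) disk group:
# vxdg -g mydg free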
Creating and administering disk groups Adding a disk to a disk group A disk group must have at least one disk associated with it. A new disk group can be created when you use menu item 1 (Add or initialize one or more disks) of the vxdiskadm command to add disks to VxVM control, as described in “Adding a disk to VxVM” on page 97. The disks to be added to a disk group must not belong to an existing disk group.
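From the command line, a new disk group can be created and a further disk added to it as in this sketch (all names are hypothetical, and the disks must already have been initialized for VxVM use):
# vxdg init mydg mydg01=c1t0d0
# vxdg -g mydg adddisk mydg02=c1t1d0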
172 Creating and administering disk groups Removing a disk from a disk group Removing a disk from a disk group Note: Before you can remove the last disk from a disk group, you must disable the disk group as described in “Disabling a disk group” on page 207. Alternatively, you can destroy the disk group as described in “Destroying a disk group” on page 208.
Creating and administering disk groups Deporting a disk group ■ There is not enough space on the remaining disks. ■ Plexes or striped subdisks cannot be allocated on different disks from existing plexes or striped subdisks in the volume. If vxdiskadm cannot move some volumes, you may need to remove some plexes from some disks to free more space before proceeding with the disk removal operation.
174 Creating and administering disk groups Importing a disk group Enter name of disk group [,list,q,?] (default: list) newdg 5 At the following prompt, enter y if you intend to remove the disks in this disk group: VxVM INFO V-5-2-377 The requested operation is to disable access to the removable disk group named newdg. This disk group is stored on the following disks: newdg01 on device c1t1d0 You can choose to disable access to (also known as “offline”) these disks.
Creating and administering disk groups Handling disks with duplicated identifiers Enable access to (import) a disk group Menu: VolumeManager/Disk/EnableDiskGroup Use this operation to enable access to a disk group. This can be used as the final part of moving a disk group from one system to another. The first part of moving a disk group is to use the “Remove access to (deport) a disk group” operation on the original host.
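The command-line equivalents of these vxdiskadm menu operations are the vxdg deport and vxdg import commands; for example (the disk group name is hypothetical):
# vxdg deport newdg
# vxdg import newdg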
176 Creating and administering disk groups Handling disks with duplicated identifiers compared with the UDID that is set in the disk’s private region. If the UDID values do not match, the udid_mismatch flag is set on the disk. This flag can be viewed with the vxdisk list command.
Creating and administering disk groups Handling disks with duplicated identifiers # vxdg -o useclonedev=on [-o updateid] import mydg Note: This form of the command allows only cloned disks to be imported. All non-cloned disks remain unimported. If the clone_disk flag is set on a disk, this indicates the disk was previously imported into a disk group with the udid_mismatch flag set.
178 Creating and administering disk groups Handling disks with duplicated identifiers To check which disks in a disk group contain copies of this configuration information, use the vxdg listmeta command: # vxdg [-q] listmeta diskgroup The -q option can be specified to suppress detailed configuration information from being displayed.
These tags can be viewed by using the vxdisk listtag command:
# vxdisk listtag
DEVICE                NAME  VALUE
TagmaStore-USP0_24    t2    v2
TagmaStore-USP0_25    t1    v1
TagmaStore-USP0_28    t1    v1
TagmaStore-USP0_28    t2    v2
The following command ensures that configuration database copies and kernel log copies are maintained for all disks in the disk group mydg that are tagged as t1:
# vxdg -g mydg set tagmeta=on tag=t1 nconfig=all nlog=all
The disk
To import the cloned disks, they must be assigned a new disk group name, and their UDIDs must be updated:
# vxdg -n newdg -o useclonedev=on -o updateid import mydg
# vxdisk -o alldgs list
DEVICE                TYPE          DISK    GROUP  STATUS
TagmaStore-USP0_3     auto:cdsdisk  mydg03  newdg  online clone_disk
TagmaStore-USP0_23    auto:cdsdisk  mydg02  mydg   online
TagmaStore-USP0_25    auto:cdsdisk  mydg...
TagmaStore-USP0_30    ...
TagmaStore-USP0_31    ...
TagmaStore-USP0_32    ...
DEVICE    TYPE          DISK    GROUP  STATUS
EMC0_1    auto:cdsdisk  EMC0_1  mydg   online
EMC0_27   auto:cdsdisk  -       -      online udid_mismatch
The following command imports the cloned disk into the new disk group newdg, and updates the disk’s UDID:
# vxdg -n newdg -o useclonedev=on -o updateid import mydg
The state of the cloned disk is now shown as online clone_disk:
# vxdisk -o alldgs list
DEVICE    TYPE          DISK
EMC0_1    auto:cdsdisk  EMC0_1
EMC0_27   auto:...
182 Creating and administering disk groups Handling disks with duplicated identifiers As the cloned disk EMC0_15 is not tagged as t1, it is not imported. Note that the state of the imported cloned disks has changed from online udid_mismatch to online clone_disk. By default, the state of imported cloned disks is shown as online clone_disk.
Creating and administering disk groups Renaming a disk group Renaming a disk group Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name. To avoid this problem, VxVM allows you to rename a disk group during import or deport.
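For example, a disk group can be given a new name as it is imported, or as it is deported (both names here are hypothetical):
# vxdg -n newdg import mydg
# vxdg -n newdg deport mydg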
184 Creating and administering disk groups Moving disks between disk groups dgid: 774226267.1025.tweety Note: In this example, the administrator has chosen to name the boot disk group as rootdg. The ID of this disk group is 774226267.1025.tweety. This procedure assumes that all the disks in the boot disk group are accessible by both hosts. 2 Shut down the original host.
Creating and administering disk groups Moving disk groups between systems You can also move a disk by using the vxdiskadm command. Select item 3 (Remove a disk) from the main menu, and then select item 1 (Add or initialize a disk). See “Moving objects between disk groups” on page 203 for an alternative and preferred method of moving disks between disk groups. This method preserves VxVM objects, such as volumes, that are configured on the disks.
186 Creating and administering disk groups Moving disk groups between systems Caution: The purpose of the lock is to ensure that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If two systems try to access the same disks at the same time, this must be managed using software such as the clustering functionality of VxVM.
Creating and administering disk groups Moving disk groups between systems The following error message indicates a recoverable error. VxVM vxdg ERROR V-5-1-587 Disk group groupname: import failed: Disk for disk group not found If some of the disks in the disk group have failed, you can force the disk group to be imported by specifying the -f option to the vxdg import command: # vxdg -f import diskgroup Caution: Be careful when using the -f option.
188 Creating and administering disk groups Moving disk groups between systems minor numbers near the top of this range to allow for temporary device number remapping in the event that a device minor number collision may still occur. VxVM reserves the range of minor numbers from 0 to 999 for use with volumes in the boot disk group. For example, the rootvol volume is always assigned minor number 0. If you do not specify the base of the minor number range for a disk group, VxVM chooses one at random.
Creating and administering disk groups Moving disk groups between systems reminor operation on the nodes that are in the cluster to resolve the conflict. In a cluster where more than one node is joined, use a base minor number which does not conflict on any node. For further information on minor number reservation, see the vxdg(1M) manual page.
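The base minor number of a disk group is changed with the vxdg reminor command; a sketch of the usual invocation (the disk group name and the base value 4000 are illustrative):
# vxdg reminor mydg 4000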
You can use the following command to discover the maximum number of volumes that are supported by VxVM on a Linux host:
# cat /proc/sys/vxvm/vxio/vol_max_volumes
4079
See the vxdg(1M) manual page for more information.
Figure 4-1 Typical arrangement of a 2-node campus cluster (Node 0 and Node 1 are connected by a redundant private network, and attach through Fibre Channel switches to disk enclosures enc0 in Building A and enc1 in Building B)
A serial split brain condition typically arises in a cluster when a private (nonshared) disk group is imported on Node 0 with Node 1 configured as the failover node.
192 Creating and administering disk groups Handling conflicting configuration copies for the disks in their copies of the configuration database, and also in each disk’s private region, are updated separately on that host. When the disks are subsequently re-imported into the original shared disk group, the actual serial IDs on the disks do not agree with the expected values from the configuration copies on other disks in the disk group.
Creating and administering disk groups Handling conflicting configuration copies ■ If the other disks were also imported on another host, no disk can be considered to have a definitive copy of the configuration database. The figure below illustrates how this condition can arise for two disks.
194 Creating and administering disk groups Handling conflicting configuration copies The following section, “Correcting conflicting configuration information,” describes how to fix this condition. For more information on how to set up and maintain a remote mirror configuration, see “Administering sites and remote mirrors” on page 431. Correcting conflicting configuration information Note: This procedure requires that the disk group has a version number of at least 110.
In this example, the disk group has four disks, and is split so that two disks appear to be on each side of the split. You can specify the -c option to vxsplitlines to print detailed information about each of the disk IDs from the configuration copy on a disk specified by its disk access name:
# vxsplitlines -g newdg -c c2t6d0
DANAME(DMNAME)     || Actual SSB
c2t5d0( c2t5d0 )   || 0....
c2t6d0( c2t6d0 )   || ...
c2t7d0( c2t7d0 )   || ...
c2t8d0( c2t8d0 )   || ...
196 Creating and administering disk groups Reorganizing the contents of disk groups ■ To perform online maintenance and upgrading of fault-tolerant systems that can be split into separate hosts for this purpose, and then rejoined. ■ To isolate volumes or disks from a disk group, and process them independently on the same host or on a different host. This allows you to implement off-host processing solutions for the purposes of backup or decision support.
Creating and administering disk groups Reorganizing the contents of disk groups imported disk group exists with the same name as the target disk group. An existing deported disk group is destroyed if it has the same name as the target disk group (as is the case for the vxdg init command). The split operation is illustrated in Figure 4-5.
Figure 4-6 Disk group join operation (the source disk group is joined to the target disk group; after the join, only the target disk group remains)
These operations are performed on VxVM objects such as disks or top-level volumes, and include all component objects such as sub-volumes, plexes and subdisks.
Creating and administering disk groups Reorganizing the contents of disk groups must recover the disk group manually as described in the section “Recovery from Incomplete Disk Group Moves” in the chapter “Recovery from Hardware Failure” of the Veritas Volume Manager Troubleshooting Guide. Limitations of disk group split and join The disk group split and join feature has the following limitations: ■ Disk groups involved in a move, split or join must be version 90 or greater.
200 Creating and administering disk groups Reorganizing the contents of disk groups within storage pools may not be split or moved. See the Veritas Storage Foundation Intelligent Storage Provisioning Administrator’s Guide for a description of ISP and storage pools. ■ If a cache object or volume set that is to be split or moved uses ISP volumes, the storage pool that contains these volumes must also be specified. The following sections describe how to use the vxdg command to reorganize disk groups.
Creating and administering disk groups Reorganizing the contents of disk groups plexes were placed on the same disks as the data plexes for convenience when performing disk group split and move operations. As version 20 DCOs support dirty region logging (DRL) in addition to Persistent FastResync, it is preferable for the DCO plexes to be separated from the data plexes. This improves the performance of I/O from/to the volume, and provides resilience for the DRL logs.
Figure 4-7 Examples of disk groups that can and cannot be split (in the first example, the disk group can be split because the volume DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data; in the second example, the disk group cannot be split because the snapshot and volume DCO plexes cannot accompany their volumes)
Creating and administering disk groups Reorganizing the contents of disk groups Moving objects between disk groups To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ...
For example, the following output from vxprint shows the contents of disk groups rootdg and mydg:
# vxprint
Disk group: rootdg

TY NAME       ASSOC      KSTATE  LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg rootdg     rootdg     -       -         -       -      -       -
dm rootdg02   c1t97d0    -       17678493  -       -      -       -
dm rootdg03   c1t112d0   -       17678493  -       -      -       -
dm rootdg04   c1t114d0   -       17678493  -       -      -       -
dm rootdg06   c1t98d0    -       17678493  -       -      -       -

Disk group: mydg

TY NAME       ASSOC      KSTATE  LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg mydg       mydg       -       -         -       -      -       -
dm mydg01     c0t1d0     -       ...
dm mydg05     c1t96d0    -       ...
dm mydg07     c1t...
Disk group: mydg

TY NAME      ASSOC      KSTATE  LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg mydg      mydg       -       -         -       -      -       -
dm mydg07    c1t99d0    -       17678493  -       -      -       -
dm mydg08    c1t100d0   -       17678493  -       -      -       -

The following commands would also achieve the same result:
# vxdg move mydg rootdg mydg01 mydg05
# vxdg move mydg rootdg vol1

Splitting disk groups
To remove a self-contained set of VxVM objects from an imported source disk group to a new target disk group, use the following command:
# vxdg [-o expand] [-o override|verify] split sourcedg targetdg \
  object ...
The output from vxprint after the split shows the new disk group, mydg:
# vxprint
Disk group: rootdg

TY NAME          ASSOC      KSTATE   LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg rootdg        rootdg     -        -         -       -      -       -
dm rootdg01      c0t1d0     -        17678493  -       -      -       -
dm rootdg02      c1t97d0    -        17678493  -       -      -       -
dm rootdg03      c1t112d0   -        17678493  -       -      -       -
dm rootdg04      c1t114d0   -        ...
dm rootdg05      c1t96d0    -        ...
dm rootdg06      c1t98d0    -        ...
v  vol1          fsgen      ENABLED  ...
pl vol1-01       vol1       ENABLED  ...
sd rootdg01-01   vol1-01    ENABLED  ...
pl vol1-02       vol1       ENABLED  ...
sd rootdg05-01   vol1-02    ENABLED  ...
Disk group: mydg

TY NAME        ASSOC     KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg mydg        mydg      -        -         -       -       -       -
dm mydg05      c1t96d0   -        17678493  -       -       -       -
dm mydg06      c1t98d0   -        17678493  -       -       -       -
v  vol1        fsgen     ENABLED  2048      -       ACTIVE  -       -
pl vol1-01     vol1      ENABLED  3591      -       ACTIVE  -       -
sd mydg01-01   vol1-01   ENABLED  3591      0       -       -       -
pl vol1-02     vol1      ENABLED  3591      -       ACTIVE  -       -
sd mydg05-01   vol1-02   ENABLED  3591      0       -       -       -

The following command joins disk group mydg to rootdg:
# vxdg join mydg rootdg
The moved volumes are initially disabled
208 Creating and administering disk groups Destroying a disk group Destroying a disk group The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization: # vxdg destroy diskgroup Caution: This command destroys all data on the disks. When a disk group is destroyed, the disks that are released can be re-used in other disk groups.
Creating and administering disk groups Upgrading a disk group becomes incompatible with earlier releases of VxVM that do not support the new version. Before the imported disk group is upgraded, no changes are made to the disk group to prevent its use on the release from which it was imported until you explicitly upgrade it to the current release. Until completion of the upgrade, the disk group can be used “as is” provided there is no attempt to use the features of the current version.
210 Creating and administering disk groups Upgrading a disk group Importing the disk group of a previous version on a Veritas Volume Manager system prevents the use of features introduced since that version was released.
Table 4-2 Features supported by disk group versions

Disk group version   New features supported               Previous version features supported
70                   Non-Persistent FastResync,
                     Sequential DRL, Unrelocate,
                     VVR Enhancements
60                   Online Relayout,
                     Safe RAID-5 Subdisk Moves
50                   SRVM (now known as Veritas           20, 30, 40
                     Volume Replicator or VVR)
40                   Hot-Relocation                       20, 30
30                   VxSmartSync Recovery Accelerator     20
20                   Dirty Region Logging (DRL),
                     Disk Group Configuration...
212 Creating and administering disk groups Managing the configuration daemon in VxVM To create a disk group with a previous version, specify the -T version option to the vxdg init command. For example, to create a disk group with version 120 that can be imported by a system running VxVM 4.1, use the following command: # vxdg -T 120 init newdg newdg01=c0t3d0 This creates a disk group, newdg, which can be imported by Veritas Volume Manager 4.1. Note that while this disk group can be imported on the VxVM 4.
Creating and administering disk groups Backing up and restoring disk group configuration data For more information about how to use vxdctl, refer to the vxdctl(1M) manual page. Backing up and restoring disk group configuration data The disk group configuration backup and restoration feature allows you to back up and restore all configuration data for disk groups, and for VxVM objects such as volumes that are configured within the disk groups.
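These operations are performed with the vxconfigbackup and vxconfigrestore utilities; the sketch below (the disk group name is hypothetical, and the exact flags are documented in the vxconfigbackup(1M) and vxconfigrestore(1M) manual pages) backs up a disk group and then restores it in two stages:
# vxconfigbackup mydg
# vxconfigrestore -p mydg
# vxconfigrestore -c mydg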
Using vxnotify to monitor configuration changes
Chapter 5 Creating and administering subdisks This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a Veritas Volume Mananger (VxVM) configuration that are required to create plexes and volumes. Note: Most VxVM commands require superuser or equivalent privileges. Creating subdisks Note: Subdisks are created automatically if you use the vxassist command or the Veritas Enterprise Administrator (VEA) to create volumes.
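Subdisks can also be created explicitly with vxmake; a minimal sketch (the disk and subdisk names, offset, and length are hypothetical):
# vxmake -g mydg sd mydg02-01 mydg02,0,8000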
216 Creating and administering subdisks Displaying subdisk information Note: As for all VxVM commands, the default size unit is s, representing a sector. Add a suffix, such as k for kilobyte, m for megabyte or g for gigabyte, to change the unit of size. For example, 500m would represent 500 megabytes. If you intend to use the new subdisk to build a volume, you must associate the subdisk with a plex (see “Associating subdisks with plexes” on page 218).
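Subdisk information can be displayed with vxprint; for example, to show general information for all subdisks, or full details for one (hypothetical) subdisk:
# vxprint -st
# vxprint -l mydg02-01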
Creating and administering subdisks Moving subdisks Moving subdisks Moving a subdisk copies the disk space contents of a subdisk onto one or more other subdisks. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisks. The old subdisk is dissociated from the plex, and the new subdisks are associated with the plex. The association is at the same offset within the plex as the source subdisk.
For example, to split subdisk mydg03-02, with size 2000 megabytes into subdisks mydg03-02, mydg03-03, mydg03-04 and mydg03-05, each with size 500 megabytes, all in the disk group, mydg, use the following commands:
# vxsd -g mydg -s 1000m split mydg03-02 mydg03-02 mydg03-04
# vxsd -g mydg -s 500m split mydg03-02 mydg03-02 mydg03-03
# vxsd -g mydg -s 500m split mydg03-04 mydg03-04 mydg03-05

Joining subdisks
Joining subdisks combines two or more existing subdisks into one subdisk.
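The general form of the join command, together with an example that uses hypothetical subdisk names, is:
# vxsd [-g diskgroup] join subdisk1 subdisk2 ... new_subdisk
# vxsd -g mydg join mydg03-02 mydg03-03 mydg03-02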
Subdisks can also be associated with a plex that already exists. To associate one or more subdisks with an existing plex, use the following command:
# vxsd [-g diskgroup] assoc plex subdisk1 [subdisk2 subdisk3 ...]
220 Creating and administering subdisks Associating log subdisks If the volume is enabled, the association operation regenerates data that belongs on the subdisk. Otherwise, it is marked as stale and is recovered when the volume is started. Associating log subdisks Note: The version 20 DCO volume layout includes space for a DRL. Do not apply the procedure described in this section to a volume that has a version 20 DCO volume associated with it.
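A log subdisk is attached to a plex with the aslog keyword; a sketch with hypothetical plex and subdisk names:
# vxsd -g mydg aslog vol01-01 mydg02-01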
Creating and administering subdisks Dissociating subdisks from plexes Dissociating subdisks from plexes To break an established connection between a subdisk and the plex to which it belongs, the subdisk is dissociated from the plex. A subdisk is dissociated when the subdisk is removed or used in another plex.
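The dissociation itself uses the dis keyword; a sketch with a hypothetical subdisk name:
# vxsd -g mydg dis mydg02-01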
222 Creating and administering subdisks Changing subdisk attributes ■ putiln ■ tutiln ■ len ■ comment The putiln field attributes are maintained on reboot; tutiln fields are temporary and are not retained on reboot. VxVM sets the putil0 and tutil0 utility fields. Other Symantec products, such as the Veritas Enterprise Administrator (VEA), set the putil1 and tutil1 fields. The putil2 and tutil2 are available for you to use for site-specific purposes.
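For example, a comment can be set on a subdisk with vxedit (the names and text are hypothetical):
# vxedit -g mydg set comment="subdisk comment" mydg02-01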
Chapter 6 Creating and administering plexes This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
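A plex is created from existing subdisks with vxmake; a minimal sketch (the plex and subdisk names are hypothetical):
# vxmake -g mydg plex vol01-02 sd=mydg02-01,mydg02-02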
224 Creating and administering plexes Creating a striped plex Creating a striped plex To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 in the disk group, mydg, with a stripe width of 32 sectors and 2 columns, use the following command: # vxmake -g mydg plex pl-01 layout=stripe stwidth=32 ncolumn=2 \ sd=mydg01-01,mydg02-01 To use a plex to build a volume, you must associate the plex with the volume.
Creating and administering plexes Displaying plex information VxVM utilities use plex states to: ■ indicate whether volume contents have been initialized to a known state ■ determine if a plex contains a valid copy (mirror) of the volume contents ■ track whether a plex was in active use at the time of a system failure ■ monitor operations on plexes This section explains the individual plex states in detail.
226 Creating and administering plexes Displaying plex information EMPTY plex state Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized. IOFAIL plex state The IOFAIL plex state is associated with persistent state logging. When the vxconfigd daemon detects an uncorrectable I/O failure on an ACTIVE plex, it places the plex in the IOFAIL state to exclude it from the recovery selection process at volume start time.
Creating and administering plexes Displaying plex information SNAPTMP plex state The SNAPTMP plex state is used during a vxassist snapstart operation when a snapshot is being prepared on a volume. STALE plex state If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state. Also, if an I/O error occurs on a plex, the kernel stops using and updating the contents of that plex, and the plex state is set to STALE.
228 Creating and administering plexes Displaying plex information TEMPRMSD plex state The TEMPRMSD plex state is used by vxassist when attaching new data plexes to a volume. If the synchronization operation does not complete, the plex and its subdisks are removed. Plex condition flags vxprint may also display one of the following condition flags in the STATE field: IOFAIL plex condition The plex was detached as a result of an I/O failure detected during normal volume I/O.
Creating and administering plexes Attaching and associating plexes Plex kernel states The plex kernel state indicates the accessibility of the plex to the volume driver which monitors it. Note: No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled. The following plex kernel states are defined: DETACHED plex kernel state Maintenance is being performed on the plex.
230 Creating and administering plexes Taking plexes offline Note: You can also use the command vxassist mirror volume to add a data plex as a mirror to an existing volume. Taking plexes offline Once a volume has been created and placed online (ENABLED), VxVM can temporarily disconnect plexes from the volume. This is useful, for example, when the hardware on which the plex resides needs repair or when a volume has been left unstartable and a source plex for the volume revive must be chosen manually.
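A plex is taken offline with the vxmend off command; a sketch with a hypothetical plex name:
# vxmend -g mydg off vol01-02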
Creating and administering plexes Detaching plexes Detaching plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup] det plex For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command: # vxplex -g mydg det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
If the vxinfo command shows that the volume is unstartable (see “Listing Unstartable Volumes” in the section “Recovery from Hardware Failure” in the Veritas Volume Manager Troubleshooting Guide), set one of the plexes to CLEAN using the following command:
# vxmend [-g diskgroup] fix clean plex
Start the volume using the following command:
# vxvol [-g diskgroup] start volume

Moving plexes
Moving a plex copies the data content from the original plex onto a new plex.
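The move itself uses the vxplex mv command; for example (the plex names are hypothetical):
# vxplex -g mydg mv vol02-02 vol02-03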
Creating and administering plexes Copying volumes to plexes Copying volumes to plexes This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume. To copy a plex, use the following command: # vxplex [-g diskgroup] cp volume new_plex After the copy task is complete, new_plex is not associated with the specified volume volume. The plex contains a complete copy of the volume data.
234 Creating and administering plexes Changing plex attributes Alternatively, you can first dissociate the plex and subdisks, and then remove them with the following commands: # vxplex [-g diskgroup] dis plex # vxedit [-g diskgroup] -r rm plex When used together, these commands produce the same result as the vxplex -o rm dis command. The -r option to vxedit rm recursively removes all objects from the specified object downward.
Chapter 7 Creating volumes This chapter describes how to create volumes in Veritas Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Note: You can also use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
236 Creating volumes Types of volume layouts Types of volume layouts VxVM allows you to create volumes with the following layout types: Concatenated A volume whose subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. For more information, see “Concatenation and spanning” on page 35.
Creating volumes Types of volume layouts Layered Volume A volume constructed from other volumes. Non-layered volumes are constructed by mapping their subdisks to VM disks. Layered volumes are constructed by mapping their subdisks to underlying volumes (known as storage volumes), and allow the creation of more complex forms of logical layout. Examples of layered volumes are stripedmirror and concatenated-mirror volumes. See “Layered volumes” on page 51.
Refer to the following sections for information on creating a volume on which DRL is enabled:
■ “Creating a volume with dirty region logging enabled” on page 252 for creating a volume with DRL log plexes.
■ “Creating a volume with a version 20 DCO volume” on page 252 for creating a volume with DRL configured within a version 20 DCO volume.
Creating volumes Using vxassist 3 Associate plexes with the volume using vxmake vol; see “Creating a volume using vxmake” on page 258. 4 Initialize the volume using vxvol start or vxvol init zero; see “Initializing and starting a volume created using vxmake” on page 261. See “Creating a volume using a vxmake description file” on page 259 for an example of how you can combine steps 1 through 3 using a volume description file with vxmake.
240 Creating volumes Using vxassist ■ Operations result in a set of configuration changes that either succeed or fail as a group, rather than individually. System crashes or other interruptions do not leave intermediate states that you have to clean up. If vxassist finds an error or an exceptional condition, it exits after leaving the system in the same state as it was prior to the attempted operation. The vxassist utility helps you perform the following tasks: ■ Creating volumes.
Creating volumes Using vxassist The section, “Creating a volume on any disk” on page 243 describes the simplest way to create a volume with default attributes. Later sections describe how to create volumes with specific attributes. For example, “Creating a volume on specific disks” on page 244 describes how to control how vxassist uses the available storage space. Setting default values for vxassist The default values that the vxassist command uses may be specified in the file /etc/default/vxassist.
max_nstripe=8
min_nstripe=2
# for RAID-5, by default create between 3 and 8 stripe columns
max_nraid5stripe=8
min_nraid5stripe=3
# by default, create 1 log copy for both mirroring and RAID-5 volumes
nregionlog=1
nraid5log=1
# by default, limit mirroring log lengths to 32Kbytes
max_regionloglen=32k
# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k
# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k
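The maximum volume size that vxassist can create with a given set of attributes can be queried with the maxsize keyword; a sketch (the disk group, layout, and log count shown are illustrative):
# vxassist -g mydg maxsize layout=raid5 nlog=2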
To discover the value in blocks of the alignment that is set on a disk group, use this command:
# vxprint -g diskgroup -G -F %align
By default, vxassist automatically rounds up the volume size and attribute size values to a multiple of the alignment value. (This is equivalent to specifying the attribute dgalign_checking=round as an additional argument to the vxassist command.)
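To create a volume on any available disks, only a volume name and length need be given; a sketch with hypothetical values:
# vxassist -b -g mydg make anyvol 10g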
244 Creating volumes Creating a volume on specific disks Creating a volume on specific disks VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. If you want a volume to be created on specific disks, you must designate those disks to VxVM. More than one disk can be specified. To create a volume on a specific disk or disks, use the following command: # vxassist [-b] [-g diskgroup] make volume length \ [layout=layout] diskname ...
Creating volumes Creating a volume on specific disks Specifying ordered allocation of storage to volumes Ordered allocation gives you complete control of space allocation. It requires that the number of disks that you specify to the vxassist command must match the number of disks that are required to create a volume. The order in which you specify the disks to vxassist is also significant.
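Ordered allocation is requested with the -o ordered option; in this sketch (all names are hypothetical), the first two disks form the columns and the second two form their mirrors:
# vxassist -b -g mydg -o ordered make ordvol 10g \
  layout=mirror-stripe ncol=2 mydg01 mydg02 mydg03 mydg04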
Figure 7-2 Example of using ordered allocation to create a striped-mirror volume (columns 1 and 2 of the underlying mirrored volumes are formed from mydg01-01 and mydg02-01, which are mirrored by mydg03-01 and mydg04-01 beneath the striped plex)
Additionally, you can use the col_switch attribute to specify how to concatenate space on the disks into columns.
Figure 7-3 Example of using concatenated disk space to create a mirrored-stripe volume (one striped plex concatenates mydg01-01 and mydg02-01 into column 1 and mydg03-01 and mydg04-01 into column 2; the mirror striped plex concatenates mydg05-01 and mydg06-01 into column 1 and mydg07-01 and mydg08-01 into column 2)
Other storage specification classes for controllers, enclosures, targets and trays can be used with ordered allocation.
Figure 7-4 Example of storage allocation used to create a mirrored-stripe volume across controllers (columns 1, 2 and 3 of one striped plex are allocated on controllers c1, c2 and c3; the columns of the mirror striped plex are allocated on controllers c4, c5 and c6)
For other ways in which you can control how vxassist lays out mirrored volumes across controllers, see “Mirroring across targets, controllers or enclosures” on page 255.
Creating volumes Creating a mirrored volume Creating a mirrored volume Note: You need a full license to use this feature. A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails.
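For example, a mirrored volume with two mirrors can be created as in this sketch (the volume name and size are hypothetical):
# vxassist -b -g mydg make mirvol 5g layout=mirror nmirror=2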
250 Creating volumes Creating a volume with a version 0 DCO volume # vxassist [-b] [-g diskgroup] make volume length \ layout=concat-mirror [nmirror=number] Creating a volume with a version 0 DCO volume If a data change object (DCO) and DCO volume are associated with a volume, this allows Persistent FastResync to be used with the volume.
Creating volumes Creating a volume with a version 0 DCO volume # vxdg list diskgroup To upgrade a disk group to version 90, use the following command: # vxdg -T 90 upgrade diskgroup For more information, see “Upgrading a disk group” on page 208.
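The volume itself is then created with a version 0 DCO attached; a sketch with hypothetical values (logtype=dco requests the DCO, and ndcomirror would usually match the number of data plexes):
# vxassist -g mydg make mydcovol 10g layout=mirror nmirror=2 \
  logtype=dco ndcomirror=2 fastresync=on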
252 Creating volumes Creating a volume with a version 20 DCO volume Creating a volume with a version 20 DCO volume To create a volume with an attached version 20 DCO object and volume 1 Ensure that the disk group has been upgraded to the latest version. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the most recent version, use the following command: # vxdg upgrade diskgroup For more information, see “Upgrading a disk group” on page 208.
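The volume is then created with an attached version 20 DCO in a similar way; a sketch with hypothetical values:
# vxassist -g mydg make myvol 10g layout=mirror \
  logtype=dco dcoversion=20 drl=on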
Creating volumes Creating a striped volume Dirty region logging (DRL), if enabled, speeds recovery of mirrored volumes after a system crash.
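A basic striped volume is created by specifying layout=stripe; a sketch with hypothetical values:
# vxassist -b -g mydg make stripevol 10g layout=stripe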
254 Creating volumes Creating a striped volume You can specify the disks on which the volumes are to be created by including the disk names on the command line. For example, to create a 30-gigabyte striped volume on three specific disks, mydg03, mydg04, and mydg05, use the following command: # vxassist -b -g mydg make stripevol 30g layout=stripe \ mydg03 mydg04 mydg05 To change the number of columns or the stripe width, use the ncolumn and stripeunit modifiers with vxassist.
for the attribute stripe-mirror-col-split-trigger-pt that is defined in the vxassist defaults file. If there are multiple subdisks per column, you can choose to mirror each subdisk individually instead of each column. To mirror at the subdisk level, specify the layout as stripe-mirror-sd rather than stripe-mirror. To mirror at the column level, specify the layout as stripe-mirror-col rather than stripe-mirror.
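Mirroring across a storage class is requested with the mirror attribute; for example, to mirror across controllers (all names are hypothetical):
# vxassist -b -g mydg make mirvol 5g layout=mirror nmirror=2 \
  mirror=ctlr ctlr:c2 ctlr:c3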
256 Creating volumes Creating a RAID-5 volume See “Specifying ordered allocation of storage to volumes” on page 245 for a description of other ways in which you can control how volumes are laid out on the specified storage. Creating a RAID-5 volume Note: VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. A RAID-5 volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume.
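A RAID-5 volume is created by specifying layout=raid5; a sketch with hypothetical values (nlog=2 requests the two log plexes recommended below):
# vxassist -b -g mydg make raidvol 10g layout=raid5 nlog=2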
Creating volumes Creating tagged volumes RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated with a RAID-5 volume has a complete copy of the logging information for the volume. To support concurrent access to the RAID-5 array, the log should be several times the stripe size of the RAID-5 plex. It is suggested that you configure a minimum of two RAID-5 log plexes for each RAID-5 volume. These log plexes should be located on different disks.
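A tag can be attached to a volume when it is created by specifying the tag attribute to vxassist; a sketch with hypothetical names and values:
# vxassist -b -g mydg make tagvol 5g layout=mirror tag=dbvol=table_space_1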
258 Creating volumes Creating a volume using vxmake Tag names and tag values are case-sensitive character strings of up to 256 characters. Tag names can consist of letters (A through Z and a through z), numbers (0 through 9), dashes (-), underscores (_) or periods (.) from the ASCII character set. A tag name must start with either a letter or an underscore. Tag values can consist of any character from the ASCII character set with a decimal value from 32 through 127.
Creating volumes Creating a volume using vxmake If each column in a RAID-5 plex is to be created from multiple subdisks which may span several physical disks, you can specify to which column each subdisk should be added.
The following sample description file defines a volume, db, with two plexes, db-01 and db-02:

#rty    #name        #options
sd      mydg03-01    disk=mydg03 offset=0 len=10000
sd      mydg03-02    disk=mydg03 offset=25000 len=10480
sd      mydg04-01    disk=mydg04 offset=0 len=8000
sd      mydg04-02    disk=mydg04 offset=15000 len=8000
sd      mydg04-03    disk=mydg04 offset=30000 len=4480
plex    db-01        layout=STRIPE ncolumn=2 stwidth=16k
                     sd=mydg03-01:0/0,mydg03-02:0/10000,mydg04-01:1/0,
                     mydg04-02:1/8000,mydg04-03:1/16000
Creating volumes Initializing and starting a volume As an alternative to the -b option, you can specify the init=active attribute to make a new volume immediately available for use. In this example, init=active is specified to prevent VxVM from synchronizing the empty data plexes of a new mirrored volume: # vxassist [-g diskgroup] make volume length layout=mirror \ init=active Caution: There is a very small risk of errors occurring when the init=active attribute is used.
262 Creating volumes Accessing a volume Accessing a volume As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
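For example, on HP-UX a VxFS file system can be created on, and mounted from, the volume’s character and block device nodes (the names here are hypothetical):
# newfs -F vxfs /dev/vx/rdsk/mydg/myvol
# mount -F vxfs /dev/vx/dsk/mydg/myvol /mnt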
Chapter 8 Administering volumes This chapter describes how to perform common maintenance tasks on volumes in Veritas Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, and changing the layout of volumes without taking them offline. Note: You can also use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
264 Administering volumes Displaying volume information Displaying volume information You can use the vxprint command to display information about how a volume is configured.
# vxprint -g mydg -t voldef
This is example output from this command:

V  NAME     RVG/VSET/CO  KSTATE   STATE   LENGTH  READPOL  PREFPLEX  UTYPE
v  voldef   -            ENABLED  ACTIVE  20480   SELECT   -         fsgen

Note: If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d# names.
266 Administering volumes Displaying volume information INVALID volume state The contents of an instant snapshot volume no longer represent a true point-intime image of the original volume. NEEDSYNC volume state The volume requires a resynchronization operation the next time it is started. For a RAID-5 volume, a parity resynchronization operation is required. REPLAY volume state The volume is in a transient state as part of a log replay.
Administering volumes Monitoring and controlling tasks Note: No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are ENABLED. The following volume kernel states are defined: DETACHED volume kernel state Maintenance is being performed on the volume. The volume cannot be read from or written to, but certain plex operations and ioctl function calls are accepted.
268 Administering volumes Monitoring and controlling tasks Any tasks started by the utilities invoked by vxrecover also inherit its task ID and task tag, so establishing a parent-child task relationship. For more information about the utilities that support task tagging, see their respective manual pages. Managing tasks with vxtask Note: New tasks take time to be set up, and so may not be immediately available for use after a command is invoked.
generated when the task completes. When this occurs, the state of the task is printed as EXITED.

pause    Puts a running task in the paused state, causing it to suspend operation.

resume   Causes a paused task to continue operation.

set      Changes modifiable parameters of a task. Currently, there is only one modifiable parameter, slow[=iodelay], which can be used to reduce the impact that copy operations have on system performance.
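For example, tasks can be listed, paused and resumed by task ID; a sketch (the task ID 167 is hypothetical):
# vxtask list
# vxtask pause 167
# vxtask resume 167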
270 Administering volumes Stopping a volume Stopping a volume Stopping a volume renders it unavailable to the user, and changes the volume kernel state from ENABLED or DETACHED to DISABLED. If the volume cannot be disabled, it remains in its current state. To stop a volume, use the following command: # vxvol [-g diskgroup] [-f] stop volume ...
Administering volumes Starting a volume Starting a volume Starting a volume makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED volume, use the following command: # vxvol [-g diskgroup] start volume ... If a volume cannot be enabled, it remains in its current state.
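DISABLED volumes that are startable can also be started in bulk with vxrecover; a minimal sketch for one (hypothetical) disk group:
# vxrecover -g mydg -s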
272 Administering volumes Adding a mirror to a volume Mirroring all volumes To mirror all volumes in a disk group to available disk space, use the following command: # /etc/vx/bin/vxmirror -g diskgroup -a To configure VxVM to create mirrored volumes by default, use the following command: # /etc/vx/bin/vxmirror -d yes If you make this change, you can still make unmirrored volumes by specifying nmirror=1 as an attribute to the vxassist command.
Administering volumes Removing a mirror You can choose to mirror volumes from disk mydg02 onto any available disk space, or you can choose to mirror onto a specific disk. To mirror to a specific disk, select the name of that disk. To mirror to any available disk space, select "any".
For example, to dissociate and remove a mirror named vol01-02 from the disk group, mydg, in a single step, use the following command:
# vxplex -g mydg -o rm dis vol01-02
This command removes the mirror vol01-02 and all associated subdisks. This is equivalent to entering the following separate commands:
# vxplex -g mydg dis vol01-02
# vxedit -g mydg -r rm vol01-02

Adding logs and maps to volumes
In Veritas Volume Manager, several types of volume logs and maps are supported:
■ FastResync Maps are used to perform quick and efficient resynchronization of mirrors (see “FastResync” on page 66 for details).
Administering volumes Preparing a volume for DRL and instant snapshots Preparing a volume for DRL and instant snapshots Note: This procedure describes how to add a version 20 data change object (DCO) and DCO volume to a volume that you previously created in a disk group with a version number of 110 or greater.
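A minimal sketch of the preparation step (the volume and disk group names are illustrative):
# vxsnap -g mydg prepare vol1 ndcomirs=2 regionsize=64k drl=on
The ndcomirs, regionsize and drl attributes are optional; they control the number of DCO plexes, the size of the tracked regions, and whether dirty region logging is enabled.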
276 Administering volumes Preparing a volume for DRL and instant snapshots Note: The vxsnap prepare command automatically enables Persistent FastResync on the volume. Persistent FastResync is also set automatically on any snapshots that are generated from a volume on which this feature is enabled. If the volume is a RAID-5 volume, it is converted to a layered volume that can be used with instant snapshots and Persistent FastResync.
Administering volumes Preparing a volume for DRL and instant snapshots If required, you can use the vxassist move command to relocate DCO plexes to different disks. For example, the following command moves the plexes of the DCO volume, vol1_dcl, for volume vol1 from disk03 and disk04 to disk07 and disk08: # vxassist -g mydg move vol1_dcl !disk03 !disk04 disk07 disk08 For more information, see “Moving DCO volumes between disk groups” on page 200, and the vxassist(1M) and vxsnap(1M) manual pages.
Determining if DRL is enabled on a volume
To determine if DRL (configured using a version 20 DCO volume) is enabled on a volume:
1 Use the vxprint command on the volume to discover the name of its DCO:
# DCONAME=`vxprint [-g diskgroup] -F%dco_name volume`
2 To determine if DRL is enabled on the volume, use the following command with the volume’s DCO:
# vxprint [-g diskgroup] -F%drl $DCONAME
DRL is enabled if this command displays on.
Administering volumes Upgrading existing volumes to use version 20 DCOs To re-enable DRL on a volume, enter this command: # vxvol [-g diskgroup] set drl=on volume To re-enable sequential DRL on a volume, enter: # vxvol [-g diskgroup] set drl=sequential volume You can use these commands to change the DRL policy on a volume by first disabling and then re-enabling DRL as required. DRL is automatically disabled if a data change map (DCM, used with Veritas Volume Replicator) is attached to a volume.
280 Administering volumes Upgrading existing volumes to use version 20 DCOs # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup For more information, see “Upgrading a disk group” on page 208.
Administering volumes Adding traditional DRL logging to a mirrored volume subsequently create from the snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot plexes. The value of the regionsize attribute specifies the size of the tracked regions in the volume. A write to a region is tracked by setting a bit in the change map. The default value is 64k (64KB).
282 Administering volumes Adding traditional DRL logging to a mirrored volume where each bit represents one region in the volume. For example, the size of the log would need to be 20K for a 10GB volume with a region size of 64 kilobytes.
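To make the arithmetic explicit: a 10GB volume divided into 64KB regions contains 10,485,760KB / 64KB = 163,840 regions; at one bit per region, the map occupies 163,840 / 8 = 20,480 bytes, which is the 20K figure quoted above.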
Administering volumes Adding a RAID-5 log Adding a RAID-5 log Note: You need a full license to use this feature. Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume. When a RAID-5 volume is created using the vxassist command, a log plex is created for that volume by default.
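For example, to add a RAID-5 log to an existing RAID-5 volume (the names here are illustrative):
# vxassist -g mydg addlog r5vol
You can optionally specify the log length with the loglen attribute; see the vxassist(1M) manual page.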
284 Administering volumes Resizing a volume Removing a RAID-5 log To identify the plex of the RAID-5 log, use the following command: # vxprint [-g diskgroup] -ht volume where volume is the name of the RAID-5 volume. For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
Administering volumes Resizing a volume vxassist command also allows you to specify an increment by which to change the volume’s size. Caution: If you use vxassist or vxvol to resize a volume, do not shrink it below the size of the file system which is located on it. If you do not shrink the file system first, you risk unrecoverable data loss. If you have a VxFS file system, shrink the file system first, and then shrink the volume.
286 Administering volumes Resizing a volume ■ Resizing a volume with a usage type other than FSGEN or RAID5 can result in loss of data. If such an operation is required, use the -f option to forcibly resize such a volume. ■ You cannot resize a volume that contains plexes with different layout types.
Administering volumes Resizing a volume Note: If specified, the -b option makes growing the volume a background task. For example, to extend volcat by 100 sectors, use the following command: # vxassist -g mydg growby volcat 100 Note: If you previously performed a relayout on the volume, additionally specify the attribute layout=nodiskalign to the growby command if you want the subdisks to be grown using contiguous disk space.
288 Administering volumes Setting tags on volumes Note: The vxvol set len command cannot increase the size of a volume unless the needed space is available in the plexes of the volume. When the size of a volume is reduced using the vxvol set len command, the freed space is not released into the disk group’s free space pool. If a volume is active and its length is being reduced, the operation must be forced using the -o force option to vxvol.
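For example (names are illustrative), to set the length of vol01 to 100000 sectors, or to force the operation on an active volume:
# vxvol -g mydg set len=100000 vol01
# vxvol -g mydg -o force set len=100000 vol01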
Administering volumes Changing the read policy for mirrored volumes # vxassist -g mydg settag myvol "dbvol=table space 1" Dotted tag hierarchies are understood by the list operation. For example, the listing for tag=a.b includes all volumes that have tag names that start with a.b. The tag names site, udid and vdid are reserved and should not be used.
290 Administering volumes Removing a volume For example, to set the policy for vol01 to read preferentially from the plex vol01-02, use the following command: # vxvol -g mydg rdpol prefer vol01 vol01-02 To set the read policy to select, use the following command: # vxvol [-g diskgroup] rdpol select volume For more information about how read policies affect performance, see “Volume read policies” on page 466.
Administering volumes Moving volumes from a VM disk To move volumes from a disk 1 Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu. 2 At the following prompt, enter the disk name of the disk whose volumes you wish to move, as follows: Move volumes from a disk Menu: VolumeManager/Disk/Evacuate Use this menu operation to move any volumes that are using a disk onto other disks. Use this menu immediately prior to removing a disk, either permanently or for replacement.
292 Administering volumes Enabling FastResync on a volume Enabling FastResync on a volume Note: The recommended method for enabling FastResync on a volume with a version 20 DCO is to use the vxsnap prepare command as described in “Preparing a volume for DRL and instant snapshots” on page 275. You need a Veritas FlashSnapTM or FastResync license to use this feature. FastResync performs quick and efficient resynchronization of stale mirrors.
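For example, to enable FastResync on a volume (names are illustrative):
# vxvol -g mydg set fastresync=on vol01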
Administering volumes Enabling FastResync on a volume Note: To use FastResync with a snapshot, FastResync must be enabled before the snapshot is taken, and must remain enabled until after the snapback is completed. Checking whether FastResync is enabled on a volume To check whether FastResync is enabled on a volume, use the following command: # vxprint [-g diskgroup] -F%fastresync volume This command returns on if FastResync is enabled; otherwise, it returns off.
294 Administering volumes Performing online relayout Performing online relayout Note: You need a full license to use this feature. You can use the vxassist relayout command to reconfigure the layout of a volume without taking it offline. The general form of this command is: # vxassist [-b] [-g diskgroup] relayout volume [layout=layout] \ [relayout_options] Note: If specified, the -b option makes relayout of the volume a background task.
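As a sketch (the volume name, disk group and column count are illustrative), the following converts a volume to a striped layout with three columns as a background task:
# vxassist -b -g mydg relayout vol02 layout=stripe ncol=3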
Permitted relayout transformations
The tables below give details of the relayout operations that are possible for each type of source storage layout.
Table 8-2    Supported relayout transformations for concatenated volumes
Relayout to      From concat
concat           No.
concat-mirror    No. Add a mirror, and then use vxassist convert instead.
mirror-concat    No. Add a mirror instead.
mirror-stripe    No.
Table 8-4    Supported relayout transformations for RAID-5 volumes
Relayout to      From raid5
concat           Yes.
concat-mirror    Yes.
mirror-concat    No. Use vxassist convert after relayout to concatenated-mirror volume instead.
mirror-stripe    No. Use vxassist convert after relayout to striped-mirror volume instead.
raid5            Yes. The stripe width and number of columns may be changed.
stripe           Yes. The stripe width and number of columns may be changed.
Table 8-6    Supported relayout transformations for mirrored-stripe volumes
Relayout to      From mirror-stripe
concat           Yes.
concat-mirror    Yes.
mirror-concat    No. Use vxassist convert after relayout to concatenated-mirror volume instead.
mirror-stripe    No. Use vxassist convert after relayout to striped-mirror volume instead.
raid5            Yes. The stripe width and number of columns may be changed.
stripe           Yes.
Specifying a non-default layout
You can specify one or more relayout options to change the default layout configuration. Examples of these options are:
ncol=number        Specifies the number of columns.
ncol=+number       Specifies the number of columns to add.
ncol=-number       Specifies the number of columns to remove.
stripeunit=size    Specifies the stripe width.
See the vxassist(1M) manual page for more information about relayout options.
Viewing the status of a relayout
Online relayout operations take some time to perform. You can use the vxrelayout command to obtain information about the status of a relayout operation. For example, the command:
# vxrelayout -g mydg status vol04
might display output similar to this:
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 68.58% completed.
inserts a delay of 1000 milliseconds (1 second) between copying each 10-megabyte region:
# vxrelayout -g mydg -o bg,slow=1000,iosize=10m start vol04
The default delay and region size values are 250 milliseconds and 1 megabyte respectively.
When the relayout has completed, use the vxassist convert command to change the resulting layered striped-mirror volume to a non-layered mirrored-stripe:
# vxassist -g mydg convert vol1 layout=mirror-stripe
Note: If the system crashes during relayout or conversion, the process continues when the system is rebooted.
Chapter 9 Administering volume snapshots Veritas Volume Manager (VxVM) provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. You can also take a snapshot of a volume set as described in “Creating instant snapshots of volume sets” on page 334. Volume snapshots allow you to make backup copies of your volumes online with minimal interruption to users.
304 Administering volume snapshots Note: A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system. If the fsgen volume usage type is set on a volume that contains a Veritas File System (VxFS), intent logging of the file system metadata ensures the internal consistency of the file system that is backed up.
Administering volume snapshots Traditional third-mirror break-off snapshots Traditional third-mirror break-off snapshots The traditional third-mirror break-off volume snapshot model that is supported by the vxassist command is shown in Figure 9-1. This also shows the transitions that are supported by the snapback and snapclear commands to vxassist.
306 Administering volume snapshots Traditional third-mirror break-off snapshots its data plexes. The snapshot volume contains a copy of the original volume’s data at the time that you took the snapshot. If more than one snapshot mirror is used, the snapshot volume is itself mirrored. The command, vxassist snapback, can be used to return snapshot plexes to the original volume from which they were snapped, and to resynchronize the data in the snapshot mirrors from the data in the original volume.
Administering volume snapshots Full-sized instant snapshots Full-sized instant snapshots Full-sized instant snapshots are a variation on the third-mirror volume snapshot model that make a snapshot volume available for access as soon as the snapshot plexes have been created. The full-sized instant volume snapshot model is illustrated in Figure 9-2.
308 Administering volume snapshots Full-sized instant snapshots volume are updated, its original contents are gradually relocated to the snapshot volume. If desired, you can additionally select to perform either a background (nonblocking) or foreground (blocking) synchronization of the snapshot volume.
Space-optimized instant snapshots
Volume snapshots, such as those described in “Traditional third-mirror break-off snapshots” on page 305 and “Full-sized instant snapshots” on page 307, require the creation of a complete copy of the original volume, and use as much storage space as the original volume. Instead of requiring a complete copy of the original volume’s storage space, space-optimized instant snapshots use a storage cache.
310 Administering volume snapshots Emulation of third-mirror break-off snapshots As for instant snapshots, space-optimized snapshots use a copy-on-write mechanism to make them immediately available for use when they are first created, or when their data is refreshed. Unlike instant snapshots, however, you cannot enable synchronization on space-optimized snapshots, reattach them to their original volume, or turn them into independent volumes.
Administering volume snapshots Linked break-off snapshot volumes ■ Use the vxsnap make command with the sync=yes and type=full attributes specified to create the snapshot volume, and then use the vxsnap syncwait command to wait for synchronization of the snapshot volume to complete. See “Creating and managing third-mirror break-off snapshots” on page 329 for details of the procedures for creating and using this type of snapshot.
312 Administering volume snapshots Cascaded snapshots to recover the mirror volume in the same way as for a DISABLED volume. See “Starting a volume” on page 271. If you resize (that is, grow or shrink) a volume, all its ACTIVE linked mirror volumes are also resized at the same time. The volume and its mirrors can be in the same disk group or in different disk groups. If the operation is successful, the volume and its mirrors will have the same size.
Administering volume snapshots Cascaded snapshots to read data from an older snapshot that does not exist in that snapshot, it is obtained by searching recursively up the hierarchy of more recent snapshots. A snapshot cascade is most likely to be used for regular online backup of a volume where space-optimized snapshots are written to disk but not to tape.
Figure 9-5    Creating a snapshot of a snapshot. Running vxsnap make source=V creates snapshot volume S1 of the original volume V; running vxsnap make source=S1 then creates snapshot volume S2 of S1.
Even though the arrangement of the snapshots in this figure appears similar to the snapshot hierarchy shown in “Snapshot cascade” on page 312, the relationship between the snapshots is not recursive.
Figure 9-6    Using a snapshot of a snapshot to restore a database
1. Create instant snapshot S1 of volume V (vxsnap make source=V).
2. Create instant snapshot S2 of S1 (vxsnap make source=S1).
Figure 9-7    Dissociating a snapshot volume
When vxsnap dis is applied to snapshot S2, which has no snapshots of its own (vxsnap dis S2): S1 remains owned by V, and volume S2 becomes independent.
When vxsnap dis is applied to snapshot S1, which has one snapshot S2 (vxsnap dis S1): S1 becomes independent, and its snapshot hierarchy is adopted by the original volume V.
Figure 9-8    Splitting snapshots. Applying vxsnap split S1 to a hierarchy in which S1 is a snapshot of the original volume V, and S2 is a snapshot of S1, makes volume S1 independent of V, while S2 continues to be a snapshot of S1.
Creating multiple snapshots
To make it easier to create snapshots of several volumes at the same time, both the vxsnap make and vxassist snapshot commands accept more than one volume name as their argument.
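For example, the following single command creates snapshots of two volumes (all names are illustrative):
# vxsnap -g mydg make source=vol1/newvol=snapvol1 \
  source=vol2/newvol=snapvol2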
Figure 9-9    Refresh on snapback: resynchronizing an original volume from a snapshot. The snapshot operation breaks off a snapshot mirror to form the snapshot volume; a snapback that specifies -o resyncfromreplica copies the snapshot’s contents back to the original volume.
Note: The original volume must not be in use during a snapback operation that specifies the option -o resyncfromreplica to resynchronize the volume from a snapshot.
Administering volume snapshots Creating instant snapshots Creating instant snapshots Note: You need a full license to use this feature. VxVM allows you to make instant snapshots of volumes by using the vxsnap command. Note: The information in this section also applies to RAID-5 volumes that have been converted to a special layered volume layout by the addition of a DCO and DCO volume. See “Using a DCO and DCO volume with a RAID-5 volume” on page 277 for details.
320 Administering volume snapshots Creating instant snapshots You can create instant snapshots of volume sets by replacing volume names with volume set names in the vxsnap command. For more information, see “Creating instant snapshots of volume sets” on page 334.
Administering volume snapshots Creating instant snapshots Preparing to create instant and break-off snapshots To prepare a volume for the creation of instant and break-off snapshots 1 Use the following commands to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume, and to check that FastResync is enabled on the volume: # vxprint -g volumedg -F%instant volume # vxprint -g volumedg -F%fas
322 Administering volume snapshots Creating instant snapshots created, and it must also have the same region size. See “Creating a volume for use as a full-sized instant or linked break-off snapshot” on page 323 for details.
Administering volume snapshots Creating instant snapshots Note: All space-optimized snapshots that share the cache must have a region size that is equal to or an integer multiple of the region size set on the cache. Snapshot creation also fails if the original volume’s region size is smaller than the cache’s region size.
324 Administering volume snapshots Creating instant snapshots 4 Use the vxassist command to create a volume, snapvol, of the required size and redundancy, together with a version 20 DCO volume with the correct region size: # vxassist [-g diskgroup] make snapvol $LEN \ [layout=mirror nmirror=number] logtype=dco drl=off \ dcoversion=20 [ndcomirror=number] regionsz=$RSZ \ init=active [storage_attributes] Specify the same number of DCO mirrors (ndcomirror) as the number of mirrors in the volume (nmirror).
Administering volume snapshots Creating instant snapshots For space-optimized instant snapshots that share a cache object, the specified region size must be greater than or equal to the region size specified for the cache object. See “Creating a shared cache object” on page 322 for details. The attributes for a snapshot are specified as a tuple to the vxsnap make command. This command accepts multiple tuples. One tuple is required for each snapshot that is being created.
326 Administering volume snapshots Creating instant snapshots For example, to create the space-optimized instant snapshot, snap4myvol, of the volume, myvol, in the disk group, mydg, on the disk mydg15, and which uses a newly allocated cache object that is 1GB in size, but which can automatically grow in size, use the following command: # vxsnap -g mydg make source=myvol/new=snap4myvol\ /cachesize=1g/autogrow=yes alloc=mydg15 Note: If a cache is created implicitly by specifying cachesize, and ncachemirror
Administering volume snapshots Creating instant snapshots Creating and managing full-sized instant snapshots Note: Full-sized instant snapshots are not suitable for write-intensive volumes (such as for database redo logs) because the copy-on-write mechanism may degrade the performance of the volume. For full-sized instant snapshots, you must prepare a volume that is to be used as the snapshot volume.
328 Administering volume snapshots Creating instant snapshots If required, you can use the following command to test if the synchronization of a volume is complete: # vxprint [-g diskgroup] -F%incomplete snapvol This command returns the value off if synchronization of the volume, snapvol, is complete; otherwise, it returns the value on. You can also use the vxsnap print command to check on the progress of synchronization as described in “Displaying instant snapshot information” on page 342.
Administering volume snapshots Creating instant snapshots ■ Dissociate the snapshot volume entirely from the original volume. This may be useful if you want to use the copy for other purposes such as testing or report generation. If desired, you can delete the dissociated volume. See “Dissociating an instant snapshot” on page 340 for details. ■ If the snapshot is part of a snapshot hierarchy, you can also choose to split this hierarchy from its parent volumes.
330 Administering volume snapshots Creating instant snapshots If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the snapshot plexes to complete, as shown in this example: # vxsnap -g mydg snapwait vol1 nmirror=2 2 To create a third-mirror break-off snapshot, use the following form of the vxsnap make command. # vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\ {/plex=plex1[,plex2,...
Administering volume snapshots Creating instant snapshots synchronization was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 337 for details. ■ Reattach some or all of the plexes of the snapshot volume with the original volume. See “Reattaching an instant snapshot” on page 338 for details. ■ Restore the contents of the original volume from the snapshot volume.
332 Administering volume snapshots Creating instant snapshots [mirdg=snapdg] The optional mirdg attribute can be used to specify the snapshot volume’s current disk group, snapdg. The -b option can be used to perform the synchronization in the background. If the -b option is not specified, the command does not return until the link becomes ACTIVE.
Administering volume snapshots Creating instant snapshots Note: This operation is not possible if the linked volume and snapshot are in different disk groups. ■ Reattach the snapshot volume with the original volume. See “Reattaching a linked break-off snapshot volume” on page 339 for details. ■ Dissociate the snapshot volume entirely from the original volume. This may be useful if you want to use the copy for other purposes such as testing or report generation.
334 Administering volume snapshots Creating instant snapshots In this example, snapvol1 is a full-sized snapshot that uses a prepared volume, snapvol2 is a space-optimized snapshot that uses a prepared cache, and snapvol3 is a break-off full-sized snapshot that is formed from plexes of the original volume.
VOLUME  INDEX  LENGTH  KSTATE   CONTEXT
svol_0  0      204800  ENABLED  -
svol_1  1      409600  ENABLED  -
svol_2  2      614400  ENABLED  -
A full-sized instant snapshot of a volume set can be created using a prepared volume set in which each volume is the same size as the corresponding volume in the parent volume set.
336 Administering volume snapshots Creating instant snapshots Adding snapshot mirrors to a volume If you are going to create a full-sized break-off snapshot volume, you can use the following command to add new snapshot mirrors to a volume: # vxsnap [-b] [-g diskgroup] addmir volume|volume_set \ [nmirror=N] [alloc=storage_attributes] Note: The volume must have been prepared using the vxsnap prepare command as described in “Preparing a volume for DRL and instant snapshots” on page 275.
Administering volume snapshots Creating instant snapshots Note: This command is similar in usage to the vxassist snapabort command. If a volume set name is specified instead of a volume, a mirror is removed from each volume in the volume set.
338 Administering volume snapshots Creating instant snapshots To disable resynchronization, specify the syncing=no attribute. This attribute is not supported for space-optimized snapshots. Note: The snapshot being refreshed must not be open to any application. For example, any file system configured on the volume must first be unmounted. It is possible to refresh a volume from an unrelated volume provided that their sizes are compatible.
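A sketch of a refresh invocation (the names are illustrative):
# vxsnap -g mydg refresh snapvol source=vol syncing=yes
The optional source attribute identifies the volume to refresh from; syncing=yes requests that synchronization be started on the refreshed snapshot.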
Administering volume snapshots Creating instant snapshots snapwait command (but not vxsnap syncwait) to wait for the resynchronization of the reattached plexes to complete, as shown here: # vxsnap -g mydg snapwait myvol nmirror=1 Note: If the volume and its snapshot have both been resized (to an identical smaller or larger size) before performing the reattachment, a fast resynchronization can still be performed. A full resynchronization is not required.
340 Administering volume snapshots Creating instant snapshots syncwait) to wait for the resynchronization of the reattached volume to complete, as shown here: # vxsnap -g snapdg snapwait myvol mirvol=prepsnap Restoring a volume from an instant snapshot It may sometimes be desirable to reinstate the contents of a volume from a backup or modified replica in a snapshot volume.
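A minimal sketch of the restore operation (names are illustrative):
# vxsnap -g mydg restore vol source=snapvol
See the vxsnap(1M) manual page for the optional destroy attribute, which controls whether the snapshot is retained after the restore.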
Dissociating an instant snapshot
If no dependent snapshots remain, snapvol may be dissociated. The snapshot hierarchy is then adopted by snapvol’s parent volume.
Note: To be usable after dissociation, the snapshot volume and any snapshots in the hierarchy must have been fully synchronized. See “Controlling instant snapshot synchronization” on page 344 for more information.
342 Administering volume snapshots Creating instant snapshots Note: The topmost snapshot volume in the hierarchy must have been fully synchronized for this command to succeed. Snapshots that are lower down in the hierarchy need not have been fully resynchronized. See “Controlling instant snapshot synchronization” on page 344 for more information.
Alternatively, you can use the vxsnap list command, which is an alias for the vxsnap -n print command:
# vxsnap [-g diskgroup] [-l] [-v] [-x] list [vol]
The following output is an example of using this command on the disk group dg1:
# vxsnap -g dg1 -vx list
NAME    DG   OBJTYPE
vol     dg1  vol
svol1   dg2  vol
svol2   dg1  vol
svol3   dg2  vol
svol21  dg1  vol
vol-02  dg1  plex
mvol    dg2  vol
vset1   dg1  vset
v1      dg1  co
v2      dg1
svset1  dg1
sv1     dg1
sv2     dg1
vol-03  dg1
mvol2   dg2
344 Administering volume snapshots Creating instant snapshots See the vxsnap(1M) manual page for more information about using the vxsnap print and vxsnap list commands. Controlling instant snapshot synchronization Note: Synchronization of the contents of a snapshot with its original volume is not possible for space-optimized instant snapshots. The commands in this section cannot be used to control the synchronization of linked break-off snapshots.
See “Reattaching an instant snapshot” on page 338 and “Reattaching a linked break-off snapshot volume” on page 339 for details.
Improving the performance of snapshot synchronization
Two optional arguments to the -o option are provided to help optimize the performance of synchronization when using the make, refresh, restore and syncstart operations:
iosize=size    Specifies the size of each I/O request that is used when synchronizing the regions of a volume.
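For example (names and values are illustrative), the following creates a snapshot while using 2MB I/O requests and a delay between copy requests:
# vxsnap -g mydg -o iosize=2m,slow=100 make \
  source=myvol/newvol=snap2myvol/syncing=on
The slow=iodelay argument is the companion throttle to iosize, much as it is for the vxrelayout command described earlier.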
Tuning the autogrow attributes of a cache
The highwatermark, autogrowby and maxautogrow attributes determine how the VxVM cache daemon (vxcached) maintains the cache if the autogrow feature has been enabled and vxcached is running:
■ When cache usage reaches the high watermark value, highwatermark (default value is 90 percent), vxcached grows the size of the cache volume by the value of autogrowby (default value is 20% of the size of the cache volume).
Administering volume snapshots Creating instant snapshots Caution: Ensure that the cache is sufficiently large, and that the autogrow attributes are configured correctly for your needs.
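As a sketch (the cache object name mycache is illustrative), the autogrow attributes can be tuned, or the cache grown manually, with the vxcache command:
# vxcache -g mydg set highwatermark=60 autogrowby=5000 mycache
# vxcache -g mydg growcacheto mycache 2g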
348 Administering volume snapshots Creating traditional third-mirror break-off snapshots Creating traditional third-mirror break-off snapshots VxVM provides third-mirror break-off snapshot images of volume devices using vxassist and other commands. Note: To enhance the efficiency and usability of volume snapshots, turn on FastResync as described in “Enabling FastResync on a volume” on page 292.
Administering volume snapshots Creating traditional third-mirror break-off snapshots creating the snapshot mirror is long in contrast to the brief amount of time that it takes to create the snapshot volume. The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror. This task detaches the finished snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume.
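A minimal sketch of the two-step procedure (volume and snapshot names are illustrative):
# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snap_vol01
Specifying -b to vxassist snapstart runs the potentially lengthy mirror creation in the background.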
It is also possible to make a snapshot plex from an existing plex in a volume. See “Converting a plex into a snapshot plex” on page 351 for details.
2 Choose a suitable time to create a snapshot. If possible, plan to take the snapshot at a time when users are accessing the volume as little as possible.
Administering volume snapshots Creating traditional third-mirror break-off snapshots Note: Dissociating or removing the snapshot volume loses the advantage of fast resynchronization if FastResync was enabled. If there are no further snapshot plexes available, any subsequent snapshots that you take require another complete copy of the original volume to be made.
352 Administering volume snapshots Creating traditional third-mirror break-off snapshots To convert an existing plex into a snapshot plex in the SNAPDONE state for a volume on which Non-Persistent FastResync is enabled, use the following command: # vxplex [-g diskgroup] convert state=SNAPDONE plex A converted plex is in the SNAPDONE state, and can be used immediately to create a snapshot volume.
Administering volume snapshots Creating traditional third-mirror break-off snapshots plexes are snapped back. This task resynchronizes the data in the volume so that the plexes are consistent. Note: To enhance the efficiency of the snapback operation, enable FastResync on the volume before taking the snapshot, as described in “Enabling FastResync on a volume” on page 292.
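For example (names are illustrative), to snap back a snapshot volume, or to resynchronize the original volume from the replica instead:
# vxassist -g mydg snapback snap_vol01
# vxassist -g mydg -o resyncfromreplica snapback snap_vol01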
354 Administering volume snapshots Creating traditional third-mirror break-off snapshots 2 Use the vxassist mirror command to create mirrors of the existing snapshot volume and its DCO volume: # vxassist -g diskgroup mirror snapshot # vxassist -g diskgroup mirror $DCOVOL Note: The new plex in the DCO volume is required for use with the new data plex in the snapshot.
Displaying snapshot information
The vxassist snapprint command displays the associations between the original volumes and their respective replicas (snapshot copies):
# vxassist snapprint [volume]
Output from this command is shown in the following examples:
# vxassist -g mydg snapprint v1
V  NAME     USETYPE   LENGTH
SS SNAPOBJ  NAME      LENGTH   %DIRTY
DP NAME     VOLUME    LENGTH   %DIRTY

v  v1                 20480
ss SNAP…              20480    4
dp                    20480    0
dp                    20480    0
356 Administering volume snapshots Adding a version 0 DCO and DCO volume Adding a version 0 DCO and DCO volume Note: The procedure described in this section adds a DCO log volume that has a version 0 layout as introduced in VxVM 3.2. The version 0 layout supports traditional (third-mirror break-off) snapshots, but not full-sized or spaceoptimized instant snapshots.
Administering volume snapshots Adding a version 0 DCO and DCO volume 3 Use the following command to add a DCO and DCO volume to the existing volume: # vxassist [-g diskgroup] addlog volume logtype=dco \ [ndcomirror=number] [dcolen=size] [storage_attributes] For non-layered volumes, the default number of plexes in the mirrored DCO volume is equal to the lesser of the number of plexes in the data volume or 2. For layered volumes, the default number of DCO plexes is always 2.
the volume named vol1 (the TUTIL0 and PUTIL0 columns are omitted for clarity):
TY NAME          ASSOC         KSTATE    LENGTH
v  vol1          fsgen         ENABLED   1024
pl vol1-01       vol1          ENABLED   1024
sd disk01-01     vol1-01       ENABLED   1024
pl vol1-02       vol1          ENABLED   1024
sd disk02-01     vol1-02       ENABLED   1024
dc vol1_dco      vol1          -         -
v  vol1_dcl      gen           ENABLED   132
pl vol1_dcl-01   vol1_dcl      ENABLED   132
sd disk03-01     vol1_dcl-01   ENABLED   132
pl vol1_dcl-02   vol1_dcl      ENABLED   132
sd disk04-01     vol1_dcl-02   ENABLED   132
Administering volume snapshots Adding a version 0 DCO and DCO volume This form of the command dissociates the DCO object from the volume but does not destroy it or the DCO volume. If the -o rm option is specified, the DCO object, DCO volume and its plexes, and any snap objects are also removed. Note: Dissociating a DCO and DCO volume disables Persistent FastResync on the volume. A full resynchronization of any remaining snapshots is required when they are snapped back.
Chapter 10 Creating and administering volume sets
This chapter describes how to use the vxvset command to create and administer volume sets in Veritas Volume Manager (VxVM). Volume sets enable the use of the Multi-Volume Support feature with Veritas File System (VxFS). It is also possible to use the Veritas Enterprise Administrator (VEA) to create and administer volume sets. For more information, see the VEA online help. For full details of the usage of the vxvset command, see the vxvset(1M) manual page.
362 Creating and administering volume sets Creating a volume set ■ Volume sets can be used in place of volumes with the following vxsnap operations on instant snapshots: addmir, dis, make, prepare, reattach, refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop, syncwait, and unprepare. The third-mirror break-off usage model for full-sized instant snapshots is supported for volume sets provided that sufficient plexes exist for each volume in the volume set.
Creating and administering volume sets Listing details of volume sets Caution: The -f (force) option must be specified if the volume being added, or any volume in the volume set, is either a snapshot or the parent of a snapshot. Using this option can potentially cause inconsistencies in a snapshot hierarchy if any of the volumes involved in the operation is already in a snapshot chain.
# vxvset -g mydg list set1
VOLUME  INDEX  LENGTH    KSTATE    CONTEXT
vol1    0      12582912  DISABLED  -
vol2    1      12582912  DISABLED  -
vol3    2      12582912  DISABLED  -
# vxvset -g mydg start set1
# vxvset -g mydg list set1
VOLUME  INDEX  LENGTH    KSTATE   CONTEXT
vol1    0      12582912  ENABLED  -
vol2    1      12582912  ENABLED  -
vol3    2      12582912  ENABLED  -
Removing a volume from a volume set
To remove a component volume from a volume set, use the following command:
# vxvset [-g diskgroup] [-f] rmvol volset volume
Creating and administering volume sets Raw device node access to component volumes Caution: Writing directly to or reading from the raw device node of a component volume of a volume set should only be performed if it is known that the volume's data will not otherwise change during the period of access. All of the raw device nodes for the component volumes of a volume set can be created or removed in a single operation.
366 Creating and administering volume sets Raw device node access to component volumes value of the makedev attribute is currently set to on. The access mode is determined by the current setting of the compvol_access attribute.
Creating and administering volume sets Raw device node access to component volumes The syntax for setting the compvol_access attribute on a volume set is: # vxvset [-g diskgroup] [-f] set \ compvol_access={read-only|read-write} vset The compvol_access attribute can be specified to the vxvset set command to change the access mode to the component volumes of a volume set. If any of the component volumes are open, the -f (force) option must be specified to set the attribute to read-only.
Chapter 11 Configuring off-host processing Off-host processing allows you to implement the following activities: Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended down time or impacted performance.
370 Configuring off-host processing Implementing off-host processing solutions Off-host processing is made simpler by using linked break-off snapshots, which are described in “Linked break-off snapshot volumes” on page 311.
Configuring off-host processing Implementing off-host processing solutions ■ Implementing decision support These applications use the Persistent FastResync feature of VxVM in conjunction with linked break-off snapshots. Note: A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
372 Configuring off-host processing Implementing off-host processing solutions Note: If the volume was created under VxVM 4.0 or a later release, and it is not associated with a new-style DCO object and DCO volume, follow the procedure described in “Preparing a volume for DRL and instant snapshots” on page 275. If the volume was created before release 4.
Configuring off-host processing Implementing off-host processing solutions If a database spans more than one volume, you can specify all the volumes and their snapshot volumes using one command, as shown in this example: # vxsnap -g dbasedg make \ source=vol1/snapvol=snapvol1/snapdg=sdg \ source=vol2/snapvol=snapvol2/snapdg=sdg \ source=vol3/snapvol=snapvol3/snapdg=sdg This step sets up the snapshot volumes ready for the backup cycle, and starts tracking changes to the original volumes.
374 Configuring off-host processing Implementing off-host processing solutions # vxsnap -g snapvoldg reattach snapvol source=vol \ sourcedg=volumedg For example, to reattach the snapshot volumes svol1, svol2 and svol3: # vxsnap -g sdg reattach svol1 \ source=vol1 sourcedg=dbasedg \ svol2 source=vol2 sourcedg=dbasedg \ svol3 source=vol3 sourcedg=dbasedg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete: # vxsnap -g volumedg snapwait volume mirvol
Configuring off-host processing Implementing off-host processing solutions This command returns on if FastResync is enabled; otherwise, it returns off. If FastResync is disabled, enable it using the following command on the primary host: # vxvol -g volumedg set fastresync=on volume 3 Prepare the OHP host to receive the snapshot volume that contains the copy of the database tables.
376 Configuring off-host processing Implementing off-host processing solutions 8 On the primary host, if you temporarily suspended updates to a volume in step 6, release all the database tables from hot backup mode.
Configuring off-host processing Implementing off-host processing solutions For example, to reattach the snapshot volumes svol1, svol2 and svol3: # vxsnap -g sdg reattach svol1 \ source=vol1 sourcedg=dbasedg \ svol2 source=vol2 sourcedg=dbasedg \ svol3 source=vol3 sourcedg=dbasedg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete: # vxsnap -g volumedg snapwait volume mirvol=snapvol You can then resume the procedure from step 6 on page 375.
Chapter 12 Administering hot-relocation If a volume has a disk I/O failure (for example, the disk has an uncorrectable error), Veritas Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
380 Administering hot-relocation How hot-relocation works How hot-relocation works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group.
Administering hot-relocation How hot-relocation works spares (marked spare) in the disk group where the failure occurred. It then relocates the subdisks to use this space. ■ If no spare disks are available or additional space is needed, vxrelocd uses free space on disks in the same disk group, except those disks that have been excluded for hot-relocation use (marked nohotuse). When vxrelocd has relocated the subdisks, it reattaches each relocated subdisk to its plex.
382 Administering hot-relocation How hot-relocation works Figure 12-1 Example of hot-relocation for a subdisk in a RAID-5 volume a) Disk group contains five disks. Two RAID-5 volumes are configured across four of the disks. One spare disk is available for hot-relocation. mydg01 mydg02 mydg03 mydg04 mydg01-01 mydg02-01 mydg03-01 mydg04-01 mydg02-02 mydg03-02 mydg05 Spare Disk b) Subdisk mydg02-01 in one RAID-5 volume fails.
Administering hot-relocation How hot-relocation works Partial disk failure mail messages If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
Figure 12-1    Example of hot-relocation for a subdisk in a RAID-5 volume
a) The disk group contains five disks (mydg01 through mydg05). Two RAID-5 volumes are configured across four of the disks; one spare disk (mydg05) is available for hot-relocation.
b) Subdisk mydg02-01 in one RAID-5 volume fails.
Administering hot-relocation Configuring a system for hot-relocation does not take place. If relocation is not possible, the system administrator is notified and no further action is taken. From the eligible disks, hot-relocation attempts to use the disk that is “closest” to the failed disk. The value of “closeness” depends on the controller, target, and disk number of the failed disk. A disk on the same controller as the failed disk is closer than a disk on a different controller.
After a successful relocation, remove and replace the failed disk as described in “Removing and replacing disks” on page 112.
Administering hot-relocation Marking a disk as a hot-relocation spare Marking a disk as a hot-relocation spare Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk. Otherwise, any suitable free space in the disk group is used.
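For example, to mark the disk mydg01 as a spare from the command line (names are illustrative):
# vxedit -g mydg set spare=on mydg01
Specifying spare=off removes the disk from the spare pool again.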
When relocation occurs, the system administrator is notified by electronic mail. After successful relocation, you may want to replace the failed disk.
Removing a disk from use as a hot-relocation spare
While a disk is designated as a spare, the space on that disk is not used for the creation of VxVM objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks.
Administering hot-relocation Making a disk available for hot-relocation use To use vxdiskadm to exclude a disk from hot-relocation use 1 Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu. 2 At the following prompt, enter the disk media name (such as mydg01): Exclude a disk from hot-relocation use Menu: VolumeManager/Disk/UnmarkSpareDisk Use this operation to exclude a disk from hot-relocation use. This operation takes, as input, a disk name.
390 Administering hot-relocation Configuring hot-relocation to use only spare disks Enter disk name [,list,q,?] mydg01 The following confirmation is displayed: V-5-2-932 Making mydg01 in mydg available for hot-relocation use is complete.
Administering hot-relocation Moving and unrelocating subdisks Volume home Subdisk mydg02-03 relocated to mydg05-01, but not yet recovered. Before you move any relocated subdisks, fix or replace the disk that failed (as described in “Removing and replacing disks” on page 112). Once this is done, you can move a relocated subdisk back to the original disk as described in the following sections. Caution: During subdisk move operations, RAID-5 volumes are not redundant.
392 Administering hot-relocation Moving and unrelocating subdisks subdisks using vxassist” on page 392 and “Moving and unrelocating subdisks using vxunreloc” on page 392. Moving and unrelocating subdisks using vxassist You can use the vxassist command to move and unrelocate subdisks.
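For example (names are illustrative), to move all subdisks of the volume vol01 off the disk mydg05 and onto mydg01:
# vxassist -g mydg move vol01 !mydg05 mydg01
Note that some shells require the ! character to be escaped.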
Administering hot-relocation Moving and unrelocating subdisks without using the original offsets. Refer to the vxunreloc(1M) manual page for more information. The examples in the following sections demonstrate the use of vxunreloc. Moving hot-relocated subdisks back to their original disk Assume that mydg01 failed and all the subdisks were relocated. After mydg01 is replaced, vxunreloc can be used to move all the hot-relocated subdisks back to mydg01.
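A sketch of typical vxunreloc invocations (names are illustrative):
# vxunreloc -g mydg mydg01
# vxunreloc -g mydg -n mydg05 mydg01
The first form moves the hot-relocated subdisks back to mydg01; the second form, with -n, moves them to the specified destination disk instead.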
394 Administering hot-relocation Moving and unrelocating subdisks Examining which subdisks were hot-relocated from a disk If a subdisk was hot relocated more than once due to multiple disk failures, it can still be unrelocated back to its original location. For instance, if mydg01 failed and a subdisk named mydg01-01 was moved to mydg02, and then mydg02 experienced disk failure, all of the subdisks residing on it, including the one which was hot-relocated to it, will be moved again.
Administering hot-relocation Modifying the behavior of hot-relocation If the system goes down after the new subdisks are created on the destination disk, but before all the data has been moved, re-execute vxunreloc when the system has been rebooted. Caution: Do not modify the string UNRELOC in the comment field of a subdisk record. Modifying the behavior of hot-relocation Hot-relocation is turned on as long as the vxrelocd process is running.
396 Administering hot-relocation Modifying the behavior of hot-relocation Alternatively, you can use the following command: # nohup /etc/vx/bin/vxrelocd root user1 user2 & See the vxrelocd(1M) manual page for more information.
Chapter 13 Administering cluster functionality A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: Availability If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
398 Administering cluster functionality Overview of cluster volume management enabled, all the nodes in the cluster can share VxVM objects such as shared disk groups. Private disk groups are supported in the same way as in a non-clustered environment. This chapter discusses the cluster functionality that is provided with VxVM. Note: You need an additional license to use this feature.
Administering cluster functionality Overview of cluster volume management membership. Each node starts up independently and has its own cluster monitor plus its own copies of the operating system and VxVM with support for cluster functionality. When a node joins a cluster, it gains access to shared disk groups and volumes. When a node leaves a cluster, it no longer has access to these shared objects. A node joins a cluster when you issue the appropriate command on that node.
400 Administering cluster functionality Overview of cluster volume management Figure 13-1 Example of a 4-node cluster Redundant private network Node 0 (master) Node 1 (slave) Node 2 (slave) Node 3 (slave) Redundant SCSI or Fibre Channel connectivity Cluster-shareable disks Cluster-shareable disk groups To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
Administering cluster functionality Overview of cluster volume management Private and shared disk groups Two types of disk groups are defined: Private disk group Belongs to only one node. A private disk group can only be imported by one system at a time. Disks in a private disk group may be physically accessible from one or more systems, but access is restricted to one system only. The boot disk group (usually aliased by the reserved disk group name bootdg) is always a private disk group.
Figure 13-1    Example of a 4-node cluster. Node 0 (master) and Nodes 1, 2 and 3 (slaves) are joined by a redundant private network, and all four nodes have redundant SCSI or Fibre Channel connectivity to the cluster-shareable disks.
Cluster-shareable disk groups
To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
Administering cluster functionality Overview of cluster volume management Table 13-1 Activation modes for shared disk groups Activation mode Description sharedwrite (sw) The node has write access to the disk group. Attempts to activate the disk group for shared read and shared write access succeed. Attempts to activate the disk group for exclusive write and read-only access fail. off The node has neither read nor write access to the disk group. Query operations on the disk group are permitted.
404 Administering cluster functionality Overview of cluster volume management Note: The activation mode of a disk group controls volume I/O from different nodes in the cluster. It is not possible to activate a disk group on a given node if it is activated in a conflicting mode on another node in the cluster. When enabling activation using the defaults file, it is recommended that this file be made identical on all nodes in the cluster. Otherwise, the results of activation are unpredictable.
Administering cluster functionality Overview of cluster volume management policy. However, in some cases, it is not desirable to have all nodes react in this way to I/O failure. To address this, an alternate way of responding to I/O failures, known as the local detach policy, was introduced in release 3.2 of VxVM. The local detach policy is intended for use with shared mirrored volumes in a cluster. This policy prevents I/O failure on a single slave node from causing a plex to be detached.
406 Administering cluster functionality Overview of cluster volume management Local detach policy Caution: Do not use the local detach policy if you use the VCS agents that monitor the cluster functionality of Veritas Volume Manager, and which are provided with Veritas Storage FoundationTM for Cluster File System HA and Veritas Storage Foundation for databases HA. These agents do not notify VCS about local failures.
Administering cluster functionality Overview of cluster volume management Table 13-3 Cluster behavior under I/O failure to a mirrored volume for different disk detach policies Type of I/O failure Local (diskdetpolicy=local) Global (diskdetpolicy=global) Failure of paths to all disks in a volume for a single node I/O fails for the affected node. The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
408 Administering cluster functionality Overview of cluster volume management Guidelines for choosing detach and failure policies In most cases it is recommended that you use the global detach policy, and particularly if any of the following conditions apply: ■ If you are using the VCS agents that monitor the cluster functionality of Veritas Volume Manager, and which are provided with Veritas Storage FoundationTM for Cluster File System HA and Veritas Storage Foundation for databases HA.
Administering cluster functionality Overview of cluster volume management The default settings for the detach and failure policies are global and dgdisable respectively.
410 Administering cluster functionality Cluster initialization and configuration Cluster initialization and configuration Before any nodes can join a new cluster for the first time, you must supply certain configuration information during cluster monitor setup. This information is normally stored in some form of cluster monitor configuration database. The precise content and format of this information depends on the characteristics of the cluster monitor.
Administering cluster functionality Cluster initialization and configuration During cluster reconfiguration, VxVM suspends I/O to shared disks. I/O resumes when the reconfiguration completes. Applications may appear to freeze for a short time during reconfiguration. If other operations, such as VxVM operations or recoveries, are in progress, cluster reconfiguration can be delayed until those operations have completed.
412 Administering cluster functionality Cluster initialization and configuration Table 13-5 Node abort messages Reason Description cannot find disk on slave node Missing disk or bad disk on the slave node. cannot obtain configuration data The node cannot read the configuration data due to an error such as disk failure. cluster device open failed Open of a cluster device failed. clustering license mismatch with master node Clustering license does not match that on the master node.
Administering cluster functionality Cluster initialization and configuration See the vxclustadm(1M) manual page for more information about vxclustadm and for examples of its usage. Volume reconfiguration Volume reconfiguration is the process of creating, changing, and removing VxVM objects such as disk groups, volumes and plexes. In a cluster, all nodes co-operate to perform such operations. The vxconfigd daemons (see “vxconfigd daemon” on page 414) play an active role in volume reconfiguration.
414 Administering cluster functionality Cluster initialization and configuration When an error occurs, such as when a check on a slave fails or a node leaves the cluster, the error is returned to the utility and a message is sent to the console on the master node to identify on which node the error occurred. vxconfigd daemon The VxVM configuration daemon, vxconfigd, maintains the configuration of VxVM objects. It receives cluster-related instructions from the kernel.
Administering cluster functionality Cluster initialization and configuration stopped, volume reconfiguration cannot take place. Other nodes can join the cluster if the vxconfigd daemon is not running on the slave nodes. If the vxconfigd daemon stops, different actions are taken depending on the node on which this occurred: ■ If the vxconfigd daemon is stopped on the master node, the vxconfigd daemons on the slave nodes periodically attempt to rejoin the master node.
416 Administering cluster functionality Cluster initialization and configuration Note: The -r reset option to vxconfigd restarts the vxconfigd daemon and recreates all states from scratch. This option cannot be used to restart vxconfigd while a node is joined to a cluster because it causes cluster information to be discarded. In an HP Serviceguard cluster, use the equivalent Serviceguard functionality to stop and restart the appropriate package.
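For example, on a node that is not currently joined to a cluster, the daemon could be killed and restarted with its state rebuilt from scratch, as sketched here:
# vxconfigd -k -r reset
The -k option kills any running vxconfigd process before starting a new one.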
Administering cluster functionality Multiple host failover configurations Note: Once shutdown succeeds, the node has left the cluster. It is not possible to access the shared volumes until the node joins the cluster again. Since shutdown can be a lengthy process, other reconfiguration can take place while shutdown is in progress. Normally, the shutdown attempt is suspended until the other reconfiguration completes. However, if it is already too far advanced, the shutdown may complete first.
418 Administering cluster functionality Multiple host failover configurations corrupted. Similar corruption can also occur if a file system or database on a raw disk partition is accessed concurrently by two hosts, so this problem is not limited to Veritas Volume Manager. Import lock When a host in a non-clustered environment imports a disk group, an import lock is written on all disks in that disk group. The import lock is cleared when the host deports the disk group.
Administering cluster functionality Multiple host failover configurations For details on how to clear locks and force an import, see “Moving disk groups between systems” on page 185 and the vxdg(1M) manual page. Corruption of disk group configuration If vxdg import is used with -C (clears locks) and/or -f (forces import) to import a disk group that is still in use from another host, disk group configuration corruption is likely to occur.
420 Administering cluster functionality Administering VxVM in cluster environments Administering VxVM in cluster environments The following sections describe the administration of VxVM’s cluster functionality. Note: Most VxVM commands require superuser or equivalent privileges. Requesting node status and discovering the master node The vxdctl utility controls the operation of the vxconfigd volume configuration daemon.
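For example, the -c option reports whether the node on which it is run is the master or a slave; the output shown here is a sketch with a hypothetical node name:
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node0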
Administering cluster functionality Administering VxVM in cluster environments Determining if a disk is shareable The vxdisk utility manages VxVM disks. To use the vxdisk utility to determine whether a disk is part of a cluster-shareable disk group, use the following command: # vxdisk list accessname where accessname is the disk access name (or device name). A portion of the output from this command (for the device c4t1d0) is shown here:
Device:
devicetag:
type:
clusterid:
disk:
timeout:
group:
flags:
...
422 Administering cluster functionality Administering VxVM in cluster environments The following is example output for the command vxdg list group1 on the master:
Group:                   group1
dgid:                    774222028.1090.teal
import-id:               32768.1749
flags:                   shared
version:                 140
alignment:               8192 (bytes)
ssb:                     on
local-activation:        exclusive-write
cluster-actv-modes:      node0=ew node1=off
detach-policy:           local
private_region_failure:  leave
copies:                  nconfig=2 nlog=2
config:                  seqno=0.
Administering cluster functionality Administering VxVM in cluster environments Caution: The operating system cannot tell if a disk is shared. To protect data integrity when dealing with disks that can be accessed by multiple systems, use the correct designation when adding a disk to a disk group. VxVM allows you to add a disk that is not physically shared to a shared disk group if the node where the disk is accessible is the only node in the cluster.
424 Administering cluster functionality Administering VxVM in cluster environments ■ Some of the nodes to which disks in the disk group are attached are not currently in the cluster, so the disk group cannot access all of its disks. In this case, a forced import is unsafe and must not be attempted because it can result in inconsistent mirrors. Converting a disk group from shared to private Note: Shared disk groups can only be deported on the master node.
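As a minimal sketch of the conversion, the disk group is deported on the master node and then re-imported without the shared option (mydg is a hypothetical disk group name):
# vxdg deport mydg
# vxdg import mydg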
Administering cluster functionality Administering VxVM in cluster environments can join two private disk groups on any cluster node where those disk groups are imported. If the source disk group and the target disk group are both shared, you must perform the join on the master node. Note: You cannot join a private disk group and a shared disk group. Changing the activation mode on a shared disk group Note: The activation mode for access by a cluster node to a shared disk group is set on that node.
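For example, the following command is a sketch that sets the activation mode to shared-write on the node where it is run; mydg is a hypothetical disk group name:
# vxdg -g mydg set activation=sw
The available modes include exclusivewrite (ew), readonly (ro), sharedread (sr), sharedwrite (sw), and off.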
426 Administering cluster functionality Administering VxVM in cluster environments Setting the disk group failure policy on a shared disk group Note: The disk group failure policy for a shared disk group can only be set on the master node. The vxdg command may be used to set either the dgdisable or leave failure policy for a shared disk group: # vxdg -g diskgroup set dgfailpolicy=dgdisable|leave The default failure policy is dgdisable. See “Disk group failure policy” on page 407.
Administering cluster functionality Administering VxVM in cluster environments Multiple opens by the same node are also supported. Any attempts by other nodes to open the volume fail until the final close of the volume by the node that opened it. Specifying exclusive=off instead means that more than one node in a cluster can open a volume simultaneously. This is the default behavior.
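As a sketch, assuming the attribute takes the values on and off as described above, exclusive open could be enabled on an existing volume from the master node (mydg and myvol are hypothetical names):
# vxvol -g mydg set exclusive=on myvol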
428 Administering cluster functionality Administering VxVM in cluster environments Upgrading the cluster protocol version Note: The cluster protocol version can only be updated on the master node. After all the nodes in the cluster have been updated with a new cluster protocol, you can upgrade the entire cluster using the following command on the master node: # vxdctl upgrade Recovering volumes in shared disk groups Note: Volumes can only be recovered on the master node.
Administering cluster functionality Administering VxVM in cluster environments This command produces output similar to the following:
TYP NAME    OPERATIONS        BLOCKS            AVG TIME(ms)
            READ     WRITE    READ      WRITE   READ    WRITE
vol vol1    2421     0        600000    0       99.0    0.0
To obtain and display statistics for the entire cluster, use the following command:
# vxstat -b
The statistics for all nodes are summed.
Chapter 14 Administering sites and remote mirrors In a Remote Mirror configuration (also known as a campus cluster or stretch cluster), the hosts and storage of a cluster that would usually be located in one place are instead divided between two or more sites. These sites are typically connected via a redundant high-capacity network that provides access to storage and private link communication between the cluster nodes. A typical two-site remote mirror configuration is illustrated in Figure 14-1.
432 Administering sites and remote mirrors If a disk group is configured across the storage at the sites, and inter-site communication is disrupted, there is a possibility of a serial split brain condition arising if each site continues to update the local disk group configuration copies (see “Handling conflicting configuration copies” on page 190).
Administering sites and remote mirrors To enhance read performance, VxVM will service reads from the plexes at the local site where an application is running if the siteread read policy is set on a volume. Writes are written to plexes at all sites. The site consistency of a volume is ensured by detaching a site when its last complete plex fails at that site. If a site fails, all its plexes are detached and the site is said to be detached.
434 Administering sites and remote mirrors Configuring sites for hosts and disks Configuring sites for hosts and disks Note: The Remote Mirror feature requires that the Site Awareness license has been installed on all hosts at all sites that are participating in the configuration.
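As a minimal sketch of this configuration (the site name site1 and the device name are hypothetical), a host is assigned to a site, and a disk is tagged with the site at which it resides, using commands of the following form:
# vxdctl set site=site1
# vxdisk settag c2t1d0 site=site1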
Administering sites and remote mirrors Configuring site consistency on a disk group The -f option allows the requirement to be removed if the site is detached or offline. The site name is not removed from the disks. If required, use the vxdisk rmtag command to remove the site tag as described in “Configuring sites for hosts and disks” on page 434.
436 Administering sites and remote mirrors Setting the siteread policy on a volume To turn on the site consistency requirement for an existing volume, use the following form of the vxvol command: # vxvol [-g diskgroup] set siteconsistent=on volume To turn off the site consistency requirement for a volume, use the following command: # vxvol [-g diskgroup] set siteconsistent=off volume Note: The siteconsistent and allsites attributes must be set to off for RAID-5 volumes in a site-consistent disk group.
Administering sites and remote mirrors Site-based allocation of storage to volumes Note: If the Site Awareness license is installed on all the hosts in the Remote Mirror configuration, and site consistency is enabled on a volume, the vxassist command attempts to allocate storage across the sites that are registered to a disk group. If not enough storage is available at all sites, the command fails unless you also specify the allsites=off attribute.
438 Administering sites and remote mirrors Site-based allocation of storage to volumes
Examples of storage allocation using sites
The examples in the following table demonstrate how to use site names with the vxassist command to allocate storage. The disk group, ccdg, has been enabled for site consistency with disks configured at two sites, site1 and site2.

Command:
# vxassist -g ccdg make vol 2g \
    nmirror=2
Description: Create a volume with one mirror at each site.
Administering sites and remote mirrors Making an existing disk group site consistent

Command:
# vxassist -g ccdg remove \
    mirror vol site:site1
Description: Remove a mirror from a volume at a specified site. If the volume is site consistent, the command fails if this would remove the last remaining plex at a site.

Command:
# vxassist -g ccdg growto vol 4g
Description: Grow a volume. If the volume is site consistent, the command fails if there is insufficient storage available at any site.
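Site consistency can also be turned on for an existing disk group. As a minimal sketch, following the ccdg example above:
# vxdg -g ccdg set siteconsistent=on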
440 Administering sites and remote mirrors Fire drill — testing the configuration Fire drill — testing the configuration Caution: To avoid potential loss of service or data, it is recommended that you do not use these procedures on a live system. After validating the consistency of the volumes and disk groups at your sites, you should validate the procedures that you will use in the event of the various possible types of failure.
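One way to exercise these procedures, sketched below with the hypothetical names used earlier, is to simulate a site failure by detaching the site from the disk group, verify the behavior of the applications, and then reattach the site:
# vxdg -g ccdg detachsite site1
# vxdg -g ccdg reattachsite site1
After the reattach, the plexes at the site must be recovered before they return to the ACTIVE state.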
Administering sites and remote mirrors Failure scenarios and recovery procedures site state to ACTIVE, and initiates recovery of the plexes. When all the plexes have been recovered, the plexes are put into the ACTIVE state. Note: vxsited does not try to reattach a site that you have explicitly detached by using the vxdg detachsite command. The automatic site reattachment feature is enabled by default.
442 Administering sites and remote mirrors Failure scenarios and recovery procedures

Failure scenario                              Recovery technique
Failure of storage at a site.                 See "Recovery from storage failure" on page 442.
Failure of both hosts and storage at a site.  See "Recovery from site failure" on page 443.
Administering sites and remote mirrors Failure scenarios and recovery procedures at the other sites.
Chapter 15 Using Storage Expert About Storage Expert System administrators often find that gathering and interpreting data about large and complex configurations can be a difficult task. Veritas Storage Expert is designed to help in diagnosing configuration problems with VxVM. Storage Expert consists of a set of simple commands that collect VxVM configuration data and compare it with "best practice".
446 Using Storage Expert How Storage Expert works How Storage Expert works Storage Expert components include a set of rule scripts and a rules engine. The rules engine runs the scripts and produces ASCII output, which is organized and archived by Storage Expert’s report generator. This output contains information about areas of VxVM configuration that do not meet the set criteria. By default, output is sent to the screen, but you can send it to a file using standard output redirection.
Using Storage Expert Running Storage Expert
info    Describes what the rule does.
list    Lists the attributes of the rule that you can set.
run     Runs the rule.
See "Rule definitions and attributes" on page 456.
448 Using Storage Expert Running Storage Expert
# vxse_dg1 -g mydg run

VxVM vxse:vxse_dg1 INFO V-5-1-5511 vxse_vxdg1 - RESULTS
----------------------------------------------------------
vxse_dg1 PASS:
Disk group (mydg) okay amount of disks in this disk group (4)

This indicates that the specified disk group (mydg) met the conditions specified in the rule. See "Rule result types" on page 448. You can set Storage Expert to run as a cron job to notify administrators, and to archive reports automatically.
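For example, the following crontab entry is a sketch of such a job; the schedule, disk group name, and log file path are hypothetical, and the entry assumes that the rule scripts are in root's search path:
30 2 * * * vxse_dg1 -g mydg run >> /var/adm/vxse_dg1.log 2>&1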
Using Storage Expert Identifying configuration problems using Storage Expert ■ A value specified on the command line. ■ A value specified in a user-defined defaults file. ■ A value in the /etc/default/vxse file that has not been commented out. ■ A built-in value defined at compile time. Identifying configuration problems using Storage Expert Storage Expert provides a large number of rules that help you to diagnose configuration issues that might cause problems for your storage environment.
450 Using Storage Expert Identifying configuration problems using Storage Expert Checking for large mirror volumes without a dirty region log (vxse_drl1) To check whether large mirror volumes (larger than 1GB) have an associated dirty region log (DRL), run rule vxse_drl1. Creating a DRL speeds recovery of mirrored volumes after a system crash. A DRL tracks those regions that have changed and uses the tracking information to recover only those portions of the volume that need to be recovered.
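For example, following the usage shown earlier, the rule could be run against a hypothetical disk group, mydg, as follows:
# vxse_drl1 -g mydg run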
Using Storage Expert Identifying configuration problems using Storage Expert A mirror of the RAID-5 log protects against loss of data due to the failure of a single disk. You are strongly advised to mirror the log if vxse_raid5log3 reports that the log of a large RAID-5 volume does not have a mirror. See "Adding a RAID-5 log" on page 283. Disk groups Disk groups are the basis of VxVM storage configuration, so it is critical that the integrity and resilience of your disk groups are maintained.
452 Using Storage Expert Identifying configuration problems using Storage Expert Checking the number of configuration copies in a disk group (vxse_dg5) To find out whether a disk group has only a single VxVM configured disk, run rule vxse_dg5. See “Creating and administering disk groups” on page 165. Checking for non-imported disk groups (vxse_dg6) To check for disk groups that are visible to VxVM but not imported, run rule vxse_dg6. See “Importing a disk group” on page 174.
Using Storage Expert Identifying configuration problems using Storage Expert ■ volumes needing recovery See “Reattaching plexes” on page 231. See “Starting a volume” on page 271. See the Veritas Volume Manager Troubleshooting Guide. Disk striping Striping enables you to enhance your system’s performance. Several rules enable you to monitor important parameters such as the number of columns in a stripe plex or RAID-5 plex, and the stripe unit size of the columns.
454 Using Storage Expert Identifying configuration problems using Storage Expert Checking the number of columns in striped volumes (vxse_stripes2) The default maximum and minimum values for the number of columns in a striped plex are 16 and 3. By default, rule vxse_stripes2 reports a violation if a striped plex in your volume has fewer than 3 columns or more than 16 columns. See "Performing online relayout" on page 294.
Using Storage Expert Identifying configuration problems using Storage Expert Checking the system name (vxse_host) Rule vxse_host can be used to confirm that the system name in the file /etc/vx/volboot is the same as the name that was assigned to the system when it was booted.
456 Using Storage Expert Rule definitions and attributes Rule definitions and attributes You can use the info keyword to show a description of a rule. See "Discovering what a rule does" on page 447. Table 15-1 lists the available rule definitions, and rule attributes and their default values.

Table 15-1 Rule definitions in Storage Expert

Rule              Description
vxse_dc_failures  Checks and points out failed disks and disabled controllers.
Using Storage Expert Rule definitions and attributes
Table 15-1 Rule definitions in Storage Expert

Rule             Description
vxse_raid5log1   Checks for RAID-5 volumes that do not have an associated log.
vxse_raid5log2   Checks for recommended minimum and maximum RAID-5 log sizes.
vxse_raid5log3   Checks for large RAID-5 volumes that do not have a mirrored RAID-5 log.
vxse_redundancy  Checks the redundancy of volumes.
vxse_rootmir     Checks that all root mirrors are set up correctly.
458 Using Storage Expert Rule definitions and attributes
Table 15-2 Rule attributes and default attribute values

Rule              Attribute          Default value  Description
vxse_dc_failures  -                  -              No user-configurable variables.
vxse_dg1          max_disks_per_dg   250            Maximum number of disks in a disk group. Warn if a disk group has more disks than this.
vxse_dg2          -                  -              No user-configurable variables.
vxse_dg3          -                  -              No user-configurable variables.
vxse_dg4          -                  -              No user-configurable variables.
Using Storage Expert Rule definitions and attributes
Table 15-2 Rule attributes and default attribute values

Rule            Attribute          Default value  Description
vxse_mirstripe  large_mirror_size  1g (1GB)       Large mirror-stripe threshold size. Warn if a mirror-stripe volume is larger than this.
                                   8              Large mirror-stripe number of subdisks threshold. Warn if a mirror-stripe volume has more subdisks than this.
                too_narrow_raid5   4              Minimum number of RAID-5 columns.
460 Using Storage Expert Rule definitions and attributes
Table 15-2 Rule attributes and default attribute values

Rule             Attribute          Default value  Description
vxse_redundancy  volume_redundancy  0              Volume redundancy check. The value of 2 performs a mirror redundancy check. A value of 1 performs a RAID-5 redundancy check. The default value of 0 performs no redundancy check.
vxse_rootmir     -                  -              No user-configurable variables.
Using Storage Expert Rule definitions and attributes
Table 15-2 Rule attributes and default attribute values

Rule          Attribute  Default value  Description
vxse_volplex  -          -              No user-configurable variables.
Chapter 16 Performance monitoring and tuning Veritas Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately. Performance guidelines VxVM allows you to optimize data storage performance using the following two strategies: ■ Balance the I/O load among the available disk drives.
464 Performance monitoring and tuning Performance guidelines Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations. Having identified the most heavily accessed volumes (containing file systems or databases), you can increase access bandwidth to this data by striping it across portions of multiple disks.
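For example, the following command sketches the creation of a 10GB volume striped across four disks; the volume and disk group names are hypothetical:
# vxassist -g mydg make stripevol 10g layout=stripe ncol=4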
Performance monitoring and tuning Performance guidelines Combining mirroring and striping Note: You need a full license to use this feature. Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams. Striping provides better throughput because parallel I/O streams can operate concurrently on separate devices. Serial access is optimized when I/O exactly fits across all stripe units in one stripe.
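As a sketch, both techniques can be combined in a single vxassist invocation (the names shown are hypothetical):
# vxassist -g mydg make mirstripevol 10g layout=mirror-stripe ncol=3 nmirror=2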
466 Performance monitoring and tuning Performance guidelines Volume read policies To help optimize performance for different types of volumes, VxVM supports the following read policies on data plexes: ■ round—a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume. ■ prefer—a preferred-plex read policy, where the plex with the highest performance usually satisfies read requests. If that plex fails, another plex is accessed.
Performance monitoring and tuning Performance monitoring Note: To improve performance for read-intensive workloads, you can attach up to 32 data plexes to the same volume. However, this would usually be an ineffective use of disk space for the gain in read performance. Performance monitoring As a system administrator, you have two sets of priorities to consider when monitoring performance. One set is physical, concerned with hardware such as disks and controllers.
468 Performance monitoring and tuning Performance monitoring Tracing volume operations Use the vxtrace command to trace operations on specified volumes, kernel I/O object types or devices. The vxtrace command either prints kernel I/O errors or I/O trace records to the standard output or writes the records to a file in binary format. Binary trace records written to a file can also be read back and formatted by vxtrace.
Performance monitoring and tuning Performance monitoring an operation makes it possible to measure the impact of that particular operation. The following is an example of output produced using the vxstat command:
TYP NAME       OPERATIONS        BLOCKS            AVG TIME(ms)
               READ     WRITE    READ     WRITE    READ    WRITE
vol blop       0        0        0        0        0.0     0.0
vol foobarvol  0        0        0        0        0.0     0.0
vol rootvol    73017    181735   718528   1114227  26.8    27.9
vol swapvol    13197    20252    105569   162009   25.8    397.0
vol testvol    0        0        0        0        0.0     0.0
470 Performance monitoring and tuning Performance monitoring Such output helps to identify volumes with an unusually large number of operations or excessive read or write times. To display disk statistics, use the vxstat -d command.
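For example, the following sketch samples disk statistics at five-second intervals for twelve samples; the disk group name is hypothetical:
# vxstat -g mydg -d -i 5 -c 12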
Performance monitoring and tuning Performance monitoring If two volumes (other than the root volume) on the same disk are busy, move them so that each is on a different disk. If one volume is particularly busy (especially if it has unusually large average read or write times), stripe the volume (or split the volume into multiple pieces, with each piece on a different disk). If done online, converting a volume to use striping requires sufficient free space to store an extra copy of the volume.
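As a sketch of the second approach, a busy volume could be converted online to a two-column striped layout; the volume and disk group names are hypothetical:
# vxassist -g mydg relayout vol03 layout=stripe ncol=2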
472 Performance monitoring and tuning Tuning VxVM writes where mirroring can improve performance depends greatly on the disks, the disk controller, whether multiple controllers can be used, and the speed of the system bus. If a particularly busy volume has a high ratio of reads to writes, it is likely that mirroring can significantly improve performance of that volume. Using I/O tracing I/O statistics provide the data for basic performance analysis; I/O traces serve for more detailed analysis.
Performance monitoring and tuning Tuning VxVM Tuning guidelines for large systems On smaller systems (with fewer than a hundred disk drives), tuning is unnecessary and VxVM is capable of adopting reasonable defaults for all configuration parameters. On larger systems, configurations can require additional control over the tuning of these parameters, both for capacity and performance reasons. Generally, only a few significant decisions must be made when setting up VxVM on a large system.
474 Performance monitoring and tuning Tuning VxVM To set the number of configuration copies for a new disk group, use the nconfig operand with the vxdg init command (see the vxdg(1M) manual page for details). You can also change the number of copies for an existing group by using the vxedit set command (see the vxedit(1M) manual page).
Performance monitoring and tuning Tuning VxVM Tunable parameters The following sections describe specific tunable parameters. dmp_cache_open If set to on, the first open of a device that is performed by an array support library (ASL) is cached. This enhances the performance of device discovery by minimizing the overhead caused by subsequent opens by ASLs. If set to off, caching is not performed. The default value is off. The value of this tunable is changed by using the vxdmpadm settune command.
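For example, caching could be enabled as follows:
# vxdmpadm settune dmp_cache_open=on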
476 Performance monitoring and tuning Tuning VxVM The value of this tunable is changed by using the vxdmpadm settune command. dmp_health_time DMP detects intermittently failing paths, and prevents I/O requests from being sent on them. The value of dmp_health_time represents the time in seconds for which a path must stay healthy.
Performance monitoring and tuning Tuning VxVM increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 14 and 16 for an I/O activity pattern that consists mostly of sequential reads or writes. Note: This parameter only affects the behavior of the balanced I/O policy.
478 Performance monitoring and tuning Tuning VxVM dmp_restore_policy The DMP restore policy, which can be set to 0 (CHECK_ALL), 1 (CHECK_DISABLED), 2 (CHECK_PERIODIC), or 3 (CHECK_ALTERNATE). The value of this tunable is only changeable by using the vxdmpadm start restore command. dmp_retry_count If an inquiry succeeds on a path, but there is an I/O error, the number of retries to attempt on the path. The default number of retries is 30.
Performance monitoring and tuning Tuning VxVM dmp_stat_interval The time interval between gathering DMP statistics. The default and minimum value is 1 second. The value of this tunable is changed by using the vxdmpadm settune command. vol_checkpt_default The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint.
480 Performance monitoring and tuning Tuning VxVM Since the region size must be the same on all nodes in a cluster for a shared volume, the value of the vol_fmr_logsz tunable on the master node overrides the tunable values on the slave nodes, if these values are different. Because the value of a shared volume can change, the value of vol_fmr_logsz is retained for the life of the volume.
Performance monitoring and tuning Tuning VxVM performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests. The default value for this tunable is 32768 bytes (32KB). vol_maxparallelio The number of I/O operations that the vxconfigd(1M) daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ per VOL_VOLDIO_WRITE ioctl call. The default value for this tunable is 256. It is not desirable to change this value.
482 Performance monitoring and tuning Tuning VxVM volcvm_smartsync If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. See "SmartSync recovery accelerator" on page 62 for more information. voldrl_max_drtregs The maximum number of dirty regions that can exist on the system for nonsequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time.
Performance monitoring and tuning Tuning VxVM voliomem_maxpool_sz The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to voliomem_maxpool_sz, one for RAID-5 and one for mirrored volumes.
484 Performance monitoring and tuning Tuning VxVM tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool. Increasing this size can allow additional tracing to be performed at the expense of system memory usage. Setting this value to a size greater than can readily be accommodated on the system is inadvisable. The default value for this tunable is 131072 bytes (128KB).
Performance monitoring and tuning Tuning VxVM Note: The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications. Setting the value of volpagemod_max_memsz below 512KB fails if cache objects or volumes that have been prepared for instant snapshot operations are present on the system. If you do not use the FastResync or DRL features that are implemented using a version 20 DCO volume, the value of volpagemod_max_memsz can be set to 0.
Appendix A Commands summary This appendix summarizes the usage and purpose of important commonly-used commands in Veritas Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book. Most commands (excepting daemons, library commands and supporting scripts) are linked to the /usr/sbin directory from the /opt/VRTS/bin directory.
488 Commands summary other commands and scripts, and which are not intended for general use, are not located in /opt/VRTS/bin and do not have manual pages.
Commands summary Table A-1 Obtaining information about objects in VxVM Command Description vxinfo [-g diskgroup] [volume ...] Displays information about the accessibility and usability of volumes. See “Listing Unstartable Volumes” in the Veritas Volume Manager Troubleshooting Guide. Example: # vxinfo -g mydg myvol1 \ myvol2 vxprint -hrt [-g diskgroup] [object] Prints single-line information about objects in VxVM. See “Displaying volume information” on page 264.
490 Commands summary Table A-2 Administering disks Command Description vxdiskadd [devicename ...] Adds a disk specified by device name. See “Using vxdiskadd to place a disk under control of VxVM” on page 101. Example: # vxdiskadd c0t1d0 vxedit [-g diskgroup] rename olddisk \ Renames a disk under control of newdisk VxVM. See “Renaming a disk” on page 119.
Commands summary Table A-2 Administering disks Command Description vxedit [-g diskgroup] set \ spare=on|off diskname Adds/removes a disk from the pool of hot-relocation spares. See “Marking a disk as a hotrelocation spare” on page 387. See “Removing a disk from use as a hot-relocation spare” on page 388. Examples: # vxedit -g mydg set \ spare=on mydg04 # vxedit -g mydg set \ spare=off mydg04 vxdisk offline devicename Takes a disk offline. See “Taking a disk offline” on page 118.
492 Commands summary Table A-3 Creating and administering disk groups Command Description vxdg [-s] init diskgroup \ [diskname=]devicename Creates a disk group using a preinitialized disk. See “Creating a disk group” on page 170. See “Creating a shared disk group” on page 422. Example: # vxdg init mydg \ mydg01=c0t1d0 vxsplitlines -g diskgroup Reports conflicting configuration information. See “Handling conflicting configuration copies” on page 190.
Commands summary Table A-3 Creating and administering disk groups Command Description vxdg [-o expand] listmove sourcedg \ targetdg object ... Lists the objects potentially affected by moving a disk group. See “Listing objects potentially affected by a move” on page 200. Example: # vxdg -o expand listmove \ mydg newdg myvol1 vxdg [-o expand] move sourcedg \ targetdg object ... Moves objects between disk groups. See “Moving objects between disk groups” on page 203.
494 Commands summary Table A-3 Creating and administering disk groups Command Description vxrecover -g diskgroup -sb Starts all volumes in an imported disk group. See “Moving disk groups between systems” on page 185. Example: # vxrecover -g mydg -sb vxdg destroy diskgroup Destroys a disk group and releases its disks. See “Destroying a disk group” on page 208.
Commands summary Table A-4 Creating and administering subdisks Command Description vxsd [-g diskgroup] assoc plex \ subdisk1:0 ... subdiskM:N-1 Adds subdisks to the ends of the columns in a striped or RAID-5 volume. See “Associating subdisks with plexes” on page 218. Example: # vxsd -g mydg assoc \ vol01-01 mydg10-01:0 \ mydg11-01:1 mydg12-01:2 vxsd [-g diskgroup] mv oldsubdisk \ newsubdisk ... Replaces a subdisk. See “Moving subdisks” on page 217.
496 Commands summary Table A-4 Creating and administering subdisks Command Description vxunreloc [-g diskgroup] original_disk Relocates subdisks to their original disks. See “Moving and unrelocating subdisks using vxunreloc” on page 392. Example: # vxunreloc -g mydg mydg01 vxsd [-g diskgroup] dis subdisk Dissociates a subdisk from a plex. See “Dissociating subdisks from plexes” on page 221. Example: # vxsd -g mydg dis mydg02-01 vxedit [-g diskgroup] rm subdisk Removes a subdisk.
Commands summary Table A-5 Creating and administering plexes Command Description vxmake [-g diskgroup] plex plex \ layout=stripe|raid5 stwidth=W \ ncolumn=N sd=subdisk1[,subdisk2,...] Creates a striped or RAID-5 plex. See “Creating a striped plex” on page 224. Example: # vxmake -g mydg plex pl-01 \ layout=stripe stwidth=32 \ ncolumn=2 \ sd=mydg01-01,mydg02-01 vxplex [-g diskgroup] att volume plex Attaches a plex to an existing volume. See “Attaching and associating plexes” on page 229.
498 Commands summary Table A-5 Creating and administering plexes Command Description vxplex [-g diskgroup] cp volume newplex Copies a volume onto a plex. See “Copying volumes to plexes” on page 233. Example: # vxplex -g mydg cp vol02 \ vol03-01 vxmend [-g diskgroup] fix clean plex Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching plexes” on page 231.
Commands summary Table A-6 Creating volumes Command Description vxassist -b [-g diskgroup] make \ volume length [layout=layout ] [attributes] Creates a volume. See “Creating a volume on any disk” on page 243. See “Creating a volume on specific disks” on page 244. Example: # vxassist -b -g mydg make \ myvol 20g layout=concat \ mydg01 mydg02 vxassist -b [-g diskgroup] make \ volume length layout=mirror \ [nmirror=N] [attributes] Creates a mirrored volume. See “Creating a mirrored volume” on page 249.
500 Commands summary Table A-6 Creating volumes Command Description vxassist -b [-g diskgroup] make \ volume length layout=mirror \ mirror=ctlr [attributes] Creates a volume with mirrored data plexes on separate controllers. See “Mirroring across targets, controllers or enclosures” on page 255. Example: # vxassist -b -g mydg make \ mymcvol 20g layout=mirror \ mirror=ctlr vxmake -b [-g diskgroup] -Uusage_type \ vol volume [len=length] plex=plex,... Creates a volume from existing plexes.
Commands summary Table A-7 Administering volumes Command Description vxassist [-g diskgroup] mirror volume \ [attributes] Adds a mirror to a volume. See “Adding a mirror to a volume” on page 271. Example: # vxassist -g mydg mirror \ myvol mydg10 vxassist [-g diskgroup] remove \ mirror volume [attributes] Removes a mirror from a volume. See “Removing a mirror” on page 273.
502 Commands summary Table A-7 Administering volumes Command Description vxsnap [-g diskgroup] prepare volume \ [drl=on|sequential|off] Prepares a volume for instant snapshots and for DRL logging. See “Preparing a volume for DRL and instant snapshots” on page 275. Example: # vxsnap -g mydg prepare \ myvol drl=on vxsnap [-g diskgroup] make \ source=volume/newvol=snapvol\ [/nmirror=number] Takes a full-sized instant snapshot of a volume by breaking off plexes of the original volume.
Commands summary Table A-7 Administering volumes Command Description vxmake [-g diskgroup] cache \ cache_object cachevolname=volume \ [regionsize=size] Creates a cache object for use by space-optimized instant snapshots. See “Creating a shared cache object” on page 322.
504 Commands summary Table A-7 Administering volumes Command Description vxsnap [-g diskgroup] unprepare volume Removes support for instant snapshots and DRL logging from a volume. See “Removing support for DRL and instant snapshots from a volume” on page 279. Example: # vxsnap -g mydg unprepare \ myvol vxassist [-g diskgroup] relayout \ volume [layout=layout] [relayout_options] Performs online relayout of a volume. See “Performing online relayout” on page 294.
Commands summary Table A-7 Administering volumes Command Description vxassist [-g diskgroup] convert \ volume [layout=layout] [convert_options] Converts between a layered volume and a non-layered volume layout. See “Converting between layered and non-layered volumes” on page 300. Example: # vxassist -g mydg convert \ vol3 layout=stripe-mirror vxassist [-g diskgroup] remove \ volume volume Removes a volume. See “Removing a volume” on page 290.
506 Commands summary Table A-8 Monitoring and controlling tasks Command Description vxtask pause task Suspends operation of a task. See “Using the vxtask command” on page 269. Example: # vxtask pause mytask vxtask -p [-g diskgroup] list Lists all paused tasks. See “Using the vxtask command” on page 269. Example: # vxtask -p -g mydg list vxtask resume task Resumes a paused task. See “Using the vxtask command” on page 269.
Commands summary Online manual pages Online manual pages Manual pages are organized into three sections: ■ Section 1M — administrative commands ■ Section 4 — file formats ■ Section 7 — device driver interfaces Section 1M — administrative commands Manual pages in section 1M describe commands that are used to administer Veritas Volume Manager. Table A-9 Section 1M manual pages Name Description dgcfgbackup Create or update VxVM volume group configuration backup file.
508 Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxconfigd Veritas Volume Manager configuration daemon vxconfigrestore Restore disk group configuration. vxcp_lvmroot Copy LVM root disk onto new Veritas Volume Manager root disk. vxdarestore Restore simple or nopriv disk access records. vxdco Perform operations on version 0 DCO objects and DCO volumes. vxdctl Control the volume configuration daemon.
Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxmend Mend simple problems in configuration records. vxmirror Mirror volumes on a disk or control default mirroring. vxnotify Display Veritas Volume Manager configuration events. vxpfto Set Powerfail Timeout (pfto). vxplex Perform Veritas Volume Manager operations on plexes. vxpool Create and administer ISP storage pools. vxprint Display records from the Veritas Volume Manager configuration.
510 Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxtune Adjust Veritas Volume Replicator and Veritas Volume Manager tunables. vxunreloc Move a hot-relocated subdisk back to its original disk. vxusertemplate Create and administer ISP user templates. vxvmboot Prepare Veritas Volume Manager volume as a root, boot, primary swap or dump volume. vxvmconvert Convert LVM volume groups to VxVM disk groups.
Appendix B Configuring Veritas Volume Manager This appendix provides guidelines for setting up efficient storage management after installing the Veritas Volume Manager software.
512 Configuring Veritas Volume Manager Adding unsupported disk arrays as JBODs Optional Setup Tasks ■ Place the root disk under VxVM control and mirror it to create an alternate boot disk. ■ Designate hot-relocation spare disks in each disk group. ■ Add mirrors to volumes. ■ Configure DRL and FastResync on volumes. Maintenance Tasks ■ Resize volumes and file systems. ■ Add more disks, create new disk groups, and create new volumes. ■ Create and maintain snapshots.
Configuring Veritas Volume Manager Guidelines for configuring storage groups. Storage pools are only required if you intend to use the ISP feature of VxVM. Guidelines for configuring storage A disk failure can cause loss of data on the failed disk and loss of access to your system. Loss of access is due to the failure of a key disk used for system operations. Veritas Volume Manager can protect your system from these problems.
514 Configuring Veritas Volume Manager Guidelines for configuring storage ■ Leave the Veritas Volume Manager hot-relocation feature enabled. See "Hot-relocation guidelines" on page 516 for details. Mirroring guidelines Refer to the following guidelines when using mirroring. ■ Do not place subdisks from different plexes of a mirrored volume on the same physical disk. This action compromises the availability benefits of mirroring and degrades performance.
Configuring Veritas Volume Manager Guidelines for configuring storage Dirty region logging guidelines Dirty region logging (DRL) can speed up recovery of mirrored volumes following a system crash. When DRL is enabled, Veritas Volume Manager keeps track of the regions within a volume that have changed as a result of writes to a plex. Note: Using Dirty Region Logging can impact system performance in a write-intensive environment. For more information, see "Dirty region logging" on page 60.
516 Configuring Veritas Volume Manager Guidelines for configuring storage ■ If more than one plex of a mirrored volume is striped, configure the same stripe-unit size for each striped plex. ■ Where possible, distribute the subdisks of a striped volume across drives connected to different controllers and buses. ■ Avoid the use of controllers that do not support overlapped seeks. (Such controllers are rare.)
Configuring Veritas Volume Manager Guidelines for configuring storage The hot-relocation feature is enabled by default. The associated daemon, vxrelocd, is automatically started during system startup. Refer to the following guidelines when using hot-relocation. ■ The hot-relocation feature is enabled by default. Although it is possible to disable hot-relocation, it is advisable to leave it enabled.
518 Configuring Veritas Volume Manager Controlling VxVM’s view of multipathed devices subdisks to determine whether they should be relocated to more suitable disks to regain the original performance benefits. ■ Although it is possible to build Veritas Volume Manager objects on spare disks (using vxmake or the VEA interface), it is recommended that you use spare disks for hot-relocation only. See “Administering hot-relocation” on page 379 for more information.
Configuring Veritas Volume Manager Configuring cluster support Configuring shared disk groups This section describes how to configure shared disks in a cluster. If you are installing Veritas Volume Manager for the first time or adding disks to an existing cluster, you need to configure new shared disks. If you are setting up Veritas Volume Manager for the first time, configure the shared disks using the following procedure: 1 Start the cluster on one node only to prevent access by other nodes.
520 Configuring Veritas Volume Manager Reconfiguration tasks If dirty region logs exist, ensure they are active. If not, replace them with larger ones. To display the shared flag for all the shared disk groups, use the following command: # vxdg list The disk groups are now ready to be shared. 3 Bring up the other cluster nodes. Enter the vxdg list command on each node to display the shared disk groups. This command displays the same list of shared disk groups displayed earlier.
Glossary Active/Active disk arrays This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation. Active/Passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
522 Glossary cluster A set of hosts (each termed a node) that share a set of disks. cluster manager An externally-provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership. cluster-shareable disk group A disk group in which access to the disks is shared by multiple hosts (also referred to as a shared disk group). Also see private disk group. column A set of one or more subdisks within a striped plex.
Glossary maintained in the DCO volume. Otherwise, the DRL is allocated to an associated subdisk called a log subdisk. disabled path A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller. disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier. disk access name An alternative term for a device name.
524 Glossary An alternative term for a disk name. disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name. disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name. dissociate The process by which any link that exists between two VxVM objects is removed.
Glossary An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other VxVM object. free subdisk A subdisk that is not associated with any plex and has an empty putil[0] field. hostid A string that identifies a host to VxVM. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups. hot-relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails.
526 Glossary Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host, (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality. node One of the hosts in a cluster. node abort A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations.
Glossary A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO volume on disk. Also see data change object (DCO). persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging. physical disk The underlying storage device, which may or may not be under VxVM control.
528 Glossary The disk containing the root file system. This disk may be under VxVM control. root file system The initial file system mounted as part of the UNIX kernel startup sequence. root partition The disk region on which the root file system resides. root volume The VxVM volume that contains the root file system, if such a volume is designated by the system configuration. rootability The ability to place the root file system and the swap device under VxVM control.
Glossary A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk). Storage Area Network (SAN) A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage and interconnecting hardware such as switches, hubs and bridges. stripe A set of stripe units that occupy the same positions across a series of columns.
530 Glossary A virtual disk, representing an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes. volume configuration device The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed. volume device driver The driver that forms the virtual disk drive between the application and the physical device driver level.
Index Symbols /dev/vx/dmp directory 126 /dev/vx/rdmp directory 126 /etc/default/vxassist file 241, 390 /etc/default/vxdg defaults file 403 /etc/default/vxdg file 171 /etc/default/vxdisk file 81, 97 /etc/default/vxse file 448 /etc/fstab file 290 /etc/volboot file 212 /etc/vx/darecs file 212 /etc/vx/disk.info file 93 /etc/vx/dmppolicy.info file 148 /etc/vx/volboot file 186 /sbin/init.
532 Index ndcomirror 251, 252, 357 ndcomirs 275, 321 newvol 330 nmirror 330 nomanual 146 nopreferred 146 plex 234 preferred priority 146 primary 147 putil 222, 234 secondary 147 sequential DRL 252 setting for paths 146 setting for rules 448 snapvol 327, 332 source 327, 332 standby 147 subdisk 221 syncing 319, 344 tutil 222, 234 auto disk type 81 autogrow tuning 346 autogrow attribute 322, 325 autogrowby attribute 322 autotrespass mode 125 B backups created using snapshots 319 creating for volumes 303 crea
Index clusters activating disk groups 403 activating shared disk groups 425 activation modes for shared disk groups 402 benefits 397 checking cluster protocol version 427 cluster-shareable disk groups 401 configuration 410 configuring exclusive open of volume by node 426 connectivity policies 404 converting shared disk groups to private 424 creating shared disk groups 422 designating shareable disk groups 401 detach policies 404 determining if disks are shared 421 forcibly adding disks to disk groups 423 f
534 Index crash dumps using VxVM volumes for 107 Cross-platform Data Sharing (CDS) alignment constraints 242 disk format 81 CVM cluster functionality of VxVM 397 D d# 20, 78 data change object DCO 69 data redundancy 42, 43, 46 data volume configuration 62 database replay logs and sequential DRL 61 databases resilvering 62 resynchronizing 62 DCO adding to RAID-5 volumes 277 adding version 0 DCOs to volumes 356 adding version 20 DCOs to volumes 275 calculating plex size for version 20 70 considerations for
Index A/P-C 126 A/PF 126 A/PF-C 126 A/PG 126 A/PG-C 126 Active/Active 126 Active/Passive 125 adding disks to DISKS category 87 adding vendor-supplied support package 84 Asymmetric Active/Active 126 defined 21 excluding support for 86 listing excluded 86 listing supported 85 listing supported disks in DISKS category 87 multipathed 22 re-including support for 86 removing disks from DISKS category 89 removing vendor-supplied support package 84 disk drives variable geometry 515 disk duplexing 42, 255 disk grou
536 Index serial split brain condition 190 setting connectivity policies in clusters 425 setting default disk group 168 setting failure policies in clusters 426 setting number of configuration copies 474 shared in clusters 401 specifying to commands 167 splitting 196, 205 splitting in clusters 424 Storage Expert rules 451 upgrading version of 208, 211 version 208, 210 disk media names 28, 77 disk names 77 configuring persistent 93 disk sparing Storage Expert rules 454 disk## 29, 78 disk##-## 29 diskdetpoli
Index spares 388 removing from VxVM control 112, 172 removing tags from 178 removing with subdisks 111, 112 renaming 119 replacing 112 replacing removed 115 reserving for special purposes 119 resolving status in clusters 404 scanning for 82 secondary path 138 setting connectivity policies in clusters 425 setting failure policies in clusters 426 setting tags on 177 simple 80 spare 384 specifying to vxassist 244 stripe unit size 515 tagging with site name 434 taking offline 118 UDID flag 175 unique identifie
538 Index dmp_scsi_timeout tunable 478 dmp_stat_interval tunable 479 DRL adding log subdisks 220 adding logs to mirrored volumes 281 checking existence of 450 checking existence of mirror 450 creating volumes with DRL enabled 252, 253 determining if active 278 determining if enabled 278 dirty bits 60 dirty region logging 60 disabling 278 enabling on volumes 275 hot-relocation limitations 381 log subdisks 61 maximum number of dirty regions 482 minimum number of sectors 482 recovery map in version 20 DCO 69
Index use with snapshots 66 fastresync attribute 251, 252, 293 file systems growing using vxresize 285 shrinking using vxresize 285 unmounting 290 fire drill defined 432 testing 440 firmware upgrading 154 FMR.
540 Index initialization of disks 90 instant snapshots backing up multiple volumes 333 cascaded 312 creating backups 319 creating for volume sets 334 creating full-sized 327 creating space-optimized 324 creating volumes for use as full-sized 323 displaying information about 342 dissociating 340 full-sized 307 improving performance of synchronization 345 reattaching 338 refreshing 337 removing 341 removing support for 279 restoring volumes using 340 space-optimized 309 splitting hierarchies 341 synchronizin
Index LUN group failover 126 LUN groups displaying details of 140 LUNs idle 477 M maps adding to volumes 274 usage with volumes 237 master node defined 400 discovering 420 maxautogrow attribute 322 maxdev attribute 189 MC/ServiceGuard use with VxVM in clusters 410 memory granularity of allocation by VxVM 482 maximum size of pool for VxVM 483 minimum size of pool for VxVM 485 persistence of FastResync in 67 messages complete disk failure 384 hot-relocation of subdisks 390 partial disk failure 383 metadata
542 Index plex attribute 234 renaming disks 119 subdisk 29 subdisk attribute 221 VM disk 29 volume 31 naming scheme changing for disks 91 changing for TPD enclosures 94 for disk devices 78 native multipathing 77, 130 ncachemirror attribute 325 ndcomirror attribute 251, 252, 357 ndcomirs attribute 275, 321 NEEDSYNC volume state 266 newvol attribute 330 nmirror attribute 329, 330 NODAREC plex condition 228 nodes DMP 126 in clusters 399 maximum number in a cluster 397 node abort in clusters 417 requesting sta
Index hot spots identified by I/O traces 472 impact of number of disk group configuration copies 473 improving for instant snapshot synchronization 345 load balancing in DMP 129 mirrored volumes 464 monitoring 467 moving volumes to improve 470 obtaining statistics for disks 470 obtaining statistics for volumes 468 RAID-5 volumes 465 setting priorities 467 striped volumes 464 striping to improve 471 tracing volume operations 468 tuning large systems 473 tuning VxVM 472 using I/O statistics 469 persistent de
544 Index condition flags 228 converting to snapshot 351 copying 233 creating 223 creating striped 224 defined 30 detaching from volumes temporarily 231 disconnecting from volumes 230 displaying information about 224 dissociating from volumes 233 dissociating subdisks from 221 failure in hot-relocation 380 kernel states 229 limit on number per volume 467 maximum number of subdisks 481 maximum number per volume 31 mirrors 33 moving 232, 277, 358 name attribute 234 names 31 partial failure messages 383 putil
Index performance of 466 prefer 289 round 289 select 289 siteread 289, 433, 434, 436 split 289 read-only mode 402 readonly mode 402 RECOVER plex condition 228 recovery checkpoint interval 479 I/O delay 479 preventing on restarting volumes 271 recovery accelerator 62 recovery time Storage Expert rules 449 redo log configuration 63 redundancy checking for volumes 452 of data on mirrors 236 of data on RAID-5 236 redundant-loop access 25 region 80 regionsize attribute 275, 321, 322 reinitialization of disks 10
546 Index read policy 289 rules attributes 458 checking attribute values 447 checking disk group configuration copies 451 checking disk group version number 451 checking for full disk group configuration database 451 checking for initialized disks 452 checking for mirrored DRL 450 checking for multiple RAID-5 logs on a disk 449 checking for non-imported disk groups 452 checking for non-mirrored RAID-5 log 450 checking for RAID-5 log 450 checking hardware 454 checking hot-relocation 454 checking mirrored vo
Index siteconsistent attribute 435 siteread read policy 289, 433, 434, 436 sites reattaching 440 size units 216 slave nodes defined 400 SmartSync 62 disabling on shared disk groups 482 enabling on shared disk groups 482 snap objects 72 snap volume naming 317 snapabort 305 SNAPATT plex state 226 snapback defined 306 merging snapshot volumes 352 resyncfromoriginal 317 resyncfromreplica 317, 353 snapclear creating independent volumes 354 SNAPDIS plex state 226 SNAPDONE plex state 226 snapmir snapshot type 343
standby path attribute 147
states
    for plexes 224
    of link objects 311
    volume 265
statistics gathering 128
storage
    ordered allocation of 245, 251, 257
storage attributes and volume layout 244
storage cache
    used by space-optimized instant snapshots 309
Storage Expert
    check keyword 447
    checking default values of rule attributes 447
    command-line syntax 446
    diagnosing configuration issues 449
    info keyword 447
    introduced 445
    list keyword 447
    listing rule attributes 447
    obtaining a description of a rule
subdisks
    physical disk placement 513
    putil attribute 222
    RAID-5 failure of 380
    RAID-5 plex, configuring 516
    removing from VxVM 221
    restrictions on moving 217
    specifying different offsets for unrelocation 393
    splitting 217
    tutil attribute 222
    unrelocating after hot-relocation 390
    unrelocating to different disks 393
    unrelocating using vxassist 392
    unrelocating using vxdiskadm 391
    unrelocating using vxunreloc 392
swap space
    using VxVM volumes to increase 107
SYNC volume state 266
synchronization
    controlling for
tunables
    vol_default_iodelay 479
    vol_fmr_logsz 68, 479
    vol_max_vol 480
    vol_maxio 480
    vol_maxioctl 480
    vol_maxparallelio 481
    vol_maxspecialio 481
    vol_subdisk_num 481
    volcvm_smartsync 482
    voldrl_max_drtregs 482
    voldrl_max_seq_dirty 61, 482
    voldrl_min_regionsz 482
    voliomem_chunk_size 482
    voliomem_maxpool_sz 483
    voliot_errbuf_dflt 483
    voliot_iobuf_default 483
    voliot_iobuf_limit 483
    voliot_iobuf_max 484
    voliot_max_open 484
    volpagemod_max_memsz 484
    volraid_minpool_size 485
    volraid_rsrtransmax 485
tutil
    plex att
volume kernel states
    DETACHED 267
    DISABLED 267
    ENABLED 267
volume length, RAID-5 guidelines 516
volume resynchronization 59
volume sets
    adding volumes to 362
    administering 361
    controlling access to raw device nodes 366
    creating 362
    creating instant snapshots of 334
    displaying access to raw device nodes 366
    enabling access to raw device nodes 365
    listing details of 363
    raw device nodes 364
    removing volumes from 364
    starting 363
    stopping 363
volume states
    ACTIVE 265
    CLEAN 265
    EMPTY 265
    INVALID 266
    NEEDSYNC 266
    REPLAY 266
    S
volumes
    effect of growing on FastResync maps 73
    enabling FastResync on 292
    enabling FastResync on new 251
    excluding storage from use by vxassist 244
    finding maximum size of 242
    finding out maximum possible growth of 284
    flagged as dirty 59
    initializing contents to zero 261
    initializing using vxassist 260
    initializing using vxvol 261
    kernel states 266
    layered 43, 51, 237
    limit on number of plexes 31
    limitations 31
    making immediately available for use 260
    maximum number of 480
    maximum number of data plexes
    zeroing out contents of 261
vxassist
    adding a log subdisk 220
    adding a RAID-5 log 283
    adding DCOs to volumes 357
    adding DRL logs 281
    adding mirrors to volumes 230, 271
    adding sequential DRL logs 282
    advantages of using 239
    changing number of columns 298
    changing stripe unit size 298
    command usage 240
    configuring exclusive access to a volume 426
    configuring site consistency on volumes 435
    converting between layered and non-layered volumes 300
    creating cache volumes 322
    creating concatenated-mirror vol
    reattaching version 0 DCOs to volumes 359
    removing version 0 DCOs from volumes 358
vxdctl
    checking cluster protocol version 427
    managing vxconfigd 212
    setting a site tag 434
    setting default disk group 168
    upgrading cluster protocol version 428
    usage in clusters 420
vxdctl enable
    configuring new disks 82
    invoking device discovery 84
    used to configure new disks 82
vxddladm
    adding disks to DISKS category 87
    adding foreign devices 89
    changing naming scheme 92
    listing excluded disk arrays 86
    listing s
vxdisk scandisks
    rescanning devices 82
    scanning devices 82
vxdiskadd
    adding disks to disk groups 171
    creating disk groups 171
    placing disks under VxVM control 101
vxdiskadm
    Add or initialize one or more disks 97, 171
    adding disks 97
    adding disks to disk groups 171
    Change/display the default disk layout 97
    changing disk-naming scheme 91
    changing the disk-naming scheme 91
    creating disk groups 171
    deporting disk groups 173
    Disable (offline) a disk device 118
    Enable (online) a disk device 117
    Enable acce
vxedit
    removing instant snapshots 341
    removing plexes 234
    removing snapshots from a cache 347
    removing subdisks from VxVM 221
    removing volumes 290
    renaming disks 119
    reserving disks 119
VxFS file system resizing 285
vxiod I/O kernel threads 19
vxmake
    associating plexes with volumes 229
    associating subdisks with new plexes 218
    creating cache objects 322
    creating plexes 223, 271
    creating striped plexes 224
    creating subdisks 215
    creating volumes 258
    using description file with 259
vxmend
    putting plexes onl
vxse_dg2
    rule to check disk group configuration copies 451
vxse_dg3
    rule to check on disk config size 451
vxse_dg4
    rule to check disk group version number 451
vxse_dg5
    rule to check number of configuration copies in disk group 452
vxse_dg6
    rule to check for non-imported disk groups 452
vxse_disk
    rule to check for initialized disks 452
vxse_disklog
    rule to check for multiple RAID-5 logs on a disk 449
vxse_drl1
    rule to check for mirrored volumes without a DRL 450
vxse_drl2
    rule to check for mirrored DR
vxunreloc
    moving subdisks after hot-relocation 392
    restarting after errors 394
    specifying different offsets for unrelocated subdisks 393
    unrelocating subdisks after hot-relocation 392
    unrelocating subdisks to different disks 393
VxVM
    benefits to performance 463
    cluster functionality (CVM) 397
    configuration daemon 212
    configuring disk devices 82
    configuring to create mirrored volumes 272
    dependency on operating system 19
    disk discovery 83
    granularity of memory allocation by 482
    limitations of shared disk gr