Veritas™ Volume Manager 5.0.1 Administrator's Guide HP-UX 11i v3 HP Part Number: 5900-0087 Published: November 2009 Edition: 1.
© Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Chapter 1 Understanding Veritas Volume Manager
This chapter includes the following topics:
■ About Veritas Volume Manager
■ VxVM and the operating system
■ How VxVM handles storage management
■ Volume layouts in VxVM
■ Online relayout
■ Volume resynchronization
■ Dirty region logging
■ Volume snapshots
■ FastResync
■ Hot-relocation
■ Volume sets
About Veritas Volume Manager
Veritas™ Volume Manager (VxVM) by Symantec is a storage management subsystem that allows you to manage physical disks as logical devices called volumes.
22 Understanding Veritas Volume Manager VxVM and the operating system VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments. By supporting the Redundant Array of Independent Disks (RAID) model, VxVM can be configured to protect against disk and hardware failure, and to increase I/O throughput. Additionally, VxVM provides features that enhance fault tolerance and fast recovery from disk failure or storage array failure.
Understanding Veritas Volume Manager How VxVM handles storage management How data is stored There are several methods used to store data on physical disks. These methods organize data on the disk so the data can be stored and retrieved efficiently. The basic method of disk organization is called formatting. Formatting prepares the hard disk so that files can be written to and retrieved from the disk by using a prearranged storage pattern.
Figure 1-1 Physical disk example
In HP-UX 11i v3, disks may be identified either by their legacy device name, which takes the form c#t#d#, or by their persistent (or agile) device name, which takes the form disk##. In a legacy device name, c# specifies the controller, t# specifies the target ID, and d# specifies the disk.
Figure 1-2 How VxVM presents the disks in a disk array as volumes to the operating system
Data can be spread across several disks within an array to distribute or balance I/O operations across the disks.
26 Understanding Veritas Volume Manager How VxVM handles storage management Device discovery Device discovery is the term used to describe the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays.
Figure 1-3 Example configuration for disk enclosures connected via a fibre channel hub or switch
In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on.
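One benefit of enclosure-based naming is that enclosures can be given meaningful administrative names. As a hedged illustration (the enclosure and the new name here are placeholders, not examples from this guide), an enclosure might be renamed with:
# vxdmpadm setattr enclosure enc0 name=cabinet0
Subsequent VxVM output would then show the disks as cabinet0_0, cabinet0_1, and so on. See “Renaming an enclosure” on page 184.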
28 Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-4 shows a High Availability (HA) configuration where redundant-loop access to storage is implemented by connecting independent controllers on the host to separate hubs with independent paths to the enclosures.
each disk is known by the same name to VxVM for all of the paths over which it can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths, such as c1t99d0 and c2t99d0, are known to the operating system.
See “Disk device naming in VxVM” on page 77.
See “Changing the disk-naming scheme” on page 98.
The contents of physical disks can be brought under VxVM control only if VxVM takes control of the physical disks and the disks are not under the control of another storage manager such as LVM. VxVM creates virtual objects and makes logical connections between the objects. The virtual objects are then used by VxVM to do storage management tasks.
Figure 1-5 Connection between objects in VxVM
The disk group contains three VM disks which are used to create two volumes.
See “VM disks” on page 32.
In releases before VxVM 4.0, the default disk group was rootdg (the root disk group). For VxVM to function, the rootdg disk group had to exist and it had to contain at least one disk. This requirement no longer exists, and VxVM can work without any disk groups configured (although you must set up at least one disk group before you can create any volumes or other VxVM objects).
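For example, a disk group can be created when the first disk is brought under VxVM control. The following is a minimal sketch (the disk group name, disk media name, and device name are placeholders):
# vxdg init mydg mydg01=c0t1d0
This creates the disk group mydg containing the single VM disk mydg01 on device c0t1d0; further disks and volumes can then be added to the group.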
Figure 1-6 VM disk example
Subdisks
A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.
Figure 1-8 Example of three subdisks assigned to one VM disk
Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks.
Plexes
VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks.
Understanding Veritas Volume Manager How VxVM handles storage management Volumes A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume. Due to its virtual nature, a volume is not restricted to a particular disk or a specific area of a disk.
The volume vol01 has the following characteristics:
■ It contains one plex named vol01-01.
■ The plex contains one subdisk named disk01-01.
■ The subdisk disk01-01 is allocated from VM disk disk01.
Figure 1-11 shows a mirrored volume, vol06, with two data plexes.
Understanding Veritas Volume Manager Volume layouts in VxVM Non-layered volumes In a non-layered volume, a subdisk maps directly to a VM disk. This allows the subdisk to define a contiguous extent of storage space backed by the public region of a VM disk. When active, the VM disk is directly associated with an underlying physical disk. The combination of a volume layout and the physical disks therefore determines the storage service available from a given virtual device.
38 Understanding Veritas Volume Manager Volume layouts in VxVM See “Mirroring (RAID-1)” on page 44. ■ Striping plus mirroring (mirrored-stripe or RAID-0+1) See “Striping plus mirroring (mirrored-stripe or RAID-0+1)” on page 45. ■ Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10) See “Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)” on page 46. ■ RAID-5 (striping with parity) See “RAID-5 (striping with parity)” on page 47.
Figure 1-12 Example of concatenation
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are contiguous on the plex, but actually come from two distinct subdisks on the same physical disk.
Figure 1-13 Example of spanning
The blocks n, n+1, n+2 and n+3 (numbered relative to the start of the plex) are contiguous on the plex, but actually come from two distinct subdisks on two distinct physical disks.
Understanding Veritas Volume Manager Volume layouts in VxVM physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex. The subdisks are grouped into “columns,” with each physical disk limited to one column. Each column contains one or more subdisks and can be derived from one or more physical disks. The number and sizes of subdisks per column can vary. Additional subdisks can be added to columns, as necessary.
Figure 1-14 Striping across three columns
A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Figure 1-15 Example of a striped plex with one subdisk per column
There is one column per physical disk.
Figure 1-16 Example of a striped plex with concatenated subdisks per column
Understanding Veritas Volume Manager Volume layouts in VxVM Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable.
46 Understanding Veritas Volume Manager Volume layouts in VxVM Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10) Note: You need a full license to use this feature. VxVM supports the combination of striping above mirroring. This combined layout is called a striped-mirror layout. Putting mirroring below striping mirrors each column of the stripe. If there are multiple subdisks per column, each subdisk can be mirrored individually instead of each column.
Figure 1-19 How the failure of a single disk affects mirrored-stripe and striped-mirror volumes
As the figure shows, in a mirrored-stripe volume with no remaining redundancy, the failure of a disk detaches the entire striped plex; in a striped-mirror volume with partial redundancy, the failure of a disk removes redundancy from only one mirror. When the disk is replaced, the entire plex must be brought up to date.
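A striped-mirror volume is created with the vxassist command by specifying the appropriate layout. A minimal sketch, with the disk group, volume name, size, and column count chosen purely for illustration:
# vxassist -g mydg make volsm 10g layout=stripe-mirror ncol=3
See “Creating a striped-mirror volume” on page 296 for the full procedure and supported attributes.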
48 Understanding Veritas Volume Manager Volume layouts in VxVM Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods. Mirroring provides data redundancy by maintaining multiple complete copies of the data in a volume. Data being written to a mirrored volume is reflected in all copies. If a portion of a mirrored volume fails, the system continues to use the other copies of the data. RAID-5 provides data redundancy by using parity.
Understanding Veritas Volume Manager Volume layouts in VxVM Traditional RAID-5 arrays A traditional RAID-5 array is several disks organized in rows and columns. A column is a number of disks located in the same ordinal position in the array. A row is the minimal number of disks necessary to support the full width of a parity stripe. Figure 1-21 shows the row and column arrangement of a traditional RAID-5 array.
Figure 1-22 Veritas Volume Manager RAID-5 array
VxVM allows each column of a RAID-5 plex to consist of a different number of subdisks. The subdisks in a given column can be derived from different physical disks. Additional subdisks can be added to the columns as necessary.
Understanding Veritas Volume Manager Volume layouts in VxVM unit is located in the next stripe, shifted left one column from the previous parity stripe unit location. If there are more stripes than columns, the parity stripe unit placement begins in the rightmost column again. Figure 1-23 shows a left-symmetric parity layout with five disks (one per column).
52 Understanding Veritas Volume Manager Volume layouts in VxVM RAID-5 logging Logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device such as a volume on disk or in non-volatile RAM. The new data and parity are then written to the disks. Without logging, it is possible for data not involved in any active writes to be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail.
Understanding Veritas Volume Manager Volume layouts in VxVM Figure 1-25 shows a typical striped-mirror layered volume where each column is represented by a subdisk that is built from an underlying mirrored volume.
Creating striped-mirrors: see “Creating a striped-mirror volume” on page 296, and the vxassist(1M) manual page.
Creating concatenated-mirrors: see “Creating a concatenated-mirror volume” on page 290, and the vxassist(1M) manual page.
Online relayout: see “Online relayout” on page 54, and the vxassist(1M) and vxrelayout(1M) manual pages.
Moving RAID-5 subdisks: see the vxsd(1M) manual page.
Understanding Veritas Volume Manager Online relayout File systems mounted on the volumes do not need to be unmounted to achieve this transformation provided that the file system (such as Veritas File System) supports online shrink and grow operations. Online relayout reuses the existing storage space and has space allocation policies to address the needs of the new layout.
Figure 1-26 shows how decreasing the number of columns can require disks to be added to a volume.
Figure 1-26 Example of decreasing the number of columns in a volume
In this example, five columns of length L become three columns of length 5L/3. The size of the volume remains the same, but an extra disk is needed to extend one of the columns.
Note that adding parity increases the overall storage space that the volume requires.
■ Change the number of columns in a volume. Figure 1-29 shows an example of changing the number of columns.
Figure 1-29 Example of increasing the number of columns in a volume
In this example, two columns become three; the length of the columns is reduced to conserve the size of the volume.
■ Change the column stripe width in a volume.
58 Understanding Veritas Volume Manager Online relayout ■ The usual restrictions apply for the minimum number of physical disks that are required to create the destination layout. For example, mirrored volumes require at least as many disks as mirrors, striped and RAID-5 volumes require at least as many disks as columns, and striped-mirror volumes require at least as many disks as columns multiplied by mirrors.
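Relayout operations are performed with the vxassist relayout command. The following is a hedged sketch (the volume and disk group names, and the target column count, are illustrative only):
# vxassist -g mydg relayout vol03 layout=stripe ncol=3
The progress of an ongoing relayout can then be checked with:
# vxrelayout -g mydg status vol03
See the vxassist(1M) and vxrelayout(1M) manual pages for the supported attributes.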
Understanding Veritas Volume Manager Volume resynchronization Volume resynchronization When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
60 Understanding Veritas Volume Manager Dirty region logging Resynchronization can impact system performance. The recovery process reduces some of this impact by spreading the recoveries to avoid stressing a specific disk or controller. For large volumes or for a large number of volumes, the resynchronization process can take time.
Understanding Veritas Volume Manager Dirty region logging plex of the volume. Only one log subdisk can exist per plex. If the plex contains only a log subdisk and no data subdisks, that plex is referred to as a log plex. The log subdisk can also be associated with a regular plex that contains data subdisks. In that case, the log subdisk risks becoming unavailable if the plex must be detached due to the failure of one of its data subdisks.
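Dirty region logging is typically enabled by adding a DRL log to the volume. A minimal sketch, using placeholder names:
# vxassist -g mydg addlog vol01 logtype=drl
This adds a log plex holding the dirty region map for vol01, so that only recently written regions need to be resynchronized after a system failure.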
62 Understanding Veritas Volume Manager Dirty region logging The following section describes how to configure VxVM raw volumes and SmartSync. The database uses the following types of volumes: ■ Data volumes are the volumes used by the database (control files and tablespace files). ■ Redo log volumes contain redo logs of the database. SmartSync works with these two types of volumes differently, so they must be configured as described in the following sections.
Understanding Veritas Volume Manager Volume snapshots Volume snapshots Veritas Volume Manager provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. Such snapshots should not be confused with file system snapshots, which are point-in-time images of a Veritas File System. Figure 1-31 shows how a snapshot volume represents a copy of an original volume at a given point in time.
64 Understanding Veritas Volume Manager Volume snapshots One type of volume snapshot in VxVM is the third-mirror break-off type. This name comes from its implementation where a snapshot plex (or third mirror) is added to a mirrored volume. The contents of the snapshot plex are then synchronized from the original plexes of the volume. When this synchronization is complete, the snapshot plex can be detached as a snapshot volume for use in backup or decision support applications.
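As a hedged illustration of the traditional third-mirror break-off method (the names are placeholders), the snapshot plex is first added and synchronized, and is then broken off as a separate snapshot volume:
# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snap-vol01
The snapshot volume snap-vol01 can then be used for backup, after which it may be returned to the original volume with the vxassist snapback operation.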
Table 1-1 Comparison of snapshot features for supported snapshot types

Snapshot feature                                     Full-sized        Space-optimized   Break-off
                                                     instant (vxsnap)  instant (vxsnap)  (vxassist or vxsnap)
Immediately available for use on creation           Yes               Yes               No
Requires less storage space than original volume    No                Yes               No
Can be reattached to original volume                Yes               No                Yes
Can be used to restore contents of original volume  Yes               Yes               Yes
Can quickly be refreshed without being reattached   Yes               Yes               No
66 Understanding Veritas Volume Manager FastResync The FastResync feature (previously called Fast Mirror Resynchronization or FMR) performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized). This increases the efficiency of the VxVM snapshot mechanism, and improves the performance of operations such as backup and decision support applications.
Non-persistent FastResync
Non-persistent FastResync allocates its change maps in memory. They do not reside on disk or in persistent store. This has the advantage that updates to the FastResync map have little impact on I/O performance, as no disk updates need to be performed. However, if a system is rebooted, the information in the map is lost, so a full resynchronization is required on snapback.
68 Understanding Veritas Volume Manager FastResync DCO volume versioning The internal layout of the DCO volume changed in VxVM 4.0 to support new features such as full-sized and space-optimized instant snapshots, and a unified DRL/DCO. Because the DCO volume layout is versioned, VxVM software continues to support the version 0 layout for legacy volumes. However, you must configure a volume to have a version 20 DCO volume if you want to take instant snapshots of the volume.
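A volume is given a version 20 DCO volume by preparing it with the vxsnap command. A minimal sketch, with placeholder names:
# vxsnap -g mydg prepare vol01
An instant snapshot can subsequently be taken; for example, after a snapshot mirror has been added with vxsnap addmir:
# vxsnap -g mydg make source=vol01/newvol=snapvol01/nmirror=1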
Understanding Veritas Volume Manager FastResync See “Dirty region logging” on page 60. Each bit in a map represents a region (a contiguous number of blocks) in a volume’s address space. A region represents the smallest portion of a volume for which changes are recorded in a map. A write to a single byte of storage anywhere within a region is treated in the same way as a write to the entire region.
Figure 1-32 Mirrored volume with persistent FastResync enabled
Associated with the volume are a DCO object and a DCO volume with two plexes. To create a traditional third-mirror snapshot or an instant (copy-on-write) snapshot, the vxassist snapstart or vxsnap make operation respectively is performed on the volume.
Understanding Veritas Volume Manager FastResync See “Comparison of snapshot features” on page 64. A traditional snapshot volume is created from a snapshot plex by running the vxassist snapshot operation on the volume. For instant snapshots, however, the vxsnap make command makes an instant snapshot volume immediately available for use. There is no need to run an additional command. Figure 1-34 shows how the creation of the snapshot volume also sets up a DCO object and a DCO volume for the snapshot volume.
72 Understanding Veritas Volume Manager FastResync allows VxVM to track the relationship between volumes and their snapshots even if they are moved into different disk groups. The snap objects in the original volume and snapshot volume are automatically deleted in the following circumstances: ■ For traditional snapshots, the vxassist snapback operation is run to return all of the plexes of the snapshot volume to the original volume.
Understanding Veritas Volume Manager FastResync the commands vxsnap reattach, vxsnap restore, or vxassist snapback. Growing the two volumes separately can lead to a snapshot that shares physical disks with another mirror in the volume. To prevent this, grow the volume after the snapback command is complete.
74 Understanding Veritas Volume Manager Hot-relocation See the vxassist (1M) manual page. See the vxplex (1M) manual page. See the vxvol (1M) manual page. Hot-relocation Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks.
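Hot-relocation draws on disks that have been marked as spares. As a hedged example (the names are placeholders), a disk can be designated as a hot-relocation spare with:
# vxedit -g mydg set spare=on mydg01
The vxrelocd daemon then uses such spare disks, or free space in the disk group, when it relocates subdisks from a failed disk.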
Chapter 2 Administering disks
This chapter includes the following topics:
■ About disk management
■ Disk devices
■ Discovering and configuring newly added disk devices
■ Disks under VxVM control
■ Changing the disk-naming scheme
■ Discovering the association between enclosure-based disk names and OS-based disk names
■ Disk installation and formatting
■ Displaying or changing default disk layout attributes
■ Adding a disk to VxVM
■ Rootability
■ Dynamic LUN expansion
■ Removing disks
■ Displaying disk information
■ Controlling Powerfail Timeout
About disk management
Veritas Volume Manager (VxVM) allows you to place disks under VxVM control, to initialize disks, and to remove and replace disks.
Note: Most VxVM commands require superuser or equivalent privileges.
Administering disks Disk devices In HP-UX 11i v3, the persistent (agile) forms of such devices are located in the /dev/disk and /dev/rdisk directories. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories. VxVM uses the device names to create metadevices in the /dev/vx/[r]dmp directories.
78 Administering disks Disk devices You can change the disk-naming scheme if required. See “Changing the disk-naming scheme” on page 98. Operating system-based naming Under operating system-based naming, all disk devices except fabric mode disks are displayed either using the legacy c#t#d# format or the persistent disk## format. By default, VxVM commands display the names of these devices in the legacy format as these correspond to the names of the metanodes that are created by DMP.
Administering disks Disk devices By default, enclosure-based names are persistent, so they do not change after reboot. If a CVM cluster is symmetric, each node in the cluster accesses the same set of disks. Enclosure-based names provide a consistent naming system so that the device names are the same on each node. To display the native OS device names of a VM disk (such as mydg01), use the following command: # vxdisk path | grep diskname See “Renaming an enclosure” on page 184.
auto: when the vxconfigd daemon is started, VxVM obtains a list of known disk device addresses from the operating system and configures disk access records for them automatically.
nopriv: there is no private region (only a public region for allocating subdisks). This is the simplest disk type, consisting only of space for allocating subdisks.
Administering disks Discovering and configuring newly added disk devices using the vxdiskadm(1M) command to update the /etc/default/vxdisk defaults file. Auto-configured EFI disks are formatted as hpdisk disks by default. See “Displaying or changing default disk layout attributes” on page 107. See the vxdisk(1M) manual page.
82 Administering disks Discovering and configuring newly added disk devices The vxdisk scandisks command rescans the devices in the OS device tree and triggers a DMP reconfiguration. You can specify parameters to vxdisk scandisks to implement partial device discovery.
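For example (hedged; the device names are placeholders), partial device discovery can be limited to newly added devices, or to an explicit list of devices:
# vxdisk scandisks new
# vxdisk scandisks device=c1t1d0,c2t2d0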
Administering disks Discovering and configuring newly added disk devices Discovering disks and dynamically adding disk arrays DMP uses array support libraries (ASLs) to provide array-specific support for multipathing. An array support library (ASL) is a dynamically loadable shared library (plug-in for DDL). The ASL implements hardware-specific logic to discover device attributes during device discovery.
84 Administering disks Discovering and configuring newly added disk devices Disks in JBODs that do not fall into any supported category, and which are not capable of being multipathed by DMP are placed in the OTHER_DISKS category. Adding support for a new disk array You can dynamically add support for a new type of disk array which has been developed by a third-party vendor. The support comes in the form of array support libraries (ASLs). Add support to an HP-UX system by using the swinstall command.
Administering disks Discovering and configuring newly added disk devices If the arrays remain physically connected to the host after support has been removed, they are listed in the OTHER_DISKS category, and the volumes remain available.
■ Remove support for an array from DDL.
■ List information about excluded disk arrays.
■ List disks that are supported in the DISKS (JBOD) category.
■ Add disks from different vendors to the DISKS category.
■ Remove disks from the DISKS category.
■ Add disks as foreign devices.
The following sections explain these tasks in more detail.
See the vxddladm(1M) manual page.
Firmware: the firmware version.
Discovery: the discovery method employed for the targets.
State: whether the device is Online or Offline.
Address: the hardware address.
To list all the Host Bus Adapters including iSCSI
◆ Type the following command:
# vxddladm list hbas
You can use this command to obtain all of the HBAs, including iSCSI devices, configured on the system.
88 Administering disks Discovering and configuring newly added disk devices Listing the targets configured from a Host Bus Adapter or port You can obtain information about all the targets configured from a Host Bus Adapter. This includes the following information: Alias The alias name, if available. HBA-ID Parent HBA or port. State Whether the device is Online or Offline. Address The hardware address.
State: whether the device is Online or Offline.
DDL status: whether the device is claimed by DDL. If claimed, the output also displays the ASL name.
To list the devices configured from a Host Bus Adapter
◆ To obtain the devices configured, use the following command:
# vxddladm list devices
Device     Target-ID    State     DDL status (ASL)
------------------------------------------------------------
c2t0d2     c2_p0_t0     Online    CLAIMED (libvxemc.so)
Table 2-1 Parameters for iSCSI devices (continued)

Parameter                  Default value   Minimum value   Maximum value
FirstBurstLength           65535           512             16777215
InitialR2T                 yes             no              yes
ImmediateData              yes             no              yes
MaxBurstLength             262144          512             16777215
MaxConnections             1               1               65535
MaxOutStandingR2T          1               1               65535
MaxRecvDataSegmentLength   8182            512             16777215

To get the iSCSI operational parameters on the initiator for a specific iSCSI target
◆
To set the iSCSI operational parameters on the initiator for a specific iSCSI target
◆ Type the following command:
# vxddladm setiscsi target=tgt-id parameter=value
Displaying details about a supported array library
To display details about a supported array library
◆ Type the following command:
# vxddladm listsupport libname=library_name.so
Re-including support for an excluded disk array library
To re-include support for an excluded disk array library
◆ If you have excluded support for all arrays that depend on a particular disk array library, you can use the includearray keyword to remove the entry from the exclude list, as shown in the following example:
# vxddladm includearray libname=libvxemc.so
Administering disks Discovering and configuring newly added disk devices To add an unsupported disk array to the DISKS category 1 Use the following command to identify the vendor ID and product ID of the disks in the array: # /etc/vx/diag.d/vxdmpinq device_name where device_name is the device name of one of the disks in the array. Note the values of the vendor ID (VID) and product ID (PID) in the output from this command.
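In the steps that follow, the VID and PID values identified here are supplied to the vxddladm addjbod command, which places the array in the DISKS category. A hedged sketch, using example values:
# vxddladm addjbod vid=SEAGATE pid=ST318404LSUN18G
The vendor ID and product ID shown are illustrative; use the values reported by vxdmpinq for your array.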
94 Administering disks Discovering and configuring newly added disk devices 5 Use the vxdctl enable command to bring the array under VxVM control. # vxdctl enable See “Enabling discovery of new disk arrays” on page 84.
7 To verify that the array is recognized, use the vxdmpadm listenclosure command as shown in the following sample output for the example array:
# vxdmpadm listenclosure
ENCLR_NAME    ENCLR_TYPE    ENCLR_SNO     STATUS
======================================================
OTHER_DISKS   OTHER_DISKS   OTHER_DISKS   CONNECTED
Disk          Disk          DISKS         CONNECTED
The enclosure name and type for the array are both shown as being set to Disk.
96 Administering disks Discovering and configuring newly added disk devices Foreign devices DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multipathing or RAM disk capabilities. For these devices it may be preferable to use the multipathing capability that is provided by the third-party drivers for some arrays rather than using the Dynamic Multipathing (DMP) feature.
Administering disks Disks under VxVM control ■ Foreign devices, such as HP-UX native multipathing metanodes, do not have enclosures, controllers or DMP nodes that can be administered using VxVM commands. An error message is displayed if you attempt to use the vxddladm or vxdmpadm commands to administer such devices while HP-UX native multipathing is configured. ■ The I/O Fencing and Cluster File System features are not supported for foreign devices.
98 Administering disks Changing the disk-naming scheme For example, use the following commands to destroy the file system and initialize the disk: # dd if=/dev/zero of=/dev/dsk/diskname bs=1024k count=50 # vxdisk scandisks # vxdisk -f init diskname ■ If the disk was previously in use by the LVM subsystem, you can preserve existing data while still letting VxVM take control of the disk. This is accomplished using conversion.
Administering disks Changing the disk-naming scheme Table 2-2 Modes to display device names for all VxVM commands Mode Format of output from VxVM command default The same format is used as in the input to the command (if this can be determined). Otherwise, legacy names are used. This is the default mode. legacy Only legacy names are displayed. new Only new (agile) names are displayed.
100 Administering disks Changing the disk-naming scheme To change the disk-naming scheme ◆ Select Change the disk naming scheme from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. When prompted, enter y to change the naming scheme. For operating system based naming, you are asked to select between default, legacy or new device names. Alternatively, you can change the naming scheme from the command line.
NAME      STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
==================================================================
c1t65d0   ENABLED   Disk         2       2      0      Disk

# vxdmpadm getlungroup dmpnodename=disk25
NAME      STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
==================================================================
disk25    ENABLED   Disk         2       2      0      Disk

# vxddladm set namingscheme=osn mode=legacy
# vxdmpadm getlungroup dmpnodename=c1t65d0
NAME      STATE     ENCLR-TYPE   PATHS   ENBL   DSBL   ENCLR-NAME
==================================================================
c1t65d0   ENABLED   Disk         2       2      0      Disk
Displaying the disk-naming scheme
VxVM disk naming can be operating system-based naming or enclosure-based naming. You can display whether the VxVM disk-naming scheme is currently set, together with the attributes of the naming scheme, such as whether persistence is enabled.
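A minimal sketch, assuming the vxddladm get namingscheme subcommand available in this release:
# vxddladm get namingscheme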
Administering disks Changing the disk-naming scheme Changing device naming for TPD-controlled enclosures The feature to change device naming is available only if the disk-naming scheme is set to use operating system-based naming, and the TPD-controlled enclosure does not contain fabric disks.
Note: You cannot run vxdarestore if c#t#d# naming is in use. Additionally, vxdarestore does not handle failures on persistent simple or nopriv disks that are caused by renaming enclosures, by hardware reconfiguration that changes device names, or by removing support from the JBOD category for disks that belong to a particular vendor when enclosure-based naming is in use.
Administering disks Changing the disk-naming scheme Removing the error state for persistent simple or nopriv disks in non-boot disk groups If an imported disk group, other than bootdg, consists only of persistent simple and/or nopriv disks, it is put in the “online dgdisabled” state after the change to the enclosure-based naming scheme.
106 Administering disks Discovering the association between enclosure-based disk names and OS-based disk names Discovering the association between enclosure-based disk names and OS-based disk names To discover the association between enclosure-based disk names and OS-based disk names ◆ If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than OS-based names.
Administering disks Displaying or changing default disk layout attributes Displaying or changing default disk layout attributes To display or change the default values for initializing the layout of disks ◆ Select Change/display the default disk layout from the vxdiskadm main menu. For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file, /etc/default/vxdisk. See the vxdisk(1M) manual page.
To initialize disks for VxVM use
1 Select Add or initialize one or more disks from the vxdiskadm main menu.
2 At the following prompt, enter the disk device name of the disk to be added to VxVM control (or enter list for a list of disks):
Select disk devices to add: [<pattern-list>,all,list,q,?]
The pattern-list can be a single disk, or a series of disks and/or controllers (with optional targets).
Administering disks Adding a disk to VxVM 5 If you specified the name of a disk group that does not already exist, vxdiskadm prompts for confirmation that you really want to create this new disk group: There is no active disk group named disk group name.
110 Administering disks Adding a disk to VxVM 10 To continue with the operation, enter y (or press Return) at the following prompt: The selected disks will be added to the disk group disk group name with default disk names.
14 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.)
Enter desired private region length [<privlen>,q,?] (default: 32768)
vxdiskadm then proceeds to add the disks.
112 Administering disks Rootability Note: This release only supports the conversion of LVM version 1 volume groups to VxVM. It does not support the conversion of LVM version 2 volume groups. See the Veritas Volume Manager Migration Guide. Using vxdiskadd to put a disk under VxVM control To use the vxdiskadd command to put a disk under VxVM control.
Administering disks Rootability LIF LABEL record is initialized with volume extent information for the stand, root, swap, and dump (if present) volumes. See “Setting up a VxVM root disk and mirror” on page 115. From the AR0902 release of HP-UX 11i onward, you can choose to configure either a VxVM root disk or an LVM root disk at install time. See the HP-UX Installation and Configuration Guide. See the Veritas Volume Manager Troubleshooting Guide.
114 Administering disks Rootability Root disk mirrors All the volumes on a VxVM root disk may be mirrored. The simplest way to achieve this is to mirror the VxVM root disk onto an identically sized or larger physical disk. If a mirror of the root disk must also be bootable, the restrictions for the encapsulated root disk also apply to the mirror disk. See “Booting root volumes” on page 114.
volumes is performed using the VxVM configuration objects that were loaded into the kernel.
Setting up a VxVM root disk and mirror
To set up a VxVM root disk and a bootable mirror of this disk, use the vxcp_lvmroot utility.
116 Administering disks Rootability # /etc/vx/bin/vxcp_lvmroot -m c1t1d0 -R 30 -v -b c0t4d0 In this example, the -b option to vxcp_lvmroot sets c0t4d0 as the primary boot device and c1t1d0 as the alternate boot device.
Administering disks Rootability remove the VxVM root disk or any mirrors of this disk, nor does it affect their bootability. The target disk must be large enough to accommodate the volumes from the VxVM root disk. Warning: This procedure should be carried out at init level 1. This example shows how to create an LVM root disk on physical disk c0t1d0 after removing the existing LVM root disk configuration from that disk.
4 Add the volume to the /etc/fstab file, and enable the volume as a swap device.
# echo "/dev/vx/dsk/bootdg/swapvol1 - swap defaults 0 0" \
>> /etc/fstab
# swapon -a
5 View the changed swap configuration:
# swapinfo
Adding persistent dump volumes to a VxVM rootable system
A persistent dump volume is used when creating crash dumps, which are eventually saved in the /var/adm/crash directory. A maximum of ten VxVM volumes can be configured as persistent dump volumes.
Administering disks Dynamic LUN expansion Removing a persistent dump volume Warning: The system will not boot correctly if you delete a dump volume without first removing it from the crash dump configuration. Use this procedure to remove a dump volume from the crash dump configuration.
120 Administering disks Removing disks Any volumes on the device should only be grown after the LUN itself has first been grown. Resizing should only be performed on LUNs that preserve data. Consult the array documentation to verify that data preservation is supported and has been qualified. The operation also requires that only storage at the end of the LUN is affected. Data at the beginning of the LUN must not be altered. No attempt is made to verify the validity of pre-existing data on the LUN.
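After the array administrator has grown the LUN, the VxVM disk can be resized to make the new space usable. A hedged sketch, with placeholder names and size:
# vxdisk -g mydg resize mydg03 length=8g
See the vxdisk(1M) manual page for the exact resize syntax supported by this release.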
Administering disks Removing disks You can remove a disk from a system and move it to another system if the disk is failing or has failed. To remove a disk 1 Stop all activity by applications to volumes that are configured on the disk that is to be removed. Unmount file systems and shut down databases that are configured on the volumes. 2 Use the following command to stop the volumes: # vxvol [-g diskgroup] stop vol1 vol2 ... 3 Move the volumes to other disks or back up the volumes.
122 Administering disks Removing disks Removing a disk with subdisks You can remove a disk on which some subdisks are defined. For example, you can consolidate all the volumes onto one disk. If you use the vxdiskadm program to remove a disk, you can choose to move volumes off that disk. Some subdisks are not movable. A subdisk may not be movable for one of the following reasons: ■ There is not enough space on the remaining disks in the subdisks disk group.
Removing a disk with no subdisks
To remove a disk that contains no subdisks from its disk group
◆ Run the vxdiskadm program and select Remove a disk from the main menu, and respond to the prompts as shown in this example to remove mydg02:
Enter disk name [<disk>,list,q,?] mydg02
VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg02 from group mydg.
124 Administering disks Removing and replacing disks Note: You may need to run commands that are specific to the operating system or disk array before removing a physical disk. If failures are starting to occur on a disk, but the disk has not yet failed completely, you can replace the disk. This involves detaching the failed or failing disk from its disk group, followed by replacing the failed or failing disk with a new one. Replacing the disk can be postponed until a later date if necessary.
Administering disks Removing and replacing disks 3 When you select a disk to remove for replacement, all volumes that are affected by the operation are displayed, for example: VxVM NOTICE V-5-2-371 The following volumes will lose mirrors as a result of this operation: home src No data on these volumes will be lost. The following volumes are in use, and will be disabled as a result of this operation: mkting Any applications using these volumes will fail future accesses.
126 Administering disks Removing and replacing disks 4 At the following prompt, either select the device name of the replacement disk (from the list provided), press Return to choose the default disk, or enter none if you are going to replace the physical disk: The following devices are available as replacements: c0t1d0 You can choose one of these disks now, to replace mydg02. Select none if you do not wish to select a replacement disk.
7 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.)
128 Administering disks Removing and replacing disks 3 The vxdiskadm program displays the device names of the disk devices available for use as replacement disks. Your system may use a device name that differs from the examples. Enter the device name of the disk or press Return to select the default device: The following devices are available as replacements: c0t1d0 c1t1d0 You can choose one of these disks to replace mydg02. Choose "none" to initialize another disk to replace mydg02.
6 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.)
3 At the following prompt, indicate whether you want to enable another device (y) or return to the vxdiskadm main menu (n):
Enable another device? [y,n,q,?] (default: n)
4 After using the vxdiskadm command to replace one or more failed disks in a VxVM cluster, run the following command on all the cluster nodes:
# vxdctl enable
Then run the following command on the master node:
# vxreattach -r accessname
where accessname is the disk access name (such as c0t1d0).
Administering disks Renaming a disk Renaming a disk If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type. To rename a disk ◆ Type the following command: # vxedit [-g diskgroup] rename old_diskname new_diskname By default, VxVM names subdisk objects after the VM disk on which they are located.
132 Administering disks Displaying disk information To reserve a disk ◆ Type the following command: # vxedit [-g diskgroup] set reserve=on diskname After you enter this command, the vxassist program does not allocate space from the selected disk unless that disk is specifically mentioned on the vxassist command line.
To display information on all disks that are known to VxVM
◆ Type the following command:
# vxdisk list
VxVM returns a display similar to the following:
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0       auto:hpdisk     mydg04       mydg         online
c1t0d0       auto:hpdisk     mydg03       mydg         online
c1t1d0       auto:hpdisk     -            -            online invalid
enc0_2       auto:hpdisk     mydg02       mydg         online
enc0_3       auto:hpdisk     mydg05       mydg         online
enc0_0       auto:hpdisk     -            -            online
enc0_1       auto:hpdisk     -            -            online
The phrase online invalid in the STATUS line indicates that a disk has not yet been added to VxVM control.
List disk information
Menu: VolumeManager/Disk/ListDisk
VxVM INFO V-5-2-475 Use this menu operation to display a list of disks. You can also choose to list detailed information about the disk at a specific disk device address.
Enter disk device or "all" [<address>,all,q,?] (default: all)
■ If you enter all, VxVM displays the device name, disk name, group, and status.
To set the PFTO value on the disks in a disk group, use a command of this form:
$ vxpfto -g dg_name -t 50
For example, to set the PFTO to 50 seconds on all disks in the diskgroup testdg:
$ vxpfto -g testdg -t 50
To show the PFTO value and whether PFTO is enabled or disabled for a disk, use one of the following commands:
vxprint -g diskgroup -l diskname
vxdisk -g diskgroup list diskname
The output shows the pftostate field, which indicates whether PFTO is enabled or disabled. The timeout field shows the PFTO timeout value.
Chapter 3 Administering Dynamic Multipathing
This chapter includes the following topics:
■ How DMP works
■ Disabling multipathing and making devices invisible to VxVM
■ Enabling multipathing and making devices visible to VxVM
■ Enabling and disabling I/O for controllers and storage processors
■ Displaying DMP database information
■ Displaying the paths to a disk
■ Setting customized names for DMP nodes
■ Administering DMP using vxdmpadm
How DMP works
Note: You need a full license to use this feature.
138 Administering Dynamic Multipathing How DMP works Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system. See “Discovering and configuring newly added disk devices” on page 81.
Active/Passive with LUN group failover
For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary and secondary controllers are each connected to a separate group of LUNs.
Figure 3-1 How DMP represents multiple physical paths to a disk as one node
VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs. Figure 3-2 shows an example where two paths, c1t99d0 and c2t99d0, exist to a single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.
How DMP monitors I/O on paths

In older releases of VxVM, DMP had one kernel daemon (errord) that performed error processing, and another (restored) that performed path restoration activities. From release 5.0, DMP maintains a pool of kernel threads that are used to perform such tasks as error processing, path restoration, statistics collection, and SCSI request callbacks. The vxdmpadm stat command can be used to provide information about the threads.
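For example, the following commands display the number of daemons performing error processing and path restoration (the exact output depends on your configuration):

# vxdmpadm stat errord
# vxdmpadm stat restored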
142 Administering Dynamic Multipathing How DMP works If required, the response of DMP to I/O failure on a path can be tuned for the paths to individual arrays. DMP can be configured to time out an I/O request either after a given period of time has elapsed without the request succeeding, or after a given number of retries on a path have failed. See “Configuring the response to I/O failures” on page 185.
Administering Dynamic Multipathing How DMP works DMP coexistence with HP-UX native multipathing The HP-UX 11i v3 release includes support for native multipathing, which can coexist with DMP. HP-UX native multipathing creates a persistent (agile) device in the /dev/disk and /dev/rdisk directories for each disk that can be accessed by one or more physical paths. To maintain backward compatibility, HP-UX also creates legacy devices in the /dev/dsk and /dev/rdsk directories.
If the migration involves a currently booted disk, you must reboot the system. Specifying the -r option with the vxddladm addforeign and vxddladm rmforeign commands automatically reboots the system.

See the Storage Foundation Release Notes for limitations regarding rootability support for native multipathing.
To migrate from HP-UX native multipathing to DMP

1  Stop all the volumes in each disk group on the system:

   # vxvol -g diskgroup stopall

2  Use the following commands to initiate the migration:

   # vxddladm rmforeign blockdir=/dev/disk chardir=/dev/rdisk
   # vxconfigd -kr reset

   For a migration involving a current boot disk, use:

   # vxddladm -r rmforeign blockdir=/dev/disk/ chardir=/dev/rdisk/
3  Restart all the volumes in each disk group:

   # vxvol -g diskgroup startall

   The output from the vxdisk list command now shows DMP metanode names according to the current naming scheme.
Administering Dynamic Multipathing Disabling multipathing and making devices invisible to VxVM Path failover on a single cluster node is also coordinated across the cluster so that all the nodes continue to share the same physical path. Prior to release 4.1 of VxVM, the clustering and DMP features could not handle automatic failback in A/P arrays when a path was restored, and did not support failback for explicit failover mode arrays.
To disable multipathing and make devices invisible to VxVM

1  Run the vxdiskadm command, and select Prevent multipathing/Suppress devices from VxVM’s view from the main menu. You are prompted to confirm whether you want to continue.

2  Select the operation you want to perform from the following options:

   Option 1   Suppresses all paths through the specified controller from the view of VxVM.
To enable multipathing and make devices visible to VxVM

1  Run the vxdiskadm command, and select Allow multipathing/Unsuppress devices from VxVM’s view from the main menu. You are prompted to confirm whether you want to continue.

2  Select the operation you want to perform from the following options:

   Option 1   Unsuppresses all paths through the specified controller from the view of VxVM.
array port resulted in all primary paths being disabled, DMP will fail over to active secondary paths and I/O will continue on them. After the operation is complete, you can use vxdmpadm to re-enable the paths through the controllers.

See “Disabling I/O for paths, controllers or array ports” on page 182.
See “Enabling I/O for paths, controllers or array ports” on page 183.
See “Upgrading disk controller firmware” on page 183.
To display the multipathing information on a system

◆ Use the vxdisk path command to display the relationships between the device paths, disk access names, disk media names and disk groups on a system as shown here:

# vxdisk path
SUBPATH   DANAME    DMNAME    GROUP   STATE
c1t0d0    c1t0d0    mydg01    mydg    ENABLED
c4t0d0    c1t0d0    mydg01    mydg    ENABLED
c1t1d0    c1t1d0    mydg02    mydg    ENABLED
c4t1d0    c1t1d0    mydg02    mydg    ENABLED
.
.
.
private:   slice=0 offset=128 len=1024
update:    time=962923719 seqno=0.…
Administering Dynamic Multipathing Administering DMP using vxdmpadm To specify a custom name for a DMP node ◆ Use the following command: # vxdmpadm setattr dmpnode dmpnodename name=name You can also assign names from an input file. This enables you to customize the DMP nodes on the system with meaningful names. To assign DMP nodes from a file 1 Use the script vxgetdmpnames to get a sample file populated from the devices in your configuration.
■ Configure the attributes of the paths to an enclosure.
■ Set the I/O policy that is used for the paths to an enclosure.
■ Enable or disable I/O for a path, HBA controller or array port on the system.
■ Upgrade disk controller firmware.
■ Rename an enclosure.
■ Configure how DMP responds to I/O request failures.
■ Configure the I/O throttling mechanism.
■ Control the operation of the DMP path restoration thread.
NAME     STATE    ENCLR-TYPE   PATHS  ENBL  DSBL  ENCLR-NAME
===============================================================
c2t1d0   ENABLED  ACME         2      2     0     enc0
c2t1d1   ENABLED  ACME         2      2     0     enc0
c2t1d2   ENABLED  ACME         2      2     0     enc0
c2t1d3   ENABLED  ACME         2      2     0     enc0

Use the dmpnodename attribute with getdmpnode to display the DMP information for a given DMP node.
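For example, using a DMP node name from the listing above:

# vxdmpadm getdmpnode dmpnodename=c2t1d0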
iopolicy  = MinimumQ
avid      =
lun-sno   = 600508B4000544050001700002BE0000
udid      = HP%5FHSV200%5F50001FE1500A8F00%5F600508B4000544050001700002BE0000
dev-attr  =
###path   = name state type transport ctlr hwpath aportID aportWWN attr
path      = c18t0d1 enabled(a) primary SCSI c18 0/3/1/0.0x50001fe1500a8f08 1-1 …
path      = c26t0d1 enabled(a) primary SCSI c26 0/3/1/1.0x50001fe1500a8f08 1-1 …
path      = c28t0d1 enabled(a) primary SCSI c28 0/3/1/1.…
a path name. The detailed information for the specified DMP node includes path information for each subpath of the listed DMP node.

# vxdmpadm list dmpnode dmpnodename=dmpnodename

For example, the following command displays the consolidated information for the DMP node emc_clariion0_158:

# vxdmpadm list dmpnode dmpnodename=emc_clariion0_158
Displaying paths controlled by a DMP node, controller, enclosure, or array port

The vxdmpadm getsubpaths command lists all of the paths known to DMP. The vxdmpadm getsubpaths command also provides options to list the subpaths through a particular DMP node, controller, enclosure, or array port. To list the paths through an array port, specify either a combination of enclosure name and array port ID, or the array port WWN.
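For example, to list the subpaths through a particular DMP node or controller (the node and controller names are illustrative):

# vxdmpadm getsubpaths dmpnodename=c2t66d0
# vxdmpadm getsubpaths ctlr=c2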
You can also use getsubpaths to obtain information about all the paths that are connected to a port on an array.
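A sketch of the array-port forms, assuming an enclosure named enc0 with array port ID 1A (check the vxdmpadm(1M) manual page for the exact attribute names on your release):

# vxdmpadm getsubpaths enclr=enc0 portid=1A
# vxdmpadm getsubpaths pwwn=20:00:00:E0:8B:06:5F:19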
Displaying information about controllers

The following command lists attributes of all HBA controllers on the system:

# vxdmpadm listctlr all

CTLR-NAME  ENCLR-TYPE  STATE    ENCLR-NAME
===============================================================
c1         OTHER       ENABLED  other0
c2         X1          ENABLED  jbod0
c3         ACME        ENABLED  enc0
c4         ACME        ENABLED  enc0

This output shows that the controller c1 is connected to disks that are not in any recognized DMP category, as the enclosure type is OTHER.
# vxdmpadm getctlr c5

LNAME  PNAME  HBA-VENDOR  CTLR-ID
===========================================================================
c5     c5     qlogic      20:07:00:a0:b8:17:e1:37
c6     c6     qlogic      iqn.1986-03.com.sun:01:0003ba8ed1b5.…
The following example shows information about the array port that is accessible via DMP node c2t66d0:

# vxdmpadm getportids dmpnodename=c2t66d0

NAME     ENCLR-NAME  ARRAY-PORT-ID  pWWN
==============================================================
c2t66d0  HDS9500V0   1A             20:00:00:E0:8B:06:5F:19

Displaying information about TPD-controlled devices

The third-party driver (TPD) coexistence feature allows I/O that is controlled by third-party multipathing drivers to bypass DMP while retaining the monitoring capabilities of DMP.
===================================================================
c7t0d10   emcpower10s2   emcpower10   EMC   EMC0
c6t0d10   emcpower10s2   emcpower10   EMC   EMC0

Conversely, the next command displays information about the PowerPath node that corresponds to the path, c7t0d10, discovered by DMP:

# vxdmpadm gettpdnode nodename=c7t0d10

NAME          STATE    PATHS  ENCLR-TYPE  ENCLR-NAME
===================================================================
emcpower10s2  ENABLED  2      EMC         EMC0
ASLs furnish this information to DDL through the property DDL_DEVICE_ATTR. The vxdisk -x attribute -p list command displays a one-line listing for the property list and the attributes. You can specify multiple -x options in the same command to display multiple entries.
BCV-NR    BCV device in a Not Ready state.
MIRROR    This is an EMC mirror device.
SRDF-R1   Primary/source device involved in an SRDF operation.
SRDF-R2   Secondary/target device involved in an SRDF operation.

b. DS8k: The following attributes are recognized by DS8000 arrays:

STD        This is the standard device, and is not involved in any special operation.
FlashCopy  The disk is involved in a point-in-time copy operation, and is the target.

c. …
166 Administering Dynamic Multipathing Administering DMP using vxdmpadm Suppressing or including devices for VxVM or DMP control The vxdmpadm exclude command suppresses devices from VxVM based on the criteria that you specify. The devices can be added back into VxVM control by using the vxdmpadm include command. The devices can be included or excluded based on VID:PID combination, paths, controllers, or disks.
To display the accumulated statistics at regular intervals, use the following command:

# vxdmpadm iostat show {all | dmpnodename=dmp-node | \
  enclosure=enclr-name | pathname=path-name | ctlr=ctlr-name} \
  [interval=seconds [count=N]]

This command displays I/O statistics for all paths (all), or for a specified DMP node, enclosure, path or controller.
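Before statistics can be displayed, gathering must be enabled; it can subsequently be reset or stopped. The command forms are:

# vxdmpadm iostat start [memory=size]
# vxdmpadm iostat reset
# vxdmpadm iostat stop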
            OPERATIONS        BYTES            AVG TIME(ms)
PATHNAME    READS  WRITES     READS   WRITES   READS  WRITES
c3t102d0    0      0          0       0        0.00   0.00
c2t121d0    87     0          44544   0        0.00   0.00
c3t121d0    0      0          0       0        0.00   0.00
c2t112d0    87     0          44544   0        0.00   0.00
c3t112d0    0      0          0       0        0.00   0.00
c2t96d0     87     0          44544   0        0.00   0.00
c3t96d0     0      0          0       0        0.00   0.00
c2t106d0    87     0          44544   0        0.00   0.00
c3t106d0    0      0          0       0        0.00   0.00
c2t113d0    87     0          44544   0        0.00   0.00
c3t113d0    0      0          0       0        0.00   0.00
c2t119d0    87     0          44544   0        0.00   0.00
c3t119d0    0      0          0       0        0.00   0.00
# vxdmpadm iostat show pathname=c3t115d0 interval=2 count=2

            cpu usage = 8195us    per cpu memory = 4096b
            OPERATIONS        BYTES          AVG TIME(ms)
PATHNAME    READS  WRITES    READS  WRITES   READS  WRITES
c3t115d0    0      0         0      0        0.00   0.00

            cpu usage = 59us      per cpu memory = 4096b
            OPERATIONS        BYTES          AVG TIME(ms)
PATHNAME    READS  WRITES    READS  WRITES   READS  WRITES
c3t115d0    0      0         0      0        0.00   0.00
Displaying cumulative I/O statistics

Use the groupby clause of the vxdmpadm iostat command to display cumulative I/O statistics listings per DMP node, controller, array port ID, or host-array controller pair and enclosure. If the groupby clause is not specified, then the statistics are displayed per path.
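For example, to group the statistics by controller (the groupby values correspond to the entities named above; see the vxdmpadm(1M) manual page for the full list):

# vxdmpadm iostat show groupby=ctlr all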
You can also filter out entities for which all data entries are zero. This option is especially useful in a cluster environment that contains many failover devices. You can display only the statistics for the active paths.
Setting the attributes of the paths to an enclosure

You can use the vxdmpadm setattr command to set the attributes of the paths to an enclosure or disk array. The attributes set for the paths are persistent and are stored in the file /etc/vx/dmppolicy.info.

You can set the following attributes:

active     Changes a standby (failover) path to an active path.
primary    Defines a path as being the primary path for an Active/Passive disk array. The following example specifies a primary path for an A/P disk array:

           # vxdmpadm setattr path c3t10d0 \
             pathtype=primary

secondary  Defines a path as being the secondary path for an Active/Passive disk array.
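By analogy with the primary example, a secondary path would be set as follows (the path name is illustrative):

# vxdmpadm setattr path c4t10d0 \
  pathtype=secondary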
To display the minimum redundancy level for a particular device, use the vxdmpadm getattr command, as follows:

# vxdmpadm getattr enclosure|arrayname|arraytype component-name redundancy

For example, to show the minimum redundancy level for the enclosure HDS9500-ALUA0:

# vxdmpadm getattr enclosure HDS9500-ALUA0 redundancy

ENCLR_NAME      DEFAULT  CURRENT
=============================================
HDS9500-ALUA0   0        4

Specifying the minimum number of active paths
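The minimum number of active paths is set with the same redundancy attribute, this time via vxdmpadm setattr; a minimal sketch, reusing the enclosure name from the previous example:

# vxdmpadm setattr enclosure HDS9500-ALUA0 redundancy=2

After this command, the CURRENT column of the getattr output above would be expected to show 2.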
The following example displays the default and current setting of iopolicy for JBOD disks:

# vxdmpadm getattr enclosure Disk iopolicy

ENCLR_NAME  DEFAULT   CURRENT
---------------------------------------
Disk        MinimumQ  Balanced

The next example displays the setting of partitionsize for the enclosure enc0, on which the balanced I/O policy with a partition size of 2MB has been set:

# vxdmpadm getattr enclosure enc0 partitionsize

ENCLR_NAME  DEFAULT  CURRENT
---------------------------------------
enc0        512      4096
adaptive   This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time. For example, I/O from/to a database may exhibit both long transfers (table scans) and short transfers (random look-ups). The policy is also useful for a SAN environment where different paths may have different numbers of hops.
balanced [partitionsize=size]   This policy is designed to optimize the use of caching in disk drives and RAID controllers. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware. During normal operation, the disks (or LUNs) are logically divided into a number of regions (or partitions), and I/O from/to a given region is sent on only one of the active paths.
178 Administering Dynamic Multipathing Administering DMP using vxdmpadm priority This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually. You can assign priorities to each path based on your knowledge of the configuration and performance characteristics of the available paths, and of other aspects of your system. See “Setting the attributes of the paths to an enclosure” on page 172.
Administering Dynamic Multipathing Administering DMP using vxdmpadm characteristics of the array, the consequent improved load balancing can increase the total I/O throughput. However, this feature should only be enabled if recommended by the array vendor. It has no effect for array types other than A/A-A.
c3t2d15   state=enabled   type=primary
c4t2d15   state=enabled   type=primary
c4t3d15   state=enabled   type=primary
c5t3d15   state=enabled   type=primary
c5t4d15   state=enabled   type=primary

In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.
This shows that the policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.
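To spread the I/O across all active paths, the I/O policy could be changed; a sketch, assuming the enclosure ENC0 from the example above:

# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin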
Disabling I/O for paths, controllers or array ports

Disabling I/O through a path, HBA controller or array port prevents DMP from issuing I/O requests through the specified path, or the paths that are connected to the specified controller or array port. The command blocks until all pending I/O requests issued through the paths are completed.

Note: From release 5.0 …
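For example, to disable I/O on the paths through a controller or through an array port (the names are illustrative; the enclosure-and-port form follows the conventions used elsewhere in this chapter):

# vxdmpadm disable ctlr=c2
# vxdmpadm disable enclr=enc0 portid=1A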
Enabling I/O for paths, controllers or array ports

Enabling a controller allows a previously disabled path, HBA controller or array port to accept I/O again. This operation succeeds only if the path, controller or array port is accessible to the host, and I/O can be performed on it. When connecting Active/Passive disk arrays, the enable operation results in failback of I/O to the primary path.
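The corresponding enable forms mirror the disable examples above:

# vxdmpadm enable ctlr=c2
# vxdmpadm enable enclr=enc0 portid=1A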
First obtain the appropriate firmware upgrades from your disk drive vendor. You can usually download the appropriate files and documentation from the vendor’s support website.

To upgrade the disk controller firmware

1  Disable the plex that is associated with the disk device:

   # /opt/VRTS/bin/vxplex -g diskgroup det plex

   (The example assumes a volume that is mirrored across two controllers on one HBA.)
# vxdmpadm listenclosure all

ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO             STATUS
============================================================
other0      OTHER       OTHER_DISKS           CONNECTED
jbod0       X1          X1_DISKS              CONNECTED
GRP1        ACME        60020f20000001a90000  CONNECTED

Configuring the response to I/O failures

You can configure how DMP responds to failed I/O requests on the paths to a specified enclosure, disk array name, or type of array.
186 Administering Dynamic Multipathing Administering DMP using vxdmpadm The default value of iotimeout is 10 seconds. For some applications, such as Oracle, it may be desirable to set iotimeout to a larger value, such as 60 seconds. Note: The fixedretry and timebound settings are mutually exclusive.
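For example, either of the following commands could be used; they are alternatives, since the two settings are mutually exclusive (the enclosure name enc0 is illustrative):

# vxdmpadm setattr enclosure enc0 recoveryoption=timebound iotimeout=60
# vxdmpadm setattr enclosure enc0 recoveryoption=fixedretry retrycount=5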
Administering Dynamic Multipathing Administering DMP using vxdmpadm # vxdmpadm setattr \ {enclosure enc-name|arrayname name|arraytype type} \ recoveryoption=nothrottle The following example shows how to disable I/O throttling for the paths to the enclosure enc0: # vxdmpadm setattr enclosure enc0 recoveryoption=nothrottle The vxdmpadm setattr command can be used to enable I/O throttling on the paths to a specified enclosure, disk array name, or type of array: # vxdmpadm setattr \ {enclosure enc-name|array
188 Administering Dynamic Multipathing Administering DMP using vxdmpadm # vxdmpadm setattr arraytype A/A recoveryoption=default The above command configures the default behavior, corresponding to recoveryoption=nothrottle. The above command also configures the default behavior for the response to I/O failures. See “Configuring the response to I/O failures” on page 185. Note: The I/O throttling settings are persistent across reboots of the system.
Table 3-2  Recovery options for I/O throttling

Recovery option             Possible settings         Description
recoveryoption=nothrottle   None                      I/O throttling is not used.
recoveryoption=throttle     Queuedepth (queuedepth)   DMP throttles the path if the specified number of queued I/O requests is exceeded.
recoveryoption=throttle     Timebound (iotimeout)     DMP throttles the path if an I/O request does not return within the specified time in seconds.
check_all if there are only two paths per DMP node. The command to configure this policy is:

# vxdmpadm start restore [interval=seconds] \
  policy=check_alternate

■ check_disabled

This is the default path restoration policy. The path restoration thread checks the condition of paths that were previously disabled due to hardware failures, and revives them if they are back online.
are used. The system default restore policy is check_disabled. The system default interval is 300 seconds.

Warning: Decreasing the interval below the system default can adversely affect system performance.

Stopping the DMP path restoration thread

Use the following command to stop the DMP path restoration thread:

# vxdmpadm stop restore

Warning: Automatic path failback stops if the path restoration thread is stopped.
■ Select the path failover mechanism.
■ Select the alternate path in the case of a path failure.
■ Put a path change into effect.
■ Respond to SCSI reservation or release requests.

DMP supplies default procedures for these functions when an array is registered. An APM may modify some or all of the existing procedures that are provided by DMP or by another version of the APM.
Chapter 4

Creating and administering disk groups

This chapter includes the following topics:

■ About disk groups
■ Displaying disk group information
■ Creating a disk group
■ Adding a disk to a disk group
■ Removing a disk from a disk group
■ Deporting a disk group
■ Importing a disk group
■ Handling cloned disks with duplicated identifiers
■ Renaming a disk group
■ Moving disks between disk groups
■ Moving disk groups between systems
■ Handling conflicting configuration copies
■ Reorganizing the contents of disk groups
■ Managing the configuration daemon in VxVM
■ Backing up and restoring disk group configuration data
■ Using vxnotify to monitor configuration changes

About disk groups

Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group.
Creating and administering disk groups About disk groups Having disk groups that contain many disks and VxVM objects causes the private region to fill. If you have large disk groups that are expected to contain more than several hundred disks and VxVM objects, you should set up disks with larger private areas. A major portion of a private region provides space for a disk group configuration database that contains records for each VxVM object in that disk group.
196 Creating and administering disk groups About disk groups # vxassist -g mktdg make mktvol 5g The block special device that corresponds to this volume is /dev/vx/dsk/mktdg/mktvol. System-wide reserved disk groups The following disk group names are reserved, and cannot be used to name any disk groups that you create: bootdg Specifies the boot disk group. This is an alias for the disk group that contains the volumes that are used to boot the system.
Creating and administering disk groups About disk groups ■ Use the disk group that has been assigned to the system-wide default disk group alias, defaultdg. If this alias is undefined, the following rule is applied. See “Displaying and specifying the system-wide default disk group” on page 197. ■ If the operation can be performed without requiring a disk group name (for example, an edit operation on disk access records), do so. If none of these rules succeeds, the requested operation fails.
The specified disk group is not required to exist on the system.

See the vxdctl(1M) manual page.
See the vxdg(1M) manual page.

Displaying disk group information

To display information on existing disk groups, enter the following command:

# vxdg list

NAME      STATE    ID
rootdg    enabled  730344554.1025.tweety
newdg     enabled  731118794.1213.tweety
This command provides output that includes the following information for the specified disk. For example, output for disk c0t12d0 is as follows:

Disk:    c0t12d0
type:    simple
flags:   online ready private autoconfig autoimport imported
diskid:  963504891.1070.bass
dgname:  newdg
dgid:    963504895.1075.bass
hostid:  …
info:    …
Creating a disk group

You must associate a disk group with at least one disk. You can create a new disk group when you select Add or initialize one or more disks from the main menu of the vxdiskadm command to add disks to VxVM control. The disks to be added to a disk group must not belong to an existing disk group.
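A disk group can also be created from the command line with vxdg init, following the form used elsewhere in this chapter; for example (the disk group, disk and device names are illustrative):

# vxdg init mktdg mktdg01=c1t0d0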
You can also use the vxdiskadd command to add a disk to a disk group. Enter the following:

# vxdiskadd c1t1d0

where c1t1d0 is the device name of a disk that is not currently assigned to a disk group. The command dialog is similar to that described for the vxdiskadm command.

See “Adding a disk to VxVM” on page 107.
202 Creating and administering disk groups Deporting a disk group # vxdiskunsetup devicename For example, to remove the disk c1t0d0 from VxVM control, enter the following: # vxdiskunsetup c1t0d0 You can remove a disk on which some subdisks of volumes are defined. For example, you can consolidate all the volumes onto one disk. If you use vxdiskadm to remove a disk, you can choose to move volumes off that disk. To do this, run vxdiskadm and select Remove a disk from the main menu.
Creating and administering disk groups Deporting a disk group To deport a disk group 1 Stop all activity by applications to volumes that are configured in the disk group that is to be deported. Unmount file systems and shut down databases that are configured on the volumes. If the disk group contains volumes that are in use (for example, by mounted file systems or databases), deportation fails.
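Once all volumes have been stopped, the disk group can be deported from the command line; a minimal example, assuming the disk group mydg:

# vxdg deport mydg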
204 Creating and administering disk groups Importing a disk group Importing a disk group Importing a disk group enables access by the system to a disk group. To move a disk group from one system to another, first disable (deport) the disk group on the original system, and then move the disk between systems and enable (import) the disk group.
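For example, to import the disk group mydg and then restart its volumes (the vxrecover step follows the convention used later in this chapter for starting volumes after an import):

# vxdg import mydg
# vxrecover -g mydg -sb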
Creating and administering disk groups Handling cloned disks with duplicated identifiers Advanced disk arrays provide hardware tools that you can use to create clones of existing disks outside the control of VxVM. For example, these disks may have been created as hardware snapshots or mirrors of existing disks in a disk group. As a result, the VxVM private region is also duplicated on the cloned disk.
206 Creating and administering disk groups Handling cloned disks with duplicated identifiers Writing a new UDID to a disk You can use the following command to update the unique disk identifier (UDID) for one or more disks. This is useful when building a new LUN from space previously used by a deleted LUN, for example. # vxdisk [-f] [-g diskgroup ] updateudid disk ...
Alternatively, you can update the UDIDs of the cloned disks.

See “Writing a new UDID to a disk” on page 206.

To check which disks are tagged, use the vxdisk listtag command:

# vxdisk listtag
DANAME
c0t06d0
c0t16d0
.
.
.
208 Creating and administering disk groups Handling cloned disks with duplicated identifiers # vxdg -o useclonedev=on -o tag=my_tagged_disks import mydg If you have already imported the non-cloned disks in a disk group, you can use the -n and -t option to specify a temporary name for the disk group containing the cloned disks: # vxdg -t -n clonedg -o useclonedev=on -o tag=my_tagged_disks \ import mydg See “Renaming a disk group” on page 213.
Creating and administering disk groups Handling cloned disks with duplicated identifiers The following command ensures that configuration database copies and kernel log copies are maintained for all disks in the disk group mydg that are tagged as t1: # vxdg -g mydg set tagmeta=on tag=t1 nconfig=all nlog=all The disks for which such metadata is maintained can be seen by using this command: # vxdisk -o alldgs list DEVICE TagmaStore-USP0_10 TagmaStore-USP0_24 TagmaStore-USP0_25 TagmaStore-USP0_26 TagmaStore-
210 Creating and administering disk groups Handling cloned disks with duplicated identifiers TagmaStore-USP0_30 auto:cdsdisk TagmaStore-USP0_31 auto:cdsdisk TagmaStore-USP0_32 auto:cdsdisk mydg01 (mydg) (mydg) mydg online udid_mismatch online udid_mismatch online To import the cloned disks, they must be assigned a new disk group name, and their UDIDs must be updated: # vxdg -n snapdg -o useclonedev=on -o updateid import mydg # vxdisk -o alldgs list DEVICE TagmaStore-USP0_3 TagmaStore-USP0_23 TagmaStore
Creating and administering disk groups Handling cloned disks with duplicated identifiers TagmaStore-USP0_31 auto:cdsdisk mydg01 mydg TagmaStore-USP0_32 auto:cdsdisk (mydg) online clone_disk online In the next example, a cloned disk (BCV device) from an EMC Symmetrix DMX array is to be imported.
212 Creating and administering disk groups Handling cloned disks with duplicated identifiers EMC0_4 EMC0_6 EMC0_8 EMC0_15 EMC0_18 EMC0_24 auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk mydg01 mydg mydg02 mydg (mydg) (mydg) mydg03 mydg (mydg) online online online udid_mismatch online udid_mismatch online online udid_mismatch The disks are tagged as follows: # vxdisk listtag DEVICE EMC0_4 EMC0_4 EMC0_6 EMC0_8 EMC0_15 EMC0_18 EMC0_24 EMC0_24 NAME t2 t1 t2 t1 t2 t1 t1 t2 V
Creating and administering disk groups Renaming a disk group DEVICE EMC0_4 EMC0_6 EMC0_8 EMC0_15 EMC0_18 EMC0_24 TYPE auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk auto:cdsdisk DISK GROUP mydg01 mydg mydg02 mydg mydg03 bcvdg (mydg) mydg03 mydg mydg01 bcvdg STATUS online online online online udid_mismatch online online clone_disk In the next example, none of the disks (neither cloned nor non-cloned) have been imported: # vxdisk -o alldgs list DEVICE EMC0_4 EMC0_6 EMC0_8 EMC0_15 EMC0_1
214 Creating and administering disk groups Renaming a disk group To rename a disk group during import, use the following command: # vxdg [-t] -n newdg import diskgroup If the -t option is included, the import is temporary and does not persist across reboots. In this case, the stored name of the disk group remains unchanged on its original host, but the disk group is known by the name specified by newdg to the importing host. If the -t option is not used, the name change is permanent.
Creating and administering disk groups Renaming a disk group To temporarily move the boot disk group, bootdg, from one host to another (for repair work on the root volume, for example) and then move it back 1 On the original host, identify the disk group ID of the bootdg disk group to be imported with the following command: # vxdisk -g bootdg -s list dgname: rootdg dgid: 774226267.1025.tweety In this example, the administrator has chosen to name the boot disk group as rootdg.
216 Creating and administering disk groups Moving disks between disk groups Moving disks between disk groups To move a disk between disk groups, remove the disk from one disk group and add it to the other.
Creating and administering disk groups Moving disk groups between systems 4 Import (enable local access to) the disk group on the target system with this command: # vxdg import diskgroup Warning: All disks in the disk group must be moved to the other system. If they are not moved, the import fails. 5 After the disk group is imported, start all volumes in the disk group with this command: # vxrecover -g diskgroup -sb You can also move disks from a system that has crashed.
218 Creating and administering disk groups Moving disk groups between systems # vxdisk clearimport devicename ... To clear the locks during import, use the following command: # vxdg -C import diskgroup Warning: Be careful when using the vxdisk clearimport or vxdg -C import command on systems that see the same disks via a SAN. Clearing the locks allows those disks to be accessed at the same time from multiple hosts and can result in corrupted data.
Creating and administering disk groups Moving disk groups between systems from the main menu. To import a disk group, select Enable access to (import) a disk group. The vxdiskadm import operation checks for host import locks and prompts to see if you want to clear any that are found. It also starts volumes in the disk group. Reserving minor numbers for disk groups A device minor number uniquely identifies some characteristic of a device to the device driver that controls that device.
220 Creating and administering disk groups Moving disk groups between systems # vxprint -l mydg | grep minors minors: >=45000 # vxprint -g mydg -m | egrep base_minor base_minor=45000 To set a base volume device minor number for a disk group that is being created, use the following command: # vxdg init diskgroup minor=base_minor disk_access_name ...
Creating and administering disk groups Handling conflicting configuration copies On a Linux platform with a pre-2.6 kernel, the number of minor numbers per major number is limited to 256 with a base of 0. This has the effect of limiting the number of volumes and disks that can be supported system-wide to a smaller value than is allowed on other operating system platforms. The number of disks that are supported by a pre-2.6 Linux kernel is typically limited to a few hundred.
222 Creating and administering disk groups Handling conflicting configuration copies to resolve manually. This section and following sections describe how such a condition can occur, and how to correct it. (When the condition occurs in a cluster that has been split, it is usually referred to as a serial split brain condition). Example of a serial split brain condition in a cluster This section presents an example of how a serial split brain condition might occur for a shared disk group in a cluster.
Creating and administering disk groups Handling conflicting configuration copies The fibre channel connectivity is multiply redundant to implement redundant-loop access between each node and each enclosure. As usual, the two nodes are also linked by a redundant private network. A serial split brain condition typically arises in a cluster when a private (non-shared) disk group is imported on Node 0 with Node 1 configured as the failover node.
Figure 4-2  Example of a serial split brain condition that can be resolved automatically

[The figure shows a partial disk group imported on host X, with disk A imported and disk B not imported. Disk A's configuration database records the actual serial IDs disk A = 1 and disk B = 0, and the expected values A = 1, B = 0; disk B's configuration database still records the expected values A = 0, B = 0.]

1. Disk A is imported on a separate host. Disk B is not imported.
Figure 4-3  Example of a true serial split brain condition that cannot be resolved automatically

[The figure shows partial disk groups imported on hosts X and Y: disk A (actual serial ID 1) has a configuration database expecting A = 1, B = 0, while disk B (actual serial ID 1) has a configuration database expecting A = 0, B = 1.]

1. Disks A and B are imported independently on separate hosts.
226 Creating and administering disk groups Handling conflicting configuration copies Correcting conflicting configuration information To resolve conflicting configuration information, you must decide which disk contains the correct version of the disk group configuration database. To assist you in doing this, you can run the vxsplitlines command to show the actual serial ID on each disk in the disk group and the serial ID that was expected from the configuration database.
Creating and administering disk groups Reorganizing the contents of disk groups 227 You can specify the -c option to vxsplitlines to print detailed information about each of the disk IDs from the configuration copy on a disk specified by its disk access name: # vxsplitlines DANAME(DMNAME) c2t5d0( c2t5d0 c2t6d0( c2t6d0 c2t7d0( c2t7d0 c2t8d0( c2t8d0 -g newdg -c c2t6d0 || Actual SSB ) || 0.1 ) || 0.1 ) || 0.1 ) || 0.1 || || || || || Expected SSB 0.0 ssb ids don’t match 0.1 ssb ids match 0.
228 Creating and administering disk groups Reorganizing the contents of disk groups ■ To reduce the size of a disk group’s configuration database in the event that its private region is nearly full. This is a much simpler solution than the alternative of trying to grow the private region. ■ To perform online maintenance and upgrading of fault-tolerant systems that can be split into separate hosts for this purpose, and then rejoined. Use the vxdg command to reorganize your disk groups.
Creating and administering disk groups Reorganizing the contents of disk groups Figure 4-5 Disk group split operation Source disk group Disks to be split into new disk group Source disk group ■ After split New target disk group join removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. Figure 4-6 shows the join operation.
230 Creating and administering disk groups Reorganizing the contents of disk groups Figure 4-6 Disk group join operation Source disk group Target disk group Join After join Target disk group These operations are performed on VxVM objects such as disks or top-level volumes, and include all component objects such as sub-volumes, plexes and subdisks.
You can specify the -c option to vxsplitlines to print detailed information about each of the disk IDs from the configuration copy on a disk specified by its disk access name:

# vxsplitlines -g newdg -c c2t6d0

DANAME(DMNAME)    || Actual SSB || Expected SSB
c2t5d0( c2t5d0 )  || 0.1        || 0.0   ssb ids don’t match
c2t6d0( c2t6d0 )  || 0.1        || 0.1   ssb ids match
c2t7d0( c2t7d0 )  || 0.1        || 0.…
c2t8d0( c2t8d0 )  || 0.1        || …
232 Creating and administering disk groups Reorganizing the contents of disk groups See the Veritas Storage Foundation Intelligent Storage Provisioning Administrator’s Guide. ■ If a cache object or volume set that is to be split or moved uses ISP volumes, the storage pool that contains these volumes must also be specified.
Creating and administering disk groups Reorganizing the contents of disk groups automatically placed on different disks from the data plexes of the parent volume. In previous releases, version 0 DCO plexes were placed on the same disks as the data plexes for convenience when performing disk group split and move operations. As version 20 DCOs support dirty region logging (DRL) in addition to Persistent FastResync, it is preferable for the DCO plexes to be separated from the data plexes.
234 Creating and administering disk groups Reorganizing the contents of disk groups Examples of disk groups that can and cannot be split Figure 4-7 Volume data plexes Snapshot plex The disk group can be split as the DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data Split Volume DCO plexes Snapshot DCO plex Volume data plexes Snapshot plex The disk group cannot be split as the DCO plexes cannot accompany their volumes.
Creating and administering disk groups Reorganizing the contents of disk groups # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ... The -o expand option ensures that the objects that are actually moved include all other disks containing subdisks that are associated with the specified objects or with objects that they contain. The default behavior of vxdg when moving licensed disks in an EMC array is to perform an EMC disk compatibility check for each disk involved in the move.
TY  NAME       ASSOC     KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg  mydg       mydg      -        -         -       -       -       -
dm  mydg01     c0t1d0    -        17678493  -       -       -       -
dm  mydg05     c1t96d0   -        17678493  -       -       -       -
dm  mydg07     c1t99d0   -        17678493  -       -       -       -
dm  mydg08     c1t100d0  -        17678493  -       -       -       -
v   vol1       fsgen     ENABLED  2048      -       ACTIVE  -       -
pl  vol1-01    vol1      ENABLED  3591      -       ACTIVE  -       -
sd  mydg01-01  vol1-01   ENABLED  3591      0       -       -       -
pl  vol1-02    vol1      ENABLED  3591      -       ACTIVE  -       -
sd  mydg05-01  vol1-02   ENABLED  3591      0       -       -       -

The following command moves the self-contained set of objects implied by specifying disk mydg01:

# vxdg -o expand move mydg rootdg mydg01
TY  NAME    ASSOC     KSTATE  LENGTH    PLOFFS  STATE  TUTIL0  PUTIL0
dg  mydg    mydg      -       -         -       -      -       -
dm  mydg07  c1t99d0   -       17678493  -       -      -       -
dm  mydg08  c1t100d0  -       17678493  -       -      -       -

The following commands would also achieve the same result:

# vxdg move mydg rootdg mydg01 mydg05
# vxdg move mydg rootdg vol1

See “Moving objects between shared disk groups” on page 481.
pl  vol1-02      vol1     ENABLED  3591  -  ACTIVE  -  -
sd  rootdg05-01  vol1-02  ENABLED  3591  0  -       -  -

The following command removes disks rootdg07 and rootdg08 from rootdg to form a new disk group, mydg:

# vxdg -o expand split rootdg mydg rootdg07 rootdg08

The moved volumes are initially disabled following the split.
# vxdg [-o override|verify] join sourcedg targetdg

See “Moving objects between disk groups” on page 234.

Note: You cannot specify rootdg as the source disk group for a join operation.

The following output from vxprint shows the contents of the disk groups rootdg and mydg. The output includes two utility fields, TUTIL0 and PUTIL0.
240 Creating and administering disk groups Disabling a disk group # vxrecover -g targetdg -m [volume ...
Creating and administering disk groups Upgrading a disk group Warning: This command destroys all data on the disks. When a disk group is destroyed, the disks that are released can be re-used in other disk groups. Recovering a destroyed disk group If a disk group has been accidentally destroyed, you can recover it, provided that the disks that were in the disk group have not been modified or reused elsewhere.
242 Creating and administering disk groups Upgrading a disk group Until the disk group is upgraded, it may still be deported back to the release from which it was imported. Until completion of the upgrade, the disk group can be used “as is” provided there is no attempt to use the features of the current version. There is no "downgrade" facility.
Creating and administering disk groups Upgrading a disk group Table 4-1 Disk group version assignments (continued) VxVM release Introduces disk group version Supports disk group versions 4.0 110 20-110 4.1 120 20-120 5.0 140 20-140 Importing the disk group of a previous version on a Veritas Volume Manager system prevents the use of features introduced since that version was released. Table 4-2 summarizes the features that are supported by disk group versions 20 through 140.
244 Creating and administering disk groups Upgrading a disk group Table 4-2 Features supported by disk group versions (continued) Disk group version New features supported 110 ■ ■ ■ ■ ■ ■ ■ 90 Previous version features supported Cross-platform Data Sharing 20, 30, 40, 50, 60, 70, 80, 90 (CDS) Device Discovery Layer (DDL) 2.
Creating and administering disk groups Upgrading a disk group Table 4-2 Features supported by disk group versions (continued) Disk group version New features supported Previous version features supported 50 ■ SRVM (now known as Veritas 20, 30, 40 Volume Replicator or VVR) 40 ■ Hot-Relocation 20, 30 30 ■ VxSmartSync Recovery Accelerator 20 20 ■ Dirty Region Logging (DRL) Disk Group Configuration Copy Limiting ■ Mirrored Volumes Logging ■ ■ New-Style Stripes ■ RAID-5 Volumes ■ Recove
246 Creating and administering disk groups Managing the configuration daemon in VxVM For example, to create a disk group with version 120 that can be imported by a system running VxVM 4.1, use the following command: # vxdg -T 120 init newdg newdg01=c0t3d0 This creates a disk group, newdg, which can be imported by Veritas Volume Manager 4.1. Note that while this disk group can be imported on the VxVM 4.1 system, attempts to use features from Veritas Volume Manager 5.0 and later releases will fail.
Creating and administering disk groups Backing up and restoring disk group configuration data ■ Update the DMP database with changes in path type for active/passive disk arrays. Use the utilities provided by the disk-array vendor to change the path type between primary and secondary. See the vxdctl(1M) manual page.
248 Creating and administering disk groups Using vxnotify to monitor configuration changes # vxnotify -s -I See the vxnotify(1M) manual page.
Chapter 5

Creating and administering subdisks

This chapter includes the following topics:

■ About subdisks
■ Creating subdisks
■ Displaying subdisk information
■ Moving subdisks
■ Splitting subdisks
■ Joining subdisks
■ Associating subdisks with plexes
■ Associating log subdisks
■ Dissociating subdisks from plexes
■ Removing subdisks
■ Changing subdisk attributes

About subdisks

Subdisks are the low-level building blocks in a Veritas Volume Manager (VxVM) configuration that are required to create plexes, which in turn are used to form volumes.
250 Creating and administering subdisks Creating subdisks Note: Most VxVM commands require superuser or equivalent privileges. Creating subdisks Use the vxmake command to create VxVM objects, such as subdisks: # vxmake [-g diskgroup] sd subdisk diskname,offset,length where subdisk is the name of the subdisk, diskname is the disk name, offset is the starting point (offset) of the subdisk within the disk, and length is the length of the subdisk.
Creating and administering subdisks Moving subdisks sd mydg01-01 vol1-01 mydg01 0 sd mydg02-01 vol2-01 mydg02 0 102400 0 102400 0 c2t0d1 ENA c2t1d1 ENA You can display complete information about a particular subdisk by using this command: # vxprint [-g diskgroup] -l subdisk For example, the following command displays all information for subdisk mydg02-01 in the disk group, mydg: # vxprint -g mydg -l mydg02-01 This command provides the following output: Disk group: mydg Subdisk: info: assoc: flags: dev
Subdisks can also be moved manually after hot-relocation.

See “Moving relocated subdisks” on page 443.

Splitting subdisks

Splitting a subdisk divides an existing subdisk into two separate subdisks.
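The split is performed with the vxsd split operation; a minimal sketch, assuming the subdisk mydg03-02 in the disk group mydg, where the -s option gives the size of the first of the two resulting subdisks (in units accepted by vxsd):

# vxsd -g mydg -s 1000 split mydg03-02 mydg03-02 mydg03-03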
# vxsd -g mydg join mydg03-02 mydg03-03 mydg03-04 mydg03-05 \
  mydg03-02

Associating subdisks with plexes

Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex. The entire area that the subdisk fills must not be occupied by any portion of another subdisk.
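The basic association is made with the vxsd assoc operation; a sketch, assuming the plex vol01-01 and subdisk mydg01-01:

# vxsd -g mydg assoc vol01-01 mydg01-01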
254 Creating and administering subdisks Associating subdisks with plexes volume, and subsequently want to make the plex complete. To complete the plex, create a subdisk of a size that fits the hole in the sparse plex exactly.
Creating and administering subdisks Associating log subdisks Associating log subdisks Log subdisks are defined and added to a plex that is to become part of a volume on which dirty region logging (DRL) is enabled. DRL is enabled for a volume when the volume is mirrored and has at least one log subdisk. Warning: Only one log subdisk can be associated with a plex. Because this log subdisk is frequently written, care should be taken to position it on a disk that is not heavily used.
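A log subdisk is attached to a plex with the vxsd aslog operation; a sketch, assuming the plex vol01-02 and subdisk mydg02-01:

# vxsd -g mydg aslog vol01-02 mydg02-01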
256 Creating and administering subdisks Dissociating subdisks from plexes Dissociating subdisks from plexes To break an established connection between a subdisk and the plex to which it belongs, the subdisk is dissociated from the plex. A subdisk is dissociated when the subdisk is removed or used in another plex.
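The dissociation itself is performed with the vxsd dis operation; for example, assuming the subdisk mydg02-01 in the disk group mydg:

# vxsd -g mydg dis mydg02-01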
Creating and administering subdisks Changing subdisk attributes The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command: # vxedit [-g diskgroup] set attribute=value ... subdisk ... The subdisk fields you can change with the vxedit command include the following: name Subdisk name. putiln Persistent utility field(s) used to manage objects and communication between different commands and Symantec products.
258 Creating and administering subdisks Changing subdisk attributes To prevent a particular subdisk from being associated with a plex, set the putil0 field to a non-null string, as shown in the following command: # vxedit -g mydg set putil0="DO-NOT-USE" mydg02-01 See the vxedit(1M) manual page.
Chapter 6

Creating and administering plexes

This chapter includes the following topics:

■ About plexes
■ Creating plexes
■ Creating a striped plex
■ Displaying plex information
■ Attaching and associating plexes
■ Taking plexes offline
■ Detaching plexes
■ Automatic plex reattachment
■ Reattaching plexes
■ Moving plexes
■ Copying volumes to plexes
■ Dissociating and removing plexes
■ Changing plex attributes

About plexes

Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions.
Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data. Because each data plex must reside on different disks from the other plexes, the replication provided by mirroring prevents data loss in the event of a single-point disk-subsystem failure. Multiple data plexes also provide increased data integrity and reliability.

See “About subdisks” on page 249.
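A plex is created from one or more subdisks with the vxmake command; a minimal sketch, assuming the subdisk mydg02-01 in the disk group mydg:

# vxmake -g mydg plex vol01-02 sd=mydg02-01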
Creating and administering plexes Displaying plex information Displaying plex information Listing plexes helps identify free plexes for building volumes. Use the plex (–p) option to the vxprint command to list information about all plexes.
262 Creating and administering plexes Displaying plex information Table 6-1 Plex states State Description ACTIVE A plex can be in the ACTIVE state in the following ways: when the volume is started and the plex fully participates in normal volume I/O (the plex contents change as the contents of the volume change) ■ when the volume is stopped as a result of a system crash and the plex is ACTIVE at the moment of the crash ■ In the latter case, a system failure can leave plex contents in an inconsistent
Creating and administering plexes Displaying plex information Table 6-1 Plex states (continued) State Description OFFLINE The vxmend off task indefinitely detaches a plex from a volume by setting the plex state to OFFLINE. Although the detached plex maintains its association with the volume, changes to the volume do not update the OFFLINE plex. The plex is not updated until the plex is put online and reattached with the vxplex att task.
264 Creating and administering plexes Displaying plex information Table 6-1 Plex states (continued) State Description TEMPRM A TEMPRM plex state is similar to a TEMP state except that at the completion of the operation, the TEMPRM plex is removed. Some subdisk operations require a temporary plex. Associating a subdisk with a plex, for example, requires updating the subdisk with the volume contents before actually associating the subdisk.
Creating and administering plexes Attaching and associating plexes Table 6-2 Plex condition flags (continued) Condition flag Description RECOVER A disk corresponding to one of the disk media records was replaced, or was reattached too late to prevent the plex from becoming out-of-date with respect to the volume. The plex required complete recovery from another plex in the volume to synchronize its contents.
For example, to attach a plex named vol01-02 to a volume named vol01 in the disk group, mydg, use the following command:

# vxplex -g mydg att vol01 vol01-02

If the volume does not already exist, a plex (or multiple plexes) can be associated with the volume when it is created using the following command:

# vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...]

Taking plexes offline
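The vxmend off task described earlier (see Table 6-1) takes a plex offline; a minimal sketch, assuming the plex vol01-02 in the disk group mydg:

# vxmend -g mydg off vol01-02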
Creating and administering plexes Detaching plexes Detaching plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup] det plex For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command: # vxplex -g mydg det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
268 Creating and administering plexes Reattaching plexes To disable automatic plex attachment, remove vxattachd from the start up scripts. Disabling vxattachd disables the automatic reattachment feature for both plexes and sites. In a Cluster Volume Manager (CVM) the following considerations apply: ■ If the global detach policy is set, a storage failure from any node causes all plexes on that storage to be detached globally.
For example, to re-enable a plex named vol01-02 in the disk group, mydg, enter:

# vxmend -g mydg on vol01-02

In this case, the state of vol01-02 is set to STALE. When the volume is next started, the data on the plex is revived from another plex, and incorporated into the volume with its state set to ACTIVE.

Moving plexes
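Moving a plex copies the data content from the original plex to a new plex; the operation is performed with vxplex mv. A sketch, assuming hypothetical plexes vol02-02 (original) and vol02-03 (new) in the disk group mydg:

# vxplex -g mydg mv vol02-02 vol02-03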
270 Creating and administering plexes Copying volumes to plexes Copying volumes to plexes This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume. To copy a plex, use the following command: # vxplex [-g diskgroup] cp volume new_plex After the copy task is complete, new_plex is not associated with the specified volume volume. The plex contains a complete copy of the volume data.
Creating and administering plexes Changing plex attributes This command removes the plex vol01-02 and all associated subdisks. Alternatively, you can first dissociate the plex and subdisks, and then remove them with the following commands: # vxplex [-g diskgroup] dis plex # vxedit [-g diskgroup] -r rm plex When used together, these commands produce the same result as the vxplex -o rm dis command. The -r option to vxedit rm recursively removes all objects from the specified object downward.
Chapter 7

Creating volumes

This chapter includes the following topics:

■ About volume creation
■ Types of volume layouts
■ Creating a volume
■ Using vxassist
■ Discovering the maximum size of a volume
■ Disk group alignment constraints on volumes
■ Creating a volume on any disk
■ Creating a volume on specific disks
■ Creating a mirrored volume
■ Creating a volume with a version 0 DCO volume
■ Creating a volume with a version 20 DCO volume
■ Creating a volume with dirty region logging enabled
274 Creating volumes About volume creation ■ Accessing a volume About volume creation Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. You can also use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
Creating volumes Types of volume layouts Striped A volume with data spread evenly across multiple disks. Stripes are equal-sized fragments that are allocated alternately and evenly to the subdisks of a single plex. There must be at least two subdisks in a striped plex, each of which must exist on a different disk. Throughput increases with the number of disks across which a plex is striped. Striping helps to balance I/O load in cases where high traffic areas exist on certain subdisks.
276 Creating volumes Types of volume layouts Layered Volume A volume constructed from other volumes. Non-layered volumes are constructed by mapping their subdisks to VM disks. Layered volumes are constructed by mapping their subdisks to underlying volumes (known as storage volumes), and allow the creation of more complex forms of logical layout. Examples of layered volumes are striped-mirror and concatenated-mirror volumes. See Layered volumes.
Creating volumes Creating a volume See “Dirty region logging” on page 60. These logs are supported either as DRL log plexes, or as part of a version 20 DCO volume. Refer to the following sections for information on creating a volume on which DRL is enabled: See “Creating a volume with dirty region logging enabled” on page 293. See “Creating a volume with a version 20 DCO volume” on page 293. ■ RAID-5 logs are used to prevent corruption of data during recovery of RAID-5 volumes.
278 Creating volumes Using vxassist See “Creating a volume using vxmake” on page 300. ■ Initialize the volume using vxvol start or vxvol init zero. See “Initializing and starting a volume created using vxmake” on page 303. The steps to create the subdisks and plexes, and to associate the plexes with the volumes can be combined by using a volume description file with the vxmake command. See “Creating a volume using a vxmake description file” on page 302. See “Creating a volume using vxmake” on page 300.
Creating volumes Using vxassist not leave intermediate states that you have to clean up. If vxassist finds an error or an exceptional condition, it exits after leaving the system in the same state as it was prior to the attempted operation. The vxassist utility helps you perform the following tasks: ■ Creating volumes. ■ Creating mirrors for existing volumes. ■ Growing or shrinking existing volumes. ■ Backing up volumes online. ■ Reconfiguring a volume’s layout online.
280 Creating volumes Using vxassist A large number of vxassist keywords and attributes are available for use. See the vxassist(1M) manual page. The simplest way to create a volume is to use default attributes. See “Creating a volume on any disk” on page 283. More complex volumes can be created with specific attributes by controlling how vxassist uses the available storage space. See “Creating a volume on specific disks” on page 283.
Creating volumes Using vxassist 281

# allow only root access to a volume
mode=u=rw,g=,o=
user=root
group=root

# when mirroring, create two mirrors
nmirror=2

# for regular striping, by default create between 2 and 8 stripe columns
max_nstripe=8
min_nstripe=2

# for RAID-5, by default create between 3 and 8 stripe columns
max_nraid5stripe=8
min_nraid5stripe=3

# by default, create 1 log copy for both mirroring and RAID-5 volumes
nregionlog=1
nraid5log=1

# by default, limit mirroring log lengths to 32 kilobytes
max_regionloglen=32k
282 Creating volumes Discovering the maximum size of a volume Note: The file system must be mounted to get the benefits of the SmartMove™ feature. When the SmartMove feature is on, less I/O is sent through the host, through the storage network and to the disks or LUNs. The SmartMove feature can be used for faster plex creation and faster array migrations. The SmartMove feature enables migration from a traditional LUN to a thinly provisioned LUN, removing unused space in the process.
Creating volumes Creating a volume on any disk By default, vxassist automatically rounds up the volume size and attribute size values to a multiple of the alignment value. (This is equivalent to specifying the attribute dgalign_checking=round as an additional argument to the vxassist command.)
284 Creating volumes Creating a volume on specific disks # vxassist [-b] [-g diskgroup] make volume length \ [layout=layout] diskname ... Specify the -b option if you want to make the volume immediately available for use. See “Initializing and starting a volume” on page 303.
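For example, to create the 5-gigabyte volume volspec on the disks mydg03 and mydg04 (the volume and disk names are illustrative):
# vxassist -b -g mydg make volspec 5g mydg03 mydg04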
Creating volumes Creating a volume on specific disks 285 For example, the following command excludes disks dgrp07 and dgrp08 when calculating the maximum size of the RAID-5 volume that vxassist can create using the disks in the disk group dgrp: # vxassist -b -g dgrp maxsize layout=raid5 nlog=2 \!dgrp07 \!dgrp08 It is also possible to control how volumes are laid out on the specified storage. See “Specifying ordered allocation of storage to volumes” on page 285. See the vxassist(1M) manual page.
286 Creating volumes Creating a volume on specific disks Figure 7-1 Example of using ordered allocation to create a mirrored-stripe volume (Figure: a mirrored-stripe volume built from two striped plexes joined by a mirror; columns 1 through 3 of the first striped plex are allocated from mydg01-01, mydg02-01 and mydg03-01, and columns 1 through 3 of the second striped plex from mydg04-01, mydg05-01 and mydg06-01.) For layered volumes, vxassist applies the same rules to allocate storage as for non-layered volumes.
Creating volumes Creating a volume on specific disks # vxassist -b -g mydg -o ordered make strmir2vol 10g \ layout=mirror-stripe ncol=2 col_switch=3g,2g \ mydg01 mydg02 mydg03 mydg04 mydg05 mydg06 mydg07 mydg08 This command allocates 3 gigabytes from mydg01 and 2 gigabytes from mydg02 to column 1, and 3 gigabytes from mydg03 and 2 gigabytes from mydg04 to column 2. The mirrors of these columns are then similarly formed from disks mydg05 through mydg08.
288 Creating volumes Creating a mirrored volume Figure 7-4 Example of storage allocation used to create a mirrored-stripe volume across controllers (Figure: a mirrored-stripe volume with two striped plexes; the columns of one plex are allocated from disks on controllers c1, c2 and c3, and the columns of its mirror from disks on controllers c4, c5 and c6.) There are other ways in which you can control how vxassist lays out mirrored volumes across controllers. See “Mirroring across targets, controllers or enclosures” on page 296.
Creating volumes Creating a mirrored volume By default, the attribute stripe-mirror-col-split-trigger-pt is set to one gigabyte. The value can be set in /etc/default/vxassist. If there is a reason to implement a particular layout, you can specify layout=mirror-concat or layout=concat-mirror to implement the desired layout.
290 Creating volumes Creating a volume with a version 0 DCO volume Creating a concatenated-mirror volume Note: You need a full license to use this feature. A concatenated-mirror volume is an example of a layered volume which concatenates several underlying mirror volumes.
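A concatenated-mirror volume is created by specifying the layout=concat-mirror attribute; for example (the volume name and size are illustrative):
# vxassist -b -g mydg make volcm 10g layout=concat-mirror nmirror=2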
Creating volumes Creating a volume with a version 0 DCO volume To create a volume with an attached version 0 DCO object and volume 1 Ensure that the disk group has been upgraded to at least version 90. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup See “Upgrading a disk group” on page 241.
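2 Create the volume with an attached version 0 DCO object and volume by specifying the logtype=dco attribute. A representative command (the volume name, size, and attribute values here are illustrative):
# vxassist -g mydg make myvol 10g layout=mirror nmirror=2 \
logtype=dco ndcomirror=2 fastresync=on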
292 Creating volumes Creating a volume with a version 0 DCO volume 3 To enable DRL or sequential DRL logging on the newly created volume, use the following command: # vxvol [-g diskgroup] set logtype=drl|drlseq volume If you use ordered allocation when creating a mirrored volume on specified storage, you can use the optional logdisk attribute to specify on which disks dedicated log plexes should be created.
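Use the following form of the command to specify the disks from which space for the logs is to be allocated (a sketch of the general form; log_type and the disk list are placeholders):
# vxassist [-g diskgroup] -o ordered make volume length \
layout=mirror logtype=log_type logdisk=disk[,disk,...] \
storage_attributes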
Creating volumes Creating a volume with a version 20 DCO volume Creating a volume with a version 20 DCO volume To create a volume with an attached version 20 DCO object and volume 1 Ensure that the disk group has been upgraded to the latest version. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the most recent version, use the following command: # vxdg upgrade diskgroup See “Upgrading a disk group” on page 241.
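2 Create the volume with an attached version 20 DCO object and volume by specifying the dcoversion=20 attribute. A representative command (names and values illustrative):
# vxassist -g mydg make myvol 10g layout=mirror nmirror=2 \
logtype=dco dcoversion=20 drl=on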
294 Creating volumes Creating a striped volume The nlog attribute can be used to specify the number of log plexes to add. By default, one log plex is added. The loglen attribute specifies the size of the log, where each bit represents one region in the volume. For example, the size of the log would need to be 20K for a 10GB volume with a region size of 64 kilobytes.
Creating volumes Creating a striped volume # vxassist [-b] [-g diskgroup] make volume length layout=stripe Specify the -b option if you want to make the volume immediately available for use. See “Initializing and starting a volume” on page 303.
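For example, to create the 10-gigabyte striped volume volzebra with three columns (the name and values are illustrative):
# vxassist -b -g mydg make volzebra 10g layout=stripe ncol=3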
296 Creating volumes Mirroring across targets, controllers or enclosures See “Adding a mirror to a volume” on page 315. Creating a striped-mirror volume A striped-mirror volume is an example of a layered volume which stripes several underlying mirror volumes. A striped-mirror volume requires space to be available on at least as many disks in the disk group as the number of mirrors multiplied by the number of columns in the volume.
Creating volumes Creating a RAID-5 volume The attribute mirror=ctlr specifies that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume: # vxassist [-b] [-g diskgroup] make volume length \ layout=layout mirror=ctlr [attributes] Note: Both paths of an active/passive array are not considered to be on different controllers when mirroring across controllers.
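For example, to create a two-mirror volume whose mirrors are allocated from disks on different controllers (the volume name and size are illustrative):
# vxassist -b -g mydg make volmir 5g layout=mirror nmirror=2 \
mirror=ctlr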
298 Creating volumes Creating a RAID-5 volume Note: You need a full license to use this feature. You can create RAID-5 volumes by using either the vxassist command (recommended) or the vxmake command. Both approaches are described below. A RAID-5 volume contains a RAID-5 data plex that consists of three or more subdisks located on three or more physical disks. Only one RAID-5 data plex can exist per volume.
Creating volumes Creating tagged volumes Configuring a minimum of two RAID-5 log plexes for each RAID-5 volume protects against the loss of logging information due to the failure of a single disk. If you use ordered allocation when creating a RAID-5 volume on specified storage, you must use the logdisk attribute to specify on which disks the RAID-5 log plexes should be created.
300 Creating volumes Creating a volume using vxmake The following is an example of listtag output:

# vxassist -g dg1 listtag vol

TY NAME  DISKGROUP  TAG
=================================================
v  vol   dg1        Symantec

To list the volumes that have a specified tag name, use this command: # vxassist [-g diskgroup] list tag=tagname Tag names and tag values are case-sensitive character strings of up to 256 characters.
Creating volumes Creating a volume using vxmake Note that because four subdisks are specified, but the number of columns is not specified, the vxmake command assumes a four-column RAID-5 plex and places one subdisk in each column. Striped plexes are created using the same method except that the layout is specified as stripe.
302 Creating volumes Creating a volume using vxmake Creating a volume using a vxmake description file You can use the vxmake command to add a new volume, plex or subdisk to the set of objects managed by VxVM. vxmake adds a record for each new object to the VxVM configuration database. You can create records either by specifying parameters to vxmake on the command line, or by using a file which contains plain-text descriptions of the objects.
Creating volumes Initializing and starting a volume See “Initializing and starting a volume created using vxmake” on page 303. Initializing and starting a volume If you create a volume using the vxassist command, vxassist initializes and starts the volume automatically unless you specify the attribute init=none.
304 Creating volumes Accessing a volume To initialize and start a volume, use the following command: # vxvol [-g diskgroup] start volume The following command can be used to enable a volume without initializing it: # vxvol [-g diskgroup] init enable volume This allows you to restore data on the volume from a backup before using the following command to make the volume fully active: # vxvol [-g diskgroup] init active volume If you want to zero out the contents of an entire volume, use this command to initialize it: # vxvol [-g diskgroup] init zero volume
Chapter 8 Administering volumes This chapter includes the following topics: ■ About volume administration ■ Displaying volume information ■ Monitoring and controlling tasks ■ Stopping a volume ■ Starting a volume ■ Adding a mirror to a volume ■ Removing a mirror ■ Adding logs and maps to volumes ■ Preparing a volume for DRL and instant snapshots ■ Upgrading existing volumes to use version 20 DCOs ■ Adding traditional DRL logging to a mirrored volume ■ Adding a RAID-5 log ■ Resizing a volume ■ Setting tags on volumes ■ Changing the read policy for mirrored volumes ■ Removing a volume ■ Moving volumes from a VM disk ■ Enabling FastResync on a volume
306 Administering volumes About volume administration ■ Performing online relayout ■ Converting between layered and non-layered volumes ■ Using Thin Provisioning About volume administration Veritas Volume Manager (VxVM) lets you perform common maintenance tasks on volumes.
Administering volumes Displaying volume information # vxprint -g mydg -hvt This example produces the following output:

V  NAME        RVG/VSET/CO  KSTATE   STATE    LENGTH    READPOL    PREFPLEX  UTYPE
PL NAME        VOLUME       KSTATE   STATE    LENGTH    LAYOUT     NCOL/WID  MODE
SD NAME        PLEX         DISK     DISKOFFS LENGTH    [COL/]OFF  DEVICE    MODE
SV NAME        PLEX         VOLNAME  NVOLLAYR LENGTH    [COL/]OFF  AM/NM     MODE
SC NAME        PLEX         CACHE    DISKOFFS LENGTH    [COL/]OFF  DEVICE    MODE
DC NAME        PARENTVOL    LOGVOL
SP NAME        SNAPVOL      DCO

v  pubs        -            ENABLED  ACTIVE   ...
pl pubs-01     pubs         ENABLED  ACTIVE   ...
sd mydg11-01   pubs-01      mydg11   ...
308 Administering volumes Displaying volume information The output from the vxprint command includes information about the volume state. See “Volume states” on page 308. Volume states Table 8-1 shows the volume states that may be displayed by VxVM commands such as vxprint. Table 8-1 Volume states Volume state Description ACTIVE The volume has been started (the kernel state is currently ENABLED) or was in use (the kernel state was ENABLED) when the machine was rebooted.
Administering volumes Displaying volume information Table 8-1 Volume states (continued) Volume state Description SYNC The volume is either in read-writeback recovery mode (the kernel state is ENABLED) or was in read-writeback mode when the machine was rebooted (the kernel state is DISABLED). With read-writeback recovery, plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes.
310 Administering volumes Monitoring and controlling tasks Monitoring and controlling tasks The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The task monitor lets you monitor task progress and modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance). Note: VxVM supports this feature only for private disk groups, not for shared disk groups in a CVM environment.
Administering volumes Monitoring and controlling tasks For more information about the utilities that support task tagging, see their respective manual pages. Managing tasks with vxtask You can use the vxtask command to administer operations on VxVM tasks. Operations include listing tasks, modifying the task state (pausing, resuming, aborting) and modifying the task's progress rate. VxVM tasks represent long-term operations in progress on the system.
312 Administering volumes Monitoring and controlling tasks monitor Prints information continuously about a task or group of tasks as task information changes. This lets you track task progress. Specifying -l prints a long listing. By default, one-line listings are printed. In addition to printing task information when a task state changes, output is also generated when the task completes. When this occurs, the state of the task is printed as EXITED.
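For example, the following commands list tasks in long format, monitor the task tagged myrecovery, and pause and then resume the task with id 167 (the tag and task id here are illustrative):
# vxtask -l list
# vxtask monitor myrecovery
# vxtask pause 167
# vxtask resume 167
You can also abort a task or a group of tasks that share a tag; for example, the following command aborts all tasks tagged as recovall: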
Administering volumes Stopping a volume # vxtask abort recovall This command causes VxVM to try to reverse the progress of the operation so far. For example, aborting an Online Relayout results in VxVM returning the volume to its original layout. See “Controlling the progress of a relayout” on page 346. Stopping a volume Stopping a volume renders it unavailable to the user, and changes the volume kernel state from ENABLED or DETACHED to DISABLED.
314 Administering volumes Starting a volume # vxmend -g mydg off vol01-02 Make sure that all the plexes are offline except for the one that you will use for revival. The plex from which you will revive the volume should be placed in the STALE state. The vxmend on command can change the state of an OFFLINE plex of a DISABLED volume to STALE.
Administering volumes Adding a mirror to a volume Adding a mirror to a volume You can add a mirror to a volume with the vxassist command, as follows: # vxassist [-b] [-g diskgroup] mirror volume Specifying the -b option makes synchronizing the new mirror a background task.
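For example, to add a mirror to the volume vol01 in the disk group mydg and synchronize it in the background (the names are illustrative):
# vxassist -b -g mydg mirror vol01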
316 Administering volumes Removing a mirror To mirror volumes on a disk 1 Make sure that the target disk has an amount of space equal to or greater than that of the source disk. 2 From the vxdiskadm main menu, select Mirror volumes on a disk.
Administering volumes Adding logs and maps to volumes You can also use storage attributes to specify the storage to be removed. For example, to remove a mirror on disk mydg01 from volume vol01, enter the following. Note: The ! character is a special character in some shells. The following example shows how to escape it in a bash shell. # vxassist -g mydg remove mirror vol01 \!mydg01 See “Creating a volume on specific disks” on page 283.
318 Administering volumes Preparing a volume for DRL and instant snapshots ■ Version 20 DCO volumes, introduced in VxVM 4.0, combine DRL logging (see below) and Persistent FastResync for full-sized and space-optimized instant volume snapshots. See “Version 20 DCO volume layout” on page 68. See “Preparing a volume for DRL and instant snapshots” on page 318. ■ Dirty Region Logs let you quickly recover mirrored volumes after a system crash.
Administering volumes Preparing a volume for DRL and instant snapshots The ndcomirs attribute specifies the number of DCO plexes that are created in the DCO volume. You should configure as many DCO plexes as there are data and snapshot plexes in the volume. The DCO plexes are used to set up a DCO volume for any snapshot volume that you subsequently create from the snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot plexes.
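A representative invocation of the vxsnap prepare command (the volume name and attribute values are illustrative; the full syntax appears later in this chapter):
# vxsnap -g mydg prepare myvol ndcomirs=2 regionsize=128k drl=on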
320 Administering volumes Preparing a volume for DRL and instant snapshots To view the details of the DCO object and DCO volume that are associated with a volume, use the vxprint command.
Administering volumes Preparing a volume for DRL and instant snapshots mirrors that may be broken off as full-sized instant snapshots. You cannot relayout or resize such a volume unless you convert it back to a pure RAID-5 volume. To convert a volume back to a RAID-5 volume, remove any snapshot plexes from the volume, and dissociate the DCO and DCO volume from the layered volume. You can then perform relayout and resize operations on the resulting non-layered RAID-5 volume.
322 Administering volumes Preparing a volume for DRL and instant snapshots Determining if DRL is enabled on a volume To determine if DRL (configured using a version 20 DCO) is enabled on a volume 1 Use the vxprint command on the volume to discover the name of its DCO.
Administering volumes Preparing a volume for DRL and instant snapshots Determining if DRL logging is active on a volume To determine if DRL logging (configured using a version 20 DCO) is active on a mirrored volume 1 Use the following vxprint commands to discover the name of the volume’s DCO volume: # DCONAME=`vxprint [-g diskgroup] -F%dco_name volume` # DCOVOL=`vxprint [-g diskgroup] -F%parent_vol $DCONAME` 2 Use the vxprint command on the DCO volume to find out if DRL logging is active: # vxprint [-g
324 Administering volumes Upgrading existing volumes to use version 20 DCOs Note: If the volume is part of a snapshot hierarchy, this command fails. Upgrading existing volumes to use version 20 DCOs You can upgrade a volume created before VxVM 4.0 to take advantage of new features such as instant snapshots and DRL logs that are configured within the DCO volume.
Administering volumes Upgrading existing volumes to use version 20 DCOs To upgrade an existing disk group and the volumes that it contains 1 Upgrade the disk group that contains the volume to the latest version before performing the remainder of the procedure described in this section.
326 Administering volumes Adding traditional DRL logging to a mirrored volume 6 To dissociate a version 0 DCO object, DCO volume and snap objects from the volume, use the following command: # vxassist [-g diskgroup] remove log volume logtype=dco 7 To upgrade the volume, use the following command: # vxsnap [-g diskgroup] prepare volume [ndcomirs=number] \ [regionsize=size] [drl=on|sequential|off] \ [storage_attribute ...
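The following commands, patterned on the analogous procedure for determining whether DRL logging is active (shown next), obtain the DCO name and then display its drl attribute; a value of on indicates that DRL is enabled (field names as documented for version 20 DCOs):
# DCONAME=`vxprint [-g diskgroup] -F%dco_name volume`
# vxprint [-g diskgroup] -F%drl $DCONAME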
Administering volumes Adding traditional DRL logging to a mirrored volume The nlog attribute specifies the number of log plexes to add. By default, one log plex is added. The loglen attribute specifies the size of the log, where each bit represents one region in the volume. For example, a 10 GB volume with a 64 KB region size needs a 20K log.
328 Administering volumes Adding a RAID-5 log To remove a traditional DRL log ◆ Type the following command: # vxassist [-g diskgroup] remove log volume logtype=drl [nlog=n] By default, the vxassist command removes one log. Use the optional attribute nlog=n to specify the number of logs that are to remain after the operation completes. You can use storage attributes to specify the storage from which a log will be removed.
Administering volumes Adding a RAID-5 log Adding a RAID-5 log using vxplex You can also add a RAID-5 log using the vxplex command. For example, to attach the RAID-5 log plex r5log, to the RAID-5 volume r5vol, in the disk group mydg, use the following command: # vxplex -g mydg att r5vol r5log The attach operation can only proceed if the size of the new log is large enough to hold all the data on the stripe.
330 Administering volumes Resizing a volume Note: When you remove a log and it leaves fewer than two valid logs on the volume, a warning is printed and the operation is stopped. You can force the operation by specifying the -f option with vxplex or vxassist. Resizing a volume Resizing a volume changes its size. For example, if a volume is too small for the amount of data it needs to store, you can increase its length.
Administering volumes Resizing a volume 331 Table 8-3 shows which operations are permitted and whether you must unmount the file system before you resize it.

Table 8-3    Permitted resizing operations on file systems

                        Online JFS        Base JFS       HFS
                        (Full-VxFS)       (Lite-VxFS)
Mounted file system     Grow and shrink   Not allowed    Not allowed
Unmounted file system   Grow only         Grow only      Grow only

For example, the following command resizes a volume from 1 GB to 10 GB.
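A representative invocation uses the vxresize command, which resizes both the volume and the file system it contains (the volume name myvol is assumed here):
# vxresize -b -F vxfs -g mydg myvol 10g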
332 Administering volumes Resizing a volume growto Increases the volume size to a specified length. growby Increases the volume size by a specified amount. shrinkto Reduces the volume size to a specified length. shrinkby Reduces the volume size by a specified amount. Extending to a given length To extend a volume to a specific length, use the following command: # vxassist [-b] [-g diskgroup] growto volume length If you specify the -b option, growing the volume is a background task.
Administering volumes Resizing a volume Warning: Do not shrink the volume below the current size of the file system or database using the volume. You can safely use the vxassist shrinkto command on empty volumes.
334 Administering volumes Setting tags on volumes Warning: Sparse log plexes are not valid. They must map the entire length of the log. If increasing the log length makes any of the logs invalid, the operation is not allowed. Also, if the volume is not active and is dirty (for example, if it has not been shut down cleanly), you cannot change the log length. If you are decreasing the log length, this feature avoids losing any of the log contents.
Administering volumes Changing the read policy for mirrored volumes ■ Dashes (-) ■ Underscores (_) ■ Periods (.) A tag name must start with either a letter or an underscore. Tag values can consist of any ASCII character that has a decimal value from 32 through 127. If a tag value includes spaces, quote the specification to protect it from the shell, as follows: # vxassist -g mydg settag myvol "dbvol=table space 1" The list operation understands dotted tag hierarchies.
336 Administering volumes Removing a volume split Divides the read requests and distributes them across all the available plexes. Note: You cannot set the read policy on a RAID-5 volume.
Administering volumes Moving volumes from a VM disk 3 If the volume is listed in the /etc/fstab file, edit this file and remove its entry. For more information about the format of this file and how you can modify it, see your operating system documentation.
338 Administering volumes Enabling FastResync on a volume Continue with operation? [y,n,q,?] (default: y) As the volumes are moved from the disk, the vxdiskadm program displays the status of the operation: VxVM vxevac INFO V-5-2-24 Move volume voltest ... When the volumes have all been moved, the vxdiskadm program displays the following success message: VxVM INFO V-5-2-188 Evacuation of disk mydg02 is complete.
Administering volumes Enabling FastResync on a volume ■ Non-Persistent FastResync holds the FastResync maps in memory. These maps do not survive on a system that is rebooted. By default, FastResync is not enabled on newly-created volumes. If you want to enable FastResync on a volume that you create, specify the fastresync=on attribute to the vxassist make command. Note: You cannot configure both Persistent and Non-Persistent FastResync on the same volume.
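For example, to create a volume with FastResync enabled (the names are illustrative):
# vxassist -g mydg make myvol 10g fastresync=on
To turn FastResync on for an existing volume, use the corresponding vxvol set operation:
# vxvol -g mydg set fastresync=on myvol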
340 Administering volumes Performing online relayout To list all volumes on which Persistent FastResync is enabled, use the following command: # vxprint [-g diskgroup] -F "%name" -e "v_fastresync=on \ && v_hasdcolog" Disabling FastResync Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as follows: # vxvol [-g diskgroup] set fastresync=off volume Turning off FastResync releases all tracking maps for the specified volume.
Administering volumes Performing online relayout See “Permitted relayout transformations” on page 341. For example, the following command changes the concatenated volume vol02, in disk group mydg, to a striped volume. By default, the striped volume has 2 columns and a 64 KB stripe unit size: # vxassist -g mydg relayout vol02 layout=stripe Sometimes, you may need to perform a relayout on a plex rather than on a volume. See “Specifying a plex for relayout” on page 345.
342 Administering volumes Performing online relayout Table 8-5 Supported relayout transformations for concatenated-mirror volumes (continued) Relayout to From concat-mirror mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. stripe Yes. This relayout removes a mirror and adds striping. The stripe width and number of columns may be defined. stripe-mirror Yes. The stripe width and number of columns may be defined.
Administering volumes Performing online relayout Table 8-7 Supported relayout transformations for mirrored-concatenated volumes (continued) Relayout to From mirror-concat mirror-concat No. mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be defined. Choose a plex in the existing mirrored volume on which to perform the relayout. The other plexes are removed at the end of the relayout operation.
344 Administering volumes Performing online relayout Table 8-9 Supported relayout transformations for unmirrored stripe and layered striped-mirror volumes Relayout to From stripe or stripe-mirror concat Yes. concat-mirror Yes. mirror-concat No. Use vxassist convert after relayout to the concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to the striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes.
Administering volumes Performing online relayout Specifying a plex for relayout If you have enough disks and space in the disk group, you can change any layout to RAID-5. To convert a mirrored volume to RAID-5, you must specify which plex is to be converted. When the conversion finishes, all other plexes are removed, releasing their space for other purposes. If you convert a mirrored volume to a layout other than RAID-5, the unconverted plexes are not removed.
346 Administering volumes Performing online relayout If you specify a task tag to vxassist when you start the relayout, you can use this tag with the vxtask command to monitor the progress of the relayout. For example, to monitor the task that is tagged as myconv, enter the following: # vxtask monitor myconv Controlling the progress of a relayout You can use the vxtask command to stop (pause) the relayout temporarily, or to cancel it (abort).
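For example, if the relayout was started with the task tag myconv, the following commands pause and then resume it:
# vxtask pause myconv
# vxtask resume myconv
You can also reverse the direction of an incomplete relayout with the vxrelayout command; for example (the volume name is illustrative):
# vxrelayout -g mydg -o bg reverse vol04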
Administering volumes Converting between layered and non-layered volumes This undoes changes made to the volume so far, and returns it to its original layout. If you cancel a relayout using vxtask abort, the direction of the conversion is also reversed, and the volume is returned to its original configuration. See “Managing tasks with vxtask” on page 311. See the vxrelayout(1M) manual page. See the vxtask(1M) manual page.
348 Administering volumes Using Thin Provisioning Note: If the system crashes during relayout or conversion, the process continues when the system is rebooted. However, if the system crashes during the first stage of a two-stage relayout and conversion, only the first stage finishes. To complete the operation, you must run vxassist convert manually. Using Thin Provisioning This section describes how to use VxVM volumes with Thin Storage LUNs.
Administering volumes Using Thin Provisioning You can only perform Thin Reclamation on thin_rclm LUNs. VxVM automatically discovers LUNs that support Thin Reclamation from capable storage arrays. To list devices that are known to be thin or thin_rclm on a host, use the vxdisk -o thin list command. You can perform Thin Reclamation only on a volume that contains a mounted VxFS file system.
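For example, the following commands list the thin devices known to the host and then reclaim unused space in a disk group (the disk group name mydg is illustrative; the reclaim operation applies only to thin_rclm devices):
# vxdisk -o thin list
# vxdisk reclaim mydg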
Chapter 9 Administering volume snapshots This chapter includes the following topics: ■ About volume snapshots ■ Traditional third-mirror break-off snapshots ■ Full-sized instant snapshots ■ Space-optimized instant snapshots ■ Emulation of third-mirror break-off snapshots ■ Linked break-off snapshot volumes ■ Cascaded snapshots ■ Creating multiple snapshots ■ Restoring the original volume from a snapshot ■ Creating instant snapshots ■ Creating traditional third-mirror break-off snapshots ■ Adding a version 0 DCO and DCO volume
352 Administering volume snapshots About volume snapshots You can also take a snapshot of a volume set. See “Creating instant snapshots of volume sets” on page 381. Volume snapshots allow you to make backup copies of your volumes online with minimal interruption to users. You can then use the backup copies to restore data that has been lost due to disk failure, software errors or human mistakes, or to create replica volumes for the purposes of report generation, application development, or testing.
Administering volume snapshots Traditional third-mirror break-off snapshots Traditional third-mirror break-off snapshots Figure 9-1 shows the traditional third-mirror break-off volume snapshot model that is supported by the vxassist command.
354 Administering volume snapshots Full-sized instant snapshots The FastResync feature minimizes the time and I/O needed to resynchronize the data in the snapshot. If FastResync is not enabled, a full resynchronization of the data is required. See “FastResync” on page 65. Finally, you can use the vxassist snapclear command to break the association between the original volume and the snapshot volume. The snapshot volume then exists independently of the original volume.
Administering volume snapshots Full-sized instant snapshots plexes from the original volume (which is similar to the way that the vxassist command creates its snapshots). Unlike a third-mirror break-off snapshot created using the vxassist command, you can make a backup of a full-sized instant snapshot, instantly refresh its contents from the original volume, or attach its plexes to the original volume, without completely synchronizing the snapshot plexes from the original volume.
356 Administering volume snapshots Space-optimized instant snapshots Space-optimized instant snapshots Volume snapshots require the creation of a complete copy of the original volume, and use as much storage space as the copy of the original volume. Space-optimized instant snapshots do not require a complete copy of the original volume’s storage space. They use a storage cache.
Administering volume snapshots Emulation of third-mirror break-off snapshots Emulation of third-mirror break-off snapshots Third-mirror break-off snapshots are suitable for write-intensive volumes (such as for database redo logs) where the copy-on-write mechanism of space-optimized or full-sized instant snapshots might degrade performance.
358 Administering volume snapshots Linked break-off snapshot volumes processing applications as it avoids the disk group split/join administrative step. As with third-mirror break-off snapshots, you must wait for the contents of the snapshot volume to be synchronized with the data volume before you can use the vxsnap make command to take the snapshot.
Administering volume snapshots Cascaded snapshots An empty volume must be prepared for use by linked break-off snapshots. See “Creating a volume for use as a full-sized instant or linked break-off snapshot” on page 370. Cascaded snapshots Figure 9-4 shows a snapshot hierarchy, known as a snapshot cascade, that can improve write performance for some applications.
360 Administering volume snapshots Cascaded snapshots For these reasons, it is recommended that you do not attempt to use a snapshot cascade with applications that need to remove or split snapshots from the cascade. In such cases, it may be more appropriate to create a snapshot of a snapshot as described in the following section. See “Adding a snapshot to a cascaded snapshot hierarchy” on page 384. Note: Only unsynchronized full-sized or space-optimized instant snapshots are usually cascaded.
Administering volume snapshots Cascaded snapshots Figure 9-6 Using a snapshot of a snapshot to restore a database (Figure: 1. Create instant snapshot S1 of volume V. 2. Create instant snapshot S2 of S1 with vxsnap make source=S1. 3. After the contents of V have gone bad, apply the database redo logs to S2. 4. Restore the contents of V from snapshot S2, preserving S1 as an unmodified copy.)
362 Administering volume snapshots Creating multiple snapshots Figure 9-7 Dissociating a snapshot volume (Figure: when vxsnap dis is applied to snapshot S2, which has no snapshots of its own, S2 becomes an independent volume while S1 remains owned by V; when vxsnap dis is applied to snapshot S1, which has one snapshot S2, S1 becomes an independent volume and S2 remains a snapshot of S1.)
Administering volume snapshots Restoring the original volume from a snapshot For traditional snapshots, you can create snapshots of all the volumes in a single disk group by specifying the option -o allvols to the vxassist snapshot command. By default, each replica volume is named SNAPnumber-volume, where number is a unique serial number, and volume is the name of the volume for which a snapshot is being taken. This default can be overridden by using the option -o name=pattern.
364 Administering volume snapshots Creating instant snapshots from an instant snapshot. The volume that is used to restore the original volume can either be a true backup of the contents of the original volume at some point in time, or it may have been modified in some way (for example, by applying a database log replay or by running a file system checking utility such as fsck). All synchronization of the contents of this backup must have been completed before the original volume can be restored from it.
Administering volume snapshots Creating instant snapshots Note: Synchronization of a full-sized instant snapshot from the original volume is enabled by default. If you specify the syncing=no attribute to vxsnap make, this disables synchronization, and the contents of the instant snapshot are unlikely ever to become fully synchronized with the contents of the original volume at the point in time that the snapshot was taken.
366 Administering volume snapshots Creating instant snapshots See “Creating and managing linked break-off snapshot volumes” on page 378.
Administering volume snapshots Creating instant snapshots Preparing to create instant and break-off snapshots To prepare a volume for the creation of instant and break-off snapshots 1 Use the following commands to see if the volume has a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume, and to check that FastResync is enabled on the volume: # vxprint -g volumedg -F%instant volume # vxprint -g volumedg -F%fastresync volume
368 Administering volume snapshots Creating instant snapshots 3 If you need several space-optimized instant snapshots for the volumes in a disk group, you may find it convenient to create a single shared cache object in the disk group rather than a separate cache object for each snapshot. See “Creating a shared cache object” on page 368. For full-sized instant snapshots and linked break-off snapshots, you must prepare a volume that is to be used as the snapshot volume.
Administering volume snapshots Creating instant snapshots 3 Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step: # vxmake [-g diskgroup] cache cache_object \ cachevolname=volume [regionsize=size] [autogrow=on] \ [highwatermark=hwmk] [autogrowby=agbvalue] \ [maxautogrow=maxagbvalue]] If the region size, regionsize, is specified, it must be a power of 2, and be greater than or equal to 16KB (16k).
370 Administering volume snapshots Creating instant snapshots Creating a volume for use as a full-sized instant or linked break-off snapshot To create an empty volume for use by a full-sized instant snapshot or a linked break-off snapshot 1 Use the vxprint command on the original volume to find the required size for the snapshot volume. # LEN=`vxprint [-g diskgroup] -F%len volume` The command as shown assumes a Bourne-type shell such as sh, ksh or bash.
Administering volume snapshots Creating instant snapshots Creating and managing space-optimized instant snapshots Space-optimized instant snapshots are not suitable for write-intensive volumes (such as for database redo logs) because the copy-on-write mechanism may degrade performance.
372 Administering volume snapshots Creating instant snapshots # vxsnap [-g diskgroup] make source=vol/newvol=snapvol\ [/cachesize=size][/autogrow=yes][/ncachemirror=number]\ [alloc=storage_attributes] The cachesize attribute determines the size of the cache relative to the size of the volume. The autogrow attribute determines whether VxVM will automatically enlarge the cache if it is in danger of overflowing. By default, the cache is not grown.
Administering volume snapshots Creating instant snapshots was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 385. ■ Restore the contents of the original volume from the snapshot volume. The space-optimized instant snapshot remains intact at the end of the operation. See “Restoring a volume from an instant snapshot” on page 387. ■ Destroy the snapshot.
374 Administering volume snapshots Creating instant snapshots to turn it into an independent volume, you must wait for its contents to be synchronized with those of its parent volume.
Administering volume snapshots Creating instant snapshots 3 To backup the data in the snapshot, use an appropriate utility or operating system command to copy the contents of the snapshot to tape, or to some other backup medium. 4 You now have the following options: ■ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup.
376 Administering volume snapshots Creating instant snapshots To create and manage a third-mirror break-off snapshot 1 To create the snapshot, you can either take some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume: # vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \ [alloc=storage_attributes] By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a diffe
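For example, to create the space-optimized instant snapshot snap3myvol of the volume myvol with a 1-gigabyte cache (the names and size are illustrative):
# vxsnap -g mydg make source=myvol/newvol=snap3myvol/cachesize=1g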
Administering volume snapshots Creating instant snapshots 2 To create a third-mirror break-off snapshot, use the following form of the vxsnap make command. # vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\ {/plex=plex1[,plex2,...]|/nmirror=number]} Either of the following attributes may be specified to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume: plex Specifies the plexes in the existing volume that are to be broken off.
378 Administering volume snapshots Creating instant snapshots ■ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup. If synchronization was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 385. ■ Reattach some or all of the plexes of the snapshot volume with the original volume.
Administering volume snapshots Creating instant snapshots 379 To create and manage a linked break-off snapshot 1 Use the following command to link the prepared snapshot volume, snapvol, to the data volume: # vxsnap [-g diskgroup] [-b] addmir volume mirvol=snapvol \ [mirdg=snapdg] The optional mirdg attribute can be used to specify the snapshot volume’s current disk group, snapdg. The -b option can be used to perform the synchronization in the background.
376 Administering volume snapshots Creating instant snapshots To create and manage a third-mirror break-off snapshot 1 To create the snapshot, you can either take some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume: # vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \ [alloc=storage_attributes] By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors.
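If you specify the -b option, you can use the vxsnap snapwait command to wait for the synchronization of the snapshot mirrors to complete, as in this example (the volume name and mirror count are illustrative):
# vxsnap -g mydg snapwait vol1 nmirror=2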
Administering volume snapshots Creating instant snapshots # vxsnap [-g diskgroup] make \ source=vol1/newvol=snapvol1/cache=cacheobj \ source=vol2/newvol=snapvol2/cache=cacheobj \ source=vol3/newvol=snapvol3/cache=cacheobj \ [alloc=storage_attributes] The vxsnap make command also allows the snapshots to be of different types, have different redundancy, and be configured from different storage, as shown here: # vxsnap [-g diskgroup] make source=vol1/snapvol=snapvol1 \ source=vol2[/newvol=snapvol2]/cache=cac
382 Administering volume snapshots Creating instant snapshots snapshot of a volume set must itself be a volume set with the same number of volumes, and the same volume sizes and index numbers as the parent. For example, if a volume set contains three volumes with sizes 1GB, 2GB and 3GB, and indexes 0, 1 and 2 respectively, then the snapshot volume set must have three volumes with the same sizes matched to the same set of index numbers.
Administering volume snapshots Creating instant snapshots See “Adding snapshot mirrors to a volume” on page 383.
380 Administering volume snapshots Creating instant snapshots 4 To back up the data in the snapshot, use an appropriate utility or operating system command to copy the contents of the snapshot to tape, or to some other backup medium. 5 You now have the following options: ■ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup.
Administering volume snapshots Creating instant snapshots # vxsnap -g dbdg make source=dbvol/newvol=fri_bu/\ infrontof=thurs_bu/cache=dbdgcache See “Cascaded snapshots” on page 359. Refreshing an instant snapshot Refreshing an instant snapshot replaces it with another point-in-time copy of a parent volume.
382 Administering volume snapshots Creating instant snapshots A snapshot of a volume set must itself be a volume set with the same number of volumes, and the same volume sizes and index numbers as the parent. For example, if a volume set contains three volumes with sizes 1GB, 2GB and 3GB, and indexes 0, 1 and 2 respectively, then the snapshot volume set must have three volumes with the same sizes matched to the same set of index numbers.
Administering volume snapshots Creating instant snapshots # vxsnap [-g snapdiskgroup] reattach snapvolume|snapvolume_set \ source=volume|volume_set [sourcedg=diskgroup] The sourcedg attribute must be used to specify the data volume’s disk group if this is different from the snapshot volume’s disk group, snapdiskgroup. Warning: The snapshot that is being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted.
388 Administering volume snapshots Creating instant snapshots Warning: For this operation to succeed, the volume that is being restored and the snapshot volume must not be open to any application. For example, any file systems that are configured on either volume must first be unmounted. It is not possible to restore a volume from an unrelated volume. The destroy and nmirror attributes are not supported for space-optimized instant snapshots.
Administering volume snapshots Creating instant snapshots Removing an instant snapshot When you have dissociated a full-sized instant snapshot, you can use the vxedit command to delete it altogether, as shown in this example: # vxedit -g mydg -r rm snap2myvol You can also use this command to remove a space-optimized instant snapshot from its cache. See “Removing a cache” on page 395. Splitting an instant snapshot hierarchy Note: This operation is not supported for space-optimized instant snapshots.
390 Administering volume snapshots Creating instant snapshots This command shows the percentage progress of the synchronization of a snapshot or volume. If no volume is specified, information about the snapshots for all the volumes in a disk group is displayed.
Administering volume snapshots Creating instant snapshots

NAME    DG   OBJTYPE  SNAPTYPE  PARENT  PARENTDG  SNAPDATE        CHANGE_DATA
svset1  dg1  vset     mirbrk    vset    dg1       2006/2/1 12:29  1G (50%)
sv1     dg1  compvol  mirbrk    v1      dg1       2006/2/1 12:29  512M (50%)
sv2     dg1  compvol  mirbrk    v2      dg1       2006/2/1 12:29  512M (50%)
vol-03  dg1  plex     detmir    vol     dg1       -               20M (0.2%)
mvol2   dg2  vol      detvol    vol     dg1       -               20M (0.2%)
392 Administering volume snapshots Creating instant snapshots Table 9-1 Commands for controlling instant snapshot synchronization Command Description vxsnap [-g diskgroup] syncpause \ vol|vol_set Pause synchronization of a volume. vxsnap [-g diskgroup] syncresume \ Resume synchronization of a volume. vol|vol_set vxsnap [-b] [-g diskgroup] syncstart \ vol|vol_set Start synchronization of a volume. The -b option puts the operation in the background.
Administering volume snapshots Creating instant snapshots iosize=size Specifies the size of each I/O request that is used when synchronizing the regions of a volume. Specifying a larger size causes synchronization to complete sooner, but with greater impact on the performance of other processes that are accessing the volume. The default size of 1m (1MB) is suggested as the minimum value for high-performance array and controller hardware.
394 Administering volume snapshots Creating instant snapshots ■ When cache usage reaches the high watermark value, highwatermark (default value is 90 percent), vxcached grows the size of the cache volume by the value of autogrowby (default value is 20% of the size of the cache volume in blocks). The new required cache size cannot exceed the value of maxautogrow (default value is twice the size of the cache volume in blocks).
Administering volume snapshots Creating instant snapshots Growing and shrinking a cache You can use the vxcache command to increase the size of the cache volume that is associated with a cache object: # vxcache [-g diskgroup] growcacheto cache_object size For example, to increase the size of the cache volume associated with the cache object, mycache, to 2GB, you would use the following command: # vxcache -g mydg growcacheto mycache 2g To grow a cache by a specified amount, use the following form of the command: # vxcache [-g diskgroup] growcacheby cache_object size You can similarly use the shrinkcacheby and shrinkcacheto arguments to reduce the size of a cache.
396 Administering volume snapshots Creating traditional third-mirror break-off snapshots 3 Stop the cache object: # vxcache -g diskgroup stop cache_object 4 Finally, remove the cache object and its cache volume: # vxedit -g diskgroup -r rm cache_object Creating traditional third-mirror break-off snapshots Note: You need a full license to use this feature. VxVM provides third-mirror break-off snapshot images of volume devices using vxassist and other commands.
Administering volume snapshots Creating traditional third-mirror break-off snapshots Note: If the snapstart procedure is interrupted, the snapshot mirror is automatically removed when the volume is started. Once the snapshot mirror is synchronized, it continues being updated until it is detached. You can then select a convenient time at which to create a snapshot volume as an image of the existing volume.
398 Administering volume snapshots Creating traditional third-mirror break-off snapshots To back up a volume using the vxassist command 1 Create a snapshot mirror for a volume using the following command: # vxassist [-b] [-g diskgroup] snapstart [nmirror=N] volume For example, to create a snapshot mirror of a volume called voldef, use the following command: # vxassist [-g diskgroup] snapstart voldef The vxassist snapstart task creates a write-only mirror, which is attached to and synchronized from the
Administering volume snapshots Creating traditional third-mirror break-off snapshots 3 Create a snapshot volume using the following command: # vxassist [-g diskgroup] snapshot [nmirror=N] volume snapshot If required, use the nmirror attribute to specify the number of mirrors in the snapshot volume.
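For example, to create the snapshot volume snapvoldef from the volume voldef (the names are illustrative):
# vxassist -g mydg snapshot voldef snapvoldef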
400 Administering volume snapshots Creating traditional third-mirror break-off snapshots ■ Remove the snapshot volume to save space with this command: # vxedit [-g diskgroup] -rf rm snapshot Dissociating or removing the snapshot volume loses the advantage of fast resynchronization if FastResync was enabled. If there are no further snapshot plexes available, any subsequent snapshots that you take require another complete copy of the original volume to be made.
Administering volume snapshots Creating traditional third-mirror break-off snapshots # vxplex -o dcoplex=trivol_dco-03 convert state=SNAPDONE \ trivol-03 Here the DCO plex trivol_dco-03 is specified as the DCO plex for the new snapshot plex.
402 Administering volume snapshots Creating traditional third-mirror break-off snapshots and re-attached to the original volume. The snapshot volume is removed if all its snapshot plexes are snapped back. This task resynchronizes the data in the volume so that the plexes are consistent. The snapback operation cannot be applied to RAID-5 volumes unless they have been converted to a special layered volume layout by the addition of a DCO and DCO volume.
Administering volume snapshots Creating traditional third-mirror break-off snapshots Adding plexes to a snapshot volume If you want to retain the existing plexes in a snapshot volume after a snapback operation, you can create additional snapshot plexes that are to be used for the snapback.
404 Administering volume snapshots Creating traditional third-mirror break-off snapshots Displaying snapshot information The vxassist snapprintcommand displays the associations between the original volumes and their respective replicas (snapshot copies): # vxassist snapprint [volume] Output from this command is shown in the following examples: # vxassist -g mydg snapprint v1 V NAME SS SNAPOBJ DP NAME USETYPE NAME VOLUME LENGTH LENGTH LENGTH %DIRTY %DIRTY v ss dp dp fsgen SNAP-v1 v1 v1 20480 20480 2
Administering volume snapshots Adding a version 0 DCO and DCO volume If a volume is specified, the snapprint command displays an error message if no FastResync maps are enabled for that volume. Adding a version 0 DCO and DCO volume The version 0 DCO log volume was introduced in VxVM 3.2. The version 0 layout supports traditional (third-mirror break-off) snapshots, but not full-sized or space-optimized instant snapshots. See “Version 0 DCO volume layout” on page 68.
406 Administering volume snapshots Adding a version 0 DCO and DCO volume To add a DCO object and DCO volume to an existing volume 1 Ensure that the disk group containing the existing volume has been upgraded to at least version 90. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup See “Upgrading a disk group” on page 241.
Administering volume snapshots Adding a version 0 DCO and DCO volume 2 Use the following command to turn off Non-Persistent FastResync on the original volume if it is currently enabled: # vxvol [-g diskgroup] set fastresync=off volume If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes. Note: The ! character is a special character in some shells. The following example shows how to escape it in a bash shell.
404 Administering volume snapshots Creating traditional third-mirror break-off snapshots Displaying snapshot information The vxassist snapprint command displays the associations between the original volumes and their respective replicas (snapshot copies): # vxassist snapprint [volume] Output from this command is shown in the following examples:

# vxassist -g mydg snapprint v1

V  NAME         USETYPE  LENGTH
SS SNAPOBJ      NAME     LENGTH  %DIRTY
DP NAME         VOLUME   LENGTH  %DIRTY

v  v1           fsgen    20480
ss SNAP-v1_snp  SNAP-v1  20480   ...
dp v1-01        v1       20480   ...
dp v1-02        v1       20480   ...
Administering volume snapshots Adding a version 0 DCO and DCO volume Removing a version 0 DCO and DCO volume To dissociate a version 0 DCO object, DCO volume and any snap objects from a volume, use the following command: # vxassist [-g diskgroup] remove log volume logtype=dco This completely removes the DCO object, DCO volume and any snap objects. It also has the effect of disabling FastResync for the volume.
410 Administering volume snapshots Adding a version 0 DCO and DCO volume See the vxdco(1M) manual page.
Chapter 10 Creating and administering volume sets This chapter includes the following topics: ■ About volume sets ■ Creating a volume set ■ Adding a volume to a volume set ■ Listing details of volume sets ■ Stopping and starting volume sets ■ Removing a volume from a volume set ■ Raw device node access to component volumes About volume sets Veritas File System (VxFS) uses volume sets to implement its Multi-Volume Support and Dynamic Storage Tiering (DST) features.
412 Creating and administering volume sets Creating a volume set ■ The first volume (index 0) in a volume set must be larger than the sum of the total volume size divided by 4000, the size of the VxFS intent log, and 1MB. Volumes 258 MB or larger should always suffice. ■ Raw I/O from and to a volume set is not supported. ■ Raw I/O from and to the component volumes of a volume set is supported under certain conditions. See “Raw device node access to component volumes” on page 415.
Creating and administering volume sets Adding a volume to a volume set 413 Adding a volume to a volume set Having created a volume set containing a single volume, you can use the following command to add further volumes to the volume set: # vxvset [-g diskgroup] [-f] addvol volset volume For example, to add the volume vol2, to the volume set myvset, use the following command: # vxvset -g mydg addvol myvset vol2 Warning: The -f (force) option must be specified if the volume being added, or any volume in the volume set, is either a snapshot or the parent of a snapshot.
414 Creating and administering volume sets Stopping and starting volume sets The context field contains details of any string that the application has set up for the volume or volume set to tag its purpose. Stopping and starting volume sets Under some circumstances, you may need to stop and restart a volume set.
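To stop and restart one or more volume sets, use commands of the following form (the volume set name myvset is illustrative):
# vxvset [-g diskgroup] stop volset
# vxvset [-g diskgroup] start volset
For example:
# vxvset -g mydg stop myvset
# vxvset -g mydg start myvset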
Creating and administering volume sets Removing a volume from a volume set Removing a volume from a volume set To remove a component volume from a volume set, use the following command: # vxvset [-g diskgroup] [-f] rmvol volset volume For example, the following commands remove the volumes, vol1 and vol2, from the volume set myvset: # vxvset -g mydg rmvol myvset vol1 # vxvset -g mydg rmvol myvset vol2 Removing the final volume deletes the volume set.
416 Creating and administering volume sets Raw device node access to component volumes Access to the raw device nodes for the component volumes can be configured to be read-only or read-write. This mode is shared by all the raw device nodes for the component volumes of a volume set. The read-only access mode implies that any writes to the raw device will fail; however, writes using the ioctl interface or by VxFS to update metadata are not prevented.
Creating and administering volume sets Raw device node access to component volumes # vxvset -g mydg -o makedev=on -o compvol_access=read-write \ make myvset1 myvol1 Displaying the raw device access settings for a volume set You can use the vxprint -m command to display the current settings for a volume set. If the makedev attribute is set to on, one of the following strings is displayed in the output: vset_devinfo=on:read-only Raw device nodes in read-only mode.
418 Creating and administering volume sets Raw device node access to component volumes The compvol_access attribute can be specified to the vxvset set command to change the access mode to the component volumes of a volume set. If any of the component volumes are open, the -f (force) option must be specified to set the attribute to read-only.
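For example, assuming the volume set myvset1 shown above resides in the disk group mydg, its component-volume access mode might be changed as follows:
# vxvset -g mydg -f set compvol_access=read-only myvset1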
Chapter 11 Configuring off-host processing This chapter includes the following topics: ■ About off-host processing solutions ■ Implementation of off-host processing solutions About off-host processing solutions Off-host processing lets you implement the following activities: Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline.
420 Configuring off-host processing Implementation of off-host processing solutions Database error recovery Logic errors caused by an administrator or an application program can compromise the integrity of a database. By restoring the database table files from a snapshot copy, the database can be recovered more quickly than by full restoration from tape or other backup media. Using linked break-off snapshots makes off-host processing simpler. See “Linked break-off snapshot volumes” on page 357.
Configuring off-host processing Implementation of off-host processing solutions These applications use the Persistent FastResync feature of VxVM in conjunction with linked break-off snapshots. A volume snapshot represents the data that exists in a volume at a given time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
422 Configuring off-host processing Implementation of off-host processing solutions To back up a volume in a private disk group 1 On the primary host, use the following command to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume: # vxprint -g volumedg -F%instant volume If the volume can be used for instant snapshot operations, this command returns on; otherwise, it returns off.
Configuring off-host processing Implementation of off-host processing solutions 4 On the primary host, link the snapshot volume in the snapshot disk group to the data volume. Enter the following: # vxsnap -g volumedg -b addmir volume mirvol=snapvol \ mirdg=snapvoldg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete.
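For example:
# vxsnap -g volumedg snapwait volume mirvol=snapvol \ mirdg=snapvoldg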
424 Configuring off-host processing Implementation of off-host processing solutions 10 The snapshot volume is initially disabled following the join. On the OHP host, use the following commands to recover and restart the snapshot volume: # vxrecover -g snapvoldg -m snapvol # vxvol -g snapvoldg start snapvol 11 On the OHP host, back up the snapshot volume. If you need to remount the file system in the volume to back it up, first run fsck on the volume.
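For a VxFS file system, the check and remount might take the following form (the mount point /mnt/backup is illustrative):
# fsck -F vxfs /dev/vx/rdsk/snapvoldg/snapvol
# mount -F vxfs /dev/vx/dsk/snapvoldg/snapvol /mnt/backup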
Configuring off-host processing Implementation of off-host processing solutions 14 The snapshot volume is initially disabled following the join.
426 Configuring off-host processing Implementation of off-host processing solutions To set up a replica database using the table files that are configured within a volume in a private disk group 1 Use the following command on the primary host to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume: # vxprint -g volumedg -F%instant volume This command returns on if the volume can be used for instant snapshot operations; otherwise, it returns off.
Configuring off-host processing Implementation of off-host processing solutions 5 On the primary host, link the snapshot volume in the snapshot disk group to the data volume: # vxsnap -g volumedg -b addmir volume mirvol=snapvol \ mirdg=snapvoldg You can use the vxsnap snapwait command to wait for synchronization of the linked snapshot volume to complete: # vxsnap -g volumedg snapwait volume mirvol=snapvol \ mirdg=snapvoldg This step sets up the snapshot volumes, and starts tracking changes to the original volumes.
428 Configuring off-host processing Implementation of off-host processing solutions 10 On the OHP host where the replica database is to be set up, use the following command to import the snapshot volume’s disk group: # vxdg import snapvoldg 11 The snapshot volume is initially disabled following the join.
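As in the backup procedure earlier, use the following commands on the OHP host to recover and restart the snapshot volume:
# vxrecover -g snapvoldg -m snapvol
# vxvol -g snapvoldg start snapvol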
Configuring off-host processing Implementation of off-host processing solutions 4 The snapshot volume is initially disabled following the join.
430 Configuring off-host processing Implementation of off-host processing solutions
Chapter 12 Administering hot-relocation This chapter includes the following topics: ■ About hot-relocation ■ How hot-relocation works ■ Configuring a system for hot-relocation ■ Displaying spare disk information ■ Marking a disk as a hot-relocation spare ■ Removing a disk from use as a hot-relocation spare ■ Excluding a disk from hot-relocation use ■ Making a disk available for hot-relocation use ■ Configuring hot-relocation to use only spare disks ■ Moving relocated subdisks ■ Modifying the behavior of hot-relocation
432 Administering hot-relocation How hot-relocation works If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled. Apparent disk failure may not be due to a fault in the physical disk media or the disk controller, but may instead be caused by a fault in an intermediate or ancillary component such as a cable, host bus adapter, or power supply.
Administering hot-relocation How hot-relocation works Disk failure This is normally detected as a result of an I/O failure from a VxVM object. VxVM attempts to correct the error. If the error cannot be corrected, VxVM tries to access configuration information in the private region of the disk. If it cannot access the private region, it considers the disk failed. Plex failure This is normally detected as a result of an uncorrectable I/O error in the plex (which affects subdisks within the plex).
434 Administering hot-relocation How hot-relocation works ■ The failing subdisks are on non-redundant volumes (that is, volumes of types other than mirrored or RAID-5). ■ There are insufficient spare disks or free disk space in the disk group. ■ The only available space is on a disk that already contains a mirror of the failing plex. ■ The only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks.
Administering hot-relocation How hot-relocation works Figure 12-1 Example of hot-relocation for a subdisk in a RAID-5 volume: (a) The disk group contains five disks (mydg01 through mydg05). Two RAID-5 volumes are configured across four of the disks, and one spare disk (mydg05) is available for hot-relocation. (b) Subdisk mydg02-01 in one RAID-5 volume fails.
436 Administering hot-relocation How hot-relocation works Mail can be sent to users other than root. See “Modifying the behavior of hot-relocation” on page 448. You can determine which disk is causing the failures in the above example message by using the following command: # vxstat -g mydg -s -ff home-02 src-02 The -s option asks for information about individual subdisks, and the -ff option displays the number of failed read and write operations.
Administering hot-relocation How hot-relocation works Failures have been detected by the Veritas Volume Manager: failed disks: mydg02 failed plexes: home-02 src-02 mkting-01 failing disks: mydg02 This message shows that mydg02 was detached by a failure. When a disk is detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01 were also detached (probably because of the failure of the disk). One possible cause of the problem could be a cabling error.
438 Administering hot-relocation Configuring a system for hot-relocation Hot-relocation tries to move all subdisks from a failing drive to the same destination disk, if possible. When hot-relocation takes place, the failed subdisk is removed from the configuration database, and VxVM ensures that the disk space used by the failed subdisk is not recycled as free space.
Administering hot-relocation Marking a disk as a hot-relocation spare # vxdg [-g diskgroup] spare The following is example output:
GROUP DISK   DEVICE TAG    OFFSET LENGTH FLAGS
mydg  mydg02 c0t2d0 c0t2d0 0      658007 s
Here mydg02 is the only disk designated as a spare in the mydg disk group. The LENGTH field indicates how much spare space is currently available on mydg02 for relocation.
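To designate a disk as a hot-relocation spare from the command line, use vxedit's spare attribute (disk group and disk names illustrative):
# vxedit -g mydg set spare=on mydg01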
440 Administering hot-relocation Removing a disk from use as a hot-relocation spare To use vxdiskadm to designate a disk as a hot-relocation spare 1 Select Mark a disk as a spare for a disk group from the vxdiskadm main menu. 2 At the following prompt, enter a disk media name (such as mydg01): Enter disk name [,list,q,?] mydg01 The following notice is displayed when the disk has been marked as spare: VxVM NOTICE V-5-2-219 Marking of mydg01 in mydg as a spare disk is complete.
Administering hot-relocation Excluding a disk from hot-relocation use To use vxdiskadm to remove a disk from the hot-relocation pool 1 Select Turn off the spare flag on a disk from the vxdiskadm main menu. 2 At the following prompt, enter the disk media name of a spare disk (such as mydg01): Enter disk name [,list,q,?] mydg01 The following confirmation is displayed: VxVM NOTICE V-5-2-143 Disk mydg01 in mydg no longer marked as a spare disk.
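The command-line equivalent turns the spare flag off with vxedit, for example:
# vxedit -g mydg set spare=off mydg01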
442 Administering hot-relocation Making a disk available for hot-relocation use Making a disk available for hot-relocation use Free space is used automatically by hot-relocation in case spare space is not sufficient to relocate failed subdisks. You can limit this free space usage by hot-relocation by specifying which free disks should not be touched by hot-relocation. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool.
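A sketch, assuming vxedit's nohotuse attribute controls the exclusion (disk name illustrative):
# vxedit -g mydg set nohotuse=off mydg01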
Administering hot-relocation Moving relocated subdisks Moving relocated subdisks When hot-relocation occurs, subdisks are relocated to spare disks and/or available free space within the disk group. The new subdisk locations may not provide the same performance or data layout that existed before hot-relocation took place. You can move the relocated subdisks (after hot-relocation is complete) to improve performance.
444 Administering hot-relocation Moving relocated subdisks To move the relocated subdisks using vxdiskadm 1 Select Unrelocate subdisks back to a disk from the vxdiskadm main menu. 2 This option prompts for the original disk media name first.
Administering hot-relocation Moving relocated subdisks Moving relocated subdisks using vxassist You can use the vxassist command to move and unrelocate subdisks. For example, to move the relocated subdisks on mydg05 belonging to the volume home back to mydg02, enter the following command. Note: The ! character is a special character in some shells. The following example shows how to escape it in a bash shell.
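A sketch of the command, assuming the standard vxassist move syntax in which the escaped ! excludes mydg05 as a source:
# vxassist -g mydg move home \!mydg05 mydg02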
446 Administering hot-relocation Moving relocated subdisks If vxunreloc cannot replace the subdisks back to the same original offsets, a force option is available that allows you to move the subdisks to a specified disk without using the original offsets. See the vxunreloc(1M) manual page. The examples in the following sections demonstrate the use of vxunreloc. Moving hot-relocated subdisks back to their original disk Assume that mydg01 failed and all the subdisks were relocated.
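Once mydg01 has been replaced, a command of the following form (assuming vxunreloc's default behavior of restoring the original offsets) moves the subdisks back:
# vxunreloc -g mydg mydg01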
Administering hot-relocation Moving relocated subdisks Assume that mydg01 failed and the subdisks were relocated and that you want to move the hot-relocated subdisks to mydg05 where some subdisks already reside.
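Assuming vxunreloc's -n option names a destination disk other than the original, the move might look like this:
# vxunreloc -g mydg -n mydg05 mydg01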
448 Administering hot-relocation Modifying the behavior of hot-relocation The comment fields of all the subdisks on the destination disk remain marked as UNRELOC until phase 3 completes. If its execution is interrupted, vxunreloc can subsequently re-use subdisks that it created on the destination disk during a previous execution, but it does not use any data that was moved to the destination disk. If a subdisk data move fails, vxunreloc displays an error message and exits.
Administering hot-relocation Modifying the behavior of hot-relocation 1 To prevent vxrelocd starting, comment out the entry that invokes it in the startup file: # nohup vxrelocd root & 2 By default, vxrelocd sends electronic mail to root when failures are detected and relocation actions are performed.
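To have mail sent to additional users, the invocation can be extended along these lines (user1 and user2 are illustrative user names):
# nohup vxrelocd root user1 user2 &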
450 Administering hot-relocation Modifying the behavior of hot-relocation
Chapter 13 Administering cluster functionality This chapter includes the following topics: ■ About the cluster functionality of VxVM ■ Overview of cluster volume management ■ Cluster initialization and configuration ■ Upgrading cluster functionality ■ Dirty region logging in cluster environments ■ Multiple host failover configurations ■ Administering VxVM in cluster environments About the cluster functionality of VxVM A cluster consists of a number of hosts or nodes that share a set of disks
452 Administering cluster functionality About the cluster functionality of VxVM Availability If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster. This ability to provide continuous uninterrupted service by switching to redundant hardware is commonly termed failover.
Administering cluster functionality Overview of cluster volume management Campus cluster configurations (also known as stretch cluster or remote mirror configurations) can also be configured and administered. See “About sites and remote mirrors” on page 487. Overview of cluster volume management In recent years, tightly-coupled cluster systems have become increasingly popular in the realm of enterprise-scale mission-critical data processing.
454 Administering cluster functionality Overview of cluster volume management Figure 13-1 Example of a 4-node cluster: a redundant private network connects Node 0 (master) and Nodes 1, 2 and 3 (slaves), which have redundant SCSI or Fibre Channel connectivity to the cluster-shareable disks in cluster-shareable disk groups. Node 0 is configured as the master node and nodes 1, 2 and 3 are configured as slave nodes.
Administering cluster functionality Overview of cluster volume management You must run commands that configure or reconfigure VxVM objects on the master node. Tasks that must be initiated from the master node include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations. VxVM determines that the first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master.
456 Administering cluster functionality Overview of cluster volume management it joins the cluster and imports the same shared disk groups as the master. When a node leaves the cluster gracefully, it deports all its imported shared disk groups, but they remain imported on the surviving nodes. Reconfiguring a shared disk group is performed with the cooperation of all nodes.
Administering cluster functionality Overview of cluster volume management Table 13-1 Activation modes for shared disk groups (continued) Activation mode Description readonly (ro) The node has read access to the disk group and denies write access for all other nodes in the cluster. The node has no write access to the disk group. Attempts to activate a disk group for either of the write modes on other nodes fail. sharedread (sr) The node has read access to the disk group.
458 Administering cluster functionality Overview of cluster volume management enable_activation=true default_activation_mode=activation-mode The activation-mode is one of exclusivewrite, readonly, sharedread, sharedwrite, or off. When a shared disk group is created or imported, it is activated in the specified mode. When a node joins the cluster, all shared disk groups accessible from the node are activated in the specified mode.
Administering cluster functionality Overview of cluster volume management ■ Any changes on the master node are automatically coordinated and propagated to the slave nodes in the cluster. ■ Any failures that require a configuration change must be sent to the master node so that they can be resolved correctly. ■ As the master node resolves failures, all the slave nodes are correctly updated. This ensures that all nodes have the same view of the configuration.
460 Administering cluster functionality Overview of cluster volume management Global detach policy Warning: The global detach policy must be selected when Dynamic MultiPathing (DMP) is used to manage multipathing on Active/Passive arrays. This ensures that all nodes correctly coordinate their use of the active path. The global detach policy is the traditional and default policy for all nodes in the configuration.
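A sketch of selecting this policy on a shared disk group, assuming the vxdg diskdetpolicy attribute described in the cross-reference that follows:
# vxdg -g mydg set diskdetpolicy=global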
Administering cluster functionality Overview of cluster volume management See “Setting the disk detach policy on a shared disk group” on page 482. Table 13-3 summarizes the effect on a cluster of I/O failure to the disks in a mirrored volume.
462 Administering cluster functionality Overview of cluster volume management
Table 13-4 Behavior of master node for different failure policies
Type of I/O failure: Master node loses access to all copies of the logs.
Leave (dgfailpolicy=leave): The master node panics with the message “klog update failed” for a failed kernel-initiated transaction, or “cvm config update failed” for a failed user-initiated transaction.
Disable (dgfailpolicy=dgdisable): The master node disables the disk group.
Administering cluster functionality Cluster initialization and configuration Some failure scenarios do not result in a disk group failure policy being invoked, but can potentially impact the cluster. For example, if the local disk detach policy is in effect, and the new master node has a failed plex, this results in all nodes detaching the plex because the new master is unaffected by the policy.
464 Administering cluster functionality Cluster initialization and configuration ■ network addresses of nodes ■ port addresses When a node joins the cluster, this information is automatically loaded into VxVM on that node at node startup time. Note: To make effective use of the cluster functionality of VxVM requires that you configure a cluster monitor, such as provided by HP Serviceguard, or by GAB (Group Membership and Atomic Broadcast) in VCS.
Administering cluster functionality Cluster initialization and configuration vxclustadm utility The vxclustadm command provides an interface to the cluster functionality of VxVM when VCS or HP Serviceguard is used as the cluster monitor. It is also called during cluster startup and shutdown. In the absence of a cluster monitor, vxclustadm can also be used to activate or deactivate the cluster functionality of VxVM on any node in a cluster.
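Typical invocations might look like the following (the cluster monitor arguments shown assume a VCS/GAB stack and are illustrative):
# vxclustadm -m vcs -t gab startnode
# vxclustadm stopnode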
466 Administering cluster functionality Cluster initialization and configuration Table 13-5 Node abort messages (continued) Reason Description clustering license not available Clustering license cannot be found. connection refused by master Join of a node refused by the master node. disk in use by another cluster A disk belongs to a cluster other than the one that a node is joining.
Administering cluster functionality Cluster initialization and configuration A volume reconfiguration transaction is initiated by running a VxVM utility on the master node. The utility contacts the local vxconfigd daemon on the master node, which validates the requested change. For example, vxconfigd rejects an attempt to create a new disk group with the same name as an existing disk group.
464 Administering cluster functionality Cluster initialization and configuration ■ network addresses of nodes ■ port addresses When a node joins the cluster, this information is automatically loaded into VxVM on that node at node startup time. Note: Making effective use of the cluster functionality of VxVM requires that you configure a cluster monitor, such as that provided by HP Serviceguard, or by GAB (Group Membership and Atomic Broadcast) in VCS.
Administering cluster functionality Cluster initialization and configuration information about the shared configuration, so that any displayed configuration information is correct. ■ If the vxconfigd daemon is stopped on a slave node, the master node takes no action. When the vxconfigd daemon is restarted on the slave, the slave vxconfigd daemon attempts to reconnect to the master daemon and to re-acquire the information about the shared configuration.
470 Administering cluster functionality Cluster initialization and configuration To restart vxconfigd manually 1 Use the following command to disable failover on any service groups that contain VxVM objects: # hagrp -freeze groupname 2 Enter the following command to stop and restart the VxVM configuration daemon on the affected node: # vxconfigd -k 3 Use the following command to re-enable failover for the service groups that you froze in step 1: # hagrp -unfreeze groupname Node shutdown Although it
Administering cluster functionality Upgrading cluster functionality ■ If all volumes in shared disk groups are closed, VxVM makes them unavailable to applications. Because all nodes are informed that these volumes are closed on the leaving node, no resynchronization is performed. ■ If any volume in a shared disk group is open, the shutdown operation in the kernel waits until the volume is closed. There is no timeout checking in this operation. Once shutdown succeeds, the node has left the cluster.
472 Administering cluster functionality Dirty region logging in cluster environments then join it back into the cluster. This operation is repeated for each node in the cluster. Each Veritas Volume Manager release starting with Release 3.1 has a cluster protocol version number associated with it. The cluster protocol version is not the same as the release number or the disk group version number. The cluster protocol version is stored in the /etc/vx/volboot file.
Administering cluster functionality Multiple host failover configurations If a shared disk group is imported as a private disk group on a system without cluster support, VxVM considers the logs of the shared volumes to be invalid and conducts a full volume recovery. After the recovery completes, VxVM uses DRL. The cluster functionality of VxVM can perform a DRL recovery on a non-shared volume.
474 Administering cluster functionality Multiple host failover configurations group imported (importing host) must deport (give up access to) the disk group. Once deported, the disk group can be imported by another host. If two hosts are allowed to access a disk group concurrently without proper synchronization, such as that provided by the Oracle Parallel Server, the configuration of the disk group, and possibly the contents of volumes, can be corrupted.
Administering cluster functionality Multiple host failover configurations Veritas Volume Manager can support failover, but it relies on the administrator or on an external high-availability monitor to ensure that the first system is shut down or unavailable before the disk group is imported to another system. See “Moving disk groups between systems” on page 216. See the vxdg(1M) manual page.
476 Administering cluster functionality Administering VxVM in cluster environments If you use the Veritas Cluster Server product, all disk group failover issues can be managed correctly. VCS includes a high availability monitor and includes failover scripts for VxVM, VxFS, and for several popular databases.
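After the new software has been installed on all nodes and they have rejoined the cluster, the protocol version can typically be upgraded from the master node with:
# vxdctl upgrade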
Administering cluster functionality Administering VxVM in cluster environments Table 13-6 Cluster status messages (continued) Status message Description mode: enabled: cluster active - role not set master: mozart state: joining reconfig: master update The node has not yet been assigned a role, and is in the process of joining the cluster. mode: enabled: cluster active - SLAVE master: mozart state: joining The node is configured as a slave, and is in the process of joining the cluster.
478 Administering cluster functionality Administering VxVM in cluster environments Note that the clusterid field is set to cvm2 (the name of the cluster), and the flags field includes an entry for shared. The imported flag is only set if a node is a part of the cluster and the disk group is imported. Listing shared disk groups vxdg can be used to list information about shared disk groups.
Administering cluster functionality Administering VxVM in cluster environments copies: nconfig=2 nlog=2 config: seqno=0.1976 permlen=1456 free=1448 templen=6 loglen=220
config disk c1t0d0 copy 1 len=1456 state=clean online
config disk c1t0d0 copy 1 len=1456 state=clean online
log disk c1t0d0 copy 1 len=220
log disk c1t0d0 copy 1 len=220
Note that the flags field is set to shared. The output for the same command when run on a slave is slightly different.
480 Administering cluster functionality Administering VxVM in cluster environments # vxdg -s import diskgroup where diskgroup is the disk group name or ID. On subsequent cluster restarts, the disk group is automatically imported as shared. Note that it can be necessary to deport the disk group (using the vxdg deport diskgroup command) before invoking the vxdg utility. Forcibly importing a disk group You can use the -f option to the vxdg command to import a disk group forcibly.
Administering cluster functionality Administering VxVM in cluster environments Moving objects between shared disk groups You can only move objects between shared disk groups on the master node. You cannot move objects between private and shared disk groups. You can use the vxdg move command to move a self-contained set of VxVM objects such as disks and top-level volumes between disk groups.
482 Administering cluster functionality Administering VxVM in cluster environments The activation mode is one of exclusivewrite or ew, readonly or ro, sharedread or sr, sharedwrite or sw, or off.
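For example, to activate the shared disk group mydg in shared-write mode on a node:
# vxdg -g mydg set activation=sw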
Administering cluster functionality Administering VxVM in cluster environments in the disk group dskgrp, and configure it for exclusive open, use the following command: # vxassist -g dskgrp make volmir 5g layout=mirror exclusive=on Multiple opens by the same node are also supported. Any attempts by other nodes to open the volume fail until the final close of the volume by the node that opened it. Specifying exclusive=off instead means that more than one node in a cluster can open a volume simultaneously.
484 Administering cluster functionality Administering VxVM in cluster environments You can also check the existing cluster protocol version using the following command: # vxdctl protocolversion This produces output similar to the following: Cluster running at protocol 80 Displaying the supported cluster protocol version range The following command displays the maximum and minimum protocol version supported by the node and the current protocol version: # vxdctl support This command produces output similar to the following:
Administering cluster functionality Administering VxVM in cluster environments Warning: While the vxrecover utility is active, there can be some degradation in system performance. Obtaining cluster performance statistics The vxstat utility returns statistics for specified objects. In a cluster environment, vxstat gathers statistics from all of the nodes in the cluster. The statistics give the total usage, by all nodes, for the requested objects. If a local object is specified, its local usage is returned.
486 Administering cluster functionality Administering VxVM in cluster environments
Chapter 14 Administering sites and remote mirrors This chapter includes the following topics: ■ About sites and remote mirrors ■ Configuring Remote Mirror sites ■ Configuring sites for hosts ■ Configuring sites for storage ■ Changing the site name ■ Configuring site-based allocation on a disk group ■ Configuring site consistency on a disk group ■ Configuring site consistency on a volume ■ Setting the siteread policy on a volume ■ Site-based allocation of storage to volumes ■ Making an existing disk group site consistent ■ Fire drill — testing the configuration ■ Failure scenarios and recovery procedures
488 Administering sites and remote mirrors About sites and remote mirrors place, are instead divided between two or more sites. These sites are typically connected via a redundant high-capacity network that provides access to storage and private link communication between the cluster nodes. Figure 14-1 shows a typical two-site remote mirror configuration.
Administering sites and remote mirrors About sites and remote mirrors By tagging disks with site names, storage can be allocated from the correct location when creating, resizing or relocating a volume, and when changing a volume’s layout. Figure 14-2 shows an example of a site-consistent volume with two plexes configured at each of two sites.
490 Administering sites and remote mirrors Configuring Remote Mirror sites Figure 14-3 Example of a two-site configuration with remote storage only: a cluster or standalone system at Site A connects through a Fibre Channel hub or switch to local disk enclosures, and over a metropolitan or wide area network link (Fibre Channel or DWDM) to a Fibre Channel hub or switch and disk enclosures at Site B. Configuring Remote Mirror sites Note: The Remote Mirror feature requires that the Site Awareness license has been installed on all hosts at all sites that participate in the configuration.
Administering sites and remote mirrors Configuring sites for hosts See “Configuring automatic site tagging for a disk group” on page 492. ■ Assign a site name to the disks or enclosures. You can set site tags at the disk level or at the enclosure level. If you specify one or more enclosures, the site tag applies to the disks in that enclosure that are within the disk group. See “Configuring site tagging for disks or enclosures” on page 492. ■ Turn on site consistency for the disk group.
492 Administering sites and remote mirrors Configuring sites for storage Configuring automatic site tagging for a disk group Configure automatic site tagging if you want disks or LUNs to inherit the tag from the enclosure. After you turn on automatic site tagging for a disk group, assign the site names to the enclosures in the disk group. Any disks or LUNs added to that disk group inherit the tag from the enclosure to which they belong.
Administering sites and remote mirrors Changing the site name To tag disks or enclosures with a site name ◆ Assign a site name to one or more disks or enclosures, using the following command: # vxdisk [-g diskgroup] settag site=sitename \ disk disk1...|encl:encl_name encl:encl_name1... where the disks can be specified either by the disk access name or the disk media name.
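For example, to tag all the disks of an enclosure in the disk group mydg as belonging to site1 (the enclosure name enc1 is illustrative):
# vxdisk -g mydg settag site=site1 encl:enc1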
494 Administering sites and remote mirrors Configuring site consistency on a disk group requirement for a disk group, each volume created has the allsites attribute set to on, by default. The allsites attribute indicates that the volume must have at least one plex on each site that is registered to the disk group. For new volumes, the read policy is set to siteread.
Administering sites and remote mirrors Configuring site consistency on a disk group site fails, all its plexes are detached and the site is said to be detached. Turn on this behavior by setting the siteconsistent attribute to on. If the siteconsistent attribute is set to off, only the plex that fails is detached. The remaining volumes and their plexes on that site are not detached. Site consistency is intended for data volumes. The feature is not recommended for the boot disk group.
496 Administering sites and remote mirrors Configuring site consistency on a volume Configuring site consistency on a volume To set the site consistency requirement when creating a volume, specify the siteconsistent attribute to the vxassist make command, for example: # vxassist [-g diskgroup] make volume size \ nmirror=4 siteconsistent={on|off} By default, a volume inherits the value that is set on its disk group.
Administering sites and remote mirrors Site-based allocation of storage to volumes Site-based allocation of storage to volumes The vxassist command can be used to create volumes only from storage that exists at a specified site, as shown in this example: # vxassist -g diskgroup make volume size site:site1 \ [allsites={on|off}] [siteconsistent={on|off}] The storage class site is used in similar way to other storage classes with the vxassist command, such as enclr, ctlr and disk.
498 Administering sites and remote mirrors Site-based allocation of storage to volumes Specifying the number of mirrors ensures that each mirror is created on a different site: # vxassist -g diskgroup make volume size mirror=site \ nmirror=2 site:site1 site:site2 [allsites={on|off}] \ [siteconsistent={on|off}] If a volume is intended to be site consistent, the number of mirrors that are specified must be equal to the number of sites.
Administering sites and remote mirrors Making an existing disk group site consistent Table 14-1 Examples of storage allocation by specifying sites (continued) Command Description # vxassist -g ccdg make vol 2g \ nmirror=2 site:site2 \ siteconsistent=off \ allsites=off Create a mirrored volume that is not site consistent. Both mirrors are allocated from any available storage in the disk group that is tagged as belonging to site2.
500 Administering sites and remote mirrors Fire drill — testing the configuration 3 Tag all the disks in the disk group with the appropriate site name: # vxdisk [-g diskgroup] settag site=sitename disk1 disk2 Or, to tag all the disks in a specified enclosure, use the following command: # vxdisk [-g diskgroup] settag site=sitename encl:encl_name 4 Use the vxdg move command to move any unsupported RAID-5 volumes to another disk group.
Administering sites and remote mirrors Fire drill — testing the configuration Simulating site failure To simulate the failure of a site, use the following command to detach all the devices at a specified site: # vxdg -g diskgroup [-f] detachsite sitename The -f option must be specified if any plexes configured on storage at the site are currently online. After the site is detached, the application should run correctly on the available site. This step verifies that the primary site is fine.
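When the fire drill is complete, the site can be reattached and its plexes recovered, typically along these lines (assuming the vxdg reattachsite operation):
# vxdg -g diskgroup reattachsite sitename
# vxrecover -g diskgroup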
502 Administering sites and remote mirrors Failure scenarios and recovery procedures Failure scenarios and recovery procedures Table 14-2 lists the possible failure scenarios and recovery procedures for the Remote Mirror feature. Table 14-2 Failure scenarios and recovery procedures Failure scenario Recovery procedure Disruption of network link between sites. See “Recovering from a loss of site connectivity” on page 502. Failure of hosts at a site. See “Recovering from host failure” on page 502.
Administering sites and remote mirrors Failure scenarios and recovery procedures Recovering from storage failure If storage fails at a site, the plexes that are configured on that storage are detached locally if a site-consistent volume still has other mirrors available at the site. The hot-relocation feature of VxVM will attempt to recreate the failed plexes on other available storage in the disk group.
504 Administering sites and remote mirrors Failure scenarios and recovery procedures recovered, the plexes are put into the ACTIVE state, and the state of the site is set to ACTIVE. If vxrelocd is not running, vxattachd reattaches a site only when all the disks at that site become accessible. After reattachment succeeds, vxattachd sets the site state to ACTIVE, and initiates recovery of the plexes. When all the plexes have been recovered, the plexes are put into the ACTIVE state.
Chapter 15 Using Storage Expert This chapter includes the following topics: ■ About Storage Expert ■ How Storage Expert works ■ Before using Storage Expert ■ Running Storage Expert ■ Identifying configuration problems using Storage Expert About Storage Expert System administrators often find that gathering and interpreting data about large and complex configurations can be a difficult task. Veritas Storage Expert is designed to help in diagnosing configuration problems with VxVM.
506 Using Storage Expert How Storage Expert works See the vxse(1M) manual page. How Storage Expert works Storage Expert components include a set of rule scripts and a rules engine. The rules engine runs the scripts and produces ASCII output, which is organized and archived by Storage Expert’s report generator. This output contains information about areas of VxVM configuration that do not meet the set criteria.
Using Storage Expert Running Storage Expert check Lists the default values used by the rule’s attributes. info Describes what the rule does. list Lists the attributes of the rule that you can set. run Runs the rule. See “Rule definitions and attributes” on page 515.
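For example, the attribute defaults of a rule can be displayed with the check keyword; the vxse_stripes2 rule is assumed here, and its output appears below:
# vxse_stripes2 check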
508 Using Storage Expert Running Storage Expert too_wide_stripe - (16) columns in a striped volume too_narrow_stripe - (3) columns in a striped volume Storage Expert lists the default value of each of the rule’s attributes. See “Rule definitions and attributes” on page 515. To alter the behavior of rules, you can change the value of their attributes. See “Setting rule attributes” on page 508.
Using Storage Expert Identifying configuration problems using Storage Expert # vxse_drl2 -g mydg run large_mirror_size=30m ■ Create your own defaults file, and specify that file on the command line: # vxse_drl2 -d mydefaultsfile run Lines in this file contain attribute values definitions for a rule in this format: rule_name,attribute=value For example, the following entry defines a value of 20 gigabytes for the attribute large_mirror_size of the rule vxse_drl2: vxse_drl2,large_mirror_size=20g You can s
510 Using Storage Expert Identifying configuration problems using Storage Expert ■ Disk sparing and relocation management ■ Hardware failures ■ Rootability ■ System name See “Rule definitions and attributes” on page 515. Recovery time Several “best practice” rules enable you to check that your storage configuration has the resilience to withstand a disk failure or a system failure.
Using Storage Expert Identifying configuration problems using Storage Expert Checking for RAID-5 volumes without a RAID-5 log (vxse_raid5log1) To check whether a RAID-5 volume has an associated RAID-5 log, run rule vxse_raid5log1. In the event of both a system failure and a failure of a disk in a RAID-5 volume, data that is not involved in an active write could be lost or corrupted if there is no RAID-5 log. See “Adding a RAID-5 log” on page 328.
512 Using Storage Expert Identifying configuration problems using Storage Expert By default, this rule suggests a limit of 250 for the number of disks in a disk group. If one of your disk groups exceeds this figure, you should consider creating a new disk group. The number of objects that can be configured in a disk group is limited by the size of the private region which stores configuration information about every object in the disk group.
Using Storage Expert Identifying configuration problems using Storage Expert Checking for initialized VM disks that are not in a disk group (vxse_disk) To find out whether there are any initialized disks that are not a part of any disk group, run rule vxse_disk. This prints out a list of disks, indicating whether they are part of a disk group or unassociated. See “Adding a disk to a disk group” on page 200.
514 Using Storage Expert Identifying configuration problems using Storage Expert Checking the configuration of large mirrored-stripe volumes (vxse_mirstripe) To check whether large mirror-striped volumes should be reconfigured as striped-mirror volumes, run rule vxse_mirstripe. A large mirrored-striped volume should be reconfigured, using relayout, as a striped-mirror volume to improve redundancy and enhance recovery time after failure. See “Converting between layered and non-layered volumes” on page 347.
Using Storage Expert Identifying configuration problems using Storage Expert Checking the number of spare disks in a disk group (vxse_spares) This “best practice” rule assumes that between 10% and 20% of disks in a disk group should be allocated as spare disks. By default, vxse_spares checks that a disk group falls within these limits. See “About hot-relocation” on page 431. Hardware failures VxVM maintains information about failed disks and disabled controllers.
516 Using Storage Expert Identifying configuration problems using Storage Expert Table 15-1 lists the available rule definitions, and rule attributes and their default values. Table 15-1 Rule definitions in Storage Expert Rule Description vxse_dc_failures Checks and points out failed disks and disabled controllers. vxse_dg1 Checks for disk group configurations in which the disk group has become too large.
Using Storage Expert Identifying configuration problems using Storage Expert Table 15-1 Rule definitions in Storage Expert (continued) Rule Description vxse_raid5log1 Checks for RAID-5 volumes that do not have an associated log. vxse_raid5log2 Checks for recommended minimum and maximum RAID-5 log sizes. vxse_raid5log3 Checks for large RAID-5 volumes that do not have a mirrored RAID-5 log. vxse_redundancy Checks the redundancy of volumes.
518 Using Storage Expert Identifying configuration problems using Storage Expert Table 15-2 Rule attributes and default attribute values Rule Attribute Default value Description vxse_dc_failures - - No user-configurable variables. vxse_dg1 max_disks_per_dg 250 vxse_dg2 - - No user-configurable variables. vxse_dg3 - - No user-configurable variables. vxse_dg4 - - No user-configurable variables. vxse_dg5 - - No user-configurable variables.
Using Storage Expert Identifying configuration problems using Storage Expert Table 15-2 Rule attributes and default attribute values (continued) Rule Attribute Default value vxse_mirstripe large_mirror_size 1g (1GB) Description Large mirror-stripe threshold size. Warn if a mirror-stripe volume is larger than this. nsd_threshold 8 Large mirror-stripe number of subdisks threshold. Warn if a mirror-stripe volume has more subdisks than this. too_narrow_raid5 4 Minimum number of RAID-5 columns.
520 Using Storage Expert Identifying configuration problems using Storage Expert Table 15-2 Rule attributes and default attribute values (continued) Rule Attribute vxse_redundancy volume_redundancy 0 vxse_rootmir - vxse_spares max_disk_spare_ratio 20 Maximum percentage of spare disks in a disk group. Warn if the percentage of spare disks is greater than this. min_disk_spare_ratio 10 Minimum percentage of spare disks in a disk group. Warn if the percentage of spare disks is less than this.
Chapter 16 Performance monitoring and tuning This chapter includes the following topics: ■ Performance guidelines ■ RAID-5 ■ Performance monitoring ■ Tuning VxVM Performance guidelines Veritas Volume Manager (VxVM) can improve system performance by optimizing the layout of data storage on the available hardware. VxVM lets you optimize data storage performance using the following strategies: ■ Balance the I/O load among the available disk drives.
522 Performance monitoring and tuning Performance guidelines VxVM can split volumes across multiple drives. This approach gives you a finer level of granularity when you locate data. After you measure access patterns, you can adjust your decisions on where to place file systems. You can reconfigure volumes online without adversely impacting their availability. Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel.
Performance monitoring and tuning RAID-5 Combining mirroring and striping When you have multiple I/O streams, you can use mirroring and striping together to significantly improve performance. Because parallel I/O streams can operate concurrently on separate devices, striping provides better throughput. When I/O fits exactly across all stripe units in one stripe, serial access is optimized.
524 Performance monitoring and tuning Performance monitoring Figure 16-2 shows an example in which the read policy of the mirrored-stripe volume labeled Hot Vol is set to prefer for the striped plex PL1.
Performance monitoring and tuning Performance monitoring Best performance is usually achieved by striping and mirroring all volumes across a reasonable number of disks and mirroring between controllers, when possible. This procedure tends to even out the load between all disks, but it can make VxVM more difficult to administer. For large numbers of disks (hundreds or thousands), set up disk groups containing 10 disks, where each group is used to create a striped-mirror volume.
526 Performance monitoring and tuning Performance monitoring ■ average operation time (which reflects the total time through the VxVM interface and is not suitable for comparison against other statistics programs) These statistics are recorded for logical I/O including reads, writes, atomic copies, verified reads, verified writes, plex reads, and plex writes for each volume.
Performance monitoring and tuning Performance monitoring 527 due to volumes being created, and also removes statistics from boot time (which are not usually of interest). After resetting the counters, allow the system to run during typical system activity. Run the application or workload of interest on the system to measure its effect. When monitoring a system that is used for multiple purposes, try not to exercise any one application more than usual.
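The counters are cleared with vxstat's -r option; for example, for all objects in the disk group mydg:
# vxstat -g mydg -r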
528 Performance monitoring and tuning Performance monitoring
pl archive-01 archive    ENABLED ACTIVE 20480 CONCAT
sd mydg03-03  archive-01 mydg03  0      40960 0      c1t2d0
The subdisks line (beginning sd) indicates that the volume archive is on disk mydg03. To move the volume off mydg03, use the following command. Note: The ! character is a special character in some shells. This example shows how to escape it in a bash shell.
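A sketch of the move, assuming the standard vxassist move syntax (dest_disk is a placeholder for an optional destination disk; the escaped ! excludes mydg03 as a source):
# vxassist -g mydg move archive \!mydg03 dest_disk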
Performance monitoring and tuning Performance monitoring If some disks appear to be excessively busy (or have particularly long read or write times), you may want to reconfigure some volumes. If there are two relatively busy volumes on a disk, move them closer together to reduce seek times on the disk. If there are too many relatively busy volumes on one disk, move them to a disk that is less busy.
530 Performance monitoring and tuning Tuning VxVM Tuning VxVM This section describes how to adjust the tunable parameters that control the system resources that are used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance. General tuning guidelines VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Performance monitoring and tuning Tuning VxVM Number of configuration copies for a disk group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance. As a general rule, reducing the number of configuration copies in a disk group speeds up initial access of the disk group, initial startup of the vxconfigd daemon, and transactions that are performed within the disk group.
532 Performance monitoring and tuning Tuning VxVM Tunable parameters for VxVM Table 16-1 lists the kernel tunable parameters for VxVM. Table 16-1 Kernel tunable parameters for VxVM Parameter Description vol_checkpt_default The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. A system failure during such operations does not require a full recovery, but can continue from the last reached checkpoint.
Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed.
534 Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_maxio The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit.
Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description vol_maxspecialio The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request a large I/O request be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously.
536 Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description voldrl_max_drtregs The maximum number of dirty regions that can exist on the system for non-sequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worst-case recovery time for the system following a failure. The default value is 2048.
Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description voliomem_maxpool_sz The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to this size, one for RAID-5 and one for mirrored volumes.
538 Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description voliot_iobuf_limit The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool.
Performance monitoring and tuning Tuning VxVM Table 16-1 Kernel tunable parameters for VxVM (continued) Parameter Description volpagemod_max_memsz The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata. The default value is 6144KB (6MB). The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.
540 Performance monitoring and tuning Tuning VxVM DMP tunable parameters Table 16-2 shows the DMP parameters that can be tuned by using the vxdmpadm settune command. Table 16-2 DMP parameters that are tunable Parameter Description dmp_cache_open If this parameter is set to on, the first open of a device that is performed by an array support library (ASL) is cached. This caching enhances the performance of device discovery by minimizing the overhead that is caused by subsequent opens by ASLs.
Performance monitoring and tuning Tuning VxVM Table 16-2 DMP parameters that are tunable (continued) Parameter Description dmp_failed_io_threshold The time limit that DMP waits for a failed I/O request to return before the device is marked as INSANE, I/O is avoided on the path, and any remaining failed I/O requests are returned to the application layer without performing any error analysis. The default value is 57600 seconds (16 hours). See “Configuring the response to I/O failures” on page 185.
542 Performance monitoring and tuning Tuning VxVM Table 16-2 DMP parameters that are tunable (continued) Parameter Description dmp_log_level The level of detail that is displayed for DMP console messages. The following level values are defined: 1 — Displays all DMP log messages that existed in releases before 5.0. 2 — Displays level 1 messages plus messages that relate to path or disk addition or removal, SCSI errors, IO errors and DMP node migration.
Performance monitoring and tuning Tuning VxVM Table 16-2 DMP parameters that are tunable (continued) Parameter Description dmp_monitor_fabric Whether the Event Source daemon (vxesd) uses the Storage Networking Industry Association (SNIA) HBA API. This API allows DDL to improve the performance of failover by collecting information about the SAN topology and by monitoring fabric events. If this parameter is set to on, DDL uses the SNIA HBA API.
544 Performance monitoring and tuning Tuning VxVM Table 16-2 DMP parameters that are tunable (continued) Parameter Description dmp_probe_idle_lun If DMP statistics gathering is enabled, set this tunable to on (default) to have the DMP path restoration thread probe idle LUNs. Set this tunable to off to turn off this feature. (Idle LUNs are VM disks on which no I/O requests are scheduled.) The value of this tunable is only interpreted when DMP statistics gathering is enabled.
Performance monitoring and tuning Tuning VxVM Table 16-2 DMP parameters that are tunable (continued) Parameter Description dmp_restore_policy The DMP restore policy, which can be set to one of the following values: ■ check_all ■ check_alternate ■ check_disabled ■ check_periodic The default value is check_disabled. The value of this tunable can also be set using the vxdmpadm start restore command. See “Configuring DMP path restoration policies” on page 189.
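As a usage sketch, a DMP tunable such as dmp_cache_open (listed above) can be examined and set as follows:
# vxdmpadm gettune dmp_cache_open
# vxdmpadm settune dmp_cache_open=on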
546 Performance monitoring and tuning Tuning VxVM
Appendix A Using Veritas Volume Manager commands This appendix includes the following topics: ■ About Veritas Volume Manager commands ■ Online manual pages About Veritas Volume Manager commands Most Veritas Volume Manager (VxVM) commands (excepting daemons, library commands and supporting scripts) are linked to the /usr/sbin directory from the /opt/VRTS/bin directory.
548 Using Veritas Volume Manager commands About Veritas Volume Manager commands Note: If you have not installed database software, you can omit /opt/VRTSdbed/bin, /opt/VRTSdb2ed/bin and /opt/VRTSsybed/bin. Similarly, /opt/VRTSvxfs/bin is only required to access some VxFS commands. VxVM library commands and supporting scripts are located under the /usr/lib/vxvm directory hierarchy. You can include these directories in your path if you need to use them on a regular basis.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-1 Obtaining information about objects in VxVM (continued) Command Description vxdisk [-g diskgroup] list [diskname] Lists disks under control of VxVM. See “Displaying disk information” on page 132. Example: # vxdisk -g mydg list vxdg list [diskgroup] Lists information about disk groups. See “Displaying disk group information” on page 198.
550 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-1 Obtaining information about objects in VxVM (continued) Command Description vxprint -hrt [-g diskgroup] [object ...] Prints single-line information about objects in VxVM. See “Displaying volume information” on page 306. Example: # vxprint -g mydg myvol1 \ myvol2 vxprint -st [-g diskgroup] [subdisk ...] Displays information about subdisks. See “Displaying subdisk information” on page 250.
Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-2 Administering disks (continued) Command Description vxedit [-g diskgroup] rename \ olddisk newdisk Renames a disk under control of VxVM. See “Renaming a disk” on page 131. Example: # vxedit -g mydg rename \ mydg03 mydg02 vxedit [-g diskgroup] set \ reserve=on|off diskname Sets aside/does not set aside a disk from use in a disk group. See “Reserving disks” on page 131.
552 Using Veritas Volume Manager commands About Veritas Volume Manager commands Table A-2 Administering disks (continued) Command Description vxedit [-g diskgroup] set \ spare=on|off diskname Adds/removes a disk from the pool of hot-relocation spares. See “Marking a disk as a hot-relocation spare” on page 439. See “Removing a disk from use as a hot-relocation spare” on page 440.
Table A-3 Creating and administering disk groups

vxdg [-s] init diskgroup [diskname=]devicename
Creates a disk group using a pre-initialized disk.
See “Creating a disk group” on page 200.
See “Creating a shared disk group” on page 479.
Example:
# vxdg init mydg mydg01=c0t1d0

vxdg -g diskgroup listssbinfo
Reports conflicting configuration information.
Table A-3 Creating and administering disk groups (continued)

vxdg [-n newname] -s import diskgroup
Imports a disk group as shared by a cluster, and optionally renames it.
See “Importing disk groups as shared” on page 479.
Example:
# vxdg -n newsdg -s import mysdg

vxdg [-o expand] listmove sourcedg targetdg object ...
Lists the objects potentially affected by moving a disk group.
Table A-3 Creating and administering disk groups (continued)

vxdg join sourcedg targetdg
Joins two disk groups.
See “Joining disk groups” on page 238.
Example:
# vxdg join newdg mydg

vxdg -g diskgroup set activation=ew|ro|sr|sw|off
Sets the activation mode of a shared disk group in a cluster.
See “Changing the activation mode on a shared disk group” on page 481.
Table A-4 Creating and administering subdisks

vxmake [-g diskgroup] sd subdisk diskname,offset,length
Creates a subdisk.
See Creating subdisks.
Example:
# vxmake -g mydg sd mydg02-01 mydg02,0,8000

vxsd [-g diskgroup] assoc plex subdisk...
Associates subdisks with an existing plex.
See “Associating subdisks with plexes” on page 253.
Table A-4 Creating and administering subdisks (continued)

vxsd [-g diskgroup] -s size split subdisk sd1 sd2
Splits a subdisk in two.
See “Splitting subdisks” on page 252.
Example:
# vxsd -g mydg -s 1000m split mydg03-02 mydg03-02 mydg03-03

vxsd [-g diskgroup] join sd1 sd2 ... subdisk
Joins two or more subdisks.
See “Joining subdisks” on page 252.
Table A-4 Creating and administering subdisks (continued)

vxsd [-g diskgroup] dis subdisk
Dissociates a subdisk from a plex.
See “Dissociating subdisks from plexes” on page 256.
Example:
# vxsd -g mydg dis mydg02-01

vxedit [-g diskgroup] rm subdisk
Removes a subdisk.
See “Removing subdisks” on page 256.
Table A-5 Creating and administering plexes (continued)

vxmake [-g diskgroup] plex plex layout=stripe|raid5 stwidth=W ncolumn=N sd=subdisk1[,subdisk2,...]
Creates a striped or RAID-5 plex.
See “Creating a striped plex” on page 260.
Table A-5 Creating and administering plexes (continued)

vxplex [-g diskgroup] mv oldplex newplex
Replaces a plex.
See “Moving plexes” on page 269.
Example:
# vxplex -g mydg mv vol02-02 vol02-03

vxplex [-g diskgroup] cp volume newplex
Copies a volume onto a plex.
See “Copying volumes to plexes” on page 270.
Table A-6 Creating volumes

vxassist [-g diskgroup] maxsize layout=layout [attributes]
Displays the maximum size of volume that can be created.
See “Discovering the maximum size of a volume” on page 282.
Example:
# vxassist -g mydg maxsize layout=raid5 nlog=2

vxassist -b [-g diskgroup] make volume length [layout=layout] [attributes]
Creates a volume.
See “Creating a volume on any disk” on page 283.
Table A-6 Creating volumes (continued)

vxassist -b [-g diskgroup] make volume length layout={stripe|raid5} [stripeunit=W] [ncol=N] [attributes]
Creates a striped or RAID-5 volume.
See “Creating a striped volume” on page 294.
See “Creating a RAID-5 volume” on page 297.
Table A-6 Creating volumes (continued)

vxvol [-g diskgroup] init zero volume
Initializes and zeros out a volume for use.
See “Initializing and starting a volume” on page 303.
Example:
# vxvol -g mydg init zero myvol

Table A-7 Administering volumes

vxassist [-g diskgroup] mirror volume [attributes]
Adds a mirror to a volume.
See “Adding a mirror to a volume” on page 315.
Table A-7 Administering volumes (continued)

vxassist [-g diskgroup] {growto|growby} volume length
Grows a volume to a specified size or by a specified amount.
See “Resizing volumes with vxassist” on page 331.
Example:
# vxassist -g mydg growby myvol 10g

vxassist [-g diskgroup] {shrinkto|shrinkby} volume length
Shrinks a volume to a specified size or by a specified amount.
Table A-7 Administering volumes (continued)

vxsnap [-g diskgroup] make source=volume/newvol=snapvol[/nmirror=number]
Takes a full-sized instant snapshot of a volume by breaking off plexes of the original volume.
See “Creating instant snapshots” on page 364.
Table A-7 Administering volumes (continued)

vxsnap [-g diskgroup] make source=volume/newvol=snapvol/cache=cache_object
Takes a space-optimized instant snapshot of a volume.
See “Creating instant snapshots” on page 364.
Example:
# vxsnap -g mydg make source=myvol/newvol=mysosvol/cache=cobj

vxsnap [-g diskgroup] refresh snapshot
Refreshes a snapshot from its original volume.
Table A-7 Administering volumes (continued)

vxassist [-g diskgroup] relayout volume [layout=layout] [relayout_options]
Performs online relayout of a volume.
See “Performing online relayout” on page 340.
Table A-7 Administering volumes (continued)

vxassist [-g diskgroup] remove volume volume
Removes a volume.
See “Removing a volume” on page 336.
Example:
# vxassist -g mydg remove volume myvol

Table A-8 Monitoring and controlling tasks

command [-g diskgroup] -t tasktag [options] [arguments]
Specifies a task tag to a VxVM command.
See “Specifying task tags” on page 310.
Table A-8 Monitoring and controlling tasks (continued)

vxtask pause task
Suspends operation of a task.
See “Using the vxtask command” on page 312.
Example:
# vxtask pause mytask

vxtask -p [-g diskgroup] list
Lists all paused tasks.
See “Using the vxtask command” on page 312.
Example:
# vxtask -p -g mydg list

vxtask resume task
Resumes a paused task.
See “Using the vxtask command” on page 312.
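As a combined sketch of task tagging and control (the vxrecover invocation and object names are illustrative; any VxVM command that accepts -t can be substituted):

# vxrecover -g mydg -t mytask -b mydg05
# vxtask pause mytask
# vxtask -p -g mydg list
# vxtask resume mytask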
7 Device driver interfaces.

Section 1M — administrative commands
Table A-9 lists the manual pages in section 1M for commands that are used to administer Veritas Volume Manager.

Table A-9 Section 1M manual pages

dgcfgbackup
Create or update VxVM volume group configuration backup file.

dgcfgdaemon
Start the VxVM configuration backup daemon.

dgcfgrestore
Display or restore VxVM disk group configuration from backup.
Table A-9 Section 1M manual pages (continued)

vxcmdlog
Administer command logging.

vxconfigbackup
Back up disk group configuration.

vxconfigbackupd
Disk group configuration backup daemon.

vxconfigd
Veritas Volume Manager configuration daemon.

vxconfigrestore
Restore disk group configuration.

vxcp_lvmroot
Copy LVM root disk onto new Veritas Volume Manager root disk.
Table A-9 Section 1M manual pages (continued)

vxdmpinq
Display SCSI inquiry data.

vxedit
Create, remove, and modify Veritas Volume Manager records.

vxevac
Evacuate all volumes from a disk.

vximportdg
Import a disk group into the Veritas Volume Manager configuration.

vxinfo
Print accessibility and usability of volumes.

vxinstall
Menu-driven Veritas Volume Manager initial configuration.
Table A-9 Section 1M manual pages (continued)

vxreattach
Reattach disk drives that have become accessible again.

vxrecover
Perform volume recovery operations.

vxrelayout
Convert online storage from one layout to another.

vxrelocd
Monitor Veritas Volume Manager for failure events and relocate failed subdisks.

vxres_lvmroot
Restore LVM root disk from Veritas Volume Manager root disk.
Table A-9 Section 1M manual pages (continued)

vxunreloc
Move a hot-relocated subdisk back to its original disk.

vxusertemplate
Create and administer ISP user templates.

vxvmboot
Prepare Veritas Volume Manager volume as a root, boot, primary swap or dump volume.

vxvmconvert
Convert LVM volume groups to VxVM disk groups.

vxvol
Perform Veritas Volume Manager operations on volumes.
Table A-11 Section 7 manual pages

vxconfig
Configuration device.

vxdmp
Dynamic multipathing device.

vxinfo
General information device.

vxio
Virtual disk device.

vxiod
I/O daemon process control device.

vxtrace
I/O tracing device.
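Once MANPATH includes the VxVM manual page directory, these pages can be read with the man command; for example (a sketch, assuming the pages are installed under /opt/VRTS/man):

# man 1m vxassist
# man 7 vxio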
Appendix B
Migrating arrays

This appendix includes the following topics:
■ Migrating to thin provisioning

Migrating to thin provisioning
The SmartMove™ feature enables migration from traditional LUNs to thinly provisioned LUNs, removing unused space in the process.

To migrate to thin provisioning
1 Turn on the SmartMove feature. Edit the /etc/default/vxsf file so that usefssmartmove is set to all.
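For example, the relevant line in /etc/default/vxsf reads as follows (a sketch; create the file if it does not already exist):

usefssmartmove=all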
Note: The VxFS file system must be mounted to get the benefits of the SmartMove feature.

The following methods are available to add the LUNs:
■ Use the default settings:
# vxassist -g datadg mirror datavol da_name
■ Use the options for fast completion. The following command has more I/O impact.
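The fast-completion command itself falls on a page boundary that is not reproduced here. As a hedged sketch of what such an invocation could look like, assuming the -b (background), -o iosize, and -t (task tag) options of vxassist, with illustrative names:

# vxassist -b -g datadg -o iosize=1m -t thinmig mirror datavol da_name
# vxtask monitor thinmig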
The above output indicates that the thin LUNs correspond to plex datavol-02.
■ Direct all reads to come from those LUNs:
# vxvol -g datadg rdpol prefer datavol datavol-02

6 Remove the original non-thin LUNs.

Note: The ! character is a special character in some shells. This example shows how to escape it in a bash shell.
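The removal command is also lost to a page boundary. A minimal sketch, assuming the original non-thin LUN is da_name and using the bash escape described in the note above (the ! prefix excludes that disk, so the plex on it is removed):

# vxassist -g datadg remove mirror datavol \!da_name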
Appendix C
Configuring Veritas Volume Manager

This appendix includes the following topics:
■ Setup tasks after installation
■ Unsupported disk arrays
■ Foreign devices
■ Initialization of disks and creation of disk groups
■ Guidelines for configuring storage
■ VxVM’s view of multipathed devices
■ Cluster support

Setup tasks after installation
A number of setup tasks can be performed after installing the Veritas Volume Manager (VxVM) software.
■ Place the root disk under VxVM control, and mirror it to create an alternate boot disk.
■ Designate hot-relocation spare disks in each disk group.
■ Add mirrors to volumes.
■ Configure DRL and FastResync on volumes.

The following tasks are part of ongoing maintenance:
■ Resize volumes and file systems.
■ Add more disks, create new disk groups, and create new volumes.
■ Create and maintain snapshots.
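As a brief sketch of two of these maintenance tasks (volume and disk group names are illustrative; vxresize also grows a mounted VxFS file system along with the volume):

# vxassist -g mydg mirror datavol
# vxresize -g mydg datavol 20g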
Guidelines for configuring storage
A disk failure can cause loss of data on the failed disk and loss of access to your system. Loss of access is due to the failure of a key disk used for system operations. Veritas Volume Manager can protect your system from these problems.

To maintain system availability, data important to running and booting your system must be mirrored. The data must be preserved so it can be used in case of failure.
Mirroring guidelines
Refer to the following guidelines when using mirroring.
■ Do not place subdisks from different plexes of a mirrored volume on the same physical disk. This action compromises the availability benefits of mirroring and degrades performance. Using the vxassist or vxdiskadm commands precludes this from happening.
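For example, specifying separate disks when creating a mirrored volume keeps the plexes on distinct spindles (a sketch with illustrative names):

# vxassist -g mydg make mirvol 5g layout=mirror nmirror=2 mydg01 mydg02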
Striping guidelines
Refer to the following guidelines when using striping.
■ Do not place more than one column of a striped plex on the same physical disk.
■ Calculate stripe-unit sizes carefully. In general, a moderate stripe-unit size (for example, 64 kilobytes, which is also the default used by vxassist) is recommended.
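A sketch of creating a striped volume with the default 64-kilobyte stripe unit made explicit (disk names are illustrative):

# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k mydg01 mydg02 mydg03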
RAID-5 guidelines
Refer to the following guidelines when using RAID-5.
In general, the guidelines for mirroring and striping together also apply to RAID-5. The following guidelines should also be observed with RAID-5:
■ Only one RAID-5 plex can exist per RAID-5 volume (but there can be multiple log plexes).
■ The RAID-5 plex must be derived from at least three subdisks on three or more physical disks.
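For instance, the following sketch creates a four-column RAID-5 volume with one log plex, which satisfies both guidelines (names are illustrative):

# vxassist -g mydg make r5vol 10g layout=raid5 ncol=4 nlog=1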
free space is used for relocation purposes, it is possible to have performance degradation after the relocation.
■ After hot-relocation occurs, designate one or more additional disks as spares to augment the spare space. Some of the original spare space may be occupied by relocated subdisks.
Creating a volume in a disk group sets up block and character (raw) device files that can be used to access the volume:

/dev/vx/dsk/dg/vol    block device file for volume vol in disk group dg
/dev/vx/rdsk/dg/vol   character device file for volume vol in disk group dg

The pathnames include a directory named for the disk group.
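As a sketch of typical HP-UX usage of these device files (names are illustrative): newfs writes to the character device, and mount uses the block device:

# newfs -F vxfs /dev/vx/rdsk/mydg/datavol
# mount -F vxfs /dev/vx/dsk/mydg/datavol /mnt/data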
■ On one node, run the vxdiskadm program and choose option 1 to initialize new disks. When asked to add these disks to a disk group, choose none to leave the disks for future use.
■ On other nodes in the cluster, run vxdctl enable to see the newly initialized disks.
■ From the master node, create disk groups on the shared disks. To determine if a node is a master or slave, run the command vxdctl -c mode.
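A minimal sketch of the master-node steps (device and disk group names are illustrative; the -s flag creates the disk group as shared):

# vxdctl -c mode
# vxdg -s init sharedg sharedg01=c1t2d0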
Converting existing VxVM disk groups to shared disk groups

To convert existing disk groups to shared disk groups
1 Start the cluster on one node only to prevent access by other nodes.
2 Configure the disk groups using the following procedure.
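The detailed steps continue on a page that is not reproduced here. As a hedged sketch of the usual pattern, the disk group is deported and then re-imported as shared from the master node:

# vxdg deport mydg
# vxdg -s import mydg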
Glossary

Active/Active disk arrays
This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation.

Active/Passive disk arrays
This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
cluster-shareable disk group
A disk group in which access to the disks is shared by multiple hosts (also referred to as a shared disk group).

column
A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.

concatenation
A layout style characterized by subdisks that are arranged sequentially and contiguously.

configuration copy
A single copy of a configuration database.
disk access records
Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.

disk array
A collection of disks logically arranged into an object. Arrays tend to provide benefits such as redundancy or improved performance.
enabled path
A path to a disk that is available for I/O.

encapsulation
A process that converts existing partitions on a specified disk to volumes. Encapsulation is not supported on the HP-UX platform.

enclosure
See disk enclosure.

enclosure-based naming
See device name.

fabric mode disk
A disk device that is accessible on a Storage Area Network (SAN) via a Fibre Channel switch.
mirror
A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror consists of one plex of the volume with which the mirror is associated.

mirroring
A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts.
Persistent FastResync
A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO volume on disk.

persistent state logging
A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging.

physical disk
The underlying storage device, which may or may not be under VxVM control.
rootability
The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.

secondary path
In Active/Passive disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.
stripe unit size
The size of each stripe unit. The default stripe unit size is 64KB. The stripe unit size is sometimes also referred to as the stripe width.

striping
A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex.

subdisk
A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.
Index Symbols /dev/vx/dmp directory 139 /dev/vx/rdmp directory 139 /etc/default/vxassist file 280, 442 /etc/default/vxdg defaults file 457 /etc/default/vxdg file 200 /etc/default/vxdisk file 81, 107 /etc/default/vxse file 509 /etc/fstab file 337 /etc/volboot file 246 /etc/vx/darecs file 246 /etc/vx/dmppolicy.info file 175 /etc/vx/volboot file 217 /sbin/init.
600 Index attributes (continued) name 257, 271 ncachemirror 371 ndcomirror 291, 293, 407 ndcomirs 319, 367 newvol 377 nmirror 377 nomanual 172 nopreferred 172 plex 271 preferred priority 172 primary 173 putil 257, 271 secondary 173 sequential DRL 293 setting for paths 172, 174 setting for rules 508 snapvol 373, 379 source 373, 379 standby 173 subdisk 257 syncing 365, 391 tutil 257, 271 auto disk type 80 autogrow tuning 393 autogrow attribute 369, 371 autogrowby attribute 369 autotrespass mode 138 B backu
Index cluster functionality (continued) shared disks 588 cluster protocol version number 472 cluster-shareable disk groups in clusters 455 clusters activating disk groups 457 activating shared disk groups 481 activation modes for shared disk groups 456 benefits 451 checking cluster protocol version 484 cluster protocol version number 472 cluster-shareable disk groups 455 configuration 463 configuring exclusive open of volume by node 483 connectivity policies 458 converting shared disk groups to private 480
602 Index controllers (continued) specifying to vxassist 283 upgrading firmware 183 converting disks 98 copy-on-write used by instant snapshots 355 copymaps 68 Cross-platform Data Sharing (CDS) alignment constraints 282 disk format 80 customized naming DMP nodes 152 CVM cluster functionality of VxVM 451 D d# 78 data change object DCO 68 data redundancy 44–45, 48 data volume configuration 62 database replay logs and sequential DRL 61 databases resilvering 61 resynchronizing 61 DCO adding to RAID-5 volumes
Index DISABLED plex kernel state 265 volume kernel state 309 disabled paths 152 disk access records stored in /etc/vx/darecs 246 disk arrays A/A 138 A/A-A 138 A/P 138 A/P-C 139 A/PF 138 A/PF-C 139 A/PG 139 A/PG-C 139 Active/Active 138 Active/Passive 138 adding array support library package 84 adding disks to DISKS category 93 Asymmetric Active/Active 138 defined 24 excluding support for 91 JBOD devices 83 listing excluded 92 listing supported 91 listing supported disks in DISKS category 92 multipathed 25 r
604 Index disk groups (continued) nodg 196 number of spare disks 515 private in clusters 455 recovering destroyed 241 recovery from failed reconfiguration 231 removing disks from 201 renaming 213 reorganizing 227 reserving minor numbers 219 restarting moved volumes 236, 238–239 root 32 rootdg 32, 195 serial split brain condition 222 setting connectivity policies in clusters 482 setting default disk group 197 setting failure policies in clusters 482 setting number of configuration copies 531 shared in clus
Index disks (continued) naming schemes 77 nopriv 80 obtaining performance statistics 527 OTHER_DISKS category 84 partial failure messages 435 postponing replacement 124 primary path 152 putting under control of VxVM 97 reinitializing 111 releasing from disk groups 240 removing 121, 124 removing from disk groups 201 removing from DISKS category 95 removing from pool of hot-relocation spares 440 removing from VxVM control 123, 201 removing tags from 208 removing with subdisks 122–123 renaming 131 replacing 1
606 Index DMP (continued) path failover mechanism 141 path-switch tunable 543 renaming an enclosure 184 restore policy 189 scheduling I/O on secondary paths 179 setting the DMP restore polling interval 189 stopping the DMP restore daemon 191 vxdmpadm 153 DMP nodes displaying consolidated information 155 setting names 152 DMP support JBOD devices 83 dmp_cache_open tunable 540 dmp_daemon_count tunable 540 dmp_delayq_interval tunable 540 dmp_failed_io_threshold tunable 541 dmp_fast_recovery tunable 541 dmp_h
Index error messages (continued) tmpsize too small to perform this relayout 55 Volume has different organization in each mirror 331 vxdg listmove failed 232 errord daemon 141 exclusive-write mode 457 exclusivewrite mode 456 explicit failover mode 138 extended attributes devices 163 F fabric devices 82 FAILFAST flag 141 failover 452–453 failover mode 138 failure handled by hot-relocation 433 failure in RAID-5 handled by hot-relocation 433 failure policies 461 setting for disk groups 482 FastResync checking
608 Index hot-relocation (continued) Storage Expert rules 514 subdisk relocation 438 subdisk relocation messages 443 unrelocating subdisks 443 unrelocating subdisks using vxassist 445 unrelocating subdisks using vxdiskadm 444 unrelocating subdisks using vxunreloc 445 use of free space in disk groups 437 use of spare disks 437 use of spare disks and free space 437 using only spare disks for 442 vxrelocd 432 HP disk format 80 hpdisk format 80 I I/O gathering statistics for DMP 166 kernel threads 22 schedul
Index left-symmetric layout 50 len subdisk attribute 257 LIF area 113 LIF LABEL record 113 link objects 358 linked break-off snapshots 358 creating 378 linked third-mirror snapshots reattaching 386 listing DMP nodes 155 supported disk arrays 91 load balancing 138 across nodes in a cluster 453 displaying policy for 174 specifying policy for 175 lock clearing on disks 217 LOG plex state 262 log subdisks 584 associating with plexes 255 DRL 61 logdisk 292, 298–299 logical units 138 loglen attribute 294 logs ad
610 Index mirrored volumes (continued) removing sequential DRL logs 328 snapshots 66 mirrored-concatenated volumes converting to concatenated-mirror 347 creating 289 defined 45 mirrored-stripe volumes benefits of 45 checking configuration 514 converting to striped-mirror 347 creating 295 defined 275 performance 523 mirroring defined 44 guidelines 584 mirroring controllers 584 mirroring plus striping 46 mirrors adding to volumes 315 boot disk 114 creating of VxVM root disk 115 creating snapshot 398 defined
Index online backups implementing 421 online invalid status 133 online relayout changing number of columns 344 changing region size 346 changing speed of 346 changing stripe unit size 344 combining with conversion 347 controlling progress of 346 defined 54 destination layouts 340 failure recovery 58 how it works 54 limitations 57 monitoring tasks for 346 pausing 346 performing 340 resuming 346 reversing direction of 346 specifying non-default 344 specifying plexes 345 specifying task tags for 345 temporary
612 Index physical disks (continued) moving between disk groups 216, 235 moving disk groups between systems 216 moving volumes from 337 partial failure messages 435 postponing replacement 124 releasing from disk groups 240 removing 121, 124 removing from disk groups 201 removing from pool of hot-relocation spares 440 removing with subdisks 122–123 replacing 124 replacing removed 127 reserving for special purposes 131 spare 437 taking offline 130 unreserving 132 physical objects 23 ping-pong effect 147 ple
Index preferred plex read policy 335 preferred priority path attribute 172 primary path 138, 152 primary path attribute 173 priority load balancing 178 private disk groups converting from shared 480 in clusters 455 private network in clusters 454 private region checking size of configuration database 511 configuration database 79 defined 79 effect of large disk groups on 195 public region 79 putil plex attribute 271 subdisk attribute 257 Q queued I/Os displaying statistics 169 R RAID-0 40 RAID-0+1 45 RAI
614 Index relayout (continued) changing region size 346 changing speed of 346 changing stripe unit size 344 combining with conversion 347 controlling progress of 346 limitations 57 monitoring tasks for 346 online 54 pausing 346 performing online 340 resuming 346 reversing direction of 346 specifying non-default 344 specifying plexes 345 specifying task tags for 345 storage 54 transformation characteristics 58 types of transformation 341 viewing status of 345 relocation automatic 431 complete failure messa
Index rules (continued) checking plex and volume states 513 checking RAID-5 log size 511 checking rootability 515 checking stripe unit size 514 checking system name 515 checking volume redundancy 513 definitions 516–517 finding information about 507 for checking hardware 515 for checking rootability 515 for checking system name 515 for disk groups 511 for disk sparing 514 for recovery time 510 for striped mirror volumes 513 listing attributes 507 result types 508 running 508 setting values of attributes 50
616 Index snapclear creating independent volumes 403 SNAPDIS plex state 263 SNAPDONE plex state 263 snapmir snapshot type 391 snapshot hierarchies creating 384 splitting 389 snapshot mirrors adding to volumes 383 removing from volumes 384 snapshots adding mirrors to volumes 383 adding plexes to 403 and FastResync 66 backing up multiple volumes 380, 401 backing up volumes online using 364 cascaded 359 comparison of features 64 converting plexes to 400 creating a hierarchy of 384 creating backups using thir
Index Storage Expert (continued) introduced 505 list keyword 507 listing rule attributes 507 obtaining a description of a rule 507 requirements 506 rule attributes 520 rule definitions 516–517 rule result types 508 rules 506 rules engine 506 run keyword 508 running a rule 508 setting values of rule attributes 508 vxse 505 storage failures 503 storage processor 138 storage relayout 54 stripe columns 41 stripe unit size recommendations 585 stripe units changing size 344 checking size 514 defined 41 stripe-mi
618 Index synchronization (continued) improving performance of 392 syncing attribute 365, 391 syncpause 392 syncresume 392 syncstart 392 syncstop 392 syncwait 392 system names checking 515 T t# 78 tags for tasks 310 listing for disks 207 removing from disks 208 removing from volumes 334 renaming 334 setting on disks 206 setting on volumes 299, 334 specifying for online relayout tasks 345 specifying for tasks 310 target IDs specifying to vxassist 283 target mirroring 285, 296 targets listing 88 task monit
Index tunables (continued) voliot_iobuf_max 538 voliot_max_open 538 volpagemod_max_memsz 539 volraid_minpool_size 539 volraid_rsrtransmax 539 tutil plex attribute 271 subdisk attribute 257 U UDID flag 205 udid_mismatch flag 205 units of size 250 use_all_paths attribute 179 use_avid vxddladm option 100 user-specified device names 152 usesfsmartmove parameter 281 V V-5-1-2536 331 V-5-1-2829 242 V-5-1-552 201 V-5-1-569 475 V-5-1-587 217 V-5-2-3091 232 V-5-2-369 202 V-5-2-4292 232 version 0 of DCOs 68 versio
620 Index volume sets (continued) enabling access to raw device nodes 416 listing details of 413 raw device nodes 415 removing volumes from 415 starting 414 stopping 414 volume states ACTIVE 308 CLEAN 308 EMPTY 308 INVALID 308 NEEDSYNC 308 REPLAY 308 SYNC 309 volumes accessing device files 304, 587 adding DRL logs 326 adding logs and maps to 317 adding mirrors 315 adding RAID-5 logs 328 adding sequential DRL logs 326 adding snapshot mirrors to 383 adding subdisks to plexes of 254 adding to volume sets 413
Index volumes (continued) layered 46, 52, 276 limit on number of plexes 35 limitations 35 making immediately available for use 303 maximum number of 533 maximum number of data plexes 524 merging snapshots 402 mirrored 44, 275 mirrored-concatenated 45 mirrored-stripe 45, 275 mirroring across controllers 287, 296 mirroring across targets 285, 296 mirroring all 315 mirroring on disks 315 mirroring VxVM-rootable 114 moving from VM disks 337 moving to improve performance 527 names 35 naming snap 363 obtaining p
622 Index vxassist (continued) configuring site consistency on volumes 496 converting between layered and non-layered volumes 347 creating cache volumes 368 creating concatenated-mirror volumes 290 creating mirrored volumes 289 creating mirrored-concatenated volumes 289 creating mirrored-stripe volumes 295 creating RAID-5 volumes 298 creating snapshots 396 creating striped volumes 294 creating striped-mirror volumes 296 creating volumes 278 creating volumes for use as full-sized instant snapshots 370 crea
Index vxddladm adding disks to DISKS category 93 adding foreign devices 96 changing naming scheme 100 displaying the disk-naming scheme 102 listing all devices 86 listing configured devices 89 listing configured targets 88 listing excluded disk arrays 92–93 listing ports on a Host Bus Adapter 87 listing supported disk arrays 91 listing supported disks in DISKS category 92 listing supported HBAs 87 removing disks from DISKS category 95–96 setting iSCSI parameters 89 used to exclude support for disk arrays 9
624 Index vxdiskadd (continued) creating disk groups 200 placing disks under VxVM control 112 vxdiskadm Add or initialize one or more disks 107, 200 adding disks 107 adding disks to disk groups 200 Change/display the default disk layout 107 changing the disk-naming scheme 98 creating disk groups 200 deporting disk groups 203 Disable (offline) a disk device 130 Enable (online) a disk device 129 Enable access to (import) a disk group 204 Exclude a disk from hot-relocation use 441 excluding free space on dis
Index vxedit (continued) removing volumes 337 renaming disks 131 reserving disks 132 VxFS file system resizing 330 vxiod I/O kernel threads 22 vxmake associating plexes with volumes 266 associating subdisks with new plexes 253 creating cache objects 369 creating plexes 260, 315 creating striped plexes 260 creating subdisks 250 creating volumes 300 using description file with 302 vxmend re-enabling plexes 268 taking plexes offline 266, 313 vxmirror configuring VxVM default behavior 315 mirroring volumes 315
626 Index vxse_dg3 rule to check on disk config size 512 vxse_dg4 rule to check disk group version number 512 vxse_dg5 rule to check number of configuration copies in disk group 512 vxse_dg6 rule to check for non-imported disk groups 512 vxse_disk rule to check for initialized disks 513 vxse_disklog rule to check for multiple RAID-5 logs on a disk 510 vxse_drl1 rule to check for mirrored volumes without a DRL 510 vxse_drl2 rule to check for mirrored DRL 510 vxse_host rule to check system name 515 vxse_mir
Index VxVM benefits to performance 521 cluster functionality (CVM) 451 configuration daemon 246 configuring disk devices 81 configuring to create mirrored volumes 315 dependency on operating system 22 disk discovery 83 granularity of memory allocation by 536 limitations of shared disk groups 463 maximum number of data plexes per volume 524 maximum number of subdisks per plex 535 maximum number of volumes 533 maximum size of memory pool 537 minimum size of memory pool 539 objects in 29 operation in clusters