HP StorageWorks SAN Virtualization Services Platform Best Practices Guide

Abstract

The SAN Virtualization Services Platform (SVSP) can provide ease of use for day-to-day operations. However, the breadth and flexibility of the SVSP capabilities, plus its implementation as a fabric-based storage solution, introduce the potential for performance and stability issues.
© Copyright 2010, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Sizing the SVSP configuration
    Sizing the domain
    Determining current performance
    SAN and storage considerations
    DPM group
    Primary and secondary DPMs
    Pool characteristics
    Virtual disk path presentations and resource limits
11 Asynchronous mirroring
    Deploying asynchronous mirroring
    Using virtual disk groups
    User-created PiTs
    Size the SVSP solution appropriately
15 SVSP management
    Removing a failed back-end drive from the configuration
    Importing and exporting arrays and LUs into an SVSP configuration
    Import and migrate
1 Sizing the SVSP configuration

Sizing the SVSP configuration is important to meet the following needs:
• Availability
• Performance
• Stability
• Ease of operation

Understanding the environment is critical when configuring SVSP. Although the process is iterative, taking time up front to analyze and monitor initial configurations can significantly improve the SVSP configuration and its ability to deliver increased capability. HP Services can help you; contact your HP representative for more information.
SAN and storage considerations

It is very important to consider the current SAN design and the added impact that the SVSP will have.
• Monitor the interswitch links (ISLs) on the switches to ensure that link utilization does not exceed recommended levels.
• High-bandwidth devices (such as tape backup servers and storage arrays) must be on the same SAN switches as the SVSP components.
• The VSM servers perform data movement tasks. The throughput and performance parameters of these tasks must be taken into account.
NOTE: The procedure refers to the two DPMs as DPM 1 and DPM 2, and it is assumed that the Fibre Channel cables are disconnected from both DPMs.
1. Disable all licensed ports on all DPMs using the disable port [port number] command, where port number indicates the DPM Fibre Channel port number to be disabled. Port numbers range from 0–15.
2. Install new licenses on both VSM servers.
3. Fail over all virtual disks from DPM 1 to DPM 2 (with both DPMs belonging to the same DPM group).
4.
2 Fabric topology This chapter describes specific SVSP guidelines for SAN configuration. For additional information about designing the SVSP fabric configuration, see the following documents: • The HP StorageWorks SAN Design Reference Guide provides SVSP and SAN design information, and is available from http://www.hp.com/go/sandesignguide.
TIP: When designing a fault-tolerant configuration, be sure to account for a complete fabric failure. If the failure of one fabric causes the other fabric to become congested, the Quality of Service (QoS) may be impacted much more than anticipated. Therefore, set the bandwidth alerts in a fault-tolerant configuration low enough that, after a failure, utilization on the surviving fabric does not exceed 90%.
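The headroom arithmetic behind this tip can be sketched in a few lines (an illustrative helper, not an SVSP tool): with two identical fabrics, the survivor must absorb its peer's traffic after a failure, so each fabric should alert at half the post-failure ceiling.

```python
def max_alert_threshold(post_failure_ceiling=0.90, fabrics=2):
    """Return the per-fabric utilization alert level such that, if all
    traffic fails over to one surviving fabric, that fabric's utilization
    stays at or below the post-failure ceiling."""
    # With N identical fabrics, one survivor carries N times its own load.
    return post_failure_ceiling / fabrics

# For a dual-fabric SAN with a 90% post-failure ceiling,
# each fabric should alert at 45% utilization.
print(max_alert_threshold())
```

The same helper shows why a more conservative 80% ceiling would push the per-fabric alert down to 40%.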
Core-edge switch fabric topology

Core-edge switch topologies introduce ISLs. Best practice is to connect the DPMs, VSM servers, and array controllers directly to the core switches, as shown with red lines in Figure 3 (page 12). Edge switches are used to fan out to a larger number of hosts, as shown with the dark lines in Figure 2 (page 11).
Figure 4 Incorrect core-edge switch topology (do not configure path P4)

Complex fabric topologies

The introduction of additional switches and the ISLs associated with them is supported when the overall fabric topology is carefully designed and monitored to avoid congestion. If you choose to work with complex fabric topologies, you assume responsibility for monitoring the design, and must be prepared to make adjustments as signs of congestion appear.
In SVSP, zoning must maintain a good conversation flow among the initiators and targets of these devices:

Initiator      Target
Host           DPM
DPM            Storage arrays
VSM server     Storage arrays
VSM server     VSM server
VSM server     DPM
DPM            VSM server

Conversely, you want to prohibit any communication between these devices:

Initiator      Target
Host           VSM server
Host           SVSP back-end LU
Host           Host
Array          Array

Figure 5 (page 15) shows the original zoning used with SVSP, which has since been changed.
Figure 5 Deprecated zoning (not recommended)

Figure 6 (page 15) shows the current recommended zoning. This improved implementation provides these necessary benefits:
• The front-end host ports have two paths to a server. More paths may be needed for additional bandwidth, but should be avoided if not required.
• The VSM server and DPM communicate with each other for control.
• The DPMs and VSMs have limited redundant paths to the arrays.
• Easier to understand, troubleshoot, set up, and expand.
SVSP path limits SVSP has a limited number of supported front-end and back-end paths. To avoid unpredictable results, ensure that your configuration does not exceed these limits (see Figure 7 (page 16)). DPM paths to back-end LUs The DPM supports 4096 PSCs (paths to storage controllers).
Figure 8 Maximizing storage array ports
3 Array configuration

To ensure optimum performance and availability of the SVSP configuration, it is important to understand how logical units (called back-end logical units or LUs) are:
• Presented by the array to SVSP
• Seen and accessed by SVSP

Back-end LU configuration

A back-end LU is a logical unit presented directly to the SVSP from a storage array. The question arises as to how many LUs should be presented from the array to the DPMs.
TIP: Use HP Command View EVAPerf or similar tools to monitor the load on back-end LUs. Unless you are specifically conducting stress testing, do not run storage "in the red zone"; maintain utilization at 80% or less of maximum capacity during normal operation. You gain performance with less impact to your production systems. When you add drives to an EVA, the EVA automatically redistributes back-end LUs (BELUs) across all drives, gaining the added performance of the new drives in the system.
EMC Symmetrix DMX arrays

To configure EMC Symmetrix arrays to work with SVSP and return the correct information in SCSI mode, turn on the following flags for each EMC Symmetrix controller.

Bit: Common_Serial_Number (C)
Description: This flag is enabled for multipath configurations or hosts that need a unique serial number to determine which paths lead to the same device.
4 Setup virtual disk The setup virtual disk contains the SVSP metadata. It is accessed by the VSM servers when the SVSP configuration changes, and must be configured as a high performance and highly available virtual disk. Configurations with snapshots, mirrors, and thin provisioned virtual disks generate the most demand for setup virtual disk access. Typically, the larger the environment, the more frequently the VSM server needs to update the setup virtual disk.
2. Select the Sync Mirror Task tab to view which storage pools (see the Storage Pool column) your setup virtual disks are currently located on.
3. Create a new sync mirror task (Manage > Add Task) and pick a storage pool from the array where you want the new setup virtual disk to be located. When the status (in the Status column) reaches Normal, the new setup volume is in use.

Moving a setup virtual disk

Suppose you wish to move a setup virtual disk to a different array.
5 Storage pools

Construction of storage pools plays a critical role and ultimately affects the performance and availability of the virtual disks presented to servers in the SVSP environment. This chapter discusses several factors that need to be considered when building and managing storage pools.

Types of storage pools

Storage pools are built from LUs that are presented from one or more storage arrays to the SVSP; these LUs are called back-end LUs.
of these large disk groups. Contrast this with the HP XP array that allows for either a 4- or 8-member disk group for RAID1. RAID5 groups on the HP XP array can be built from 4, 8, 16, or 32 spindles. The number of array ports influences the number of paths available to SVSP for communicating with the back-end LUs. Using multiple ports affects performance as well as availability. See “Array configuration” (page 18) for more information.
Capacity pools Figure 11 (page 25) shows how back-end LUs are grouped into a capacity pool and then made into SVSP virtual disks. The back-end LUs must be a minimum of 1 GB in size to be seen by SVSP, and no larger than 2 TB. Once the virtual disks are created, they can be presented to one or more hosts. Figure 11 Virtual disk creation TIP: SVSP field experience indicates that storage pools should have a minimum of 8 physical LUs with 16 physical LUs being optimal.
Other considerations when creating performance pools are:
• Performance pools should be created with back-end LUs that are the same size; otherwise SVSP uses the capacity of the smallest back-end LU for all the virtual disks.
• SVSP does not allow another back-end LU to be added to a stripe set once the stripe set has been created.
• SVSP does allow adding multiple stripe sets to a storage pool, but SVSP will not rebalance as an EVA would.
6 Virtual disks

Virtual disk characteristics

Before creating a virtual disk and presenting it to a host, consider the following characteristics:

Name
Choose the virtual disk name carefully. If you change the name after creation, you must take the virtual disk offline momentarily.

TIP: Name a virtual disk for the data it contains, since it may be moved to a different host or to different storage. Do not name it after the host or the storage where the virtual disk was first created.
have this pattern. Although SVSP uses redirect-on-write snapshots (PiTs), SVSP must still perform a segment read followed by a segment write to compose the new data into a segment on the first write to that segment after the snapshot (PiT) is created. Using PiT storage allocation:
• Allocation is from the same storage pool as the original virtual disk
• Initial allocation = 0.5 GB
Additional capacity (4)    4 GB    8 GB
Additional capacity (5)    5 GB    13 GB
Additional capacity (6)    5 GB    18 GB

Snapclones

The snapclone feature allows you to create multiple physical copies of a SVSP virtual disk, snapshot, virtual disk group, or virtual disk group snapshot. The snapclone function can create copies within a single SVSP domain (intra-domain) or between SVSP domains (inter-domain). The copy process is carried out without any use of host resources.
Virtual disks with dynamically allocated capacity

Thin provisioning is the ability to provide more capacity to host servers than is physically available, and allows virtual disks to grow dynamically without impacting the operating system. This allows the administrator to manage capacity as it is actually used instead of allocating it all at creation time. Alerts and warnings allow the administrator to manage the resources needed for the growth.
See “Thin provisioning and operating system interaction” (page 70) for more information about how various operating systems treat thin virtual disks. Thin virtual disks are:
• Allocated from only one storage pool
• Given an initial real allocated capacity:
  ◦ 0.5 GB < real capacity < smallest of {32 GB or 10% of virtual capacity}
  ◦ The minimum initial allocated capacity is 1 GB
  ◦ For a 1 TB thin virtual disk, the maximum initial allocation is 32 GB (1,024 GB * 0.10 = 102.4 GB, capped at 32 GB)
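The allocation bounds above can be sketched in a few lines of Python (an illustrative helper, not part of the VSM software):

```python
def initial_allocation_gb(virtual_capacity_gb):
    """Illustrative sketch of the stated rule: the initial real capacity is
    bounded above by the smaller of 32 GB or 10% of the virtual capacity,
    and never falls below the 1 GB minimum."""
    upper = min(32.0, virtual_capacity_gb / 10)  # 10% rule, capped at 32 GB
    return max(1.0, upper)                       # 1 GB floor

print(initial_allocation_gb(1024))  # 32.0  (10% = 102.4 GB, capped at 32 GB)
print(initial_allocation_gb(100))   # 10.0
print(initial_allocation_gb(5))     # 1.0   (10% = 0.5 GB, raised to the 1 GB minimum)
```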
7 Hardware

To determine the configuration limits of the current SVSP build, select Tools > Maintenance > Manage Limits in the VSM GUI and review the displayed information. Knowing the support limitations is important to ensure you do not exceed the limits, which can result in difficulty troubleshooting system problems.

Data Path Modules

The Data Path Modules (DPMs) are a key component of the SVSP system.
NOTE: Remember to zone any given host to only one DPM pair. The above is just an example; you should determine the zoning strategy that works best for your environment. Whatever strategy you use, apply it consistently in the configuration to aid troubleshooting.

VSM servers

VSM servers in SVSP perform two roles:
• Manage configuration and map information
• Act as data movers

Consider the following configuration guidelines for VSM servers:
• The VSM server must be set up as part of a workgroup.
8 Hosts

Before adding or configuring a host, you must check the latest release notes or SPOCK for support. When configuring hosts, consider the following:
• All hosts must have at least two HBA ports. Each HBA should be in a zone with both DPMs. This is called a fully cross-connected front side.
• Hosts must not be zoned to see the VSMs over the Fibre Channel SAN.
• After presenting new virtual disks to a host or removing existing ones, perform a rescan on the host to discover the new (or absent) virtual disks.
In the SVSP GUI, there are two personality options used with Create VMware Host:
1. OS Type: VMWARE
2. Personality: HP-EVA 3000 or ALUA

If you select HP-EVA 3000, the SVSP performs as an active/passive array with product ID HSV100. Active/passive means a LUN can be operational through only one DPM at a time, not both.
Additional VMware parameters
• ESX/ESXi: VMFS filesystem block size (1 = default; 2, 4, or 8 MB): 8 MB is recommended in order to be able to create VMDKs with a size up to 2 TB (can be important in some environments).
• ESX/ESXi configuration/advanced parameters:
  ◦ Disk.UseDeviceReset: 0 (where the default is 1)
  ◦ Disk.UseLunReset: 1 (the default in ESX/ESXi 4.1 is already 1, so no change is required)
• In ESX/ESXi, do not enable the adaptive queue depth algorithm described in http://kb.vmware.com/kb/1008113.
9 Data migration Migration tasks Migration is a task, and counts against a domain limit of simultaneous tasks such as asynchronous mirrors, snapclones, and remote copies. Because migration tasks immediately start processing after the task has been created, and the priority of the migration task against the host I/Os cannot be defined, HP recommends planning a migration when the application I/O requirements are lowest. Figure 15 (page 37) shows how a migration is accomplished.
SVSP supports two types of PiTs:
• Crash consistent—Occurs if I/O continues to the virtual disk during PiT creation.
• Transaction consistent—Occurs when the application has been stopped and the cache has been flushed to the device (can be accomplished by unmounting the device from the host).

Additional information about PiTs:
• PiTs start at 0.5 GB, and grow by the smaller of the current temporary virtual disk size, or 5% of the original virtual disk size.
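The growth rule above can be sketched in Python (an illustrative calculation, not VSM code); for a 100 GB virtual disk it produces the doubling-then-capped sequence 0.5, 1, 2, 4, 8, 13, 18 GB:

```python
def pit_expansions(original_gb, count):
    """Sketch of the stated growth rule: the temporary virtual disk starts
    at 0.5 GB, and each expansion adds the smaller of the current temporary
    virtual disk size or 5% of the original virtual disk size."""
    cap = original_gb * 5 / 100   # 5% of the original virtual disk
    size = 0.5                    # initial allocation
    steps = [size]
    for _ in range(count):
        size += min(size, cap)    # doubles until the 5% cap takes over
        steps.append(size)
    return steps

# A 100 GB virtual disk (5% cap = 5 GB), shown over six expansions:
print(pit_expansions(100, 6))  # [0.5, 1.0, 2.0, 4.0, 8.0, 13.0, 18.0]
```

The sequence doubles while the temporary virtual disk is small, then settles into fixed 5% increments once the cap is reached.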
Figure 16 Snapshot benefits For more information about SVSP PiTs and snapshots, see the HP StorageWorks SAN Virtualization Services Platform Manager User Guide. For more information about commands or parameters, see the HP StorageWorks SAN Virtualization Services Platform Command Line Interface User Guide.
10 Synchronous mirroring

Synchronous mirroring is advantageous because the distance between sites can reach 100 km (in a stretched domain), failover is transparent for most events, and the RPO and RTO are predictable.

Overview

A synchronous mirror is started by creating a synchronous mirror group for the virtual disk to be mirrored. This group replaces the virtual disk and creates an initial task representing the operations to the source virtual disk.
Table 2 (page 41) shows the recommendations from HP for the sizing of SVSP configurations that use synchronous mirrors heavily. The term safe means that the performance of two DPMs is examined and cut in half to account for failure of one DPM.

CAUTION: Existing installations may have relied on performance numbers available for basic virtual disks that were much more optimistic than those measured for synchronous mirroring.
A synchronous mirror group can be created only if the virtual disk:
• Has host permissions.
• Has a status of Normal.
• Does not contain PiTs.
• Is not a member of a virtual disk group (VDG).

NOTE: See the HP StorageWorks SAN Virtualization Service Platform Administrator Guide for information and examples about test and recovery of various synchronous mirror failure scenarios.

Dirty regions

When the synchronous mirror group receives a write, each task issues that write to its virtual disk.
NOTE: This recommendation has changed since earlier installations of SVSP. Be sure your configuration matches this recommendation if running synchronous mirroring. Working set recommendations A working set is a description (maps within the DPM of 1 MB chunks of data) of all the storage currently in use on a particular virtual disk. When this working set size exceeds the SVSP limits for either synchronous mirroring or basic virtual disks, new maps must be imported from the VSM.
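As a rough illustration of working set size, the number of 1 MB map chunks for a given amount of in-use data can be counted directly (an illustrative calculation based on the 1 MB chunk size described above; the DPM's actual map limits should be taken from the SVSP documentation):

```python
def working_set_chunks(in_use_gb, chunk_mb=1):
    """Count the 1 MB map chunks describing the storage currently in use
    on a virtual disk. Each chunk corresponds to one working set map
    entry held in the DPM (or re-imported from the VSM when exceeded)."""
    return int(in_use_gb * 1024 / chunk_mb)

# 500 GB of data in active use corresponds to 512,000 one-megabyte chunks.
print(working_set_chunks(500))  # 512000
```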
Prepare destination site group
1. Delete synchronous mirrors to prepare for migration (the synchronous mirror cannot be migrated). Delete the synchronous mirrors in the following order or you may lose host presentation:
   a. Detach the second task.
   b. Delete the synchronous mirror group.
2. Delete SVSP virtual disks on the second site (former second task) to create pool space.
3. Delete the pool (if there are still virtual disk segments, the deletion will not be possible).
a. Fail over the SVSP virtual disks to one DPM (right-click the DPM, and select failover all).

NOTE: Ensure all application servers have MPIO installed. Otherwise they will lose access to their virtual disks.

b. Verify that the DPM does not have any active virtual disks.
c. Reboot the DPM.
d. When the DPM is online, use PuTTY or Telnet and run show debug agentstate to verify that all paths to the back-end LUs display an OK status.
e. Repeat steps a through d for the other DPMs.
5.
EVA side

Each DPM quad is specified as a host:
• Primary DPM, first quad: DPMA1_Q1; dpm_21; dpm_23
• Secondary DPM, first quad: DPMA2_Q1; dpm_41; dpm_43
• Primary DPM, second quad: DPMA1_Q2; dpm_25; dpm_27
• Secondary DPM, second quad: DPMA2_Q2; dpm_45; dpm_47

Red virtual disk
• RAID1|RAID5|RAID6 (all virtual disks must have same RAID level); Path Controller A; mapped to DPMA1_Q1, DPMA2_Q1
• Size: between 100 GB and 750 GB, depending on initial pool size (800 GB – 6 TB)

Green virtual disk
• RAI
◦ Consider the level of HP service required and currently available.
◦ Consider purchasing proactive services for health check reports and recommendations, and for assistance and analysis of configuration changes.

See “SAN congestion” (page 63) for suggestions on how to prevent SAN congestion.

Stretched domains

A stretched domain is a configuration that allows a domain to span a campus.
TIP: Because ISLs between both sides of the domain may become overloaded, it is important that they be monitored. It is also important that hosts and their storage are at the same site and virtual disks are assigned as active on the local DPM. If performance is important during component failure scenarios, these configurations should be tested.
11 Asynchronous mirroring Asynchronous mirroring is accomplished by sending the changes at regular intervals over iSCSI (WAN) to another SVSP domain. The number of PiTs kept at the source and destination is configurable, allowing a flexible framework to meet your needs. If the link is not sized appropriately, it is possible the PiTs will queue, compromising the RPO or RTO.
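A back-of-the-envelope link-sizing check can help here. The sketch below is an assumption-laden estimate, not an SVSP formula: the data changed during one interval must cross the WAN before the next PiT is taken, or PiTs queue and the RPO slips; the efficiency derating factor for protocol overhead is a placeholder value.

```python
def required_link_mbps(changed_gb_per_interval, interval_minutes, efficiency=0.7):
    """Estimate the WAN bandwidth (in Mb/s) needed so that the data
    changed during one replication interval can be transferred before
    the next PiT is taken. The efficiency factor is an assumed derating
    for iSCSI/TCP protocol overhead, not an SVSP parameter."""
    bits = changed_gb_per_interval * 1024 ** 3 * 8   # GiB -> bits
    seconds = interval_minutes * 60
    return bits / seconds / efficiency / 1_000_000

# Replicating 10 GB of changes every 60 minutes needs roughly a 34 Mb/s link;
# halving the interval doubles the requirement.
print(round(required_link_mbps(10, 60)))  # 34
print(round(required_link_mbps(10, 30)))  # 68
```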
User-created PiTs

User-created PiTs (through a script or the GUI) have the following benefits:
• Can be created when the application is stopped (typically part of a script)
• Are application-consistent, so applications recover more quickly

However, there are other considerations regarding user-created PiTs. For example, user-created PiTs are not deleted in the same way as automatic PiTs. See the HP StorageWorks SAN Virtualization Services Platform Administrator Guide for details.
virtual disk as the source virtual disk. The merge creates a new task, resuming mirroring from the newest PiT that is identical on the source and destination. Merge with rollback disabled To re-establish an asynchronous mirror task between virtual disks that were properly split, the last PiT copied from the source to the destination is identical to the source, and the PiT’s temporary virtual disk on the destination site is empty, so the merge succeeds.
12 Supported operations Supported operations and commands Table 4 (page 52) shows which operations can be performed using an existing task from the VSM GUI. In the table, a “Y” (Yes) means the operation is allowed, and an “N” (No) means the operation is not allowed.
13 Monitoring the SVSP environment The SVSP environment can be complex, and research into proper sizing is still ongoing, making it difficult to assess when configuration changes, additional quads, DPMs, or domains are needed to process the load. Monitoring of system health should be a part of any SAN, but as configurations approach the SVSP limits, it is important to monitor the system more closely.
◦ The Diagnostics panel in the DPM Management GUI may be used to monitor the performance of a DPM.
◦ Perfmon is the Windows performance monitoring tool. It may be used on the VSM servers to provide insight into events and performance of the servers.
◦ License and capacity use should be checked periodically to ensure they remain within expected limits.
NOTE: To support the use of a Microsoft Outlook contact group, ensure that the VSM server is on the network and that the SMTP server and the Outlook mail server port are defined in VSM Monitor.
SAN Visibility SAN Visibility is a complementary software utility for HP customers that helps with SAN analysis, SAN diagnostics, and SAN optimization. SAN Visibility saves time, money, and effort by automating inventory activities and providing a quick and accurate view of your SAN topology. SAN Visibility has an automated report generation feature that produces recommendations, topology diagrams, and informative SAN element reports for switches, host bus adapters, and storage array connectivity.
TIP: Test the failover procedure periodically.

Checking the DPM zoning

To ensure that the DPM recognizes all of the devices on its back-end (storage arrays and VSM servers) and front-end (host HBAs) ports:
1. Open a Telnet session to the DPM.
2. Log in to the DPM as the administrator.
3. Type show debug wwpn and press Enter.

The expected output is similar to this example.
Note that ports 0 and 2 (quad 1 front-side ports) have the same count of 'Inits' (initiator remote ports) and that ports 1 and 3 (quad 1 back-side ports) have the same count of 'Targs' (target remote ports). The same is true for quad 2. The other DPM in this DPM group displays exactly the same values. This indicates a symmetrical configuration. Also note that the 'Flags' values all end in 3f9. This indicates that all the ports are up and in a good state.
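A quick symmetry check of this kind can be scripted. The sketch below assumes the counts have already been parsed out of the show debug wwpn output into a dict; the dict layout shown here is illustrative, not an SVSP API, while the port pairing (front-side 0/2, back-side 1/3 for quad 1) follows the description above.

```python
def check_quad_symmetry(ports):
    """Verify the symmetry rules described above for quad 1: front-side
    ports see the same number of initiators, and back-side ports see the
    same number of targets. Returns a list of problems (empty = symmetrical)."""
    problems = []
    # Quad 1: front-side ports 0 and 2, back-side ports 1 and 3.
    if ports[0]["inits"] != ports[2]["inits"]:
        problems.append("quad 1 front-side initiator counts differ")
    if ports[1]["targs"] != ports[3]["targs"]:
        problems.append("quad 1 back-side target counts differ")
    return problems

# Hypothetical counts parsed from the command output:
ports = {0: {"inits": 12}, 2: {"inits": 12},
         1: {"targs": 8}, 3: {"targs": 8}}
print(check_quad_symmetry(ports))  # [] -> symmetrical configuration
```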
Operating system-specific monitoring

VMware recommends that very large I/O sizes be used for data transfers. These recommendations are valid when the storage is local to the servers; they do not apply to array controllers on the SAN, specifically SVSP. Check ESX servers to ensure that all paths to the DPMs are visible. If the zoning is correct, you may need to perform a rescan several times to ensure that the path is set up properly.
• User is not permitted to create new virtual disks
• Expansion thresholds for thin virtual disks, PiTs, and snapshots are reduced to 1 GB per expansion request.

PiT capacity planning

When planning PiT capacity, it is important to understand the Recovery Point Objective (RPO): that is, the frequency of PiT creation and the retention period (how long you need to keep each PiT). Some experimentation may be needed to calculate the average rate of change of a virtual disk.
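The capacity arithmetic can be sketched as follows (an illustrative estimate; the headroom factor is an assumption to cover segment-granularity overhead, not an SVSP figure):

```python
def pit_pool_capacity_gb(change_rate_gb_per_day, pit_interval_hours,
                         retention_days, headroom=1.25):
    """Rough PiT capacity estimate: each retained PiT consumes roughly
    the data changed during its interval, so total consumption is about
    the change rate times the retention period, plus assumed headroom."""
    retained_pits = retention_days * 24 / pit_interval_hours
    per_pit_gb = change_rate_gb_per_day * pit_interval_hours / 24
    return retained_pits * per_pit_gb * headroom

# 20 GB/day of change, a PiT every 6 hours, kept for 7 days:
print(pit_pool_capacity_gb(20, 6, 7))  # 175.0
```

The interval cancels out of the total, which is why the retention period and average change rate dominate PiT capacity planning.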
14 Remedies for SAN and SVSP issues Setup virtual disk access times Access to the VSM setup virtual disks can be monitored with the Windows Performance Monitor tool. On a VSM server, Windows Performance Monitor includes an additional dedicated performance object that the VSM agent adds during VSM software installation. To launch Windows Performance Monitor, click Start > Programs > Administrative tools > Performance (or Start > Run > Perfmon).
SCSI flow control The SCSI standard provides mechanisms for target devices to temporarily back off initiators. These mechanisms are not available to SVSP because it is not an end point in the way it operates. SVSP operations There are some SVSP operations that require the breaking up of large commands, striping for performance, or mirroring to multiple destinations. These operations take many resources and time, and can slow down the processing of new commands by SVSP.
Figure 20 Managing queue depth

There is a relationship between queue depth and the size of transfers. Transfer sizes greater than 128 KB are not recommended because there is no additional gain in performance associated with larger I/O sizes, and the latencies for these larger I/Os are greater. In Step 2 of the flowchart, if you have eliminated the listed conditions, consider increasing queue depth. The goal is not to find an exact number, but to get the best performance.
Monitoring ISLs

If ISLs are being used, set up alert or notification levels on the switches to ensure that the utilization of the links is monitored. Sizing of ISLs can frequently cause issues, because the ISLs must be able to support the bandwidth needed.

Size the SVSP solution appropriately

The performance sizing numbers have changed since many SVSP solutions were sold. Refer to the latest numbers and analyze your performance needs.
15 SVSP management

Removing a failed back-end drive from the configuration

See the HP StorageWorks SAN Virtualization Services Platform Administrator Guide for the procedure. It is critical that the steps be followed exactly.

IMPORTANT: If a removal is not performed correctly, data availability may be at risk.
to their original positions. A virtual disk used for a synchronous mirror and then exported when the synchronous mirror is dissolved can only be exported after the task to the mirror copy is removed. CAUTION: If the task is detached on the original virtual disk first, the export will not be possible. Import, clone, and return capacity You can import an LU into SVSP management and then use a snapclone to make a copy of the imported virtual disk to a new storage pool.
NOTE: Except for the VSM server operating system license, SVSP licenses are permanent and do not expire when SVSP software or hardware versions change. TIP: Monitor license use periodically. If your environment grows rapidly, you should monitor more frequently.
16 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
Typographic conventions Table 5 Document conventions Convention Element Blue text: Table 5 (page 69) Cross-reference links and e-mail addresses Blue, underlined text: http://www.hp.
A Thin provisioning and operating system interaction Executive summary Thin provisioned storage allows the SVSP administrator to pre-plan user capacity needs and allocate virtual storage based on the expectation, but only consume the actual disk space the user is accessing. As a result, the administrators no longer have to concern themselves with wasted storage not currently in use by the users.
There are ways in which ThP can reduce the cost of ownership and significantly accelerate return on investment (ROI):

Advantage: Simplified virtual disk design
Description:
• Smooth implementation of a logical virtual disk system without physical format
• Logical virtual disk design independent of physical configuration
• Actual capacity design independent of logical virtual disk configuration
Notes: Increased cost for storage implementation of large capacity virtual disks for reasonable disk
• It is not necessary to format a SVSP ThP virtual disk. Why? All ThP pool reads to unwritten data will return “all zero” data.
• Restore only file backups, and never restore an image (sector-based) backup into a ThP virtual disk. Why? Image backups are usually full virtual disk backups that do not differentiate between data and free space (for example, a 10 GB virtual disk with 1 GB of actual data will have a 10 GB backup image, while a file backup will only be a 1 GB backup set).
can set the virtual disk threshold between 1% and 99% in increments of 1%. The default value is 10%. ThP pool threshold Storage pool threshold is the percentage of the used storage pool capacity versus the total storage pool capacity. The threshold allows the user to define three alarm levels. You can set the pool threshold between 1% and 99% in increments of 1%. The default value is 10%.
Keep in mind that production cycles are not the same across all customer environments and that process controls differ. If you do not have precise answers for the above considerations (knowing that there is another threshold alarm preset at 80%), do the following:
1. Record the current capacity and time.
2. Set the virtual disk alarm threshold initially at 10% above the current capacity.
3. When the alarm triggers, record the time.
4.
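The extrapolation implied by the steps above can be sketched as follows (assuming linear growth between readings, which real workloads may not follow):

```python
def time_to_full(capacity_at_t0_gb, capacity_at_alarm_gb, days_between,
                 pool_capacity_gb):
    """From two capacity readings taken when thresholds fire, derive a
    linear growth rate and project how many days remain until the pool
    is consumed. Linear growth is an assumption, not a guarantee."""
    rate_gb_per_day = (capacity_at_alarm_gb - capacity_at_t0_gb) / days_between
    remaining_gb = pool_capacity_gb - capacity_at_alarm_gb
    return remaining_gb / rate_gb_per_day

# 400 GB used initially, 440 GB used 20 days later, in a 1000 GB pool:
# growth is 2 GB/day, so the remaining 560 GB lasts about 280 days.
print(time_to_full(400, 440, 20, 1000))  # 280.0
```

Re-running the projection each time an alarm fires catches workloads whose growth is accelerating.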
3. Partition the 2 TB virtual disk into 8 equal 250 GB host partitions.
4. Present the first partition to your file system virtual disk group.
5. Let the application use it for the first 6 months.
6. The server administrator can add more partitions or utilize scripting to perform the LVM expansion as required.

Benefits
• This file system can never allocate more than 250 GB from the ThP pool unless permitted.
• The storage pool consumption can be predicted over time, resulting in better ThP management.
1. At FS creation time, the capacity of the storage pool is consumed up to 100% of the ThP virtual disk capacity.
2. At FS creation time, the capacity of the storage pool is consumed up to 30% of the ThP virtual disk capacity.
3. VMware eagerzeroedthick formatting is not recommended because it will force the virtual disk to fully allocate space in the storage pool.
4. ZFS "zpool scrub" is not recommended because it will force the virtual disk to fully allocate.
5.
Glossary

This glossary defines acronyms and terms used with the SVSP solution.

A

access path A specific series of physical connections through which a device is recognized by another device.

active boot set The boot set used to supply system software in a running system. Applies to the DPM. See also boot set.

active path A path that is currently available for use. See also passive path and in use path.
D Data Path Module A SAN-based device, separate from the core Fibre Channel switching infrastructure, that provides storage virtualization services across heterogeneous hosts, storage, and SAN fabrics. The device runs a VSM fabric agent, communicates with a VSM server, is able to process virtual disk information, present virtual disks to servers as LUNs, and handle their I/Os by routing them to storage systems managed by the VSM server.
initiator device A device, such as an HBA installed into a server, that contains one or more initiator ports.

initiator port A Fibre Channel port capable of issuing new SCSI over Fibre Channel (FCP) commands.

interswitch link (ISL) A connection between two Fibre Channel switches that creates a single switch fabric. Multiple physical connections between the same two Fibre Channel switches create multiple ISLs. Each independent ISL is treated as a single path between the two switches.
physical disk A disk device that can be discovered and managed by VSM. PiT Point-in-Time. A VSM term denoting an entity created by a snapshot that represents the freezing of a virtual disk’s data at a particular time and the redirection of any further modifications to the virtual disk’s data to a new virtual disk, called a temporary virtual disk. POST Power-on Self Test. The diagnostic sequence executed by devices during system startup.
stripe set In VSM, a set of back-end LUs across which VSM stripes data, optionally used to build storage pools. SVSP domain Consists of all SVSP components and the storage they manage. synchronous mirroring A mode of data mirroring in which the updates on the mirror site are synchronized between destinations. system software image A software component, capable of being updated, that contains the operating environment for the Data Path Module, including the SVSP VSM agent for the Data Path Module.
VSM GUI Graphical user interface used to manage the HP StorageWorks SAN Virtualization Services Platform environment. VSM server VSM software that runs on a dedicated appliance connected to a SAN fabric and manages and controls all storage systems on the SAN. A VSM server virtualizes the storage space on the storage systems, creates storage pools and virtual disks, and provides agents with virtual disk information.