Table Of Contents
- 1 Introduction
- 2 Fibre Channel switch zoning
- 3 Host initiator settings
- 4 Modifying queue depth and timeouts
- 4.1 Host bus adapter queue depth
- 4.2 Storage driver queue depth and timeouts
- 4.3 Adjusting settings for permanent device loss conditions
- 4.4 Modifying the VMFS queue depth for virtual machines (DSNRO)
- 4.5 Adaptive queue depth
- 4.6 Modifying the guest operating system queue depth
- 4.7 Setting operating system disk timeouts
- 5 Guest virtual SCSI adapter selection
- 6 Mapping volumes to an ESXi server
- 6.1 Basic volume mapping concepts
- 6.2 Basic SC Series volume mappings
- 6.3 Multipathed volume concepts
- 6.4 Multipathed SC Series volumes
- 6.5 Configuring the VMware iSCSI software initiator for a single path
- 6.6 Configuring the VMware iSCSI software initiator for multipathing
- 6.7 iSCSI port multi-VLAN configuration recommendations
- 6.8 Configuring the FCoE software initiator for multipathing
- 6.9 VMware multipathing policies
- 6.10 Multipathing using a fixed path selection policy
- 6.11 Multipathing using a round robin path selection policy
- 6.12 Asymmetric logical unit access (ALUA) for front-end SAS
- 6.13 Unmapping volumes from an ESXi host
- 6.14 Mapping volumes from multiple arrays
- 6.15 Multipathing resources
- 7 Boot from SAN
- 8 Volume creation and sizing
- 9 Volume mapping layout
- 10 Raw device mapping (RDM)
- 11 Data Progression and storage profile selection
- 12 Thin provisioning and virtual disks
- 13 Extending VMware volumes
- 14 Snapshots (replays) and virtual machine backups
- 15 Replication and remote recovery
- 16 VMware storage features
- A Determining the appropriate queue depth for an ESXi host
- B Deploying vSphere client plug-ins
- C Configuring Dell Storage Manager VMware integrations
- D Host and cluster settings
- E Additional resources
Volume mapping layout
45 Dell EMC SC Series: Best Practices with VMware vSphere | 2060-M-BP-V
9.2 One virtual machine per volume
Creating one volume for each virtual machine is not a common technique, but it has both advantages and disadvantages, which are discussed below. The decision to use this technique should be based on business-related factors, and it may not be appropriate for all circumstances. A 1:1 virtual machine-to-datastore ratio should be the exception, not the rule.
Advantages of creating one volume per virtual machine include:
• Granularity in replication: Since the SC Series replicates at the volume level, if there is one virtual
machine per volume, administrators can choose which virtual machine to replicate.
• Reduced I/O contention: A single volume is dedicated to a single virtual machine.
• Flexibility with volume mappings: Since a path can be individually assigned to each volume, it could
allow a virtual machine a specific path to a controller.
• Statistical reporting: Storage usage and performance can be monitored for an individual virtual
machine.
• Simplified backup and restore of an entire virtual machine: If a virtual machine needs to be restored, an administrator can unmap the original volume and map a snapshot in its place.
Disadvantages of creating one volume per virtual machine include:
• Maximum number of virtual machines limited by the disk device maximums of the ESXi cluster: For example, if the HBA supports a maximum of 512 volumes mapped to the ESXi host, and each logical unit number can be used only once when mapping across multiple ESXi hosts, the cluster would be limited to 512 virtual machines (assuming that no extra LUNs are needed for recoveries).
• Increased administrative overhead: Managing a volume for each virtual machine and all the
corresponding mappings may be challenging.
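The sizing arithmetic behind the first disadvantage can be sketched as a small Python helper. This is an illustrative sketch only: the function name and the reserved-LUN parameter are assumptions for the example, and the 512-device figure is the HBA limit cited above, not a universal value (ESXi also enforces its own per-host device maximums, so the lower of the two applies).

```python
# Hypothetical sketch: estimate the practical VM ceiling for a cluster
# that uses one volume (and therefore one LUN) per virtual machine.

def max_vms_per_cluster(device_limit: int, reserved_luns: int = 0) -> int:
    """Return how many VMs fit at a 1:1 VM-to-volume ratio.

    device_limit  -- per-host disk device maximum (e.g., the 512-volume
                     HBA limit described above)
    reserved_luns -- LUNs held back for recoveries, boot-from-SAN, etc.
    """
    return max(device_limit - reserved_luns, 0)

print(max_vms_per_cluster(512))                    # 512 VMs, nothing reserved
print(max_vms_per_cluster(512, reserved_luns=16))  # 496 VMs
```

Reserving a handful of LUNs up front for snapshot recoveries avoids discovering the ceiling only when a restore is needed.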