White Papers
Table of Contents
- 1 Introduction
- 2 Fibre Channel switch zoning
- 3 Host initiator settings
- 4 Modifying queue depth and timeouts
- 4.1 Host bus adapter queue depth
- 4.2 Storage driver queue depth and timeouts
- 4.3 Adjusting settings for permanent device loss conditions
- 4.4 Modifying the VMFS queue depth for virtual machines (DSNRO)
- 4.5 Adaptive queue depth
- 4.6 Modifying the guest operating system queue depth
- 4.7 Setting operating system disk timeouts
- 5 Guest virtual SCSI adapter selection
- 6 Mapping volumes to an ESXi server
- 6.1 Basic volume mapping concepts
- 6.2 Basic SC Series volume mappings
- 6.3 Multipathed volume concepts
- 6.4 Multipathed SC Series volumes
- 6.5 Configuring the VMware iSCSI software initiator for a single path
- 6.6 Configuring the VMware iSCSI software initiator for multipathing
- 6.7 iSCSI port multi-VLAN configuration recommendations
- 6.8 Configuring the FCoE software initiator for multipathing
- 6.9 VMware multipathing policies
- 6.10 Multipathing using a fixed path selection policy
- 6.11 Multipathing using a round robin path selection policy
- 6.12 Asymmetric logical unit access (ALUA) for front-end SAS
- 6.13 Unmapping volumes from an ESXi host
- 6.14 Mapping volumes from multiple arrays
- 6.15 Multipathing resources
- 7 Boot from SAN
- 8 Volume creation and sizing
- 9 Volume mapping layout
- 10 Raw device mapping (RDM)
- 11 Data Progression and storage profile selection
- 12 Thin provisioning and virtual disks
- 13 Extending VMware volumes
- 14 Snapshots (replays) and virtual machine backups
- 15 Replication and remote recovery
- 16 VMware storage features
- A Determining the appropriate queue depth for an ESXi host
- B Deploying vSphere client plug-ins
- C Configuring Dell Storage Manager VMware integrations
- D Host and cluster settings
- E Additional resources
Dell EMC SC Series: Best Practices with VMware vSphere | 2060-M-BP-V
7 Boot from SAN
Booting ESXi hosts from SAN yields both advantages and disadvantages. Sometimes, such as with blade
servers that do not have internal disk drives, booting from SAN may be the only option. However, many ESXi
hosts can have internal mirrored drives, providing the flexibility of choice. The benefits of booting from SAN
are clear: it eliminates the need for internal drives and allows snapshots (replays) to be taken of the boot
volume.
However, there are also benefits to booting from local disks while keeping the virtual machines on SAN
resources. Booting from local disks allows ESXi to stay online while maintenance is performed on Fibre
Channel switches, Ethernet switches, or the array itself. Another clear advantage of booting from local disks
is the ability to use the VMware iSCSI software initiator instead of iSCSI HBAs or Fibre Channel cards.
The decision to boot from SAN depends on many business-related factors including cost, recoverability, and
configuration needs. Dell does not offer a specific recommendation.
7.1 Configuring boot from SAN
When booting ESXi hosts from SAN, a few best practices require consideration.
When mapping the boot volume to the ESXi host for the initial installation, map the boot volume down only a
single path to a single HBA. Once ESXi is installed and the multipath modules are operating correctly, the
second path can be added to the boot volume.
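After the installation completes, the path count to the boot volume can be verified from the ESXi shell before and after the second path is mapped. A minimal sketch using standard esxcli commands follows; the naa identifier shown is a placeholder and must be replaced with the boot volume's actual device ID:

```shell
# List all devices claimed by the Native Multipathing Plugin (NMP),
# including the boot volume and its current path selection policy
esxcli storage nmp device list

# Show every path to a specific device; replace the naa ID below with
# the boot volume's identifier from the previous command's output
esxcli storage core path list -d naa.6000d31000000000000000000000abcd

# After mapping the second path in DSM, rescan the adapters so the
# new path is discovered; the path list above should then show two
# active paths to the boot volume
esxcli storage core adapter rescan --all
```

These commands only report or refresh state, so they are safe to run on a production host; the single-to-dual path transition can be confirmed without a reboot.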
To use the advanced mapping screen in Dell Storage Manager (DSM), advanced mapping must first be
enabled through the Preferences menu in the SC Series settings.
Figure: Enabling advanced mapping