User Guide
Table Of Contents
- Table of Contents
- Chapter 1: Introduction
- Chapter 2: Booting from SAN
- Chapter 3: Updating and Enabling Boot Code
- Chapter 4: Emulex LightPulse FC BIOS utility
- 4.1 Navigating the Emulex LightPulse FC BIOS Utility
- 4.2 Starting the Emulex LightPulse FC BIOS Utility
- 4.3 Enabling an Adapter to BFS
- 4.4 Scanning for Target Devices
- 4.5 Configuring Boot Devices
- 4.6 Configuring Advanced Adapter Parameters
- 4.6.1 Changing the Default AL_PA
- 4.6.2 Changing the PLOGI Retry Timer
- 4.6.3 Enabling or Disabling the Spinup Delay
- 4.6.4 Setting Auto Scan
- 4.6.5 Enabling or Disabling EDD 3.0
- 4.6.6 Enabling or Disabling the Start Unit Command
- 4.6.7 Enabling or Disabling the Environment Variable
- 4.6.8 Enabling or Disabling Auto Boot Sector
- 4.7 Configuring Adapter Firmware Parameters
- 4.8 Resetting the Adapter to Default Values
- 4.9 Using Multipath BFS
- Chapter 5: OpenBoot
- Chapter 6: Configuring Boot Using the UEFI HII
- 6.1 Prerequisites
- 6.2 Starting the UEFI HII
- 6.3 Configuring Boot in the UEFI HII
- 6.4 Setting Boot from SAN
- 6.5 Scanning for Fibre Devices
- 6.6 Adding Boot Devices
- 6.7 Deleting Boot Devices
- 6.8 Changing the Boot Device Order
- 6.9 Configuring HBA and Boot Parameters
- 6.9.1 Changing the Topology
- 6.9.2 Changing the PLOGI Retry Timer
- 6.9.3 Changing the Link Speed
- 6.9.4 Changing the Maximum LUNs per Target
- 6.9.5 Changing the Boot Target Scan Method
- 6.9.6 Changing the Device Discovery Delay
- 6.9.7 Configuring the Brocade FA-PWWN
- 6.9.8 Configuring the Brocade Boot LUN
- 6.9.9 Configuring 16G Forward Error Correction
- 6.9.10 Selecting Trunking
- 6.10 Resetting Emulex Adapters to Their Default Values
- 6.11 Displaying Adapter Information
- 6.12 Legacy-Only Configuration Settings
- 6.13 Requesting a Reset or Reconnect
- 6.14 Emulex Firmware Update Utility
- 6.15 NVMe over FC Boot Settings
- 6.16 Enabling or Disabling the HPE Shared Memory Feature (HPE Systems Only)
- Chapter 7: Troubleshooting
Broadcom BT-FC-UG126-100
Emulex Boot for the Fibre Channel Protocol User Guide
11. Return to the operating system installation GUI by pressing Ctrl+Alt+F6.
12. Click Reboot to complete the operating system installation.
After the installation is complete, the system reboots using the newly installed media.
NOTE: The operating system installer has a known issue in which the installer fails to set a UEFI boot entry. To work around
this issue, perform the following steps:
1. Press F11 to enter the UEFI Boot Menu.
2. Select the UEFI boot entry that is mapped to the adapter port that is configured for BFS.
3. Press Enter. The UEFI utility automatically adds a UEFI boot path for the adapter, and the operating system boots.
2.2.4 Configuring Boot from SAN for NVMe over FC on VMware
BFS for NVMe over FC in VMware is supported only on ESXi 7.0 U1.
To configure BFS for NVMe over FC on ESXi 7.0 U1, perform the following steps:
1. Follow the instructions in Section 6.15, NVMe over FC Boot Settings, to configure NVMe over FC boot and to add an
NVMe boot device using the UEFI utility.
NOTE: Before starting the installation, zone the target WWN appropriately to the initiator WWNs. Create a namespace of the appropriate size on the NVMe target and map it to the initiator NQNs (for instructions, see the note in Section 6.15.1, Enabling NVMe over FC BFS).
2. Attach the operating system installation media to the server, and reboot or power on the server using the UEFI Boot
Menu.
NOTE: Use a custom ESXi 7.0 U1 ISO that includes the 12.8.x lpfc and brcmnvmefc drivers (available from most server vendors) for NVMe over FC BFS on VMware.
3. When the installation GUI appears, wait for the Welcome to the VMware ESXi 7.0.1 Installation screen.
4. Press Alt+F1 to switch to the ESXi console window.
5. Enter root as the login credential and press Enter. Leave the password blank and press Enter to continue.
6. Because hostd is not available in this shell, use localcli instead of esxcli to perform operations.
7. At the command prompt, run the localcli nvme info get command to obtain the ESXi host NQN.
8. Make a note of this NQN string and provide it to the storage administrator to configure the boot namespace in the NVMe
target.
9. Run the localcli storage core adapter list command to obtain a list of available FC adapters.
10. After configuring the host NQN on the target, trigger a discovery at the host to detect the newly configured boot
namespace. Use one of the following methods to trigger a discovery:
a. Perform a LIP reset on the chosen FC adapter:
# localcli storage san fc reset -A <vmhba#>
b. Use the VMware native NVMe fabrics command to discover an NVMe controller on a specific target port through a
specific NVMe adapter:
# localcli nvme fabrics discover -a <vmhba> -W <Target WWNN> -w <Target WWPN>
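The discovery steps above can be sketched as a single ESXi shell session. This is a hedged example, not output from a live system: the adapter name (vmhba64) and the target WWNN/WWPN are placeholders that must be replaced with values from your environment, and these commands run only in the ESXi console shell described in this procedure.

```shell
# Obtain the ESXi host NQN; give this string to the storage administrator
# so the boot namespace can be mapped to this initiator.
localcli nvme info get

# List the available FC adapters and note the vmhba that corresponds to
# the initiator port configured for BFS.
localcli storage core adapter list

# Method a: trigger discovery with a LIP reset on the chosen adapter.
# vmhba64 is a placeholder adapter name.
localcli storage san fc reset -A vmhba64

# Method b: targeted discovery of an NVMe controller on a specific target
# port through a specific adapter (WWNN/WWPN are placeholders).
localcli nvme fabrics discover -a vmhba64 -W <Target WWNN> -w <Target WWPN>

# Optionally confirm that the newly mapped namespace is now visible.
localcli nvme namespace list
```

Either method causes the host to rediscover the fabric; once the boot namespace appears in the namespace list, the installer can use it as the installation target.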