Technical White Paper

PVRDMA Deployment and Configuration of QLogic CNA devices in VMware ESXi

Abstract

In server connectivity, transferring large amounts of data can be a major overhead on the processor. In a conventional networking stack, received packets are stored in the memory of the operating system and later transferred to application memory. This transfer adds latency. Network adapters that implement Remote Direct Memory Access (RDMA) write data directly to the application memory.
Revisions

Date        Description
June 2020   Initial release

Acknowledgements

Author: Syed Hussaini, Software Engineer

Support: Krishnaprasad K, Senior Principal Engineering Technologist; Gurupreet Kaushik, Technical Writer, IDD

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Table of contents

Revisions
Acknowledgements
Table of contents
Executive summary
1 Introduction
2 Configuring PVRDMA on VMware vSphere
3 Summary
4 References
Executive summary

The speed at which data can be transferred is critical for using information efficiently. RDMA offers an ideal option for improving data center efficiency by reducing overall complexity and increasing the performance of data delivery. RDMA is designed to transfer data from a storage device to a server without passing it through the CPU and the main-memory path of TCP/IP Ethernet.
1 Introduction

This document is intended to help the user understand Remote Direct Memory Access (RDMA) and provides step-by-step instructions to configure the RDMA or RDMA over Converged Ethernet (RoCE) feature on Dell EMC PowerEdge servers with QLogic network cards on VMware ESXi.

1.1 Audience and Scope

This white paper is intended for IT administrators and channel partners planning to configure Paravirtual RDMA on Dell EMC PowerEdge servers.
1.3 Paravirtual RDMA

Paravirtual RDMA (PVRDMA) is a PCIe virtual network interface card (NIC) that supports standard RDMA APIs and is offered to virtual machines on VMware vSphere 6.5.

PVRDMA architecture

In Figure 2, notice that PVRDMA is deployed on two virtual machines: RDMA_QLG_HOST1 and RDMA_QLG_HOST2.
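Because the PVRDMA NIC is presented to the guest as a PCIe device, its presence can be confirmed from inside a Linux virtual machine. This optional check is not part of the original text, and the device string printed by lspci varies with the installed pci.ids database.

    # From inside a Linux guest, look for the paravirtual RDMA controller
    lspci | grep -i rdma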
The following table describes the components of the architecture.

Components of PVRDMA architecture

Component       Description
PVRDMA NIC      The virtual PCIe device that provides an Ethernet interface and RDMA to the virtual machine through the PVRDMA adapter type.
Verbs           RDMA API calls that are proxied to the PVRDMA back end. The user library provides direct access to the hardware with a fast path for data.
PVRDMA driver   Enables the virtual NIC (vNIC) with the IP stack in kernel space and provides full support for the Verbs RDMA API in user space.
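As a small illustration of the Verbs component, a guest with the libibverbs-utils package (installed in section 2.2) can enumerate RDMA devices through the standard Verbs user library:

    # List RDMA devices visible through the Verbs user library
    ibv_devices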
2 Configuring PVRDMA on VMware vSphere

This section describes how PVRDMA is configured as a virtual NIC assigned to virtual machines and the steps to enable it on the host and guest operating systems. It also includes the test results obtained when using PVRDMA on virtual machines.

Note: Ensure that the host is configured and meets the prerequisites to enable PVRDMA.
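As an optional prerequisite check, the ESXi shell can list the RDMA-capable adapters the host has discovered. This command is standard on vSphere 6.5 and later but is not part of the original procedure.

    # List RDMA-capable devices discovered by ESXi (run in the ESXi shell)
    esxcli rdma device list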
2.1 Deploying PVRDMA on VMware vSphere

3. Assign the uplinks.

Assigning the uplinks

4. Attach the VMkernel adapter vmk1 to the vDS port group.

Attaching the VMkernel adapter vmk1 to the vDS port group

5. Click Next, and then click Finish.
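To verify the result from the ESXi shell, the following standard commands list the distributed switches known to the host and the VMkernel interfaces with their attached port groups; vmk1 and the vDS are the ones created above.

    # List VMware distributed vSwitches configured on this host
    esxcli network vswitch dvs vmware list

    # List VMkernel interfaces and the port groups they attach to
    esxcli network ip interface list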
To enable PVRDMA on the ESXi host:

1. Tag a VMkernel adapter. (An esxcli alternative for steps 1 and 2 is sketched after this procedure.)
   a. Go to the host on the vSphere Web Client.
   b. Under the Configure tab, expand the System section and click Advanced System Settings.

   Tagging a VMkernel adapter

   c. Locate Net.PVRDMAvmknic and click Edit.
   d. Enter the value of the VMkernel adapter that will be used (vmk1 in this example) and click OK to finish.

2. Enable the firewall rule for PVRDMA.
   a. Go to the host on the vSphere Web Client.
   b. Under the Configure tab, expand the System section and click Security Profile.
   c. In the Firewall section, click Edit, scroll to the pvrdma rule, and select the checkbox next to it.
   Enabling the firewall rule for PVRDMA

   d. Click OK to complete enabling the firewall rule.

3. Assign the PVRDMA adapter to a virtual machine.
   a. Locate the virtual machine on the vSphere Web Client.
   b. Right-click the VM and select Edit Settings.
   c. VM Hardware is selected by default.
   d. Click Add New Device and select Network Adapter.
   e. Select the distributed switch created earlier in the Deploying PVRDMA on VMware vSphere section and click OK.
   Changing the Adapter Type to PVRDMA

   f. Expand the New Network section and select PVRDMA as the Adapter Type.
   Selecting the checkbox for Reserve all guest memory

   g. Expand the Memory section and select the checkbox next to Reserve all guest memory (All locked).
   h. Click OK to close the window.
   i. Power on the virtual machine.
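For hosts administered from the command line, steps 1 and 2 above can also be performed with esxcli. This is a sketch rather than part of the original procedure; it assumes the advanced option path /Net/PVRDMAvmknic mirrors the UI name Net.PVRDMAvmknic and that the firewall ruleset ID is pvrdma.

    # Tag vmk1 as the VMkernel adapter for PVRDMA traffic (ESXi shell)
    esxcli system settings advanced set -o /Net/PVRDMAvmknic -s vmk1

    # Enable the pvrdma firewall ruleset and confirm it is enabled
    esxcli network firewall ruleset set -e true -r pvrdma
    esxcli network firewall ruleset list | grep pvrdma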
2.2 Configuring PVRDMA on a guest operating system

Two virtual machines are created; this section describes the configuration of both the server (VM1) and the client (VM2).

1. Create a virtual machine and add a PVRDMA adapter over a vDS port group from vCenter. See Deploying PVRDMA on VMware vSphere for instructions.
2. Install the following packages:
   a. rdma-core (yum install rdma-core)
   b. infiniband-diags (yum install infiniband-diags)
   c. perftest (yum install perftest)
   d. libibverbs-utils (yum install libibverbs-utils)
3. Use the ibv_devinfo command to get information about the InfiniBand devices available in user space (see the sketch after this procedure).
4. Run the command ib_write_bw -x 0 -d vmw_pvrdma0 --report_gbits to open the connection and wait for the client to connect.

Note: The ib_write_bw command is used to start a server and wait for a connection.
-x uses a GID with the given GID index (default: IB - no GID, ETH - 0).
-d selects the IB device to use (insert the HCA ID).
--report_gbits reports the maximum and average bandwidth of the test in Gbit/sec instead of MB/sec.

Opening the connection from VM1
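Inside each guest, two optional checks can confirm that step 3 will find the PVRDMA device. These are sketches based on standard RHEL tooling; vmw_pvrdma is the upstream kernel driver name, and vmw_pvrdma0 is the device name used in step 4.

    # Confirm the PVRDMA guest driver is loaded (run inside the VM)
    lsmod | grep vmw_pvrdma

    # Show the attributes of the PVRDMA device only
    ibv_devinfo -d vmw_pvrdma0

The client-side command for VM2 is not preserved in this copy of the procedure. In standard perftest usage, the client runs the same ib_write_bw command with the server's IP address appended; 192.168.10.1 below is a placeholder for VM1's address, not a value from the original document.

    # On VM2, connect to VM1 and run the bandwidth test, reporting Gbit/sec
    ib_write_bw -x 0 -d vmw_pvrdma0 --report_gbits 192.168.10.1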
3 Summary

This white paper describes how to configure PVRDMA for QLogic CNA devices on VMware ESXi and how PVRDMA can be enabled on two virtual machines running Red Hat Enterprise Linux 7.6. A test was performed with the perftest package to gather bandwidth reports for data transmitted over the PVRDMA configuration. When features such as vMotion, HA, snapshots, and DRS are to be used together with VMware vSphere, configuring PVRDMA is an optimal choice.
4 References

• Configure an ESXi Host for PVRDMA
• vSphere Networking