.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2010-2014 Intel Corporation.

Poll Mode Driver for Paravirtual VMXNET3 NIC
============================================

The VMXNET3 adapter is the next generation of a paravirtualized NIC introduced by VMware* ESXi.
It is designed for performance, offers all the features available in VMXNET2, and adds several new features such as
multi-queue support (also known as Receive Side Scaling, RSS),
IPv6 offloads, and MSI/MSI-X interrupt delivery.
The same device can be used in a DPDK application with the VMXNET3 PMD introduced in the DPDK API.

In this chapter, two setups with the use of the VMXNET3 PMD are demonstrated:

#. VMXNET3 with a native NIC connected to a vSwitch

#. VMXNET3 chaining VMs connected to a vSwitch

VMXNET3 Implementation in the DPDK
----------------------------------

For details on the VMXNET3 device, refer to the VMXNET3 driver's vmxnet3 directory and support manual from VMware*.

For performance details, refer to the following link from VMware:

`http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf <http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf>`_

As a PMD, the VMXNET3 driver provides the packet reception and transmission callbacks, vmxnet3_recv_pkts and vmxnet3_xmit_pkts.

The VMXNET3 PMD handles all packet buffer memory allocation in guest address space
and is solely responsible for freeing that memory when it is no longer needed.
The packet buffers and features to be supported are made available to the hypervisor via the VMXNET3 PCI configuration space BARs.
During RX/TX, the packet buffers are exchanged by their guest physical addresses (GPAs),
and the hypervisor loads the buffers with packets in the RX case and sends packets to the vSwitch in the TX case.

The VMXNET3 PMD is compiled with the vmxnet3 device headers.
The interface is similar to that of the other PMDs available in the DPDK API.
The driver pre-allocates the packet buffers and loads the command ring descriptors in advance.
The hypervisor fills those packet buffers on packet arrival and writes completion ring descriptors,
which are eventually pulled by the PMD.
After reception, the DPDK application frees the descriptors and loads new packet buffers for the incoming packets.
Interrupts are disabled and no notification is required.
This keeps performance up on the RX side, even though the device provides a notification feature.
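
This polling model maps directly onto the generic ethdev receive API. The following is a minimal, illustrative sketch; the helper name, ``BURST_SIZE`` value, and use of a single RX queue are assumptions for the example, not requirements of the PMD:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Poll RX queue 0 of a started vmxnet3 port; no interrupts are used. */
    static void
    poll_vmxnet3_port(uint16_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, i;

        for (;;) {
            /* vmxnet3_recv_pkts is invoked underneath this call. */
            nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            for (i = 0; i < nb_rx; i++) {
                /* ... process the packet ... */
                rte_pktmbuf_free(bufs[i]);
            }
        }
    }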

In the transmit routine, the DPDK application fills packet buffer pointers in the descriptors of the command ring
and notifies the hypervisor.
In response, the hypervisor takes the packets, passes them to the vSwitch, and writes into the completion descriptor ring.
The rings are read by the PMD in the next transmit routine call and the buffers and descriptors are freed from memory.

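On the transmit side this corresponds to the generic ethdev transmit API. The sketch below is illustrative only; the helper name and the use of TX queue 0 are assumptions:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hand a burst of prepared mbufs to TX queue 0 of a vmxnet3 port. */
    static void
    send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        /* vmxnet3_xmit_pkts is invoked underneath this call. */
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);

        /* Mbufs the ring could not accept remain owned by the application. */
        while (nb_tx < nb_pkts)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }
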
Features and Limitations of VMXNET3 PMD
---------------------------------------

In release 1.6.0, the VMXNET3 PMD provides the basic functionality of packet reception and transmission.
There are several options available for filtering packets at the VMXNET3 device level, including:

#. MAC Address based filtering:

   * Unicast, Broadcast, All Multicast modes - SUPPORTED BY DEFAULT

   * Multicast with Multicast Filter table - NOT SUPPORTED

   * Promiscuous mode - SUPPORTED

   * RSS based load balancing between queues - SUPPORTED (see the configuration sketch after the notes below)

#. VLAN filtering:

   * VLAN tag based filtering without load balancing - SUPPORTED

.. note::

   * Release 1.6.0 does not support separate headers and body receive cmd_ring and hence,
     multiple segment buffers are not supported.
     Only cmd_ring_0 is used for packet buffers, one for each descriptor.

   * Receive and transmit of scattered packets is not supported.

   * Multicast with Multicast Filter table is not supported.

Prerequisites
-------------

The following prerequisites apply:

* Before starting a VM, a VMXNET3 interface must be assigned to the VM through the VMware vSphere Client.
  This is shown in the figure below.

.. _figure_vmxnet3_int:

.. figure:: img/vmxnet3_int.*

   Assigning a VMXNET3 interface to a VM using VMware vSphere Client

.. note::

   Depending on the Virtual Machine type, the VMware vSphere Client shows Ethernet adapters while adding an Ethernet device.
   Ensure that the VM type used offers a VMXNET3 device. Refer to the VMware documentation for a list of supported VM types.

.. note::

   Follow the *DPDK Getting Started Guide* to set up the basic DPDK environment.

.. note::

   Follow the *DPDK Sample Application's User Guide* (L2 Forwarding, L3 Forwarding, and
   TestPMD) for instructions on how to run a DPDK application using an assigned VMXNET3 device.

VMXNET3 with a Native NIC Connected to a vSwitch
------------------------------------------------

This section describes an example setup for Phy-vSwitch-VM-Phy communication.

.. _figure_vswitch_vm:

.. figure:: img/vswitch_vm.*

   VMXNET3 with a Native NIC Connected to a vSwitch

.. note::

   Other instructions on preparing to use DPDK, such as hugepage enabling and uio port binding, are not listed here.
   Please refer to the *DPDK Getting Started Guide* and *DPDK Sample Application's User Guide* for detailed instructions.

The packet reception and transmission flow path is::

    Packet generator -> 82576
                     -> VMware ESXi vSwitch
                     -> VMXNET3 device
                     -> Guest VM VMXNET3 port 0 rx burst
                     -> Guest VM 82599 VF port 0 tx burst
                     -> 82599 VF
                     -> Packet generator

VMXNET3 Chaining VMs Connected to a vSwitch
-------------------------------------------

The following figure shows an example of VM-to-VM communication over a Phy-VM-vSwitch-VM-Phy communication channel.

.. _figure_vm_vm_comms:

.. figure:: img/vm_vm_comms.*

   VMXNET3 Chaining VMs Connected to a vSwitch

.. note::

   When using the L2 Forwarding or L3 Forwarding applications,
   a destination MAC address needs to be written in packets to hit the other VM's VMXNET3 interface.

In this example, the packet flow path is::

    Packet generator -> 82599 VF
                     -> Guest VM 82599 port 0 rx burst
                     -> Guest VM VMXNET3 port 1 tx burst
                     -> VMXNET3 device
                     -> VMware ESXi vSwitch
                     -> VMXNET3 device
                     -> Guest VM VMXNET3 port 0 rx burst
                     -> Guest VM 82599 VF port 1 tx burst
                     -> 82599 VF
                     -> Packet generator