.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017 Wind River Systems, Inc.
   All rights reserved.

AVP Poll Mode Driver
====================

The Accelerated Virtual Port (AVP) device is a shared memory based device
available only on `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
from Wind River Systems. The Wind River Systems virtualization platform
currently uses QEMU/KVM as its hypervisor and therefore supports all of the
QEMU virtual and emulated device types (e.g., virtio, e1000). The platform
offers the virtio device type as the default when launching a virtual machine
or creating a virtual machine port. The AVP device is a specialized device
available to customers that require increased throughput and decreased
latency to meet the demands of their performance focused applications.

The AVP driver binds to any AVP PCI devices that have been exported by the
Wind River Systems QEMU/KVM hypervisor. As a user of the DPDK driver API it
supports a subset of the full Ethernet device API to enable the application
to use the standard device configuration functions and packet
receive/transmit functions.
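
Because the AVP PMD implements the standard ethdev API, an application drives
an AVP port exactly like any other port. The following is a minimal receive
loop sketch, not taken from the driver itself; it assumes the EAL has already
been initialized and that ``port_id`` refers to a configured and started AVP
port (queue setup elided):

.. code-block:: c

   #include <rte_ethdev.h>
   #include <rte_mbuf.h>

   #define BURST_SIZE 32

   static void
   rx_loop(uint16_t port_id)
   {
           struct rte_mbuf *bufs[BURST_SIZE];

           for (;;) {
                   /* Standard ethdev receive burst; no AVP-specific
                    * calls are required.
                    */
                   uint16_t nb_rx = rte_eth_rx_burst(port_id, 0,
                                                     bufs, BURST_SIZE);

                   for (uint16_t i = 0; i < nb_rx; i++) {
                           /* ... process the packet ... */
                           rte_pktmbuf_free(bufs[i]);
                   }
           }
   }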

These devices enable optimized packet throughput by bypassing QEMU and
delivering packets directly to the virtual switch via a shared memory
mechanism. This provides DPDK applications running in virtual machines with
significantly improved throughput and latency over other device types.

The AVP device implementation is integrated with the QEMU/KVM live-migration
mechanism to allow applications to seamlessly migrate from one hypervisor
node to another with minimal packet loss.


Features and Limitations of the AVP PMD
---------------------------------------

The AVP PMD provides the following functionality:

* Receive and transmit of both simple and chained mbuf packets

* Chained mbufs may include up to 5 chained segments

* Up to 8 receive and transmit queues per device

* Only a single MAC address is supported

* The MAC address cannot be modified

* The maximum receive packet length is 9238 bytes

* VLAN header stripping and insertion

* Promiscuous mode

* VM live-migration

* PCI hotplug insertion and removal
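
The queue and packet-length limits above are advertised through the standard
ethdev device-info query, so an application can discover them at runtime
rather than hard-coding them. A small sketch, assuming ``port_id`` refers to
a probed AVP device (the exact values reported are up to the driver):

.. code-block:: c

   #include <stdio.h>
   #include <rte_ethdev.h>

   static void
   print_port_limits(uint16_t port_id)
   {
           struct rte_eth_dev_info dev_info;

           /* Query the limits the PMD advertises for this port. */
           if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                   return;

           printf("max rx queues: %u\n", dev_info.max_rx_queues);
           printf("max tx queues: %u\n", dev_info.max_tx_queues);
           printf("max rx pktlen: %u\n", dev_info.max_rx_pktlen);
   }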


Prerequisites
-------------

The following prerequisites apply:

* A virtual machine running in a Wind River Systems virtualization
  environment and configured with at least one neutron port defined with a
  vif-model set to "avp".


Launching a VM with an AVP type network attachment
--------------------------------------------------

The following example launches a VM with three network attachments. The
first attachment has the default vif-model of "virtio". The next two network
attachments have a vif-model of "avp" and may be used with a DPDK application
built to include the AVP PMD.

.. code-block:: console

   nova boot --flavor small --image my-image \
      --nic net-id=${NETWORK1_UUID} \
      --nic net-id=${NETWORK2_UUID},vif-model=avp \
      --nic net-id=${NETWORK3_UUID},vif-model=avp \
      --security-group default my-instance1
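
Once the VM is running, the AVP ports can be exercised with any DPDK
application, for example testpmd. The core list, memory channels, and binary
path below are illustrative and depend on the guest configuration:

.. code-block:: console

   ./dpdk-testpmd -l 0-1 -n 4 -- -i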