 * Copyright (c) 2008, 2009, 2010, 2011, 2012, 2013, 2014 Nicira, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at:
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * dpif, the DataPath InterFace.
 *
 * In Open vSwitch terminology, a "datapath" is a flow-based software switch.
 * A datapath has no intelligence of its own.  Rather, it relies entirely on
 * its client to set up flows.  The datapath layer is core to the Open vSwitch
 * software switch: one could say, without much exaggeration, that everything
 * in ovs-vswitchd above dpif exists only to make the correct decisions
 * interacting with dpif.
 *
 * Typically, the client of a datapath is the software switch module in
 * "ovs-vswitchd", but other clients can be written.  The "ovs-dpctl" utility
 * is also a (simple) client.
 *
 *
 * Overview
 * ========
 *
 * The terms written in quotes below are defined in later sections.
 * When a datapath "port" receives a packet, it extracts the headers (the
 * "flow").  If the datapath's "flow table" contains a "flow entry" matching
 * the packet, then it executes the "actions" in the flow entry and increments
 * the flow's statistics.  If there is no matching flow entry, the datapath
 * instead appends the packet to an "upcall" queue.
 * Ports
 * =====
 *
 * A datapath has a set of ports that are analogous to the ports on an Ethernet
 * switch.  At the datapath level, each port has the following information
 * associated with it:
 *
 *    - A name, a short string that must be unique within the host.  This is
 *      typically a name that would be familiar to the system administrator,
 *      e.g. "eth0" or "vif1.1", but it is otherwise arbitrary.
 *
 *    - A 32-bit port number that must be unique within the datapath but is
 *      otherwise arbitrary.  The port number is the most important identifier
 *      for a port in the datapath interface.
 *
 *    - A type, a short string that identifies the kind of port.  On a Linux
 *      host, typical types are "system" (for a network device such as eth0),
 *      "internal" (for a simulated port used to connect to the TCP/IP stack),
 *      and "gre" (for a GRE tunnel).
 *    - A Netlink PID for each upcall reading thread (see "Upcall Queuing and
 *      Ordering" below).
 * The dpif interface has functions for adding and deleting ports.  When a
 * datapath implements these (e.g. as the Linux and netdev datapaths do), then
 * Open vSwitch's ovs-vswitchd daemon can directly control what ports are used
 * for switching.  Some datapaths might not implement them, or implement them
 * with restrictions on the types of ports that can be added or removed,
 * on systems where port membership can only be changed by some external
 * entity.
 *
 * Each datapath must have a port, sometimes called the "local port", whose
 * name is the same as the datapath itself, with port number 0.  The local port
 * cannot be deleted.
 * Ports are available as "struct netdev"s.  To obtain a "struct netdev *" for
 * a port named 'name' with type 'port_type', in a datapath of type
 * 'datapath_type', call netdev_open(name, dpif_port_open_type(datapath_type,
 * port_type)).  The netdev can be used to get and set important data related
 * to the port, such as:
 *
 *    - MTU (netdev_get_mtu(), netdev_set_mtu()).
 *
 *    - Ethernet address (netdev_get_etheraddr(), netdev_set_etheraddr()).
 *
 *    - Statistics such as the number of packets and bytes transmitted and
 *      received (netdev_get_stats()).
 *
 *    - Carrier status (netdev_get_carrier()).
 *
 *    - Speed (netdev_get_features()).
 *
 *    - QoS queue configuration (netdev_get_queue(), netdev_set_queue() and
 *      related functions).
 *
 *    - Arbitrary port-specific configuration parameters (netdev_get_config(),
 *      netdev_set_config()).  An example of such a parameter is the IP
 *      endpoint for a GRE tunnel.
 * Flow Table
 * ==========
 *
 * The flow table is a collection of "flow entries".  Each flow entry contains:
 *
 *    - A "flow", that is, a summary of the headers in an Ethernet packet.  The
 *      flow must be unique within the flow table.  Flows are fine-grained
 *      entities that include L2, L3, and L4 headers.  A single TCP connection
 *      consists of two flows, one in each direction.
 *
 *      In Open vSwitch userspace, "struct flow" is the typical way to describe
 *      a flow, but the datapath interface uses a different data format to
 *      allow ABI forward- and backward-compatibility.  Refer to OVS_KEY_ATTR_*
 *      and "struct ovs_key_*" in include/odp-netlink.h for details.
 *      lib/odp-util.h defines several functions for working with these flows.
 *    - A "mask" that, for each bit in the flow, specifies whether the datapath
 *      should consider the corresponding flow bit when deciding whether a
 *      given packet matches the flow entry.  The original datapath design did
 *      not support matching: every flow entry was exact match.  With the
 *      addition of a mask, the interface supports datapaths with a spectrum of
 *      wildcard matching capabilities, from those that only support exact
 *      matches to those that support bitwise wildcarding on the entire flow
 *      key, as well as datapaths with capabilities somewhere in between.
 *
 *      Datapaths do not provide a way to query their wildcarding capabilities,
 *      nor is it expected that the client should attempt to probe for the
 *      details of their support.  Instead, a client installs flows with masks
 *      that wildcard as many bits as acceptable.  The datapath then actually
 *      wildcards as many of those bits as it can and changes the wildcard bits
 *      that it does not support into exact match bits.  A datapath that can
 *      wildcard any bit, for example, would install the supplied mask, an
 *      exact-match only datapath would install an exact-match mask regardless
 *      of what mask the client supplied, and a datapath in the middle of the
 *      spectrum would selectively change some wildcard bits into exact match
 *      bits.
 *
 *      Regardless of the requested or installed mask, the datapath retains the
 *      original flow supplied by the client.  (It does not, for example, "zero
 *      out" the wildcarded bits.)  This allows the client to unambiguously
 *      identify the flow entry in later flow table operations.
 *
 *      The flow table does not have priorities; that is, all flow entries have
 *      equal priority.  Detecting overlapping flow entries is expensive in
 *      general, so the datapath is not required to do it.  It is primarily the
 *      client's responsibility not to install flow entries whose flow and mask
 *      combinations overlap.
 *    - A list of "actions" that tell the datapath what to do with packets
 *      within a flow.  Some examples of actions are OVS_ACTION_ATTR_OUTPUT,
 *      which transmits the packet out a port, and OVS_ACTION_ATTR_SET, which
 *      modifies packet headers.  Refer to OVS_ACTION_ATTR_* and "struct
 *      ovs_action_*" in include/odp-netlink.h for details.  lib/odp-util.h
 *      defines several functions for working with datapath actions.
 *
 *      The actions list may be empty.  This indicates that nothing should be
 *      done to matching packets, that is, they should be dropped.
 *
 *      (In case you are familiar with OpenFlow, datapath actions are analogous
 *      to OpenFlow actions.)
 *
 *    - Statistics: the number of packets and bytes that the flow has
 *      processed, the last time that the flow processed a packet, and the
 *      union of all the TCP flags in packets processed by the flow.  (The
 *      latter is 0 if the flow is not a TCP flow.)
 *
 * The datapath's client manages the flow table, primarily in reaction to
 * "upcalls" (see below).
 * Upcalls
 * =======
 *
 * A datapath sometimes needs to notify its client that a packet was received.
 * The datapath mechanism to do this is called an "upcall".
 *
 * Upcalls are used in two situations:
 *
 *    - When a packet is received, but there is no matching flow entry in its
 *      flow table (a flow table "miss"), this causes an upcall of type
 *      DPIF_UC_MISS.  These are called "miss" upcalls.
 *
 *    - A datapath action of type OVS_ACTION_ATTR_USERSPACE causes an upcall
 *      of type DPIF_UC_ACTION.  These are called "action" upcalls.
 *
 * An upcall contains an entire packet.  There is no attempt to, e.g., copy
 * only as much of the packet as normally needed to make a forwarding decision.
 * Such an optimization is doable, but experimental prototypes showed it to be
 * of little benefit because an upcall typically contains the first packet of a
 * flow, which is usually short (e.g. a TCP SYN).  Also, the entire packet can
 * sometimes really be needed.
 *
 * After a client reads a given upcall, the datapath is finished with it, that
 * is, the datapath doesn't maintain any lingering state past that point.
 * The latency from the time that a packet arrives at a port to the time that
 * it is received from dpif_recv() is critical in some benchmarks.  For
 * example, if this latency is 1 ms, then a netperf TCP_CRR test, which opens
 * and closes TCP connections one at a time as quickly as it can, cannot
 * possibly achieve more than 500 transactions per second, since every
 * connection consists of two flows with 1-ms latency to set up each one.
 *
 * To receive upcalls, a client has to enable them with dpif_recv_set().  A
 * datapath should generally support being opened multiple times (e.g. so that
 * one may run "ovs-dpctl show" or "ovs-dpctl dump-flows" while "ovs-vswitchd"
 * is also running) but need not support more than one of these clients
 * enabling upcalls at once.
 * Upcall Queuing and Ordering
 * ---------------------------
 *
 * The datapath's client reads upcalls one at a time by calling dpif_recv().
 * When more than one upcall is pending, the order in which the datapath
 * presents upcalls to its client is important.  The datapath's client does not
 * directly control this order, so the datapath implementer must take care.
 *
 * The minimal behavior, suitable for initial testing of a datapath
 * implementation, is that all upcalls are appended to a single queue, which is
 * delivered to the client in order.
 * The datapath should ensure that a high rate of upcalls from one particular
 * port cannot cause upcalls from other sources to be dropped or unreasonably
 * delayed.  Otherwise, one port conducting a port scan or otherwise initiating
 * high-rate traffic spanning many flows could suppress other traffic.
 * Ideally, the datapath should present upcalls from each port in a "round
 * robin" manner, to ensure fairness.
 *
 * The client has no control over "miss" upcalls and no insight into the
 * datapath's implementation, so the datapath is entirely responsible for
 * queuing and delivering them.  On the other hand, the datapath has
 * considerable freedom of implementation.  One good approach is to maintain a
 * separate queue for each port, to prevent any given port's upcalls from
 * interfering with other ports' upcalls.  If this is impractical, then another
 * reasonable choice is to maintain some fixed number of queues and assign each
 * port to one of them.  Ports assigned to the same queue can then interfere
 * with each other, but not with ports assigned to different queues.  Other
 * approaches are also possible.
 * The client has some control over "action" upcalls: it can specify a 32-bit
 * "Netlink PID" as part of the action.  This terminology comes from the Linux
 * datapath implementation, which uses a protocol called Netlink in which a PID
 * designates a particular socket and the upcall data is delivered to the
 * socket's receive queue.  Generically, though, a Netlink PID identifies a
 * queue for upcalls.  The basic requirements on the datapath are:
 *
 *    - The datapath must provide a Netlink PID associated with each port.  The
 *      client can retrieve the PID with dpif_port_get_pid().
 *
 *    - The datapath must provide a "special" Netlink PID not associated with
 *      any port.  dpif_port_get_pid() also provides this PID.  (ovs-vswitchd
 *      uses this PID to queue special packets that must not be lost even if a
 *      port is otherwise busy, such as packets used for tunnel monitoring.)
 *
 * The minimal behavior of dpif_port_get_pid() and the treatment of the Netlink
 * PID in "action" upcalls is that dpif_port_get_pid() returns a constant value
 * and all upcalls are appended to a single queue.
 * The preferred behavior is:
 *
 *    - Each port has a PID that identifies the queue used for "miss" upcalls
 *      on that port.  (Thus, if each port has its own queue for "miss"
 *      upcalls, then each port has a different Netlink PID.)
 *
 *    - "miss" upcalls for a given port and "action" upcalls that specify that
 *      port's Netlink PID add their upcalls to the same queue.  The upcalls
 *      are delivered to the datapath's client in the order that the packets
 *      were received, regardless of whether the upcalls are "miss" or "action"
 *      upcalls.
 *
 *    - Upcalls that specify the "special" Netlink PID are queued separately.
 * Packet Format
 * =============
 *
 * The datapath interface works with packets in a particular form.  This is the
 * form taken by packets received via upcalls (i.e. by dpif_recv()).  Packets
 * supplied to the datapath for processing (i.e. to dpif_execute()) also take
 * this form.
 *
 * A VLAN tag is represented by an 802.1Q header.  If the layer below the
 * datapath interface uses another representation, then the datapath interface
 * must perform conversion.
 * The datapath interface requires all packets to fit within the MTU.  Some
 * operating systems internally process packets larger than MTU, with features
 * such as TSO and UFO.  When such a packet passes through the datapath
 * interface, it must be broken into multiple MTU or smaller sized packets for
 * presentation as upcalls.  (This does not happen often, because an upcall
 * typically contains the first packet of a flow, which is usually short.)
 *
 * Some operating system TCP/IP stacks maintain packets in an unchecksummed or
 * partially checksummed state until transmission.  The datapath interface
 * requires all host-generated packets to be fully checksummed (e.g. IP and TCP
 * checksums must be correct).  On such an OS, the datapath interface must fill
 * in these checksums.
 *
 * Packets passed through the datapath interface must be at least 14 bytes
 * long, that is, they must have a complete Ethernet header.  They are not
 * required to be padded to the minimum Ethernet length.
 * Typically, the client of a datapath begins by configuring the datapath with
 * a set of ports.  Afterward, the client runs in a loop polling for upcalls to
 * arrive.
 *
 * For each upcall received, the client examines the enclosed packet and
 * figures out what should be done with it.  For example, if the client
 * implements a MAC-learning switch, then it searches the forwarding database
 * for the packet's destination MAC and VLAN and determines the set of ports to
 * which it should be sent.  In any case, the client composes a set of datapath
 * actions to properly dispatch the packet and then directs the datapath to
 * execute those actions on the packet (e.g. with dpif_execute()).
 *
 * Most of the time, the actions that the client executed on the packet apply
 * to every packet with the same flow.  For example, the flow includes both
 * destination MAC and VLAN ID (and much more), so this is true for the
 * MAC-learning switch example above.  In such a case, the client can also
 * direct the datapath to treat any further packets in the flow in the same
 * way, using dpif_flow_put() to add a new flow entry.
 * Other tasks the client might need to perform, in addition to reacting to
 * upcalls, include:
 *
 *    - Periodically polling flow statistics, perhaps to supply to its own
 *      clients.
 *
 *    - Deleting flow entries from the datapath that haven't been used
 *      recently, to save memory.
 *
 *    - Updating flow entries whose actions should change.  For example, if a
 *      MAC learning switch learns that a MAC has moved, then it must update
 *      the actions of flow entries that sent packets to the MAC at its old
 *      location.
 *
 *    - Adding and removing ports to achieve a new configuration.
 * Thread-safety
 * =============
 *
 * Most of the dpif functions are fully thread-safe: they may be called from
 * any number of threads on the same or different dpif objects.  The exceptions
 * are:
 *
 *    - dpif_port_poll() and dpif_port_poll_wait() are conditionally
 *      thread-safe: they may be called from different threads only on
 *      different dpif objects.
 *
 *    - dpif_flow_dump_next() is conditionally thread-safe: It may be called
 *      from different threads with the same 'struct dpif_flow_dump', but all
 *      other parameters must be different for each thread.
 *
 *    - dpif_flow_dump_done() is conditionally thread-safe: All threads that
 *      share the same 'struct dpif_flow_dump' must have finished using it.
 *      This function must then be called exactly once for a particular
 *      dpif_flow_dump to finish the corresponding flow dump operation.
 *
 *    - Functions that operate on 'struct dpif_port_dump' are conditionally
 *      thread-safe with respect to those objects.  That is, one may dump ports
 *      from any number of threads at once, but each thread must use its own
 *      struct dpif_port_dump.
#include "dp-packet.h"
#include "openflow/openflow.h"
#include "openvswitch/ofp-meter.h"
#include "ovs-numa.h"
struct flow_wildcards;
int dp_register_provider(const struct dpif_class *);
int dp_unregister_provider(const char *type);
void dp_disallow_provider(const char *type);
void dp_enumerate_types(struct sset *types);
const char *dpif_normalize_type(const char *);

int dp_enumerate_names(const char *type, struct sset *names);
void dp_parse_name(const char *datapath_name, char **name, char **type);

int dpif_open(const char *name, const char *type, struct dpif **);
int dpif_create(const char *name, const char *type, struct dpif **);
int dpif_create_and_open(const char *name, const char *type, struct dpif **);
void dpif_close(struct dpif *);

bool dpif_run(struct dpif *);
void dpif_wait(struct dpif *);

const char *dpif_name(const struct dpif *);
const char *dpif_base_name(const struct dpif *);
const char *dpif_type(const struct dpif *);

bool dpif_cleanup_required(const struct dpif *);

int dpif_delete(struct dpif *);
/* Statistics for a dpif as a whole. */
struct dpif_dp_stats {
    uint64_t n_hit;      /* Number of flow table matches. */
    uint64_t n_missed;   /* Number of flow table misses. */
    uint64_t n_lost;     /* Number of misses not sent to userspace. */
    uint64_t n_flows;    /* Number of flows present. */
    uint64_t n_mask_hit; /* Number of mega flow masks visited for
                            flow table matches. */
    uint32_t n_masks;    /* Number of mega flow masks. */
};

int dpif_get_dp_stats(const struct dpif *, struct dpif_dp_stats *);
int dpif_set_features(struct dpif *, uint32_t new_features);

int dpif_get_n_offloaded_flows(struct dpif *dpif, uint64_t *n_flows);
/* Port operations. */

const char *dpif_port_open_type(const char *datapath_type,
                                const char *port_type);
int dpif_port_add(struct dpif *, struct netdev *, odp_port_t *port_nop);
int dpif_port_del(struct dpif *, odp_port_t port_no, bool local_delete);
/* A port within a datapath.
 *
 * 'name' and 'type' are suitable for passing to netdev_open(). */
struct dpif_port {
    char *name;                 /* Network device name, e.g. "eth0". */
    char *type;                 /* Network device type, e.g. "system". */
    odp_port_t port_no;         /* Port number within datapath. */
};
void dpif_port_clone(struct dpif_port *, const struct dpif_port *);
void dpif_port_destroy(struct dpif_port *);
bool dpif_port_exists(const struct dpif *dpif, const char *devname);
int dpif_port_query_by_number(const struct dpif *, odp_port_t port_no,
                              struct dpif_port *);
int dpif_port_query_by_name(const struct dpif *, const char *devname,
                            struct dpif_port *);
int dpif_port_get_name(struct dpif *, odp_port_t port_no,
                       char *name, size_t name_size);
uint32_t dpif_port_get_pid(const struct dpif *, odp_port_t port_no);
struct dpif_port_dump {
    const struct dpif *dpif;
    int error;
    void *state;
};

void dpif_port_dump_start(struct dpif_port_dump *, const struct dpif *);
bool dpif_port_dump_next(struct dpif_port_dump *, struct dpif_port *);
int dpif_port_dump_done(struct dpif_port_dump *);
/* Iterates through each DPIF_PORT in DPIF, using DUMP as state.
 *
 * Arguments all have pointer type.
 *
 * If you break out of the loop, then you need to free the dump structure by
 * hand using dpif_port_dump_done(). */
#define DPIF_PORT_FOR_EACH(DPIF_PORT, DUMP, DPIF)   \
    for (dpif_port_dump_start(DUMP, DPIF);          \
         (dpif_port_dump_next(DUMP, DPIF_PORT)      \
          ? true                                    \
          : (dpif_port_dump_done(DUMP), false));    \
        )
int dpif_port_poll(const struct dpif *, char **devnamep);
void dpif_port_poll_wait(const struct dpif *);
/* Flow table operations. */

struct dpif_flow_stats {
    uint64_t n_packets;
    uint64_t n_bytes;
    long long int used;
    uint16_t tcp_flags;
};
/* More statistics info for offloaded packets and bytes. */
struct dpif_flow_detailed_stats {
    uint64_t n_packets;
    uint64_t n_bytes;
    /* n_offload_packets are a subset of n_packets */
    uint64_t n_offload_packets;
    /* n_offload_bytes are a subset of n_bytes */
    uint64_t n_offload_bytes;
    long long int used;
    uint16_t tcp_flags;
};
struct dpif_flow_attrs {
    bool offloaded;             /* True if flow is offloaded to HW. */
    const char *dp_layer;       /* DP layer the flow is handled in. */
    const char *dp_extra_info;  /* Extra information provided by DP. */
};
struct dpif_flow_dump_types {
    bool ovs_flows;
    bool netdev_flows;
};
void dpif_flow_stats_extract(const struct flow *,
                             const struct dp_packet *packet,
                             long long int used, struct dpif_flow_stats *);
void dpif_flow_stats_format(const struct dpif_flow_stats *, struct ds *);
enum dpif_flow_put_flags {
    DPIF_FP_CREATE = 1 << 0,     /* Allow creating a new flow. */
    DPIF_FP_MODIFY = 1 << 1,     /* Allow modifying an existing flow. */
    DPIF_FP_ZERO_STATS = 1 << 2, /* Zero the stats of an existing flow. */
    DPIF_FP_PROBE = 1 << 3       /* Suppress error messages, if any. */
};
bool dpif_probe_feature(struct dpif *, const char *name,
                        const struct ofpbuf *key, const struct ofpbuf *actions,
                        const ovs_u128 *ufid);
int dpif_flow_flush(struct dpif *);
int dpif_flow_put(struct dpif *, enum dpif_flow_put_flags,
                  const struct nlattr *key, size_t key_len,
                  const struct nlattr *mask, size_t mask_len,
                  const struct nlattr *actions, size_t actions_len,
                  const ovs_u128 *ufid, const unsigned pmd_id,
                  struct dpif_flow_stats *);
int dpif_flow_del(struct dpif *,
                  const struct nlattr *key, size_t key_len,
                  const ovs_u128 *ufid, const unsigned pmd_id,
                  struct dpif_flow_stats *);
int dpif_flow_get(struct dpif *,
                  const struct nlattr *key, size_t key_len,
                  const ovs_u128 *ufid, const unsigned pmd_id,
                  struct ofpbuf *, struct dpif_flow *);
/* Flow dumping interface
 * ======================
 *
 * This interface allows iteration through all of the flows currently installed
 * in a datapath.  It is somewhat complicated by two requirements:
 *
 *    - Efficient support for dumping flows in parallel from multiple threads.
 *
 *    - Allow callers to avoid making unnecessary copies of data returned by
 *      the interface across several flows in cases where the dpif
 *      implementation has to maintain a copy of that information anyhow.
 *      (That is, allow the client visibility into any underlying batching as
 *      part of its own batching.)
 *
 *
 * Usage
 * -----
 *
 * 1. Call dpif_flow_dump_create().
 * 2. In each thread that participates in the dump (which may be just a single
 *    thread if parallelism isn't important):
 *    (a) Call dpif_flow_dump_thread_create().
 *    (b) Call dpif_flow_dump_next() repeatedly until it returns 0.
 *    (c) Call dpif_flow_dump_thread_destroy().
 * 3. Call dpif_flow_dump_destroy().
 *
 * All error reporting is deferred to the call to dpif_flow_dump_destroy().
 */
struct dpif_flow_dump *dpif_flow_dump_create(const struct dpif *, bool terse,
                                             struct dpif_flow_dump_types *);
int dpif_flow_dump_destroy(struct dpif_flow_dump *);

struct dpif_flow_dump_thread *dpif_flow_dump_thread_create(
    struct dpif_flow_dump *);
void dpif_flow_dump_thread_destroy(struct dpif_flow_dump_thread *);
#define PMD_ID_NULL OVS_CORE_UNSPEC
/* A datapath flow as dumped by dpif_flow_dump_next(). */
struct dpif_flow {
    const struct nlattr *key;     /* Flow key, as OVS_KEY_ATTR_* attrs. */
    size_t key_len;               /* 'key' length in bytes. */
    const struct nlattr *mask;    /* Flow mask, as OVS_KEY_ATTR_* attrs. */
    size_t mask_len;              /* 'mask' length in bytes. */
    const struct nlattr *actions; /* Actions, as OVS_ACTION_ATTR_* attrs. */
    size_t actions_len;           /* 'actions' length in bytes. */
    ovs_u128 ufid;                /* Unique flow identifier. */
    bool ufid_present;            /* True if 'ufid' was provided by datapath. */
    unsigned pmd_id;              /* Datapath poll mode driver id. */
    struct dpif_flow_stats stats; /* Flow statistics. */
    struct dpif_flow_attrs attrs; /* Flow attributes. */
};
int dpif_flow_dump_next(struct dpif_flow_dump_thread *,
                        struct dpif_flow *flows, int max_flows);

#define DPIF_FLOW_BUFSIZE 2048
/* Operation batching interface.
 *
 * Some datapaths are faster at performing N operations together than the same
 * N operations individually, hence an interface for batching. */

enum dpif_op_type {
    DPIF_OP_FLOW_PUT = 1,
    DPIF_OP_FLOW_DEL,
    DPIF_OP_EXECUTE,
    DPIF_OP_FLOW_GET,
};

/* offload_type argument types to (*operate) interface */
enum dpif_offload_type {
    DPIF_OFFLOAD_AUTO,    /* Offload if possible, fallback to software. */
    DPIF_OFFLOAD_NEVER,   /* Never offload to hardware. */
    DPIF_OFFLOAD_ALWAYS,  /* Always offload to hardware. */
};
/* Add or modify a flow.
 *
 * The flow is specified by the Netlink attributes with types OVS_KEY_ATTR_* in
 * the 'key_len' bytes starting at 'key'.  The associated actions are specified
 * by the Netlink attributes with types OVS_ACTION_ATTR_* in the 'actions_len'
 * bytes starting at 'actions'.
 *
 *    - If the flow's key does not exist in the dpif, then the flow will be
 *      added if 'flags' includes DPIF_FP_CREATE.  Otherwise the operation will
 *      fail with ENOENT.
 *
 *      If the operation succeeds, then 'stats', if nonnull, will be zeroed.
 *
 *    - If the flow's key does exist in the dpif, then the flow's actions will
 *      be updated if 'flags' includes DPIF_FP_MODIFY.  Otherwise the operation
 *      will fail with EEXIST.  If the flow's actions are updated, then its
 *      statistics will be zeroed if 'flags' includes DPIF_FP_ZERO_STATS, and
 *      left as-is otherwise.
 *
 *      If the operation succeeds, then 'stats', if nonnull, will be set to the
 *      flow's statistics before the update.
 *
 *    - If the datapath implements multiple PMD threads, each with its own
 *      flow table, 'pmd_id' should be used to specify the particular polling
 *      thread for the operation.  PMD_ID_NULL means that the flow should
 *      be put on all the polling threads.
 */
struct dpif_flow_put {
    /* Input. */
    enum dpif_flow_put_flags flags; /* DPIF_FP_*. */
    const struct nlattr *key;       /* Flow to put. */
    size_t key_len;                 /* Length of 'key' in bytes. */
    const struct nlattr *mask;      /* Mask to put. */
    size_t mask_len;                /* Length of 'mask' in bytes. */
    const struct nlattr *actions;   /* Actions to perform on flow. */
    size_t actions_len;             /* Length of 'actions' in bytes. */
    const ovs_u128 *ufid;           /* Optional unique flow identifier. */
    unsigned pmd_id;                /* Datapath poll mode driver id. */

    /* Output. */
    struct dpif_flow_stats *stats;  /* Optional flow statistics. */
};
/* Delete a flow.
 *
 * The flow is specified by the Netlink attributes with types OVS_KEY_ATTR_* in
 * the 'key_len' bytes starting at 'key', or the unique identifier 'ufid'.  If
 * the flow was created using 'ufid', then 'ufid' must be specified to delete
 * the flow.  If both are specified, 'key' will be ignored for flow deletion.
 * Succeeds with status 0 if the flow is deleted, or fails with ENOENT if the
 * dpif does not contain such a flow.
 *
 * Callers should always provide the 'key' to improve dpif logging in the event
 * of errors or unexpected behaviour.
 *
 * If the datapath implements multiple polling threads, each with its own flow
 * table, 'pmd_id' should be used to specify the particular polling thread for
 * the operation.  PMD_ID_NULL means that the flow should be deleted from all
 * the polling threads.
 *
 * If the operation succeeds, then 'stats', if nonnull, will be set to the
 * flow's statistics before its deletion. */
struct dpif_flow_del {
    /* Input. */
    const struct nlattr *key;       /* Flow to delete. */
    size_t key_len;                 /* Length of 'key' in bytes. */
    const ovs_u128 *ufid;           /* Unique identifier of flow to delete. */
    bool terse;                     /* OK to skip sending/receiving full flow
                                       info. */
    unsigned pmd_id;                /* Datapath poll mode driver id. */

    /* Output. */
    struct dpif_flow_stats *stats;  /* Optional flow statistics. */
};
/* Executes actions on a specified packet.
 *
 * Performs the 'actions_len' bytes of actions in 'actions' on the Ethernet
 * frame in 'packet' and on the packet metadata in 'md'.  May modify both
 * 'packet' and 'md'.
 *
 * Some dpif providers do not implement every action.  The Linux kernel
 * datapath, in particular, does not implement ARP field modification.  If
 * 'needs_help' is true, the dpif layer executes in userspace all of the
 * actions that it can, and for OVS_ACTION_ATTR_OUTPUT and
 * OVS_ACTION_ATTR_USERSPACE actions it passes the packet through to the dpif
 * implementation.
 *
 * This works even if 'actions_len' is too long for a Netlink attribute. */
struct dpif_execute {
    const struct nlattr *actions;   /* Actions to execute on packet. */
    size_t actions_len;             /* Length of 'actions' in bytes. */
    bool needs_help;
    bool probe;                     /* Suppress error messages. */
    unsigned int mtu;               /* Maximum transmission unit to fragment.
                                       0 if not a fragmented packet */
    const struct flow *flow;        /* Flow extracted from 'packet'. */

    /* Input, but possibly modified as a side effect of execution. */
    struct dp_packet *packet;       /* Packet to execute. */
};
/* Queries the dpif for a flow entry.
 *
 * The flow is specified by the Netlink attributes with types OVS_KEY_ATTR_* in
 * the 'key_len' bytes starting at 'key', or the unique identifier 'ufid'.  If
 * the flow was created using 'ufid', then 'ufid' must be specified to fetch
 * the flow.  If both are specified, 'key' will be ignored for the flow query.
 * 'buffer' must point to an initialized buffer, with a recommended size of
 * DPIF_FLOW_BUFSIZE bytes.
 *
 * On success, 'flow' will be populated with the mask, actions, stats and attrs
 * for the datapath flow corresponding to 'key'.  The mask and actions may
 * point within '*buffer', or may point at RCU-protected data.  Therefore,
 * callers that wish to hold these over quiescent periods must make a copy of
 * these fields before quiescing.
 *
 * Callers should always provide 'key' to improve dpif logging in the event of
 * errors or unexpected behaviour.
 *
 * If the datapath implements multiple polling threads, each with its own flow
 * table, 'pmd_id' should be used to specify the particular polling thread for
 * the operation.  PMD_ID_NULL means that the datapath will return the first
 * matching flow from any poll thread.
 *
 * Succeeds with status 0 if the flow is fetched, or fails with ENOENT if no
 * such flow exists.  Other failures are indicated with a positive errno value.
 */
struct dpif_flow_get {
    /* Input. */
    const struct nlattr *key;       /* Flow to get. */
    size_t key_len;                 /* Length of 'key' in bytes. */
    const ovs_u128 *ufid;           /* Unique identifier of flow to get. */
    unsigned pmd_id;                /* Datapath poll mode driver id. */
    struct ofpbuf *buffer;          /* Storage for output parameters. */

    /* Output. */
    struct dpif_flow *flow;         /* Resulting flow from datapath. */
};
int dpif_execute(struct dpif *, struct dpif_execute *);
struct dpif_op {
    enum dpif_op_type type;
    int error;
    union {
        struct dpif_flow_put flow_put;
        struct dpif_flow_del flow_del;
        struct dpif_execute execute;
        struct dpif_flow_get flow_get;
    };
};
void dpif_operate(struct dpif *, struct dpif_op **ops, size_t n_ops,
                  enum dpif_offload_type);
enum dpif_upcall_type {
    DPIF_UC_MISS,               /* Miss in flow table. */
    DPIF_UC_ACTION,             /* OVS_ACTION_ATTR_USERSPACE action. */
    DPIF_N_UC_TYPES
};

const char *dpif_upcall_type_to_string(enum dpif_upcall_type);
/* A packet passed up from the datapath to userspace.
 *
 * The 'packet', 'key' and 'userdata' may point into data in a buffer
 * provided by the caller, so the buffer should be released only after the
 * upcall processing has been finished.
 *
 * While being processed, the 'packet' may be reallocated, so the packet must
 * be separately released with ofpbuf_uninit(). */
struct dpif_upcall {
    /* All types. */
    struct dp_packet packet;    /* Packet data.  'dp_packet' should be the
                                   first member to avoid a hole, because
                                   'rte_mbuf' in dp_packet is aligned at
                                   least on a 64-byte boundary. */
    enum dpif_upcall_type type;
    struct nlattr *key;         /* Flow key. */
    size_t key_len;             /* Length of 'key' in bytes. */
    ovs_u128 ufid;              /* Unique flow identifier for 'key'. */
    struct nlattr *mru;         /* Maximum receive unit. */
    struct nlattr *hash;        /* Packet hash. */
    struct nlattr *cutlen;      /* Number of bytes to shrink from the end. */

    /* DPIF_UC_ACTION only. */
    struct nlattr *userdata;    /* Argument to OVS_ACTION_ATTR_USERSPACE. */
    struct nlattr *out_tun_key; /* Output tunnel key. */
    struct nlattr *actions;     /* Argument to OVS_ACTION_ATTR_USERSPACE. */
};
/* A callback to notify the higher layer that the dpif is about to be purged,
 * so that the higher layer can react (e.g. by grabbing all flow stats before
 * they are gone).  This function is currently implemented only by
 * dpif-netdev.
 *
 * The caller needs to provide the 'aux' pointer passed down by the higher
 * layer from the dpif_register_dp_purge_cb() function and the 'pmd_id' of
 * the polling thread. */
typedef void dp_purge_callback(void *aux, unsigned pmd_id);
void dpif_register_dp_purge_cb(struct dpif *, dp_purge_callback *, void *aux);
/* A callback to process an upcall, currently implemented only by dpif-netdev.
 *
 * The caller provides the 'packet' and 'flow' to process, the corresponding
 * 'ufid' as generated by odp_flow_key_hash(), the polling thread id 'pmd_id',
 * the 'type' of the upcall, and if 'type' is DPIF_UC_ACTION then the
 * 'userdata' attached to the action.
 *
 * The callback must fill in 'actions' with the datapath actions to apply to
 * 'packet'.  'wc' and 'put_actions' will either be both null or both nonnull.
 * If they are nonnull, then the caller will install a flow entry to process
 * all future packets that match 'flow' and 'wc'; the callback must store a
 * wildcard mask suitable for that purpose into 'wc'.  If the actions to store
 * into the flow entry are the same as 'actions', then the callback may leave
 * 'put_actions' empty; otherwise it must store the desired actions into
 * 'put_actions'.
 *
 * Returns 0 if successful, ENOSPC if the flow limit has been reached and no
 * flow should be installed, or otherwise a positive errno value. */
typedef int upcall_callback(const struct dp_packet *packet,
                            const struct flow *flow,
                            ovs_u128 *ufid,
                            unsigned pmd_id,
                            enum dpif_upcall_type type,
                            const struct nlattr *userdata,
                            struct ofpbuf *actions,
                            struct flow_wildcards *wc,
                            struct ofpbuf *put_actions,
                            void *aux);
void dpif_register_upcall_cb(struct dpif *, upcall_callback *, void *aux);

int dpif_recv_set(struct dpif *, bool enable);
int dpif_handlers_set(struct dpif *, uint32_t n_handlers);
int dpif_set_config(struct dpif *, const struct smap *cfg);
int dpif_port_set_config(struct dpif *, odp_port_t, const struct smap *cfg);
int dpif_recv(struct dpif *, uint32_t handler_id, struct dpif_upcall *,
              struct ofpbuf *);
void dpif_recv_purge(struct dpif *);
void dpif_recv_wait(struct dpif *, uint32_t handler_id);
void dpif_enable_upcall(struct dpif *);
void dpif_disable_upcall(struct dpif *);

void dpif_print_packet(struct dpif *, struct dpif_upcall *);
void dpif_meter_get_features(const struct dpif *,
                             struct ofputil_meter_features *);
int dpif_meter_set(struct dpif *, ofproto_meter_id meter_id,
                   struct ofputil_meter_config *);
int dpif_meter_get(const struct dpif *, ofproto_meter_id meter_id,
                   struct ofputil_meter_stats *, uint16_t n_bands);
int dpif_meter_del(struct dpif *, ofproto_meter_id meter_id,
                   struct ofputil_meter_stats *, uint16_t n_bands);
/* Bit-mask for hashing a flow down to a bucket. */
#define BOND_MASK 0xff
#define BOND_BUCKETS (BOND_MASK + 1)

int dpif_bond_add(struct dpif *, uint32_t bond_id, odp_port_t *member_map);
int dpif_bond_del(struct dpif *, uint32_t bond_id);
int dpif_bond_stats_get(struct dpif *, uint32_t bond_id, uint64_t *n_bytes);
bool dpif_supports_lb_output_action(const struct dpif *);
void dpif_get_netflow_ids(const struct dpif *,
                          uint8_t *engine_type, uint8_t *engine_id);

int dpif_queue_to_priority(const struct dpif *, uint32_t queue_id,
                           uint32_t *priority);

int dpif_get_pmds_for_port(const struct dpif *dpif, odp_port_t port_no,
                           unsigned int **pmds, size_t *n);

char *dpif_get_dp_version(const struct dpif *);
bool dpif_supports_tnl_push_pop(const struct dpif *);
bool dpif_supports_explicit_drop_action(const struct dpif *);
void log_flow_message(const struct dpif *dpif, int error,
                      const struct vlog_module *module,
                      const char *operation,
                      const struct nlattr *key, size_t key_len,
                      const struct nlattr *mask, size_t mask_len,
                      const ovs_u128 *ufid,
                      const struct dpif_flow_stats *stats,
                      const struct nlattr *actions, size_t actions_len);
void log_flow_put_message(const struct dpif *,
                          const struct vlog_module *,
                          const struct dpif_flow_put *,
                          int error);
void log_flow_del_message(const struct dpif *,
                          const struct vlog_module *,
                          const struct dpif_flow_del *,
                          int error);
void log_execute_message(const struct dpif *,
                         const struct vlog_module *,
                         const struct dpif_execute *,
                         bool subexecute, int error);
void log_flow_get_message(const struct dpif *,
                          const struct vlog_module *,
                          const struct dpif_flow_get *,
                          int error);