.. _lists:

Type-safe containers
====================

.. note::

   This section previously used the term *list*; it was changed to *container*
   to be more clear.

Common container interface
--------------------------

FRR includes a set of container implementations with abstracted
common APIs.  The purpose of this is to make it easy to swap out one
data structure for another while also making the code easier to read and write.
There is one API for unsorted containers and a similar but not identical API
for sorted containers, while heaps use a middle ground of the two.

For unsorted containers, the following implementations exist:

- single-linked list with tail pointer (e.g. STAILQ in BSD)

- double-linked list

- atomic single-linked list with tail pointer


There is also one oddball structure, which is only partially sorted:

- an 8-ary heap


For sorted containers, these data structures are implemented:

- single-linked list

- atomic single-linked list

- skiplist

- red-black tree (based on OpenBSD RB_TREE)

- hash table (note below)

Except for hash tables, each of the sorted data structures has a variant with
unique and non-unique items.  Hash tables always require unique items
and mostly follow the "sorted" API but use the hash value as sorting
key.  Also, iterating while modifying does not work with hash tables.
Conversely, the heap always has non-unique items, but iterating while modifying
doesn't work either.


The following sorted structures are likely to be implemented at some point
in the future:

- atomic skiplist

- atomic hash table (note below)


The APIs are all designed to be as type-safe as possible.  This means that
there will be a compiler warning when an item doesn't match the container, or
the return value has a different type, or other similar situations.  **You
should never use casts with these APIs.**  If a cast is necessary in relation
to these APIs, there is probably something wrong with the overall design.

Only the following pieces use dynamically allocated memory:

- the hash table itself is dynamically grown and shrunk

- skiplists store up to 4 next pointers inline but will dynamically allocate
  memory to hold an item's 5th up to 16th next pointer (if they exist)

- the heap uses a dynamically grown and shrunk array of items

Cheat sheet
-----------

Available types:

::

   DECLARE_LIST
   DECLARE_ATOMLIST
   DECLARE_DLIST

   DECLARE_HEAP

   DECLARE_SORTLIST_UNIQ
   DECLARE_SORTLIST_NONUNIQ
   DECLARE_ATOMSORT_UNIQ
   DECLARE_ATOMSORT_NONUNIQ
   DECLARE_SKIPLIST_UNIQ
   DECLARE_SKIPLIST_NONUNIQ
   DECLARE_RBTREE_UNIQ
   DECLARE_RBTREE_NONUNIQ

   DECLARE_HASH

Functions provided:

+------------------------------------+-------+------+------+---------+------------+
| Function                           | LIST  | HEAP | HASH | \*_UNIQ | \*_NONUNIQ |
+====================================+=======+======+======+=========+============+
| _init, _fini                       | yes   | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+
| _first, _next, _next_safe,         | yes   | yes  | yes  | yes     | yes        |
|                                    |       |      |      |         |            |
| _const_first, _const_next          |       |      |      |         |            |
+------------------------------------+-------+------+------+---------+------------+
| _last, _prev, _prev_safe,          | DLIST | --   | --   | RB only | RB only    |
|                                    | only  |      |      |         |            |
| _const_last, _const_prev           |       |      |      |         |            |
+------------------------------------+-------+------+------+---------+------------+
| _swap_all                          | yes   | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+
| _anywhere                          | yes   | --   | --   | --      | --         |
+------------------------------------+-------+------+------+---------+------------+
| _add_head, _add_tail, _add_after   | yes   | --   | --   | --      | --         |
+------------------------------------+-------+------+------+---------+------------+
| _add                               | --    | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+
| _member                            | yes   | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+
| _del, _pop                         | yes   | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+
| _find, _const_find                 | --    | --   | yes  | yes     | --         |
+------------------------------------+-------+------+------+---------+------------+
| _find_lt, _find_gteq,              | --    | --   | --   | yes     | yes        |
|                                    |       |      |      |         |            |
| _const_find_lt, _const_find_gteq   |       |      |      |         |            |
+------------------------------------+-------+------+------+---------+------------+
| use with frr_each() macros         | yes   | yes  | yes  | yes     | yes        |
+------------------------------------+-------+------+------+---------+------------+


Datastructure type setup
------------------------

Each of the data structures has a ``PREDECL_*`` and a ``DECLARE_*`` macro to
set up an "instantiation" of the container.  This works somewhat similarly to
C++ templates, though it is much simpler.

**In all following text, the Z prefix is replaced with a name chosen
for the instance of the datastructure.**

The common setup pattern will look like this:

.. code-block:: c

   #include <typesafe.h>

   PREDECL_XXX(Z);
   struct item {
           int otherdata;
           struct Z_item mylistitem;
   };

   struct Z_head mylisthead;

   /* unsorted: */
   DECLARE_XXX(Z, struct item, mylistitem);

   /* sorted, items that compare as equal cannot be added to list */
   int compare_func(const struct item *a, const struct item *b);
   DECLARE_XXX_UNIQ(Z, struct item, mylistitem, compare_func);

   /* sorted, items that compare as equal can be added to list */
   int compare_func(const struct item *a, const struct item *b);
   DECLARE_XXX_NONUNIQ(Z, struct item, mylistitem, compare_func);

   /* hash tables: */
   int compare_func(const struct item *a, const struct item *b);
   uint32_t hash_func(const struct item *a);
   DECLARE_XXX(Z, struct item, mylistitem, compare_func, hash_func);

``XXX`` is replaced with the name of the data structure, e.g. ``SKIPLIST``
or ``ATOMLIST``.  The ``DECLARE_XXX`` invocation can either occur in a `.h`
file (if the container needs to be accessed from several C files) or it can be
placed in a `.c` file (if the container is only accessed from that file.)  The
``PREDECL_XXX`` invocation defines the ``struct Z_item`` and ``struct
Z_head`` types and must therefore occur before these are used.

To switch between compatible data structures, only these two lines need to be
changed.  To switch to a data structure with a different API, some source
changes are necessary.

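As a concrete illustration, here is a minimal sketch of an unsorted
double-linked list instantiation (the names ``itemlist`` and ``struct item``
are placeholders chosen for this example):

.. code-block:: c

   #include <typesafe.h>

   PREDECL_DLIST(itemlist);

   struct item {
           int value;
           struct itemlist_item ilist;
   };

   DECLARE_DLIST(itemlist, struct item, ilist);

   /* Switching to e.g. a sorted, unique skiplist only changes the two macro
    * lines above to PREDECL_SKIPLIST_UNIQ()/DECLARE_SKIPLIST_UNIQ() and adds
    * an item comparison function; insertion calls change from
    * itemlist_add_tail() to itemlist_add().
    */
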
Common iteration macros
-----------------------

The following iteration macros work across all data structures:

.. c:macro:: frr_each(Z, head, item)

   Equivalent to:

   .. code-block:: c

      for (item = Z_first(&head); item; item = Z_next(&head, item))

   Note that this will fail if the container is modified while being iterated
   over.

.. c:macro:: frr_each_safe(Z, head, item)

   Same as the previous, but the next element is pre-loaded into a "hidden"
   variable (named ``Z_safe``.)  Equivalent to:

   .. code-block:: c

      for (item = Z_first(&head); item; item = next) {
              next = Z_next_safe(&head, item);
              ...
      }

   .. warning::

      Iterating over hash tables while adding or removing items is not
      possible.  The iteration position will be corrupted when the hash
      table is resized while iterating.  This will cause items to be
      skipped or iterated over twice.

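For example, ``frr_each_safe()`` allows deleting items while walking a
container (hash tables excepted, as noted above).  A sketch, reusing the
hypothetical ``itemlist`` container from the setup example and an
``item_is_stale()`` predicate invented for illustration:

.. code-block:: c

   struct item *i;

   frr_each_safe(itemlist, &itemhead, i) {
           if (item_is_stale(i)) {
                   itemlist_del(&itemhead, i);
                   XFREE(MTYPE_ITEM, i);
           }
   }
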
.. c:macro:: frr_each_from(Z, head, item, from)

   Iterates over the container, starting at item ``from``.  This variant is
   "safe" as in the previous macro.  Equivalent to:

   .. code-block:: c

      for (item = from; item; item = from) {
              from = Z_next_safe(&head, item);
              ...
      }

   .. note::

      The ``from`` variable is written to.  This is intentional - you can
      resume iteration after breaking out of the loop by keeping the ``from``
      value persistent and reusing it for the next loop.

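A short sketch of this resume pattern, again using the hypothetical
``itemlist`` container (``do_work()`` and the batch size are placeholders):

.. code-block:: c

   /* kept across calls, e.g. in some state struct */
   static struct item *resume_at;

   void process_batch(void)
   {
           struct item *i;
           unsigned int n = 0;

           if (!resume_at)
                   resume_at = itemlist_first(&itemhead);

           frr_each_from(itemlist, &itemhead, i, resume_at) {
                   do_work(i);
                   if (++n >= 100)
                           break; /* resume_at already holds the next item */
           }
   }
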
.. c:macro:: frr_rev_each(Z, head, item)
.. c:macro:: frr_rev_each_safe(Z, head, item)
.. c:macro:: frr_rev_each_from(Z, head, item, from)

   Reverse direction variants of the above.  Only supported on containers that
   implement ``_last`` and ``_prev`` (i.e. ``RBTREE`` and ``DLIST``).

To iterate over ``const`` pointers, add ``_const`` to the name of the
datastructure (``Z`` above), e.g. ``frr_each (mylist, head, item)`` becomes
``frr_each (mylist_const, head, item)``.

Common API
----------

The following documentation assumes that a container has been defined using
``Z`` as the name, with ``itemtype`` being the type of the items (e.g.
``struct item``.)

.. c:function:: void Z_init(struct Z_head *)

   Initializes the container for use.  For most implementations, this just
   sets some values.  Hash tables are the only implementation that allocates
   memory in this call.

.. c:function:: void Z_fini(struct Z_head *)

   Reverse the effects of :c:func:`Z_init()`.  The container must be empty
   when this function is called.

   .. warning::

      This function may ``assert()`` if the container is not empty.

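Putting these together, a typical container lifecycle looks roughly like the
following sketch (again using the hypothetical ``itemlist`` instantiation;
``MTYPE_ITEM`` is likewise just an example memory type):

.. code-block:: c

   struct itemlist_head itemhead;

   void example_lifecycle(void)
   {
           struct item *i;

           itemlist_init(&itemhead);

           i = XCALLOC(MTYPE_ITEM, sizeof(*i));
           i->value = 1;
           itemlist_add_tail(&itemhead, i);

           /* ... use the container ... */

           /* the container must be emptied before calling _fini() */
           while ((i = itemlist_pop(&itemhead)))
                   XFREE(MTYPE_ITEM, i);

           itemlist_fini(&itemhead);
   }
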
.. c:function:: size_t Z_count(const struct Z_head *)

   Returns the number of items in a structure.  All structures store a
   counter in their `Z_head` so that calling this function completes
   in O(1).

   .. note::

      For atomic containers with concurrent access, the value will already be
      outdated by the time this function returns and can therefore only be
      used as an estimate.

.. c:function:: bool Z_member(const struct Z_head *, const itemtype *)

   Determines whether some item is a member of the given container.  The
   item must either be valid on some container, or set to all zeroes.

   On some containers, if no faster way to determine membership is possible,
   this is simply ``item == Z_find(head, item)``.

   Not currently available for atomic containers.

.. c:function:: const itemtype *Z_const_first(const struct Z_head *)
.. c:function:: itemtype *Z_first(struct Z_head *)

   Returns the first item in the structure, or ``NULL`` if the structure is
   empty.  This is O(1) for all data structures except red-black trees
   where it is O(log n).

.. c:function:: const itemtype *Z_const_last(const struct Z_head *)
.. c:function:: itemtype *Z_last(struct Z_head *)

   Last item in the structure, or ``NULL``.  Only available on containers
   that support reverse iteration (i.e. ``RBTREE`` and ``DLIST``).

.. c:function:: itemtype *Z_pop(struct Z_head *)

   Remove and return the first item in the structure, or ``NULL`` if the
   structure is empty.  Like :c:func:`Z_first`, this is O(1) for all
   data structures except red-black trees where it is O(log n) again.

   This function can be used to build queues (with unsorted structures) or
   priority queues (with sorted structures.)

   Another common pattern is deleting all container items:

   .. code-block:: c

      while ((item = Z_pop(head)))
              item_free(item);

   .. note::

      This function can - and should - be used with hash tables.  It is not
      affected by the "modification while iterating" problem.  To remove
      all items from a hash table, use the loop demonstrated above.

.. c:function:: const itemtype *Z_const_next(const struct Z_head *, const itemtype *prev)
.. c:function:: itemtype *Z_next(struct Z_head *, itemtype *prev)

   Return the item that follows after ``prev``, or ``NULL`` if ``prev`` is
   the last item.

   .. warning::

      ``prev`` must not be ``NULL``!  Use :c:func:`Z_next_safe()` if
      ``prev`` might be ``NULL``.

.. c:function:: itemtype *Z_next_safe(struct Z_head *, itemtype *prev)

   Same as :c:func:`Z_next()`, except that ``NULL`` is returned if
   ``prev`` is ``NULL``.

.. c:function:: const itemtype *Z_const_prev(const struct Z_head *, const itemtype *next)
.. c:function:: itemtype *Z_prev(struct Z_head *, itemtype *next)
.. c:function:: itemtype *Z_prev_safe(struct Z_head *, itemtype *next)

   As above, but preceding item.  Only available on structures that support
   reverse iteration (i.e. ``RBTREE`` and ``DLIST``).

.. c:function:: itemtype *Z_del(struct Z_head *, itemtype *item)

   Remove ``item`` from the container and return it.

   .. note::

      This function's behaviour is undefined if ``item`` is not actually
      on the container.  Some structures return ``NULL`` in this case while
      others return ``item``.  The function may also call ``assert()`` (but
      most don't.)

.. c:function:: itemtype *Z_swap_all(struct Z_head *, struct Z_head *)

   Swap the contents of two containers (of identical type).  This exchanges
   the contents of the two head structures and updates pointers if necessary
   for the particular data structure.  Fast for all structures.

   (Not currently available on atomic containers.)

.. todo::

   ``Z_del_after()`` / ``Z_del_hint()``?

API for unsorted structures
---------------------------

Since the insertion position is not pre-defined for unsorted data, there
are several functions exposed to insert data:

.. note::

   ``item`` must not be ``NULL`` for any of the following functions.

.. c:macro:: DECLARE_XXX(Z, type, field)

   :param listtype XXX: ``LIST``, ``DLIST`` or ``ATOMLIST`` to select a data
      structure implementation.
   :param token Z: Gives the name prefix that is used for the functions
      created for this instantiation.  ``DECLARE_XXX(foo, ...)``
      gives ``struct foo_item``, ``foo_add_head()``, ``foo_count()``, etc.
      Note that this must match the value given in ``PREDECL_XXX(foo)``.
   :param typename type: Specifies the data type of the list items, e.g.
      ``struct item``.  Note that ``struct`` must be added here; it is not
      automatically added.
   :param token field: References a struct member of ``type`` that must be
      typed as ``struct foo_item``.  This struct member is used to
      store "next" pointers or other data structure specific data.

.. c:function:: void Z_add_head(struct Z_head *, itemtype *item)

   Insert an item at the beginning of the structure, before the first item.
   This is an O(1) operation for non-atomic lists.

.. c:function:: void Z_add_tail(struct Z_head *, itemtype *item)

   Insert an item at the end of the structure, after the last item.
   This is also an O(1) operation for non-atomic lists.

.. c:function:: void Z_add_after(struct Z_head *, itemtype *after, itemtype *item)

   Insert ``item`` after the ``after`` item.  If ``after`` is ``NULL``, the
   item is inserted at the beginning of the list as with :c:func:`Z_add_head`.
   This is also an O(1) operation for non-atomic lists.

   A common pattern is to keep a "previous" pointer around while iterating:

   .. code-block:: c

      itemtype *prev = NULL, *item;

      frr_each_safe(Z, head, item) {
              if (something) {
                      Z_add_after(head, prev, item);
                      break;
              }
              prev = item;
      }

   .. todo::

      maybe flip the order of ``item`` & ``after``?
      ``Z_add_after(head, item, after)``

.. c:function:: bool Z_anywhere(const itemtype *)

   Returns whether an item is a member of *any* container of this type.
   The item must either be valid on some container, or set to all zeroes.

   Guaranteed to be fast (pointer compare or similar.)

   Not currently available for sorted and atomic containers.  Might be added
   for sorted containers at some point (when needed.)

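For example, ``Z_anywhere()`` can guard a removal when an item may or may not
currently be on a list.  A sketch using the hypothetical ``itemlist``
container, assuming ``itemhead`` is the only container of that type and that
items are zero-initialized (e.g. via ``XCALLOC()``) when allocated:

.. code-block:: c

   void detach_item(struct item *i)
   {
           /* safe because i is either on itemhead or still all zeroes */
           if (itemlist_anywhere(i))
                   itemlist_del(&itemhead, i);
   }
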
API for sorted structures
-------------------------

Sorted data structures do not need to have an insertion position specified,
therefore the insertion calls are different from unsorted containers.  Also,
sorted containers can be searched for a value.

.. c:macro:: DECLARE_XXX_UNIQ(Z, type, field, compare_func)

   :param listtype XXX: One of the following:
      ``SORTLIST`` (single-linked sorted list), ``SKIPLIST`` (skiplist),
      ``RBTREE`` (RB-tree) or ``ATOMSORT`` (atomic single-linked list).
   :param token Z: Gives the name prefix that is used for the functions
      created for this instantiation.  ``DECLARE_XXX(foo, ...)``
      gives ``struct foo_item``, ``foo_add()``, ``foo_count()``, etc.  Note
      that this must match the value given in ``PREDECL_XXX(foo)``.
   :param typename type: Specifies the data type of the items, e.g.
      ``struct item``.  Note that ``struct`` must be added here; it is not
      automatically added.
   :param token field: References a struct member of ``type`` that must be
      typed as ``struct foo_item``.  This struct member is used to
      store "next" pointers or other data structure specific data.
   :param funcptr compare_func: Item comparison function, must have the
      following function signature:
      ``int function(const itemtype *, const itemtype *)``.  This function
      may be static if the container is only used in one file.

.. c:macro:: DECLARE_XXX_NONUNIQ(Z, type, field, compare_func)

   Same as above, but allow adding multiple items to the container that
   compare as equal in ``compare_func``.  Ordering between these items is
   undefined and depends on the container implementation.

.. c:function:: itemtype *Z_add(struct Z_head *, itemtype *item)

   Insert an item at the appropriate sorted position.  If another item exists
   in the container that compares as equal (``compare_func()`` == 0), ``item``
   is not inserted and the already-existing item in the container is
   returned.  Otherwise, on successful insertion, ``NULL`` is returned.

   For ``_NONUNIQ`` containers, this function always returns ``NULL`` since
   ``item`` can always be successfully added to the container.

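The return value makes an "insert or reuse existing" pattern straightforward.
A sketch, assuming a hypothetical ``_UNIQ`` container named ``itemtree``
holding ``struct item`` keyed by an integer ``key`` field:

.. code-block:: c

   struct item *item_get(struct itemtree_head *head, int key)
   {
           struct item *i, *existing;

           i = XCALLOC(MTYPE_ITEM, sizeof(*i));
           i->key = key;

           existing = itemtree_add(head, i);
           if (existing) {
                   /* an equal item was already present; keep that one */
                   XFREE(MTYPE_ITEM, i);
                   return existing;
           }
           return i;
   }
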
.. c:function:: const itemtype *Z_const_find(const struct Z_head *, const itemtype *ref)
.. c:function:: itemtype *Z_find(struct Z_head *, const itemtype *ref)

   Search the container for an item that compares equal to ``ref``.  If no
   equal item is found, return ``NULL``.

   This function is likely used with a temporary stack-allocated value for
   ``ref`` like so:

   .. code-block:: c

      itemtype searchfor = { .foo = 123 };

      itemtype *item = Z_find(head, &searchfor);

   .. note::

      The ``Z_find()`` function is only available for containers that contain
      unique items (i.e. ``DECLARE_XXX_UNIQ``.)  This is because on a container
      with non-unique items, more than one item may compare as equal to
      the item that is searched for.

.. c:function:: const itemtype *Z_const_find_gteq(const struct Z_head *, const itemtype *ref)
.. c:function:: itemtype *Z_find_gteq(struct Z_head *, const itemtype *ref)

   Search the container for an item that compares greater or equal to
   ``ref``.  See :c:func:`Z_find()` above.

.. c:function:: const itemtype *Z_const_find_lt(const struct Z_head *, const itemtype *ref)
.. c:function:: itemtype *Z_find_lt(struct Z_head *, const itemtype *ref)

   Search the container for an item that compares less than
   ``ref``.  See :c:func:`Z_find()` above.

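``Z_find_gteq()`` combines naturally with :c:macro:`frr_each_from` to walk a
sub-range of a sorted container.  A sketch using the hypothetical ``itemtree``
container from the previous example (``treehead`` and ``do_something()`` are
placeholders):

.. code-block:: c

   struct item ref = { .key = 100 }, *i, *from;

   /* walk all items with key >= 100, stopping once key reaches 200 */
   from = itemtree_find_gteq(&treehead, &ref);
   frr_each_from(itemtree, &treehead, i, from) {
           if (i->key >= 200)
                   break;
           do_something(i);
   }
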
API for hash tables
-------------------

.. c:macro:: DECLARE_HASH(Z, type, field, compare_func, hash_func)

   :param listtype HASH: Only ``HASH`` is currently available.
   :param token Z: Gives the name prefix that is used for the functions
      created for this instantiation.  ``DECLARE_XXX(foo, ...)``
      gives ``struct foo_item``, ``foo_add()``, ``foo_count()``, etc.  Note
      that this must match the value given in ``PREDECL_XXX(foo)``.
   :param typename type: Specifies the data type of the items, e.g.
      ``struct item``.  Note that ``struct`` must be added here; it is not
      automatically added.
   :param token field: References a struct member of ``type`` that must be
      typed as ``struct foo_item``.  This struct member is used to
      store "next" pointers or other data structure specific data.
   :param funcptr compare_func: Item comparison function, must have the
      following function signature:
      ``int function(const itemtype *, const itemtype *)``.  This function
      may be static if the container is only used in one file.  For hash
      tables, this function is only used to check for equality; the ordering
      is ignored.
   :param funcptr hash_func: Hash calculation function, must have the
      following function signature:
      ``uint32_t function(const itemtype *)``.  The hash value for items
      stored in a hash table is cached in each item, so this value need not
      be cached by the user code.

   .. warning::

      Items that compare as equal cannot be inserted.  Refer to the notes
      about sorted structures in the previous section.


.. c:function:: void Z_init_size(struct Z_head *, size_t size)

   Same as :c:func:`Z_init()` but preset the minimum hash table size to
   ``size``.

Hash tables also support :c:func:`Z_add()` and :c:func:`Z_find()` with
the same semantics as noted above.  :c:func:`Z_find_gteq()` and
:c:func:`Z_find_lt()` are **not** provided for hash tables.

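A minimal hash table instantiation might look like the following sketch (all
names are placeholders; the multiplicative hash is just an example, any
reasonably uniform function works):

.. code-block:: c

   PREDECL_HASH(itemhash);

   struct item {
           uint32_t key;
           struct itemhash_item hitem;
   };

   static int item_cmp(const struct item *a, const struct item *b)
   {
           /* for hash tables, only equal vs. not equal matters */
           if (a->key < b->key)
                   return -1;
           if (a->key > b->key)
                   return 1;
           return 0;
   }

   static uint32_t item_hash(const struct item *a)
   {
           return a->key * 2654435761u;
   }

   DECLARE_HASH(itemhash, struct item, hitem, item_cmp, item_hash);

Lookups then use a stack-allocated reference exactly as shown for sorted
containers, e.g. ``struct item ref = { .key = 123 };`` followed by
``itemhash_find(&head, &ref)``.
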
Hash table invariants
^^^^^^^^^^^^^^^^^^^^^

There are several ways to injure yourself using the hash table API.

First, note that there are two functions related to computing uniqueness of
objects inserted into the hash table.  There is a hash function and a
comparison function.  The hash function computes the hash of the object.  Our
hash table implementation uses `chaining
<https://en.wikipedia.org/wiki/Hash_table#Separate_chaining_with_linked_lists>`_.
This means that your hash function does not have to be perfect; multiple
objects having the same computed hash will be placed into a linked list
corresponding to that key.  The closer to perfect the hash function, the better
the performance, as items will be more evenly distributed and the chain length
will not be long on any given lookup, minimizing the number of list operations
required to find the correct item.  However, the comparison function *must* be
perfect, in the sense that any two unique items inserted into the hash table
must compare not equal.  At insertion time, if you try to insert an item that
compares equal to an existing item the insertion will not happen and
``hash_get()`` will return the existing item.  However, this invariant *must*
be maintained while the object is in the hash table.  Suppose you insert items
``A`` and ``B`` into the hash table which both hash to the same value ``1234``
but do not compare equal.  They will be placed in a chain like so::

   1234 : A -> B

Now suppose you do something like this elsewhere in the code::

   *A = *B

I.e. you copy all fields of ``B`` into ``A``, such that the comparison function
now says that they are equal based on their contents.  At this point when you
look up ``B`` in the hash table, ``hash_get()`` will search the chain for the
first item that compares equal to ``B``, which will be ``A``.  This leads to
insidious bugs.

.. warning::

   Never modify the values looked at by the comparison or hash functions after
   inserting an item into a hash table.

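If a key does have to change, the safe approach is to take the item out of the
table first, modify it, and then re-insert it.  A sketch using the
hypothetical ``itemhash`` container from the example above:

.. code-block:: c

   void item_change_key(struct itemhash_head *head, struct item *i,
                        uint32_t new_key)
   {
           /* remove before touching anything the hash/compare functions use */
           itemhash_del(head, i);
           i->key = new_key;
           /* returns non-NULL if an item with new_key already exists */
           itemhash_add(head, i);
   }
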
A similar situation can occur with the hash allocation function.  ``hash_get()``
accepts a function pointer that it will call to get the item that should be
inserted into the list if the provided item is not already present.  There is a
builtin function, ``hash_alloc_intern``, that will simply return the item you
provided; if you always want to store the value you pass to ``hash_get`` you
should use this one.  If you choose to provide a different one, that function
*must* return a new item that hashes and compares equal to the one you provided
to ``hash_get()``.  If it does not, the behavior of the hash table is undefined.

.. warning::

   Always make sure your hash allocation function returns a value that hashes
   and compares equal to the item you provided to ``hash_get()``.

Finally, if you maintain pointers to items you have inserted into a hash table,
then before deallocating them you must release them from the hash table.  This
is basic memory management but worth repeating as bugs have arisen from failure
to do this.


API for heaps
-------------

Heaps provide the same API as the sorted data structures, except:

* none of the find functions (:c:func:`Z_find()`, :c:func:`Z_find_gteq()`
  or :c:func:`Z_find_lt()`) are available.
* iterating over the heap yields the items in semi-random order; only the
  first item is guaranteed to be in order and is actually the "lowest" item
  on the heap.  Being a heap, only the rebalancing performed on removing the
  first item (either through :c:func:`Z_pop()` or :c:func:`Z_del()`) causes
  the new lowest item to bubble up to the front.
* all heap modifications are O(log n).  However, cacheline efficiency and
  latency are likely quite a bit better than with other data structures.

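This makes the heap a natural fit for a priority queue.  A short sketch, with
all names (``eventheap``, ``struct event_item``, ``handle_event()``) chosen
for this example; the ``DECLARE_HEAP()`` arguments are assumed here to follow
the same pattern as the sorted containers above:

.. code-block:: c

   PREDECL_HEAP(eventheap);

   struct event_item {
           uint64_t deadline;
           struct eventheap_item heapitem;
   };

   static int event_cmp(const struct event_item *a,
                        const struct event_item *b)
   {
           if (a->deadline < b->deadline)
                   return -1;
           if (a->deadline > b->deadline)
                   return 1;
           return 0;
   }

   DECLARE_HEAP(eventheap, struct event_item, heapitem, event_cmp);

   /* pop items in deadline order */
   void run_expired(struct eventheap_head *head, uint64_t now)
   {
           struct event_item *ev;

           while ((ev = eventheap_first(head)) && ev->deadline <= now) {
                   eventheap_pop(head);
                   handle_event(ev);
           }
   }
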
Atomic lists
------------

`atomlist.h` provides an unsorted and a sorted atomic single-linked list.
Since atomic memory accesses can be considerably slower than plain memory
accesses (depending on the CPU type), these lists should only be used where
necessary.

The following guarantees are provided regarding concurrent access:

- the operations are lock-free but not wait-free.

  Lock-free means that it is impossible for all threads to be blocked.  Some
  thread will always make progress, regardless of what other threads do.  (This
  even includes a random thread being stopped by a debugger in a random
  location.)

  Wait-free implies that the time any single thread might spend in one of the
  calls is bounded.  This is not provided here since it is not normally
  relevant to practical operations.  What this means is that if some thread is
  hammering a particular list with requests, it is possible that another
  thread is blocked for an extended time.  The lock-free guarantee still
  applies since the hammering thread is making progress.

- without an RCU mechanism in place, the point of contention for atomic lists
  is memory deallocation.  As it is, **a rwlock is required for correct
  operation**.  The *read* lock must be held for all accesses, including
  reading the list, adding items to the list, and removing items from the
  list.  The *write* lock must be acquired and released before deallocating
  any list element.  If this is not followed, a use-after-free can occur
  as an MT race condition when an element gets deallocated while another
  thread is accessing the list.

  .. note::

     The *write* lock does not need to be held for deleting items from the
     list, and there should not be any instructions between the
     ``pthread_rwlock_wrlock`` and ``pthread_rwlock_unlock``.  The write lock
     is used as a sequence point, not as an exclusion mechanism.

- insertion operations are always safe to do with the read lock held.
  Added items are immediately visible after the insertion call returns and
  should not be touched anymore.

- when removing a *particular* (pre-determined) item, the caller must ensure
  that no other thread is attempting to remove that same item.  If this cannot
  be guaranteed by architecture, a separate lock might need to be added.

- concurrent `pop` calls are always safe to do with only the read lock held.
  This does not fall under the previous rule since the `pop` call will select
  the next item if the first is already being removed by another thread.

  **Deallocation locking still applies.**  Assume another thread starts
  reading the list, but gets task-switched by the kernel while reading the
  first item.  `pop` will happily remove and return that item.  If it is
  deallocated without acquiring and releasing the write lock, the other thread
  will later resume execution and try to access the now-deleted element.

- the list count should be considered an estimate.  Since there might be
  concurrent insertions or removals in progress, it might already be outdated
  by the time the call returns.  No attempt is made to have it be correct even
  for a nanosecond.

Overall, atomic lists are well-suited for MT queues; concurrent insertion,
iteration and removal operations will work with the read lock held.

Code snippets
^^^^^^^^^^^^^

Iteration:

.. code-block:: c

   struct item *i;

   pthread_rwlock_rdlock(&itemhead_rwlock);
   frr_each(itemlist, &itemhead, i) {
           /* lock must remain held while iterating */
           ...
   }
   pthread_rwlock_unlock(&itemhead_rwlock);

Head removal (pop) and deallocation:

.. code-block:: c

   struct item *i;

   pthread_rwlock_rdlock(&itemhead_rwlock);
   i = itemlist_pop(&itemhead);
   pthread_rwlock_unlock(&itemhead_rwlock);

   /* i might still be visible for another thread doing an
    * frr_each() (but won't be returned by another pop()) */
   ...

   pthread_rwlock_wrlock(&itemhead_rwlock);
   pthread_rwlock_unlock(&itemhead_rwlock);
   /* i now guaranteed to be gone from the list.
    * note nothing between wrlock() and unlock() */
   XFREE(MTYPE_ITEM, i);

FAQ
---

What are the semantics of ``const`` in the container APIs?
   ``const`` pointers to list heads and/or items are interpreted to mean that
   both the container itself as well as the data items are read-only.

Why is it ``PREDECL`` + ``DECLARE`` instead of ``DECLARE`` + ``DEFINE``?
   The rule is that a ``DEFINE`` must be in a ``.c`` file, and linked exactly
   once because it defines some kind of global symbol.  This is not the case
   for the data structure macros; they only define ``static`` symbols and it
   is perfectly fine to include both ``PREDECL`` and ``DECLARE`` in a header
   file.  It is also perfectly fine to have the same ``DECLARE`` statement in
   2 ``.c`` files, but only **if the macro arguments are identical.**  Maybe
   don't do that unless you really need it.

FRR lists
---------

.. TODO::

   document

BSD lists
---------

.. TODO::

   refer to external docs