[/
  Copyright Oliver Kowalke 2016.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt
]

[/ import path is relative to this .qbk file]
[import ../examples/work_sharing.cpp]

[#migration]
[section:migration Migrating fibers between threads]

[heading Overview]

Each fiber owns a stack and manages its execution state, including all
registers and CPU flags, the instruction pointer and the stack pointer. That
means, in general, a fiber is not bound to a specific thread.[footnote The
["main] fiber on each thread, that is, the fiber on which the thread is
launched, cannot migrate to any other thread. Also __boost_fiber__ implicitly
creates a dispatcher fiber for each thread [mdash] this cannot migrate
either.][superscript,][footnote Of course it would be problematic to migrate a
fiber that relies on [link thread_local_storage thread-local storage].]

Migrating a fiber from a heavily loaded logical CPU to another logical CPU
with a lighter workload might speed up the overall execution. Note that on
NUMA architectures it is not always advisable to migrate data between
threads. Suppose fiber ['f] is running on logical CPU ['cpu0], which belongs
to NUMA node ['node0]. The data of ['f] are allocated in the physical memory
located at ['node0]. Migrating the fiber from ['cpu0] to another logical CPU
['cpuX] that is part of a different NUMA node ['nodeX] might reduce the
performance of the application due to the increased latency of memory access.

Only fibers that are contained in __algo__[s] ready queue can migrate between
threads. You cannot migrate a running fiber, nor one that is __blocked__. You
cannot migrate a fiber whose [member_link context..is_context] member function
returns `true` for `pinned_context`.

In __boost_fiber__ a fiber is migrated by invoking __context_detach__ on the
thread from which the fiber migrates and __context_attach__ on the thread to
which the fiber migrates.

Thus, fiber migration is accomplished by sharing state between instances of a
user-coded __algo__ implementation running on different threads. The fiber[s]
original thread calls [member_link algorithm..awakened], passing the
fiber[s] [class_link context][^*]. The `awakened()` implementation calls
__context_detach__.

At some later point, when the same or a different thread calls [member_link
algorithm..pick_next], the `pick_next()` implementation selects a ready
fiber and calls __context_attach__ on it before returning it.

As stated above, a `context` for which `is_context(pinned_context) == true`
must never be passed to either __context_detach__ or __context_attach__. It
may only be returned from `pick_next()` called by the ['same] thread that
passed that context to `awakened()`.

[heading Example of work sharing]

In the example [@../../examples/work_sharing.cpp work_sharing.cpp]
multiple worker fibers are created on the main thread. Each fiber receives a
character as a parameter at construction and prints that character ten times.
Between iterations the fiber calls __yield__, which puts it in the ready
queue of the fiber-scheduler ['shared_ready_queue] running in the current
thread. The next fiber ready to be executed is dequeued from the shared ready
queue and resumed by ['shared_ready_queue] running on ['any participating
thread].

All instances of ['shared_ready_queue] share one global concurrent queue, used
as the ready queue. This mechanism shares all worker fibers between all
instances of ['shared_ready_queue], thus between all participating threads.

[heading Setup of threads and fibers]

In `main()` the fiber-scheduler is installed and the worker fibers and the
threads are launched.

[main_ws]

The start of the threads is synchronized with a barrier. The main fiber of
each thread (including the main thread) is suspended until all worker fibers
have completed. When the main fiber returns from __cond_wait__, the thread
terminates: the main thread joins all other threads.

[thread_fn_ws]

Each worker fiber executes the function `whatevah()` with character `me` as
its parameter. The fiber yields in a loop and prints a message whenever it is
migrated to another thread.

[fiber_fn_ws]

[heading Scheduling fibers]

The fiber scheduler `shared_ready_queue` is like `round_robin`, except that it
shares a common ready queue among all participating threads. A thread
participates in this pool by executing [function_link use_scheduling_algorithm]
before any other __boost_fiber__ operation.

The important point about the ready queue is that it[s] a class static, common
to all instances of `shared_ready_queue`. Fibers that are enqueued via
__algo_awakened__ (fibers that are ready to be resumed) are thus available to
all threads. A separate, scheduler-specific queue must be reserved for the
thread[s] main fiber and dispatcher fibers: these may ['not] be shared between
threads! When we[,]re passed either of these fibers, we push it there instead
of into the shared queue: it would be Bad News for thread B to retrieve and
attempt to execute thread A[s] main fiber.

[awakened_ws]

When __algo_pick_next__ gets called inside one thread, a fiber is dequeued from
['rqueue_] and will be resumed in that thread.

[pick_next_ws]

The source code above is found in
[@../../examples/work_sharing.cpp work_sharing.cpp].

[endsect]