[/
  Copyright Oliver Kowalke, Nat Goodspeed 2015.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt
]

[/ import path is relative to this .qbk file]

[#integration]
[section:integration Sharing a Thread with Another Main Loop]

[section Overview]

As always with cooperative concurrency, it is important not to let any one
fiber monopolize the processor too long: that could ["starve] other ready
fibers. This section discusses a couple of solutions.

[endsect]
[section Event-Driven Program]

Consider a classic event-driven program, organized around a main loop that
fetches and dispatches incoming I/O events. You are introducing
__boost_fiber__ because certain asynchronous I/O sequences are logically
sequential, and for those you want to write and maintain code that looks and
acts sequential.

You are launching fibers on the application[s] main thread because certain of
their actions will affect its user interface, and the application[s] UI
framework permits UI operations only on the main thread. Or perhaps those
fibers need access to main-thread data, and it would be too expensive in
runtime (or development time) to robustly defend every such data item with
thread synchronization primitives.

You must ensure that the application[s] main loop ['itself] doesn[t] monopolize
the processor: that the fibers it launches will get the CPU cycles they need.

The solution is the same as for any fiber that might claim the CPU for an
extended time: introduce calls to [ns_function_link this_fiber..yield]. The
most straightforward approach is to call `yield()` on every iteration of your
existing main loop. In effect, this unifies the application[s] main loop with
__boost_fiber__[s] internal main loop. `yield()` allows the fiber manager to
run any fibers that have become ready since the previous iteration of the
application[s] main loop. When these fibers have had a turn, control passes to
the thread[s] main fiber, which returns from `yield()` and resumes the
application[s] main loop.

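For illustration only, a minimal sketch of such a unified loop [mdash] here
`done`, `get_event()` and `dispatch()` are hypothetical stand-ins for
whatever your framework actually provides:

    // hypothetical event-driven main loop: yield on every iteration
    while ( ! done) {
        Event ev = get_event();        // fetch next incoming I/O event
        dispatch( ev);                 // handle it as before
        boost::this_fiber::yield();    // let ready fibers run, then resume
    }
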
[endsect]
[#embedded_main_loop]
[section Embedded Main Loop]

More challenging is when the application[s] main loop is embedded in some other
library or framework. Such an application will typically, after performing all
necessary setup, pass control to some form of `run()` function from which
control does not return until application shutdown.

A __boost_asio__ program might call
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run.html
`io_service::run()`] in this way.

In general, the trick is to arrange to pass control to [ns_function_link
this_fiber..yield] frequently. You could use an
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/high_resolution_timer.html
Asio timer] for that purpose. You could instantiate the timer, arranging to
call a handler function when the timer expires.
The handler function could call `yield()`, then reset the timer and arrange to
wake up again on its next expiration.

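A sketch of that idea [mdash] the 50ms interval is an arbitrary tuning value,
and the handler wiring here is ours, not part of either library:

    // a self-rearming timer whose handler yields to any ready fibers
    void on_timer( boost::asio::steady_timer & timer,
                   boost::system::error_code const& ec) {
        if ( ec) return;                 // e.g. operation_aborted
        boost::this_fiber::yield();      // give ready fibers a turn
        timer.expires_from_now( std::chrono::milliseconds( 50) );
        timer.async_wait(
            [&timer]( boost::system::error_code const& e)
            { on_timer( timer, e); });
    }
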
Since, in this thought experiment, we always pass control to the fiber manager
via `yield()`, the calling fiber is never blocked. Therefore there is always
at least one ready fiber. Therefore the fiber manager never calls [member_link
algorithm..suspend_until].

Using
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/post.html
`io_service::post()`] instead of setting a timer for some nonzero interval
would be unfriendly to other threads. When all I/O is pending and all fibers
are blocked, the io_service and the fiber manager would simply spin the CPU,
passing control back and forth to each other. Using a timer allows tuning the
responsiveness of this thread relative to others.

[endsect]
[section Deeper Dive into __boost_asio__]

By now the alert reader is thinking: but surely, with Asio in particular, we
ought to be able to do much better than periodic polling pings!

[/ @path link is relative to (eventual) doc/html/index.html, hence ../..]
This turns out to be surprisingly tricky. We present a possible approach in
[@../../examples/asio/round_robin.hpp `examples/asio/round_robin.hpp`].

[import ../examples/asio/round_robin.hpp]
[import ../examples/asio/autoecho.cpp]

One consequence of using __boost_asio__ is that you must always let Asio
suspend the running thread. Since Asio is aware of pending I/O requests, it
can arrange to suspend the thread in such a way that the OS will wake it on
I/O completion. No one else has sufficient knowledge.

So the fiber scheduler must depend on Asio for suspension and resumption. It
requires Asio handler calls to wake it.

One dismaying implication is that we cannot support multiple threads calling
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run.html
`io_service::run()`] on the same `io_service` instance. The reason is that
Asio provides no way to constrain a particular handler to be called only on a
specified thread. A fiber scheduler instance is locked to a particular thread:
that instance cannot manage any other thread[s] fibers. Yet if we allow
multiple threads to call `io_service::run()` on the same `io_service`
instance, a fiber scheduler which needs to sleep can have no guarantee that it
will reawaken in a timely manner. It can set an Asio timer, as described above
[mdash] but that timer[s] handler may well execute on a different thread!

Another implication is that since an Asio-aware fiber scheduler (not to
mention [link callbacks_asio `boost::fibers::asio::yield`]) depends on handler
calls from the `io_service`, it is the application[s] responsibility to ensure
that
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/stop.html
`io_service::stop()`] is not called until every fiber has terminated.

It is easier to reason about the behavior of the presented `asio::round_robin`
scheduler if we require that after initial setup, the thread[s] main fiber is
the fiber that calls `io_service::run()`, so let[s] impose that requirement.

Naturally, the first thing we must do on each thread using a custom fiber
scheduler is call [function_link use_scheduling_algorithm]. However, since
`asio::round_robin` requires an `io_service` instance, we must first declare
that.

[asio_rr_setup]

`use_scheduling_algorithm()` instantiates `asio::round_robin`, which naturally
calls its constructor:

[asio_rr_ctor]

`asio::round_robin` binds the passed `io_service` reference and initializes a
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/steady_timer.html
`boost::asio::steady_timer`]:

[asio_rr_suspend_timer]

Then it calls
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/add_service.html
`boost::asio::add_service()`] with a nested `service` struct:

[asio_rr_service_top]
        ...
[asio_rr_service_bottom]

The `service` struct has a couple of roles.

Its foremost role is to manage a
[^std::unique_ptr<[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service__work.html
`boost::asio::io_service::work`]>]. We want the `io_service` instance to
continue its main loop even when there is no pending Asio I/O.

But when
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service__service/shutdown_service.html
`boost::asio::io_service::service::shutdown_service()`] is called, we discard
the `io_service::work` instance so the `io_service` can shut down properly.

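The underlying pattern, sketched apart from the example code (assuming an
`io_svc` instance in scope):

    // holding a work object keeps io_service::run() from returning...
    std::unique_ptr< boost::asio::io_service::work > work_{
        new boost::asio::io_service::work( io_svc) };
    // ...and discarding it lets run() return once no handlers remain
    work_.reset();
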
Its other purpose is to
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/post.html
`post()`] a lambda (not yet shown).
Let[s] walk further through the example program before coming back to explain
that lambda.

The `service` constructor returns to `asio::round_robin`[s] constructor,
which returns to `use_scheduling_algorithm()`, which returns to the
application code.

Once it has called `use_scheduling_algorithm()`, the application may
launch some number of fibers:

[asio_rr_launch_fibers]

Since we don[t] specify a [class_link launch], these fibers are ready
to run, but have not yet been entered.

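For contrast, a sketch (with a hypothetical `some_fn`) of what requesting the
other [class_link launch] policy would look like:

    // the default is equivalent to launch::post: mark the fiber ready
    boost::fibers::fiber f1( some_fn);
    // launch::dispatch instead enters the new fiber immediately,
    // before the constructor returns
    boost::fibers::fiber f2( boost::fibers::launch::dispatch, some_fn);
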
Having set everything up, the application calls
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run.html
`io_service::run()`]:

[asio_rr_run]

Now what?

Because this `io_service` instance owns an `io_service::work` instance,
`run()` does not immediately return. But [mdash] none of the fibers that will
perform actual work has even been entered yet!

Without that initial `post()` call in `service`[s] constructor, ['nothing]
would happen. The application would hang right here.

So, what should the `post()` handler execute? Simply [ns_function_link
this_fiber..yield]?

That would be a promising start. But we have no guarantee that any of the
other fibers will initiate any Asio operations to keep the ball rolling. For
all we know, every other fiber could reach a similar `this_fiber::yield()`
call first. Control would return to the `post()` handler, which would return
to Asio, and... the application would hang.

The `post()` handler could `post()` itself again. But as discussed in [link
embedded_main_loop the previous section], once there are actual I/O operations
in flight [mdash] once we reach a state in which no fiber is ready [mdash]
that would cause the thread to spin.

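To make the spin concrete, a sketch of that rejected idea (assuming an
`io_svc` in scope; `pump` must outlive the loop):

    // the naive pump: correct, but it never lets the thread sleep.
    // With no ready fibers and only pending I/O, yield() returns at
    // once and the handler immediately re-posts itself: a busy loop.
    std::function< void() > pump = [&io_svc, &pump]() {
        boost::this_fiber::yield();
        io_svc.post( pump);
    };
    io_svc.post( pump);
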
We could, of course, set an Asio timer [mdash] again as [link
embedded_main_loop previously discussed]. But in this ["deeper dive,] we[,]re
trying to do a little better.

The key to doing better is that since we[,]re in a fiber, we can run an actual
loop [mdash] not just a chain of callbacks. We can wait for ["something to
happen] by calling
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/run_one.html
`io_service::run_one()`] [mdash] or we can execute already-queued Asio
handlers by calling
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/poll.html
`io_service::poll()`].

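In other words [mdash] a sketch of the two calls[,] contrasting semantics:

    // blocks until at least one handler has run, then returns
    io_svc.run_one();
    // runs whatever handlers are ready right now, without blocking
    io_svc.poll();
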
Here[s] the body of the lambda passed to the `post()` call.

[asio_rr_service_lambda]

We want this loop to exit once the `io_service` instance has been
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/io_service/stopped.html
`stopped()`].

As long as there are ready fibers, we interleave running ready Asio handlers
with running ready fibers.

If there are no ready fibers, we wait by calling `run_one()`. Once any Asio
handler has been called [mdash] no matter which [mdash] `run_one()` returns.
That handler may have transitioned some fiber to ready state, so we loop back
to check again.

(We won[t] describe `awakened()`, `pick_next()` or `has_ready_fibers()`, as
these are just like [member_link round_robin..awakened], [member_link
round_robin..pick_next] and [member_link round_robin..has_ready_fibers].)

That leaves `suspend_until()` and `notify()`.

Doubtless you have been asking yourself: why are we calling
`io_service::run_one()` in the lambda loop? Why not call it in
`suspend_until()`, whose very API was designed for just such a purpose?

Under normal circumstances, when the fiber manager finds no ready fibers, it
calls [member_link algorithm..suspend_until]. Why test
`has_ready_fibers()` in the lambda loop? Why not leverage the normal
mechanism?

The answer is: it matters who[s] asking.

Consider the lambda loop shown above. The only __boost_fiber__ APIs it engages
are `has_ready_fibers()` and [ns_function_link this_fiber..yield]. `yield()`
does not ['block] the calling fiber: the calling fiber does not become
unready. It is immediately passed back to [member_link
algorithm..awakened], to be resumed in its turn when all other ready
fibers have had a chance to run. In other words: during a `yield()` call,
['there is always at least one ready fiber.]

As long as this lambda loop is still running, the fiber manager does not call
`suspend_until()` because it always has a fiber ready to run.

However, the lambda loop ['itself] can detect the case when no ['other] fibers are
ready to run: the running fiber is not ['ready] but ['running.]

That said, `suspend_until()` and `notify()` are in fact called during orderly
shutdown processing, so let[s] try a plausible implementation.

[asio_rr_suspend_until]

As you might expect, `suspend_until()` sets an
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/steady_timer.html
`asio::steady_timer`] to
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/basic_waitable_timer/expires_at.html
`expires_at()`] the passed
[@http://en.cppreference.com/w/cpp/chrono/steady_clock
`std::chrono::steady_clock::time_point`]. Usually.

As indicated in comments, we avoid setting `suspend_timer_` multiple times to
the ['same] `time_point` value since every `expires_at()` call cancels any
previous
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/basic_waitable_timer/async_wait.html
`async_wait()`] call. There is a chance that we could spin. Reaching
`suspend_until()` means the fiber manager intends to yield the processor to
Asio. Cancelling the previous `async_wait()` call would fire its handler,
causing `run_one()` to return, potentially causing the fiber manager to call
`suspend_until()` again with the same `time_point` value...

Given that we suspend the thread by calling `io_service::run_one()`, what[s]
important is that our `async_wait()` call will cause a handler to run, which
will cause `run_one()` to return. It[s] not so important specifically what
that handler does.

[asio_rr_notify]

Since an `expires_at()` call cancels any previous `async_wait()` call, we can
make `notify()` simply call `steady_timer::expires_at()`. That should cause
the `io_service` to call the `async_wait()` handler with `operation_aborted`.

The comments in `notify()` explain why we call `expires_at()` rather than
[@http://www.boost.org/doc/libs/release/doc/html/boost_asio/reference/basic_waitable_timer/cancel.html
`cancel()`].

[/ @path link is relative to (eventual) doc/html/index.html, hence ../..]
This `boost::fibers::asio::round_robin` implementation is used in
[@../../examples/asio/autoecho.cpp `examples/asio/autoecho.cpp`].

It seems possible that you could put together a more elegant Fiber / Asio
integration. But as noted at the outset: it[s] tricky.

[endsect]
[endsect]