[/
  (C) Copyright 2008-11 Anthony Williams.
  (C) Copyright 2012-2015 Vicente J. Botet Escriba.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt).
]

[section:futures Futures]

[template future_state_link[link_text] [link thread.synchronization.futures.reference.future_state [link_text]]]
[def __uninitialized__ [future_state_link `boost::future_state::uninitialized`]]
[def __ready__ [future_state_link `boost::future_state::ready`]]
[def __waiting__ [future_state_link `boost::future_state::waiting`]]

[def __future_uninitialized__ `boost::future_uninitialized`]
[def __broken_promise__ `boost::broken_promise`]
[def __future_already_retrieved__ `boost::future_already_retrieved`]
[def __task_moved__ `boost::task_moved`]
[def __task_already_started__ `boost::task_already_started`]
[def __promise_already_satisfied__ `boost::promise_already_satisfied`]

[def __thread_interrupted__ `boost::thread_interrupted`]


[template unique_future_link[link_text] [link thread.synchronization.futures.reference.unique_future [link_text]]]
[def __unique_future__ [unique_future_link `future`]]
[def __unique_future `future`]

[template unique_future_get_link[link_text] [link thread.synchronization.futures.reference.unique_future.get [link_text]]]
[def __unique_future_get__ [unique_future_get_link `boost::future<R>::get()`]]

[template unique_future_wait_link[link_text] [link thread.synchronization.futures.reference.unique_future.wait [link_text]]]
[def __unique_future_wait__ [unique_future_wait_link `boost::future<R>::wait()`]]

[template unique_future_is_ready_link[link_text] [link thread.synchronization.futures.reference.unique_future.is_ready [link_text]]]
[def __unique_future_is_ready__ [unique_future_is_ready_link `boost::future<R>::is_ready()`]]

[template unique_future_has_value_link[link_text] [link thread.synchronization.futures.reference.unique_future.has_value [link_text]]]
[def __unique_future_has_value__ [unique_future_has_value_link `boost::future<R>::has_value()`]]

[template unique_future_has_exception_link[link_text] [link thread.synchronization.futures.reference.unique_future.has_exception [link_text]]]
[def __unique_future_has_exception__ [unique_future_has_exception_link `boost::future<R>::has_exception()`]]

[template unique_future_get_state_link[link_text] [link thread.synchronization.futures.reference.unique_future.get_state [link_text]]]
[def __unique_future_get_state__ [unique_future_get_state_link `boost::future<R>::get_state()`]]

[template shared_future_link[link_text] [link thread.synchronization.futures.reference.shared_future [link_text]]]
[def __shared_future__ [shared_future_link `boost::shared_future`]]

[template shared_future_get_link[link_text] [link thread.synchronization.futures.reference.shared_future.get [link_text]]]
[def __shared_future_get__ [shared_future_get_link `boost::shared_future<R>::get()`]]

[template shared_future_wait_link[link_text] [link thread.synchronization.futures.reference.shared_future.wait [link_text]]]
[def __shared_future_wait__ [shared_future_wait_link `boost::shared_future<R>::wait()`]]

[template shared_future_is_ready_link[link_text] [link thread.synchronization.futures.reference.shared_future.is_ready [link_text]]]
[def __shared_future_is_ready__ [shared_future_is_ready_link `boost::shared_future<R>::is_ready()`]]

[template shared_future_has_value_link[link_text] [link thread.synchronization.futures.reference.shared_future.has_value [link_text]]]
[def __shared_future_has_value__ [shared_future_has_value_link `boost::shared_future<R>::has_value()`]]

[template shared_future_has_exception_link[link_text] [link thread.synchronization.futures.reference.shared_future.has_exception [link_text]]]
[def __shared_future_has_exception__ [shared_future_has_exception_link `boost::shared_future<R>::has_exception()`]]

[template shared_future_get_state_link[link_text] [link thread.synchronization.futures.reference.shared_future.get_state [link_text]]]
[def __shared_future_get_state__ [shared_future_get_state_link `boost::shared_future<R>::get_state()`]]

[template promise_link[link_text] [link thread.synchronization.futures.reference.promise [link_text]]]
[def __promise__ [promise_link `boost::promise`]]

[template packaged_task_link[link_text] [link thread.synchronization.futures.reference.packaged_task [link_text]]]
[def __packaged_task__ [packaged_task_link `boost::packaged_task`]]
[def __packaged_task [packaged_task_link `boost::packaged_task`]]

[template wait_for_any_link[link_text] [link thread.synchronization.futures.reference.wait_for_any [link_text]]]
[def __wait_for_any__ [wait_for_any_link `boost::wait_for_any()`]]

[template wait_for_all_link[link_text] [link thread.synchronization.futures.reference.wait_for_all [link_text]]]
[def __wait_for_all__ [wait_for_all_link `boost::wait_for_all()`]]


[section:overview Overview]

The futures library provides a means of handling asynchronous future values, whether those values are generated by another thread, or
on a single thread in response to external stimuli, or on demand.

This is done through the provision of four class templates: __unique_future__ and __shared_future__, which are used to retrieve
asynchronous results, and __promise__ and __packaged_task__, which are used to generate asynchronous results.

An instance of __unique_future__ holds the one and only reference to a result. Ownership can be transferred between instances using
the move constructor or move-assignment operator, but at most one instance holds a reference to a given asynchronous result. When
the result is ready, it is returned from __unique_future_get__ by rvalue reference, allowing the result to be moved or copied as
appropriate for the type.

On the other hand, many instances of __shared_future__ may reference the same result. Instances can be freely copied and assigned,
and __shared_future_get__ returns a `const` reference, so that multiple calls to __shared_future_get__ are safe. You can move an
instance of __unique_future__ into an instance of __shared_future__, thus transferring ownership of the associated asynchronous
result, but not vice versa.

`boost::async` is a simple way of running asynchronous tasks. A call to `boost::async` returns a __unique_future__ that will contain the result of the task.


You can wait for futures either individually or with one of the __wait_for_any__ and __wait_for_all__ functions.

[endsect]

[section:creating Creating asynchronous values]

You can set the value in a future with either a __promise__ or a __packaged_task__. A __packaged_task__ is a callable object that
wraps a function or callable object. When the packaged task is invoked, it invokes the contained function in turn, and populates a
future with the return value. This answers the perennial question "how do I return a value from a thread?": package the
function you wish to run as a __packaged_task__ and pass the packaged task to the thread constructor. The future retrieved from the
packaged task can then be used to obtain the return value. If the function throws an exception, that exception is stored in the future in
place of the return value.

    int calculate_the_answer_to_life_the_universe_and_everything()
    {
        return 42;
    }

    boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
    boost::__unique_future__<int> fi=pt.get_future();

    boost::thread task(boost::move(pt)); // launch task on a thread

    fi.wait(); // wait for it to finish

    assert(fi.is_ready());
    assert(fi.has_value());
    assert(!fi.has_exception());
    assert(fi.get_state()==boost::future_state::ready);
    assert(fi.get()==42);

A __promise__ is a bit lower level: it just provides explicit functions to store a value or an exception in the associated
future. A promise can therefore be used where the value may come from more than one possible source, or where a single operation may
produce multiple values.

    boost::promise<int> pi;
    boost::__unique_future__<int> fi;
    fi=pi.get_future();

    pi.set_value(42);

    assert(fi.is_ready());
    assert(fi.has_value());
    assert(!fi.has_exception());
    assert(fi.get_state()==boost::future_state::ready);
    assert(fi.get()==42);

[endsect]

[section:lazy_futures Wait Callbacks and Lazy Futures]

Both __promise__ and __packaged_task__ support ['wait callbacks] that are invoked when a thread blocks in a call to `wait()` or
`timed_wait()` on a future that is waiting for the result from the __promise__ or __packaged_task__, in the thread that is doing the
waiting. These can be set using the `set_wait_callback()` member function on the __promise__ or __packaged_task__ in question.

This allows ['lazy futures], where the result is not actually computed until it is needed by some thread. In the example below, the
call to `f.get()` invokes the callback `invoke_lazy_task`, which runs the task to set the value. If you remove the call to
`f.get()`, the task is never run.

    int calculate_the_answer_to_life_the_universe_and_everything()
    {
        return 42;
    }

    void invoke_lazy_task(boost::packaged_task<int>& task)
    {
        try
        {
            task();
        }
        catch(boost::task_already_started&)
        {}
    }

    int main()
    {
        boost::packaged_task<int> task(calculate_the_answer_to_life_the_universe_and_everything);
        task.set_wait_callback(invoke_lazy_task);
        boost::__unique_future__<int> f(task.get_future());

        assert(f.get()==42);
    }


[endsect]

[section:at_thread_exit Handling Detached Threads and Thread Specific Variables]

Detached threads pose a problem for objects with thread storage duration.
If we use a mechanism other than `thread::__join` to wait for a __thread to complete its work - such as waiting for a future to be ready -
then the destructors of thread specific variables will still be running after the waiting thread has resumed.
This section explains how the standard mechanism can be used to make such synchronization safe by ensuring that the
objects with thread storage duration are destroyed prior to the future being made ready. For example:

    int find_the_answer(); // uses thread specific objects
    void thread_func(boost::promise<int>&& p)
    {
        p.set_value_at_thread_exit(find_the_answer());
    }

    int main()
    {
        boost::promise<int> p;
        boost::thread t(thread_func,boost::move(p));
        t.detach(); // we're going to wait on the future
        std::cout<<p.get_future().get()<<std::endl;
    }

When the call to `get()` returns, we know that not only is the future value ready, but the thread specific variables
on the other thread have also been destroyed.

Such mechanisms are provided for `boost::condition_variable`, `boost::promise` and `boost::packaged_task`. For example:

    void task_executor(boost::packaged_task<void(int)> task,int param)
    {
        task.make_ready_at_thread_exit(param); // execute stored task
    } // destroy thread specific objects and wake threads waiting on futures from task

Other threads can wait on a future obtained from the task without having to worry about races due to the execution of
destructors of thread specific objects from the task's thread.

    boost::condition_variable cv;
    boost::mutex m;
    complex_type the_data;
    bool data_ready;

    void thread_func()
    {
        boost::unique_lock<boost::mutex> lk(m);
        the_data=find_the_answer();
        data_ready=true;
        boost::notify_all_at_thread_exit(cv,boost::move(lk));
    } // destroy thread specific objects, notify cv, unlock mutex

    void waiting_thread()
    {
        boost::unique_lock<boost::mutex> lk(m);
        while(!data_ready)
        {
            cv.wait(lk);
        }
        process(the_data);
    }

The waiting thread is guaranteed that the thread specific objects used by `thread_func()` have been destroyed by the time
`process(the_data)` is called. If the lock on `m` is released and re-acquired after setting `data_ready` and before calling
`boost::notify_all_at_thread_exit()` then this does NOT hold, since the thread may return from the wait due to a
spurious wake-up.

[endsect]

[section:async Executing asynchronously]

`boost::async` is a simple way of running asynchronous tasks to make use of the available hardware concurrency.
A call to `boost::async` returns a `boost::future` that will contain the result of the task. Depending on
the launch policy, the task is either run asynchronously on its own thread or synchronously on whichever thread
calls the `wait()` or `get()` member functions on that `future`.

The launch policy is either `boost::launch::async`, which asks the runtime to create an asynchronous thread,
or `boost::launch::deferred`, which indicates that you simply want to defer the function call until a later time (lazy evaluation).
This argument is optional - if you omit it, the default policy is used.

For example, consider computing the sum of a very large array. The first concern is not to compute asynchronously when
the overhead would be significant. The second is to split the work into two pieces, one executed by the host
thread and one executed asynchronously.


    int parallel_sum(int* data, int size)
    {
        int sum = 0;
        if ( size < 1000 )
            for ( int i = 0; i < size; ++i )
                sum += data[i];
        else {
            auto handle = boost::async(parallel_sum, data+size/2, size-size/2);
            sum += parallel_sum(data, size/2);
            sum += handle.get();
        }
        return sum;
    }



[endsect]

[section:shared Shared Futures]

`shared_future` is designed to be shared between threads,
that is, to allow multiple concurrent get operations.

[heading Multiple get]

The second `get()` call in the following example is undefined.

    void bad_second_use( type arg ) {

        auto ftr = async( [=]{ return work( arg ); } );
        if ( cond1 )
        {
            use1( ftr.get() );
        } else
        {
            use2( ftr.get() );
        }
        use3( ftr.get() ); // second use is undefined
    }

Using a `shared_future` solves the issue:

    void good_second_use( type arg ) {

        shared_future<type> ftr = async( [=]{ return work( arg ); } );
        if ( cond1 )
        {
            use1( ftr.get() );
        } else
        {
            use2( ftr.get() );
        }
        use3( ftr.get() ); // second use is defined
    }

[heading share()]

Naming the return type when declaring the `shared_future` is needed; `auto` is not available within template argument lists.
Here `share()` can be used to simplify the code:

    void better_second_use( type arg ) {

        auto ftr = async( [=]{ return work( arg ); } ).share();
        if ( cond1 )
        {
            use1( ftr.get() );
        } else
        {
            use2( ftr.get() );
        }
        use3( ftr.get() ); // second use is defined
    }

[heading Writing on get()]

The user can either read or write the future variable.

    void write_to_get( type arg ) {

        auto ftr = async( [=]{ return work( arg ); } ).share();
        if ( cond1 )
        {
            use1( ftr.get() );
        } else
        {
            if ( cond2 )
                use2( ftr.get() );
            else
                ftr.get() = something(); // assign to non-const reference.
        }
        use3( ftr.get() ); // second use is defined
    }

This works because the `shared_future<>::get()` function returns a non-const reference to the appropriate storage.
Of course, access to this storage must be synchronized by the user; the library doesn't ensure that access to the internal storage is thread safe.

There has been some work by the C++ standard committee on an `atomic_future` that behaves as an `atomic` variable, that is, thread safe,
and a `shared_future` that can be shared between several threads, but there was not enough consensus and time to get it ready for C++11.

[endsect]

[section:make_ready_future Making immediate futures easier]

Some functions may know the value at the point of construction. In these cases the value is immediately available,
but needs to be returned as a future or shared_future. By using make_ready_future a future
can be created which holds a pre-computed result in its shared state.

Without this feature it is non-trivial to create a future directly from a value:
first a promise must be created, then the promise is set, and lastly the future is retrieved from the promise.
This can now be done with one operation.

[heading make_ready_future]

This function creates a future for a given value. If no value is given then a future<void> is returned.
This function is primarily useful in cases where sometimes the return value is immediately available, but sometimes
it is not. The example below illustrates that in an error path the value is known immediately, whereas in the other paths
the function must return an eventual value represented as a future.


    boost::future<int> compute(int x)
    {
        if (x == 0) return boost::make_ready_future(0);
        if (x < 0) return boost::make_ready_future<int>(std::logic_error("Error"));
        boost::future<int> f1 = boost::async([x]() { return x+1; });
        return f1;
    }

There are two variations of this function. The first takes a value of any type and returns a future of that type.
The input value is passed to the shared state of the returned future. The second version takes no input and returns a future<void>.

[endsect]

[section:then Associating future continuations]

In asynchronous programming, it is very common for one asynchronous operation, on completion, to invoke a second
operation and pass data to it. The current C++ standard does not allow one to register a continuation to a future.
With `.then`, instead of waiting for the result, a continuation is "attached" to the asynchronous operation, which is
invoked when the result is ready. Continuations registered using the `.then` function help to avoid blocking waits
or wasting threads on polling, greatly improving the responsiveness and scalability of an application.

`future.then()` provides the ability to sequentially compose two futures by declaring one to be the continuation of another.
With `.then()`, the antecedent future is guaranteed to be ready (to have a value or exception stored in the shared state) before the
continuation supplied as the lambda function starts.

In the example below the `future<std::string>` `f2` is registered to be a continuation of `future<int>` `f1` using the `.then()` member
function. This operation takes a lambda function which describes how `f2` should proceed after `f1` is ready.

    #include <boost/thread/future.hpp>
    #include <string>
    using namespace boost;
    int main()
    {
        future<int> f1 = async([]() { return 123; });
        future<std::string> f2 = f1.then([](future<int> f)
        {
            return std::to_string(f.get()); // here .get() won't block
        });
    }

One key feature of this function is the ability to chain multiple asynchronous operations. In asynchronous programming,
it's common to define a sequence of operations, in which each continuation executes only when the previous one completes.
In some cases, the antecedent future produces a value that the continuation accepts as input. By using `future.then()`,
creating a chain of continuations becomes straightforward and intuitive:

    myFuture.then(...).then(...).then(...);

Some points to note are:

* Each continuation will not begin until the preceding one has completed.
* If an exception is thrown, the following continuation can handle it in a try-catch block.


Input parameters:

* Lambda function: One option that was considered is to take two functions, one for
success and one for error handling; however, this option has not been retained for the moment.
The lambda function takes a future as its input, which carries the exception
through. This makes propagating exceptions straightforward, and also simplifies the chaining of continuations.
* Executor: Providing an overload of `.then` that takes an executor reference places great flexibility over the execution
of the future in the programmer's hands. As described above, taking a launch policy is often not sufficient for powerful
asynchronous operations. The lifetime of the executor must outlive the continuation.
* Launch policy: for when the additional flexibility that the executor provides is not required.

Return values: The decision to return a future was based primarily on the ability to chain multiple continuations using
`.then()`. This benefit of composability gives the programmer great control and flexibility over their code. Returning
a `future` object rather than a `shared_future` is also a much cheaper operation, thereby improving performance. A
`shared_future` object is not necessary to take advantage of the chaining feature, and it is easy to go from a `future`
to a `shared_future` when needed using `future::share()`.


[endsect]


[include future_ref.qbk]

[endsect]