1 [/
2 / Copyright (c) 2005-2012 Ion Gaztanaga
3 /
4 / Distributed under the Boost Software License, Version 1.0. (See accompanying
5 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
6 /]
7
8 [library Boost.Interprocess
9 [quickbook 1.5]
10 [authors [Gaztanaga, Ion]]
11 [copyright 2005-2015 Ion Gaztanaga]
12 [id interprocess]
13 [dirname interprocess]
14 [purpose Interprocess communication utilities]
15 [license
16 Distributed under the Boost Software License, Version 1.0.
17 (See accompanying file LICENSE_1_0.txt or copy at
18 [@http://www.boost.org/LICENSE_1_0.txt])
19 ]
20 ]
21
22 [section:intro Introduction]
23
24 [*Boost.Interprocess] simplifies the use of common interprocess communication
25 and synchronization mechanisms and offers a wide range of them:
26
27 * Shared memory.
28 * Memory-mapped files.
29 * Semaphores, mutexes, condition variables and upgradable mutex types to place
30 them in shared memory and memory mapped files.
31 * Named versions of those synchronization objects, similar to UNIX/Windows
32 sem_open/CreateSemaphore API.
33 * File locking.
34 * Relative pointers.
35 * Message queues.
36
[*Boost.Interprocess] also offers higher-level interprocess mechanisms to dynamically
allocate portions of a shared memory segment or a memory mapped file (in general,
to allocate portions of a fixed size memory segment). Using these mechanisms,
40 [*Boost.Interprocess] offers useful tools to construct C++ objects, including
41 STL-like containers, in shared memory and memory mapped files:
42
43 * Dynamic creation of anonymous and named objects in a shared memory or memory
44 mapped file.
45 * STL-like containers compatible with shared memory/memory-mapped files.
46 * STL-like allocators ready for shared memory/memory-mapped files implementing
47 several memory allocation patterns (like pooling).
48
49 [section:introduction_building_interprocess Building Boost.Interprocess]
50
51 There is no need to compile [*Boost.Interprocess], since it's
52 a header only library. Just include your Boost header directory in your
53 compiler include path.
54
55 [*Boost.Interprocess] depends on
56 [@http://www.boost.org/libs/date_time/ [*Boost.DateTime]], which needs
separate compilation. However, the subset used by [*Boost.Interprocess] does
not need any separate compilation, so the user can define `BOOST_DATE_TIME_NO_LIB`
to prevent Boost from trying to automatically link [*Boost.DateTime].
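
For example, a minimal sketch of how this could look in a translation unit (the macro
just has to be visible before any [*Boost.DateTime] header is pulled in, e.g. via the
compiler command line):

[c++]

   //Assumption: defined before including any Boost.Interprocess header
   #define BOOST_DATE_TIME_NO_LIB
   #include <boost/interprocess/managed_shared_memory.hpp>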
60
61 In POSIX systems, [*Boost.Interprocess] uses pthread system calls to implement
62 classes like mutexes, condition variables, etc... In some operating systems,
63 these POSIX calls are implemented in separate libraries that are not automatically
64 linked by the compiler. For example, in some Linux systems POSIX pthread functions
65 are implemented in `librt.a` library, so you might need to add that library
66 when linking an executable or shared library that uses [*Boost.Interprocess].
If you obtain linking errors related to those pthread functions, please consult
your system's documentation to find out which library implements them.
69
70 [endsect]
71
72 [section:tested_compilers Tested compilers]
73
74 [*Boost.Interprocess] has been tested in the following compilers/platforms:
75
* Visual C++ >= 7.1
77 * GCC >= 4.1
78 * Intel 11
79
80 [endsect]
81
82 [endsect]
83
84 [section:quick_guide Quick Guide for the Impatient]
85
86 [section:qg_memory_pool Using shared memory as a pool of unnamed memory blocks]
87
88 You can just allocate a portion of a shared memory segment, copy the
89 message to that buffer, send the offset of that portion of shared
90 memory to another process, and you are done. Let's see the example:
91
92 [import ../example/doc_ipc_message.cpp]
93 [doc_ipc_message]
94
95 [endsect]
96
97 [section:qg_named_interprocess Creating named shared memory objects]
98
99 You want to create objects in a shared memory segment, giving a string name to them so that
100 any other process can find, use and delete them from the segment when the objects are not
101 needed anymore. Example:
102
103 [import ../example/doc_named_alloc.cpp]
104 [doc_named_alloc]
105
106 [endsect]
107
108 [section:qg_offset_ptr Using an offset smart pointer for shared memory]
109
[*Boost.Interprocess] offers the `offset_ptr` smart pointer family:
an offset pointer stores the distance between its own address and the
address of the pointed object.
When `offset_ptr` is placed in a shared memory segment, it
can safely point to objects stored in the same shared
memory segment, even if the segment is mapped at
different base addresses in different processes.
117
118 This allows placing objects with pointer members
119 in shared memory. For example, if we want to create
120 a linked list in shared memory:
121
122 [import ../example/doc_offset_ptr.cpp]
123 [doc_offset_ptr]
124
To help with basic data structures, [*Boost.Interprocess] offers containers like vector,
list and map, so you can avoid building these data structures by hand, just as with standard containers.
127
128 [endsect]
129
130 [section:qg_interprocess_container Creating vectors in shared memory]
131
132 [*Boost.Interprocess] allows creating complex objects in shared memory and memory
133 mapped files. For example, we can construct STL-like containers in shared memory.
134 To do this, we just need to create a special (managed) shared memory segment,
135 declare a [*Boost.Interprocess] allocator and construct the vector in shared memory
just as if it were any other object.
137
The class that allows these complex structures in shared memory is called
139 [classref boost::interprocess::managed_shared_memory] and it's easy to use.
140 Just execute this example without arguments:
141
142 [import ../example/doc_spawn_vector.cpp]
143 [doc_spawn_vector]
144
The parent process will create a special managed shared memory segment that allows easy construction
of many complex data structures associated with a name. The parent process then executes the same
program with an additional argument, so the child process opens the shared memory, uses
the vector and erases it.
149
150 [endsect]
151
152 [section:qg_interprocess_map Creating maps in shared memory]
153
154 Just like a vector, [*Boost.Interprocess] allows creating maps in
155 shared memory and memory mapped files. The only difference is that
like standard associative containers, [*Boost.Interprocess]'s map also needs
the comparison functor when an allocator is passed to the constructor:
158
159 [import ../example/doc_map.cpp]
160 [doc_map]
161
162 For a more advanced example including containers of containers, see the section
163 [link interprocess.allocators_containers.containers_explained.containers_of_containers Containers of containers].
164
165 [endsect]
166
167 [endsect]
168
169 [section:some_basic_explanations Some basic explanations]
170
171 [section:processes_and_threads Processes And Threads]
172
[*Boost.Interprocess] works not only with processes but also with threads.
174 [*Boost.Interprocess] synchronization mechanisms can synchronize threads
175 from different processes, but also threads from the same process.
176
177 [endsect]
178
179 [section:sharing_information Sharing information between processes]
180
181 In the traditional programming model an operating system has multiple processes
182 running and each process has its own address space. To share information between
183 processes we have several alternatives:
184
* Two processes share information using a [*file]. To access the data, each
186 process uses the usual file read/write mechanisms. When updating/reading
187 a file shared between processes, we need some sort of synchronization, to
188 protect readers from writers.
189
190 * Two processes share information that resides in the [*kernel] of the operating
191 system. This is the case, for example, of traditional message queues. The
192 synchronization is guaranteed by the operating system kernel.
193
194 * Two processes can share a [*memory] region. This is the case of classical
195 shared memory or memory mapped files. Once the processes set up the
196 memory region, the processes can read/write the data like any
197 other memory segment without calling the operating system's kernel. This
198 also requires some kind of manual synchronization between processes.
199
200 [endsect]
201
202 [section:persistence Persistence Of Interprocess Mechanisms]
203
204 One of the biggest issues with interprocess communication mechanisms is the lifetime
205 of the interprocess communication mechanism.
206 It's important to know when an interprocess communication mechanism disappears from the
207 system. In [*Boost.Interprocess], we can have 3 types of persistence:
208
209 * [*Process-persistence]: The mechanism lasts until all the processes that have
210 opened the mechanism close it, exit or crash.
211
212 * [*Kernel-persistence]: The mechanism exists until the kernel of the operating
213 system reboots or the mechanism is explicitly deleted.
214
215 * [*Filesystem-persistence]: The mechanism exists until the mechanism is explicitly
216 deleted.
217
218 Some native POSIX and Windows IPC mechanisms have different persistence so it's
219 difficult to achieve portability between Windows and POSIX native mechanisms.
220 [*Boost.Interprocess] classes have the following persistence:
221
222 [table Boost.Interprocess Persistence Table
223 [[Mechanism] [Persistence]]
224 [[Shared memory] [Kernel or Filesystem]]
225 [[Memory mapped file] [Filesystem]]
226 [[Process-shared mutex types] [Process]]
227 [[Process-shared semaphore] [Process]]
228 [[Process-shared condition] [Process]]
229 [[File lock] [Process]]
230 [[Message queue] [Kernel or Filesystem]]
231 [[Named mutex] [Kernel or Filesystem]]
232 [[Named semaphore] [Kernel or Filesystem]]
233 [[Named condition] [Kernel or Filesystem]]
234 ]
235
236 As you can see, [*Boost.Interprocess] defines some mechanisms with "Kernel or Filesystem"
persistence. This is because POSIX allows this choice to native interprocess
communication implementations: one could, for example, implement
shared memory using memory mapped files and obtain filesystem persistence.
There is no proper known way to emulate kernel persistence with a user library
for native Windows shared memory, or process persistence for POSIX shared
memory, so the only portable choice is to specify "Kernel or Filesystem"
persistence.
244
245 [endsect]
246
247 [section:names Names Of Interprocess Mechanisms]
248
249 Some interprocess mechanisms are anonymous objects created in shared memory or
250 memory-mapped files but other interprocess mechanisms need a name or identifier
251 so that two unrelated processes can use the same interprocess mechanism object.
252 Examples of this are shared memory, named mutexes and named semaphores (for example,
253 native windows CreateMutex/CreateSemaphore API family).
254
255 The name used to identify an interprocess mechanism is not portable, even between
256 UNIX systems. For this reason, [*Boost.Interprocess] limits this name to a C++ variable
257 identifier or keyword:
258
* Starts with a letter, lowercase or uppercase, from a to z or from
A to Z. Examples: ['Sharedmemory, sharedmemory, sHaReDmEmOrY...]
* Can include letters, underscores, or digits. Examples: ['shm1, shm2and3, ShM3plus4...]
262
263 [endsect]
264
265
266 [section:constructors_destructors_and_resource_lifetime
267 Constructors, destructors and lifetime of Interprocess named resources]
268
Named [*Boost.Interprocess] resources (shared memory, memory mapped files,
named mutexes/conditions/semaphores) have kernel or filesystem persistence.
This means that even if all processes that have opened those resources
end, the resource will still be accessible to be opened again, and the resource
can only be destroyed via an explicit call to its static member `remove` function.
274 This behavior can be easily understood, since it's the same mechanism used
275 by functions controlling file opening/creation/erasure:
276
277 [table Boost.Interprocess-Filesystem Analogy
278 [[Named Interprocess resource] [Corresponding std file] [Corresponding POSIX operation]]
279 [[Constructor] [std::fstream constructor][open]]
280 [[Destructor] [std::fstream destructor] [close]]
281 [[Member `remove`] [None. `std::remove`] [unlink]]
282 ]
283
284 Now the correspondence between POSIX and Boost.Interprocess
285 regarding shared memory and named semaphores:
286
287 [table Boost.Interprocess-POSIX shared memory
288 [[`shared_memory_object` operation] [POSIX operation]]
289 [[Constructor] [shm_open]]
290 [[Destructor] [close]]
291 [[Member `remove`] [shm_unlink]]
292 ]
293
294 [table Boost.Interprocess-POSIX named semaphore
295 [[`named_semaphore` operation] [POSIX operation]]
296 [[Constructor] [sem_open]]
297 [[Destructor] [close]]
298 [[Member `remove`] [sem_unlink]]
299 ]
300
The most important property is that [*destructors of named resources
don't remove the resource from the system], they only release the resources
allocated by the system to the process for that named resource.
304 [*To remove the resource from the system the programmer must use
305 `remove`].
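
As a small illustration (the mutex name below is made up for the example), a named
mutex follows this very pattern: its destructor does not remove it from the system,
only an explicit `remove` does:

[c++]

   #include <boost/interprocess/sync/named_mutex.hpp>

   using namespace boost::interprocess;

   int main()
   {
      {
         //Open or create the named mutex
         named_mutex mutex(open_or_create, "my_named_mutex");
         //... use the mutex ...
      }  //Destructor runs here, but the mutex still exists in the system

      //Explicitly remove the resource from the system
      named_mutex::remove("my_named_mutex");
      return 0;
   }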
306
307 [endsect]
308
309 [section:permissions Permissions]
310
Named resources offered by [*Boost.Interprocess] must cope with platform-dependent
permission issues also present when creating files. If a programmer wants to
share shared memory, memory mapped files or named synchronization mechanisms
314 (mutexes, semaphores, etc...) between users, it's necessary to specify
315 those permissions. Sadly, traditional UNIX and Windows permissions are very
316 different and [*Boost.Interprocess] does not try to standardize permissions,
317 but does not ignore them.
318
319 All named resource creation functions take an optional
320 [classref boost::interprocess::permissions permissions object] that can be
configured with platform-dependent permissions.
322
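For instance, a minimal sketch of passing an unrestricted permissions object when
creating a named mutex (the resource name is just illustrative):

[c++]

   #include <boost/interprocess/permissions.hpp>
   #include <boost/interprocess/sync/named_mutex.hpp>

   using namespace boost::interprocess;

   int main()
   {
      //Allow access to any user (exact meaning is platform-dependent)
      permissions unrestricted;
      unrestricted.set_unrestricted();

      //The permissions object is the last, optional constructor argument
      named_mutex mutex(create_only, "shared_between_users_mutex", unrestricted);
      named_mutex::remove("shared_between_users_mutex");
      return 0;
   }
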
Since each mechanism can be emulated through different mechanisms
(a semaphore might be implemented using mapped files or native semaphores),
permission types could vary when the implementation of a named resource
changes (e.g.: in Windows mutexes require `synchronize permissions`, but
that's not the case of files).
To avoid this, [*Boost.Interprocess] relies on file-like permissions,
requiring file read-write-delete permissions to open named synchronization mechanisms
(mutex, semaphores, etc.) and appropriate read or read-write-delete permissions for
331 shared memory. This approach has two advantages: it's similar to the UNIX philosophy
332 and the programmer does not need to know how the named resource is implemented.
333
334 [endsect]
335
336 [endsect]
337
338 [section:sharedmemorybetweenprocesses Sharing memory between processes]
339
340 [section:sharedmemory Shared memory]
341
342 [section:shared_memory_what_is What is shared memory?]
343
344 Shared memory is the fastest interprocess communication mechanism.
345 The operating system maps a memory segment in the address space of several
346 processes, so that several processes can read and write in that memory segment
347 without calling operating system functions. However, we need some kind of
348 synchronization between processes that read and write shared memory.
349
350 Consider what happens when a server process wants to send an HTML file to a client process
351 that resides in the same machine using network mechanisms:
352
353 * The server must read the file to memory and pass it to the network functions, that
354 copy that memory to the OS's internal memory.
355
356 * The client uses the network functions to copy the data from the OS's internal memory
357 to its own memory.
358
359 As we can see, there are two copies, one from memory to the network and another one
360 from the network to memory. And those copies are made using operating system calls
361 that normally are expensive. Shared memory avoids this overhead, but we need to
362 synchronize both processes:
363
364 * The server maps a shared memory in its address space and also gets access to a
365 synchronization mechanism. The server obtains exclusive access to the memory using
366 the synchronization mechanism and copies the file to memory.
367
* The client maps the shared memory in its address space and waits until the server
releases the exclusive access. Then it uses the data.
370
371 Using shared memory, we can avoid two data copies, but we have to synchronize the access
372 to the shared memory segment.
373
374 [endsect]
375
376 [section:shared_memory_steps Creating memory segments that can be shared between processes]
377
378 To use shared memory, we have to perform 2 basic steps:
379
* Request a memory segment from the operating system that can be shared between
381 processes. The user can create/destroy/open this memory using a [*shared memory object]:
382 ['An object that represents memory that can be mapped concurrently into the
383 address space of more than one process.].
384
385 * Associate a part of that memory or the whole memory with the address space of the
386 calling process. The operating system looks for a big enough memory address range
in the calling process' address space and marks that address range as a
special range. Changes in that address range are automatically seen
by other processes that have also mapped the same shared memory object.
390
391 Once the two steps have been successfully completed, the process can start writing to
392 and reading from the address space to send to and receive data from other processes.
393 Now, let's see how can we do this using [*Boost.Interprocess]:
394
395 [endsect]
396
397 [section:shared_memory_header Header]
398
399 To manage shared memory, you just need to include the following header:
400
401 [c++]
402
403 #include <boost/interprocess/shared_memory_object.hpp>
404
405 [endsect]
406
407 [section:shared_memory_creating_shared_memory_segments Creating shared memory segments]
408
409 As we've mentioned we have to use the `shared_memory_object` class to create, open
410 and destroy shared memory segments that can be mapped by several processes. We can
411 specify the access mode of that shared memory object (read only or read-write),
412 just as if it was a file:
413
414 * Create a shared memory segment. Throws if already created:
415
416 [c++]
417
using namespace boost::interprocess;
419 shared_memory_object shm_obj
420 (create_only //only create
421 ,"shared_memory" //name
422 ,read_write //read-write mode
423 );
424
425 * To open or create a shared memory segment:
426
427 [c++]
428
using namespace boost::interprocess;
430 shared_memory_object shm_obj
431 (open_or_create //open or create
432 ,"shared_memory" //name
433 ,read_only //read-only mode
434 );
435
* To only open a shared memory segment. Throws if it does not exist:
437
438 [c++]
439
using namespace boost::interprocess;
441 shared_memory_object shm_obj
442 (open_only //only open
443 ,"shared_memory" //name
444 ,read_write //read-write mode
445 );
446
447 When a shared memory object is created, its size is 0.
448 To set the size of the shared memory, the user must use the `truncate` function
449 call, in a shared memory that has been opened with read-write attributes:
450
451 [c++]
452
453 shm_obj.truncate(10000);
454
455 As shared memory has kernel or filesystem persistence, the user must explicitly
456 destroy it. The `remove` operation might fail returning
457 false if the shared memory does not exist, the file is open or the file is
458 still memory mapped by other processes:
459
460 [c++]
461
using namespace boost::interprocess;
463 shared_memory_object::remove("shared_memory");
464
465
466 For more details regarding `shared_memory_object` see the
467 [classref boost::interprocess::shared_memory_object] class reference.
468
469 [endsect]
470
471 [section:shared_memory_mapping_shared_memory_segments Mapping Shared Memory Segments]
472
473 Once created or opened, a process just has to map the shared memory object in the process'
474 address space. The user can map the whole shared memory or just part of it. The
475 mapping process is done using the `mapped_region` class. The class represents
a memory region that has been mapped from a shared memory or from other devices
that also have mapping capabilities (for example, files). A `mapped_region` can be
478 created from any `memory_mappable` object and as you might imagine, `shared_memory_object`
479 is a `memory_mappable` object:
480
481 [c++]
482
using namespace boost::interprocess;
484 std::size_t ShmSize = ...
485
486 //Map the second half of the memory
487 mapped_region region
488 ( shm //Memory-mappable object
489 , read_write //Access mode
490 , ShmSize/2 //Offset from the beginning of shm
491 , ShmSize-ShmSize/2 //Length of the region
492 );
493
494 //Get the address of the region
495 region.get_address();
496
497 //Get the size of the region
498 region.get_size();
499
500 The user can specify the offset from the mappable object where the mapped region
501 should start and the size of the mapped region. If no offset or size is specified,
502 the whole mappable object (in this case, shared memory) is mapped. If the offset
503 is specified, but not the size, the mapped region covers from the offset until
504 the end of the mappable object.
505
506 For more details regarding `mapped_region` see the
507 [classref boost::interprocess::mapped_region] class reference.
508
509 [endsect]
510
511 [section:shared_memory_a_simple_example A Simple Example]
512
513 Let's see a simple example of shared memory use. A server process creates a
514 shared memory object, maps it and initializes all the bytes to a value. After that,
515 a client process opens the shared memory, maps it, and checks
516 that the data is correctly initialized:
517
518 [import ../example/doc_shared_memory.cpp]
519 [doc_shared_memory]
520
521 [endsect]
522
523 [section:emulation Emulation for systems without shared memory objects]
524
525 [*Boost.Interprocess] provides portable shared memory in terms of POSIX
526 semantics. Some operating systems don't support shared memory as defined by
527 POSIX:
528
529 * Windows operating systems provide shared memory using memory backed by the
530 paging file but the lifetime semantics are different from the ones
531 defined by POSIX (see [link interprocess.sharedmemorybetweenprocesses.sharedmemory.windows_shared_memory
532 Native windows shared memory] section for more information).
533
* Some UNIX systems don't fully support POSIX shared memory objects.
535
536 In those platforms, shared memory is emulated with mapped files created
537 in a "boost_interprocess" folder created in a temporary files directory.
538 In Windows platforms, if "Common AppData" key is present
539 in the registry, "boost_interprocess" folder is created in that directory
540 (in XP usually "C:\Documents and Settings\All Users\Application Data" and
541 in Vista "C:\ProgramData").
542 For Windows platforms without that registry key and Unix systems, shared memory is
543 created in the system temporary files directory ("/tmp" or similar).
544
545 Because of this emulation, shared memory has filesystem lifetime in some
546 of those systems.
547
548 [endsect]
549
550 [section:removing Removing shared memory]
551
552 [classref boost::interprocess::shared_memory_object shared_memory_object]
provides a static `remove` function to remove a shared memory object.

This function [*can] fail if the shared memory object does not exist or
it's opened by another process. Note that this function is similar to the
557 standard C `int remove(const char *path)` function. In UNIX systems,
558 `shared_memory_object::remove` calls `shm_unlink`:
559
560 * The function will remove the name of the shared memory object
561 named by the string pointed to by name.
562
* If one or more references to the shared memory object exist when it
is unlinked, the name will be removed before the function returns, but the
565 removal of the memory object contents will be postponed until all open and
566 map references to the shared memory object have been removed.
567
568 * Even if the object continues to exist after the last function call, reuse of
569 the name will subsequently cause the creation of a
570 [classref boost::interprocess::shared_memory_object] instance to behave as if no
571 shared memory object of this name exists (that is, trying to open an object
572 with that name will fail and an object of the same name can be created again).
573
In Windows operating systems, the current version supports a usually acceptable emulation
of the UNIX unlink behaviour: the file is renamed with a random name and marked as ['to
576 be deleted when the last open handle is closed].
577
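As a minimal sketch of checking the result (the name below is just illustrative):

[c++]

   #include <boost/interprocess/shared_memory_object.hpp>

   using namespace boost::interprocess;

   int main()
   {
      //remove returns false if the object couldn't be removed
      if(!shared_memory_object::remove("my_shared_memory")){
         //The object didn't exist or is still in use
      }
      return 0;
   }
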
578 [endsect]
579
580 [section:anonymous_shared_memory Anonymous shared memory for UNIX systems]
581
582 Creating a shared memory segment and mapping it can be a bit tedious when several
processes are involved. When processes are related via the `fork()` operating system
call in UNIX systems, a simpler method is available using anonymous shared memory.

This feature has been implemented in UNIX systems mapping the device `/dev/zero` or
just using the `MAP_ANONYMOUS` flag in a POSIX conformant `mmap` system call.
588
589 This feature is wrapped in [*Boost.Interprocess] using the `anonymous_shared_memory()`
590 function, which returns a `mapped_region` object holding an anonymous shared memory
591 segment that can be shared by related processes.
592
593 Here is an example:
594
595 [import ../example/doc_anonymous_shared_memory.cpp]
596 [doc_anonymous_shared_memory]
597
598 Once the segment is created, a `fork()` call can
be used so that `region` can be used to communicate between two related processes.
600
601 [endsect]
602
603 [section:windows_shared_memory Native windows shared memory]
604
The Windows operating system also offers shared memory, but the lifetime of this
shared memory is very different from kernel or filesystem lifetime. The shared memory
607 is created backed by the pagefile and it's automatically destroyed when the last
608 process attached to the shared memory is destroyed.
609
610 Because of this reason, there is no effective way to simulate kernel or filesystem
611 persistence using native windows shared memory and [*Boost.Interprocess] emulates
612 shared memory using memory mapped files. This assures portability between POSIX
613 and Windows operating systems.
614
615 However, accessing native windows shared memory is a common request of
[*Boost.Interprocess] users because they want to access
shared memory created by other processes that don't use
[*Boost.Interprocess]. In order to manage the native windows shared memory
619 [*Boost.Interprocess] offers the
620 [classref boost::interprocess::windows_shared_memory windows_shared_memory] class.
621
622 Windows shared memory creation is a bit different from portable shared memory
623 creation: the size of the segment must be specified when creating the object and
624 can't be specified through `truncate` like with the shared memory object.
Bear in mind that when the last process attached to a shared memory is destroyed,
[*the shared memory is destroyed] as well, so there is [*no persistence] with native windows
shared memory.
628
629 Sharing memory between services and user applications is also different. To share memory
630 between services and user applications the name of the shared memory must start with the
631 global namespace prefix `"Global\\"`. This global namespace enables processes on multiple
632 client sessions to communicate with a service application. The server component can create
633 the shared memory in the global namespace. Then a client session can use the "Global\" prefix
634 to open that memory.
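
As a hedged sketch (the name and size below are illustrative), the service side could
create the segment like this:

[c++]

   #include <boost/interprocess/windows_shared_memory.hpp>

   using namespace boost::interprocess;

   int main()
   {
      //Create a native windows shared memory segment in the global namespace
      windows_shared_memory shm(create_only, "Global\\MySharedMemory", read_write, 65536);
      //... share data with clients running in other sessions ...
      return 0;
   }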
635
636 The creation of a shared memory object in the global namespace from a session other than
637 session zero is a privileged operation.
638
639 Let's repeat the same example presented for the portable shared memory object:
640 A server process creates a
641 shared memory object, maps it and initializes all the bytes to a value. After that,
642 a client process opens the shared memory, maps it, and checks
that the data is correctly initialized. Bear in mind that [*if the server exits before
the client connects to the shared memory the client connection will fail], because
the shared memory segment is destroyed when no process is attached to the memory.
646
647 This is the server process:
648
649 [import ../example/doc_windows_shared_memory.cpp]
650 [doc_windows_shared_memory]
651
652 As we can see, native windows shared memory needs synchronization to make sure
653 that the shared memory won't be destroyed before the client is launched.
654
655 [endsect]
656
657 [section:xsi_shared_memory XSI shared memory]
658
In many UNIX systems, the OS offers another shared memory mechanism, XSI
(X/Open System Interfaces) shared memory segments, also known as "System V" shared memory.
This shared memory mechanism is quite popular and portable, and it's not based on file-mapping
semantics, but it uses special functions (`shmget`, `shmat`, `shmdt`, `shmctl`...).
663
664 Unlike POSIX shared memory segments, XSI shared memory segments are not identified by names but
665 by 'keys' usually created with `ftok`. XSI shared memory segments have kernel lifetime and
666 must be explicitly removed. XSI shared memory does not support copy-on-write and partial shared memory mapping
667 but it supports anonymous shared memory.
668
669 [*Boost.Interprocess] offers simple ([classref boost::interprocess::xsi_shared_memory xsi_shared_memory])
670 and managed ([classref boost::interprocess::managed_xsi_shared_memory managed_xsi_shared_memory])
671 shared memory classes to ease the use of XSI shared memory. It also wraps key creation with the
672 simple [classref boost::interprocess::xsi_key xsi_key] class.
673
674 Let's repeat the same example presented for the portable shared memory object:
675 A server process creates a shared memory object, maps it and initializes all the bytes to a value. After that,
676 a client process opens the shared memory, maps it, and checks
677 that the data is correctly initialized.
678
679 This is the server process:
680
681 [import ../example/doc_xsi_shared_memory.cpp]
682 [doc_xsi_shared_memory]
683
684 [endsect]
685
686 [endsect]
687
688 [section:mapped_file Memory Mapped Files]
689
690 [section:mapped_file_what_is What is a memory mapped file?]
691
692 File mapping is the association of a file's contents with a portion of the address space
693 of a process. The system creates a file mapping to associate the file and the address
694 space of the process. A mapped region is the portion of address space that the process
695 uses to access the file's contents. A single file mapping can have several mapped regions,
696 so that the user can associate parts of the file with the address space of the process
697 without mapping the entire file in the address space, since the file can be bigger
than the whole address space of the process (a 9GB DVD image file on a usual 32
bit system). Processes read from and write to
700 the file using pointers, just like with dynamic memory. File mapping has the following
701 advantages:
702
703 * Uniform resource use. Files and memory can be treated using the same functions.
704 * Automatic file data synchronization and cache from the OS.
705 * Reuse of C++ utilities (STL containers, algorithms) in files.
706 * Shared memory between two or more applications.
* Allows efficient work with large files, without mapping the whole file into memory
708 * If several processes use the same file mapping to create mapped regions of a file, each
709 process' views contain identical copies of the file on disk.
710
File mapping is not only used for interprocess communication, it can also be used to
simplify file usage, so the user does not need to use file-management functions to
write the file. The user just writes data to the process memory, and the operating
system dumps the data to the file.
715
716 When two processes map the same file in memory, the memory that one process writes is
717 seen by another process, so memory mapped files can be used as an interprocess
718 communication mechanism. We can say that memory-mapped files offer the same interprocess
719 communication services as shared memory with the addition of filesystem persistence.
720 However, as the operating system has to synchronize the file contents with the memory
721 contents, memory-mapped files are not as fast as shared memory.
722
723 [endsect]
724
725 [section:mapped_file_steps Using mapped files]
726
727 To use memory-mapped files, we have to perform 2 basic steps:
728
* Create a mappable object that represents an already created file of the
filesystem. This object will be used to create multiple mapped regions of
the file.
732
733 * Associate the whole file or parts of the file with the address space of the
734 calling process. The operating system looks for a big enough memory address range
in the calling process' address space and marks that address range as a
special range. Changes in that address range are automatically seen
by other processes that have also mapped the same file and those changes
738 are also transferred to the disk automatically.
739
740 Once the two steps have been successfully completed, the process can start writing to
741 and reading from the address space to send to and receive data from other processes
742 and synchronize the file's contents with the changes made to the mapped region.
743 Now, let's see how can we do this using [*Boost.Interprocess]:
744
745 [endsect]
746
747 [section:mapped_file_header Header]
748
749 To manage mapped files, you just need to include the following header:
750
751 [c++]
752
753 #include <boost/interprocess/file_mapping.hpp>
754
755 [endsect]
756
757 [section:mapped_file_creating_file Creating a file mapping]
758
759 First, we have to link a file's contents with the process' address space. To do
760 this, we have to create a mappable object that represents that file. This is
achieved in [*Boost.Interprocess] by creating a `file_mapping` object:
762
763 [c++]
764
using namespace boost::interprocess;
766 file_mapping m_file
767 ("/usr/home/file" //filename
768 ,read_write //read-write mode
769 );
770
771 Now we can use the newly created object to create mapped regions. For more details
772 regarding this class see the
773 [classref boost::interprocess::file_mapping] class reference.
774
775 [endsect]
776
777 [section:mapped_file_mapping_regions Mapping File's Contents In Memory]
778
After creating a file mapping, a process just has to map the file in the
process' address space. The user can map the whole file or just part of it.
The mapping is done using the `mapped_region` class. As we have said before,
the class represents a memory region that has been mapped from a shared memory or from other
devices that also have mapping capabilities:
784
785 [c++]
786
using namespace boost::interprocess;
788 std::size_t FileSize = ...
789
790 //Map the second half of the file
791 mapped_region region
792 ( m_file //Memory-mappable object
793 , read_write //Access mode
, FileSize/2          //Offset from the beginning of the file
795 , FileSize-FileSize/2 //Length of the region
796 );
797
798 //Get the address of the region
799 region.get_address();
800
801 //Get the size of the region
802 region.get_size();
803
804
805 The user can specify the offset from the file where the mapped region
806 should start and the size of the mapped region. If no offset or size is specified,
807 the whole file is mapped. If the offset is specified, but not the size,
808 the mapped region covers from the offset until the end of the file.
809
If several processes map the same file, and a process modifies a memory range
from a mapped region that is also mapped by another process, the changes are
immediately visible to the other processes. However, the file contents on disk are
not updated immediately, since that would hurt performance (writing to disk
is several times slower than writing to memory). If the user wants to make sure
that the file's contents have been updated, they can flush a range from the view to disk.
When the function returns, the flushing process has started but there is no guarantee that
all data has been written to disk:
818
819 [c++]
820
821 //Flush the whole region
822 region.flush();
823
824 //Flush from an offset until the end of the region
825 region.flush(offset);
826
827 //Flush a memory range starting on an offset
828 region.flush(offset, size);
829
830 Remember that the offset is [*not] an offset on the file, but an offset in the
831 mapped region. If a region covers the second half of a file and flushes the
whole region, only that half of the file is guaranteed to have been flushed.
833
834 For more details regarding `mapped_region` see the
835 [classref boost::interprocess::mapped_region] class reference.
836
837 [endsect]
838
839 [section:mapped_file_a_simple_example A Simple Example]
840
841 Let's reproduce the same example described in the shared memory section, using
memory mapped files. A server process creates a file,
maps it and initializes all the bytes to a value. After that,
a client process opens the file, maps it, and checks
that the data is correctly initialized:
846
847 [import ../example/doc_file_mapping.cpp]
848 [doc_file_mapping]
849
850 [endsect]
851
852 [endsect]
853
854 [section:mapped_region More About Mapped Regions]
855
856 [section:mapped_region_one_class One Class To Rule Them All]
857
858 As we have seen, both `shared_memory_object` and `file_mapping` objects can be used
to create `mapped_region` objects. A mapped region created from a shared memory
object or a file mapping is the same class, and this has many advantages.

One can, for example, mix mapped regions created from shared memory
and from memory mapped files in the same STL container. Libraries that only depend on mapped regions can
864 be used to work with shared memory or memory mapped files without recompiling them.
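
For illustration, a minimal sketch (assuming a shared memory object and a file with the
names shown already exist and have a non-zero size):

[c++]

   #include <boost/interprocess/shared_memory_object.hpp>
   #include <boost/interprocess/file_mapping.hpp>
   #include <boost/interprocess/mapped_region.hpp>

   using namespace boost::interprocess;

   int main()
   {
      shared_memory_object shm(open_only, "MySharedMemory", read_write);
      file_mapping         fil("/tmp/file_mapping_test", read_write);

      //Both mapped regions are exactly the same type
      mapped_region shm_region (shm, read_write);
      mapped_region file_region(fil, read_write);
      return 0;
   }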
865
866 [endsect]
867
868 [section:mapped_region_address_mapping Mapping Address In Several Processes]
869
870 In the example we have seen, the file or shared memory contents are mapped
871 to the address space of the process, but the address was chosen by the operating
872 system.
873
If several processes map the same file/shared memory, the mapping address will likely be
different in each process. Since each process might have used its address space
876 in a different way (allocation of more or less dynamic memory, for example), there is
877 no guarantee that the file/shared memory is going to be mapped in the same address.
878
879 If two processes map the same object in different addresses, this invalidates the use
880 of pointers in that memory, since the pointer (which is an absolute address) would
881 only make sense for the process that wrote it. The solution for this is to use offsets
882 (distance) between objects instead of pointers: If two objects are placed in the same
883 shared memory segment by one process, [*the address of each object will be different]
884 in another process but [*the distance between them (in bytes) will be the same].
885
886 So the first advice when mapping shared memory and memory mapped files is to avoid
887 using raw pointers, unless you know what you are doing. Use offsets between data or
888 relative pointers to obtain pointer functionality when an object placed in a mapped
889 region wants to point to an object placed in the same mapped region. [*Boost.Interprocess]
890 offers a smart pointer called [classref boost::interprocess::offset_ptr] that
891 can be safely placed in shared memory and that can be used to point to another
892 object placed in the same shared memory / memory mapped file.
893
894 [endsect]
895
896 [section:mapped_region_fixed_address_mapping Fixed Address Mapping]
897
The use of relative pointers is less efficient than using raw pointers, so if a user
can manage to map the same file or shared memory object at the same address in two
processes, using raw pointers can be a good idea.

To map an object at a fixed address, the user can specify that address in the
`mapped_region`'s constructor:
904
905 [c++]
906
907 mapped_region region ( shm //Map shared memory
908 , read_write //Map it as read-write
909 , 0 //Map from offset 0
910 , 0 //Map until the end
911 , (void*)0x3F000000 //Map it exactly there
912 );
913
However, the user can't map the region at any address, even if the address is not
915 being used. The offset parameter that marks the start of the mapping region
916 is also limited. These limitations are explained in the next section.
917
918 [endsect]
919
920 [section:mapped_region_mapping_problems Mapping Offset And Address Limitations]
921
As mentioned, the user can't map the memory mappable object at an arbitrary address, nor
can the offset of the mappable object that marks the start of the mapping
region be set to an arbitrary value.
925 Most operating systems limit the mapping address and the offset of the mappable object
926 to a multiple of a value called [*page size]. This is due to the fact that the
927 [*operating system performs mapping operations over whole pages].
928
If a fixed mapping address is used, the ['offset] and ['address]
parameters should be multiples of that value.
This value is typically 4KB or 8KB for 32 bit operating systems.
932
933 [c++]
934
//This might fail because the offset is not a multiple of the page size
936 //and we are using fixed address mapping
937 mapped_region region1( shm //Map shared memory
938 , read_write //Map it as read-write
939 , 1 //Map from offset 1
940 , 1 //Map 1 byte
941 , (void*)0x3F000000 //Aligned mapping address
942 );
943
//This might fail because the address is not a multiple of the page size
945 mapped_region region2( shm //Map shared memory
946 , read_write //Map it as read-write
947 , 0 //Map from offset 0
948 , 1 //Map 1 byte
949 , (void*)0x3F000001 //Not aligned mapping address
950 );
951
952 Since the operating system performs mapping operations over whole pages, specifying
a mapping ['size] or ['offset] that are not multiples of the page size will waste
954 more resources than necessary. If the user specifies the following 1 byte mapping:
955
956 [c++]
957
958 //Map one byte of the shared memory object.
959 //A whole memory page will be used for this.
960 mapped_region region ( shm //Map shared memory
961 , read_write //Map it as read-write
962 , 0 //Map from offset 0
963 , 1 //Map 1 byte
964 );
965
966 The operating system will reserve a whole page that will not be reused by any
other mapping, so we are going to waste [*(page size - 1)] bytes. If we want
to use operating system resources efficiently, we should create regions whose size
969 is a multiple of [*page size] bytes. If the user specifies the following two
mapped regions for a file which has `2*page_size` bytes:
971
[c++]

   //Map the first quarter of the file
973 //This will use a whole page
974 mapped_region region1( shm //Map shared memory
975 , read_write //Map it as read-write
976 , 0 //Map from offset 0
977 , page_size/2 //Map page_size/2 bytes
978 );
979
980 //Map the rest of the file
//This will use 2 pages
982 mapped_region region2( shm //Map shared memory
983 , read_write //Map it as read-write
, page_size/2           //Map from offset page_size/2
985 , 3*page_size/2 //Map the rest of the shared memory
986 );
987
In this example, half of a page is wasted in the first mapping and another
half is wasted in the second because the offset is not a multiple of the
page size. The mapping with minimum resource usage would be to map whole pages:
991
[c++]

   //Map the whole first half: uses 1 page
993 mapped_region region1( shm //Map shared memory
994 , read_write //Map it as read-write
995 , 0 //Map from offset 0
996 , page_size //Map a full page_size
997 );
998
999 //Map the second half: uses 1 page
1000 mapped_region region2( shm //Map shared memory
1001 , read_write //Map it as read-write
, page_size             //Map from offset page_size
1003 , page_size //Map the rest
1004 );
1005
1006 How can we obtain the [*page size]? The `mapped_region` class has a static
1007 function that returns that value:
1008
1009 [c++]
1010
1011 //Obtain the page size of the system
1012 std::size_t page_size = mapped_region::get_page_size();
1013
1014 The operating system might also limit the number of mapped memory regions per
1015 process or per system.
1016
1017 [endsect]
1018
1019 [endsect]
1020
1021 [section:mapped_region_object_limitations Limitations When Constructing Objects In Mapped Regions]
1022
When two processes create a mapped region of the same mappable object, they
can communicate by writing and reading that memory. A process could construct a C++ object
in that memory so that the second process can use it. However, a mapped region shared
by multiple processes can't hold just any C++ object, because not every class is ready
to be a process-shared object, especially if the mapped region is mapped at a different
address in each process.
1029
1030 [section:offset_pointer Offset pointers instead of raw pointers]
1031
When placing objects in a mapped region and mapping
that region at a different address in every process,
raw pointers are a problem since they are only valid for the
1035 process that placed them there. To solve this, [*Boost.Interprocess] offers
1036 a special smart pointer that can be used instead of a raw pointer.
1037 So user classes containing raw pointers (or Boost smart pointers, that
1038 internally own a raw pointer) can't be safely placed in a process shared
1039 mapped region. These pointers must be replaced with offset pointers, and
1040 these pointers must point only to objects placed in the same mapped region
1041 if you want to use these shared objects from different processes.
1042
1043 Of course, a pointer placed in a mapped region shared between processes should
1044 only point to an object of that mapped region. Otherwise, the pointer would
point to an address that is only valid for one process, and other
processes may crash when accessing that address.
1047
1048 [endsect]
1049
1050 [section:references_forbidden References forbidden]
1051
1052 References suffer from the same problem as pointers
1053 (mainly because they are implemented as pointers).
1054 However, it is not possible to create a fully workable
1055 smart reference currently in C++ (for example,
1056 `operator .()` can't be overloaded). Because of this,
1057 if the user wants to put an object in shared memory,
1058 the object can't have any (smart or not) reference
1059 as a member.
1060
1061 References will only work if the mapped region is mapped in the same
1062 base address in all processes sharing a memory segment.
1063 Like pointers, a reference placed in a mapped region should only point
1064 to an object of that mapped region.
1065
1066 [endsect]
1067
1068 [section:virtuality_limitation Virtuality forbidden]
1069
1070 The virtual table pointer and the virtual table
1071 are in the address space of the process
that constructs the object, so if we place a class
with virtual functions or virtual base classes, the virtual table
pointer placed in shared memory will be invalid for other processes
1075 and they will crash.
1076
1077 This problem is very difficult to solve, since each process needs a
1078 different virtual table pointer and the object that contains that pointer
1079 is shared across many processes. Even if we map the mapped region in
1080 the same address in every process, the virtual table can be in a different
1081 address in every process. To enable virtual functions for objects
1082 shared between processes, deep compiler changes are needed and virtual
1083 functions would suffer a performance hit. That's why
1084 [*Boost.Interprocess] does not have any plan to support virtual function
1085 and virtual inheritance in mapped regions shared between processes.
1086
1087 [endsect]
1088
1089 [section:statics_warning Be careful with static class members]
1090
1091 Static members of classes are global objects shared by
1092 all instances of the class. Because of this, static
1093 members are implemented as global variables in processes.
1094
1095 When constructing a class with static members, each process
1096 has its own copy of the static member, so updating a static
member in one process does not change the value of the static
member in another process. So be careful with these classes. Static
members are not dangerous if they are just constant variables initialized
when the process starts that don't change at all (for example,
when used like enums), and their value is the same for all processes.
1102
1103 [endsect]
1104
1105 [endsect]
1106
1107 [endsect]
1108
1109 [section:offset_ptr Mapping Address Independent Pointer: offset_ptr]
1110
When creating shared memory or memory mapped files to communicate between two
processes, the memory segment can be mapped at a different address in each process:
1113
1114 [c++]
1115
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
1117
1118 // ...
1119
using namespace boost::interprocess;
1121
1122 //Open a shared memory segment
1123 shared_memory_object shm_obj
(open_only            //only open
1125 ,"shared_memory" //name
1126 ,read_only //read-only mode
1127 );
1128
1129 //Map the whole shared memory
1130 mapped_region region
( shm_obj             //Memory-mappable object
, read_only           //Access mode
1133 );
1134
1135 //This address can be different in each process
1136 void *addr = region.get_address();
1137
1138 This makes the creation of complex objects in mapped regions difficult: a C++
1139 class instance placed in a mapped region might have a pointer pointing to
1140 another object also placed in the mapped region. Since the pointer stores an
1141 absolute address, that address is only valid for the process that placed
1142 the object there unless all processes map the mapped region in the same
1143 address.
1144
1145 To be able to simulate pointers in mapped regions, users must use [*offsets]
1146 (distance between objects) instead of absolute addresses. The offset between
1147 two objects in a mapped region is the same for any process that maps the
1148 mapped region, even if that region is placed in different base addresses.
1149 To facilitate the use of offsets, [*Boost.Interprocess] offers
1150 [classref boost::interprocess::offset_ptr offset_ptr].
1151
1152 [classref boost::interprocess::offset_ptr offset_ptr]
1153 wraps all the background operations
needed to offer a pointer-like interface. The class interface is
inspired by Boost Smart Pointers and this smart pointer
stores the offset (distance in bytes)
between the pointee's address and its own `this` pointer.
1158 Imagine a structure in a common
1159 32 bit processor:
1160
1161 [c++]
1162
1163 struct structure
1164 {
1165 int integer1; //The compiler places this at offset 0 in the structure
1166 offset_ptr<int> ptr; //The compiler places this at offset 4 in the structure
1167 int integer2; //The compiler places this at offset 8 in the structure
1168 };
1169
1170 //...
1171
1172 structure s;
1173
1174 //Assign the address of "integer1" to "ptr".
1175 //"ptr" will store internally "-4":
1176 // (char*)&s.integer1 - (char*)&s.ptr;
1177 s.ptr = &s.integer1;
1178
1179 //Assign the address of "integer2" to "ptr".
1180 //"ptr" will store internally "4":
1181 // (char*)&s.integer2 - (char*)&s.ptr;
1182 s.ptr = &s.integer2;
1183
1184
1185 One of the big problems of
`offset_ptr` is the representation of the null pointer. The null pointer
can't be safely represented as an offset, since the absolute address 0
is always outside of the mapped region. Because the segment can be mapped
at a different base address in each process, the distance between the address 0
and the `offset_ptr` is different for every process.
1191
Some implementations choose the offset 0 (that is, an `offset_ptr`
pointing to itself) as the null pointer representation,
but this is not valid for many use cases
since many times structures like linked lists or nodes from STL containers
point to themselves (the
end node in an empty container, for example) and a 0 offset value
is needed. An alternative is to store, in addition to the offset, a boolean
to indicate if the pointer is null. However, this increases the size of the
pointer and hurts performance.
1201
1202 In consequence,
1203 [classref boost::interprocess::offset_ptr offset_ptr] defines offset 1
1204 as the null pointer, meaning that this class [*can't] point to the byte
1205 after its own ['this] pointer:
1206
1207 [c++]
1208
1209 using namespace boost::interprocess;
1210
1211 offset_ptr<char> ptr;
1212
//Pointing to the next byte of its own address
1214 //marks the smart pointer as null.
1215 ptr = (char*)&ptr + 1;
1216
1217 //ptr is equal to null
1218 assert(!ptr);
1219
1220 //This is the same as assigning the null value...
1221 ptr = 0;
1222
1223 //ptr is also equal to null
1224 assert(!ptr);
1225
1226
1227 In practice, this limitation is not important, since a user almost never
1228 wants to point to this address.
1229
1230 [classref boost::interprocess::offset_ptr offset_ptr]
1231 offers all pointer-like operations and
1232 random_access_iterator typedefs, so it can be used in STL
1233 algorithms requiring random access iterators and detected via traits.
1234 For more information about the members and operations of the class, see
1235 [classref boost::interprocess::offset_ptr offset_ptr reference].
1236
1237 [endsect]
1238
1239 [section:synchronization_mechanisms Synchronization mechanisms]
1240
1241 [section:synchronization_mechanisms_overview Synchronization mechanisms overview]
1242
As mentioned before, the ability to share memory between processes through memory
1244 mapped files or shared memory objects is not very useful if the access to that
1245 memory can't be effectively synchronized. This is the same problem that happens with
1246 thread-synchronization mechanisms, where heap memory and global variables are
1247 shared between threads, but the access to these resources needs to be synchronized
1248 typically through mutex and condition variables. [*Boost.Threads] implements these
1249 synchronization utilities between threads inside the same process. [*Boost.Interprocess]
1250 implements similar mechanisms to synchronize threads from different processes.
1251
1252 [section:synchronization_mechanisms_named_vs_anonymous Named And Anonymous Synchronization Mechanisms]
1253
1254 [*Boost.Interprocess] presents two types of synchronization objects:
1255
1256 * [*Named utilities]: When two processes want
1257 to create an object of such type, both processes must ['create] or ['open] an object
1258 using the same name. This is similar to creating or opening files: a process creates
a file using an `fstream` with the name ['filename] and another process opens
that file using another `fstream` with the same ['filename] argument.
[*Each process uses a different object to access the resource, but both processes
1262 are using the same underlying resource].
1263
1264 * [*Anonymous utilities]: Since these utilities have no name, two processes must
1265 share [*the same object] through shared memory or memory mapped files. This is
1266 similar to traditional thread synchronization objects: [*Both processes share the
same object]. Unlike thread synchronization, where global variables and heap
memory are shared between threads of the same process, sharing objects between
two threads from different processes is only possible through mapped regions
1270 that map the same mappable resource (for example, shared memory or memory mapped files).
1271
Each type has its own advantages and disadvantages:
1273
1274 * Named utilities are easier to handle for simple synchronization tasks, since the processes
1275 don't have to create a shared memory region and construct the synchronization mechanism in it.
1276
1277 * Anonymous utilities can be serialized to disk when placed in memory mapped files, obtaining
1278 automatic persistence of the synchronization utilities. One could construct a synchronization
1279 utility in a memory mapped file, reboot the system, map the file again, and use the
1280 synchronization utility again without any problem. This can't be achieved with named
1281 synchronization utilities.
1282
1283 The main interface difference between named and anonymous utilities is the constructors.
1284 Usually anonymous utilities have only one constructor, whereas the named utilities have
1285 several constructors whose first argument is a special type that requests creation,
1286 opening, or open-or-create behavior for the underlying resource:
1287
1288 [c++]
1289
1290 using namespace boost::interprocess;
1291
1292 //Create the synchronization utility. If it previously
1293 //exists, throws an error
1294 NamedUtility(create_only, ...)
1295
1296 //Open the synchronization utility. If it does not previously
1297 //exist, it's created.
1298 NamedUtility(open_or_create, ...)
1299
1300 //Open the synchronization utility. If it does not previously
1301 //exist, throws an error.
1302 NamedUtility(open_only, ...)
1303
1304 On the other hand, an anonymous synchronization utility can only
1305 be created, so the processes must agree through some other mechanism
1306 on which of them creates the utility:
1307
1308 [c++]
1309
1310 using namespace boost::interprocess;
1311
1312 //Create the synchronization utility.
1313 AnonymousUtility(...)
1314
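For example (a minimal sketch; the mutex name and the wrapping struct are illustrative,
not part of the library), a named mutex is identified by a string, whereas an anonymous
mutex must itself live in memory that both processes map:

[c++]

   #include <boost/interprocess/sync/named_mutex.hpp>
   #include <boost/interprocess/sync/interprocess_mutex.hpp>

   using namespace boost::interprocess;

   //Named: each process builds its own named_mutex object, but both refer
   //to the same underlying OS resource identified by the name.
   named_mutex nmutex(open_or_create, "my_named_mutex");

   //Anonymous: the object itself must be shared, for example inside a
   //struct constructed in shared memory or in a memory mapped file.
   struct shared_block
   {
      interprocess_mutex mutex;
   };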
1315 [endsect]
1316
1317 [section:synchronization_mechanisms_types Types Of Synchronization Mechanisms]
1318
1319 Apart from its named/anonymous nature, [*Boost.Interprocess] presents the following
1320 synchronization utilities:
1321
1322 * Mutexes (named and anonymous)
1323 * Condition variables (named and anonymous)
1324 * Semaphores (named and anonymous)
1325 * Upgradable mutexes
1326 * File locks
1327
1328 [endsect]
1329
1330 [endsect]
1331
1332 [section:mutexes Mutexes]
1333
1334 [section:mutexes_whats_a_mutex What's A Mutex?]
1335
1336 ['Mutex] stands for [*mut]ual [*ex]clusion and it's the most basic form of
1337 synchronization between processes.
1338 Mutexes guarantee that only one thread can lock a given mutex. If a code section
1339 is surrounded by a mutex lock and unlock, it's guaranteed that only one thread
1340 at a time executes that section of code.
1341 When that thread [*unlocks] the mutex, other threads can enter that code
1342 region:
1343
1344 [c++]
1345
1346 //The mutex has been previously constructed
1347
1348 lock_the_mutex();
1349
1350 //This code will be executed only by one thread
1351 //at a time.
1352
1353 unlock_the_mutex();
1354
1355 A mutex can also be [*recursive] or [*non-recursive]:
1356
1357 * Recursive mutexes can be locked several times by the same thread. To fully unlock the
1358 mutex, the thread has to unlock the mutex the same times it has locked it.
1359
1360 * Non-recursive mutexes can't be locked several times by the same thread. If a mutex
1361 is locked twice by a thread, the result is undefined: it might throw an exception or
1362 the thread could block forever.
1363
1364 [endsect]
1365
1366 [section:mutexes_mutex_operations Mutex Operations]
1367
1368 All the mutex types from [*Boost.Interprocess] implement the following operations:
1369
1370 [blurb ['[*void lock()]]]
1371
1372 [*Effects:]
1373 The calling thread tries to obtain ownership of the mutex, and if another thread has ownership of the mutex, it waits until it can obtain the ownership. If a thread takes ownership of the mutex, the mutex must be unlocked by the same thread. If the mutex supports recursive locking, the mutex must be unlocked the same number of times it is locked.
1374
1375 [*Throws:] *interprocess_exception* on error.
1376
1377 [blurb ['[*bool try_lock()]]]
1378
1379 [*Effects:] The calling thread tries to obtain ownership of the mutex, and if another thread has ownership of the mutex, returns immediately. If the mutex supports recursive locking, the mutex must be unlocked the same number of times it is locked.
1380
1381 [*Returns:] If the thread acquires ownership of the mutex, returns true; if another thread has ownership of the mutex, returns false.
1382
1383 [*Throws:] *interprocess_exception* on error.
1384
1385 [blurb ['[*bool timed_lock(const boost::posix_time::ptime &abs_time)]]]
1386
1387 [*Effects:] The calling thread will try to obtain exclusive ownership of the mutex if it can do so before the specified time is reached. If the mutex supports recursive locking, the mutex must be unlocked the same number of times it is locked.
1388
1389 [*Returns:] If the thread acquires ownership of the mutex, returns true; if the timeout expires, returns false.
1390
1391 [*Throws:] *interprocess_exception* on error.
1392
1393 [blurb ['[*void unlock()]]]
1394
1395 [*Precondition:] The thread must have exclusive ownership of the mutex.
1396
1397 [*Effects:] The calling thread releases the exclusive ownership of the mutex. If the mutex supports recursive locking, the mutex must be unlocked the same number of times it is locked.
1398
1399 [*Throws:] An exception derived from *interprocess_exception* on error.
1400
1401 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
1402 are UTC time points, not local time points]
1403
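The following sketch shows how these operations are typically combined (the two-second
timeout and the globally constructed mutex are illustrative; a real program would place
the mutex in shared memory or use a named mutex):

[c++]

   #include <boost/interprocess/sync/interprocess_mutex.hpp>
   #include <boost/date_time/posix_time/posix_time.hpp>

   using namespace boost::interprocess;

   interprocess_mutex mutex;   //illustrative: would normally live in shared memory

   void locking_styles()
   {
      //Blocking lock
      mutex.lock();
      //... critical section ...
      mutex.unlock();

      //Non-blocking attempt
      if(mutex.try_lock()){
         //... critical section ...
         mutex.unlock();
      }

      //Timed attempt: note that the absolute time point is UTC
      boost::posix_time::ptime abs_time =
         boost::posix_time::microsec_clock::universal_time() +
         boost::posix_time::seconds(2);
      if(mutex.timed_lock(abs_time)){
         //... critical section ...
         mutex.unlock();
      }
   }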
1404 [endsect]
1405
1406 [section:mutexes_interprocess_mutexes Boost.Interprocess Mutex Types And Headers]
1407
1408 Boost.Interprocess offers the following mutex types:
1409
1410 [c++]
1411
1412 #include <boost/interprocess/sync/interprocess_mutex.hpp>
1413
1414 * [classref boost::interprocess::interprocess_mutex interprocess_mutex]: A non-recursive,
1415 anonymous mutex that can be placed in shared memory or memory mapped files.
1416
1417 [c++]
1418
1419 #include <boost/interprocess/sync/interprocess_recursive_mutex.hpp>
1420
1421 * [classref boost::interprocess::interprocess_recursive_mutex interprocess_recursive_mutex]: A recursive,
1422 anonymous mutex that can be placed in shared memory or memory mapped files.
1423
1424 [c++]
1425
1426 #include <boost/interprocess/sync/named_mutex.hpp>
1427
1428 * [classref boost::interprocess::named_mutex named_mutex]: A non-recursive,
1429 named mutex.
1430
1431 [c++]
1432
1433 #include <boost/interprocess/sync/named_recursive_mutex.hpp>
1434
1435 * [classref boost::interprocess::named_recursive_mutex named_recursive_mutex]: A recursive,
1436 named mutex.
1437
1438 [endsect]
1439
1440 [section:mutexes_scoped_lock Scoped lock]
1441
1442 It's very important to unlock a mutex after the process has read or written the data.
1443 This can be difficult when dealing with exceptions, so usually mutexes are used
1444 with a scoped lock, a class that can guarantee that a mutex will always be unlocked
1445 even when an exception occurs. To use a scoped lock just include:
1446
1447 [c++]
1448
1449 #include <boost/interprocess/sync/scoped_lock.hpp>
1450
1451 Basically, a scoped lock calls [*unlock()] in its destructor, so the mutex is always
1452 unlocked, even when an exception occurs. Scoped lock has many constructors to lock,
1453 try_lock or timed_lock a mutex, or not to lock it at all.
1454
1455
1456 [c++]
1457
1458 using namespace boost::interprocess;
1459
1460 //Let's create any mutex type:
1461 MutexType mutex;
1462
1463 {
1464 //This will lock the mutex
1465 scoped_lock<MutexType> lock(mutex);
1466
1467 //Some code
1468
1469 //The mutex will be unlocked here
1470 }
1471
1472 {
1473 //This will try_lock the mutex
1474 scoped_lock<MutexType> lock(mutex, try_to_lock);
1475
1476 //Check if the mutex has been successfully locked
1477 if(lock){
1478 //Some code
1479 }
1480
1481 //If the mutex was locked it will be unlocked
1482 }
1483
1484 {
1485 boost::posix_time::ptime abs_time = ...
1486
1487 //This will timed_lock the mutex
1488 scoped_lock<MutexType> lock(mutex, abs_time);
1489
1490 //Check if the mutex has been successfully locked
1491 if(lock){
1492 //Some code
1493 }
1494
1495 //If the mutex was locked it will be unlocked
1496 }
1497
1498 For more information, check the
1499 [classref boost::interprocess::scoped_lock scoped_lock's reference].
1500
1501 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
1502 are UTC time points, not local time points]
1503
1504 [endsect]
1505
1506 [section:mutexes_anonymous_example Anonymous mutex example]
1507
1508 Imagine that two processes need to write traces to a cyclic buffer built
1509 in shared memory. Each process needs to obtain exclusive access to the
1510 cyclic buffer, write the trace and continue.
1511
1512 To protect the cyclic buffer, we can store a process shared mutex in the
1513 cyclic buffer. Each process will lock the mutex before writing the data and
1514 will set a flag when it finishes writing the traces
1515 (`doc_anonymous_mutex_shared_data.hpp` header):
1516
1517 [import ../example/doc_anonymous_mutex_shared_data.hpp]
1518 [doc_anonymous_mutex_shared_data]
1519
1520 This is the main process. It creates the shared memory, constructs
1521 the cyclic buffer and starts writing traces:
1522
1523 [import ../example/comp_doc_anonymous_mutexA.cpp]
1524 [doc_anonymous_mutexA]
1525
1526 The second process opens the shared memory, obtains access to the cyclic buffer
1527 and starts writing traces:
1528
1529 [import ../example/comp_doc_anonymous_mutexB.cpp]
1530 [doc_anonymous_mutexB]
1531
1532 As we can see, a mutex is useful to protect data but not to notify another
1533 process of an event. For this, we need a condition variable, as we will see in the next section.
1534
1535 [endsect]
1536
1537 [section:mutexes_named_example Named mutex example]
1538
1539 Now imagine that two processes want to write a trace to a file. First they write
1540 their name, and after that they write the message. Since the operating system can
1541 interrupt a process at any moment, parts of the messages of both processes could be mixed,
1542 so we need a way to write the whole message to the file atomically. To achieve this,
1543 we can use a named mutex so that each process locks the mutex before writing:
1544
1545 [import ../example/doc_named_mutex.cpp]
1546 [doc_named_mutex]
1547
1548 [endsect]
1549
1550 [endsect]
1551
1552 [section:conditions Conditions]
1553
1554 [section:conditions_whats_a_condition What's A Condition Variable?]
1555
1556 In the previous example, a mutex is used to ['lock] but we can't use it to
1557 ['wait] efficiently until the condition to continue is met. A condition variable
1558 can do two things:
1559
1560 * [*wait]: The thread is blocked until some other thread notifies that it can
1561 continue because the condition that led to waiting has disappeared.
1562
1563 * [*notify]: The thread sends a signal to one blocked thread or to all blocked
1564 threads to tell them that the condition that provoked their wait has
1565 disappeared.
1566
1567 Waiting in a condition variable is always associated with a mutex.
1568 The mutex must be locked prior to waiting on the condition. When waiting
1569 on the condition variable, the thread unlocks the mutex and waits [*atomically].
1570
1571 When the thread returns from a wait function (because of a signal or a timeout,
1572 for example) the mutex object is again locked.
1573
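The following sketch shows the usual wait-in-a-loop pattern (the `shared_data` type and
its members are illustrative; a real program would construct the struct in shared memory):

[c++]

   #include <boost/interprocess/sync/interprocess_mutex.hpp>
   #include <boost/interprocess/sync/interprocess_condition.hpp>
   #include <boost/interprocess/sync/scoped_lock.hpp>

   using namespace boost::interprocess;

   //Illustrative shared structure
   struct shared_data
   {
      shared_data() : data_ready(false) {}
      interprocess_mutex     mutex;
      interprocess_condition cond;
      bool                   data_ready;
   };

   void consumer(shared_data &d)
   {
      scoped_lock<interprocess_mutex> lock(d.mutex);
      //Always re-check the condition in a loop: the mutex is atomically
      //released while waiting and locked again before wait() returns.
      while(!d.data_ready){
         d.cond.wait(lock);
      }
      //... use the data; the mutex is locked here ...
   }

   void producer(shared_data &d)
   {
      scoped_lock<interprocess_mutex> lock(d.mutex);
      d.data_ready = true;
      d.cond.notify_one();
   }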
1574 [endsect]
1575
1576 [section:conditions_interprocess_conditions Boost.Interprocess Condition Types And Headers]
1577
1578 Boost.Interprocess offers the following condition types:
1579
1580 [c++]
1581
1582 #include <boost/interprocess/sync/interprocess_condition.hpp>
1583
1584 * [classref boost::interprocess::interprocess_condition interprocess_condition]:
1585 An anonymous condition variable that can be placed in shared memory or memory
1586 mapped files to be used with [classref boost::interprocess::interprocess_mutex].
1587
1588 [c++]
1589
1590 #include <boost/interprocess/sync/interprocess_condition_any.hpp>
1591
1592 * [classref boost::interprocess::interprocess_condition_any interprocess_condition_any]:
1593 An anonymous condition variable that can be placed in shared memory or memory
1594 mapped files to be used with any lock type.
1595
1596 [c++]
1597
1598 #include <boost/interprocess/sync/named_condition.hpp>
1599
1600 * [classref boost::interprocess::named_condition named_condition]: A named
1601 condition variable to be used with [classref boost::interprocess::named_mutex named_mutex].
1602
1603 [c++]
1604
1605 #include <boost/interprocess/sync/named_condition_any.hpp>
1606
1607 * [classref boost::interprocess::named_condition_any named_condition_any]: A named
1608 condition variable to be used with any lock type.
1609
1610 Named conditions are similar to anonymous conditions, but they are used in
1611 combination with named mutexes. Often, we don't want to store
1612 the synchronization objects with the synchronized data:
1613
1614 * We want to change the synchronization method (from interprocess
1615 to intra-process, or without any synchronization) using the same data.
1616 Storing the process-shared anonymous synchronization with the synchronized
1617 data would forbid this.
1618
1619 * We want to send the synchronized data through the network or any other
1620 communication method. Sending the process-shared synchronization objects
1621 wouldn't make any sense.
1622
1623 [endsect]
1624
1625 [section:conditions_anonymous_example Anonymous condition example]
1626
1627 Imagine a process that writes a trace to a simple shared memory buffer that
1628 another process prints one by one. The first process writes the trace and waits
1629 until the other process prints the data. To achieve this, we can use two
1630 condition variables: the first one is used to block the sender until the second
1631 process prints the message and the second one to block the receiver until the
1632 buffer has a trace to print.
1633
1634 The shared memory trace buffer (doc_anonymous_condition_shared_data.hpp):
1635
1636 [import ../example/doc_anonymous_condition_shared_data.hpp]
1637 [doc_anonymous_condition_shared_data]
1638
1639 This is the main process. It creates the shared memory, places the buffer
1640 there and starts writing messages one by one until it writes "last message"
1641 to indicate that there are no more messages to print:
1642
1643 [import ../example/comp_doc_anonymous_conditionA.cpp]
1644 [doc_anonymous_conditionA]
1645
1646 The second process opens the shared memory and prints each message
1647 until the "last message" message is received:
1648
1649 [import ../example/comp_doc_anonymous_conditionB.cpp]
1650 [doc_anonymous_conditionB]
1651
1652 With condition variables, a process can block if it can't continue the work,
1653 and when the conditions to continue are met another process can wake it.
1654
1655 [endsect]
1656
1657 [endsect]
1658
1659 [section:semaphores Semaphores]
1660
1661 [section:semaphores_whats_a_semaphores What's A Semaphore?]
1662
1663 A semaphore is a synchronization mechanism between processes based on an internal
1664 count that offers two basic operations:
1665
1666 * [*Wait]: Tests the value of the semaphore count, and waits if the value is less than or
1667 equal to 0. Otherwise, decrements the semaphore count.
1668
1669 * [*Post]: Increments the semaphore count. If any process is blocked, one of those processes
1670 is awoken.
1671
1672 If the semaphore count is initialized to 1, a [*Wait] operation is equivalent to a
1673 mutex locking and [*Post] is equivalent to a mutex unlocking. This type of semaphore is known
1674 as a [*binary semaphore].
1675
1676 Although semaphores can be used like mutexes, they have a unique feature: unlike mutexes,
1677 a [*Post] operation need not be executed by the same thread/process that executed the
1678 [*Wait] operation.
1679
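For example, this minimal sketch (illustrative only) uses an anonymous semaphore initialized
to 1 as a binary semaphore; unlike with a mutex, the post could also be issued by a different
thread or process:

[c++]

   #include <boost/interprocess/sync/interprocess_semaphore.hpp>

   using namespace boost::interprocess;

   //Count initialized to 1: a binary semaphore
   interprocess_semaphore binary_sem(1);

   void critical_section()
   {
      binary_sem.wait();    //analogous to locking a mutex
      //... exclusive work ...
      binary_sem.post();    //analogous to unlocking the mutex
   }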
1680 [endsect]
1681
1682 [section:semaphores_interprocess_semaphores Boost.Interprocess Semaphore Types And Headers]
1683
1684 Boost.Interprocess offers the following semaphore types:
1685
1686 [c++]
1687
1688 #include <boost/interprocess/sync/interprocess_semaphore.hpp>
1689
1690 * [classref boost::interprocess::interprocess_semaphore interprocess_semaphore]:
1691 An anonymous semaphore that can be placed in shared memory or memory
1692 mapped files.
1693
1694 [c++]
1695
1696 #include <boost/interprocess/sync/named_semaphore.hpp>
1697
1698 * [classref boost::interprocess::named_semaphore named_semaphore]: A named
1699 semaphore.
1700
1701 [endsect]
1702
1703 [section:semaphores_anonymous_example Anonymous semaphore example]
1704
1705 We will implement an integer array in shared memory that will be used to transfer data
1706 from one process to another. The first process will write some integers
1707 to the array, blocking if the array is full.
1708
1709 The second process will copy the transmitted data to its own buffer, blocking if
1710 there is no new data in the buffer.
1711
1712 This is the shared integer array (doc_anonymous_semaphore_shared_data.hpp):
1713
1714 [import ../example/doc_anonymous_semaphore_shared_data.hpp]
1715 [doc_anonymous_semaphore_shared_data]
1716
1717 This is the main process. It creates the shared memory, places the integer
1718 array there and starts writing integers one by one, blocking if the array
1719 is full:
1720
1721 [import ../example/comp_doc_anonymous_semaphoreA.cpp]
1722 [doc_anonymous_semaphoreA]
1723
1724 The second process opens the shared memory and copies the received integers
1725 to its own buffer:
1726
1727 [import ../example/comp_doc_anonymous_semaphoreB.cpp]
1728 [doc_anonymous_semaphoreB]
1729
1730 The same interprocess communication can be achieved with condition variables
1731 and mutexes, but for several synchronization patterns, a semaphore is more
1732 efficient than a mutex/condition combination.
1733
1734 [endsect]
1735
1736 [endsect]
1737
1738 [section:sharable_upgradable_mutexes Sharable and Upgradable Mutexes]
1739
1740 [section:upgradable_whats_a_mutex What's a Sharable and an Upgradable Mutex?]
1741
1742 Sharable and upgradable mutexes are special mutex types that offer more locking possibilities
1743 than a normal mutex. Sometimes, we can distinguish between [*reading] the data and
1744 [*modifying] the data. If just some threads need to modify the data, and a plain mutex
1745 is used to protect the data from concurrent access, concurrency is pretty limited:
1746 two threads that only read the data will be serialized instead of being executed
1747 concurrently.
1748
1749 If we allow concurrent access to threads that just read the data but we avoid
1750 concurrent access between threads that read and modify or between threads that modify,
1751 we can increase performance. This is especially true in applications where data reading
1752 is more common than data modification and the synchronized data reading code needs
1753 some time to execute. With a sharable mutex we can acquire 2 lock types:
1754
1755 * [*Exclusive lock]: Similar to a plain mutex. If a thread acquires an exclusive
1756 lock, no other thread can acquire any lock (exclusive or other) until the exclusive
1757 lock is released. If any other thread holds any lock (sharable, upgradable or exclusive), a thread trying
1758 to acquire an exclusive lock will block.
1759 This lock will be acquired by threads that will modify the data.
1760
1761 * [*Sharable lock]: If a thread acquires a sharable lock, other threads
1762 can't acquire the exclusive lock. If any thread has acquired
1763 the exclusive lock a thread trying to acquire a sharable lock will block.
1764 This locking is executed by threads that just need to read the data.
1765
1766 With an upgradable mutex we can acquire previous locks plus a new upgradable lock:
1767
1768 * [*Upgradable lock]: Acquiring an upgradable lock is similar to acquiring
1769 a [*privileged sharable lock]. If a thread acquires an upgradable lock, other threads
1770 can acquire a sharable lock. If any thread has acquired the exclusive or upgradable lock
1771 a thread trying to acquire an upgradable lock will block.
1772 A thread that has acquired an upgradable lock
1773 is guaranteed to be able to acquire atomically an exclusive lock when other threads
1774 that have acquired a sharable lock release it. This is used for
1775 a thread that [*maybe] needs to modify the data, but usually just needs to read the data.
1776 This thread acquires the upgradable lock and other threads can acquire the sharable lock.
1777 If the upgradable thread reads the data and it has to modify it, the thread can be promoted
1778 to acquire the exclusive lock: when all sharable threads have released the sharable lock, the
1779 upgradable lock is atomically promoted to an exclusive lock. The newly promoted thread
1780 can modify the data and it can be sure that no other thread has modified it while
1781 doing the transition. [*Only 1 thread can acquire the upgradable
1782 (privileged reader) lock].
1783
1784 To sum up:
1785
1786 [table Locking Possibilities for a Sharable Mutex
1787 [[If a thread has acquired the...] [Other threads can acquire...]]
1788 [[Sharable lock] [many sharable locks]]
1789 [[Exclusive lock] [no locks]]
1790 ]
1791
1792 [table Locking Possibilities for an Upgradable Mutex
1793 [[If a thread has acquired the...] [Other threads can acquire...]]
1794 [[Sharable lock] [many sharable locks and 1 upgradable lock]]
1795 [[Upgradable lock] [many sharable locks]]
1796 [[Exclusive lock] [no locks]]
1797 ]
1798
1799 [endsect]
1800
1801 [section:upgradable_transitions Lock transitions for Upgradable Mutex]
1802
1803 A sharable mutex has no option to change the acquired lock for another lock
1804 atomically.
1805
1806 On the other hand, for an upgradable mutex, a thread that has
1807 acquired a lock can try to acquire another lock type atomically.
1808 Not all lock transitions are guaranteed to succeed. Even if a transition is guaranteed
1809 to succeed, some transitions will block the thread waiting until other threads release
1810 the sharable locks. [*Atomically] means that no other thread will acquire an Upgradable
1811 or Exclusive lock in the transition, [*so data is guaranteed to remain unchanged]:
1812
1813 [table Transition Possibilities for an Upgradable Mutex
1814 [[If a thread has acquired the...] [It can atomically release the previous lock and...]]
1815 [[Sharable lock] [try to obtain (not guaranteed) immediately the Exclusive lock if no other thread has exclusive or upgradable lock]]
1816 [[Sharable lock] [try to obtain (not guaranteed) immediately the Upgradable lock if no other thread has exclusive or upgradable lock]]
1817 [[Upgradable lock] [obtain the Exclusive lock when all sharable locks are released]]
1818 [[Upgradable lock] [obtain the Sharable lock immediately]]
1819 [[Exclusive lock] [obtain the Upgradable lock immediately]]
1820 [[Exclusive lock] [obtain the Sharable lock immediately]]
1821 ]
1822
1823 As we can see, an upgradable mutex is a powerful synchronization utility that can improve
1824 concurrency. However, if most of the time we have to modify the data, or the
1825 synchronized code section is very short, it's more efficient to use a plain mutex, since
1826 it has less overhead. Upgradable lock shines when the synchronized code section is bigger
1827 and there are more readers than modifiers.
1828
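A sketch of this read-mostly pattern follows (the update decision is illustrative, and the
mutex would normally be placed in shared memory):

[c++]

   #include <boost/interprocess/sync/interprocess_upgradable_mutex.hpp>

   using namespace boost::interprocess;

   interprocess_upgradable_mutex mtx;   //illustrative: would normally live in shared memory

   void maybe_update()
   {
      //Privileged reader: sharable locks from other threads can coexist
      mtx.lock_upgradable();
      bool needs_update = true;   //illustrative: inspect the data here
      if(needs_update){
         //Atomic promotion: waits until all sharable owners release;
         //no other thread can modify the data during the transition
         mtx.unlock_upgradable_and_lock();
         //... modify the data with exclusive ownership ...
         mtx.unlock();
      }
      else{
         mtx.unlock_upgradable();
      }
   }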
1829 [endsect]
1830
1831 [section:sharable_upgradable_mutexes_operations Upgradable Mutex Operations]
1832
1833 All the upgradable mutex types from [*Boost.Interprocess] implement
1834 the following operations:
1835
1836 [section:sharable_upgradable_mutexes_operations_exclusive Exclusive Locking (Sharable & Upgradable Mutexes)]
1837
1838 [blurb ['[*void lock()]]]
1839
1840 [*Effects:]
1841 The calling thread tries to obtain exclusive ownership of the mutex, and if
1842 another thread has any ownership of the mutex (exclusive or other),
1843 it waits until it can obtain the ownership.
1844
1845 [*Throws:] *interprocess_exception* on error.
1846
1847 [blurb ['[*bool try_lock()]]]
1848
1849 [*Effects:]
1850 The calling thread tries to acquire exclusive ownership of the mutex without
1851 waiting. If no other thread has any ownership of the mutex (exclusive or other)
1852 this succeeds.
1853
1854 [*Returns:] If it can acquire exclusive ownership immediately, returns true.
1855 If it has to wait, returns false.
1856
1857 [*Throws:] *interprocess_exception* on error.
1858
1859 [blurb ['[*bool timed_lock(const boost::posix_time::ptime &abs_time)]]]
1860
1861 [*Effects:]
1862 The calling thread tries to acquire exclusive ownership of the mutex
1863 waiting if necessary until no other thread has any ownership of the mutex
1864 (exclusive or other) or abs_time is reached.
1865
1866 [*Returns:] If it acquires exclusive ownership, returns true. Otherwise
1867 returns false.
1868
1869 [*Throws:] *interprocess_exception* on error.
1870
1871 [blurb ['[*void unlock()]]]
1872
1873 [*Precondition:] The thread must have exclusive ownership of the mutex.
1874
1875 [*Effects:] The calling thread releases the exclusive ownership of the mutex.
1876
1877 [*Throws:] An exception derived from *interprocess_exception* on error.
1878
1879 [endsect]
1880
1881 [section:sharable_upgradable_mutexes_operations_sharable Sharable Locking (Sharable & Upgradable Mutexes)]
1882
1883 [blurb ['[*void lock_sharable()]]]
1884
1885 [*Effects:]
1886 The calling thread tries to obtain sharable ownership of the mutex, and if
1887 another thread has exclusive ownership of the mutex,
1888 waits until it can obtain the ownership.
1889
1890 [*Throws:] *interprocess_exception* on error.
1891
1892 [blurb ['[*bool try_lock_sharable()]]]
1893
1894 [*Effects:]
1895 The calling thread tries to acquire sharable ownership of the mutex without
1896 waiting. If no other thread has exclusive ownership of
1897 the mutex this succeeds.
1898
1899 [*Returns:] If it can acquire sharable ownership immediately, returns true.
1900 If it has to wait, returns false.
1901
1902 [*Throws:] *interprocess_exception* on error.
1903
1904 [blurb ['[*bool timed_lock_sharable(const boost::posix_time::ptime &abs_time)]]]
1905
1906 [*Effects:]
1907 The calling thread tries to acquire sharable ownership of the mutex
1908 waiting if necessary until no other thread has exclusive
1909 ownership of the mutex or abs_time is reached.
1910
1911 [*Returns:] If it acquires sharable ownership, returns true. Otherwise
1912 returns false.
1913
1914 [*Throws:] *interprocess_exception* on error.
1915
1916 [blurb ['[*void unlock_sharable()]]]
1917
1918 [*Precondition:] The thread must have sharable ownership of the mutex.
1919
1920 [*Effects:] The calling thread releases the sharable ownership of the mutex.
1921
1922 [*Throws:] An exception derived from *interprocess_exception* on error.
1923
1924 [endsect]
1925
1926 [section:upgradable_mutexes_operations_upgradable Upgradable Locking (Upgradable Mutex only)]
1927
1928 [blurb ['[*void lock_upgradable()]]]
1929
1930 [*Effects:]
1931 The calling thread tries to obtain upgradable ownership of the mutex, and if
1932 another thread has exclusive or upgradable ownership of the mutex,
1933 waits until it can obtain the ownership.
1934
1935 [*Throws:] *interprocess_exception* on error.
1936
1937 [blurb ['[*bool try_lock_upgradable()]]]
1938
1939 [*Effects:]
1940 The calling thread tries to acquire upgradable ownership of the mutex without
1941 waiting. If no other thread has exclusive or upgradable ownership of
1942 the mutex this succeeds.
1943
1944 [*Returns:] If it can acquire upgradable ownership immediately, returns true.
1945 If it has to wait, returns false.
1946
1947 [*Throws:] *interprocess_exception* on error.
1948
1949 [blurb ['[*bool timed_lock_upgradable(const boost::posix_time::ptime &abs_time)]]]
1950
1951 [*Effects:]
1952 The calling thread tries to acquire upgradable ownership of the mutex
1953 waiting if necessary until no other thread has exclusive
1954 ownership of the mutex or abs_time is reached.
1955
1956 [*Returns:] If it acquires upgradable ownership, returns true. Otherwise
1957 returns false.
1958
1959 [*Throws:] *interprocess_exception* on error.
1960
1961 [blurb ['[*void unlock_upgradable()]]]
1962
1963 [*Precondition:] The thread must have upgradable ownership of the mutex.
1964
1965 [*Effects:] The calling thread releases the upgradable ownership of the mutex.
1966
1967 [*Throws:] An exception derived from *interprocess_exception* on error.
1968
1969 [endsect]
1970
1971 [section:upgradable_mutexes_operations_demotions Demotions (Upgradable Mutex only)]
1972
1973 [blurb ['[*void unlock_and_lock_upgradable()]]]
1974
1975 [*Precondition:] The thread must have exclusive ownership of the mutex.
1976
1977 [*Effects:] The thread atomically releases exclusive ownership and acquires upgradable
1978 ownership. This operation is non-blocking.
1979
1980 [*Throws:] An exception derived from *interprocess_exception* on error.
1981
1982 [blurb ['[*void unlock_and_lock_sharable()]]]
1983
1984 [*Precondition:] The thread must have exclusive ownership of the mutex.
1985
1986 [*Effects:] The thread atomically releases exclusive ownership and acquires sharable
1987 ownership. This operation is non-blocking.
1988
1989 [*Throws:] An exception derived from *interprocess_exception* on error.
1990
1991 [blurb ['[*void unlock_upgradable_and_lock_sharable()]]]
1992
1993 [*Precondition:] The thread must have upgradable ownership of the mutex.
1994
1995 [*Effects:] The thread atomically releases upgradable ownership and acquires sharable
1996 ownership. This operation is non-blocking.
1997
1998 [*Throws:] An exception derived from *interprocess_exception* on error.
1999
2000 [endsect]
2001
2002 [section:upgradable_mutexes_operations_promotions Promotions (Upgradable Mutex only)]

2003 [blurb ['[*void unlock_upgradable_and_lock()]]]
2004
2005 [*Precondition:] The thread must have upgradable ownership of the mutex.
2006
2007 [*Effects:] The thread atomically releases upgradable ownership and acquires exclusive
2008 ownership. This operation will block until all threads with sharable ownership release it.
2009
2010 [*Throws:] An exception derived from *interprocess_exception* on error.

[blurb ['[*bool try_unlock_upgradable_and_lock()]]]
2011
2012 [*Precondition:] The thread must have upgradable ownership of the mutex.
2013
2014 [*Effects:] The thread atomically releases upgradable ownership and tries to acquire exclusive
2015 ownership. This operation will fail if there are threads with sharable ownership, but
2016 it will maintain upgradable ownership.
2017
2018 [*Returns:] If it acquires exclusive ownership, returns true. Otherwise
2019 returns false.
2020
2021 [*Throws:] An exception derived from *interprocess_exception* on error.

[blurb ['[*bool timed_unlock_upgradable_and_lock(const boost::posix_time::ptime &abs_time)]]]
2022
2023 [*Precondition:] The thread must have upgradable ownership of the mutex.
2024
2025 [*Effects:] The thread atomically releases upgradable ownership and tries to acquire
2026 exclusive ownership, waiting if necessary until abs_time. This operation will fail
2027 if there are threads with sharable ownership or the timeout is reached, but it will maintain
2028 upgradable ownership.
2029
2030 [*Returns:] If it acquires exclusive ownership, returns true. Otherwise
2031 returns false.
2032
2033 [*Throws:] An exception derived from *interprocess_exception* on error.

[blurb ['[*bool try_unlock_sharable_and_lock()]]]
2034
2035 [*Precondition:] The thread must have sharable ownership of the mutex.
2036
2037 [*Effects:] The thread atomically releases sharable ownership and tries to acquire exclusive
2038 ownership. This operation will fail if there are threads with sharable or upgradable ownership,
2039 but it will maintain sharable ownership.
2040
2041 [*Returns:] If it acquires exclusive ownership, returns true. Otherwise
2042 returns false.
2043
2044 [*Throws:] An exception derived from *interprocess_exception* on error.

[blurb ['[*bool try_unlock_sharable_and_lock_upgradable()]]]
2045
2046 [*Precondition:] The thread must have sharable ownership of the mutex.
2047
2048 [*Effects:] The thread atomically releases sharable ownership and tries to acquire upgradable
2049 ownership. This operation will fail if there are threads with sharable or upgradable ownership,
2050 but it will maintain sharable ownership.
2051
2052 [*Returns:] If it acquires upgradable ownership, returns true. Otherwise
2053 returns false.
2054
2055 [*Throws:] An exception derived from *interprocess_exception* on error.
2056
2057 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
2058 are UTC time points, not local time points]
2059
2060 [endsect]
2061
2062 [endsect]
2063
2064 [section:sharable_upgradable_mutexes_mutex_interprocess_mutexes Boost.Interprocess Sharable & Upgradable Mutex Types And Headers]
2065
2066 Boost.Interprocess offers the following sharable mutex types:
2067
2068 [c++]
2069
2070 #include <boost/interprocess/sync/interprocess_sharable_mutex.hpp>
2071
2072 * [classref boost::interprocess::interprocess_sharable_mutex interprocess_sharable_mutex]: A non-recursive,
2073 anonymous sharable mutex that can be placed in shared memory or memory mapped files.
2074
2075 [c++]
2076
2077 #include <boost/interprocess/sync/named_sharable_mutex.hpp>
2078
2079 * [classref boost::interprocess::named_sharable_mutex named_sharable_mutex]: A non-recursive,
2080 named sharable mutex.
2081
2082 Boost.Interprocess offers the following upgradable mutex types:
2083
2084 [c++]
2085
2086 #include <boost/interprocess/sync/interprocess_upgradable_mutex.hpp>
2087
2088 * [classref boost::interprocess::interprocess_upgradable_mutex interprocess_upgradable_mutex]: A non-recursive,
2089 anonymous upgradable mutex that can be placed in shared memory or memory mapped files.
2090
2091 [c++]
2092
2093 #include <boost/interprocess/sync/named_upgradable_mutex.hpp>
2094
2095 * [classref boost::interprocess::named_upgradable_mutex named_upgradable_mutex]: A non-recursive,
2096 named upgradable mutex.
2097
2098 [endsect]
2099
2100 [section:sharable_upgradable_locks Sharable Lock And Upgradable Lock]
2101
2102 As with plain mutexes, it's important to release the acquired lock even in the presence
2103 of exceptions. [*Boost.Interprocess] mutexes are best used with the
2104 [classref boost::interprocess::scoped_lock scoped_lock] utility,
2105 and this class only offers exclusive locking.
2106
2107 As we have sharable locking and upgradable locking with upgradable mutexes, we have two new
2108 utilities: [classref boost::interprocess::sharable_lock sharable_lock] and
2109 [classref boost::interprocess::upgradable_lock upgradable_lock]. Both classes are similar to `scoped_lock`
2110 but `sharable_lock` acquires the sharable lock in the constructor and `upgradable_lock`
2111 acquires the upgradable lock in the constructor.
2112
2113 These two utilities can be used with any synchronization object that offers the needed
2114 operations. For example, a user defined mutex type with no upgradable locking features
2115 can use `sharable_lock` if the synchronization object offers [*lock_sharable()] and
2116 [*unlock_sharable()] operations:
2117
2118 [section:upgradable_mutexes_lock_types Sharable Lock And Upgradable Lock Headers]
2119
2120 [c++]
2121
2122 #include <boost/interprocess/sync/sharable_lock.hpp>
2123
2124 [c++]
2125
2126 #include <boost/interprocess/sync/upgradable_lock.hpp>
2127
2128 [endsect]
2129
2130 `sharable_lock` calls [*unlock_sharable()] in its destructor, and
2131 `upgradable_lock` calls [*unlock_upgradable()] in its destructor, so the
2132 upgradable mutex is always unlocked when an exception occurs.
2133
2134 [c++]
2135
2136 using namespace boost::interprocess;
2137
2138 SharableOrUpgradableMutex sh_or_up_mutex;
2139
2140 {
2141 //This will call lock_sharable()
2142 sharable_lock<SharableOrUpgradableMutex> lock(sh_or_up_mutex);
2143
2144 //Some code
2145
2146 //The mutex will be unlocked here
2147 }
2148
2149 {
2150 //This won't lock the mutex
2151 sharable_lock<SharableOrUpgradableMutex> lock(sh_or_up_mutex, defer_lock);
2152
2153 //Lock it on demand. This will call lock_sharable()
2154 lock.lock();
2155
2156 //Some code
2157
2158 //The mutex will be unlocked here
2159 }
2160
2161 {
2162 //This will call try_lock_sharable()
2163 sharable_lock<SharableOrUpgradableMutex> lock(sh_or_up_mutex, try_to_lock);
2164
2165 //Check if the mutex has been successfully locked
2166 if(lock){
2167 //Some code
2168 }
2169 //If the mutex was locked it will be unlocked
2170 }
2171
2172 {
2173 boost::posix_time::ptime abs_time = ...
2174
2175 //This will call timed_lock_sharable()
2176 sharable_lock<SharableOrUpgradableMutex> lock(sh_or_up_mutex, abs_time);
2177
2178 //Check if the mutex has been successfully locked
2179 if(lock){
2180 //Some code
2181 }
2182 //If the mutex was locked it will be unlocked
2183 }
2184
2185 UpgradableMutex up_mutex;
2186
2187 {
2188 //This will call lock_upgradable()
2189 upgradable_lock<UpgradableMutex> lock(up_mutex);
2190
2191 //Some code
2192
2193 //The mutex will be unlocked here
2194 }
2195
2196 {
2197 //This won't lock the mutex
2198 upgradable_lock<UpgradableMutex> lock(up_mutex, defer_lock);
2199
2200 //Lock it on demand. This will call lock_upgradable()
2201 lock.lock();
2202
2203 //Some code
2204
2205 //The mutex will be unlocked here
2206 }
2207
2208 {
2209 //This will call try_lock_upgradable()
2210 upgradable_lock<UpgradableMutex> lock(up_mutex, try_to_lock);
2211
2212 //Check if the mutex has been successfully locked
2213 if(lock){
2214 //Some code
2215 }
2216 //If the mutex was locked it will be unlocked
2217 }
2218
2219 {
2220 boost::posix_time::ptime abs_time = ...
2221
2222 //This will call timed_lock_upgradable()
2223 upgradable_lock<UpgradableMutex> lock(up_mutex, abs_time);
2224
2225 //Check if the mutex has been successfully locked
2226 if(lock){
2227 //Some code
2228 }
2229 //If the mutex was locked it will be unlocked
2230 }
2231
2232 [classref boost::interprocess::upgradable_lock upgradable_lock] and
2233 [classref boost::interprocess::sharable_lock sharable_lock] offer
2234 more features and operations; see their references for more information.
2235
2236 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
2237 are UTC time points, not local time points]
2238
2239 [endsect]
2240
2241 [/section:upgradable_mutexes_example Anonymous Upgradable Mutex Example]
2242
2243 [/import ../example/comp_doc_anonymous_upgradable_mutexA.cpp]
2244 [/doc_anonymous_upgradable_mutexA]
2245
2246
2247 [/import ../example/comp_doc_anonymous_upgradable_mutexB.cpp]
2248 [/doc_anonymous_upgradable_mutexB]
2249
2250 [/endsect]
2251
2252 [endsect]
2253
2254 [section:lock_conversions Lock Transfers Through Move Semantics]
2255
2256 [blurb [*Interprocess uses its own move semantics emulation code for compilers
2257 that don't support rvalues references.
2258 This is a temporary solution until a Boost move semantics library is accepted.]]
2259
2260 Scoped locks and similar utilities offer simple resource management possibilities,
2261 but with advanced mutex types like upgradable mutexes, there are operations where
2262 an acquired lock type is released and another lock type is acquired atomically.
2263 This is implemented by upgradable mutex operations like `unlock_and_lock_sharable()`.
2264
2265 These operations can be managed more effectively using [*lock transfer operations].
2266 A lock transfer operation explicitly indicates that the mutex owned by a lock is
2267 transferred to another lock, executing an atomic unlock-plus-lock operation.
2268
2269 [section:lock_transfer_simple_transfer Simple Lock Transfer]
2270
2271 Imagine that a thread modifies some data at the beginning but after that, it only has to
2272 read it for a long time. The code can acquire the exclusive lock, modify the data
2273 and atomically release the exclusive lock and acquire the sharable lock. With this
2274 sequence we guarantee that no other thread can modify the data during the transition
2275 and that more readers can acquire the sharable lock, increasing concurrency.
2276 Without lock transfer operations, this would be coded like this:
2277
2278 [c++]
2279
2280 using namespace boost::interprocess;
2281 interprocess_upgradable_mutex mutex;
2282
2283 //Acquire exclusive lock
2284 mutex.lock();
2285
2286 //Modify data
2287
2288 //Atomically release exclusive lock and acquire sharable lock.
2289 //More threads can acquire the sharable lock and read the data.
2290 mutex.unlock_and_lock_sharable();
2291
2292 //Read data
2293
2294 //Explicit unlocking
2295 mutex.unlock_sharable();
2296
2297
2298 This can be simple, but in the presence of exceptions, it's complicated to know
2299 what type of lock the mutex had when the exception was thrown and what function
2300 we should call to unlock it:
2301
2302 [c++]
2303
2304 try{
2305 //Mutex operations
2306 }
2307 catch(...){
2308 //What should we call? "unlock()" or "unlock_sharable()"
2309 //Is the mutex locked?
2310 }
2311
2312 We can use [*lock transfer] to simplify all this management:
2313
2314 [c++]
2315
2316 using namespace boost::interprocess;
2317 interprocess_upgradable_mutex mutex;
2318
2319 //Acquire exclusive lock
2320 scoped_lock<interprocess_upgradable_mutex> s_lock(mutex);
2321
2322 //Modify data
2323
2324 //Atomically release exclusive lock and acquire sharable lock.
2325 //More threads can acquire the sharable lock and read the data.
2326 sharable_lock<interprocess_upgradable_mutex> sh_lock(move(s_lock));
2327
2328 //Read data
2329
2330 //The lock is automatically unlocked calling the appropriate unlock
2331 //function even in the presence of exceptions.
2332 //If the mutex was not locked, no function is called.
2333
2334 As we can see, even if an exception is thrown at any moment, the mutex
2335 will be automatically unlocked calling the appropriate `unlock()` or
2336 `unlock_sharable()` method.
2337
2338 [endsect]
2339
2340 [section:lock_transfer_summary Lock Transfer Summary]
2341
2342 There are many lock transfer operations that we can classify according to
2343 the operations presented in the upgradable mutex operations:
2344
2345 * [*Guaranteed to succeed and non-blocking:] Any transition from a more
2346 restrictive lock to a less restrictive one. Scoped -> Upgradable,
2347 Scoped -> Sharable, Upgradable -> Sharable.
2348
2349 * [*Not guaranteed to succeed:] The operation might succeed if no one has
2350 acquired the upgradable or exclusive lock: Sharable -> Exclusive. This
2351 operation is a try operation.
2352
2353 * [*Guaranteed to succeed when waiting indefinitely:] Any transition that will succeed
2354 but needs to wait until all Sharable locks are released: Upgradable -> Scoped.
2355 Since this is a blocking operation, we can also choose not to wait infinitely
2356 and just try or wait until a timeout is reached.
2357
2358 [section:lock_transfer_summary_scoped Transfers To Scoped Lock]
2359
2360 Transfers to `scoped_lock` are guaranteed to succeed only from an `upgradable_lock`
2361 and only if a blocking operation is requested, due to the fact that this operation
2362 needs to wait until all sharable locks are released. The user can also use "try"
2363 or "timed" transfer to avoid infinite locking, but succeed is not guaranteed.
2364
2365 A conversion from a `sharable_lock` is never guaranteed and thus, only a try operation
2366 is permitted:
2367
2368 [c++]
2369
2370 //Conversions to scoped_lock
2371 {
2372 upgradable_lock<Mutex> u_lock(mut);
2373 //This calls unlock_upgradable_and_lock()
2374 scoped_lock<Mutex> e_lock(move(u_lock));
2375 }
2376 {
2377 upgradable_lock<Mutex> u_lock(mut);
2378 //This calls try_unlock_upgradable_and_lock()
2379 scoped_lock<Mutex> e_lock(move(u_lock), try_to_lock);
2380 }
2381 {
2382 boost::posix_time::ptime t = test::delay(100);
2383 upgradable_lock<Mutex> u_lock(mut);
2384 //This calls timed_unlock_upgradable_and_lock()
2385 scoped_lock<Mutex> e_lock(move(u_lock), t);
2386 }
2387 {
2388 sharable_lock<Mutex> s_lock(mut);
2389 //This calls try_unlock_sharable_and_lock()
2390 scoped_lock<Mutex> e_lock(move(s_lock), try_to_lock);
2391 }
2392
2393 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
2394 are UTC time points, not local time points]
2395
2396 [endsect]
2397
2398 [section:lock_transfer_summary_upgradable Transfers To Upgradable Lock]
2399
2400 A transfer to an `upgradable_lock` is guaranteed to succeed only from a `scoped_lock`
2401 since scoped locking is more restrictive than upgradable locking. This
2402 operation is also non-blocking.
2403
2404 A transfer from a `sharable_lock` is not guaranteed and only a "try" operation is permitted:
2405
2406 [c++]
2407
2408 //Conversions to upgradable
2409 {
2410 sharable_lock<Mutex> s_lock(mut);
2411 //This calls try_unlock_sharable_and_lock_upgradable()
2412 upgradable_lock<Mutex> u_lock(move(s_lock), try_to_lock);
2413 }
2414 {
2415 scoped_lock<Mutex> e_lock(mut);
2416 //This calls unlock_and_lock_upgradable()
2417 upgradable_lock<Mutex> u_lock(move(e_lock));
2418 }
2419
2420 [endsect]
2421
2422 [section:lock_transfer_summary_sharable Transfers To Sharable Lock]
2423
2424 All transfers to a `sharable_lock` are guaranteed to succeed since both
2425 `upgradable_lock` and `scoped_lock` are more restrictive than `sharable_lock`.
2426 These operations are also non-blocking:
2427
2428 [c++]
2429
2430 //Conversions to sharable_lock
2431 {
2432 upgradable_lock<Mutex> u_lock(mut);
2433 //This calls unlock_upgradable_and_lock_sharable()
2434 sharable_lock<Mutex> s_lock(move(u_lock));
2435 }
2436 {
2437 scoped_lock<Mutex> e_lock(mut);
2438 //This calls unlock_and_lock_sharable()
2439 sharable_lock<Mutex> s_lock(move(e_lock));
2440 }
2441
2442 [endsect]
2443
2444 [endsect]
2445
2446 [section:lock_transfer_not_locked Transferring Unlocked Locks]
2447
2448 In the previous examples, the mutex used in the transfer operation was previously
2449 locked:
2450
2451 [c++]
2452
2453 Mutex mut;
2454
2455 //This calls mut.lock()
2456 scoped_lock<Mutex> e_lock(mut);
2457
2458 //This calls unlock_and_lock_sharable()
2459 sharable_lock<Mutex> s_lock(move(e_lock));
2461
2462 but it's possible to execute the transfer with an unlocked source, for example after an explicit
2463 unlock, a failed try or timed lock, or a `defer_lock` construction:
2464
2465 [c++]
2466
2467 //These operations can leave the mutex unlocked!
2468
2469 {
2470 //Try might fail
2471 scoped_lock<Mutex> e_lock(mut, try_to_lock);
2472 sharable_lock<Mutex> s_lock(move(e_lock));
2473 }
2474 {
2475 //Timed operation might fail
2476 scoped_lock<Mutex> e_lock(mut, time);
2477 sharable_lock<Mutex> s_lock(move(e_lock));
2478 }
2479 {
2480 //Avoid mutex locking
2481 scoped_lock<Mutex> e_lock(mut, defer_lock);
2482 sharable_lock<Mutex> s_lock(move(e_lock));
2483 }
2484 {
2485 //Explicitly call unlock
2486 scoped_lock<Mutex> e_lock(mut);
2487 e_lock.unlock();
2488 //Mutex was explicitly unlocked
2489 sharable_lock<Mutex> s_lock(move(e_lock));
2490 }
2491
2492 If the source mutex was not locked:
2493
2494 * The target lock does not execute the atomic `unlock_xxx_and_lock_xxx` operation.
2495 * The target lock is also unlocked.
2496 * The source lock gives up the mutex (as if `release()` had been called) and ownership of the mutex is transferred to the target lock.
2497
2498 [c++]
2499
2500 {
2501 scoped_lock<Mutex> e_lock(mut, defer_lock);
2502 sharable_lock<Mutex> s_lock(move(e_lock));
2503
2504 //Assertions
2505 assert(e_lock.mutex() == 0);
2506 assert(s_lock.mutex() != 0);
2507 assert(e_lock.owns() == false);
2508 }
2509
2510 [endsect]
2511
2512 [section:lock_transfer_failure Transfer Failures]
2513
2514 When executing a lock transfer, the operation can fail:
2515
2516 * The executed atomic mutex unlock plus lock function might throw.
2517 * The executed atomic function might be a "try" or "timed" function that can fail.
2518
2519 In the first case, the mutex ownership is not transferred and the source lock's
2520 destructor will unlock the mutex:
2521
2522 [c++]
2523
2524 {
2525 scoped_lock<Mutex> e_lock(mut, defer_lock);
2526
2527 //This operation throws because
2528 //"unlock_and_lock_sharable()" throws!!!
2529 sharable_lock<Mutex> s_lock(move(e_lock));
2530
2531 //Some code ...
2532
2533 //e_lock's destructor will call "unlock()"
2534 }
2535
2536 In the second case, if an internal "try" or "timed" operation fails (returns "false")
2537 then the mutex ownership is [*not] transferred, the source lock is unchanged and the target
2538 lock's state will be the same as that of a default-constructed lock:
2539
2540 [c++]
2541
2542 {
2543 sharable_lock<Mutex> s_lock(mut);
2544
2545 //Internal "try_unlock_sharable_and_lock_upgradable()" returns false
2546 upgradable_lock<Mutex> u_lock(move(s_lock, try_to_lock));
2547
2548 assert(s_lock.mutex() == &mut);
2549 assert(s_lock.owns() == true);
2550 assert(u_lock.mutex() == 0);
2551 assert(u_lock.owns() == false);
2552
2553 //u_lock's destructor does nothing
2554 //s_lock's destructor calls "unlock()"
2555 }
2556
2557 [endsect]
2558
2559 [endsect]
2560
2561 [section:file_lock File Locks]
2562
2563 [section:file_lock_whats_a_file_lock What's A File Lock?]
2564
2565 A file lock is an interprocess synchronization mechanism to protect concurrent
2566 writes and reads to files using a mutex ['embedded] in the file. This ['embedded mutex]
2567 has sharable and exclusive locking capabilities.
2568 With a file lock, an existing file can be used as a mutex without the need
2569 of creating additional synchronization objects to control concurrent file
2570 reads or writes.
2571
2572 Generally speaking, we can have two file locking capabilities:
2573
2574 * [*Advisory locking:] The operating system kernel maintains a list of files that
2575 have been locked, but it does not prevent writing to those files even if a process
2576 has acquired a sharable lock, nor does it prevent reading from the file when a process
2577 has acquired the exclusive lock. Any process can ignore an advisory lock.
2578 This means that advisory locks are for [*cooperating] processes,
2579 processes that can trust each other. This is similar to a mutex protecting data
2580 in a shared memory segment: any process connected to that memory can overwrite the
2581 data but [*cooperative] processes use mutexes to protect the data first acquiring
2582 the mutex lock.
2583
2584 * [*Mandatory locking:] The OS kernel checks every read and write request to verify
2585 that the operation can be performed according to the acquired lock. Reads and writes
2586 block until the lock is released.
2587
2588 [*Boost.Interprocess] implements [*advisory locking] for portability reasons.
2589 This means that every process accessing a file concurrently must cooperate using
2590 file locks to synchronize the access.
2591
2592 In some systems file locking can be even further refined, leading to [*record locking],
2593 where a user can specify a [*byte range] within the file where the lock is applied.
2594 This allows concurrent write access by several processes if they need to access a
2595 different byte range in the file. [*Boost.Interprocess] does [*not] offer record
2596 locking for the moment, but might offer it in the future. To use a file lock just
2597 include:
2598
2599 [c++]
2600
2601 #include <boost/interprocess/sync/file_lock.hpp>
2602
2603 A file lock is a class that has [*process lifetime]. This means that if a process
2604 holding a file lock ends or crashes, the operating system will automatically unlock
2605 it. This feature is very useful in some situations where we want to ensure
2606 automatic unlocking even when the process crashes and avoid leaving blocked resources
2607 in the system. A file lock is constructed using the name of the file as an argument:
2608
2609 [c++]
2610
2611 #include <boost/interprocess/sync/file_lock.hpp>
2612
2613 int main()
2614 {
2615 //This throws if the file does not exist or it can't
2616 //open it with read-write access!
2617 boost::interprocess::file_lock flock("my_file");
2618 return 0;
2619 }
2620
2621
2622 [endsect]
2623
2624 [section:file_lock_operations File Locking Operations]
2625
2626 File locking has normal mutex operations plus sharable locking capabilities.
2627 This means that we can have multiple readers holding the sharable lock while
2628 writers wanting the exclusive lock wait until the readers finish their job.
2629
2630 However, file locking does [*not] support upgradable locking or promotion or
2631 demotion (lock transfers), so it's more limited than an upgradable lock.
2632 These are the operations:
2633
2634 [blurb ['[*void lock()]]]
2635
2636 [*Effects:]
2637 The calling thread tries to obtain exclusive ownership of the file lock, and if
2638 another thread has exclusive or sharable ownership of the mutex,
2639 it waits until it can obtain the ownership.
2640
2641 [*Throws:] *interprocess_exception* on error.
2642
2643 [blurb ['[*bool try_lock()]]]
2644
2645 [*Effects:]
2646 The calling thread tries to acquire exclusive ownership of the file lock
2647 without waiting. If no other thread has exclusive or sharable ownership of
2648 the file lock, this succeeds.
2649
2650 [*Returns:] If it can acquire exclusive ownership immediately, returns true.
2651 If it has to wait, returns false.
2652
2653 [*Throws:] *interprocess_exception* on error.
2654
2655 [blurb ['[*bool timed_lock(const boost::posix_time::ptime &abs_time)]]]
2656
2657 [*Effects:]
2658 The calling thread tries to acquire exclusive ownership of the file lock
2659 waiting if necessary until no other thread has exclusive or
2660 sharable ownership of the file lock or abs_time is reached.
2661
2662 [*Returns:] If it acquires exclusive ownership, returns true. Otherwise
2663 returns false.
2664
2665 [*Throws:] *interprocess_exception* on error.
2666
2667 [blurb ['[*void unlock()]]]
2668
2669 [*Precondition:] The thread must have exclusive ownership of the file lock.
2670
2671 [*Effects:] The calling thread releases the exclusive ownership of the file lock.
2672
2673 [*Throws:] An exception derived from *interprocess_exception* on error.
2674
2675 [blurb ['[*void lock_sharable()]]]
2676
2677 [*Effects:]
2678 The calling thread tries to obtain sharable ownership of the file lock,
2679 and if another thread has exclusive ownership of the file lock,
2680 waits until it can obtain the ownership.
2681
2682 [*Throws:] *interprocess_exception* on error.
2683
2684 [blurb ['[*bool try_lock_sharable()]]]
2685
2686 [*Effects:]
2687 The calling thread tries to acquire sharable ownership of the file
2688 lock without waiting. If no other thread has exclusive ownership of
2689 the file lock, this succeeds.
2690
2691 [*Returns:] If it can acquire sharable ownership immediately, returns true.
2692 If it has to wait, returns false.
2693
2694 [*Throws:] *interprocess_exception* on error.
2695
2696 [blurb ['[*bool timed_lock_sharable(const boost::posix_time::ptime &abs_time)]]]
2697
2698 [*Effects:]
2699 The calling thread tries to acquire sharable ownership of the file lock
2700 waiting if necessary until no other thread has exclusive
2701 ownership of the file lock or abs_time is reached.
2702
2703 [*Returns:] If it acquires sharable ownership, returns true. Otherwise
2704 returns false.
2705
2706 [*Throws:] *interprocess_exception* on error.
2707
2708 [blurb ['[*void unlock_sharable()]]]
2709
2710 [*Precondition:] The thread must have sharable ownership of the file lock.
2711
2712 [*Effects:] The calling thread releases the sharable ownership of the file lock.
2713
2714 [*Throws:] An exception derived from *interprocess_exception* on error.
2715
2716 For more file locking methods, please see the
2717 [classref boost::interprocess::file_lock file_lock reference].
2718
2719 [important `boost::posix_time::ptime` absolute time points used by Boost.Interprocess synchronization mechanisms
2720 are UTC time points, not local time points]
2721
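As a small sketch of a non-blocking usage pattern (the file name is illustrative and the
file must already exist):

[c++]

   #include <boost/interprocess/sync/file_lock.hpp>

   using namespace boost::interprocess;

   bool try_exclusive_access()
   {
      //Throws if "my_file" does not exist or can't be opened
      file_lock flock("my_file");

      if(!flock.try_lock()){
         return false;   //another process holds an incompatible lock
      }
      //... read/write the file with exclusive ownership ...
      flock.unlock();
      return true;
   }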
2722 [endsect]
2723
2724 [section:file_lock_scoped_locks Scoped Lock and Sharable Lock With File Locking]
2725
2726 [classref boost::interprocess::scoped_lock scoped_lock] and
2727 [classref boost::interprocess::sharable_lock sharable_lock] can be used to make
2728 file locking easier in the presence of exceptions, just like with mutexes:
2729
2730 [c++]
2731
2732 #include <boost/interprocess/sync/file_lock.hpp>
2733 #include <boost/interprocess/sync/sharable_lock.hpp>
2734 //...
2735
2736 using namespace boost::interprocess;
2737 //This process reads the file
2738 // ...
2739 //Open the file lock
2740 file_lock f_lock("my_file");
2741
2742 {
      //Construct a sharable lock with the file lock.
      //This will call "f_lock.lock_sharable()".
2745 sharable_lock<file_lock> sh_lock(f_lock);
2746
2747 //Now read the file...
2748
2749 //The sharable lock is automatically released by
2750 //sh_lock's destructor
2751 }
2752
2753 [c++]
2754
2755 #include <boost/interprocess/sync/file_lock.hpp>
2756 #include <boost/interprocess/sync/scoped_lock.hpp>
2757
2758 //...
2759
2760 using namespace boost::interprocess;
2761 //This process writes the file
2762 // ...
2763 //Open the file lock
2764 file_lock f_lock("my_file");
2765
2766 {
      //Construct a scoped lock with the file lock.
2768 //This will call "f_lock.lock()".
2769 scoped_lock<file_lock> e_lock(f_lock);
2770
2771 //Now write the file...
2772
2773 //The exclusive lock is automatically released by
2774 //e_lock's destructor
2775 }
2776
However, lock transfers are only allowed between locks of the same type, that is,
2778 from a sharable lock to another sharable lock or from a scoped lock to another
2779 scoped lock. A transfer from a scoped lock to a sharable lock is not allowed,
2780 because [classref boost::interprocess::file_lock file_lock] has no lock
2781 promotion or demotion functions like `unlock_and_lock_sharable()`.
2782 This will produce a compilation error:
2783
2784 [c++]
2785
2786 //Open the file lock
2787 file_lock f_lock("my_file");
2788
2789 scoped_lock<file_lock> e_lock(f_lock);
2790
   //Compilation error, file_lock has no "unlock_and_lock_sharable()" member!
   sharable_lock<file_lock> sh_lock(move(e_lock));
2793
2794
2795 [endsect]
2796
2797 [section:file_lock_not_thread_safe Caution: Synchronization limitations]
2798
2799 If you plan to use file locks just like named mutexes, be careful, because portable
2800 file locks have synchronization limitations, mainly because different implementations
2801 (POSIX, Windows) offer different guarantees. Interprocess file locks have the following
2802 limitations:
2803
2804 * It's unspecified if a `file_lock` synchronizes [*two threads from the same process].
2805 * It's unspecified if a process can use two `file_lock` objects pointing to the same file.
2806
2807 The first limitation comes mainly from POSIX, since a file handle is a per-process attribute
2808 and not a per-thread attribute. This means that if a thread uses a `file_lock` object to lock
2809 a file, other threads will see the file as locked.
The Windows file locking mechanism, on the other hand, offers thread-synchronization guarantees,
so a thread trying to lock an already locked file would block.
2812
2813 The second limitation comes from the fact that file locking synchronization state
2814 is tied with a single file descriptor in Windows. This means that if two `file_lock`
2815 objects are created pointing to the same file, no synchronization is guaranteed. In
POSIX, when two file descriptors are used to lock a file, if one of the descriptors is closed
all file locks set by the calling process are cleared.
2818
To sum up, if you plan to use portable file locking in your processes, observe the following
restrictions:
2821
2822 * [*For each file, use a single `file_lock` object per process.]
2823 * [*Use the same thread to lock and unlock a file.]
2824 * If you are using a std::fstream/native file handle to write to the file
2825 while using file locks on that file, [*don't close the file before
2826 releasing all the locks of the file.]
2827
2828 [endsect]
2829
2830 [section:file_lock_careful_iostream Be Careful With Iostream Writing]
2831
As we've seen, file locking can be useful to synchronize two processes,
but [*make sure data is written to the file]
before unlocking the file lock. Bear in mind that iostream classes do some
kind of buffering, so if you want to make sure that other processes can
see the data you've written, you have the following alternatives:
2837
2838 * Use native file functions (read()/write() in Unix systems and ReadFile/WriteFile
2839 in Windows systems) instead of iostream.
2840
* Flush data before unlocking the file lock in writers, using `fflush` if you are using
  standard C functions or the `flush()` member function when using C++ iostreams. In Windows
  you can't even use another class to access the same file.
2844
[c++]

   #include <boost/interprocess/sync/file_lock.hpp>
   #include <boost/interprocess/sync/scoped_lock.hpp>
   #include <fstream>

   //...

   using namespace boost::interprocess;
   //This process writes the file
   // ...
   //Open the file and the file lock
   std::fstream file("my_file");
   file_lock f_lock("my_lock_file");

   {
      scoped_lock<file_lock> e_lock(f_lock);

      //Now write the file...

      //Flush data before unlocking the exclusive lock
      file.flush();
   }
2862
2863 [endsect]
2864
2865 [endsect]
2866
2867 [section:message_queue Message Queue]
2868
2869 [section:message_queue_whats_a_mq What's A Message Queue?]
2870
2871 A message queue is similar to a list of messages. Threads can put messages
2872 in the queue and they can also remove messages from the queue. Each message
can also have a [*priority] so that higher priority messages are read before
2874 lower priority messages. Each message has some attributes:
2875
2876 * A priority.
2877 * The length of the message.
2878 * The data (if length is bigger than 0).
2879
2880 A thread can send a message to or receive a message from the message
2881 queue using 3 methods:
2882
2883 * [*Blocking]: If the message queue is full when sending or the message queue
2884 is empty when receiving, the thread is blocked until there
2885 is room for a new message or there is a new message.
2886 * [*Try]: If the message queue is full when sending or the message queue is empty
2887 when receiving, the thread returns immediately with an error.
2888 * [*Timed]: If the message queue is full when sending or the message queue is empty
when receiving, the thread retries the operation until it succeeds (returning
success) or a timeout is reached (returning failure).
2891
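These three flavours map onto the queue's member functions (a minimal sketch, assuming
a queue named "message_queue" that already exists and stores `int`-sized messages):

[c++]

   #include <boost/interprocess/ipc/message_queue.hpp>
   #include <boost/date_time/posix_time/posix_time.hpp>

   using namespace boost::interprocess;

   message_queue mq(open_only, "message_queue");

   int msg = 7;

   //Blocking: waits until there is room in the queue
   mq.send(&msg, sizeof(msg), 0);

   //Try: returns false immediately if the queue is full
   bool sent = mq.try_send(&msg, sizeof(msg), 0);

   //Timed: waits until there is room or the (UTC) deadline is reached
   boost::posix_time::ptime deadline =
      boost::posix_time::microsec_clock::universal_time()
      + boost::posix_time::seconds(1);
   bool sent_in_time = mq.timed_send(&msg, sizeof(msg), 0, deadline);

   //Receiving works analogously with receive/try_receive/timed_receive
   message_queue::size_type recvd_size;
   unsigned int priority;
   mq.receive(&msg, sizeof(msg), recvd_size, priority);
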
2892 A message queue [*just copies raw bytes between processes] and does not send
2893 objects. This means that if we want to send an object using a message queue
2894 [*the object must be binary serializable]. For example, we can send integers
2895 between processes but [*not] a `std::string`. You should use [*Boost.Serialization]
2896 or use advanced [*Boost.Interprocess] mechanisms to send complex data between
2897 processes.
2898
The [*Boost.Interprocess] message queue is a named interprocess communication mechanism: the
2900 message queue is created with a name and it's opened with a name, just like a file.
2901 When creating a message queue, the user must specify the maximum message size and
2902 the maximum message number that the message queue can store. These parameters will
2903 define the resources (for example the size of the shared memory used to implement
2904 the message queue if shared memory is used).
2905
2906 [c++]
2907
   using namespace boost::interprocess;
2909 //Create a message_queue. If the queue
2910 //exists throws an exception
2911 message_queue mq
2912 (create_only //only create
2913 ,"message_queue" //name
2914 ,100 //max message number
2915 ,100 //max message size
2916 );
2917
2918 [c++]
2919
   using namespace boost::interprocess;
2921 //Creates or opens a message_queue. If the queue
2922 //does not exist creates it, otherwise opens it.
2923 //Message number and size are ignored if the queue
2924 //is opened
2925 message_queue mq
2926 (open_or_create //open or create
2927 ,"message_queue" //name
2928 ,100 //max message number
2929 ,100 //max message size
2930 );
2931
2932 [c++]
2933
   using namespace boost::interprocess;
2935 //Opens a message_queue. If the queue
2936 //does not exist throws an exception.
2937 message_queue mq
2938 (open_only //only open
2939 ,"message_queue" //name
2940 );
2941
The message queue is explicitly removed by calling the static `remove` function:
2943
2944 [c++]
2945
   using namespace boost::interprocess;
2947 message_queue::remove("message_queue");
2948
2949 The function can fail if the message queue is still being used by any process.
2950
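For example (a minimal sketch), the return value can be checked to detect that case:

[c++]

   using namespace boost::interprocess;

   if(!message_queue::remove("message_queue")){
      //The queue could not be removed (e.g. it's still being used)
   }
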
2951 [endsect]
2952
2953 [section:message_queue_example Using a message queue]
2954
2955 To use a message queue you must include the following header:
2956
2957 [c++]
2958
2959 #include <boost/interprocess/ipc/message_queue.hpp>
2960
In the following example, the first process creates the message queue and writes
an array of integers into it. The other process just reads the array and checks that
2963 the sequence number is correct. This is the first process:
2964
2965 [import ../example/comp_doc_message_queueA.cpp]
2966 [doc_message_queueA]
2967
2968 This is the second process:
2969
2970 [import ../example/comp_doc_message_queueB.cpp]
2971 [doc_message_queueB]
2972
2973 To know more about this class and all its operations, please see the
2974 [classref boost::interprocess::message_queue message_queue] class reference.
2975
2976 [endsect]
2977
2978 [endsect]
2979
2980 [endsect]
2981
2982 [section:managed_memory_segments Managed Memory Segments]
2983
2984 [section:making_ipc_easy Making Interprocess Data Communication Easy]
2985
2986 [section:managed_memory_segments_intro Introduction]
2987
2988 As we have seen, [*Boost.Interprocess] offers some basic classes to create shared memory
2989 objects and file mappings and map those mappable classes to the process' address space.
2990
However, managing those memory segments is not easy for non-trivial tasks.
A mapped region is a fixed-length memory buffer, and creating and destroying objects
of any type dynamically requires a lot of work, since it would require programming
a memory management algorithm to allocate portions of that segment.
Many times, we also want to associate names with objects created in shared memory, so
all the processes can find the object using the name.
2997
2998 [*Boost.Interprocess] offers 4 managed memory segment classes:
2999
3000 * To manage a shared memory mapped region ([*basic_managed_shared_memory] class).
3001 * To manage a memory mapped file ([*basic_managed_mapped_file]).
3002 * To manage a heap allocated (`operator new`) memory buffer ([*basic_managed_heap_memory] class).
3003 * To manage a user provided fixed size buffer ([*basic_managed_external_buffer] class).
3004
3005 The first two classes manage memory segments that can be shared between processes. The
third is useful to create complex databases to be sent through other mechanisms, like
message queues, to other processes. The fourth class can manage any fixed size memory
3008 buffer. The first two classes will be explained in the next two sections.
3009 [*basic_managed_heap_memory] and [*basic_managed_external_buffer] will be explained later.
3010
3011 The most important services of a managed memory segment are:
3012
* Dynamic allocation of portions of the memory segment.
3014 * Construction of C++ objects in the memory segment. These objects can be anonymous
3015 or we can associate a name to them.
3016 * Searching capabilities for named objects.
3017 * Customization of many features: memory allocation algorithm, index types or
3018 character types.
3019 * Atomic constructions and destructions so that if the segment is shared between
3020 two processes it's impossible to create two objects associated with the same
3021 name, simplifying synchronization.
3022
3023 [endsect]
3024
3025 [section:managed_memory_segment_int Declaration of managed memory segment classes]
3026
3027 All [*Boost.Interprocess] managed memory segment classes are templatized classes
3028 that can be customized by the user:
3029
3030 [c++]
3031
3032 template
3033 <
3034 class CharType,
3035 class MemoryAlgorithm,
3036 template<class IndexConfig> class IndexType
3037 >
3038 class basic_managed_shared_memory / basic_managed_mapped_file /
         basic_managed_heap_memory / basic_managed_external_buffer;
3040
3041 These classes can be customized with the following template parameters:
3042
3043 * *CharType* is the type of the character that will be used to identify
3044 the created named objects (for example, *char* or *wchar_t*)
3045
3046 * *MemoryAlgorithm* is the memory algorithm used to allocate portions of the
segment (for example, rbtree_best_fit). The internal typedefs of the
3048 memory algorithm also define:
3049 * The synchronization type (`MemoryAlgorithm::mutex_family`) to be used
3050 in all allocation operations.
3051 This allows the use of user-defined mutexes or avoiding internal
locking (perhaps because the code will be externally synchronized by the user).
3053
3054 * The Pointer type (`MemoryAlgorithm::void_pointer`) to be used
3055 by the memory allocation algorithm or additional helper structures
3056 (like a map to maintain object/name associations). All STL compatible
3057 allocators and containers to be used with this managed memory segment
3058 will use this pointer type. The pointer type
3059 will define if the managed memory segment can be mapped between
3060 several processes. For example, if `void_pointer` is `offset_ptr<void>`
3061 we will be able to map the managed segment in different base
3062 addresses in each process. If `void_pointer` is `void*` only fixed
3063 address mapping could be used.
3064
3065 * See [link interprocess.customizing_interprocess.custom_interprocess_alloc Writing a new memory
3066 allocation algorithm] for more details about memory algorithms.
3067
3068 * *IndexType* is the type of index that will be used to store the name-object
3069 association (for example, a map, a hash-map, or an ordered vector).
3070
3071 This way, we can use `char` or `wchar_t` strings to identify created C++
3072 objects in the memory segment, we can plug new shared memory allocation
3073 algorithms, and use the index type that is best suited to our needs.
3074
3075 [endsect]
3076
3077 [endsect]
3078
3079 [section:managed_shared_memory Managed Shared Memory]
3080
3081 [section:managed_memory_common_shm Common Managed Shared Memory Classes]
3082
3083 As seen, *basic_managed_shared_memory* offers a great variety of customization. But
3084 for the average user, a common, default shared memory named object creation is needed.
3085 Because of this, [*Boost.Interprocess] defines the most common managed shared memory
3086 specializations:
3087
3088 [c++]
3089
3090 //!Defines a managed shared memory with c-strings as keys for named objects,
3091 //!the default memory algorithm (with process-shared mutexes,
3092 //!and offset_ptr as internal pointers) as memory allocation algorithm
3093 //!and the default index type as the index.
3094 //!This class allows the shared memory to be mapped in different base
   //!addresses in different processes
3096 typedef
3097 basic_managed_shared_memory<char
3098 ,/*Default memory algorithm defining offset_ptr<void> as void_pointer*/
3099 ,/*Default index type*/>
3100 managed_shared_memory;
3101
3102 //!Defines a managed shared memory with wide strings as keys for named objects,
3103 //!the default memory algorithm (with process-shared mutexes,
3104 //!and offset_ptr as internal pointers) as memory allocation algorithm
3105 //!and the default index type as the index.
3106 //!This class allows the shared memory to be mapped in different base
   //!addresses in different processes
3108 typedef
3109 basic_managed_shared_memory<wchar_t
3110 ,/*Default memory algorithm defining offset_ptr<void> as void_pointer*/
3111 ,/*Default index type*/>
3112 wmanaged_shared_memory;
3113
3114 `managed_shared_memory` allocates objects in shared memory associated with a c-string and
3115 `wmanaged_shared_memory` allocates objects in shared memory associated with a wchar_t null
3116 terminated string. Both define the pointer type as `offset_ptr<void>` so they can be
3117 used to map the shared memory at different base addresses in different processes.
3118
3119 If the user wants to map the shared memory in the same address in all processes and
wants to use raw pointers internally instead of offset pointers, [*Boost.Interprocess]
3121 defines the following types:
3122
3123 [c++]
3124
3125 //!Defines a managed shared memory with c-strings as keys for named objects,
3126 //!the default memory algorithm (with process-shared mutexes,
   //!and raw pointers as internal pointers) as memory allocation algorithm
   //!and the default index type as the index.
   //!This class requires the shared memory to be mapped at the same base
   //!address in all processes
3131 typedef basic_managed_shared_memory
3132 <char
3133 ,/*Default memory algorithm defining void * as void_pointer*/
3134 ,/*Default index type*/>
3135 fixed_managed_shared_memory;
3136
3137 //!Defines a managed shared memory with wide strings as keys for named objects,
3138 //!the default memory algorithm (with process-shared mutexes,
   //!and raw pointers as internal pointers) as memory allocation algorithm
   //!and the default index type as the index.
   //!This class requires the shared memory to be mapped at the same base
   //!address in all processes
3143 typedef basic_managed_shared_memory
3144 <wchar_t
3145 ,/*Default memory algorithm defining void * as void_pointer*/
3146 ,/*Default index type*/>
3147 wfixed_managed_shared_memory;
3148
3149 [endsect]
3150
3151 [section:constructing_managed_shared_memories Constructing Managed Shared Memory]
3152
3153 Managed shared memory is an advanced class that combines a shared memory object
and a mapped region that covers the whole shared memory object. That means that
3155 when we [*create] a new managed shared memory:
3156
3157 * A new shared memory object is created.
3158 * The whole shared memory object is mapped in the process' address space.
3159 * Some helper objects are constructed (name-object index, internal synchronization
3160 objects, internal variables...) in the mapped region to implement
3161 managed memory segment features.
3162
When we [*open] a managed shared memory:
3164
3165 * A shared memory object is opened.
3166 * The whole shared memory object is mapped in the process' address space.
3167
3168 To use a managed shared memory, you must include the following header:
3169
3170 [c++]
3171
3172 #include <boost/interprocess/managed_shared_memory.hpp>
3173
3174
3175 [c++]
3176
3177 //1. Creates a new shared memory object
3178 // called "MySharedMemory".
3179 //2. Maps the whole object to this
3180 // process' address space.
3181 //3. Constructs some objects in shared memory
3182 // to implement managed features.
3183 //!! If anything fails, throws interprocess_exception
3184 //
3185 managed_shared_memory segment ( create_only
3186 , "MySharedMemory" //Shared memory object name
3187 , 65536); //Shared memory object size in bytes
3188
3189
3190 [c++]
3191
3192 //1. Opens a shared memory object
3193 // called "MySharedMemory".
3194 //2. Maps the whole object to this
3195 // process' address space.
3196 //3. Obtains pointers to constructed internal objects
3197 // to implement managed features.
3198 //!! If anything fails, throws interprocess_exception
3199 //
3200 managed_shared_memory segment (open_only, "MySharedMemory");//Shared memory object name
3201
3202
3203 [c++]
3204
3205 //1. If the segment was previously created
3206 // equivalent to "open_only" (size is ignored).
3207 //2. Otherwise, equivalent to "create_only"
3208 //!! If anything fails, throws interprocess_exception
3209 //
3210 managed_shared_memory segment ( open_or_create
3211 , "MySharedMemory" //Shared memory object name
3212 , 65536); //Shared memory object size in bytes
3213
3214
3215 When the `managed_shared_memory` object is destroyed, the shared memory
3216 object is automatically unmapped, and all the resources are freed. To remove
3217 the shared memory object from the system you must use the `shared_memory_object::remove`
function. Shared memory object removal might fail if any
3219 process still has the shared memory object mapped.
3220
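For example (a minimal sketch), once every process has destroyed its
`managed_shared_memory` object, the segment can be removed like this:

[c++]

   //Returns false if the shared memory object can't be removed
   shared_memory_object::remove("MySharedMemory");
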
3221 The user can also map the managed shared memory in a fixed address. This option is
essential when using `fixed_managed_shared_memory`. To do this, just
3223 add the mapping address as an extra parameter:
3224
3225 [c++]
3226
   fixed_managed_shared_memory segment (open_only
                                       ,"MyFixedAddressSharedMemory"   //Shared memory object name
                                       ,(void*)0x30000000);            //Mapping address
3229
3230 [endsect]
3231
3232 [section:windows_managed_memory_common_shm Using native windows shared memory]
3233
3234 Windows users might also want to use native windows shared memory instead of
3235 the portable [classref boost::interprocess::shared_memory_object shared_memory_object]
3236 managed memory. This is achieved through the
3237 [classref boost::interprocess::basic_managed_windows_shared_memory basic_managed_windows_shared_memory]
3238 class. To use it just include:
3239
3240 [c++]
3241
3242 #include <boost/interprocess/managed_windows_shared_memory.hpp>
3243
3244 This class has the same interface as
3245 [classref boost::interprocess::basic_managed_shared_memory basic_managed_shared_memory]
3246 but uses native windows shared memory. Note that this managed class has the same
3247 lifetime issues as the windows shared memory: when the last process attached to the
3248 windows shared memory is detached from the memory (or ends/crashes) the memory is
3249 destroyed. So there is no persistence support for windows shared memory.
3250
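Creation mirrors `managed_shared_memory`; for example (a minimal sketch, the segment
name is an assumption):

[c++]

   using namespace boost::interprocess;

   //Same constructors as managed_shared_memory, but the memory disappears
   //when the last attached process detaches or ends
   managed_windows_shared_memory segment(open_or_create, "MyWindowsSharedMemory", 65536);
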
3251 To communicate between system services and user applications using `managed_windows_shared_memory`,
3252 please read the explanations given in chapter
3253 [link interprocess.sharedmemorybetweenprocesses.sharedmemory.windows_shared_memory Native windows shared memory].
3254
3255 [endsect]
3256
3257 [section:xsi_managed_memory_common_shm Using XSI (system V) shared memory]
3258
Unix users might also want to use XSI (System V) shared memory instead of
3260 the portable [classref boost::interprocess::shared_memory_object shared_memory_object]
3261 managed memory. This is achieved through the
3262 [classref boost::interprocess::basic_managed_xsi_shared_memory basic_managed_xsi_shared_memory]
3263 class. To use it just include:
3264
3265 [c++]
3266
3267 #include <boost/interprocess/managed_xsi_shared_memory.hpp>
3268
3269 This class has nearly the same interface as
3270 [classref boost::interprocess::basic_managed_shared_memory basic_managed_shared_memory]
3271 but uses XSI shared memory as backend.
3272
3273 [endsect]
3274
3275 For more information about managed XSI shared memory capabilities, see
3276 [classref boost::interprocess::basic_managed_xsi_shared_memory basic_managed_xsi_shared_memory] class reference.
3277
3278 [endsect]
3279
3280 [section:managed_mapped_files Managed Mapped File]
3281
3282 [section:managed_memory_common_mfile Common Managed Mapped Files]
3283
3284 As seen, *basic_managed_mapped_file* offers a great variety of customization. But
for the average user, a common, default memory mapped file named object creation is needed.
3286 Because of this, [*Boost.Interprocess] defines the most common managed mapped file
3287 specializations:
3288
3289 [c++]
3290
3291 //Named object creation managed memory segment
3292 //All objects are constructed in the memory-mapped file
3293 // Names are c-strings,
   //   Default memory management algorithm (rbtree_best_fit with process-shared mutexes)
3295 // Name-object mappings are stored in the default index type (flat_map)
3296 typedef basic_managed_mapped_file <
3297 char,
3298 rbtree_best_fit<mutex_family, offset_ptr<void> >,
3299 flat_map_index
3300 > managed_mapped_file;
3301
3302 //Named object creation managed memory segment
3303 //All objects are constructed in the memory-mapped file
3304 // Names are wide-strings,
   //   Default memory management algorithm (rbtree_best_fit with process-shared mutexes)
3306 // Name-object mappings are stored in the default index type (flat_map)
3307 typedef basic_managed_mapped_file<
3308 wchar_t,
3309 rbtree_best_fit<mutex_family, offset_ptr<void> >,
3310 flat_map_index
3311 > wmanaged_mapped_file;
3312
`managed_mapped_file` allocates objects in a memory mapped file associated with a c-string
3314 and `wmanaged_mapped_file` allocates objects in a memory mapped file associated with a wchar_t null
3315 terminated string. Both define the pointer type as `offset_ptr<void>` so they can be
3316 used to map the file at different base addresses in different processes.
3317
3318 [endsect]
3319
3320 [section:constructing_managed_mapped_files Constructing Managed Mapped Files]
3321
3322 Managed mapped file is an advanced class that combines a file
and a mapped region that covers the whole file. That means that
3324 when we [*create] a new managed mapped file:
3325
3326 * A new file is created.
3327 * The whole file is mapped in the process' address space.
3328 * Some helper objects are constructed (name-object index, internal synchronization
3329 objects, internal variables...) in the mapped region to implement
3330 managed memory segment features.
3331
When we [*open] a managed mapped file:
3333
3334 * A file is opened.
3335 * The whole file is mapped in the process' address space.
3336
3337 To use a managed mapped file, you must include the following header:
3338
3339 [c++]
3340
3341 #include <boost/interprocess/managed_mapped_file.hpp>
3342
3343 [c++]
3344
3345 //1. Creates a new file
3346 // called "MyMappedFile".
3347 //2. Maps the whole file to this
3348 // process' address space.
3349 //3. Constructs some objects in the memory mapped
3350 // file to implement managed features.
3351 //!! If anything fails, throws interprocess_exception
3352 //
   managed_mapped_file mfile (create_only
                             ,"MyMappedFile"   //Mapped file name
                             ,65536);          //Mapped file size

[c++]

3356 //1. Opens a file
3357 // called "MyMappedFile".
3358 //2. Maps the whole file to this
3359 // process' address space.
3360 //3. Obtains pointers to constructed internal objects
3361 // to implement managed features.
3362 //!! If anything fails, throws interprocess_exception
3363 //
   managed_mapped_file mfile (open_only, "MyMappedFile");   //Mapped file name

[c++]

   //1. If the file was previously created,
   //   equivalent to "open_only" (size is ignored).
   //2. Otherwise, equivalent to "create_only".
   //
   //!! If anything fails, throws interprocess_exception
   //
   managed_mapped_file mfile (open_or_create
                             ,"MyMappedFile"   //Mapped file name
                             ,65536);          //Mapped file size

When the `managed_mapped_file` object is destroyed, the file is
3374 automatically unmapped, and all the resources are freed. To remove
3375 the file from the filesystem you could use standard C `std::remove`
3376 or [*Boost.Filesystem]'s `remove()` functions, but file removing might fail
3377 if any process still has the file mapped in memory or the file is open
3378 by any process.
3379
To obtain more portable behaviour, use the `file_mapping::remove(const char *)` operation, which
will remove the file even if it's being mapped. However, removal will fail on some operating systems
if the file is open (e.g. by C++ file streams) and no delete share permission was granted to the file.
But in most common cases `file_mapping::remove` is portable enough.
3384
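For example (a minimal sketch), removing the mapped file created above:

[c++]

   //Returns false if the file can't be removed
   file_mapping::remove("MyMappedFile");
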
3385 [endsect]
3386
3387 For more information about managed mapped file capabilities, see
3388 [classref boost::interprocess::basic_managed_mapped_file basic_managed_mapped_file] class reference.
3389
3390 [endsect]
3391
3392 [section:managed_memory_segment_features Managed Memory Segment Features]
3393
3394 The following features are common to all managed memory segment classes, but
3395 we will use managed shared memory in our examples. We can do the same with
3396 memory mapped files or other managed memory segment classes.
3397
3398 [section:allocate_deallocate Allocating fragments of a managed memory segment]
3399
If a basic raw-byte allocation is needed from a managed memory
segment (for example, a managed shared memory) to implement
top-level interprocess communications, this class offers
[*allocate] and [*deallocate] functions. The allocation function
comes in throwing and non-throwing versions. The throwing version throws
boost::interprocess::bad_alloc (which derives from `std::bad_alloc`)
if there is no more memory, and the non-throwing version returns a null pointer.
3407
3408 [import ../example/doc_managed_raw_allocation.cpp]
3409 [doc_managed_raw_allocation]
3410
3411 [endsect]
3412
3413 [section:segment_offset Obtaining handles to identify data]
3414
3415 The class also offers conversions between absolute addresses that belong to
3416 a managed memory segment and a handle that can be passed using any
3417 interprocess mechanism. That handle can be transformed again to an absolute
3418 address using a managed memory segment that also contains that object.
3419 Handles can be used as keys between processes to identify allocated portions
3420 of a managed memory segment or objects constructed in the managed segment.
3421
3422 [c++]
3423
3424 //Process A obtains the offset of the address
   managed_shared_memory::handle_t handle =
3426 segment.get_handle_from_address(processA_address);
3427
   //Process A sends this handle using any mechanism to process B
3429
3430 //Process B obtains the handle and transforms it again to an address
   managed_shared_memory::handle_t handle = ...
3432 void * processB_address = segment.get_address_from_handle(handle);
3433
3434 [endsect]
3435
3436 [section:allocation_types Object construction function family]
3437
3438 When constructing objects in a managed memory segment (managed shared memory,
3439 managed mapped files...) associated with a name, the user has a varied object
3440 construction family to "construct" or to "construct if not found". [*Boost.Interprocess]
3441 can construct a single object or an array of objects. The array can be constructed with
3442 the same parameters for all objects or we can define each parameter from a list of iterators:
3443
3444 [c++]
3445
3446 //!Allocates and constructs an object of type MyType (throwing version)
3447 MyType *ptr = managed_memory_segment.construct<MyType>("Name") (par1, par2...);
3448
3449 //!Allocates and constructs an array of objects of type MyType (throwing version)
3450 //!Each object receives the same parameters (par1, par2, ...)
3451 MyType *ptr = managed_memory_segment.construct<MyType>("Name")[count](par1, par2...);
3452
3453 //!Tries to find a previously created object. If not present, allocates
3454 //!and constructs an object of type MyType (throwing version)
3455 MyType *ptr = managed_memory_segment.find_or_construct<MyType>("Name") (par1, par2...);
3456
3457 //!Tries to find a previously created object. If not present, allocates and
3458 //!constructs an array of objects of type MyType (throwing version). Each object
3459 //!receives the same parameters (par1, par2, ...)
3460 MyType *ptr = managed_memory_segment.find_or_construct<MyType>("Name")[count](par1, par2...);
3461
3462 //!Allocates and constructs an array of objects of type MyType (throwing version)
3463 //!Each object receives parameters returned with the expression (*it1++, *it2++,... )
3464 MyType *ptr = managed_memory_segment.construct_it<MyType>("Name")[count](it1, it2...);
3465
3466 //!Tries to find a previously created object. If not present, allocates and constructs
3467 //!an array of objects of type MyType (throwing version). Each object receives
3468 //!parameters returned with the expression (*it1++, *it2++,... )
3469 MyType *ptr = managed_memory_segment.find_or_construct_it<MyType>("Name")[count](it1, it2...);
3470
3471 //!Tries to find a previously created object. Returns a pointer to the object and the
3472 //!count (if it is not an array, returns 1). If not present, the returned pointer is 0
3473 std::pair<MyType *,std::size_t> ret = managed_memory_segment.find<MyType>("Name");
3474
3475 //!Destroys the created object, returns false if not present
3476 bool destroyed = managed_memory_segment.destroy<MyType>("Name");
3477
3478 //!Destroys the created object via pointer
3479 managed_memory_segment.destroy_ptr(ptr);
3480
All these functions have a non-throwing version that
is invoked with an additional parameter, std::nothrow.
3483 For example, for simple object construction:
3484
3485 [c++]
3486
3487 //!Allocates and constructs an object of type MyType (no throwing version)
3488 MyType *ptr = managed_memory_segment.construct<MyType>("Name", std::nothrow) (par1, par2...);
3489
3490 [endsect]
3491
3492 [section:anonymous Anonymous instance construction]
3493
3494 Sometimes, the user doesn't want to create class objects associated with a name.
3495 For this purpose, [*Boost.Interprocess] can create anonymous objects in a managed
3496 memory segment. All named object construction functions are available to construct
anonymous objects. To allocate an anonymous object, the user must use
the "boost::interprocess::anonymous_instance" name instead of a normal name:
3499
3500 [c++]
3501
3502 MyType *ptr = managed_memory_segment.construct<MyType>(anonymous_instance) (par1, par2...);
3503
3504 //Other construct variants can also be used (including non-throwing ones)
3505 ...
3506
3507 //We can only destroy the anonymous object via pointer
3508 managed_memory_segment.destroy_ptr(ptr);
3509
Find functions make no sense here, since anonymous objects have no name.
3511 We can only destroy the anonymous object via pointer.
3512
3513 [endsect]
3514
3515 [section:unique Unique instance construction]
3516
3517 Sometimes, the user wants to emulate a singleton in a managed memory segment. Obviously,
3518 as the managed memory segment is constructed at run-time, the user must construct and
3519 destroy this object explicitly. But how can the user be sure that the object is the only
3520 object of its type in the managed memory segment? This can be emulated using
3521 a named object and checking if it is present before trying to create one, but
all processes must agree on the object's name, which can also conflict with
other existing names.
3524
3525 To solve this, [*Boost.Interprocess] offers a "unique object" creation in a managed memory segment.
3526 Only one instance of a class can be created in a managed memory segment using this
3527 "unique object" service (you can create more named objects of this class, though)
so it makes it easier to emulate singleton-like objects across processes, for example,
to design pooled, shared memory allocators. The object can be searched for using the type
3530 of the class as a key.
3531
3532 [c++]
3533
3534 // Construct
3535 MyType *ptr = managed_memory_segment.construct<MyType>(unique_instance) (par1, par2...);
3536
3537 // Find it
3538 std::pair<MyType *,std::size_t> ret = managed_memory_segment.find<MyType>(unique_instance);
3539
3540 // Destroy it
3541 managed_memory_segment.destroy<MyType>(unique_instance);
3542
3543 // Other construct and find variants can also be used (including non-throwing ones)
3544 //...
3545
3546 [c++]
3547
3548 // We can also destroy the unique object via pointer
3549 MyType *ptr = managed_memory_segment.construct<MyType>(unique_instance) (par1, par2...);
   managed_memory_segment.destroy_ptr(ptr);
3551
3552 The find function obtains a pointer to the only object of type T that can be created
3553 using this "unique instance" mechanism.
3554
3555 [endsect]
3556
3557 [section:synchronization Synchronization guarantees]
3558
3559 One of the features of named/unique allocations/searches/destructions is that
3560 they are [*atomic]. Named allocations use the recursive synchronization scheme defined by the
internal `mutex_family` typedef of the memory allocation algorithm template
3562 parameter (`MemoryAlgorithm`). That is, the mutex type used to synchronize
3563 named/unique allocations is defined by the
3564 `MemoryAlgorithm::mutex_family::recursive_mutex_type` type. For shared memory,
3565 and memory mapped file based managed segments this recursive mutex is defined
3566 as [classref boost::interprocess::interprocess_recursive_mutex interprocess_recursive_mutex].
3567
If two processes call:
3569
3570 [c++]
3571
3572 MyType *ptr = managed_shared_memory.find_or_construct<MyType>("Name")[count](par1, par2...);
3573
at the same time, only one process will create the object and the other will
3575 obtain a pointer to the created object.
3576
Raw allocation using `allocate()` can also be called safely while executing
named/anonymous/unique allocations, just like in a multithreaded
application inserting an object into a mutex-protected map does not block other threads
from calling new[] while the map thread is searching for the place where it has to insert the
new object. The synchronization only happens once the map finds the correct place and
has to allocate raw memory to construct the new value.
3583
3584 This means that if we are creating or searching for a lot of named objects,
3585 we only block creation/searches from other processes but we don't block another
3586 process if that process is inserting elements in a shared memory vector.
3587
3588 [endsect]
3589
3590 [section:index_types Index types for name/object mappings]
3591
3592 As seen, managed memory segments, when creating named objects, store the name/object
3593 association in an index. The index is a map with the name of the object as a key and
3594 a pointer to the object as the mapped type. The default specializations,
3595 *managed_shared_memory* and *wmanaged_shared_memory*, use *flat_map_index* as the index type.
3596
3597 Each index has its own characteristics, like search-time, insertion time, deletion time,
3598 memory use, and memory allocation patterns. [*Boost.Interprocess] offers 3 index types
3599 right now:
3600
3601 * [*boost::interprocess::flat_map_index flat_map_index]: Based on boost::interprocess::flat_map, an ordered
  vector similar to Loki library's AssocVector class, it offers great search time and
  minimum memory use. But the vector must be reallocated when it is full, so all data
  must be copied to the new buffer. Ideal when insertions happen mainly at initialization
  time and at run time we just need searches.
3606
3607 * [*boost::interprocess::map_index map_index]: Based on boost::interprocess::map, a managed memory ready
  version of std::map. Since it's a node based container, it has no reallocations; the tree
  just has to be rebalanced sometimes. It offers balanced insertion/deletion/search
  times with more overhead per node compared to *boost::interprocess::flat_map_index*.
3611 Ideal when searches/insertions/deletions are in random order.
3612
3613 * [*boost::interprocess::null_index null_index]: This index is for people using a managed
  memory segment just for raw memory buffer allocations and who don't make use
3615 of named/unique allocations. This class is just empty and saves some space and
3616 compilation time.
3617 If you try to use named object creation with a managed memory segment using this
3618 index, you will get a compilation error.
3619
As an example, if we want to define a new managed shared memory class
using *boost::interprocess::map* as the index type, we
just have to specify [classref boost::interprocess::map_index map_index] as a template parameter:
3623
3624 [c++]
3625
3626 //This managed memory segment can allocate objects with:
3627 // -> a wchar_t string as key
3628 // -> boost::interprocess::rbtree_best_fit with process-shared mutexes
3629 // as memory allocation algorithm.
3630 // -> boost::interprocess::map<...> as the index to store name/object mappings
3631 //
3632 typedef boost::interprocess::basic_managed_shared_memory
3633 < wchar_t
3634 , boost::interprocess::rbtree_best_fit<boost::interprocess::mutex_family, offset_ptr<void> >
3635 , boost::interprocess::map_index
3636 > my_managed_shared_memory;
3637
3638 [*Boost.Interprocess] plans to offer an *unordered_map* based index as soon as this
3639 container is included in Boost. If these indexes are not enough for you, you can define
3640 your own index type. To know how to do this, go to
3641 [link interprocess.customizing_interprocess.custom_indexes Building custom indexes] section.
3642
3643 [endsect]
3644
3645 [section:managed_memory_segment_segment_manager Segment Manager]
3646
3647 All [*Boost.Interprocess] managed memory segment classes construct in their
3648 respective memory segments (shared memory, memory mapped files, heap memory...)
3649 some structures to implement the memory management algorithm, named allocations,
3650 synchronization objects... All these objects are encapsulated in a single object
3651 called [*segment manager]. A managed memory mapped file and a managed shared
3652 memory use the same [*segment manager] to implement all managed memory segment
3653 features, due to the fact that a [*segment manager] is a class that manages
a fixed size memory buffer. Since both shared memory and memory mapped files
are accessed through a mapped region, and a mapped region is a fixed size
3656 memory buffer, a single [*segment manager] class can manage several managed
3657 memory segment types.
3658
3659 Some [*Boost.Interprocess] classes require a pointer to the segment manager in
3660 their constructors, and the segment manager can be obtained from any managed
memory segment using the `get_segment_manager` member function:
3662
3663 [c++]
3664
3665 managed_shared_memory::segment_manager *seg_manager =
3666 managed_shm.get_segment_manager();
3667
3668 [endsect]
3669
3670 [section:managed_memory_segment_information Obtaining information about a constructed object]
3671
3672 Once an object is constructed using `construct<>` function family, the
3673 programmer can obtain information about the object using a pointer to the
3674 object. The programmer can obtain the following information:
3675
3676 * Name of the object: If it's a named instance, the name used in the construction
3677 function is returned, otherwise 0 is returned.
3678
3679 * Length of the object: Returns the number of elements of the object (1 if it's
3680 a single value, >=1 if it's an array).
3681
3682 * The type of construction: Whether the object was constructed using a named,
3683 unique or anonymous construction.
3684
3685 Here is an example showing this functionality:
3686
3687 [import ../example/doc_managed_construction_info.cpp]
3688 [doc_managed_construction_info]
3689
3690 [endsect]
3691
3692 [section:managed_memory_segment_atomic_func Executing an object function atomically]
3693
3694 Sometimes the programmer must execute some code, and needs to execute it with the
3695 guarantee that no other process or thread will create or destroy any named, unique
3696 or anonymous object while executing the functor. A user might want to create several
named objects and initialize them, but wants those objects to become available to the rest of
the processes all at once.
3699
3700 To achieve this, the programmer can use the `atomic_func()` function offered by
3701 managed classes:
3702
3703 [c++]
3704
3705 //This object function will create several named objects
3706 create_several_objects_func func(/**/);
3707
3708 //While executing the function, no other process will be
3709 //able to create or destroy objects
3710 managed_memory.atomic_func(func);
3711
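A hypothetical `create_several_objects_func` (its name and members are assumptions, shown
only to sketch the idea) can be a simple function object holding a reference to the segment:

[c++]

   struct create_several_objects_func
   {
      managed_shared_memory &shm;

      create_several_objects_func(managed_shared_memory &s)
         : shm(s)
      {}

      void operator()()
      {
         //No other process can create, find or destroy named objects
         //while this functor is being executed
         shm.construct<int>("IntegerA")(1);
         shm.construct<int>("IntegerB")(2);
         shm.construct<double>("DoubleA")(3.0);
      }
   };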
3712
3713 Note that `atomic_func` does not prevent other processes from allocating raw memory
3714 or executing member functions for already constructed objects (e.g.: another process
3715 might be pushing elements into a vector placed in the segment). The atomic function
3716 only blocks named, unique and anonymous creation, search and destruction
3717 (concurrent calls to `construct<>`, `find<>`, `find_or_construct<>`, `destroy<>`...)
3718 from other processes.
3719
3720 [endsect]
3721
3722 [endsect]
3723
3724 [section:managed_memory_segment_advanced_features Managed Memory Segment Advanced Features]
3725
3726 [section:managed_memory_segment_information Obtaining information about the managed segment]
3727
3728 These functions are available to obtain information about the managed memory segments:
3729
3730 Obtain the size of the memory segment:
3731
3732 [c++]
3733
3734 managed_shm.get_size();
3735
3736 Obtain the number of free bytes of the segment:
3737
3738 [c++]
3739
3740 managed_shm.get_free_memory();
3741
3742 Clear to zero the free memory:
3743
3744 [c++]
3745
3746 managed_shm.zero_free_memory();
3747
Returns true if all memory has been deallocated, false otherwise:
3749
3750 [c++]
3751
3752 managed_shm.all_memory_deallocated();
3753
3754 Test internal structures of the managed segment. Returns true
3755 if no errors are detected:
3756
3757 [c++]
3758
3759 managed_shm.check_sanity();
3760
3761 Obtain the number of named and unique objects allocated in the segment:
3762
3763 [c++]
3764
3765 managed_shm.get_num_named_objects();
3766 managed_shm.get_num_unique_objects();
3767
3768 [endsect]
3769
3770 [section:growing_managed_memory Growing managed segments]
3771
Once a managed segment is created, it can't be grown. The limitation
is not easily solvable: every process attached to the managed segment would need to
be stopped, notified of the new size, and would need to remap the managed segment
and continue working. This is nearly impossible to achieve with a user-level library
without the help of the operating system kernel.
3777
3778 On the other hand, [*Boost.Interprocess] offers off-line segment growing. What does this
3779 mean? That the segment can be grown if no process has mapped the managed segment. If the
3780 application can find a moment where no process is attached it can grow or shrink to fit
3781 the managed segment.
3782
3783 Here we have an example showing how to grow and shrink to fit
3784 [classref boost::interprocess::managed_shared_memory managed_shared_memory]:
3785
3786 [import ../example/doc_managed_grow.cpp]
3787 [doc_managed_grow]
3788
3789 [classref boost::interprocess::managed_mapped_file managed_mapped_file] also
3790 offers a similar function to grow or shrink_to_fit the managed file.
3791 Please, remember that [*no process should be modifying the file/shared memory while
3792 the growing/shrinking process is performed]. Otherwise, the managed segment will be
3793 corrupted.
3794
3795 [endsect]
3796
3797 [section:managed_memory_segment_advanced_index_functions Advanced index functions]
3798
3799 As mentioned, the managed segment stores the information about named and unique
3800 objects in two indexes. Depending on the type of those indexes, the index must
3801 reallocate some auxiliary structures when new named or unique allocations are made.
3802 For some indexes, if the user knows how many named or unique objects are going to
3803 be created it's possible to preallocate some structures to obtain much better
3804 performance. (If the index is an ordered vector it can preallocate memory to avoid
3805 reallocations. If the index is a hash structure it can preallocate the bucket array).
3806
3807 The following functions reserve memory to make the subsequent allocation of
3808 named or unique objects more efficient. These functions are only useful for
3809 pseudo-intrusive or non-node indexes (like `flat_map_index`,
3810 `iunordered_set_index`). These functions have no effect with the
3811 default index (`iset_index`) or other indexes (`map_index`):
3812
3813 [c++]
3814
3815 managed_shm.reserve_named_objects(1000);
3816 managed_shm.reserve_unique_objects(1000);
3817
3823 Managed memory segments also offer the possibility to iterate through
3824 constructed named and unique objects for debugging purposes. [*Caution: this
3825 iteration is not thread-safe] so the user should make sure that no other
3826 thread is manipulating named or unique indexes (creating, erasing,
3827 reserving...) in the segment. Other operations not involving indexes can
3828 be concurrently executed (raw memory allocation/deallocations, for example).
3829
3830 The following functions return constant iterators to the range of named and
3831 unique objects stored in the managed segment. Depending on the index type,
3832 iterators might be invalidated after a named or unique
3833 creation/erasure/reserve operation:
3834
3835 [c++]
3836
3837 typedef managed_shared_memory::const_named_iterator const_named_it;
3838 const_named_it named_beg = managed_shm.named_begin();
3839 const_named_it named_end = managed_shm.named_end();
3840
3841 typedef managed_shared_memory::const_unique_iterator const_unique_it;
3842 const_unique_it unique_beg = managed_shm.unique_begin();
3843 const_unique_it unique_end = managed_shm.unique_end();
3844
3845 for(; named_beg != named_end; ++named_beg){
3846 //A pointer to the name of the named object
3847 const managed_shared_memory::char_type *name = named_beg->name();
3848 //The length of the name
3849 std::size_t name_len = named_beg->name_length();
3850 //A constant void pointer to the named object
3851 const void *value = named_beg->value();
3852 }
3853
3854 for(; unique_beg != unique_end; ++unique_beg){
3855 //The typeid(T).name() of the unique object
3856 const char *typeid_name = unique_beg->name();
3857 //The length of the name
3858 std::size_t name_len = unique_beg->name_length();
3859 //A constant void pointer to the unique object
3860 const void *value = unique_beg->value();
3861 }
3862
3863 [endsect]
3864
3865 [section:allocate_aligned Allocating aligned memory portions]
3866
3867 Sometimes it's interesting to be able to allocate aligned fragments of memory
3868 because of some hardware or software restrictions. Sometimes, having
3869 aligned memory is a feature that can be used to improve several
3870 memory algorithms.
3871
3872 This allocation is similar to the previously shown raw memory allocation but
3873 it takes an additional parameter specifying the alignment. There is
a restriction for the alignment: [*the alignment must be a power of two].
3875
If a user wants to allocate many aligned blocks (for example aligned to 128 bytes),
the size that minimizes the memory waste is a value that is nearly a multiple
of that alignment (for example 2*128 - some bytes). The reason for this is that
every memory allocation usually needs some additional metadata in the first
bytes of the allocated buffer. If the user can know the value of "some bytes"
and if the first bytes of a free block of memory are used to fulfill the aligned
allocation, the rest of the block can also be left aligned and ready for the next
aligned allocation. Note that requesting [*a size that is a multiple of the alignment is not optimal]
because it leaves the next block of memory unaligned due to the needed metadata.
3885
3886 Once the programmer knows the size of the payload of every memory allocation,
3887 he can request a size that will be optimal to allocate aligned chunks
3888 of memory maximizing both the size of the
3889 request [*and] the possibilities of future aligned allocations. This information
3890 is stored in the PayloadPerAllocation constant of managed memory segments.
3891
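For instance (a minimal sketch using the segment's `allocate_aligned` function, where
`managed_shm` is assumed to be an already constructed managed shared memory), a 128-byte
aligned request can discount that payload:

[c++]

   const std::size_t Alignment = 128;

   //Discount the per-allocation payload so that the next
   //free block also stays aligned
   std::size_t request = 2*Alignment - managed_shared_memory::PayloadPerAllocation;

   void *ptr = managed_shm.allocate_aligned(request, Alignment);

   //...

   managed_shm.deallocate(ptr);
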
3892 Here is a small example showing how aligned allocation is used:
3893
3894 [import ../example/doc_managed_aligned_allocation.cpp]
3895 [doc_managed_aligned_allocation]
3896
3897 [endsect]
3898
3899 [section:managed_memory_segment_multiple_allocations Multiple allocation functions]
3900
3901 [caution This feature is experimental, interface and ABI are unstable]
3902
3903 If an application needs to allocate a lot of memory buffers but it needs
3904 to deallocate them independently, the application is normally forced to loop
3905 calling `allocate()`. Managed memory segments offer an alternative function
3906 to pack several allocations in a single call obtaining memory buffers that:
3907
3908 * are packed contiguously in memory (which improves locality)
3909 * can be independently deallocated.
3910
3911 This allocation method is much faster
3912 than calling `allocate()` in a loop. The downside is that the segment
3913 must provide a contiguous memory segment big enough to hold all the allocations.
3914 Managed memory segments offer this functionality through `allocate_many()` functions.
3915 There are 2 types of `allocate_many` functions:
3916
3917 * Allocation of N buffers of memory with the same size.
* Allocation of N buffers of memory, each one of a different size.
3919
3920 [c++]
3921
3922 //!Allocates n_elements of elem_bytes bytes.
3923 //!Throws bad_alloc on failure. chain.size() is not increased on failure.
3924 void allocate_many(size_type elem_bytes, size_type n_elements, multiallocation_chain &chain);
3925
3926 //!Allocates n_elements, each one of element_lengths[i]*sizeof_element bytes.
3927 //!Throws bad_alloc on failure. chain.size() is not increased on failure.
3928 void allocate_many(const size_type *element_lengths, size_type n_elements, size_type sizeof_element, multiallocation_chain &chain);
3929
3930 //!Allocates n_elements of elem_bytes bytes.
3931 //!Non-throwing version. chain.size() is not increased on failure.
3932 void allocate_many(std::nothrow_t, size_type elem_bytes, size_type n_elements, multiallocation_chain &chain);
3933
3934 //!Allocates n_elements, each one of
3935 //!element_lengths[i]*sizeof_element bytes.
3936 //!Non-throwing version. chain.size() is not increased on failure.
3937 void allocate_many(std::nothrow_t, const size_type *elem_sizes, size_type n_elements, size_type sizeof_element, multiallocation_chain &chain);
3938
3939 //!Deallocates all elements contained in chain.
3940 //!Never throws.
3941 void deallocate_many(multiallocation_chain &chain);
3942
3943 Here is a small example showing all this functionality:
3944
3945 [import ../example/doc_managed_multiple_allocation.cpp]
3946 [doc_managed_multiple_allocation]
3947
3948 Allocating N buffers of the same size improves the performance of pools
3949 and node containers (for example STL-like lists): when inserting a range of
3950 forward iterators in a STL-like list, the insertion function can detect the
number of needed elements and allocate in a single call. The nodes can still
be deallocated independently.
3953
3954 Allocating N buffers of different sizes can be used to speed up allocation in
3955 cases where several objects must always be allocated at the same time but
3956 deallocated at different times. For example, a class might perform several initial
3957 allocations (some header data for a network packet, for example) in its
3958 constructor but also allocations of buffers that might be reallocated in the future
3959 (the data to be sent through the network). Instead of allocating all the data
3960 independently, the constructor might use `allocate_many()` to speed up the
initialization, but it can still deallocate and expand the memory of the
variable size element.
3963
3964 In general, `allocate_many` is useful with large values of N. Overuse
3965 of `allocate_many` can increase the effective memory usage,
3966 because it can't reuse existing non-contiguous memory fragments that
3967 might be available for some of the elements.
3968
3969 [endsect]
3970
3971 [section:managed_memory_segment_expand_in_place Expand in place memory allocation]
3972
3973 When programming some data structures such as vectors, memory reallocation becomes
3974 an important tool to improve performance. Managed memory segments offer an advanced
3975 reallocation function that offers:
3976
3977 * Forward expansion: An allocated buffer can be expanded so that the end of the buffer
3978 is moved further. New data can be written between the old end and the new end.
3979
3980 * Backwards expansion: An allocated buffer can be expanded so that the beginning of
3981 the buffer is moved backwards. New data can be written between the new beginning
3982 and the old beginning.
3983
3984 * Shrinking: An allocated buffer can be shrunk so that the end of the buffer
3985 is moved backwards. The memory between the new end and the old end can be reused
3986 for future allocations.
3987
The expansion can be combined with the allocation of a new buffer if the expansion
fails, obtaining a function with "expand; if that fails, allocate a new buffer" semantics.
3990
Apart from these features, the function always returns the real size of the
allocated buffer because, many times, due to alignment issues the allocated
buffer is a bit bigger than the requested size. Thus, the programmer can maximize
memory use using `allocation_command`.
3995
3996 Here is the declaration of the function:
3997
3998 [c++]
3999
4000 enum boost::interprocess::allocation_type
4001 {
4002 //Bitwise OR (|) combinable values
4003 boost::interprocess::allocate_new = ...,
4004 boost::interprocess::expand_fwd = ...,
4005 boost::interprocess::expand_bwd = ...,
4006 boost::interprocess::shrink_in_place = ...,
4007 boost::interprocess::nothrow_allocation = ...
4008 };
4009
4010
4011 template<class T>
4012 std::pair<T *, bool>
4013 allocation_command( boost::interprocess::allocation_type command
4014 , std::size_t limit_size
4015 , size_type &prefer_in_recvd_out_size
4016 , T *&reuse_ptr);
4017
4018
4019 [*Preconditions for the function]:
4020
4021 * If the parameter command contains the value `boost::interprocess::shrink_in_place` it can't
4022 contain any of these values: `boost::interprocess::expand_fwd`, `boost::interprocess::expand_bwd`.
4023
4024 * If the parameter command contains `boost::interprocess::expand_fwd` or `boost::interprocess::expand_bwd`, the parameter
4025 `reuse_ptr` must be non-null and returned by a previous allocation function.
4026
4027 * If the parameter command contains the value `boost::interprocess::shrink_in_place`, the parameter
4028 `limit_size` must be equal to or greater than the parameter `preferred_size`.
4029
4030 * If the parameter `command` contains any of these values: `boost::interprocess::expand_fwd` or `boost::interprocess::expand_bwd`,
4031 the parameter `limit_size` must be equal to or less than the parameter `preferred_size`.
4032
4033 [*The effects of this function are:]
4034
4035 * If the parameter command contains the value `boost::interprocess::shrink_in_place`, the function
4036 will try to reduce the size of the memory block referenced by pointer `reuse_ptr`
4037 to the value `preferred_size` moving only the end of the block.
4038 If it's not possible, it will try to reduce the size of the memory block as
4039 much as possible as long as this results in `size(p) <= limit_size`. Success
4040 is reported only if this results in `preferred_size <= size(p)` and `size(p) <= limit_size`.
4041
4042 * If the parameter `command` only contains the value `boost::interprocess::expand_fwd` (with optional
4043 additional `boost::interprocess::nothrow_allocation`), the allocator will try to increase the size of the
4044 memory block referenced by pointer reuse moving only the end of the block to the
4045 value `preferred_size`. If it's not possible, it will try to increase the size
4046 of the memory block as much as possible as long as this results in
4047 `size(p) >= limit_size`. Success is reported only if this results in `limit_size <= size(p)`.
4048
4049 * If the parameter `command` only contains the value `boost::interprocess::expand_bwd` (with optional
4050 additional `boost::interprocess::nothrow_allocation`), the allocator will try to increase the size of
4051 the memory block referenced by pointer `reuse_ptr` only moving the start of the
4052 block to a returned new position `new_ptr`. If it's not possible, it will try to
4053 move the start of the block as much as possible as long as this results in
4054 `size(new_ptr) >= limit_size`. Success is reported only if this results in
4055 `limit_size <= size(new_ptr)`.
4056
4057 * If the parameter `command` only contains the value `boost::interprocess::allocate_new` (with optional
4058 additional `boost::interprocess::nothrow_allocation`), the allocator will try to allocate memory for
4059 `preferred_size` objects. If it's not possible it will try to allocate memory for
4060 at least `limit_size` objects.
4061
4062 * If the parameter `command` only contains a combination of `boost::interprocess::expand_fwd` and
4063 `boost::interprocess::allocate_new` (with optional additional `boost::interprocess::nothrow_allocation`), the allocator will
4064 try forward expansion first. If this fails, it will try a new allocation.
4065
4066 * If the parameter `command` only contains a combination of `boost::interprocess::expand_bwd` and
4067 `boost::interprocess::allocate_new` (with optional additional `boost::interprocess::nothrow_allocation`), the allocator will
4068 try first to obtain `preferred_size` objects using both methods if necessary.
4069 If this fails, it will try to obtain `limit_size` objects using both methods if
4070 necessary.
4071
4072 * If the parameter `command` only contains a combination of `boost::interprocess::expand_fwd` and
4073 `boost::interprocess::expand_bwd` (with optional additional `boost::interprocess::nothrow_allocation`), the allocator will
4074 try first forward expansion. If this fails it will try to obtain preferred_size
4075 objects using backwards expansion or a combination of forward and backwards expansion.
4076 If this fails, it will try to obtain `limit_size` objects using both methods if
4077 necessary.
4078
4079 * If the parameter `command` only contains a combination of `boost::interprocess::allocate_new`,
4080 `boost::interprocess::expand_fwd` and `boost::interprocess::expand_bwd` (with optional additional `boost::interprocess::nothrow_allocation`),
4081 the allocator will try forward expansion first. If this fails it will try to obtain
4082 `preferred_size` objects using a new allocation, backwards expansion or a combination of
4083 forward and backwards expansion. If this fails, it will try to obtain `limit_size`
4084 objects using the same methods.
4085
4086 * The allocator always writes the size of the expanded/allocated/shrunk memory block
4087 in the received size parameter (`prefer_in_recvd_out_size` in the declaration above). On failure
4088 the allocator writes in it a `limit_size` value that might succeed in a new call.
4089
4090 [*Throws an exception if two conditions are met:]
4091
4092 * The allocator is unable to allocate/expand/shrink the memory or there is an
4093 error in the preconditions.
4094
4095 * The parameter command does not contain `boost::interprocess::nothrow_allocation`.
4096
4097 [*This function returns:]
4098
4099 * The address of the allocated memory or the new address of the expanded memory
4100 as the first member of the pair. If the parameter command contains
4101 `boost::interprocess::nothrow_allocation` the first member will be 0
4102 if the allocation/expansion fails or there is an error in preconditions.
4103
4104 * The second member of the pair will be false if the memory has been allocated,
4105 true if the memory has been expanded. If the first member is 0, the second member
4106 has an undefined value.
4107
4108 [*Notes:]
4109
4110 * If the user chooses `char` as template argument the returned buffer will
4111 be suitably aligned to hold any type.
4112 * If the user chooses `char` as template argument and a backwards expansion is
4113 performed, although properly aligned, the returned buffer might not be
4114 suitable because the distance between the new beginning and the old beginning
4115 might not be a multiple of the size of the type the user wants to construct, since due to internal
4116 restrictions the expansion can be slightly bigger than the requested bytes. [*When
4117 performing backwards expansion, if you have already constructed objects in the
4118 old buffer, make sure to specify the type correctly.]
4119
4120 Here is a small example that shows the use of `allocation_command`:
4121
4122 [import ../example/doc_managed_allocation_command.cpp]
4123 [doc_managed_allocation_command]
4124
4125 `allocation_command` is a very powerful function that can lead to important
4126 performance gains. It's especially useful when programming vector-like data
4127 structures where the programmer can minimize both the number of allocation
4128 requests and the memory waste.
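
As a complement to the example above, the following minimal sketch (not one of the library's bundled examples; the segment name and sizes are illustrative) shows the typical "expand in place, otherwise allocate" pattern that a vector-like structure could use with `expand_fwd | allocate_new`:

[c++]

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <utility>   //std::pair

    using namespace boost::interprocess;

    int main()
    {
       //Illustrative segment name
       shared_memory_object::remove("MyGrowableShm");
       managed_shared_memory segment(create_only, "MyGrowableShm", 65536);
       typedef managed_shared_memory::size_type size_type;

       //Initial allocation: at least 50 ints, 100 if possible (throwing version)
       size_type received = 100;
       int *reuse = 0;
       int *ptr = segment.allocation_command<int>
          (boost::interprocess::allocate_new, 50, received, reuse).first;

       //Try to expand the buffer in place to at least 200 ints (500 preferred);
       //if in-place expansion is not possible, fall back to a fresh allocation
       received = 500;
       reuse = ptr;
       std::pair<int*, bool> ret = segment.allocation_command<int>
          ( boost::interprocess::expand_fwd | boost::interprocess::allocate_new
          | boost::interprocess::nothrow_allocation, 200, received, reuse);
       if(ret.first && !ret.second){
          //Memory was newly allocated, not expanded: copy the old contents
          //to ret.first here, then free the old buffer
          segment.deallocate(ptr);
       }
       if(ret.first)
          ptr = ret.first;

       segment.deallocate(ptr);
       shared_memory_object::remove("MyGrowableShm");
       return 0;
    }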
4129
4130 [endsect]
4131
4132 [section:copy_on_write_read_only Opening managed shared memory and mapped files with Copy On Write or Read Only modes]
4133
4134 When mapping a memory segment based on shared memory or files, there is an option to
4135 open them using the [*open_copy_on_write] option. This option is similar to `open_only` but
4136 every change the programmer makes to this managed segment is kept private to this process
4137 and is not written back to the underlying device (shared memory or file).
4138
4139 The underlying shared memory or file is opened as read-only so several processes can
4140 share an initial managed segment and make private changes to it. If many processes
4141 open a managed segment in copy-on-write mode, the unmodified pages of the managed
4142 segment will be shared between all those processes, with considerable memory savings.
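
As a minimal sketch (the segment must already exist and its name is illustrative), a process can attach to such a segment so that all of its changes stay local:

[c++]

    //Attach to an existing managed segment (illustrative name); every
    //modification made through this segment stays private to this process
    managed_shared_memory cow_segment(open_copy_on_write, "MySharedDb");

    //New objects can be constructed and existing ones modified, but the
    //underlying shared memory object is never changed
    cow_segment.construct<int>("LocalOnlyInt")(42);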
4143
4144 Opening managed shared memory and mapped files with [*open_read_only] maps the
4145 underlying device in memory with [*read-only] attributes. This means that any attempt
4146 to write to that memory, either creating objects or locking any mutex, might result in a
4147 page-fault error (and thus, program termination) from the OS. Read-only mode opens
4148 the underlying device (shared memory, file...) in read-only mode and
4149 can result in considerable memory savings if several processes just want to process
4150 a managed memory segment without modifying it. Read-only mode operations are limited:
4151
4152 * Read-only mode must be used only from managed classes. If the programmer obtains
4153 the segment manager and tries to use it directly it might result in an access violation.
4154 The reason for this is that the segment manager is placed in the underlying device
4155 and knows nothing about the mode in which it has been mapped in memory.
4156
4157 * Only const member functions from managed segments should be used.
4158
4159 * Additionally, the `find<>` member function avoids using internal locks and can be
4160 used to look for named and unique objects.
4161
4162 Here is an example that shows the use of these two open modes:
4163
4164 [import ../example/doc_managed_copy_on_write.cpp]
4165 [doc_managed_copy_on_write]
4166
4167 [endsect]
4168
4169 [endsect]
4170
4171 [section:managed_heap_memory_external_buffer Managed Heap Memory And Managed External Buffer]
4172
4173 [*Boost.Interprocess] offers managed shared memory between processes using
4174 `managed_shared_memory` or `managed_mapped_file`. Two processes just map the same
4175 memory-mappable resource and read from and write to that object.
4176
4177 Many times, we don't want to use that shared memory approach and we prefer
4178 to send serialized data through the network, local sockets or message queues. Serialization
4179 can be done through [*Boost.Serialization] or a similar library. However, if two processes
4180 share the same ABI (application binary interface), we could use the same object and
4181 container construction capabilities of `managed_shared_memory` or `managed_heap_memory`
4182 to build all the information in a single buffer that will be sent, for example,
4183 through message queues. The receiver would just copy the data to a local buffer, and it
4184 could read or modify it directly without deserializing the data. This approach can be
4185 much more efficient than a complex serialization mechanism.
4186
4187 Applications for [*Boost.Interprocess] services using non-shared memory buffers:
4188
4189 * Create and use STL compatible containers and allocators,
4190 in systems where dynamic memory is not recommendable.
4191
4192 * Build complex, easily serializable databases in a single buffer:
4193
4194 * To share data between threads
4195
4196 * To save and load information from/to files.
4197
4198 * Duplicate information (containers, allocators, etc...) just copying the contents of
4199 one buffer to another one.
4200
4201 * Send complex information and objects/databases using serial/inter-process/network
4202 communications.
4203
4204 To help with this management, [*Boost.Interprocess] provides two useful classes,
4205 `basic_managed_heap_memory` and `basic_managed_external_buffer`:
4206
4207 [section:managed_external_buffer Managed External Buffer: Constructing all Boost.Interprocess objects in a user provided buffer]
4208
4209 Sometimes, the user wants to create simple objects, STL compatible containers, STL compatible
4210 strings and more, all in a single buffer. This buffer could be a big static buffer,
4211 a memory-mapped auxiliary device or any other user buffer.
4212
4213 This would allow easy serialization: we'll just need to copy the buffer to duplicate
4214 all the objects created in the original buffer, including complex objects like
4215 maps, lists.... [*Boost.Interprocess] offers managed memory segment classes to handle user
4216 provided buffers that allow the same functionality as shared memory classes:
4217
4218 [c++]
4219
4220 //Named object creation managed memory segment
4221 //All objects are constructed in a user provided buffer
4222 template <
4223 class CharType,
4224 class MemoryAlgorithm,
4225 template<class IndexConfig> class IndexType
4226 >
4227 class basic_managed_external_buffer;
4228
4229 //Named object creation managed memory segment
4230 //All objects are constructed in a user provided buffer
4231 // Names are c-strings,
4232 // Default memory management algorithm
4233 // (rbtree_best_fit with no mutexes and relative pointers)
4234 // Name-object mappings are stored in the default index type (flat_map)
4235 typedef basic_managed_external_buffer <
4236 char,
4237 rbtree_best_fit<null_mutex_family, offset_ptr<void> >,
4238 flat_map_index
4239 > managed_external_buffer;
4240
4241 //Named object creation managed memory segment
4242 //All objects are constructed in a user provided buffer
4243 // Names are wide-strings,
4244 // Default memory management algorithm
4245 // (rbtree_best_fit with no mutexes and relative pointers)
4246 // Name-object mappings are stored in the default index type (flat_map)
4247 typedef basic_managed_external_buffer<
4248 wchar_t,
4249 rbtree_best_fit<null_mutex_family, offset_ptr<void> >,
4250 flat_map_index
4251 > wmanaged_external_buffer;
4252
4253 To use a managed external buffer, you must include the following header:
4254
4255 [c++]
4256
4257 #include <boost/interprocess/managed_external_buffer.hpp>
4258
4259 Let's see an example of the use of managed_external_buffer:
4260
4261 [import ../example/doc_managed_external_buffer.cpp]
4262 [doc_managed_external_buffer]
4263
4264 [*Boost.Interprocess] STL compatible allocators can also be used to place STL
4265 compatible containers in the user segment.
4266
4267 [classref boost::interprocess::basic_managed_external_buffer basic_managed_external_buffer] can
4268 also be useful to build small databases for embedded systems, limiting the size of
4269 the used memory to a predefined memory chunk, instead of letting the database
4270 fragment the heap memory.
4271
4272 [endsect]
4273
4274 [section:managed_heap_memory Managed Heap Memory: Boost.Interprocess machinery in heap memory]
4275
4276 The use of heap memory (new/delete) to obtain a buffer where the user wants to store all
4277 his data is very common, so [*Boost.Interprocess] provides some specialized
4278 classes that work exclusively with heap memory.
4279
4280 These are the classes:
4281
4282 [c++]
4283
4284 //Named object creation managed memory segment
4285 //All objects are constructed in a single buffer allocated via new[]
4286 template <
4287 class CharType,
4288 class MemoryAlgorithm,
4289 template<class IndexConfig> class IndexType
4290 >
4291 class basic_managed_heap_memory;
4292
4293 //Named object creation managed memory segment
4294 //All objects are constructed in a single buffer allocated via new[]
4295 // Names are c-strings,
4296 // Default memory management algorithm
4297 // (rbtree_best_fit with no mutexes and relative pointers)
4298 // Name-object mappings are stored in the default index type (flat_map)
4299 typedef basic_managed_heap_memory <
4300 char,
4301 rbtree_best_fit<null_mutex_family>,
4302 flat_map_index
4303 > managed_heap_memory;
4304
4305 //Named object creation managed memory segment
4306 //All objects are constructed in a single buffer allocated via new[]
4307 // Names are wide-strings,
4308 // Default memory management algorithm
4309 // (rbtree_best_fit with no mutexes and relative pointers)
4310 // Name-object mappings are stored in the default index type (flat_map)
4311 typedef basic_managed_heap_memory<
4312 wchar_t,
4313 rbtree_best_fit<null_mutex_family>,
4314 flat_map_index
4315 > wmanaged_heap_memory;
4316
4317 To use a managed heap memory, you must include the following header:
4318
4319 [c++]
4320
4321 #include <boost/interprocess/managed_heap_memory.hpp>
4322
4323 The use is exactly the same as
4324 [classref boost::interprocess::basic_managed_external_buffer basic_managed_external_buffer],
4325 except that memory is created by
4326 the managed memory segment itself using dynamic (new/delete) memory.
4327
4328 [*basic_managed_heap_memory] also offers a `grow(std::size_t extra_bytes)` function that
4329 tries to resize internal heap memory so that we have room for more objects.
4330 But *be careful*, if memory is reallocated, the old buffer will be copied into
4331 the new one so all the objects will be binary-copied to the new buffer.
4332 To be able to use this function, all pointers constructed in the heap buffer that
4333 point to objects in the heap buffer must be relative pointers (for example `offset_ptr`).
4334 Otherwise, the result is undefined. Here is an example:
4335
4336 [import ../example/doc_managed_heap_memory.cpp]
4337 [doc_managed_heap_memory]
4338
4339 [endsect]
4340
4341 [section:managed_heap_memory_external_buffer_diff Differences between managed memory segments]
4342
4343 All managed memory segments have similar capabilities
4344 (memory allocation inside the memory segment, named object construction...),
4345 but there are some remarkable differences between [*managed_shared_memory] and
4346 [*managed_mapped_file] on one side, and [*managed_heap_memory] and [*managed_external_buffer] on the other.
4347
4348 * Default specializations of managed shared memory and mapped file use process-shared
4349 mutexes. Heap memory and external buffer have no internal synchronization by default.
4350 The cause is that the first two are meant to be shared between processes (although
4351 memory mapped files could be used just to obtain a persistent object database for a
4352 process) whereas the last two are meant to be used inside one process to construct
4353 a serialized named object database that can be sent through serial interprocess
4354 communications (like message queues, localhost network...).
4355
4356 * The first two create a system-global object (a shared memory object or a file) shared
4357 by several processes, whereas the last two are objects that don't create system-wide
4358 resources.
4359
4360 [endsect]
4361
4362 [section:shared_message_queue_ex Example: Serializing a database through the message queue]
4363
4364 To see the utility of managed heap memory and managed external buffer classes,
4365 the following example shows how a message queue can be used to serialize a whole
4366 database constructed in a memory buffer using [*Boost.Interprocess], send the database
4367 through a message queue and duplicate it in another buffer:
4368
4369 [import ../test/message_queue_test.cpp]
4370 [message_queue_test_test_serialize_db]
4371
4372 [endsect]
4373
4374 [endsect]
4375
4376 [endsect]
4377
4378 [section:allocators_containers Allocators, containers and memory allocation algorithms]
4379
4380 [section:allocator_introduction Introduction to Interprocess allocators]
4381
4382 As seen, [*Boost.Interprocess] offers raw memory allocation and object construction
4383 using managed memory segments (managed shared memory, managed mapped files...) and
4384 one of the first user requests is the use of containers in managed shared memories.
4385 To achieve this, [*Boost.Interprocess] makes use of managed memory segment's
4386 memory allocation algorithms to build several memory allocation schemes, including
4387 general purpose and node allocators.
4388
4389 [*Boost.Interprocess] STL compatible allocators are configurable via template parameters.
4390 Allocators define their `pointer` typedef based on the `void_pointer` typedef of the segment manager
4391 passed as template argument. When this `segment_manager::void_pointer` is a relative pointer,
4392 (for example, `offset_ptr<void>`) the user can place these allocators in
4393 memory mapped in different base addresses in several processes.
4394
4395 [section:allocator_properties Properties of [*Boost.Interprocess] allocators]
4396
4397 Container allocators are normally default-constructible because they are stateless.
4398 `std::allocator` and [*Boost.Pool's] `boost::pool_allocator`/`boost::fast_pool_allocator`
4399 are examples of default-constructible allocators.
4400
4401 On the other hand, [*Boost.Interprocess] allocators need to allocate memory from a
4402 concrete memory segment and not from a system-wide memory source (like the heap).
4403 [*Boost.Interprocess] allocators are [*stateful], which means that they must be
4404 configured to tell them where the shared memory or the memory mapped file is.
4405
4406 This information is transmitted at compile-time and run-time: The allocators
4407 receive a template parameter defining the type of the segment manager and
4408 their constructors receive a pointer to the segment manager of the managed memory
4409 segment where the user wants to allocate the values.
4410
4411 [*Boost.Interprocess] allocators have [*no default-constructors] and containers
4412 must be explicitly initialized with a configured allocator:
4413
4414 [c++]
4415
4416 //The allocators must be templatized with the segment manager type
4417 typedef any_interprocess_allocator
4418 <int, managed_shared_memory::segment_manager, ...> Allocator;
4419
4420 //The allocator must be constructed with a pointer to the segment manager
4421 Allocator alloc_instance (segment.get_segment_manager(), ...);
4422
4423 //Containers must be initialized with a configured allocator
4424 typedef my_list<int, Allocator> MyIntList;
4425 MyIntList mylist(alloc_instance);
4426
4427 //This would lead to a compilation error, because
4428 //the allocator has no default constructor
4429 //MyIntList mylist;
4430
4431 [*Boost.Interprocess] allocators also have a `get_segment_manager()` function
4432 that returns the underlying segment manager that they have received in the
4433 constructor:
4434
4435 [c++]
4436
4437 Allocator::segment_manager *s = alloc_instance.get_segment_manager();
4438 AnotherType *a = s->construct<AnotherType>(anonymous_instance)(/*Parameters*/);
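
To make the pseudo-declarations above concrete, here is a minimal, self-contained sketch (the segment and object names are illustrative, not taken from the library examples) using the real [classref boost::interprocess::allocator allocator] class with an interprocess vector:

[c++]

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/allocators/allocator.hpp>
    #include <boost/interprocess/containers/vector.hpp>

    using namespace boost::interprocess;

    //The allocator is templatized with the value type and the segment manager
    typedef allocator<int, managed_shared_memory::segment_manager> ShmIntAllocator;
    typedef vector<int, ShmIntAllocator> ShmIntVector;

    int main()
    {
       shared_memory_object::remove("MyAllocatorShm");
       managed_shared_memory segment(create_only, "MyAllocatorShm", 65536);

       //The allocator is constructed with a pointer to the segment manager...
       ShmIntAllocator alloc_instance(segment.get_segment_manager());

       //...and the container is constructed in the segment with that allocator
       ShmIntVector *v = segment.construct<ShmIntVector>("MyVector")(alloc_instance);
       v->push_back(7);

       segment.destroy<ShmIntVector>("MyVector");
       shared_memory_object::remove("MyAllocatorShm");
       return 0;
    }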
4439
4440 [endsect]
4441
4442 [section:allocator_swapping Swapping Boost.Interprocess allocators]
4443
4444 When swapping STL containers, there is an active discussion on what to do with
4445 the allocators. Some STL implementations, for example Dinkumware from Visual .NET 2003,
4446 perform a deep swap of the whole container through a temporary when allocators are not equal.
4447 The [@http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2004/n1599.html proposed resolution]
4448 to container swapping is that allocators should be swapped in a non-throwing way.
4449
4450 Unfortunately, this approach is not valid with shared memory. Using heap allocators, if
4451 Group1 of node allocators share a common segregated storage, and Group2 share another common
4452 segregated storage, a simple pointer swapping is needed to swap an allocator of Group1 and another
4453 allocator of Group2. But when the user wants to swap two shared memory allocators, each one
4454 placed in a different shared memory segment, this is not possible. As generally shared memory
4455 is mapped in different addresses in each process, a pointer placed in one segment can't point
4456 to any object placed in another shared memory segment, since in each process, the distance between
4457 the segments is different. However, if both shared memory allocators are in the same segment,
4458 a non-throwing swap is possible, just like heap allocators.
4459
4460 Until a final resolution is achieved, [*Boost.Interprocess] allocators implement a non-throwing
4461 swap function that swaps internal pointers. If an allocator placed in a shared memory segment is
4462 swapped with another placed in a different shared memory segment, the result is undefined, but a
4463 crash is quite likely.
4464
4465 [endsect]
4466
4467 [section:allocator allocator: A general purpose allocator for managed memory segments]
4468
4469 The [classref boost::interprocess::allocator allocator] class defines an allocator class that
4470 uses the managed memory segment's algorithm to allocate and deallocate memory. This is
4471 achieved through the [*segment manager] of the managed memory segment. This allocator
4472 is the equivalent for managed memory segments of the standard `std::allocator`.
4473 [classref boost::interprocess::allocator allocator]
4474 is templatized with the allocated type, and the segment manager.
4475
4476 [*Equality:] Two [classref boost::interprocess::allocator allocator] instances
4477 constructed with the same segment manager compare equal. If an instance is
4478 created using copy constructor, that instance compares equal with the original one.
4479
4480 [*Allocation thread-safety:] Allocation and deallocation are implemented as calls
4481 to the segment manager's allocation function so the allocator offers the same
4482 thread-safety as the segment manager.
4483
4484 To use [classref boost::interprocess::allocator allocator] you must include
4485 the following header:
4486
4487 [c++]
4488
4489 #include <boost/interprocess/allocators/allocator.hpp>
4490
4491
4492 [classref boost::interprocess::allocator allocator] has the following declaration:
4493
4494 [c++]
4495
4496 namespace boost {
4497 namespace interprocess {
4498
4499 template<class T, class SegmentManager>
4500 class allocator;
4501
4502 } //namespace interprocess {
4503 } //namespace boost {
4504
4505 The allocator just provides the needed typedefs and forwards all allocation
4506 and deallocation requests to the segment manager passed in the constructor, just
4507 like `std::allocator` forwards the requests to `operator new[]`.
4508
4509 Using [classref boost::interprocess::allocator allocator] is straightforward:
4510
4511 [import ../example/doc_allocator.cpp]
4512 [doc_allocator]
4513
4514 [endsect]
4515
4516 [endsect]
4517
4518 [section:stl_allocators_segregated_storage Segregated storage node allocators]
4519
4520 Variable size memory algorithms waste
4521 some space in management information for each allocation. Sometimes,
4522 usually for small objects, this is not acceptable. Memory algorithms can
4523 also fragment the managed memory segment under some allocation and
4524 deallocation schemes, reducing their performance. When allocating
4525 many objects of the same type, a simple segregated storage becomes
4526 a fast and space-friendly allocator, as explained in the
4527 [@http://www.boost.org/libs/pool/ [*Boost.Pool]] library.
4528
4529 Segregated storage node
4530 allocators allocate large memory chunks from a general purpose memory
4531 allocator and divide that chunk into several nodes. No bookkeeping information
4532 is stored in the nodes to achieve minimal memory waste: free nodes are linked
4533 using a pointer constructed in the memory of the node.
4534
4535 [*Boost.Interprocess]
4536 offers 3 allocators based on this segregated storage algorithm:
4537 [classref boost::interprocess::node_allocator node_allocator],
4538 [classref boost::interprocess::private_node_allocator private_node_allocator] and
4539 [classref boost::interprocess::cached_node_allocator cached_node_allocator].
4540
4541 To know the details of the implementation
4542 of the segregated storage pools, see the
4543 [link interprocess.architecture.allocators_containers.implementation_segregated_storage_pools Implementation of [*Boost.Interprocess] segregated storage pools]
4544 section.
4545
4546 [section:segregated_allocators_common Additional parameters and functions of segregated storage node allocators]
4547
4548 [classref boost::interprocess::node_allocator node_allocator],
4549 [classref boost::interprocess::private_node_allocator private_node_allocator] and
4550 [classref boost::interprocess::cached_node_allocator cached_node_allocator] implement
4551 the standard allocator interface and the functions explained in the
4552 [link interprocess.allocators_containers.allocator_introduction.allocator_properties Properties of Boost.Interprocess allocators].
4553
4554 All these allocators are templatized by 3 parameters:
4555
4556 * `class T`: The type to be allocated.
4557 * `class SegmentManager`: The type of the segment manager that will be passed in the constructor.
4558 * `std::size_t NodesPerChunk`: The number of nodes that a memory chunk will contain.
4559 This value will define the size of the memory the pool will request to the
4560 segment manager when the pool runs out of nodes. This parameter has a default value.
4561
4562 These allocators also offer the `deallocate_free_chunks()` function. This function will
4563 traverse all the memory chunks of the pool and will return to the managed memory segment
4564 the free chunks of memory. If this function is not used, deallocating the free chunks does
4565 not happen until the pool is destroyed, so the only way to return memory allocated
4566 by the pool to the segment before destroying the pool is to call this function manually.
4567 This function is quite time-consuming because it has quadratic complexity (O(N^2)).
4568
4569 [endsect]
4570
4571 [section:node_allocator node_allocator: A process-shared segregated storage]
4572
4573 For heap-memory node allocators (like [*Boost.Pool's] `boost::fast_pool_allocator`),
4574 usually a global, thread-shared singleton
4575 pool is used for each node size. This is not possible if you try to share
4576 a node allocator between processes. To achieve this sharing
4577 [classref boost::interprocess::node_allocator node_allocator]
4578 uses the segment manager's unique type allocation service
4579 (see [link interprocess.managed_memory_segments.managed_memory_segment_features.unique Unique instance construction] section).
4580
4581 In the initialization, a
4582 [classref boost::interprocess::node_allocator node_allocator]
4583 object searches for this unique object in
4584 the segment. If it is not present, it builds one. This way, all
4585 [classref boost::interprocess::node_allocator node_allocator]
4586 objects built inside a memory segment share a unique memory pool.
4587
4588 The common segregated storage is not only shared between node_allocators of the
4589 same type, but it is also shared between all node allocators that allocate objects
4590 of the same size, for example, [*node_allocator<uint32>] and [*node_allocator<float32>].
4591 This saves a lot of memory but also imposes a synchronization overhead for each
4592 node allocation.
4593
4594 The dynamically created common segregated storage
4595 integrates a reference count so that a
4596 [classref boost::interprocess::node_allocator node_allocator]
4597 can know if any other
4598 [classref boost::interprocess::node_allocator node_allocator]
4599 is attached to the same common segregated storage. When the last
4600 allocator attached to the pool is destroyed, the pool is destroyed.
4601
4602 [*Equality:] Two [classref boost::interprocess::node_allocator node_allocator] instances
4603 constructed with the same segment manager compare equal. If an instance is
4604 created using copy constructor, that instance compares equal with the original one.
4605
4606 [*Allocation thread-safety:] Allocation and deallocation are implemented as calls
4607 to the shared pool. The shared pool offers the same synchronization guarantees
4608 as the segment manager.
4609
4610 To use [classref boost::interprocess::node_allocator node_allocator],
4611 you must include the following header:
4612
4613 [c++]
4614
4615 #include <boost/interprocess/allocators/node_allocator.hpp>
4616
4617 [classref boost::interprocess::node_allocator node_allocator] has the following declaration:
4618
4619 [c++]
4620
4621 namespace boost {
4622 namespace interprocess {
4623
4624 template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
4625 class node_allocator;
4626
4627 } //namespace interprocess {
4628 } //namespace boost {
4629
4630 An example using [classref boost::interprocess::node_allocator node_allocator]:
4631
4632 [import ../example/doc_node_allocator.cpp]
4633 [doc_node_allocator]
4634
4635 [endsect]
4636
4637 [section:private_node_allocator private_node_allocator: a private segregated storage]
4638
4639 As said, the node_allocator shares a common segregated storage between
4640 node_allocators that allocate objects of the same size and this optimizes
4641 memory usage. However, it needs a unique/named object construction feature
4642 so that this sharing can be possible. It also
4643 imposes a synchronization overhead per node allocation because of this sharing.
4644 Sometimes, the unique object service is not available (for example, when
4645 building index types to implement the named allocation service itself) or the
4646 synchronization overhead is not acceptable. Many times the programmer wants to
4647 make sure that the pool is destroyed when the allocator is destroyed, to free
4648 the memory as soon as possible.
4649
4650 So [*private_node_allocator] uses the same segregated storage algorithm as `node_allocator`,
4651 but each [*private_node_allocator] has its own segregated storage pool. No synchronization
4652 is used when allocating nodes, so there is far less overhead for an operation
4653 that usually involves just a few pointer operations when allocating and
4654 deallocating a node.
4655
4656 [*Equality:] Two [classref boost::interprocess::private_node_allocator private_node_allocator]
4657 instances [*never] compare equal. Memory allocated with one allocator [*can't] be
4658 deallocated with another one.
4659
4660 [*Allocation thread-safety:] Allocation and deallocation are [*not] thread-safe.
4661
4662 To use [classref boost::interprocess::private_node_allocator private_node_allocator],
4663 you must include the following header:
4664
4665 [c++]
4666
4667 #include <boost/interprocess/allocators/private_node_allocator.hpp>
4668
4669 [classref boost::interprocess::private_node_allocator private_node_allocator]
4670 has the following declaration:
4671
4672 [c++]
4673
4674 namespace boost {
4675 namespace interprocess {
4676
4677 template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
4678 class private_node_allocator;
4679
4680 } //namespace interprocess {
4681 } //namespace boost {
4682
4683 An example using [classref boost::interprocess::private_node_allocator private_node_allocator]:
4684
4685 [import ../example/doc_private_node_allocator.cpp]
4686 [doc_private_node_allocator]
4687
4688 [endsect]
4689
4690 [section:cached_node_allocator cached_node_allocator: caching nodes to avoid overhead]
4691
4692 The total node sharing of [classref boost::interprocess::node_allocator node_allocator] can impose a high overhead for some
4693 applications and the minimal synchronization overhead of [classref boost::interprocess::private_node_allocator private_node_allocator]
4694 can impose an unacceptable memory waste for other applications.
4695
4696 To solve this, [*Boost.Interprocess] offers an allocator,
4697 [classref boost::interprocess::cached_node_allocator cached_node_allocator], that
4698 allocates nodes from the common pool but caches some of them privately so that following
4699 allocations have no synchronization overhead. When the cache is full, the allocator
4700 returns some cached nodes to the common pool, and those will be available to other
4701 allocators.
4702
4703 [*Equality:] Two [classref boost::interprocess::cached_node_allocator cached_node_allocator]
4704 instances constructed with the same segment manager compare equal. If an instance is
4705 created using copy constructor, that instance compares equal with the original one.
4706
4707 [*Allocation thread-safety:] Allocation and deallocation are [*not] thread-safe.
4708
4709 To use [classref boost::interprocess::cached_node_allocator cached_node_allocator],
4710 you must include the following header:
4711
4712 [c++]
4713
4714 #include <boost/interprocess/allocators/cached_node_allocator.hpp>
4715
4716 [classref boost::interprocess::cached_node_allocator cached_node_allocator]
4717 has the following declaration:
4718
4719 [c++]
4720
4721 namespace boost {
4722 namespace interprocess {
4723
4724 template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
4725 class cached_node_allocator;
4726
4727 } //namespace interprocess {
4728 } //namespace boost {
4729
4730 A [classref boost::interprocess::cached_node_allocator cached_node_allocator] instance
4731 and a [classref boost::interprocess::node_allocator node_allocator] instance
4732 share the same pool if both instances receive the same template parameters. This means
4733 that nodes returned to the shared pool by one of them can be reused by the other.
4734 Please note that this does not mean that both allocators compare equal; this is just
4735 information for programmers that want to maximize the use of the pool.
4736
4737 [classref boost::interprocess::cached_node_allocator cached_node_allocator] offers
4738 additional functions to control the cache (the cache can be controlled per instance):
4739
4740 * `void set_max_cached_nodes(std::size_t n)`: Sets the maximum cached nodes limit.
4741 If cached nodes reach the limit, some are returned to the shared pool.
4742
4743 * `std::size_t get_max_cached_nodes() const`: Returns the maximum cached nodes limit.
4744
4745 * `void deallocate_cache()`: Returns the cached nodes to the shared pool.
4746
4747 An example using [classref boost::interprocess::cached_node_allocator cached_node_allocator]:
4748
4749 [import ../example/doc_cached_node_allocator.cpp]
4750 [doc_cached_node_allocator]
4751
4752 [endsect]
4753
4754 [endsect]
4755
4756 [section:stl_allocators_adaptive Adaptive pool node allocators]
4757
4758 Node allocators based on the simple segregated storage algorithm are both
4759 space-efficient and fast, but they have a problem: they can only grow. Every allocated
4760 node avoids any payload to store additional data and that leads to the following limitation:
4761 when a node is deallocated, it's stored in a free list of nodes but the memory is not
4762 returned to the segment manager, so a deallocated
4763 node can only be reused by other containers using the same node pool.
4764
4765 This behaviour can be problematic if several containers use
4766 [classref boost::interprocess::node_allocator] to temporarily allocate a lot
4767 of objects but end up storing only a few of them: the node pool will be full of nodes
4768 that won't be reused, wasting memory from the segment.
4769
4770 Adaptive pool based allocators trade some space (the overhead can be as low as 1%)
4771 and performance (acceptable for many applications) with the ability to return free chunks
4772 of nodes to the memory segment, so that they can be used by any other container or managed
4773 object construction. To know the details of the implementation of
4774 of "adaptive pools" see the
4775 [link interprocess.architecture.allocators_containers.implementation_adaptive_pools Implementation of [*Boost.Intrusive] adaptive pools]
4776 section.
4777
4778 Like with segregated storage based node allocators, Boost.Interprocess offers
4779 3 new allocators: [classref boost::interprocess::adaptive_pool adaptive_pool],
4780 [classref boost::interprocess::private_adaptive_pool private_adaptive_pool],
4781 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool].
4782
4783 [section:adaptive_allocators_common Additional parameters and functions of adaptive pool node allocators]
4784
4785 [classref boost::interprocess::adaptive_pool adaptive_pool],
4786 [classref boost::interprocess::private_adaptive_pool private_adaptive_pool] and
4787 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool] implement
4788 the standard allocator interface and the functions explained in the
4789 [link interprocess.allocators_containers.allocator_introduction.allocator_properties Properties of Boost.Interprocess allocators].
4790
4791 All these allocators are templatized by 4 parameters:
4792
4793 * `class T`: The type to be allocated.
4794 * `class SegmentManager`: The type of the segment manager that will be passed in the constructor.
4795 * `std::size_t NodesPerChunk`: The number of nodes that a memory chunk will contain.
4796 This value will define the size of the memory the pool will request to the
4797 segment manager when the pool runs out of nodes. This parameter has a default value.
4798 * `std::size_t MaxFreeChunks`: The maximum number of free chunks that the pool
4799 will hold. If this limit is reached the pool returns the chunks to the segment manager.
4800 This parameter has a default value.
4801
4802 These allocators also offer the `deallocate_free_chunks()` function. This function will
4803 traverse all the memory chunks of the pool and will return to the managed memory segment
4804 the free chunks of memory. This function is much faster than for segregated storage
4805 allocators, because the adaptive pool algorithm offers constant-time access to free
4806 chunks.
4807
4808 [endsect]
4809
4810 [section:adaptive_pool adaptive_pool: a process-shared adaptive pool]
4811
4812 Just like with [classref boost::interprocess::node_allocator node_allocator],
4813 a global, process-shared pool is used for each node size. In the
4814 initialization, [classref boost::interprocess::adaptive_pool adaptive_pool]
4815 searches for the pool in the segment. If it is not present, it builds one.
4816 The adaptive pool is created using a unique name.
4817 The adaptive pool is also shared between
4818 all adaptive_pool allocators that allocate objects of the same size, for example,
4819 [*adaptive_pool<uint32>] and [*adaptive_pool<float32>].
4820
4821 The common adaptive pool is destroyed when all the allocators attached
4822 to the pool are destroyed.
4823
4824 [*Equality:] Two [classref boost::interprocess::adaptive_pool adaptive_pool] instances
4825 constructed with the same segment manager compare equal. If an instance is
4826 created using copy constructor, that instance compares equal with the original one.
4827
4828 [*Allocation thread-safety:] Allocation and deallocation are implemented as calls
4829 to the shared pool. The shared pool offers the same synchronization guarantees
4830 as the segment manager.
4831
4832 To use [classref boost::interprocess::adaptive_pool adaptive_pool],
4833 you must include the following header:
4834
4835 [c++]
4836
4837 #include <boost/interprocess/allocators/adaptive_pool.hpp>
4838
4839 [classref boost::interprocess::adaptive_pool adaptive_pool] has the following declaration:
4840
4841 [c++]
4842
4843 namespace boost {
4844 namespace interprocess {
4845
4846 template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeChunks = ...>
4847 class adaptive_pool;
4848
4849 } //namespace interprocess {
4850 } //namespace boost {
4851
4852 An example using [classref boost::interprocess::adaptive_pool adaptive_pool]:
4853
4854 [import ../example/doc_adaptive_pool.cpp]
4855 [doc_adaptive_pool]
4856
4857 [endsect]
4858
4859 [section:private_adaptive_pool private_adaptive_pool: a private adaptive pool]
4860
4861 Just like [classref boost::interprocess::private_node_allocator private_node_allocator]
4862 owns a private segregated storage pool,
4863 [classref boost::interprocess::private_adaptive_pool private_adaptive_pool] owns
4864 its own adaptive pool. If the user wants to avoid the excessive node allocation
4865 synchronization overhead in a container,
4866 [classref boost::interprocess::private_adaptive_pool private_adaptive_pool]
4867 is a good choice.
4868
4869 [*Equality:] Two [classref boost::interprocess::private_adaptive_pool private_adaptive_pool]
4870 instances [*never] compare equal. Memory allocated with one allocator [*can't] be
4871 deallocated with another one.
4872
4873 [*Allocation thread-safety:] Allocation and deallocation are [*not] thread-safe.
4874
4875 To use [classref boost::interprocess::private_adaptive_pool private_adaptive_pool],
4876 you must include the following header:
4877
4878 [c++]
4879
4880 #include <boost/interprocess/allocators/private_adaptive_pool.hpp>
4881
4882 [classref boost::interprocess::private_adaptive_pool private_adaptive_pool]
4883 has the following declaration:
4884
4885 [c++]
4886
4887 namespace boost {
4888 namespace interprocess {
4889
4890 template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeChunks = ...>
4891 class private_adaptive_pool;
4892
4893 } //namespace interprocess {
4894 } //namespace boost {
4895
4896 An example using [classref boost::interprocess::private_adaptive_pool private_adaptive_pool]:
4897
4898 [import ../example/doc_private_adaptive_pool.cpp]
4899 [doc_private_adaptive_pool]
4900
4901 [endsect]
4902
4903 [section:cached_adaptive_pool cached_adaptive_pool: Avoiding synchronization overhead]
4904
4905 Adaptive pools also have a cached version. This allocator caches
4906 some nodes to avoid the synchronization and bookkeeping overhead of the shared
4907 adaptive pool.
4908 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool]
4909 allocates nodes from the common adaptive pool but caches some of them privately so that following
4910 allocations have no synchronization overhead. When the cache is full, the allocator
4911 returns some cached nodes to the common pool, and those will be available to other
4912 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pools] or
4913 [classref boost::interprocess::adaptive_pool adaptive_pools] of the same managed segment.
4914
4915 [*Equality:] Two [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool]
4916 instances constructed with the same segment manager compare equal. If an instance is
4917 created using copy constructor, that instance compares equal with the original one.
4918
4919 [*Allocation thread-safety:] Allocation and deallocation are [*not] thread-safe.
4920
4921 To use [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool],
4922 you must include the following header:
4923
4924 [c++]
4925
4926 #include <boost/interprocess/allocators/cached_adaptive_pool.hpp>
4927
4928 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool]
4929 has the following declaration:
4930
4931 [c++]
4932
4933 namespace boost {
4934 namespace interprocess {
4935
4936 template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeNodes = ...>
4937 class cached_adaptive_pool;
4938
4939 } //namespace interprocess {
4940 } //namespace boost {
4941
4942 A [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool] instance
4943 and an [classref boost::interprocess::adaptive_pool adaptive_pool] instance
4944 share the same pool if both instances receive the same template parameters. This means
4945 that nodes returned to the shared pool by one of them can be reused by the other.
4946 Please note that this does not mean that both allocators compare equal; this is just
4947 information for programmers that want to maximize the use of the pool.
4948
4949 [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool] offers
4950 additional functions to control the cache (the cache can be controlled per instance):
4951
4952 * `void set_max_cached_nodes(std::size_t n)`: Sets the maximum cached nodes limit.
4953 If cached nodes reach the limit, some are returned to the shared pool.
4954
4955 * `std::size_t get_max_cached_nodes() const`: Returns the maximum cached nodes limit.
4956
4957 * `void deallocate_cache()`: Returns the cached nodes to the shared pool.
4958
4959 An example using [classref boost::interprocess::cached_adaptive_pool cached_adaptive_pool]:
4960
4961 [import ../example/doc_cached_adaptive_pool.cpp]
4962 [doc_cached_adaptive_pool]
4963
4964 [endsect]
4965
4966 [endsect]
4967
4968 [section:containers_explained Interprocess and containers in managed memory segments]
4969
4970 [section:stl_container_requirements Container requirements for Boost.Interprocess allocators]
4971
4972 [*Boost.Interprocess] STL compatible allocators offer an STL compatible allocator
4973 interface and if they define their internal *pointer* typedef as a relative pointer,
4974 they can be used to place STL containers in shared memory, memory mapped files or
4975 in a user defined memory segment.
4976
4977 However, as Scott Meyers mentions in his Effective STL
4978 book, Item 10, ['"Be aware of allocator conventions and
4979 restrictions"]:
4980
4981 * ['"the Standard explicitly allows library implementers
4982 to assume that every allocator's pointer typedef is
4983 a synonym for T*"]
4984
4985 * ['"the Standard says that an implementation of the STL is
4986 permitted to assume that all allocator objects of the
4987 same type are equivalent and always compare equal"]
4988
4989 Obviously, if an STL implementation ignores pointer typedefs,
4990 no smart pointer can be used as allocator::pointer. If an STL
4991 implementation assumes all allocator objects of the same
4992 type compare equal, it will assume that two allocators,
4993 each one allocating from a different memory pool,
4994 are equal, which is a complete disaster.
4995
4996 STL containers that we want to place in shared memory or memory
4997 mapped files with [*Boost.Interprocess] can't make any of these assumptions, so:
4998
4999 * STL containers may not assume that memory allocated with
5000 an allocator can be deallocated with other allocators of
5001 the same type. Two allocator objects must compare equal
5002 only if memory allocated with one object can be deallocated
5003 with the other one, and this can only be tested with
5004 operator==() at run-time.
5005
5006 * Containers' internal pointers should be of the type allocator::pointer
5007 and containers may not assume allocator::pointer is a raw pointer.
5008
5009 * All objects must be constructed-destroyed via
5010 allocator::construct and allocator::destroy functions.
5011
5012 [endsect]
5013
5014 [section:containers STL containers in managed memory segments]
5015
5016 Unfortunately, many STL implementations use raw pointers
5017 for internal data and ignore allocator pointer typedefs,
5018 and others suppose at some point that the allocator::pointer typedef
5019 is T*. This is because in practice,
5020 there was no need for allocators with a pointer typedef
5021 different from T* for pooled/node memory
5022 allocators.
5023
5024 Until STL implementations handle allocator::pointer typedefs
5025 in a generic way, [*Boost.Interprocess] offers the following classes:
5026
5027 * [*boost::interprocess::vector] is the implementation of `std::vector` ready
5028 to be used in managed memory segments like shared memory. To use it include:
5029
5030 [c++]
5031
5032 #include <boost/interprocess/containers/vector.hpp>
5033
5034 * [*boost::interprocess::deque] is the implementation of `std::deque` ready
5035 to be used in managed memory segments like shared memory. To use it include:
5036
5037 [c++]
5038
5039 #include <boost/interprocess/containers/deque.hpp>
5040
5041 * [classref boost::interprocess::list list] is the implementation of `std::list` ready
5042 to be used in managed memory segments like shared memory. To use it include:
5043
5044 [c++]
5045
5046 #include <boost/interprocess/containers/list.hpp>
5047
5048 * [classref boost::interprocess::slist slist] is the implementation of SGI's `slist` container (singly linked list) ready
5049 to be used in managed memory segments like shared memory. To use it include:
5050
5051 [c++]
5052
5053 #include <boost/interprocess/containers/slist.hpp>
5054
5055 * [classref boost::interprocess::set set]/
5056 [classref boost::interprocess::multiset multiset]/
5057 [classref boost::interprocess::map map]/
5058 [classref boost::interprocess::multimap multimap] family is the implementation of
5059 std::set/multiset/map/multimap family ready
5060 to be used in managed memory segments like shared memory. To use them include:
5061
5062 [c++]
5063
5064 #include <boost/interprocess/containers/set.hpp>
5065 #include <boost/interprocess/containers/map.hpp>
5066
5067 * [classref boost::interprocess::flat_set flat_set]/
5068 [classref boost::interprocess::flat_multiset flat_multiset]/
5069 [classref boost::interprocess::flat_map flat_map]/
5070 [classref boost::interprocess::flat_multimap flat_multimap] classes are the
5071 adaptation and extension of Andrei Alexandrescu's famous AssocVector class
5072 from Loki library, ready for the shared memory. These classes offer the same
5073 functionality as `std::set/multiset/map/multimap` implemented with an ordered vector,
5074 which has faster lookups than the standard ordered associative containers
5075 based on red-black trees, but slower insertions. To use it include:
5076
5077 [c++]
5078
5079 #include <boost/interprocess/containers/flat_set.hpp>
5080 #include <boost/interprocess/containers/flat_map.hpp>
5081
5082 * [classref boost::interprocess::basic_string basic_string]
5083 is the implementation of `std::basic_string` ready
5084 to be used in managed memory segments like shared memory.
5085 It's implemented using a vector-like contiguous storage, so
5086 it has fast c string conversion and can be used with the
5087 [link interprocess.streams.vectorstream vectorstream] iostream formatting classes.
5088 To use it include:
5089
5090 [c++]
5091
5092 #include <boost/interprocess/containers/string.hpp>
5093
5094 All these containers have the same default arguments as standard
5095 containers and they can be used with other, non [*Boost.Interprocess]
5096 allocators (std::allocator, or boost::pool_allocator, for example).
5097
5098 To place any of these containers in managed memory segments, we must
5099 define the allocator template parameter with a [*Boost.Interprocess] allocator
5100 so that the container allocates the values in the managed memory segment.
5101 To place the container itself in shared memory, we construct it
5102 in the managed memory segment just like any other object with [*Boost.Interprocess]:
5103
5104 [import ../example/doc_cont.cpp]
5105 [doc_cont]
5106
5107 These containers also show how easy it is to create/modify
5108 an existing container to make it possible to place it in shared memory.
5109
5110 [endsect]
5111
5112 [section:where_allocate Where is this being allocated?]
5113
5114 [*Boost.Interprocess] containers are placed in shared memory/memory mapped files,
5115 etc... using two mechanisms [*at the same time]:
5116
5117 * [*Boost.Interprocess ]`construct<>`, `find_or_construct<>`... functions. These
5118 functions place a C++ object in the shared memory/memory mapped file, but they
5119 place only the object itself, *not* the memory that this object may allocate dynamically.
5120
5121 * Shared memory allocators. These allow allocating shared memory/memory mapped file
5122 portions so that containers can allocate dynamically fragments of memory to store
5123 newly inserted elements.
5124
5125 This means that to place any [*Boost.Interprocess] container (including
5126 [*Boost.Interprocess] strings) in shared memory or memory mapped files,
5127 containers *must*:
5128
5129 * Define their template allocator parameter to a [*Boost.Interprocess] allocator.
5130
5131 * Every container constructor must take the [*Boost.Interprocess] allocator as parameter.
5132
5133 * You must use construct<>/find_or_construct<>... functions to place the container
5134 in the managed memory.
5135
5136 If you do the first two points but you don't use `construct<>` or `find_or_construct<>`,
5137 you are creating a container placed *only* in your process, but one that allocates memory
5138 for contained types from the shared memory/memory mapped file.
5139
5140 Let's see an example:
5141
5142 [import ../example/doc_where_allocate.cpp]
5143 [doc_where_allocate]
5144
5145 [endsect]
5146
5147 [section:containers_and_move Move semantics in Interprocess containers]
5148
5149 [*Boost.Interprocess] containers support move semantics, which means that the contents
5150 of a container can be moved from one container to another, without any copying. The
5151 contents of the source container are transferred to the target container and the source
5152 container is left in a default-constructed state.
5153
5154 When using containers of containers, we can also use move-semantics to insert
5155 objects in the container, avoiding unnecessary copies.
5156
5157
5158 To transfer the contents of a container to another one, use
5159 `boost::move()` function, as shown in the example. For more details
5160 about functions supporting move-semantics, see the reference section of
5161 Boost.Interprocess containers:
5162
5163 [import ../example/doc_move_containers.cpp]
5164 [doc_move_containers]
5165
5166 [endsect]
5167
5168 [section:containers_of_containers Containers of containers]
5169
5170 When creating containers of containers, each container needs an allocator.
5171 To avoid using several allocators with complex type definitions, we can take
5172 advantage of the type erasure provided by void allocators and the ability
5173 to implicitly convert void allocators into allocators that allocate other types.
5174
5175 Here we have an example that builds a map in shared memory. The key is a string
5176 and the mapped type is a class that stores several containers:
5177
5178 [import ../example/doc_complex_map.cpp]
5179 [doc_complex_map]
5180
5181 [endsect]
5182
5183 [endsect]
5184
5185 [section:additional_containers Boost containers compatible with Boost.Interprocess]
5186
5187 As mentioned, container developers might need to change their implementation to make them
5188 compatible with Boost.Interprocess, because implementations usually ignore allocators with
5189 smart pointers. Fortunately, several Boost containers are compatible with [*Interprocess].
5190
5191 [section:unordered Boost unordered containers]
5192
5193 [*Boost.Unordered] containers are compatible with Interprocess, so programmers can store
5194 hash containers in shared memory and memory mapped files. Here is a small example storing
5195 `unordered_map` in shared memory:
5196
5197 [import ../example/doc_unordered_map.cpp]
5198 [doc_unordered_map]
5199
5200 [endsect]
5201
5202 [section:multi_index Boost.MultiIndex containers]
5203
5204 The widely used [*Boost.MultiIndex] library is compatible with [*Boost.Interprocess] so
5205 we can construct pretty good databases in shared memory. Constructing databases in shared
5206 memory is a bit tougher than in normal memory, usually because those databases contain strings
5207 and those strings need to be placed in shared memory. Shared memory strings require
5208 an allocator in their constructors so this usually makes object insertion a bit more
5209 complicated.
5210
5211 Here is an example that shows how to put a multi index container in shared memory:
5212
5213 [import ../example/doc_multi_index.cpp]
5214 [doc_multi_index]
5215
5216 [endsect]
5217
5218 Programmers can place [*Boost.CircularBuffer] containers in shared memory provided
5219 they disable debugging facilities with the defines `BOOST_CB_DISABLE_DEBUG` or the more
5220 general `NDEBUG`. The reason is that those debugging facilities are only compatible
5221 with raw pointers.
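For example, here is a minimal sketch (segment name, object name and sizes are illustrative, not taken from the library examples) of a `boost::circular_buffer` of integers constructed in a managed shared memory segment:

[c++]

   //A minimal sketch: BOOST_CB_DISABLE_DEBUG must be defined before
   //including the circular_buffer header so that no debugging facilities
   //(which store raw pointers) are used.
   #define BOOST_CB_DISABLE_DEBUG
   #include <boost/circular_buffer.hpp>
   #include <boost/interprocess/managed_shared_memory.hpp>
   #include <boost/interprocess/allocators/allocator.hpp>

   using namespace boost::interprocess;

   int main()
   {
      //Illustrative segment name and size
      shared_memory_object::remove("MySharedMemory");
      managed_shared_memory segment(create_only, "MySharedMemory", 65536);

      typedef allocator<int, managed_shared_memory::segment_manager> ShmAllocator;
      typedef boost::circular_buffer<int, ShmAllocator> ShmCircularBuffer;

      //Construct the circular buffer in shared memory with capacity 3
      ShmCircularBuffer *cb = segment.construct<ShmCircularBuffer>
         ("MyCircularBuffer")(3, ShmAllocator(segment.get_segment_manager()));

      cb->push_back(1); cb->push_back(2); cb->push_back(3);

      segment.destroy_ptr(cb);
      shared_memory_object::remove("MySharedMemory");
      return 0;
   }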
5222
5223 [endsect]
5224
5225 [endsect]
5226
5227 [section:memory_algorithms Memory allocation algorithms]
5228
5229 [section:simple_seq_fit simple_seq_fit: A simple shared memory management algorithm]
5230
5231 The algorithm is a variation of sequential fit using a singly
5232 linked list of free memory buffers. The algorithm is based
5233 on the article about shared memory titled
5234 [@http://home.earthlink.net/~joshwalker1/writing/SharedMemory.html ['"Taming Shared Memory"] ].
5235 The algorithm is as follows:
5236
5237 The shared memory is divided into blocks of free shared memory,
5238 each one with some control data and several bytes of memory
5239 ready to be used. The control data contains a pointer (in
5240 our case offset_ptr) to the next free block and the size of
5241 the block. The allocator consists of a singly linked list
5242 of free blocks, ordered by address. The last block always
5243 points to the first block:
5244
5245 [c++]
5246
5247 simple_seq_fit memory layout:
5248
5249 main extra allocated free_block_1 allocated free_block_2 allocated free_block_3
5250 header header block ctrl usr block ctrl usr block ctrl usr
5251 _________ _____ _________ _______________ _________ _______________ _________ _______________
5252 | || || || | || || | || || | |
5253 |free|ctrl||extra|| ||next|size| mem || ||next|size| mem || ||next|size| mem |
5254 |_________||_____||_________||_________|_____||_________||_________|_____||_________||_________|_____|
5255 | | | | | | |
5256 |_>_>_>_>_>_>_>_>_>_>_>_>_| |_>_>_>_>_>_>_>_>_>_>_>_>_| |_>_>_>_>_>_>_>_>_>_>_>_| |
5257 | |
5258 |_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<_<__|
5259
5260 When a user requests N bytes of memory, the allocator
5261 traverses the free block list looking for a block large
5262 enough. If the "mem" part of the block has the same
5263 size as the requested memory, we erase the block from
5264 the list and return a pointer to the "mem" part of the
5265 block. If the "mem" part size is bigger than needed,
5266 we split the block into two blocks, one of the requested
5267 size and the other with the remaining size. Now, we take
5268 the block with the exact size, erase it from the list and
5269 give it to the user.
5270
5271 When the user deallocates a block, we traverse the list (remember
5272 that the list is ordered) and search for its place depending on
5273 the block address. Once found, we try to merge the block with
5274 adjacent blocks if possible.
5275
5276 To ease implementation, the size of the free memory block
5277 is measured in multiples of "basic_size" bytes. The basic
5278 size will be the size of the control block aligned to
5279 the machine's most restrictive alignment.
5280
5281 This algorithm is a low size overhead algorithm suitable for simple allocation
5282 schemes. This algorithm should only be used when size is a major concern, because
5283 the performance of this algorithm suffers when the memory is fragmented. This
5284 algorithm has linear allocation and deallocation time, so when the number
5285 of allocations is high, the user should use a more performance-friendly algorithm.
5286
5287 In most 32-bit systems, with 8 byte alignment, "basic_size" is 8 bytes.
5288 This means that an allocation request of 1 byte leads to
5289 the creation of a 16 byte block, where 8 bytes are available to the user.
5290 An allocation of 8 bytes also leads to the same 16 byte block.
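If this low size overhead matters more than allocation speed, the algorithm can be selected explicitly when defining the managed memory segment type. A minimal sketch (the typedef name is illustrative):

[c++]

   #include <boost/interprocess/managed_shared_memory.hpp>
   #include <boost/interprocess/mem_algo/simple_seq_fit.hpp>
   #include <boost/interprocess/indexes/iset_index.hpp>
   #include <boost/interprocess/sync/mutex_family.hpp>
   #include <boost/interprocess/offset_ptr.hpp>

   using namespace boost::interprocess;

   //A managed shared memory that uses simple_seq_fit instead of
   //the default memory algorithm
   typedef basic_managed_shared_memory
      < char
      , simple_seq_fit<mutex_family, offset_ptr<void> >
      , iset_index
      > my_simple_seq_fit_managed_shared_memory;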
5291
5292 [endsect]
5293
5294 [section:rbtree_best_fit rbtree_best_fit: Best-fit logarithmic-time complexity allocation]
5295
5296 This algorithm is an advanced algorithm using red-black trees to sort the free
5297 portions of the memory segment by size. This allows logarithmic complexity
5298 allocation. Apart from this, a doubly-linked list of all portions of memory
5299 (free and allocated) is maintained to allow constant-time access to previous
5300 and next blocks when doing merging operations.
5301
5302 The data used to create the red-black tree of free nodes is overwritten by the user
5303 since it's no longer used once the memory is allocated. This keeps the memory
5304 size overhead down to the doubly linked list overhead, which is pretty small (two pointers).
5305 Basically this is the scheme:
5306
5307 [c++]
5308
5309 rbtree_best_fit memory layout:
5310
5311 main allocated block free block allocated block free block
5312 header
5313 _______________ _______________ _________________________________ _______________ _________________________________
5314 | || | || | | || | || | | |
5315 | main header ||next|prev| mem ||next|prev|left|right|parent| mem ||next|prev| mem ||next|prev|left|right|parent| mem |
5316 |_______________||_________|_____||_________|_________________|_____||_________|_____||_________|_________________|_____|
5317
5318
5319 This allocation algorithm is pretty fast and scales well with big shared memory
5320 segments and a big number of allocations. To form a block, a minimum memory size is needed:
5321 the sum of the doubly linked list and the red-black tree control data.
5322 The size of a block is measured in multiples of the most restrictive alignment value.
5323
5324 In most 32-bit systems with 8 byte alignment the minimum size of a block is 24 bytes.
5325 When a block is allocated the control data related to the red black tree
5326 is overwritten by the user (because it's only needed for free blocks).
5327
5328 In those systems a 1 byte allocation request means that:
5329
5330 * 24 bytes of memory from the segment are used to form a block.
5331 * 16 bytes of them are usable for the user.
5332
5333 For really small allocations (<= 8 bytes), this algorithm wastes more memory than the
5334 simple sequential fit algorithm (8 bytes more).
5335 For allocations bigger than 8 bytes the memory overhead is exactly the same.
5336 This is the default allocation algorithm in [*Boost.Interprocess] managed memory
5337 segments.
5338
5339 [endsect]
5340
5341 [endsect]
5342
5343 [section:streams Direct iostream formatting: vectorstream and bufferstream]
5344
5345 Shared memory, memory-mapped files and all [*Boost.Interprocess] mechanisms are focused
5346 on efficiency. The reason why shared memory is used is that it's the
5347 fastest IPC mechanism available. When passing text-oriented messages through
5348 shared memory, there is a need to format the message. Obviously, C++ offers
5349 the iostream framework for that work.
5350
5351 Some programmers appreciate the iostream safety and design for memory
5352 formatting, but feel that the stringstream family is far from efficient, not
5353 when formatting, but when obtaining the formatted data as a string, or when
5354 setting the string from which the stream will extract data. An example:
5355
5356 [c++]
5357
5358 //Some formatting elements
5359 std::string my_text = "...";
5360 int number;
5361
5362 //Data reader
5363 std::istringstream input_processor;
5364
5365 //This makes a copy of the string. If not using a
5366 //reference counted string, this is a serious overhead.
5367 input_processor.str(my_text);
5368
5369 //Extract data
5370 while(/*...*/){
5371 input_processor >> number;
5372 }
5373
5374 //Data writer
5375 std::ostringstream output_processor;
5376
5377 //Write data
5378 while(/*...*/){
5379 output_processor << number;
5380 }
5381
5382 //This returns a temporary string. Even with return-value
5383 //optimization this is expensive.
5384 my_text = output_processor.str();
5385
5386 The problem is even worse if the string is a shared-memory string, because
5387 to extract data, we must copy the data first from shared-memory to a
5388 `std::string` and then to a `std::stringstream`. To encode data in a shared memory
5389 string we should copy data from a `std::stringstream` to a `std::string` and then
5390 to the shared-memory string.
5391
5392 Because of this overhead, [*Boost.Interprocess] offers a way to format memory-strings
5393 (in shared memory, memory mapped files or any other memory segment) that
5394 can avoid all unneeded string copy and memory allocation/deallocations, while
5395 using all iostream facilities. [*Boost.Interprocess] *vectorstream* and *bufferstream* implement
5396 vector-based and fixed-size buffer based storage support for iostreams and
5397 all the formatting/locale hard work is done by standard `std::basic_streambuf<>`
5398 and `std::basic_iostream<>` classes.
5399
5400 [section:vectorstream Formatting directly in your character vector: vectorstream]
5401
5402 The *vectorstream* class family (*basic_vectorbuf*, *basic_ivectorstream*
5403 ,*basic_ovectorstream* and *basic_vectorstream*) is an efficient way to obtain
5404 formatted reading/writing directly in a character vector. This way, if
5405 a shared-memory vector is used, data is extracted/written from/to the shared-memory
5406 vector, without additional copy/allocation. We can see the declaration of
5407 basic_vectorstream here:
5408
5409 //!A basic_iostream class that holds a character vector specified by CharVector
5410 //!template parameter as its formatting buffer. The vector must have
5411 //!contiguous storage, like std::vector, boost::interprocess::vector or
5412 //!boost::interprocess::basic_string
5413 template <class CharVector, class CharTraits =
5414 std::char_traits<typename CharVector::value_type> >
5415 class basic_vectorstream
5416 : public std::basic_iostream<typename CharVector::value_type, CharTraits>
5417
5418 {
5419 public:
5420 typedef CharVector vector_type;
5421 typedef typename std::basic_ios
5422 <typename CharVector::value_type, CharTraits>::char_type char_type;
5423 typedef typename std::basic_ios<char_type, CharTraits>::int_type int_type;
5424 typedef typename std::basic_ios<char_type, CharTraits>::pos_type pos_type;
5425 typedef typename std::basic_ios<char_type, CharTraits>::off_type off_type;
5426 typedef typename std::basic_ios<char_type, CharTraits>::traits_type traits_type;
5427
5428 //!Constructor. Throws if vector_type default constructor throws.
5429 basic_vectorstream(std::ios_base::openmode mode
5430 = std::ios_base::in | std::ios_base::out);
5431
5432 //!Constructor. Throws if vector_type(const Parameter &param) throws.
5433 template<class Parameter>
5434 basic_vectorstream(const Parameter &param, std::ios_base::openmode mode
5435 = std::ios_base::in | std::ios_base::out);
5436
5437 ~basic_vectorstream(){}
5438
5439 //!Returns the address of the stored stream buffer.
5440 basic_vectorbuf<CharVector, CharTraits>* rdbuf() const;
5441
5442 //!Swaps the underlying vector with the passed vector.
5443 //!This function resets the position in the stream.
5444 //!Does not throw.
5445 void swap_vector(vector_type &vect);
5446
5447 //!Returns a const reference to the internal vector.
5448 //!Does not throw.
5449 const vector_type &vector() const;
5450
5451 //!Preallocates memory from the internal vector.
5452 //!Resets the stream to the first position.
5453 //!Throws if the internals vector's memory allocation throws.
5454 void reserve(typename vector_type::size_type size);
5455 };
5456
5457 The vector type is templatized, so that we can use any type of vector:
5458 [*std::vector], [classref boost::interprocess::vector]... But the storage must be *contiguous*,
5459 we can't use a deque. We can even use *boost::interprocess::basic_string*, since it has a
5460 vector interface and it has contiguous storage. *We can't use std::string*, because
5461 although some std::string implementations are vector-based, others can have
5462 optimizations and reference-counted implementations.
5463
5464 The user can obtain a const reference to the internal vector using the
5465 `vector()` member function and can also swap the internal vector
5466 with an external one calling `void swap_vector(vector_type &vect)`.
5467 The swap function resets the stream position.
5468 These functions provide efficient ways to obtain the formatted data while avoiding
5469 all allocations and data copies.
5470
5471 Let's see an example of how to use vectorstream:
5472
5473 [import ../example/doc_vectorstream.cpp]
5474 [doc_vectorstream]
5475
5476 [endsect]
5477
5478 [section:bufferstream Formatting directly in your character buffer: bufferstream]
5479
5480 As seen, vectorstream offers an easy and secure way for efficient iostream
5481 formatting, but many times, we have to read or write formatted data from/to a
5482 fixed size character buffer (a static buffer, a c-string, or any other).
5483 Because of the overhead of stringstream, many developers (especially in
5484 embedded systems) choose the sprintf family. The *bufferstream* classes offer an
5485 iostream interface with direct formatting in a fixed size memory buffer with
5486 protection against buffer overflows. This is the interface:
5487
5488 //!A basic_iostream class that uses a fixed size character buffer
5489 //!as its formatting buffer.
5490 template <class CharT, class CharTraits = std::char_traits<CharT> >
5491 class basic_bufferstream
5492 : public std::basic_iostream<CharT, CharTraits>
5493
5494 {
5495 public: // Typedefs
5496 typedef typename std::basic_ios
5497 <CharT, CharTraits>::char_type char_type;
5498 typedef typename std::basic_ios<char_type, CharTraits>::int_type int_type;
5499 typedef typename std::basic_ios<char_type, CharTraits>::pos_type pos_type;
5500 typedef typename std::basic_ios<char_type, CharTraits>::off_type off_type;
5501 typedef typename std::basic_ios<char_type, CharTraits>::traits_type traits_type;
5502
5503 //!Constructor. Does not throw.
5504 basic_bufferstream(std::ios_base::openmode mode
5505 = std::ios_base::in | std::ios_base::out);
5506
5507 //!Constructor. Assigns formatting buffer. Does not throw.
5508 basic_bufferstream(CharT *buffer, std::size_t length,
5509 std::ios_base::openmode mode
5510 = std::ios_base::in | std::ios_base::out);
5511
5512 //!Returns the address of the stored stream buffer.
5513 basic_bufferbuf<CharT, CharTraits>* rdbuf() const;
5514
5515 //!Returns the pointer and size of the internal buffer.
5516 //!Does not throw.
5517 std::pair<CharT *, std::size_t> buffer() const;
5518
5519 //!Sets the underlying buffer to a new value. Resets
5520 //!stream position. Does not throw.
5521 void buffer(CharT *buffer, std::size_t length);
5522 };
5523
5524 //Some typedefs to simplify usage
5525 typedef basic_bufferstream<char> bufferstream;
5526 typedef basic_bufferstream<wchar_t> wbufferstream;
5527 // ...
5528
5529 While reading from a fixed size buffer, *bufferstream* activates the eofbit flag if
5530 we try to read beyond the end of the buffer. While writing to a
5531 fixed size buffer, *bufferstream* will activate the badbit flag if a buffer overflow
5532 is going to happen and disallows writing.
5533 formatting through *bufferstream* is secure and efficient, and offers a good
5534 alternative to sprintf/sscanf functions. Let's see an example:
5535
5536 [import ../example/doc_bufferstream.cpp]
5537 [doc_bufferstream]
5538
5539 As seen, *bufferstream* offers an efficient way to format data without any
5540 allocation and extra copies. This is very helpful in embedded systems, or
5541 when formatting inside time-critical loops, where stringstream's extra copies would
5542 be too expensive. Unlike sprintf/sscanf, it has protection against buffer
5543 overflows. As we know, according to the *Technical Report on C++ Performance*,
5544 it's possible to design efficient iostreams for embedded platforms, so this
5545 bufferstream class comes in handy to format data to stack, static or shared memory
5546 buffers.
5547
5548 [endsect]
5549
5550 [endsect]
5551
5552 [section:interprocess_smart_ptr Ownership smart pointers]
5553
5554 C++ users know the importance of ownership smart pointers when dealing with resources.
5555 Boost offers a wide range of such pointers: `intrusive_ptr<>`,
5556 `scoped_ptr<>`, `shared_ptr<>`...
5557
5558 When building complex shared memory/memory mapped file structures, programmers
5559 would also like to take advantage of these smart pointers. The problem is that
5560 Boost and C++ TR1 smart pointers are not ready to be used for shared memory. The cause
5561 is that those smart pointers contain raw pointers and they use virtual functions,
5562 something that is not possible if you want to place your data in shared memory.
5563 The virtual function limitation makes it impossible to achieve the same level of
5564 functionality as Boost and TR1 with [*Boost.Interprocess] smart pointers.
5565
5566 Interprocess ownership smart pointers are mainly "smart pointers containing smart pointers",
5567 so we can specify the pointer type they contain.
5568
5569 [section:intrusive_ptr Intrusive pointer]
5570
5571 [classref boost::interprocess::intrusive_ptr] is the generalization of `boost::intrusive_ptr<>`
5572 to allow non-raw pointers as intrusive pointer members. As with the well-known
5573 `boost::intrusive_ptr`, we must specify the pointee type, but we must also specify
5574 the pointer type to be stored in the intrusive_ptr:
5575
5576 [c++]
5577
5578 //!The intrusive_ptr class template stores a pointer to an object
5579 //!with an embedded reference count. intrusive_ptr is parameterized on
5580 //!T (the type of the object pointed to) and VoidPointer(a void pointer type
5581 //!that defines the type of pointer that intrusive_ptr will store).
5582 //!intrusive_ptr<T, void *> defines a class with a T* member whereas
5583 //!intrusive_ptr<T, offset_ptr<void> > defines a class with a offset_ptr<T> member.
5584 //!Relies on unqualified calls to:
5585 //!
5586 //!void intrusive_ptr_add_ref(T * p);
5587 //!void intrusive_ptr_release(T * p);
5588 //!
5589 //!with (p != 0)
5590 //!
5591 //!The object is responsible for destroying itself.
5592 template<class T, class VoidPointer>
5593 class intrusive_ptr;
5594
5595 So `boost::interprocess::intrusive_ptr<MyClass, void*>` is equivalent to
5596 `boost::intrusive_ptr<MyClass>`. But if we want to place the intrusive_ptr in
5597 shared memory we must specify a relative pointer type like
5598 `boost::interprocess::intrusive_ptr<MyClass, boost::interprocess::offset_ptr<void> >`
5599
5600 [import ../example/doc_intrusive.cpp]
5601 [doc_intrusive]
5602
5603 [endsect]
5604
5605 [section:scoped_ptr Scoped pointer]
5606
5607 `boost::interprocess::scoped_ptr<>` is the big brother of `boost::scoped_ptr<>`, which
5608 adds a custom deleter to specify how the pointer passed to the scoped_ptr must be destroyed.
5609 Also, the `pointer` typedef of the deleter will specify the pointer type stored by scoped_ptr.
5610
5611 [c++]
5612
5613 //!scoped_ptr stores a pointer to a dynamically allocated object.
5614 //!The object pointed to is guaranteed to be deleted, either on destruction
5615 //!of the scoped_ptr, or via an explicit reset. The user can avoid this
5616 //!deletion using release().
5617 //!scoped_ptr is parameterized on T (the type of the object pointed to) and
5618 //!Deleter (the functor to be executed to delete the internal pointer).
5619 //!The internal pointer will be of the same pointer type as typename
5620 //!Deleter::pointer type (that is, if typename Deleter::pointer is
5621 //!offset_ptr<void>, the internal pointer will be offset_ptr<T>).
5622 template<class T, class Deleter>
5623 class scoped_ptr;
5624
5625 `scoped_ptr<>` comes in handy to implement *rollbacks* with exceptions: if an exception
5626 is thrown or we call `return` in the scope of `scoped_ptr<>`, the deleter is
5627 automatically called so that *the deleter can be considered as a rollback* function.
5628 If all goes well, we call `release()` member function to avoid rollback when
5629 the `scoped_ptr` goes out of scope.
5630
5631 [import ../example/doc_scoped_ptr.cpp]
5632 [doc_scoped_ptr]
5633
5634 [endsect]
5635
5636 [section:shared_ptr Shared pointer and weak pointer]
5637
5638 [*Boost.Interprocess] also offers the possibility of creating non-intrusive
5639 reference-counted objects in managed shared memory or mapped files.
5640
5641 Unlike
5642 [@http://www.boost.org/libs/smart_ptr/shared_ptr.htm boost::shared_ptr],
5643 due to limitations of mapped segments [classref boost::interprocess::shared_ptr]
5644 cannot take advantage of virtual functions to maintain the same shared pointer
5645 type while providing user-defined allocators and deleters. The allocator
5646 and the deleter are template parameters of the shared pointer.
5647
5648 Since the reference count and other auxiliary data needed by
5649 [classref boost::interprocess::shared_ptr shared_ptr] must be created also in
5650 the managed segment, and the deleter has to delete the object from
5651 the segment, the user must specify an allocator object and a deleter object
5652 when constructing a non-empty instance of
5653 [classref boost::interprocess::shared_ptr shared_ptr], just like
5654 [*Boost.Interprocess] containers need to pass allocators in their constructors.
5655
5656 Here is the declaration of [classref boost::interprocess::shared_ptr shared_ptr]:
5657
5658 [c++]
5659
5660 template<class T, class VoidAllocator, class Deleter>
5661 class shared_ptr;
5662
5663 * T is the pointed type.
5664 * VoidAllocator is the allocator to be used to allocate auxiliary
5665 elements such as the reference count, the deleter...
5666 The internal `pointer` typedef of the allocator will determine
5667 the type of pointer that shared_ptr will internally use, so
5668 allocators defining `pointer` as `offset_ptr<void>` will
5669 make all internal pointers used by `shared_ptr` to be
5670 also relative pointers. See [classref boost::interprocess::allocator]
5671 for a working allocator.
5672 * Deleter is the function object that will be used to destroy
5673 the pointed object when the last reference to the object
5674 is destroyed. The deleter functor will take a pointer to T
5675 of the same category as the void pointer defined by
5676 `VoidAllocator::pointer`. See [classref boost::interprocess::deleter]
5677 for a generic deleter that erases an object from a managed segment.
5678
5679 With correctly specified parameters, [*Boost.Interprocess] users
5680 can create objects in shared memory that hold shared pointers pointing
5681 to other objects also in shared memory, obtaining the benefits of
5682 reference counting. Let's see how to create a shared pointer in a managed shared memory:
5683
5684 [import ../example/doc_shared_ptr_explicit.cpp]
5685 [doc_shared_ptr_explicit]
5686
5687 [classref boost::interprocess::shared_ptr] is very flexible and
5688 configurable (we can specify the allocator and the deleter, for example),
5689 but as shown, the creation of a shared pointer in managed segments
5690 requires too much typing.
5691
5692 To simplify this usage, [classref boost::interprocess::shared_ptr] header
5693 offers a shared pointer definition helper class
5694 ([classref boost::interprocess::managed_shared_ptr managed_shared_ptr]) and a function
5695 ([funcref boost::interprocess::make_managed_shared_ptr make_managed_shared_ptr])
5696 to easily construct a shared pointer from a type allocated in a managed segment
5697 with an allocator that will allocate the reference count also in the managed
5698 segment and a deleter that will erase the object from the segment.
5699
5700 These utilities will use a [*Boost.Interprocess] allocator
5701 ([classref boost::interprocess::allocator])
5702 and deleter ([classref boost::interprocess::deleter]) to do their job.
5703 The definition of the previous shared pointer
5704 could be simplified to the following:
5705
5706 [c++]
5707
5708 typedef managed_shared_ptr<MyType, managed_shared_memory>::type my_shared_ptr;
5709
5710 And the creation of a shared pointer can be simplified to this:
5711
5712 [c++]
5713
5714 my_shared_ptr sh_ptr = make_managed_shared_ptr
5715 (segment.construct<MyType>("object to share")(), segment);
5716
5717 [*Boost.Interprocess] also offers a weak pointer named
5718 [classref boost::interprocess::weak_ptr weak_ptr] (with its corresponding
5719 [classref boost::interprocess::managed_weak_ptr managed_weak_ptr] and
5720 [funcref boost::interprocess::make_managed_weak_ptr make_managed_weak_ptr] utilities)
5721 to implement non-owning observers of an object owned by
5722 [classref boost::interprocess::shared_ptr shared_ptr].
5723
5724 Now let's see a detailed example of the use of
5725 [classref boost::interprocess::shared_ptr shared_ptr]
5726 and
5727 [classref boost::interprocess::weak_ptr weak_ptr]:
5728
5729 [import ../example/doc_shared_ptr.cpp]
5730 [doc_shared_ptr]
5731
5732 In general, using [*Boost.Interprocess]' [classref boost::interprocess::shared_ptr shared_ptr]
5733 and [classref boost::interprocess::weak_ptr weak_ptr] is very similar to their
5734 counterparts [@http://www.boost.org/libs/smart_ptr/shared_ptr.htm boost::shared_ptr]
5735 and [@http://www.boost.org/libs/smart_ptr/weak_ptr.htm boost::weak_ptr], but
5736 they need more template parameters and more run-time parameters in their constructors.
5737
5738 Just like [@http://www.boost.org/libs/smart_ptr/shared_ptr.htm boost::shared_ptr]
5739 can be stored in a STL container, [classref boost::interprocess::shared_ptr shared_ptr]
5740 can also be stored in [*Boost.Interprocess] containers.
5741
5742 If a programmer just uses [classref boost::interprocess::shared_ptr shared_ptr]
5743 to be able to insert dynamically constructed objects into a container constructed
5744 in the managed segment, but does not need to share the ownership of that object with
5745 other objects, [classref boost::interprocess::managed_unique_ptr managed_unique_ptr] is a much
5746 faster and easier to use alternative.
5747
5748 [endsect]
5749
5750 [section:unique_ptr Unique pointer]
5751
5752 Unique ownership smart pointers are really useful to free programmers from
5753 manually releasing non-shared objects. [*Boost.Interprocess]'
5754 `unique_ptr` is much like
5755 [classref boost::interprocess::scoped_ptr scoped_ptr] but it's [*moveable]
5756 and can be easily inserted in [*Boost.Interprocess] containers.
5757 Interprocess had its own `unique_ptr` implementation but since Boost 1.57,
5758 [*Boost.Interprocess] uses the improved and generic `boost::unique_ptr`
5759 implementation. Here is the declaration of the unique pointer class:
5760
5761 [c++]
5762
5763 template <class T, class D>
5764 class unique_ptr;
5765
5766 * T is the type of the object pointed to by `unique_ptr`.
5767 * D is the deleter that will delete the object pointed to by
5768 the unique_ptr when the unique pointer
5769 is destroyed (and if it still has ownership of the object). If the deleter defines
5770 an internal `pointer` typedef, `unique_ptr`
5771 will use an internal pointer of the same type. So if `D::pointer` is `offset_ptr<T>`,
5772 the unique pointer will store a relative pointer instead of a raw one. This
5773 allows placing `unique_ptr` in shared
5774 memory and memory-mapped files.
5775
5776 `unique_ptr` can release the ownership of
5777 the stored pointer, so it's also useful as a rollback function. One of the main
5778 properties of the class is that it [*is not copyable, but only moveable]. When a unique
5779 pointer is moved to another one, the ownership of the pointer is transferred from
5780 the source unique pointer to the target unique pointer. If the target unique pointer
5781 owned an object, that object is first deleted before taking ownership of the new object.
5782
5783 [*Boost.Interprocess] also offers auxiliary types to
5784 easily define and construct unique pointers that can be placed in managed segments
5785 and will correctly delete the owned object from the segment:
5786 [classref boost::interprocess::managed_unique_ptr managed_unique_ptr]
5787 and
5788 [funcref boost::interprocess::make_managed_unique_ptr make_managed_unique_ptr]
5789 utilities.
5790
5791 Here we see an example of the use of `unique_ptr`,
5792 including creating containers of such objects:
5793
5794 [import ../example/doc_unique_ptr.cpp]
5795 [doc_unique_ptr]
5796
5797 [endsect]
5798
5799 [endsect]
5800
5801 [section:architecture Architecture and internals]
5802
5803 [section:basic_guidelines Basic guidelines]
5804
5805 When building the [*Boost.Interprocess] architecture, I followed some basic guidelines that can be
5806 summarized by these points:
5807
5808 * [*Boost.Interprocess] should be portable at least in UNIX and Windows systems. That
5809 means unifying not only interfaces but also behaviour. This is why
5810 [*Boost.Interprocess] has chosen kernel or filesystem persistence for shared memory
5811 and named synchronization mechanisms. Process persistence for shared memory is also
5812 desirable but it's difficult to achieve in UNIX systems.
5813
5814 * [*Boost.Interprocess] inter-process synchronization primitives should be equal to thread
5815 synchronization primitives. [*Boost.Interprocess] aims to have an interface compatible
5816 with the C++ standard thread API.
5817
5818 * The [*Boost.Interprocess] architecture should be modular, customizable but efficient. That's
5819 why [*Boost.Interprocess] is based on templates: memory algorithms, index types,
5820 mutex types and other classes are customizable through template parameters.
5821
5822 * [*Boost.Interprocess] architecture should allow the same concurrency as thread based
5823 programming. Different mutual exclusion levels are defined so that a process
5824 can concurrently allocate raw memory when expanding a shared memory vector while another
5825 process can safely search for a named object.
5826
5827 * [*Boost.Interprocess] containers know nothing about [*Boost.Interprocess]. All specific
5828 behaviour is contained in the STL-like allocators. That allows STL vendors to slightly
5829 modify (or better said, generalize) their standard container implementations and obtain
5830 a fully std::allocator and boost::interprocess::allocator compatible container. This also
5831 makes [*Boost.Interprocess] containers compatible with standard algorithms.
5832
5833 [*Boost.Interprocess] is built upon 3 basic classes: a [*memory algorithm], a
5834 [*segment manager] and a [*managed memory segment]:
5835
5836 [endsect]
5837
5838 [section:architecture_algorithm_to_managed From the memory algorithm to the managed segment]
5839
5840 [section:architecture_memory_algorithm The memory algorithm]
5841
5842 The [*memory algorithm] is an object that is placed in the first bytes of a
5843 shared memory/memory mapped file segment. The [*memory algorithm] can return
5844 portions of that segment to users, marking them as used, and the user can return those
5845 portions to the [*memory algorithm] so that the [*memory algorithm] marks them as free
5846 again. There is an exception though: some bytes beyond the end of the memory
5847 algorithm object are reserved and can't be used for this dynamic allocation.
5848 This "reserved" zone will be used to place other additional objects
5849 in a well-known place.
5850
5851 To sum up, a [*memory algorithm] has the same mission as malloc/free of
5852 the standard C library, but it can only return portions of the segment
5853 where it is placed. The layout of a memory segment would be:
5854
5855 [c++]
5856
5857 Layout of the memory segment:
5858 ____________ __________ ____________________________________________
5859 | | | |
5860 | memory | reserved | The memory algorithm will return portions |
5861 | algorithm | | of the rest of the segment. |
5862 |____________|__________|____________________________________________|
5863
5864
5865 The [*memory algorithm] takes care of synchronization, just like malloc/free
5866 guarantees that two threads can call malloc/free at the same time. This is usually
5867 achieved by placing a process-shared mutex as a member of the memory algorithm. Take
5868 into account that the memory algorithm knows [*nothing] about the segment (whether it is
5869 shared memory, a memory mapped file, etc.). For the memory algorithm the segment
5870 is just a fixed size memory buffer.
5871
5872 The [*memory algorithm] is also a configuration point for the rest of the
5873 [*Boost.Interprocess]
5874 framework since it defines two basic types as member typedefs:
5875
5876 [c++]
5877
5878 typedef /*implementation dependent*/ void_pointer;
5879 typedef /*implementation dependent*/ mutex_family;
5880
5881
5882 The `void_pointer` typedef defines the pointer type that will be used in the
5883 [*Boost.Interprocess] framework (segment manager, allocators, containers). If the memory
5884 algorithm is ready to be placed in a shared memory/mapped file mapped in different base
5885 addresses, this pointer type will be defined as `offset_ptr<void>` or a similar relative
5886 pointer. If the [*memory algorithm] will be used just with fixed address mapping,
5887 `void_pointer` can be defined as `void*`.
5888
5889 The rest of the interface of a [*Boost.Interprocess] [*memory algorithm] is described in
5890 [link interprocess.customizing_interprocess.custom_interprocess_alloc Writing a new shared memory allocation algorithm]
5891 section. As memory algorithm examples, you can see the implementations
5892 [classref boost::interprocess::simple_seq_fit simple_seq_fit] or
5893 [classref boost::interprocess::rbtree_best_fit rbtree_best_fit] classes.
5894
5895 [endsect]
5896
5897 [section:architecture_segment_manager The segment manager]
5898
5899 The *segment manager* is an object also placed in the first bytes of the
5900 managed memory segment (shared memory, memory mapped file) that offers more
5901 sophisticated services built on top of the [*memory algorithm]. How can [*both] the
5902 segment manager and the memory algorithm be placed at the beginning of the segment?
5903 That's because the segment manager [*owns] the memory algorithm: the
5904 truth is that the memory algorithm is [*embedded] in the segment manager:
5905
5906
5907 [c++]
5908
5909 The layout of managed memory segment:
5910 _______ _________________
5911 | | | |
5912 | some | memory | other |<- The memory algorithm considers
5913 |members|algorithm|members| "other members" as reserved memory, so
5914 |_______|_________|_______| it does not use it for dynamic allocation.
5915 |_________________________|____________________________________________
5916 | | |
5917 | segment manager | The memory algorithm will return portions |
5918 | | of the rest of the segment. |
5919 |_________________________|____________________________________________|
5920
5921
5922 The [*segment manager] initializes the memory algorithm and tells the memory
5923 algorithm that it should not use the memory where the rest of the
5924 [*segment manager]'s members are placed for dynamic allocations. The
5925 other members of the [*segment manager] are [*a recursive mutex]
5926 (defined by the memory algorithm's [*mutex_family::recursive_mutex] typedef member),
5927 and [*two indexes (maps)]: one to implement named allocations, and another one to
5928 implement "unique instance" allocations.
5929
5930 * The first index is a map with a pointer to a c-string (the name of the named object)
5931 as a key and a structure with information of the dynamically allocated object
5932 (the most important being the address and the size of the object).
5933
5934 * The second index is used to implement "unique instances"
5935 and is basically the same as the first index,
5936 but the name of the object comes from a `typeid(T).name()` operation.
5937
5938 The memory needed to store [name pointer, object information] pairs in the index is
5939 also allocated via the *memory algorithm*, so we can say that internal indexes
5940 are just like ordinary user objects built in the segment. The rest of the memory
5941 to store the name of the object, the object itself, and meta-data for
5942 destruction/deallocation is allocated using the *memory algorithm* in a single
5943 `allocate()` call.
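From the user's point of view, these two indexes correspond to named constructions and "unique instance" constructions in the managed segment. A minimal sketch (segment and object names are illustrative):

[c++]

   #include <boost/interprocess/managed_shared_memory.hpp>

   using namespace boost::interprocess;

   int main()
   {
      managed_shared_memory segment(open_or_create, "MySharedMemory", 65536);

      //Named allocation: the c-string "MyInt" is the key stored in the first index
      int *named_int = segment.construct<int>("MyInt")(42);

      //Unique instance allocation: the key stored in the second index
      //comes from typeid(int).name()
      int *unique_int = segment.construct<int>(unique_instance)(7);

      segment.destroy_ptr(named_int);
      segment.destroy<int>(unique_instance);
      shared_memory_object::remove("MySharedMemory");
      return 0;
   }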
5944
5945 As seen, the [*segment manager] knows [*nothing] about shared memory/memory mapped files.
5946 The [*segment manager] itself does not allocate portions of the segment,
5947 it just asks the *memory algorithm* to allocate the needed memory from the rest
5948 of the segment. The [*segment manager] is a class built above the memory algorithm
5949 that offers named object construction, unique instance construction, and many
5950 other services.
5951
5952 The [*segment manager] is implemented in [*Boost.Interprocess] by
5953 the [classref boost::interprocess::segment_manager segment_manager] class.
5954
5955 [c++]
5956
5957 template<class CharType
5958 ,class MemoryAlgorithm
5959 ,template<class IndexConfig> class IndexType>
5960 class segment_manager;
5961
5962 As seen, the segment manager is quite generic: we can specify the character type
5963 to be used to identify named objects, we can specify the memory algorithm that will
5964 dynamically control the portions of the memory segment, and we can also specify
5965 the index type that will store the [name pointer, object information] mapping.
5966 We can construct our own index types as explained in
5967 [link interprocess.customizing_interprocess.custom_indexes Building custom indexes] section.
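For illustration only, a segment manager roughly equivalent to the one used by the default `managed_shared_memory` could be spelled out like this (a sketch; the typedef name is mine):

[c++]

   #include <boost/interprocess/segment_manager.hpp>
   #include <boost/interprocess/mem_algo/rbtree_best_fit.hpp>
   #include <boost/interprocess/indexes/iset_index.hpp>
   #include <boost/interprocess/sync/mutex_family.hpp>

   using namespace boost::interprocess;

   //Narrow character names, rbtree_best_fit with relative pointers
   //and an iset_index based [name pointer, object information] index
   typedef segment_manager
      < char
      , rbtree_best_fit<mutex_family>
      , iset_index
      > my_segment_manager;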
5968
5969 [endsect]
5970
5971 [section:architecture_managed_memory Boost.Interprocess managed memory segments]
5972
5973 The [*Boost.Interprocess] managed memory segments construct the shared memory/memory
5974 mapped file, place the segment manager there and forward the user requests to the
5975 segment manager. For example, [classref boost::interprocess::basic_managed_shared_memory basic_managed_shared_memory]
5976 is a [*Boost.Interprocess] managed memory segment that works with shared memory.
5977 [classref boost::interprocess::basic_managed_mapped_file basic_managed_mapped_file] works with memory mapped files, etc...
5978
5979 Basically, the interface of a [*Boost.Interprocess] managed memory segment is the same as
5980 the [*segment manager] but it also offers functions to "open", "create", or "open or create"
5981 shared memory/memory-mapped file segments and initialize all needed resources.
5982 Managed memory segment classes are not built in shared memory or memory mapped files, they
5983 are normal C++ classes that store a pointer to the segment manager (which is built
5984 in shared memory or memory mapped files).
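For example, a minimal sketch of those "create", "open" and "open or create" semantics with `managed_shared_memory` (segment name and size are illustrative):

[c++]

   #include <boost/interprocess/managed_shared_memory.hpp>

   using namespace boost::interprocess;

   int main()
   {
      shared_memory_object::remove("MySegment");

      //The constructor creates/opens the shared memory and places
      //the segment manager in its first bytes
      managed_shared_memory created(create_only,    "MySegment", 65536);
      managed_shared_memory opened (open_only,      "MySegment");
      managed_shared_memory either (open_or_create, "MySegment", 65536);

      //Requests like this one are forwarded to the segment manager
      created.construct<int>("MyInt")(0);

      shared_memory_object::remove("MySegment");
      return 0;
   }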
5985
5986 Apart from this, managed memory segments offer specific functions: `managed_mapped_file`
5987 offers functions to flush memory contents to the file, `managed_heap_memory` offers
5988 functions to expand the memory, etc...
5989
5990 Most of the functions of [*Boost.Interprocess] managed memory segments can be shared
5991 between all managed memory segments, since many times they just forward the functions
5992 to the segment manager. Because of this,
5993 in [*Boost.Interprocess] all managed memory segments derive from a common class that
5994 implements memory-independent (shared memory, memory mapped files) functions:
5995 [@../../boost/interprocess/detail/managed_memory_impl.hpp
5996 boost::interprocess::ipcdetail::basic_managed_memory_impl]
5997
5998 Deriving from this class, [*Boost.Interprocess] implements several managed memory
5999 classes, for different memory backends:
6000
6001 * [classref boost::interprocess::basic_managed_shared_memory basic_managed_shared_memory] (for shared memory).
6002 * [classref boost::interprocess::basic_managed_mapped_file basic_managed_mapped_file] (for memory mapped files).
6003 * [classref boost::interprocess::basic_managed_heap_memory basic_managed_heap_memory] (for heap allocated memory).
6004 * [classref boost::interprocess::basic_managed_external_buffer basic_managed_external_buffer] (for user provided external buffer).
6005
6006 [endsect]
6007
6008 [endsect]
6009
6010 [section:allocators_containers Allocators and containers]
6011
6012 [section:allocators Boost.Interprocess allocators]
6013
6014 The [*Boost.Interprocess] STL-like allocators are fairly simple and follow the usual C++
6015 allocator approach. Normally, allocators for STL containers are built on top of the new/delete
6016 operators and, on top of those, they implement pools, arenas and other allocation tricks.
6017
6018 In [*Boost.Interprocess] allocators, the approach is similar, but all allocators are based
6019 on the *segment manager*. The segment manager is the only class that provides everything from simple
6020 memory allocation to named object creation. [*Boost.Interprocess] allocators always store
6021 a pointer to the segment manager, so that they can obtain memory from the segment or share
6022 a common pool between allocators.
6023
6024 As you can imagine, the member pointers of the allocator are not raw pointers, but
6025 pointer types defined by the `segment_manager::void_pointer` type. Apart from this,
6026 the `pointer` typedef of [*Boost.Interprocess] allocators is also of the same kind as
6027 `segment_manager::void_pointer`.
6028
6029 This means that if our allocation algorithm defines `void_pointer` as `offset_ptr<void>`,
6030 `boost::interprocess::allocator<T>` will store an `offset_ptr<segment_manager>`
6031 to point to the segment manager and the `boost::interprocess::allocator<T>::pointer` type
6032 will be `offset_ptr<T>`. This way, [*Boost.Interprocess] allocators can be placed in the
6033 memory segment managed by the segment manager, that is, shared memory, memory mapped files,
6034 etc...
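A minimal sketch of this (the typedef name is mine): with the default `managed_shared_memory`, whose `void_pointer` is `offset_ptr<void>`, the allocator's `pointer` is a relative pointer too:

[c++]

   #include <boost/interprocess/managed_shared_memory.hpp>
   #include <boost/interprocess/allocators/allocator.hpp>

   using namespace boost::interprocess;

   //Allocator of ints for the default managed_shared_memory
   typedef allocator<int, managed_shared_memory::segment_manager> shm_int_allocator;

   int main()
   {
      managed_shared_memory segment(open_or_create, "MySharedMemory", 65536);
      shm_int_allocator alloc(segment.get_segment_manager());

      //pointer is offset_ptr<int>, so both the allocator and the memory
      //it manages can live inside the mapped segment
      shm_int_allocator::pointer p = alloc.allocate(10);
      alloc.deallocate(p, 10);

      shared_memory_object::remove("MySharedMemory");
      return 0;
   }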
6035
6036 [endsect]
6037
6038 [section:implementation_segregated_storage_pools Implementation of [*Boost.Interprocess] segregated storage pools]
6039
6040 Segregated storage pools are simple and follow the classic segregated storage algorithm.
6041
6042 * The pool allocates chunks of memory using the segment manager's raw memory
6043 allocation functions.
6044 * The chunk contains a pointer to form a singly linked list of chunks. The pool
6045 will contain a pointer to the first chunk.
6046 * The rest of the memory of the chunk is divided into nodes of the requested size and
6047 no memory is used as payload for each node. Since the memory of a free node
6048 is not used, that memory is used to place a pointer to form a singly linked list of
6049 free nodes. The pool has a pointer to the first free node.
6050 * Allocating a node is just taking the first free node from the list. If the list
6051 is empty, a new chunk is allocated, linked in the list of chunks and the new free
6052 nodes are linked in the free node list.
6053 * Deallocation returns the node to the free node list.
6054 * When the pool is destroyed, the list of chunks is traversed and memory is returned
6055 to the segment manager.
6056
6057 The pool is implemented by the
6058 [@../../boost/interprocess/allocators/detail/node_pool.hpp
6059 private_node_pool and shared_node_pool] classes.
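The following is a compact sketch of that classic segregated storage scheme, written independently of the real implementation (here `::operator new` stands in for the segment manager's raw memory allocation, and alignment handling is omitted):

[c++]

   #include <cstddef>
   #include <new>

   class node_pool_sketch
   {
      struct node  { node  *next; };   //free node -> singly linked free list
      struct chunk { chunk *next; };   //chunk header -> singly linked chunk list

      std::size_t node_size_, nodes_per_chunk_;
      chunk *chunks_;
      node  *free_nodes_;

      void grow()  //allocate a new chunk and carve it into free nodes
      {
         chunk *c = static_cast<chunk*>
            (::operator new(sizeof(chunk) + node_size_*nodes_per_chunk_));
         c->next = chunks_;  chunks_ = c;
         char *mem = reinterpret_cast<char*>(c) + sizeof(chunk);
         for(std::size_t i = 0; i != nodes_per_chunk_; ++i)
            deallocate_node(mem + i*node_size_);
      }

      public:
      node_pool_sketch(std::size_t node_size, std::size_t nodes_per_chunk)
         : node_size_(node_size < sizeof(node) ? sizeof(node) : node_size)
         , nodes_per_chunk_(nodes_per_chunk), chunks_(0), free_nodes_(0)
      {}

      void *allocate_node()
      {
         if(!free_nodes_) grow();         //no free nodes: get a new chunk
         node *n = free_nodes_;           //pop the first free node
         free_nodes_ = n->next;
         return n;
      }

      void deallocate_node(void *p)
      {
         node *n = static_cast<node*>(p); //push the node back in the free list
         n->next = free_nodes_;
         free_nodes_ = n;
      }

      ~node_pool_sketch()                 //return all chunks to the source
      {
         while(chunks_){
            chunk *c = chunks_; chunks_ = c->next;
            ::operator delete(c);
         }
      }
   };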
6060
6061 [endsect]
6062
6063 [section:implementation_adaptive_pools Implementation of [*Boost.Interprocess] adaptive pools]
6064
6065 Adaptive pools are a variation of segregated lists but they have a more complicated
6066 approach:
6067
6068 * Instead of using raw allocation, the pool allocates [*aligned] chunks of memory
6069 using the segment manager. This is an [*essential] feature since a node can reach
6070 its chunk information by applying a simple mask to its address.
6071
6072 * Each chunk contains pointers to form a doubly linked list of chunks and
6073 an additional pointer to create a singly linked list of free nodes placed
6074 on that chunk. So unlike the segregated storage algorithm, the free list
6075 of nodes is implemented [*per chunk].
6076
6077 * The pool maintains the chunks in increasing order of free nodes. This improves
6078 locality and minimizes the dispersion of node allocations across the chunks
6079 facilitating the creation of totally free chunks.
6080
6081 * The pool has a pointer to the chunk with the minimum (but not zero) free nodes.
6082 This chunk is called the "active" chunk.
6083
6084 * Allocating a node is just returning the first free node of the "active" chunk.
6085 The list of chunks is reordered according to the free nodes count.
6086 The pointer to the "active" chunk is updated if necessary.
6087
6088 * If the pool runs out of nodes, a new chunk is allocated, and pushed back in the
6089 list of chunks. The pointer to the "active" chunk is updated if necessary.
6090
6091 * Deallocation returns the node to the free node list of its chunk and updates
6092 the "active" pool accordingly.
6093
6094 * If the number of totally free chunks exceeds the limit, chunks are returned
6095 to the segment manager.
6096
6097 * When the pool is destroyed, the list of chunks is traversed and memory is returned
6098 to the segment manager.
6099
6100 The adaptive pool is implemented by the
6101 [@../../boost/interprocess/allocators/detail/adaptive_node_pool.hpp
6102 private_adaptive_node_pool and adaptive_node_pool] classes.
6103
6104 [endsect]
6105
6106 [section:architecture_containers Boost.Interprocess containers]
6107
6108 [*Boost.Interprocess] containers are standard conforming counterparts of STL containers
6109 in `boost::interprocess` namespace, but with these little details:
6110
6111 * [*Boost.Interprocess] STL containers don't assume that memory allocated with
6112 an allocator can be deallocated with another allocator of
6113 the same type. They always compare allocators with `operator==()`
6114 to know if this is possible.
6115
6116 * The pointers of the internal structures of the [*Boost.Interprocess] containers are
6117 of the same type as the `pointer` type defined by the allocator of the container. This
6118 allows placing containers in managed memory segments mapped at different base addresses.
6119
6120 [endsect]
6121
6122 [endsect]
6123
6124 [section:performance Performance of Boost.Interprocess]
6125
6126 This section tries to explain the performance characteristics of [*Boost.Interprocess],
6127 so that you can optimize [*Boost.Interprocess] usage if you need more performance.
6128
6129 [section:performance_allocations Performance of raw memory allocations]
6130
6131 You can have two types of raw memory allocations with [*Boost.Interprocess] classes:
6132
6133 * [*Explicit]: The user calls `allocate()` and `deallocate()` functions of
6134 managed_shared_memory/managed_mapped_file... managed memory segments. This call is
6135 translated to a `MemoryAlgorithm::allocate()` call, so the cost is just the time
6136 that the memory algorithm associated with the managed memory segment
6137 needs to allocate the data (see the sketch after this list).
6138
6139 * [*Implicit]: For example, you are using `boost::interprocess::allocator<...>` with
6140 [*Boost.Interprocess] containers. This allocator calls the same `MemoryAlgorithm::allocate()`
6141 function as the explicit method, [*every] time a vector/string has to reallocate its
6142 buffer or [*every] time you insert an object in a node container.
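A minimal sketch of the [*Explicit] case mentioned above (segment name and sizes are illustrative):

[c++]

   #include <boost/interprocess/managed_shared_memory.hpp>

   using namespace boost::interprocess;

   int main()
   {
      managed_shared_memory segment(open_or_create, "MySharedMemory", 65536);

      //Explicit raw allocation: forwarded to MemoryAlgorithm::allocate()
      void *ptr = segment.allocate(100);

      //... use the 100 bytes ...

      segment.deallocate(ptr);
      shared_memory_object::remove("MySharedMemory");
      return 0;
   }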
6143
6144 If you see that memory allocation is a bottleneck in your application, you have
6145 these alternatives:
6146
6147 * If you use map/set associative containers, try using the `flat_map` family instead
6148 of the map family if you mainly do searches and the insertion/removal is mainly done
6149 in an initialization phase. The overhead now comes when the ordered vector has to
6150 reallocate its storage and move data. You can also call the `reserve()` method
6151 of these containers when you know beforehand how much data you will insert.
6152 However, in these containers iterators are invalidated by insertions, so this
6153 substitution is only effective in some applications.
6154
6155 * Use a [*Boost.Interprocess] pooled allocator for node containers, because pooled
6156 allocators call `allocate()` only when the pool runs out of nodes. This is pretty
6157 efficient (much more than the current default general-purpose algorithm) and this
6158 can save a lot of memory. See
6159 [link interprocess.allocators_containers.stl_allocators_segregated_storage Segregated storage node allocators] and
6160 [link interprocess.allocators_containers.stl_allocators_adaptive Adaptive node allocators] for more information.
6161
6162 * Write your own memory algorithm. If you have experience with memory allocation algorithms
6163 and you think another algorithm is better suited than the default one for your application,
6164 you can specify it in all [*Boost.Interprocess] managed memory segments. See the section
6165 [link interprocess.customizing_interprocess.custom_interprocess_alloc Writing a new shared memory allocation algorithm]
6166 to know how to do this. If you think it's better than the default one for general-purpose
6167 applications, be polite and donate it to [*Boost.Interprocess] to make it the default!
6168
6169 [endsect]
6170
6171 [section:performance_named_allocation Performance of named allocations]
6172
6173 [*Boost.Interprocess] allows the same parallelism as two threads writing to a common
6174 structure, except when the user creates/searches named/unique objects. The steps
6175 when creating a named object are these:
6176
6177 * Lock a recursive mutex (so that you can make named allocations inside
6178 the constructor of the object to be created).
6179
6180 * Try to insert the [name pointer, object information] in the name/object index.
6181 This lookup has to assure that the name has not been used before.
6182 This is achieved by calling the `insert()` function of the index. So the time this
6183 requires depends on the index type (ordered vector, tree, hash...).
6184 This can require a call to the memory algorithm allocation function if
6185 the index has to be reallocated, if it's node based, if it uses pooled allocations...
6186
6187 * Allocate a single buffer to hold the name of the object, the object itself,
6188 and meta-data for destruction (number of objects, etc...).
6189
6190 * Call the constructors of the object being created. If it's an array, one
6191 constructor call per array element.
6192
6193 * Unlock the recursive mutex.
6194
6195 The steps when destroying a named object using the name of the object
6196 (`destroy<T>(name)`) are these:
6197
6198 * Lock a recursive mutex.
6199
6200 * Search the index for the entry associated with that name. Copy that information and
6201 erase the index entry. This is done using `find(const key_type &)` and `erase(iterator)`
6202 members of the index. This can require element reordering if the index is a
6203 balanced tree, an ordered vector...
6204
6205 * Call the destructor of the object (many if it's an array).
6206
6207 * Deallocate the memory buffer containing the name, metadata and the object itself
6208 using the allocation algorithm.
6209
6210 * Unlock the recursive mutex.
6211
6212 The steps when destroying a named object using the pointer of the object
6213 (`destroy_ptr(T *ptr)`) are these:
6214
6215 * Lock a recursive mutex.
6216
6217 * Depending on the index type, this can be different:
6218
6219 * If the index is a node index, (marked with `boost::interprocess::is_node_index`
6220 specialization): Take the iterator stored near the object and call
6221 `erase(iterator)`. This can require element reordering if the index is a
6222 balanced tree, an ordered vector...
6223
6224 * If it's not a node index: Take the name stored near the object and erase
6225 the index entry calling `erase(const key &)`. This can require element reordering
6226 if the index is a balanced tree, an ordered vector...
6227
6228 * Call the destructor of the object (many if it's an array).
6229
6230 * Deallocate the memory buffer containing the name, metadata and the object itself
6231 using the allocation algorithm.
6232
6233 * Unlock the recursive mutex.
6234
6235 If you see that the performance is not good enough you have these alternatives:
6236
6237 * Maybe the problem is that the lock is held for too long and this hurts parallelism.
6238 Try to reduce the number of named objects in the global index and, if your
6239 application serves several clients, try to build a new managed memory segment
6240 for each one instead of using a common one.
6241
6242 * Use another [*Boost.Interprocess] index type if you feel the default one is
6243 not fast enough. If you are still not satisfied, write your own index type. See
6244 [link interprocess.customizing_interprocess.custom_indexes Building custom indexes] for this.
6245
6246 * Destruction via pointer is at least as fast as using the name of the object and
6247 can be faster (in node containers, for example). So if your problem is that you
6248 make a lot of named destructions, try to use the pointer. If the index is a
6249 node index you can save some time.
6250
6251 [endsect]
6252
6253 [endsect]
6254
6255 [endsect]
6256
6257 [section:customizing_interprocess Customizing Boost.Interprocess]
6258
6259 [section:custom_interprocess_alloc Writing a new shared memory allocation algorithm]
6260
6261 If the default algorithm does not satisfy user requirements,
6262 it's easy to provide different algorithms like bitmapping or
6263 more advanced segregated lists to meet requirements. The class implementing
6264 the algorithm must be compatible with shared memory, so it shouldn't have any
6265 virtual functions or virtual inheritance, nor
6266 any indirect base class with virtual functions or virtual inheritance.
6267
6268 This is the interface to be implemented:
6269
6270 [c++]
6271
6272 class my_algorithm
6273 {
6274 public:
6275
6276 //!The mutex type to be used by the rest of Interprocess framework
6277 typedef implementation_defined mutex_family;
6278
6279 //!The pointer type to be used by the rest of Interprocess framework
6280 typedef implementation_defined void_pointer;
6281
6282 //!Constructor. "size" is the total size of the managed memory segment,
6283 //!"extra_hdr_bytes" indicates the extra bytes after the sizeof(my_algorithm)
6284 //!that the allocator should not use at all.
6285 my_algorithm (std::size_t size, std::size_t extra_hdr_bytes);
6286
6287 //!Obtains the minimum size needed by the algorithm
6288 static std::size_t get_min_size (std::size_t extra_hdr_bytes);
6289
6290 //!Allocates bytes, returns 0 if there is no more memory
6291 void* allocate (std::size_t nbytes);
6292
6293 //!Deallocates previously allocated bytes
6294 void deallocate (void *adr);
6295
6296 //!Returns the size of the memory segment
6297 std::size_t get_size() const;
6298
6299 //!Increases managed memory in extra_size bytes more
6300 void grow(std::size_t extra_size);
6301 /*...*/
6302 };
6303
6304 Let's see the public typedefs to define:
6305
6306 [c++]
6307
6308 typedef /* . . . */ void_pointer;
6309 typedef /* . . . */ mutex_family;
6310
6311 The `void_pointer` typedef specifies the pointer type to be used in
6312 the [*Boost.Interprocess] framework that uses the algorithm. For example, if we define
6313
6314 [c++]
6315
6316 typedef void * void_pointer;
6317
6318 the whole [*Boost.Interprocess] framework using this algorithm will use raw pointers as members.
6319 But if we define:
6320
6321 [c++]
6322
6323 typedef offset_ptr<void> void_pointer;
6324
6325 then the whole [*Boost.Interprocess] framework will use relative pointers.
6326
6327 The `mutex_family` is a structure containing typedefs
6328 for different interprocess_mutex types to be used in the [*Boost.Interprocess]
6329 framework. For example, the following definition
6330
6331 [c++]
6332
6333 struct mutex_family
6334 {
6335 typedef boost::interprocess::interprocess_mutex mutex_type;
6336 typedef boost::interprocess::interprocess_recursive_mutex recursive_mutex_type;
6337 };
6338
6339 defines all mutex types using the `boost::interprocess` process-shared mutex types.
6340 The user can specify the desired mutex family.
6341
6342 [c++]
6343
6344 typedef mutex_family mutex_family;
6345
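If, for example, the managed segment is only going to be accessed from a single
thread, a family built from `boost::interprocess::null_mutex` (whose lock
operations do nothing) removes all locking overhead. A minimal sketch:

[c++]

    #include <boost/interprocess/sync/null_mutex.hpp>

    //A no-op mutex family: useful when the managed segment is accessed
    //from a single thread, so no synchronization is needed
    struct my_null_mutex_family
    {
       typedef boost::interprocess::null_mutex mutex_type;
       typedef boost::interprocess::null_mutex recursive_mutex_type;
    };
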
6346 The new algorithm (let's call it *my_algorithm*) must implement all the functions
6347 that boost::interprocess::rbtree_best_fit class offers:
6348
6349 * [*my_algorithm]'s constructor must take 2 arguments:
6350 * [*size] indicates the total size of the managed memory segment, and the
6351 [*my_algorithm] object will always be constructed at offset 0
6352 of the memory segment.
6353
6354 * The [*extra_hdr_bytes] parameter indicates the number of bytes after
6355 the offset `sizeof(my_algorithm)` that [*my_algorithm] can't use at all. These extra
6356 bytes will be used to store additional data that should not be overwritten.
6357 So, [*my_algorithm] will be placed at address XXX of the memory segment, and will
6358 manage the [*[XXX + sizeof(my_algorithm) + extra_hdr_bytes, XXX + size)] range of
6359 the segment.
6360
6361 * The [*get_min_size()] function should return the minimum space the algorithm
6362 needs to be valid with the passed [*extra_hdr_bytes] parameter. This function will be used
6363 to check if the memory segment is big enough to place the algorithm there (a sketch follows this list).
6364
6365 * The [*allocate()] function must return 0 if there is no more available memory.
6366 The memory returned by [*my_algorithm]
6367 must be aligned to the most restrictive memory alignment of the system.
6368 This function should be executed with the synchronization capabilities offered
6369 by `typename mutex_family::mutex_type` interprocess_mutex. That means, that if we define
6370 `typedef mutex_family mutex_family;` then this function should offer
6371 the same synchronization as if it was surrounded by an interprocess_mutex lock/unlock.
6372 Normally, this is implemented using a member of type `mutex_family::mutex_type`, but
6373 it could be done using atomic instructions or lock free algorithms.
6374
6375 * The [*deallocate()] function must make the returned buffer available for new
6376 allocations. This function should offer the same synchronization as `allocate()`.
6377
6378 * The [*get_size()] function will return the passed [*size] parameter in the constructor.
6379 So, [*my_algorithm] should store the size internally.
6380
6381 * The [*grow()] function will expand the memory managed by [*my_algorithm] by [*extra_size]
6382 bytes. So the [*get_size()] function should return the updated size,
6383 and the new managed memory range will be (if the address where the algorithm is
6384 constructed is XXX): [*[XXX + sizeof(my_algorithm) + extra_hdr_bytes, XXX + old_size + extra_size)].
6385 This function should offer the same synchronization as `allocate()`.
6386
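As a sketch of the size-related requirements (illustrative only: `MinPayloadBytes`
is an invented constant and is not part of the required interface),
[*get_min_size()] could be implemented like this:

[c++]

    #include <cstddef>

    class my_algorithm
    {
       //Hypothetical minimum payload the algorithm needs to work
       static const std::size_t MinPayloadBytes = 64;

       public:
       //The algorithm's own header, the reserved extra header bytes and
       //some minimum payload must fit in the segment
       static std::size_t get_min_size (std::size_t extra_hdr_bytes)
       {  return sizeof(my_algorithm) + extra_hdr_bytes + MinPayloadBytes;  }
       /*...*/
    };
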
6387 That's it. Now we can create new managed shared memory that uses our new algorithm:
6388
6389 [c++]
6390
6391 //Managed memory segment to allocate named (c-string) objects
6392 //using a user-defined memory allocation algorithm
6393 typedef basic_managed_shared_memory
6394 <char, my_algorithm
6395 ,flat_map_index>
6396 my_managed_shared_memory;
6397
6398 [endsect]
6399
6400 [section:custom_allocators Building custom STL compatible allocators for Boost.Interprocess]
6401
6402 If the provided STL-like allocators don't satisfy user needs, the user
6403 can implement another STL compatible allocator using raw memory allocation
6404 and named object construction functions.
6405 In this way, the user can implement more suitable allocation
6406 schemes on top of basic shared memory allocation schemes,
6407 just like more complex allocators are built on top of
6408 new/delete functions.
6409
6410 When using a managed memory segment, the [*get_segment_manager()]
6411 function returns a pointer to the segment manager. With this pointer,
6412 the raw memory allocation and named object construction functions can be
6413 called directly:
6414
6415 [c++]
6416
6417 //Create the managed shared memory and initialize resources
6418 managed_shared_memory segment
6419 (create_only
6420 ,"/MySharedMemory" //segment name
6421 ,65536); //segment size in bytes
6422
6423 //Obtain the segment manager
6424 managed_shared_memory::segment_manager *segment_mngr
6425 = segment.get_segment_manager();
6426
6427 //With the segment manager, now we have access to all allocation functions
6428 segment_mngr->deallocate(segment_mngr->allocate(32));
6429 segment_mngr->construct<int>("My_Int")[32](0);
6430 segment_mngr->destroy<int>("My_Int");
6431
6432 //Initialize the custom, managed memory segment compatible
6433 //allocator with the segment manager.
6434 //
6435 //MySTLAllocator uses segment_mngr->xxx functions to
6436 //implement its allocation scheme
6437 MySTLAllocator<int> stl_alloc(segment_mngr);
6438
6439 //Alias a new vector type that uses the custom STL compatible allocator
6440 typedef std::vector<int, MySTLAllocator<int> > MyVect;
6441
6442 //Construct the vector in shared memory with the allocator as constructor parameter
6443 segment.construct<MyVect>("MyVect_instance")(stl_alloc);
6444
6445 The user can create new STL compatible allocators that use the segment manager to access
6446 all memory management/object construction functions. All [*Boost.Interprocess]' STL
6447 compatible allocators are based on this approach. [*Remember] that to be compatible with
6448 managed memory segments, allocators should define their *pointer* typedef as the same
6449 pointer family as `segment_manager::void_pointer` typedef. This means that if `segment_manager::void_pointer` is
6450 `offset_ptr<void>`, `MySTLAllocator<int>` should define `pointer` as `offset_ptr<int>`. The
6451 reason for this is that allocators are members of containers, and if we want to put
6452 the container in a managed memory segment, the allocator should be ready for that.
6453
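Below is a minimal sketch of such an allocator. It is hypothetical and heavily
simplified (rebind, comparison operators and the remaining standard typedefs are
omitted; it is not the implementation of the library's allocators), but it shows
the two key points: raw memory is obtained from the segment manager and `pointer`
belongs to the same pointer family as `segment_manager::void_pointer`:

[c++]

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/offset_ptr.hpp>
    #include <cstddef>

    template <class T>
    class MySTLAllocator
    {
       public:
       typedef boost::interprocess::managed_shared_memory::segment_manager
          segment_manager;
       typedef T                                  value_type;
       //Same pointer family as segment_manager::void_pointer (offset_ptr<void>)
       typedef boost::interprocess::offset_ptr<T> pointer;
       typedef std::size_t                        size_type;

       MySTLAllocator(segment_manager *mngr)
          : mp_mngr(mngr)
       {}

       template <class U>
       MySTLAllocator(const MySTLAllocator<U> &other)
          : mp_mngr(other.mp_mngr)
       {}

       pointer allocate(size_type n)
       {  return pointer(static_cast<T*>(mp_mngr->allocate(n*sizeof(T))));  }

       void deallocate(const pointer &p, size_type)
       {  mp_mngr->deallocate(p.get());  }

       //Stored as offset_ptr so the allocator itself can be placed
       //in the managed segment (e.g. inside a container)
       boost::interprocess::offset_ptr<segment_manager> mp_mngr;
    };
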
6454 [endsect]
6455
6456 [section:custom_indexes Building custom indexes]
6457
6458 The managed memory segment uses a name/object index to
6459 speed up object searching and creation. Default specializations of
6460 managed memory segments (`managed_shared_memory` for example),
6461 use `boost::interprocess::flat_map` as index.
6462
6463 However, the index type can be chosen via template parameter, so that
6464 the user can define their own index type if needed. To construct
6465 a new index type, the user must create a class with the following guidelines:
6466
6467 * The interface of the index must follow the common public interface of std::map
6468 and std::tr1::unordered_map including public typedefs.
6469 The `value_type` typedef can be of type:
6470
6471 [c++]
6472
6473 std::pair<key_type, mapped_type>
6474
6475 or
6476
6477 [c++]
6478
6479 std::pair<const key_type, mapped_type>
6480
6481
6482 so that ordered arrays or deques can be used as index types.
6483 Some known classes following this basic interface are `boost::unordered_map`,
6484 `boost::interprocess::flat_map` and `boost::interprocess::map`.
6485
6486 * The class must be a class template taking only a traits struct of this type:
6487
6488 [c++]
6489
6490 struct index_traits
6491 {
6492 typedef /*...*/ key_type;
6493 typedef /*...*/ mapped_type;
6494 typedef /*...*/ segment_manager;
6495 };
6496
6497 [c++]
6498
6499 template <class IndexTraits>
6500 class my_index_type;
6501
6502 The `key_type` typedef of the passed `index_traits` will be a specialization of the
6503 following class:
6504
6505 [c++]
6506
6507 //!The key of the named allocation information index. Stores a pointer
6508 //!to a null-terminated string and the length of the string to speed up sorting
6509 template<...>
6510 struct index_key
6511 {
6512 typedef /*...*/ char_type;
6513 typedef /*...*/ const_char_ptr_t;
6514
6515 //Pointer to the object's name (null terminated)
6516 const_char_ptr_t mp_str;
6517
6518 //Length of the name buffer (null NOT included)
6519 std::size_t m_len;
6520
6521 //!Constructor of the key
6522 index_key (const CharT *name, std::size_t length);
6523
6524 //!Less than function for index ordering
6525 bool operator < (const index_key & right) const;
6526
6527 //!Equal to function for index ordering
6528 bool operator == (const index_key & right) const;
6529 };
6530
6531 The `mapped_type` is not directly modified by the customized index but it is needed to
6532 define the index type. The *segment_manager* will be the type of the segment manager that
6533 will manage the index. `segment_manager` will define interesting internal types like
6534 `void_pointer` or `mutex_family`.
6535
6536 * The constructor of the customized index type must take a pointer to segment_manager
6537 as constructor argument:
6538
6539 [c++]
6540
6541 constructor(segment_manager *segment_mngr);
6542
6543 * The index must provide a memory reservation function, that optimizes the index if the
6544 user knows the number of elements to be inserted in the index:
6545
6546 [c++]
6547
6548 void reserve(std::size_t n);
6549
6550 For example, the index type `flat_map_index` based on `boost::interprocess::flat_map`
6551 is just defined as:
6552
6553 [import ../../../boost/interprocess/indexes/flat_map_index.hpp]
6554 [flat_map_index]
6555
6556
6557 If the user is defining a node container based index (a container whose iterators
6558 are not invalidated when inserting or erasing other elements), [*Boost.Interprocess] can
6559 optimize named object destruction when destructing via pointer. [*Boost.Interprocess] can
6560 store an iterator next to the object and instead of using the name of the object to erase
6561 the index entry, it uses the iterator, which is a faster operation. So if you are creating
6562 a new node container based index (for example, a tree), you should define a
6563 specialization of `boost::interprocess::is_node_index<...>` defined in
6564 `<boost/interprocess/detail/utilities.hpp>`:
6565
6566 [c++]
6567
6568 //!Trait classes to detect if an index is a node
6569 //!index. This allows more efficient operations
6570 //!when deallocating named objects.
6571 template<class MapConfig>
6572 struct is_node_index
6573 <my_index<MapConfig> >
6574 {
6575 static const bool value = true;
6576 };
6577
6578 Interprocess also defines other index types:
6579
6580 * [*boost::map_index] uses *boost::interprocess::map* as index type.
6581
6582 * [*boost::null_index], which uses a dummy index type if the user just needs
6583 anonymous allocations and wants to save some space and class instantiations.
6584
6585 Defining a new managed memory segment that uses the new index is easy. For
6586 example, a new managed shared memory that uses the new index:
6587
6588 [c++]
6589
6590 //!Defines a managed shared memory with c-strings as
6591 //!keys, the red-black tree best fit algorithm (with process-shared mutexes
6592 //!and offset_ptr pointers) as raw shared memory management algorithm
6593 //!and a custom index
6594 typedef
6595 basic_managed_shared_memory <
6596 char,
6597 rbtree_best_fit<mutex_family>,
6598 my_index_type
6599 >
6600 my_managed_shared_memory;
6601
6602 [endsect]
6603
6604 [endsect]
6605
6606
6607 [section:acknowledgements_notes Acknowledgements, notes and links]
6608
6609 [section:notes_windows Notes for Windows users]
6610
6611 [section:notes_windows_com_init COM Initialization]
6612
6613 [*Boost.Interprocess] uses the Windows COM library to implement some features and initializes
6614 it with concurrency model `COINIT_APARTMENTTHREADED`.
6615 If the COM library was already initialized by the calling thread for another concurrency model, [*Boost.Interprocess]
6616 handles this gracefully and uses COM calls for the already initialized model. If, for some reason, you
6617 want [*Boost.Interprocess] to initialize the COM library with another model, define the macro
6618 `BOOST_INTERPROCESS_WINDOWS_COINIT_MODEL` as one of these values before including [*Boost.Interprocess]:
6619
6620 * `COINIT_APARTMENTTHREADED_BIPC`
6621 * `COINIT_MULTITHREADED_BIPC`
6622 * `COINIT_DISABLE_OLE1DDE_BIPC`
6623 * `COINIT_SPEED_OVER_MEMORY_BIPC`
6624
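For example, a minimal sketch selecting the multithreaded model (any
[*Boost.Interprocess] header can play the role of the first include):

[c++]

    //Select the COM concurrency model before the first Boost.Interprocess include
    #define BOOST_INTERPROCESS_WINDOWS_COINIT_MODEL COINIT_MULTITHREADED_BIPC
    #include <boost/interprocess/managed_shared_memory.hpp>
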
6625 [endsect]
6626
6627 [section:notes_windows_shm_folder Shared memory emulation folder]
6628
6629 Shared memory (`shared_memory_object`) is implemented in Windows using memory mapped files, placed in a
6630 shared directory in the shared documents folder (`SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Common AppData`).
6631 This directory name is the last bootup time, obtained via COM calls (if `BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME` is defined)
6632 or by searching the system log for a startup event (the default implementation), so that on each bootup shared memory is created in a new
6633 folder, obtaining kernel-persistence shared memory.
6634
6635 If using `BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME`, due to COM implementation related errors,
6636 in Boost 1.48 & Boost 1.49 the bootup-time folder was dropped and files
6637 were directly created in the shared documents folder, reverting to filesystem-persistence shared memory. Boost 1.50 fixed those issues
6638 and recovered the bootup time directory and kernel persistence. If you need to reproduce Boost 1.48 & Boost 1.49 behaviour to communicate
6639 with applications compiled with those versions, comment out the `#define BOOST_INTERPROCESS_HAS_KERNEL_BOOTTIME` directive
6640 in the Windows configuration part of `boost/interprocess/detail/workaround.hpp`.
6641
6642 If using the default implementation (`BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME` undefined) and the Startup Event is not
6643 found, this might be due to some buggy software that floods or erases the event log.
6644
6645 In any error case (the shared documents folder is not defined or the bootup time could not be obtained), the library throws an error. You still
6646 can use [*Boost.Interprocess] by defining your own directory as the shared directory. Just define `BOOST_INTERPROCESS_SHARED_DIR_PATH`
6647 when using the library and that path will be used to place shared memory files.
6648
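For example, a minimal sketch (the directory shown is hypothetical):

[c++]

    //Place shared memory emulation files in a fixed, user-controlled directory
    #define BOOST_INTERPROCESS_SHARED_DIR_PATH "C:/MyAppSharedDir"
    #include <boost/interprocess/shared_memory_object.hpp>
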
6649 [endsect]
6650
6651 [section:boost_use_windows_h BOOST_USE_WINDOWS_H support]
6652
6653 If `BOOST_USE_WINDOWS_H` is defined, `<windows.h>` and other Windows SDK files are included,
6654 otherwise the library declares the needed functions and structures to reduce the impact of including
6655 those heavy headers.
6656
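For example, a minimal sketch:

[c++]

    //Use the real Windows SDK headers instead of the library's own declarations
    #define BOOST_USE_WINDOWS_H
    #include <boost/interprocess/managed_shared_memory.hpp>
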
6657 [endsect]
6658
6659 [endsect]
6660
6661 [section:notes_linux Notes for Linux users]
6662
6663 [section:notes_linux_shm_folder Shared memory emulation folder]
6664
6665 On systems without POSIX shared memory support, shared memory objects are implemented as memory mapped files, using a directory
6666 placed in "/tmp" that can include (if `BOOST_INTERPROCESS_HAS_KERNEL_BOOTTIME` is defined) the last bootup time (if the OS supports it).
6667 As in Windows, if any error occurs while obtaining this directory, the library throws an error. You still can use
6668 [*Boost.Interprocess] by defining your own directory as the shared directory. Just define `BOOST_INTERPROCESS_SHARED_DIR_PATH`
6669 when using the library and that path will be used to place shared memory files.
6670
6671 [endsect]
6672
6673 [section:notes_linux_overcommit Overcommit]
6674
6675 The committed address space is the total amount of virtual memory (swap or physical memory/RAM) that the kernel might have to supply
6676 if all applications decide to access all of the memory they've requested from the kernel.
6677 By default, Linux allows processes to commit more virtual memory than available in the system. If that memory is not
6678 accessed, no physical memory + swap is actually used.
6679
6680 The reason for this behaviour is that Linux tries to optimize memory usage on forked processes; fork() creates a full copy of
6681 the process space, but with overcommitted memory, in this new forked instance only pages which have been written to actually need
6682 to be allocated by the kernel. If applications access more memory than available, then the kernel must free memory in the hard way:
6683 the OOM (Out Of Memory)-killer picks some processes to kill in order to recover memory.
6684
6685 [*Boost.Interprocess] has no way to change this behaviour and users might suffer the OOM-killer when accessing shared memory.
6686 According to the [@http://www.kernel.org/doc/Documentation/vm/overcommit-accounting Kernel documentation], the
6687 Linux kernel supports several overcommit modes. If you need non-kill guarantees in your application, you should
6688 change this overcommit behaviour.
6689
6690 [endsect]
6691
6692 [endsect]
6693
6694 [section:thanks_to Thanks to...]
6695
6696 Many people have contributed with ideas and revisions, so this is the place to
6697 thank them:
6698
6699 * Thanks to all people who have shown interest in the library and have downloaded
6700 and tested the snapshots.
6701
6702 * Thanks to [*Francis Andre] and [*Anders Hybertz] for their ideas and suggestions.
6703 Many of them are not implemented yet but I hope to include them when the library gets some stability.
6704
6705 * Thanks to [*Matt Doyle], [*Steve LoBasso], [*Glenn Schrader], [*Hiang Swee Chiang],
6706 [*Phil Endecott], [*Rene Rivera],
6707 [*Harold Pirtle], [*Paul Ryan],
6708 [*Shumin Wu], [*Michal Wozniak], [*Peter Johnson],
6709 [*Alex Ott], [*Shane Guillory], [*Steven Wooding]
6710 and [*Kim Barrett] for their bug fixes and library testing.
6711
6712 * Thanks to [*Martin Adrian] who suggested the use of Interprocess framework for user defined buffers.
6713
6714 * Thanks to [*Synge Todo] for his boostbook-doxygen patch to improve Interprocess documentation.
6715
6716 * Thanks to [*Olaf Krzikalla] for his Intrusive library. I have taken some ideas to
6717 improve red black tree implementation from his library.
6718
6719 * Thanks to [*Daniel James] for his unordered_map/set family and his help with allocators.
6720 His great unordered implementation has been a reference to design exception safe containers.
6721
6722 * Thanks to [*Howard Hinnant] for his amazing help, specially explaining allocator swapping,
6723 move semantics and for developing upgradable mutex and lock transfer features.
6724
6725 * Thanks to [*Pavel Vozenilek] for his continuous review process, suggestions, code and
6726 help. He is the major supporter of Interprocess library. The library has grown with his
6727 many and great advices.
6728
6729 * And finally, thank you to all Boosters. [*Long live to C++!]
6730
6731 [endsect]
6732
6733 [section:release_notes Release Notes]
6734
6735 [section:release_notes_boost_1_62_00 Boost 1.62 Release]
6736 * Fixed bugs:
6737 * [@https://github.com/boostorg/interprocess/pull/27 GitHub Pull #27 (['"Fix undefined behavior"])].
6738
6739 [endsect]
6740
6741 [section:release_notes_boost_1_61_00 Boost 1.61 Release]
6742 * Fixed bugs:
6743 * [@https://github.com/boostorg/interprocess/pull/23 GitHub Pull #23 (['"Fixed case sensetive for linux mingw"])].
6744
6745 [endsect]
6746
6747 [section:release_notes_boost_1_60_00 Boost 1.60 Release]
6748 * Improved [classref boost::interprocess::offset_ptr offset_ptr] performance and removed any undefined behaviour. No
6749 special cases needed for different compilers.
6750 * Fixed bugs:
6751 * [@https://svn.boost.org/trac/boost/ticket/11699 Trac #11699 (['"Forward declarations of std templates causes stack corruption under Visual Studio 2015"])].
6752
6753 [endsect]
6754
6755 [section:release_notes_boost_1_59_00 Boost 1.59 Release]
6756 * Fixed bugs:
6757 * [@https://svn.boost.org/trac/boost/ticket/5139 Trac #5139 (['"Initial Stream Position in Boost.Interprocess.Vectorstream"])].
6758 * [@https://github.com/boostorg/interprocess/pull/19 GitHub Pull #19 (['"Fix exception visibility"])]. Thanks to Romain-Geissler.
6759
6760 [endsect]
6761
6762 [section:release_notes_boost_1_58_00 Boost 1.58 Release]
6763 * Reduced some compile-time dependencies. Updated to Boost.Container changes.
6764 * Fixed bugs:
6765 * [@https://github.com/boostorg/interprocess/pull/13 GitHub Pull #13 (['"haiku: we don't have XSI shared memory, so don't try to use it"])].
6766 Thanks to Jessica Hamilton.
6767
6768 [endsect]
6769
6770 [section:release_notes_boost_1_57_00 Boost 1.57 Release]
6771 * Removed `unique_ptr`, now forwards boost::interprocess::unique_ptr to the general purpose
6772 `boost::movelib::unique_ptr` class from [*Boost.Move]. This implementation is closer to the standard
6773 `std::unique_ptr` implementation and it's better maintained.
6774 * Fixed bugs:
6775 * [@https://svn.boost.org/trac/boost/ticket/10262 Trac #10262 (['"AIX 6.1 bug with variable definition hz"])].
6776 * [@https://svn.boost.org/trac/boost/ticket/10229 Trac #10229 (['"Compiling errors in interprocess\detail\os_file_functions.hpp"])].
6777 * [@https://svn.boost.org/trac/boost/ticket/10506 Trac #10506 (['"Infinite loop in create_or_open_file"])].
6778 * [@https://github.com/boostorg/interprocess/pull/11 GitHub Pull #11 (['"Compile fix for BOOST_USE_WINDOWS_H"])].
6779
6780 * Reorganized Doxygen marks to obtain a better header reference.
6781
6782 [endsect]
6783
6784 [section:release_notes_boost_1_56_00 Boost 1.56 Release]
6785 * Fixed bugs:
6786 * [@https://svn.boost.org/trac/boost/ticket/9221 Trac #9221 (['"message_queue deadlock on linux"])].
6787 * [@https://svn.boost.org/trac/boost/ticket/9226 Trac #9226 (['"On some computers, Common Appdata is empty in registry, so boost interprocess cannot work"])].
6788 * [@https://svn.boost.org/trac/boost/ticket/9262 Trac #9262 (['"windows_intermodule_singleton breaks when calling a Debug dll from a Release executable"])].
6789 * [@https://svn.boost.org/trac/boost/ticket/9284 Trac #9284 (['"WaitForSingleObject(mutex) must handle WAIT_ABANDONED"])].
6790 * [@https://svn.boost.org/trac/boost/ticket/9285 Trac #9285 (['"CreateMutex returns NULL if fails"])].
6791 * [@https://svn.boost.org/trac/boost/ticket/9288 Trac #9288 (['"timed_wait does not check if it has expired"])].
6792 * [@https://svn.boost.org/trac/boost/ticket/9408 Trac #9408 (['"Android does not support XSI_SHARED_MEMORY_OBJECTS"])].
6793 * [@https://svn.boost.org/trac/boost/ticket/9729 Trac #9729 (['"crash on managed_external_buffer object construction"])].
6794 * [@https://svn.boost.org/trac/boost/ticket/9767 Trac #9767 (['"bootstamp generation causes error in case of corrupt Windows Event Log"])].
6795 * [@https://svn.boost.org/trac/boost/ticket/9835 Trac #9835 (['"Boost Interprocess fails to compile with Android NDK GCC 4.8, -Werror=unused-variable"])].
6796 * [@https://svn.boost.org/trac/boost/ticket/9911 Trac #9911 (['"get_tmp_base_dir(...) failure"])].
6797 * [@https://svn.boost.org/trac/boost/ticket/9946 Trac #9946 (['"ret_ptr uninitialized in init_atomic_func, fini_atomic_func"])].
6798 * [@https://svn.boost.org/trac/boost/ticket/10011 Trac #10011 (['"segment_manager::find( unique_instance_t* ) fails to compile"])].
6799 * [@https://svn.boost.org/trac/boost/ticket/10021 Trac #10021 (['"Interprocess and BOOST_USE_WINDOWS_H"])].
6800 * [@https://svn.boost.org/trac/boost/ticket/10230 Trac #10230 (['"No Sleep in interprocess::winapi"])].
6801 * [@https://github.com/boostorg/interprocess/pull/2 GitHub Pull #2] (['"Provide support for the Cray C++ compiler. The Cray compiler defines __GNUC__"]).
6802 * [@https://github.com/boostorg/interprocess/pull/3 GitHub Pull #3] (['"Fix/mingw interprocess_exception throw in file_wrapper::priv_open_or_create"]).
6803
6804 * [*ABI breaking]: [@https://svn.boost.org/trac/boost/ticket/9221 #9221] showed
6805 that `BOOST_INTERPROCESS_MSG_QUEUE_CIRCULAR_INDEX` option of message queue,
6806 was completely broken so an ABI break was necessary to have a working implementation.
6807
6808 * Simplified, refactored and unified (timed_)lock code based on try_lock().
6809 There were several bugs when handling timeout expirations.
6810
6811 * Changed the implementation of condition variables' destructors to allow POSIX semantics
6812 (the condition variable can be destroyed after all waiting threads have been woken up).
6813
6814 * Added `BOOST_INTERPROCESS_SHARED_DIR_PATH` option to define the shared directory used to place shared memory objects
6815 when implemented as memory mapped files.
6816
6817 * Added support for `BOOST_USE_WINDOWS_H`. When this macro is defined Interprocess does not declare
6818 the used Windows API functions and types, includes all needed Windows SDK headers and uses types and
6819 functions declared by the Windows SDK.
6820
6821 * Added `get_size` to [classref boost::interprocess::windows_shared_memory windows_shared_memory].
6822
6823 [endsect]
6824
6825 [section:release_notes_boost_1_55_00 Boost 1.55 Release]
6826
6827 * Fixed bugs [@https://svn.boost.org/trac/boost/ticket/7156 #7156] (['"interprocess buffer streams leak memory on construction"]).
6828 [@https://svn.boost.org/trac/boost/ticket/7164 #7164] (['"Two potential bugs with b::int::vector of b::i::weak_ptr"]).
6829 [@https://svn.boost.org/trac/boost/ticket/7860 #7860] (['"smart_ptr's yield_k and spinlock utilities can improve spinlock-based sychronization primitives"]).
6830 [@https://svn.boost.org/trac/boost/ticket/8277 #8277] (['"docs for named_mutex erroneously refer to interprocess_mutex"]).
6831 [@https://svn.boost.org/trac/boost/ticket/8976 #8976] (['"shared_ptr fails to compile if used with a scoped_allocator"]).
6832 [@https://svn.boost.org/trac/boost/ticket/9008 #9008] (['"conditions variables fast enough only when opening a multiprocess browser"]).
6833 [@https://svn.boost.org/trac/boost/ticket/9065 #9065] (['"atomic_cas32 inline assembly wrong on ppc32"]).
6834 [@https://svn.boost.org/trac/boost/ticket/9073 #9073] (['"Conflict names 'realloc'"]).
6835
6836 [endsect]
6837
6838 [section:release_notes_boost_1_54_00 Boost 1.54 Release]
6839
6840 * Added support for platform-specific flags to mapped_region (ticket #8030).
6841 * Fixed bugs [@https://svn.boost.org/trac/boost/ticket/7484 #7484],
6842 [@https://svn.boost.org/trac/boost/ticket/7598 #7598],
6843 [@https://svn.boost.org/trac/boost/ticket/7682 #7682],
6844 [@https://svn.boost.org/trac/boost/ticket/7923 #7923],
6845 [@https://svn.boost.org/trac/boost/ticket/7924 #7924],
6846 [@https://svn.boost.org/trac/boost/ticket/7928 #7928],
6847 [@https://svn.boost.org/trac/boost/ticket/7936 #7936],
6848 [@https://svn.boost.org/trac/boost/ticket/8521 #8521],
6849 [@https://svn.boost.org/trac/boost/ticket/8595 #8595].
6850
6851 * [*ABI breaking]: Changed bootstamp function in Windows to use EventLog service start time
6852 as system bootup time. The previously used `LastBootupTime` from WMI was unstable with
6853 time synchronization and hibernation and unusable in practice. If you really need
6854 to obtain pre-Boost 1.54 behaviour, define `BOOST_INTERPROCESS_BOOTSTAMP_IS_LASTBOOTUPTIME`
6855 from command line or `detail/workaround.hpp`.
6856
6857 [endsect]
6858
6859 [section:release_notes_boost_1_53_00 Boost 1.53 Release]
6860
6861 * Fixed GCC -Wshadow warnings.
6862 * Experimental multiple allocation interface improved and changed again. Still unstable.
6863 * Replaced deprecated BOOST_NO_XXXX with newer BOOST_NO_CXX11_XXX macros.
6864 * [*ABI breaking]: changed node pool allocators internals for improved efficiency.
6865 * Fixed bug [@https://svn.boost.org/trac/boost/ticket/7795 #7795].
6866
6867 [endsect]
6868
6869 [section:release_notes_boost_1_52_00 Boost 1.52 Release]
6870
6871 * Added `shrink_by` and `advise` functions in `mapped_region`.
6872 * [*ABI breaking:] Reimplemented `message_queue` with a circular buffer index (the
6873 old behavior used an ordered array, leading to excessive copies). This
6874 should greatly increase performance but breaks ABI. The old behaviour/ABI can be restored by
6875 undefining the macro `BOOST_INTERPROCESS_MSG_QUEUE_CIRCULAR_INDEX` in `boost/interprocess/detail/workaround.hpp`.
6876 * Improved `message_queue` insertion time avoiding priority search for common cases
6877 (both array and circular buffer configurations).
6878 * Implemented `interprocess_sharable_mutex` and `interprocess_condition_any`.
6879 * Improved `offset_ptr` performance.
6880 * Added integer overflow checks.
6881
6882 [endsect]
6883
6884 [section:release_notes_boost_1_51_00 Boost 1.51 Release]
6885
6886 * Synchronous and asynchronous flushing for `mapped_region::flush`.
6887 * [*Source & ABI breaking]: Removed `get_offset` method from `mapped_region` as
6888 it has no practical utility and the `m_offset` member was not used for anything else.
6889 * [*Source & ABI breaking]: Removed `flush` from `managed_shared_memory`,
6890 as it is unspecified according to POSIX:
6891 [@http://pubs.opengroup.org/onlinepubs/009695399/functions/msync.html
6892 ['"The effect of msync() on a shared memory object or a typed memory object is unspecified"] ].
6893 * Fixed bug
6894 [@https://svn.boost.org/trac/boost/ticket/7152 #7152].
6895
6896 [endsect]
6897
6898 [section:release_notes_boost_1_50_00 Boost 1.50 Release]
6899
6900 * Fixed bugs
6901 [@https://svn.boost.org/trac/boost/ticket/3750 #3750],
6902 [@https://svn.boost.org/trac/boost/ticket/6727 #6727],
6903 [@https://svn.boost.org/trac/boost/ticket/6648 #6648].
6904
6905 * Shared memory in Windows has again kernel persistence: kernel bootstamp
6906 and WMI have received some fixes and optimizations. This causes incompatibility
6907 with Boost 1.48 and 1.49 but the user can comment out `#define BOOST_INTERPROCESS_HAS_KERNEL_BOOTTIME`
6908 in the Windows configuration part to get Boost 1.48 & Boost 1.49 behaviour.
6909
6910 [endsect]
6911
6912 [section:release_notes_boost_1_49_00 Boost 1.49 Release]
6913
6914 * Fixed bugs
6915 [@https://svn.boost.org/trac/boost/ticket/6531 #6531],
6916 [@https://svn.boost.org/trac/boost/ticket/6412 #6412],
6917 [@https://svn.boost.org/trac/boost/ticket/6398 #6398],
6918 [@https://svn.boost.org/trac/boost/ticket/6340 #6340],
6919 [@https://svn.boost.org/trac/boost/ticket/6319 #6319],
6920 [@https://svn.boost.org/trac/boost/ticket/6287 #6287],
6921 [@https://svn.boost.org/trac/boost/ticket/6265 #6265],
6922 [@https://svn.boost.org/trac/boost/ticket/6233 #6233],
6923 [@https://svn.boost.org/trac/boost/ticket/6147 #6147],
6924 [@https://svn.boost.org/trac/boost/ticket/6134 #6134],
6925 [@https://svn.boost.org/trac/boost/ticket/6058 #6058],
6926 [@https://svn.boost.org/trac/boost/ticket/6054 #6054],
6927 [@https://svn.boost.org/trac/boost/ticket/5772 #5772],
6928 [@https://svn.boost.org/trac/boost/ticket/5738 #5738],
6929 [@https://svn.boost.org/trac/boost/ticket/5622 #5622],
6930 [@https://svn.boost.org/trac/boost/ticket/5552 #5552],
6931 [@https://svn.boost.org/trac/boost/ticket/5518 #5518],
6932 [@https://svn.boost.org/trac/boost/ticket/4655 #4655],
6933 [@https://svn.boost.org/trac/boost/ticket/4452 #4452],
6934 [@https://svn.boost.org/trac/boost/ticket/4383 #4383],
6935 [@https://svn.boost.org/trac/boost/ticket/4297 #4297].
6936
6937 * Fixed timed functions in mutex implementations to fulfill POSIX requirements:
6938 ['Under no circumstance shall the function fail with a timeout if the mutex can be locked
6939 immediately. The validity of the abs_timeout parameter need not be checked if the mutex
6940 can be locked immediately.]
6941
6942 [endsect]
6943
6944 [section:release_notes_boost_1_48_00 Boost 1.48 Release]
6945
6946 * Fixed bugs
6947 [@https://svn.boost.org/trac/boost/ticket/2796 #2796],
6948 [@https://svn.boost.org/trac/boost/ticket/4031 #4031],
6949 [@https://svn.boost.org/trac/boost/ticket/4251 #4251],
6950 [@https://svn.boost.org/trac/boost/ticket/4452 #4452],
6951 [@https://svn.boost.org/trac/boost/ticket/4895 #4895],
6952 [@https://svn.boost.org/trac/boost/ticket/5077 #5077],
6953 [@https://svn.boost.org/trac/boost/ticket/5120 #5120],
6954 [@https://svn.boost.org/trac/boost/ticket/5123 #5123],
6955 [@https://svn.boost.org/trac/boost/ticket/5230 #5230],
6956 [@https://svn.boost.org/trac/boost/ticket/5197 #5197],
6957 [@https://svn.boost.org/trac/boost/ticket/5287 #5287],
6958 [@https://svn.boost.org/trac/boost/ticket/5294 #5294],
6959 [@https://svn.boost.org/trac/boost/ticket/5306 #5306],
6960 [@https://svn.boost.org/trac/boost/ticket/5308 #5308],
6961 [@https://svn.boost.org/trac/boost/ticket/5392 #5392],
6962 [@https://svn.boost.org/trac/boost/ticket/5409 #5409].
6963
6964 * Added support to customize offset_ptr and allow
6965 creating custom managed segments that might be shared between
6966 32 and 64 bit processes.
6967
6968 * Shared memory in Windows has again filesystem lifetime: the use of kernel bootstamp
6969 and WMI to get a reliable timestamp was causing a lot of trouble.
6970
6971 [endsect]
6972
6973 [section:release_notes_boost_1_46_00 Boost 1.46 Release]
6974
6975 * Fixed bugs
6976 [@https://svn.boost.org/trac/boost/ticket/4557 #4557],
6977 [@https://svn.boost.org/trac/boost/ticket/4979 #4979],
6978 [@https://svn.boost.org/trac/boost/ticket/4907 #4907],
6979 [@https://svn.boost.org/trac/boost/ticket/4895 #4895].
6980
6981 [endsect]
6982
6983 [section:release_notes_boost_1_45_00 Boost 1.45 Release]
6984
6985 * Fixed bugs
6986 [@https://svn.boost.org/trac/boost/ticket/1080 #1080],
6987 [@https://svn.boost.org/trac/boost/ticket/3284 #3284],
6988 [@https://svn.boost.org/trac/boost/ticket/3439 #3439],
6989 [@https://svn.boost.org/trac/boost/ticket/3448 #3448],
6990 [@https://svn.boost.org/trac/boost/ticket/3582 #3582],
6991 [@https://svn.boost.org/trac/boost/ticket/3682 #3682],
6992 [@https://svn.boost.org/trac/boost/ticket/3829 #3829],
6993 [@https://svn.boost.org/trac/boost/ticket/3846 #3846],
6994 [@https://svn.boost.org/trac/boost/ticket/3914 #3914],
6995 [@https://svn.boost.org/trac/boost/ticket/3947 #3947],
6996 [@https://svn.boost.org/trac/boost/ticket/3950 #3950],
6997 [@https://svn.boost.org/trac/boost/ticket/3951 #3951],
6998 [@https://svn.boost.org/trac/boost/ticket/3985 #3985],
6999 [@https://svn.boost.org/trac/boost/ticket/4010 #4010],
7000 [@https://svn.boost.org/trac/boost/ticket/4417 #4417],
7001 [@https://svn.boost.org/trac/boost/ticket/4019 #4019],
7002 [@https://svn.boost.org/trac/boost/ticket/4039 #4039],
7003 [@https://svn.boost.org/trac/boost/ticket/4218 #4218],
7004 [@https://svn.boost.org/trac/boost/ticket/4230 #4230],
7005 [@https://svn.boost.org/trac/boost/ticket/4250 #4250],
7006 [@https://svn.boost.org/trac/boost/ticket/4297 #4297],
7007 [@https://svn.boost.org/trac/boost/ticket/4350 #4350],
7008 [@https://svn.boost.org/trac/boost/ticket/4352 #4352],
7009 [@https://svn.boost.org/trac/boost/ticket/4426 #4426],
7010 [@https://svn.boost.org/trac/boost/ticket/4516 #4516],
7011 [@https://svn.boost.org/trac/boost/ticket/4524 #4524],
7012 [@https://svn.boost.org/trac/boost/ticket/4557 #4557],
7013 [@https://svn.boost.org/trac/boost/ticket/4606 #4606],
7014 [@https://svn.boost.org/trac/boost/ticket/4685 #4685],
7015 [@https://svn.boost.org/trac/boost/ticket/4694 #4694].
7016
7017 * Added support for standard rvalue reference move semantics
7018 (tested on GCC 4.5 and VC10).
7019
7020 * Permissions can be detailed for interprocess named resources.
7021
7022 * `mapped_region::flush` initiates disk flushing but does not guarantee it's completed
7023 when it returns, since it is not portable.
7024
7025 * FreeBSD and MacOS now use POSIX semaphores to implement named semaphores and mutexes.
7026
7027 [endsect]
7028
7029 [section:release_notes_boost_1_41_00 Boost 1.41 Release]
7030
7031 * Support for POSIX shared memory in Mac OS.
7032 * [*ABI breaking]: Generic `semaphore` and `named_semaphore` now implemented more efficiently with atomic operations.
7033 * More robust file opening in Windows platforms with active Anti-virus software.
7034
7035 [endsect]
7036
7037 [section:release_notes_boost_1_40_00 Boost 1.40 Release]
7038
7039 * Windows shared memory is created in the Shared Documents folder so that it can be shared
7040 between services and processes.
7041 * Fixed bugs
7042 [@https://svn.boost.org/trac/boost/ticket/2967 #2967],
7043 [@https://svn.boost.org/trac/boost/ticket/2973 #2973],
7044 [@https://svn.boost.org/trac/boost/ticket/2992 #2992],
7045 [@https://svn.boost.org/trac/boost/ticket/3138 #3138],
7046 [@https://svn.boost.org/trac/boost/ticket/3166 #3166],
7047 [@https://svn.boost.org/trac/boost/ticket/3205 #3205].
7048
7049 [endsect]
7050
7051 [section:release_notes_boost_1_39_00 Boost 1.39 Release]
7052
7053 * Added experimental `stable_vector` container.
7054 * `shared_memory_object::remove` now has POSIX `unlink` semantics and
7055 `file_mapping::remove` was added to obtain POSIX `unlink` semantics with mapped files.
7056 * Shared memory in Windows now has kernel lifetime instead of filesystem lifetime: shared
7057 memory will disappear when the system reboots.
7058 * Updated move semantics.
7059 * Fixed bugs
7060 [@https://svn.boost.org/trac/boost/ticket/2722 #2722],
7061 [@https://svn.boost.org/trac/boost/ticket/2729 #2729],
7062 [@https://svn.boost.org/trac/boost/ticket/2766 #2766],
7063 [@https://svn.boost.org/trac/boost/ticket/1390 #1390],
7064 [@https://svn.boost.org/trac/boost/ticket/2589 #2589].
7065
7066 [endsect]
7067
7068 [section:release_notes_boost_1_38_00 Boost 1.38 Release]
7069
7070 * Updated documentation to show rvalue-reference functions instead of emulation functions.
7071 * More non-copyable classes are now movable.
7072 * Move-constructor and assignments now leave moved object in default-constructed state
7073 instead of just swapping contents.
7074 * Several bugfixes (
7075 [@https://svn.boost.org/trac/boost/ticket/2391 #2391],
7076 [@https://svn.boost.org/trac/boost/ticket/2431 #2431],
7077 [@https://svn.boost.org/trac/boost/ticket/1390 #1390],
7078 [@https://svn.boost.org/trac/boost/ticket/2570 #2570],
7079 [@https://svn.boost.org/trac/boost/ticket/2528 #2528]).
7080
7081 [endsect]
7082
7083 [section:release_notes_boost_1_37_00 Boost 1.37 Release]
7084
7085 * Containers can now be used in recursive types.
7086 * Added `BOOST_INTERPROCESS_FORCE_GENERIC_EMULATION` macro option to force the use
7087 of generic emulation code for process-shared synchronization primitives instead of
7088 native POSIX functions.
7089 * Added placement insertion members to containers.
7090 * `boost::posix_time::pos_inf` value is now handled portably for timed functions.
7091 * Updated some function parameters from `iterator` to `const_iterator` in containers
7092 to keep up with the draft of the next standard.
7093 * Documentation fixes.
7094
7095 [endsect]
7096
7097 [section:release_notes_boost_1_36_00 Boost 1.36 Release]
7098
7099 * Added anonymous shared memory for UNIX systems.
7100 * Fixed erroneous `void` return types from `flat_map::erase()` functions.
7101 * Fixed missing move semantics on managed memory classes.
7102 * Added copy_on_write and open_read_only options for shared memory and mapped file managed classes.
7103 * [*ABI breaking]: Added to `mapped_region` the mode used to create it.
7104 * Corrected instantiation errors in void allocators.
7105 * `shared_ptr` is movable and supports aliasing.
7106
7107 [endsect]
7108
7109 [section:release_notes_boost_1_35_00 Boost 1.35 Release]
7110
7111 * Added auxiliary utilities to ease the definition and construction of
7112 [classref boost::interprocess::shared_ptr shared_ptr],
7113 [classref boost::interprocess::weak_ptr weak_ptr] and
7114 `unique_ptr`. Added explanations
7115 and examples of these smart pointers in the documentation.
7116
7117 * Optimized vector:
7118 * 1) Now works with raw pointers as much as possible when
7119 using allocators defining `pointer` as a smart pointer. This increases
7120 performance and improves compilation times.
7121 * 2) A bit of metaprogramming
7122 to avoid using move_iterator when the type has trivial copy constructor
7123 or assignment, improving performance.
7124 * 3) Replaced custom algorithms
7125 with standard ones to take advantage of optimized standard algorithms.
7126 * 4) Removed unused code.
7127
7128 * [*ABI breaking]: Containers don't derive from allocators, to avoid problems with allocators
7129 that might define virtual functions with the same names as container
7130 member functions. That would convert container functions into virtual functions
7131 and might disallow some of them if the return type does not lead to a covariant return.
7132 Allocators are now stored as base classes of internal structs.
7133
7134 * Implemented [classref boost::interprocess::named_mutex named_mutex] and
7135 [classref boost::interprocess::named_semaphore named_semaphore] with POSIX
7136 named semaphores in systems supporting that option.
7137 [classref boost::interprocess::named_condition named_condition] has been
7138 accordingly changed to support interoperability with
7139 [classref boost::interprocess::named_mutex named_mutex].
7140
7141 * Reduced template bloat for node and adaptive allocators extracting node
7142 implementation to a class that only depends on the memory algorithm, instead of
7143 the segment manager + node size + node number...
7144
7145 * Fixed bug in `mapped_region` in UNIX when mapping address was provided but
7146 the region was mapped in another address.
7147
7148 * Added `aligned_allocate` and `allocate_many` functions to managed memory segments.
7149
7150 * Improved documentation about managed memory segments.
7151
7152 * [*Boost.Interprocess] containers are now documented in the Reference section.
7153
7154 * Correction of typos and documentation errors.
7155
7156 * Added `get_instance_name`, `get_instance_length` and `get_instance_type` functions
7157 to managed memory segments.
7158
7159 * Corrected suboptimal buffer expansion bug in `rbtree_best_fit`.
7160
7161 * Added iteration of named and unique objects in a segment manager.
7162
7163 * Fixed leak in [classref boost::interprocess::vector vector].
7164
7165 * Added support for Solaris.
7166
7167 * Optimized [classref boost::interprocess::segment_manager segment_manager]
7168 to avoid code bloat associated with templated instantiations.
7169
7170 * Fixed bug for UNIX: No slash ('/') was being added as the first character
7171 for shared memory names, leading to errors in some UNIX systems.
7172
7173 * Fixed bug in VC-8.0: Broken function inlining in core offset_ptr functions.
7174
7175 * Code examples changed to use new BoostBook code import features.
7176
7177 * Added aligned memory allocation function to memory algorithms.
7178
7179 * Fixed bug in `deque::clear()` and `deque::erase()`: they were declared private.
7180
7181 * Fixed bug in `deque::erase()`. Thanks to Steve LoBasso.
7182
7183 * Fixed bug in `atomic_dec32()`. Thanks to Glenn Schrader.
7184
7185 * Improved (multi)map/(multi)set constructors taking iterators. Now those have
7186 linear time if the iterator range is already sorted.
7187
7188 * [*ABI breaking]: (multi)map/(multi)set now reduce their node size. The color
7189 bit is embedded in the parent pointer. Now, the size of a node is the size of
7190 3 pointers in most systems. This optimization is activated for raw and `offset_ptr`
7191 pointers.
7192
7193 * (multi)map/(multi)set now reuse memory from old nodes in the assignment operator.
7194
7195 * [*ABI breaking]: Implemented node-containers based on intrusive containers.
7196 This saves code size, since many instantiations share the same algorithms.
7197
7198 * Corrected code to be compilable with Visual C++ 8.0.
7199
7200 * Added function to zero free memory in memory algorithms and the segment manager.
7201 This function is useful for security reasons and to improve compression ratios
7202 for files created with `managed_mapped_file`.
7203
7204 * Added support for intrusive index types in managed memory segments.
7205 Intrusive indexes save extra memory allocations to allocate the index
7206 since with just one
7207 allocation, we allocate room for the value, the name and the hook to insert
7208 the object in the index.
7209
7210 * Created new index type: [*iset_index]. It's an index based on
7211 an intrusive set (rb-tree).
7212
7213 * Created new index type: [*iunordered_set_index]. It's an index
7214 based on a pseudo-intrusive unordered set (hash table).
7215
7216 * [*ABI breaking]: The intrusive index [*iset_index] is now the default
7217 index type.
7218
7219 * Optimized vector to take advantage of `boost::has_trivial_destructor`.
7220 This optimization avoids calling destructors of elements that have a trivial destructor.
7221
7222 * Optimized vector to take advantage of `has_trivial_destructor_after_move` trait.
7223 This optimization avoids calling destructors of elements that have a trivial destructor
7224 if the element has been moved (which is the case of many movable types). This trick
7225 was provided by Howard Hinnant.
7226
7227 * Added security check to avoid integer overflow bug in allocators and
7228 named construction functions.
7229
7230 * Added alignment checks to forward and backwards expansion functions.
7231
7232 * Fixed bug in atomic functions for PPC.
7233
7234 * Fixed race-condition error when creating and opening a managed segment.
7235
7236 * Added adaptive pools.
7237
7238 * [*Source breaking]: Changed node allocators' template parameter order
7239 to make them easier to use.
7240
7241 * Added support for native windows shared memory.
7242
7243 * Added more tests.
7244
7245 * Corrected the presence of private functions in the reference section.
7246
7247 * Added function (`deallocate_free_chunks()`) to manually deallocate completely free
7248 chunks from node allocators.
7249
7250 * Implemented N1780 proposal to LWG issue 233: ['Insertion hints in associative containers]
7251 in interprocess [classref boost::interprocess::multiset multiset] and
7252 [classref boost::interprocess::multimap multimap] classes.
7253
7254 * [*Source breaking]: A shared memory object is now used including
7255 `shared_memory_object.hpp` header instead of `shared_memory.hpp`.
7256
7257 * [*ABI breaking]: Changed global mutex when initializing managed shared memory
7258 and memory mapped files. This change tries to minimize deadlocks.
7259
7260 * [*Source breaking]: Changed shared memory, memory mapped files and mapped region's
7261 open mode to a single `mode_t` type.
7262
7263 * Added extra WIN32_LEAN_AND_MEAN before including DateTime headers to avoid socket
7264 redefinition errors when using Interprocess and Asio in windows.
7265
7266 * [*ABI breaking]: `mapped_region` constructor no longer requires classes
7267 derived from memory_mappable, but classes must fulfill the MemoryMappable concept.
7268
7269 * Added in-place reallocation capabilities to basic_string.
7270
7271 * [*ABI breaking]: Reimplemented and optimized small string optimization. The narrow
7272 string class has zero byte overhead with an internal 11 byte buffer in 32-bit systems!
7273
7274 * Added move semantics to containers. Improves
7275 performance when using containers of containers.
7276
7277 * [*ABI breaking]: End nodes of node containers (list, slist, map/set) are now
7278 embedded in the containers instead of allocated using the allocator. This
7279 allows no-throw move-constructors and improves performance.
7280
7281 * [*ABI breaking]: [*slist] and [*list] containers now have constant-time
7282 ['size()] function. The size of the container is added as a member.
7283
7284 [endsect]
7285
7286 [endsect]
7287
7288 [section:books_and_links Books and interesting links]
7289
7290 Some useful references about the C++ programming language, C++ internals,
7291 shared memory, allocators and containers used to design [*Boost.Interprocess].
7292
7293 [section:references_books Books]
7294
7295 * Great book about multithreading and POSIX: [*['"Programming with Posix Threads"]],
7296 [*David R. Butenhof]
7297
7298 * The UNIX inter-process bible: [*['"UNIX Network Programming, Volume 2: Interprocess Communications"]],
7299 [*W. Richard Stevens]
7300
7301 * Current STL allocator issues: [*['"Effective STL"]], [*Scott Meyers]
7302
7303 * My C++ bible: [*['"Thinking in C++, Volume 1 & 2"]], [*Bruce Eckel and Chuck Allison]
7304
7305 * The book every C++ programmer should read: [*['"Inside the C++ Object Model"]], [*Stanley B. Lippman]
7306
7307 * A must-read: [*['"ISO/IEC TR 18015: Technical Report on C++ Performance"]], [*ISO WG21-SC22 members.]
7308
7309 [endsect]
7310
7311 [section:references_links Links]
7312
7313 * A framework to put the STL in shared memory: [@http://allocator.sourceforge.net/ ['"A C++ Standard Allocator for the Standard Template Library"] ].
7314
7315 * Instantiating C++ objects in shared memory: [@http://www.cs.ubc.ca/local/reading/proceedings/cascon94/htm/english/abs/hon.htm ['"Using objects in shared memory for C++ application"] ].
7316
7317 * A shared memory allocator and relative pointer: [@http://home.earthlink.net/~joshwalker1/writing/SharedMemory.html ['"Taming Shared Memory"] ].
7318
7319 [endsect]
7320
7321 [endsect]
7322
7323 [section:future_improvements Future improvements...]
7324
7325 There are some Interprocess features that I would like to implement and some
7326 [*Boost.Interprocess] code that can be much better. Let's see some ideas:
7327
7328 [section:win32_sync Win32 synchronization is too basic]
7329
7330 The Win32 versions of shared mutexes and shared conditions are based on "spin and wait"
7331 atomic instructions. This leads to poor performance and does not manage issues
7332 like priority inversion. We would need very serious help from threading experts on
7333 this, and I'm not sure that this can be achieved in user-level software. POSIX-based
7334 implementations use the PTHREAD_PROCESS_SHARED attribute to place mutexes in shared memory,
7335 so there are no such problems. I'm not aware of any implementation that simulates
7336 PTHREAD_PROCESS_SHARED attribute for Win32. We should be able to construct these
7337 primitives in memory mapped files, so that we can get filesystem persistence just like
7338 with POSIX primitives.
7339
7340 [endsect]
7341
7342 [section:future_objectnames Use of wide character names on Boost.Interprocess basic resources]
7343
7344 Currently Interprocess only allows *char* based names for basic named
7345 objects. However, several operating systems use *wchar_t* names for resources
7346 (mapped files, for example).
7347 In the future Interprocess should try to present a portable narrow/wide char interface.
7348 To do this, it would be useful to have boost wstring <-> string conversion
7349 utilities to translate resource names (escaping needed characters
7350 that can conflict with OS names) in a portable way. The use of
7351 [*boost::filesystem] paths would also be interesting to avoid operating system specific issues.
7352
7353 [endsect]
7354
7355 [section:future_security Security attributes]
7356
7357 [*Boost.Interprocess] does not define security attributes for shared memory and
7358 synchronization objects. Standard C++ also ignores security attributes with files,
7359 so adding security attributes would require some serious work.
7360
7361 [endsect]
7362
7363 [section:future_ipc Future inter-process communications]
7364
7365 [*Boost.Interprocess] offers a process-shared message queue based on
7366 [*Boost.Interprocess] primitives like mutexes and conditions. I would want to
7367 develop more mechanisms, like a stream-oriented named fifo, so that we can use it
7368 with an iostream-interface wrapper (we can imitate Unix pipes).
7369
7370 C++ needs more complex mechanisms and it would be nice to have a stream and
7371 datagram oriented PF_UNIX-like mechanism in C++. And for very fast inter-process
7372 remote calls, Solaris doors are an interesting alternative to implement for C++.
7373 But the work to implement PF_UNIX-like sockets and doors would be huge
7374 (and it might be difficult in a user-level library). Any network expert volunteer?
7375
7376 [endsect]
7377
7378 [endsect]
7379
7380 [endsect]
7381
7382 [section:indexes_reference Indexes and Reference]
7383
7384 [section:index Indexes]
7385
7386 [include auto_index_helpers.qbk]
7387
7388 [named_index class_name Class Index]
7389 [named_index typedef_name Typedef Index]
7390 [named_index function_name Function Index]
7391 [/named_index macro_name Macro Index]
7392 [/index]
7393
7394 [endsect]
7395
7396 [xinclude autodoc.xml]
7397
7398 [endsect]