2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
8 Note: There may be an updated version of this malloc obtainable at
9 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10 Check before installing!
14 This library is all in one file to simplify the most common usage:
15 ftp it, compile it (-O3), and link it into another program. All of
16 the compile-time options default to reasonable values for use on
17 most platforms. You might later want to step through various
18 compile-time and dynamic tuning options.
20 For convenience, an include file for code using this malloc is at:
21 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22 You don't really need this .h file unless you call functions not
23 defined in your system include files. The .h file contains only the
24 excerpts from this file needed for using this malloc on ANSI C/C++
25 systems, so long as you haven't changed compile-time options about
26 naming and tuning parameters. If you do, then you can create your
27 own malloc.h that does include all settings by cutting at the point
28 indicated below. Note that you may already by default be using a C
29 library containing a malloc that is based on some version of this
30 malloc (for example in linux). You might still want to use the one
31 in this file to customize settings or to avoid overheads associated
32 with library versions.
36 Supported pointer/size_t representation: 4 or 8 bytes
37 size_t MUST be an unsigned type of the same width as
38 pointers. (If you are using an ancient system that declares
39 size_t as a signed type, or need it to be a different width
40 than pointers, you can use a previous release of this malloc
41 (e.g. 2.7.2) supporting these.)
43 Alignment: 8 bytes (minimum)
44 This suffices for nearly all current machines and C compilers.
45 However, you can define MALLOC_ALIGNMENT to be wider than this
46 if necessary (up to 128 bytes), at the expense of using more space.
48 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
49 8 or 16 bytes (if 8byte sizes)
50 Each malloced chunk has a hidden word of overhead holding size
51 and status information, and additional cross-check word
52 if FOOTERS is defined.
54 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
55 8-byte ptrs: 32 bytes (including overhead)
57 Even a request for zero bytes (i.e., malloc(0)) returns a
58 pointer to something of the minimum allocatable size.
59 The maximum overhead wastage (i.e., the number of extra bytes
60 allocated beyond what was requested in malloc) is less than or equal
61 to the minimum size, except for requests >= mmap_threshold that
62 are serviced via mmap(), where the worst case wastage is about
63 32 bytes plus the remainder from a system page (the minimal
64 mmap unit); typically 4096 or 8192 bytes.
66 Security: static-safe; optionally more or less
67 The "security" of malloc refers to the ability of malicious
68 code to accentuate the effects of errors (for example, freeing
69 space that is not currently malloc'ed or overwriting past the
70 ends of chunks) in code that calls malloc. This malloc
71 guarantees not to modify any memory locations below the base of
72 heap, i.e., static variables, even in the presence of usage
73 errors. The routines additionally detect most improper frees
74 and reallocs. All this holds as long as the static bookkeeping
75 for malloc itself is not corrupted by some other means. This
76 is only one aspect of security -- these checks do not, and
77 cannot, detect all possible programming errors.
79 If FOOTERS is defined nonzero, then each allocated chunk
80 carries an additional check word to verify that it was malloced
81 from its space. These check words are the same within each
82 execution of a program using malloc, but differ across
83 executions, so externally crafted fake chunks cannot be
84 freed. This improves security by rejecting frees/reallocs that
85 could corrupt heap memory, in addition to the checks preventing
86 writes to statics that are always on. This may further improve
87 security at the expense of time and space overhead. (Note that
88 FOOTERS may also be worth using with MSPACES.)
90 By default detected errors cause the program to abort (calling
91 "abort()"). You can override this to instead proceed past
92 errors by defining PROCEED_ON_ERROR. In this case, a bad free
93 has no effect, and a malloc that encounters a bad address
94 caused by user overwrites will ignore the bad address by
95 dropping pointers and indices to all known memory. This may
96 be appropriate for programs that should continue if at all
97 possible in the face of programming errors, although they may
98 run out of memory because dropped memory is never reclaimed.
100 If you don't like either of these options, you can define
101 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102 else. And if you are sure that your program using malloc has
103 no errors or vulnerabilities, you can define INSECURE to 1,
104 which might (or might not) provide a small performance improvement.
106 It is also possible to limit the maximum total allocatable
107 space, using malloc_set_footprint_limit. This is not
108 designed as a security feature in itself (calls to set limits
109 are not screened or privileged), but may be useful as one
110 aspect of a secure implementation.
112 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113 When USE_LOCKS is defined, each public call to malloc, free,
114 etc is surrounded with a lock. By default, this uses a plain
115 pthread mutex, win32 critical section, or a spin-lock if
116 available for the platform and not disabled by setting
117 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
118 recursive versions are used instead (which are not required for
119 base functionality but may be needed in layered extensions).
120 Using a global lock is not especially fast, and can be a major
121 bottleneck. It is designed only to provide minimal protection
122 in concurrent environments, and to provide a basis for
123 extensions. If you are using malloc in a concurrent program,
124 consider instead using nedmalloc
125 (http://www.nedprod.com/programs/portable/nedmalloc/) or
126 ptmalloc (See http://www.malloc.de), which are derived from
127 versions of this malloc.
129 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130 This malloc can use unix sbrk or any emulation (invoked using
131 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133 memory. On most unix systems, it tends to work best if both
134 MORECORE and MMAP are enabled. On Win32, it uses emulations
135 based on VirtualAlloc. It also uses common C library functions like memset.
138 Compliance: I believe it is compliant with the Single Unix Specification
139 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
142 * Overview of algorithms
144 This is not the fastest, most space-conserving, most portable, or
145 most tunable malloc ever written. However it is among the fastest
146 while also being among the most space-conserving, portable and
147 tunable. Consistent balance across these factors results in a good
148 general-purpose allocator for malloc-intensive programs.
150 In most ways, this malloc is a best-fit allocator. Generally, it
151 chooses the best-fitting existing chunk for a request, with ties
152 broken in approximately least-recently-used order. (This strategy
153 normally maintains low fragmentation.) However, for requests less
154 than 256 bytes, it deviates from best-fit when there is not an
155 exactly fitting available chunk by preferring to use space adjacent
156 to that used for the previous small request, as well as by breaking
157 ties in approximately most-recently-used order. (These enhance
158 locality of series of small allocations.) And for very large requests
159 (>= 256Kb by default), it relies on system memory mapping
160 facilities, if supported. (This helps avoid carrying around and
161 possibly fragmenting memory used only for large chunks.)
163 All operations (except malloc_stats and mallinfo) have execution
164 times that are bounded by a constant factor of the number of bits in
165 a size_t, not counting any clearing in calloc or copying in realloc,
166 or actions surrounding MORECORE and MMAP that have times
167 proportional to the number of non-contiguous regions returned by
168 system allocation routines, which is often just 1. In real-time
169 applications, you can optionally suppress segment traversals using
170 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171 system allocators return non-contiguous spaces, at the typical
172 expense of carrying around more memory and increased fragmentation.
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
177 but now, enough of them seem not to.
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
181 uses of negation on unsigned types. All known cases of each can be ignored.
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
#define myfree(mem) mspace_free(mymspace, mem)
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
207 return mspace_malloc(tlms, bytes);
}
209 void tlfree(void* mem) { mspace_free(tlms, mem); }
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
215 indicating its originating mspace, and frees are directed to their
216 originating spaces. Normally, this requires use of locks.
218 ------------------------- Compile-time options ---------------------------
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly cast. You can also
223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
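For example, a sketch of giving a numeric option size_t width with explicit casts (the 1MB value is only an illustration):
    #define DEFAULT_TRIM_THRESHOLD ((size_t)1U * (size_t)1024U * (size_t)1024U)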
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix. Beware that there seem to be some
228 cases where this malloc might not be a pure drop-in replacement for
229 Win32 malloc: Random-looking failures from Win32 GDI API's (e.g.,
230 SetDIBits()) may be due to bugs in some video driver implementations
231 when pixel buffers are malloc()ed, and the region spans more than
232 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233 default granularity, pixel buffers may straddle virtual allocation
234 regions more often than when using the Microsoft allocator. You can
235 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236 buffers rather than using malloc(). If this is not possible,
237 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239 conditions use _MSC_VER to distinguish them.
241 DLMALLOC_EXPORT default: extern
242 Defines how public APIs are declared. If you want to export via a
243 Windows DLL, you might define this as
244 #define DLMALLOC_EXPORT extern __declspec(dllexport)
245 If you want a POSIX ELF shared object, you might use
246 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
249 Controls the minimum alignment for malloc'ed chunks. It must be a
250 power of two and at least 8, even on machines for which smaller
251 alignments would suffice. It may be defined as larger than this
252 though. Note however that code and data structures are optimized for
253 the case of 8-byte alignment.
255 MSPACES default: 0 (false)
256 If true, compile in support for independent allocation spaces.
257 This is only supported if HAVE_MMAP is true.
259 ONLY_MSPACES default: 0 (false)
260 If true, only compile in mspace versions, not regular versions.
262 USE_LOCKS default: 0 (false)
263 Causes each call to each public routine to be surrounded with
264 pthread or WIN32 mutex lock/unlock. (If set true, this can be
265 overridden on a per-mspace basis for mspace versions.) If set to a
266 non-zero value other than 1, locks are used, but their
267 implementation is left out, so lock functions must be supplied manually, as described below.
270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
271 If true, uses custom spin locks for locking. This is currently
272 supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
273 MS compilers. Otherwise, posix locks or win32 critical sections are used.
276 USE_RECURSIVE_LOCKS default: not defined
277 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278 uses plain mutexes. This is not required for malloc proper, but may
279 be needed for layered allocators such as nedmalloc.
281 LOCK_AT_FORK default: not defined
282 If defined nonzero, performs pthread_atfork upon initialization
283 to initialize child lock while holding parent lock. The implementation
284 assumes that pthread locks (not custom locks) are being used. In other
285 cases, you may need to customize the implementation.
FOOTERS default: 0
288 If true, provide extra checking and dispatching by placing
289 information in the footers of allocated chunks. This adds
290 space and time overhead.
INSECURE default: 0
293 If true, omit checks for usage errors and heap space overwrites.
295 USE_DL_PREFIX default: NOT defined
296 Causes compiler to prefix all public routines with the string 'dl'.
297 This can be useful when you only want to use this malloc in one part
298 of a program, using your regular system malloc elsewhere.
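For example, a sketch assuming this file is compiled with -DUSE_DL_PREFIX, so the prefixed routines coexist with the system allocator:
    void* p = dlmalloc(100);   /* served by this allocator */
    void* q = malloc(100);     /* served by the system allocator */
    dlfree(p);
    free(q);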
300 MALLOC_INSPECT_ALL default: NOT defined
301 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302 perform traversal of all heap space. Unless access to these
303 functions is otherwise restricted, you probably do not want to
304 include them in secure implementations.
306 ABORT default: defined as abort()
307 Defines how to abort on failed checks. On most systems, a failed
308 check cannot die with an "assert" or even print an informative
309 message, because the underlying print routines in turn call malloc,
310 which will fail again. Generally, the best policy is to simply call
311 abort(). It's not very useful to do more than this because many
312 errors due to overwriting will show up as address faults (null, odd
313 addresses etc) rather than malloc-triggered checks, so will also
314 abort. Also, most compilers know that abort() does not return, so
315 can better optimize code conditionally calling it.
317 PROCEED_ON_ERROR default: defined as 0 (false)
318 Controls whether detected bad addresses are bypassed
319 rather than aborting. If set, detected bad arguments to free and
320 realloc are ignored. And all bookkeeping information is zeroed out
321 upon a detected overwrite of freed heap space, thus losing the
322 ability to ever return it from malloc again, but enabling the
323 application to proceed. If PROCEED_ON_ERROR is defined, the
324 static variable malloc_corruption_error_count is compiled in
325 and can be examined to see if errors have occurred. This option
326 generates slower code than the default abort policy.
328 DEBUG default: NOT defined
329 The DEBUG setting is mainly intended for people trying to modify
330 this code or diagnose problems when porting to new platforms.
331 However, it may also be able to better isolate user errors than just
332 using runtime checks. The assertions in the check routines spell
333 out in more detail the assumptions and invariants underlying the
334 algorithms. The checking is fairly extensive, and will slow down
335 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336 set will attempt to check every non-mmapped allocated and free chunk
337 in the course of computing the summaries.
339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
340 Debugging assertion failures can be nearly impossible if your
341 version of the assert macro causes malloc to be called, which will
342 lead to a cascade of further failures, blowing the runtime stack.
343 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
344 which will usually make debugging easier.
346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
347 The action to take before "return 0" when malloc is unable to
348 return memory because none is available.
350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
351 True if this system supports sbrk or an emulation of it.
353 MORECORE default: sbrk
354 The name of the sbrk-style system routine to call to obtain more
355 memory. See below for guidance on writing custom MORECORE
356 functions. The type of the argument to sbrk/MORECORE varies across
357 systems. It cannot be size_t, because it supports negative
358 arguments, so it is normally the signed type of the same width as
359 size_t (sometimes declared as "intptr_t"). It doesn't much matter
360 though. Internally, we only call it with arguments less than half
361 the max value of a size_t, which should work across all reasonable
362 possibilities, although sometimes generating compiler warnings.
364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
365 If true, take advantage of fact that consecutive calls to MORECORE
366 with positive arguments always return contiguous increasing
367 addresses. This is true of unix sbrk. It does not hurt too much to
368 set it true anyway, since malloc copes with non-contiguities.
369 Setting it false when definitely non-contiguous saves time
370 and possibly wasted space it would take to discover this though.
372 MORECORE_CANNOT_TRIM default: NOT defined
373 True if MORECORE cannot release space back to the system when given
374 negative arguments. This is generally necessary only if you are
375 using a hand-crafted MORECORE function that cannot handle negative arguments.
378 NO_SEGMENT_TRAVERSAL default: 0
379 If non-zero, suppresses traversals of memory segments
380 returned by either MORECORE or CALL_MMAP. This disables
381 merging of segments that are contiguous, and selectively
382 releasing them to the OS if unused, but bounds execution times.
384 HAVE_MMAP default: 1 (true)
385 True if this system supports mmap or an emulation of it. If so, and
386 HAVE_MORECORE is not true, MMAP is used for all system
387 allocation. If set and HAVE_MORECORE is true as well, MMAP is
388 primarily used to directly allocate very large blocks. It is also
389 used as a backup strategy in cases where MORECORE fails to provide
390 space from system. Note: A single call to MUNMAP is assumed to be
391 able to unmap memory that may have been allocated using multiple calls
392 to MMAP, so long as they are adjacent.
394 HAVE_MREMAP default: 1 on linux, else 0
395 If true realloc() uses mremap() to re-allocate large blocks and
396 extend or shrink allocation spaces.
398 MMAP_CLEARS default: 1 except on WINCE.
399 True if mmap clears memory so calloc doesn't need to. This is true
400 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
402 USE_BUILTIN_FFS default: 0 (i.e., not used)
403 Causes malloc to use the builtin ffs() function to compute indices.
404 Some compilers may recognize and intrinsify ffs to be faster than the
405 supplied C version. Also, the case of x86 using gcc is special-cased
406 to an asm instruction, so is already as fast as it can be, and so
407 this setting has no effect. Similarly for Win32 under recent MS compilers.
408 (On most x86s, the asm version is only slightly faster than the C version.)
410 malloc_getpagesize default: derive from system includes, or 4096.
411 The system page size. To the extent possible, this malloc manages
412 memory from the system in page-size units. This may be (and
413 usually is) a function rather than a constant. This is ignored
414 if WIN32, where page size is determined using getSystemInfo during initialization.
417 USE_DEV_RANDOM default: 0 (i.e., not used)
418 Causes malloc to use /dev/random to initialize secure magic seed for
419 stamping footers. Otherwise, the current time is used.
421 NO_MALLINFO default: 0
422 If defined, don't compile "mallinfo". This can be a simple way
423 of dealing with mismatches between system declarations and those in this file.
426 MALLINFO_FIELD_TYPE default: size_t
427 The type of the fields in the mallinfo struct. This was originally
428 defined as "int" in SVID etc, but is more usefully defined as
429 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
431 NO_MALLOC_STATS default: 0
432 If defined, don't compile "malloc_stats". This avoids calls to
433 fprintf and bringing in stdio dependencies you might not want.
435 REALLOC_ZERO_BYTES_FREES default: not defined
436 This should be set if a call to realloc with zero bytes should
437 be the same as a call to free. Some people think it should. Otherwise,
438 since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
443 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
444 Define these if your system does not have these header files.
445 You might need to manually insert some of the declarations they provide.
447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
448 system_info.dwAllocationGranularity in WIN32, otherwise 64K.
450 Also settable using mallopt(M_GRANULARITY, x)
451 The unit for allocating and deallocating memory from the system. On
452 most systems with contiguous MORECORE, there is no reason to
453 make this more than a page. However, systems with MMAP tend to
454 either require or encourage larger granularities. You can increase
455 this value to prevent system allocation functions from being called so
456 often, especially if they are slow. The value must be at least one
457 page and must be a power of two. Setting to 0 causes initialization
458 to either page size or win32 region size. (Note: In previous
459 versions of malloc, the equivalent of this option was called "TOP_PAD")
462 DEFAULT_TRIM_THRESHOLD default: 2MB
463 Also settable using mallopt(M_TRIM_THRESHOLD, x)
464 The maximum amount of unused top-most memory to keep before
465 releasing via malloc_trim in free(). Automatic trimming is mainly
466 useful in long-lived programs using contiguous MORECORE. Because
467 trimming via sbrk can be slow on some systems, and can sometimes be
468 wasteful (in cases where programs immediately afterward allocate
469 more large chunks) the value should be high enough so that your
470 overall system performance would improve by releasing this much
471 memory. As a rough guide, you might set to a value close to the
472 average size of a process (program) running on your system.
473 Releasing this much memory would allow such a process to run in
474 memory. Generally, it is worth tuning trim thresholds when a
475 program undergoes phases where several large chunks are allocated
476 and released in ways that can reuse each other's storage, perhaps
477 mixed with phases where there are no such chunks at all. The trim
478 value must be greater than page size to have any useful effect. To
479 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480 some people use of mallocing a huge space and then freeing it at
481 program startup, in an attempt to reserve system memory, doesn't
482 have the intended effect under automatic trimming, since that memory
483 will immediately be returned to the system.
485 DEFAULT_MMAP_THRESHOLD default: 256K
486 Also settable using mallopt(M_MMAP_THRESHOLD, x)
487 The request size threshold for using MMAP to directly service a
488 request. Requests of at least this size that cannot be allocated
489 using already-existing space will be serviced via mmap. (If enough
490 normal freed space already exists it is used instead.) Using mmap
491 segregates relatively large chunks of memory so that they can be
492 individually obtained and released from the host system. A request
493 serviced through mmap is never reused by any other request (at least
494 not directly; the system may just so happen to remap successive
495 requests to the same locations). Segregating space in this way has
496 the benefits that: Mmapped space can always be individually released
497 back to the system, which helps keep the system level memory demands
498 of a long-lived program low. Also, mapped memory doesn't become
499 `locked' between other chunks, as can happen with normally allocated
500 chunks, which means that even trimming via malloc_trim would not
501 release them. However, it has the disadvantage that the space
502 cannot be reclaimed, consolidated, and then used to service later
503 requests, as happens with normal chunks. The advantages of mmap
504 nearly always outweigh disadvantages for "large" chunks, but the
505 value of "large" may vary across systems. The default is an
506 empirically derived value that works well in most systems. You can
507 disable mmap by setting to MAX_SIZE_T.
509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
510 The number of consolidated frees between checks to release
511 unused segments when freeing. When using non-contiguous segments,
512 especially with multiple mspaces, checking only for topmost space
513 doesn't always suffice to trigger trimming. To compensate for this,
514 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515 current number of segments, if greater) try to release unused
516 segments to the OS when freeing chunks that result in
517 consolidation. The best value for this parameter is a compromise
518 between slowing down frees with relatively costly checks that
519 rarely trigger versus holding on to unused memory. To effectively
520 disable, set to MAX_SIZE_T. This may lead to a very slight speed
521 improvement at the expense of carrying around more memory.
524 /* Version identifier to allow people to support multiple versions */
525 #ifndef DLMALLOC_VERSION
526 #define DLMALLOC_VERSION 20806
527 #endif /* DLMALLOC_VERSION */
529 #ifndef DLMALLOC_EXPORT
530 #define DLMALLOC_EXPORT extern
#endif /* DLMALLOC_EXPORT */
538 #define LACKS_FCNTL_H
540 #endif /* _WIN32_WCE */
543 #define WIN32_LEAN_AND_MEAN
547 #define HAVE_MORECORE 0
548 #define LACKS_UNISTD_H
549 #define LACKS_SYS_PARAM_H
550 #define LACKS_SYS_MMAN_H
551 #define LACKS_STRING_H
552 #define LACKS_STRINGS_H
553 #define LACKS_SYS_TYPES_H
554 #define LACKS_ERRNO_H
555 #define LACKS_SCHED_H
556 #ifndef MALLOC_FAILURE_ACTION
557 #define MALLOC_FAILURE_ACTION
558 #endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
560 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
561 #define MMAP_CLEARS 0
#else
563 #define MMAP_CLEARS 1
564 #endif /* _WIN32_WCE */
565 #endif /*MMAP_CLEARS */
568 #if defined(DARWIN) || defined(_DARWIN)
569 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
570 #ifndef HAVE_MORECORE
571 #define HAVE_MORECORE 0
#define HAVE_MMAP 1
573 /* OSX allocators provide 16 byte alignment */
574 #ifndef MALLOC_ALIGNMENT
575 #define MALLOC_ALIGNMENT ((size_t)16U)
#endif /* MALLOC_ALIGNMENT */
577 #endif /* HAVE_MORECORE */
#endif /* DARWIN */
580 #ifndef LACKS_SYS_TYPES_H
581 #include <sys/types.h> /* For size_t */
582 #endif /* LACKS_SYS_TYPES_H */
584 /* The maximum possible size_t value has all bits set */
585 #define MAX_SIZE_T (~(size_t)0)
587 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
588 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
589 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
590 #endif /* USE_LOCKS */
592 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
593 #if ((defined(__GNUC__) && \
594 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
595 defined(__i386__) || defined(__x86_64__))) || \
596 (defined(_MSC_VER) && _MSC_VER>=1310))
597 #ifndef USE_SPIN_LOCKS
598 #define USE_SPIN_LOCKS 1
599 #endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
601 #error "USE_SPIN_LOCKS defined without implementation"
602 #endif /* ... locks available... */
603 #elif !defined(USE_SPIN_LOCKS)
604 #define USE_SPIN_LOCKS 0
605 #endif /* USE_LOCKS */
#ifndef ONLY_MSPACES
608 #define ONLY_MSPACES 0
609 #endif /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
613 #else /* ONLY_MSPACES */
#define MSPACES 0
615 #endif /* ONLY_MSPACES */
#endif /* MSPACES */
617 #ifndef MALLOC_ALIGNMENT
618 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
619 #endif /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif /* FOOTERS */
#ifndef ABORT
624 #define ABORT abort()
#endif /* ABORT */
626 #ifndef ABORT_ON_ASSERT_FAILURE
627 #define ABORT_ON_ASSERT_FAILURE 1
628 #endif /* ABORT_ON_ASSERT_FAILURE */
629 #ifndef PROCEED_ON_ERROR
630 #define PROCEED_ON_ERROR 0
631 #endif /* PROCEED_ON_ERROR */
#ifndef INSECURE
#define INSECURE 0
635 #endif /* INSECURE */
636 #ifndef MALLOC_INSPECT_ALL
637 #define MALLOC_INSPECT_ALL 0
638 #endif /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
641 #endif /* HAVE_MMAP */
#ifndef MMAP_CLEARS
643 #define MMAP_CLEARS 1
644 #endif /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
647 #define HAVE_MREMAP 1
648 #define _GNU_SOURCE /* Turns on mremap() definition */
#else /* linux */
650 #define HAVE_MREMAP 0
#endif /* linux */
652 #endif /* HAVE_MREMAP */
653 #ifndef MALLOC_FAILURE_ACTION
654 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
655 #endif /* MALLOC_FAILURE_ACTION */
656 #ifndef HAVE_MORECORE
#if ONLY_MSPACES
658 #define HAVE_MORECORE 0
659 #else /* ONLY_MSPACES */
660 #define HAVE_MORECORE 1
661 #endif /* ONLY_MSPACES */
662 #endif /* HAVE_MORECORE */
#if !HAVE_MORECORE
664 #define MORECORE_CONTIGUOUS 0
665 #else /* !HAVE_MORECORE */
666 #define MORECORE_DEFAULT sbrk
667 #ifndef MORECORE_CONTIGUOUS
668 #define MORECORE_CONTIGUOUS 1
669 #endif /* MORECORE_CONTIGUOUS */
670 #endif /* HAVE_MORECORE */
671 #ifndef DEFAULT_GRANULARITY
672 #if (MORECORE_CONTIGUOUS || defined(WIN32))
673 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
674 #else /* MORECORE_CONTIGUOUS */
675 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
676 #endif /* MORECORE_CONTIGUOUS */
677 #endif /* DEFAULT_GRANULARITY */
678 #ifndef DEFAULT_TRIM_THRESHOLD
679 #ifndef MORECORE_CANNOT_TRIM
680 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
681 #else /* MORECORE_CANNOT_TRIM */
682 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
683 #endif /* MORECORE_CANNOT_TRIM */
684 #endif /* DEFAULT_TRIM_THRESHOLD */
685 #ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
687 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
688 #else /* HAVE_MMAP */
689 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
690 #endif /* HAVE_MMAP */
691 #endif /* DEFAULT_MMAP_THRESHOLD */
692 #ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
694 #define MAX_RELEASE_CHECK_RATE 4095
#else /* HAVE_MMAP */
696 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
697 #endif /* HAVE_MMAP */
698 #endif /* MAX_RELEASE_CHECK_RATE */
699 #ifndef USE_BUILTIN_FFS
700 #define USE_BUILTIN_FFS 0
701 #endif /* USE_BUILTIN_FFS */
702 #ifndef USE_DEV_RANDOM
703 #define USE_DEV_RANDOM 0
704 #endif /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
706 #define NO_MALLINFO 0
707 #endif /* NO_MALLINFO */
708 #ifndef MALLINFO_FIELD_TYPE
709 #define MALLINFO_FIELD_TYPE size_t
710 #endif /* MALLINFO_FIELD_TYPE */
711 #ifndef NO_MALLOC_STATS
712 #define NO_MALLOC_STATS 0
713 #endif /* NO_MALLOC_STATS */
714 #ifndef NO_SEGMENT_TRAVERSAL
715 #define NO_SEGMENT_TRAVERSAL 0
716 #endif /* NO_SEGMENT_TRAVERSAL */
719 mallopt tuning options. SVID/XPG defines four standard parameter
720 numbers for mallopt, normally defined in malloc.h. None of these
721 are used in this malloc, so setting them has no effect. But this
722 malloc does support the following options.
725 #define M_TRIM_THRESHOLD (-1)
726 #define M_GRANULARITY (-2)
727 #define M_MMAP_THRESHOLD (-3)
729 /* ------------------------ Mallinfo declarations ------------------------ */
733 This version of malloc supports the standard SVID/XPG mallinfo
734 routine that returns a struct containing usage properties and
735 statistics. It should work on any system that has a
736 /usr/include/malloc.h defining struct mallinfo. The main
737 declaration needed is the mallinfo struct that is returned (by-copy)
738 by mallinfo(). The mallinfo struct contains a bunch of fields that
739 are not even meaningful in this version of malloc. These fields are
740 instead filled by mallinfo() with other numbers that might be of interest.
743 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
744 /usr/include/malloc.h file that includes a declaration of struct
745 mallinfo. If so, it is included; else a compliant version is
746 declared below. These must be precisely the same for mallinfo() to
747 work. The original SVID version of this struct, defined on most
748 systems with mallinfo, declares all fields as ints. But some others
749 define as unsigned long. If your system defines the fields using a
750 type of different width than listed here, you MUST #include your
751 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
754 /* #define HAVE_USR_INCLUDE_MALLOC_H */
756 #ifdef HAVE_USR_INCLUDE_MALLOC_H
757 #include "/usr/include/malloc.h"
758 #else /* HAVE_USR_INCLUDE_MALLOC_H */
759 #ifndef STRUCT_MALLINFO_DECLARED
760 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
761 #define _STRUCT_MALLINFO
762 #define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
764 MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
765 MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
766 MALLINFO_FIELD_TYPE smblks;   /* always 0 */
767 MALLINFO_FIELD_TYPE hblks;    /* always 0 */
768 MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
769 MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
770 MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
771 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
772 MALLINFO_FIELD_TYPE fordblks; /* total free space */
773 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
775 #endif /* STRUCT_MALLINFO_DECLARED */
776 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
777 #endif /* NO_MALLINFO */
780 Try to persuade compilers to inline. The most critical functions for
781 inlining are defined as macros, so these aren't used for them.
785 #if defined(__GNUC__)
786 #define FORCEINLINE __inline __attribute__ ((always_inline))
787 #elif defined(_MSC_VER)
788 #define FORCEINLINE __forceinline
792 #if defined(__GNUC__)
793 #define NOINLINE __attribute__ ((noinline))
794 #elif defined(_MSC_VER)
795 #define NOINLINE __declspec(noinline)
804 #define FORCEINLINE inline
806 #endif /* __cplusplus */
813 /* ------------------- Declarations of public routines ------------------- */
815 #ifndef USE_DL_PREFIX
816 #define dlcalloc calloc
#define dlfree free
818 #define dlmalloc malloc
819 #define dlmemalign memalign
820 #define dlposix_memalign posix_memalign
821 #define dlrealloc realloc
822 #define dlrealloc_in_place realloc_in_place
823 #define dlvalloc valloc
824 #define dlpvalloc pvalloc
825 #define dlmallinfo mallinfo
826 #define dlmallopt mallopt
827 #define dlmalloc_trim malloc_trim
828 #define dlmalloc_stats malloc_stats
829 #define dlmalloc_usable_size malloc_usable_size
830 #define dlmalloc_footprint malloc_footprint
831 #define dlmalloc_max_footprint malloc_max_footprint
832 #define dlmalloc_footprint_limit malloc_footprint_limit
833 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
834 #define dlmalloc_inspect_all malloc_inspect_all
835 #define dlindependent_calloc independent_calloc
836 #define dlindependent_comalloc independent_comalloc
837 #define dlbulk_free bulk_free
838 #endif /* USE_DL_PREFIX */
842 Returns a pointer to a newly allocated chunk of at least n bytes, or
843 null if no space is available, in which case errno is set to ENOMEM on ANSI C systems.
846 If n is zero, malloc returns a minimum-sized chunk. (The minimum
847 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
848 systems.) Note that size_t is an unsigned type, so calls with
849 arguments that would be negative if signed are interpreted as
850 requests for huge amounts of space, which will often fail. The
851 maximum supported value of n differs across systems, but is in all
852 cases less than the maximum representable value of a size_t.
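For example, a minimal sketch of checking the result (handle_oom is a hypothetical application routine):
    void* p = malloc(1000);
    if (p == 0)
      handle_oom();            /* on ANSI systems errno is ENOMEM here */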
854 DLMALLOC_EXPORT void* dlmalloc(size_t);
858 Releases the chunk of memory pointed to by p, that had been previously
859 allocated using malloc or a related routine such as realloc.
860 It has no effect if p is null. If p was not malloced or already
861 freed, free(p) will by default cause the current program to abort.
863 DLMALLOC_EXPORT void dlfree(void*);
866 calloc(size_t n_elements, size_t element_size);
867 Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
870 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
873 realloc(void* p, size_t n)
874 Returns a pointer to a chunk of size n that contains the same data
875 as does chunk p up to the minimum of (n, p's size) bytes, or null
876 if no space is available.
878 The returned pointer may or may not be the same as p. The algorithm
879 prefers extending p in most cases when possible, otherwise it
880 employs the equivalent of a malloc-copy-free sequence.
882 If p is null, realloc is equivalent to malloc.
884 If space is not available, realloc returns null, errno is set (if on
885 ANSI) and p is NOT freed.
887 If n is for fewer bytes than already held by p, the newly unused
888 space is lopped off and freed if possible. realloc with a size
889 argument of zero (re)allocates a minimum-sized chunk.
891 The old unix realloc convention of allowing the last-free'd chunk
892 to be used as an argument to realloc is not supported.
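For example, because a failed realloc leaves p intact, a sketch of the usual pattern assigns through a temporary (n is the assumed old request size):
    void* q = realloc(p, 2 * n);
    if (q != 0)
      p = q;                   /* success; contents preserved, p may have moved */
    /* on failure q is 0, p is untouched and must eventually be freed */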
894 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
897 realloc_in_place(void* p, size_t n)
898 Resizes the space allocated for p to size n, only if this can be
899 done without moving p (i.e., only if there is adjacent space
900 available if n is greater than p's current allocated size, or n is
901 less than or equal to p's size). This may be used instead of plain
902 realloc if an alternative allocation strategy is needed upon failure
903 to expand space; for example, reallocation of a buffer that must be
904 memory-aligned or cleared. You can use realloc_in_place to trigger
905 these alternatives only when needed.
907 Returns p if successful; otherwise null.
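For example, a sketch of growing an aligned buffer without losing its alignment (p, alignment, oldsize, and newsize are assumed to be set up by the caller):
    if (realloc_in_place(p, newsize) == 0) {  /* could not extend in place */
      void* q = memalign(alignment, newsize); /* fall back to a fresh aligned chunk */
      if (q != 0) {
        memcpy(q, p, oldsize);
        free(p);
        p = q;
      }
    }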
909 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
912 memalign(size_t alignment, size_t n);
913 Returns a pointer to a newly allocated chunk of n bytes, aligned
914 in accord with the alignment argument.
916 The alignment argument should be a power of two. If the argument is
917 not a power of two, the nearest greater power is used.
918 8-byte alignment is guaranteed by normal malloc calls, so don't
919 bother calling memalign with an argument of 8 or less.
921 Overreliance on memalign is a sure way to fragment space.
923 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
926 int posix_memalign(void** pp, size_t alignment, size_t n);
927 Allocates a chunk of n bytes, aligned in accord with the alignment
928 argument. Differs from memalign only in that it (1) assigns the
929 allocated memory to *pp rather than returning it, (2) fails and
930 returns EINVAL if the alignment is not a power of two (3) fails and
931 returns ENOMEM if memory cannot be allocated.
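For example, a sketch requesting a 64-byte aligned chunk:
    void* p;
    int rc = posix_memalign(&p, 64, 1000);
    if (rc == 0)
      free(p);                 /* chunks from posix_memalign are freed normally */
    /* rc is EINVAL if the alignment were not a power of two, ENOMEM if no space */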
933 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
valloc(size_t n);
937 Equivalent to memalign(pagesize, n), where pagesize is the page
938 size of the system. If the pagesize is unknown, 4096 is used.
940 DLMALLOC_EXPORT void* dlvalloc(size_t);
943 mallopt(int parameter_number, int parameter_value)
944 Sets tunable parameters. The format is to provide a
945 (parameter-number, parameter-value) pair. mallopt then sets the
946 corresponding parameter to the argument value if it can (i.e., so
947 long as the value is meaningful), and returns 1 if successful else
948 0. To workaround the fact that mallopt is specified to use int,
949 not size_t parameters, the value -1 is specially treated as the
950 maximum unsigned size_t value.
952 SVID/XPG/ANSI defines four standard param numbers for mallopt,
953 normally defined in malloc.h. None of these are used in this malloc,
954 so setting them has no effect. But this malloc also supports other
955 options in mallopt. See below for details. Briefly, supported
956 parameters are as follows (listed defaults are for "typical" configurations).
959 Symbol param # default allowed param values
960 M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables)
961 M_GRANULARITY -2 page size any power of 2 >= page size
962 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
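For example, a sketch of raising the mmap threshold and disabling trimming:
    mallopt(M_MMAP_THRESHOLD, 1024*1024);  /* mmap requests of 1MB and up */
    mallopt(M_TRIM_THRESHOLD, -1);         /* -1 is treated as MAX_SIZE_T: never trim */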
964 DLMALLOC_EXPORT int dlmallopt(int, int);
malloc_footprint();
968 Returns the number of bytes obtained from the system. The total
969 number of bytes allocated by malloc, realloc etc., is less than this
970 value. Unlike mallinfo, this function returns only a precomputed
971 result, so can be called frequently to monitor memory consumption.
972 Even if locks are otherwise defined, this function does not use them,
973 so results might not be up to date.
975 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
978 malloc_max_footprint();
979 Returns the maximum number of bytes obtained from the system. This
980 value will be greater than current footprint if deallocated space
981 has been reclaimed by the system. The peak number of bytes allocated
982 by malloc, realloc etc., is less than this value. Unlike mallinfo,
983 this function returns only a precomputed result, so can be called
984 frequently to monitor memory consumption. Even if locks are
985 otherwise defined, this function does not use them, so results might not be up to date.
988 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
991 malloc_footprint_limit();
992 Returns the number of bytes that the heap is allowed to obtain from
993 the system, returning the last value returned by
994 malloc_set_footprint_limit, or the maximum size_t value if
995 never set. The returned value reflects a permission. There is no
996 guarantee that this number of bytes can actually be obtained from the system.
999 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
1002 malloc_set_footprint_limit();
1003 Sets the maximum number of bytes to obtain from the system, causing
1004 failure returns from malloc and related functions upon attempts to
1005 exceed this value. The argument value may be subject to page
1006 rounding to an enforceable limit; this actual value is returned.
1007 Using an argument of the maximum possible size_t effectively
1008 disables checks. If the argument is less than or equal to the
1009 current malloc_footprint, then all future allocations that require
1010 additional system memory will fail. However, invocation cannot
1011 retroactively deallocate existing used memory.
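For example, a sketch capping the heap at an arbitrary 64MB:
    size_t actual = malloc_set_footprint_limit((size_t)64U * 1024U * 1024U);
    /* actual holds the page-rounded limit that will really be enforced */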
1013 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1015 #if MALLOC_INSPECT_ALL
1017 malloc_inspect_all(void(*handler)(void *start,
void *end,
size_t used_bytes,
1020 void* callback_arg),
void* arg);
1022 Traverses the heap and calls the given handler for each managed
1023 region, skipping all bytes that are (or may be) used for bookkeeping
1024 purposes. Traversal does not include chunks that have been
1025 directly memory mapped. Each reported region begins at the start
1026 address, and continues up to but not including the end address. The
1027 first used_bytes of the region contain allocated data. If
1028 used_bytes is zero, the region is unallocated. The handler is
1029 invoked with the given callback argument. If locks are defined, they
1030 are held during the entire traversal. It is a bad idea to invoke
1031 other malloc functions from within the handler.
1033 For example, to count the number of in-use chunks with size greater
1034 than 1000, you could write:
1035 static int count = 0;
1036 void count_chunks(void* start, void* end, size_t used, void* arg) {
1037 if (used >= 1000) ++count;
}
then:
1040 malloc_inspect_all(count_chunks, NULL);
1042 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1044 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
void* arg);
1047 #endif /* MALLOC_INSPECT_ALL */
mallinfo()
1052 Returns (by copy) a struct containing various summary statistics:
1054 arena: current total non-mmapped bytes allocated from system
1055 ordblks: the number of free chunks
1056 smblks: always zero.
1057 hblks: current number of mmapped regions
1058 hblkhd: total bytes held in mmapped regions
1059 usmblks: the maximum total allocated space. This will be greater
1060 than current total if trimming has occurred.
1061 fsmblks: always zero
1062 uordblks: current total allocated space (normal or mmapped)
1063 fordblks: total free space
1064 keepcost: the maximum number of bytes that could ideally be released
1065 back to system via malloc_trim. ("ideally" means that
1066 it ignores page restrictions etc.)
1068 Because these fields are ints, but internal bookkeeping may
1069 be kept as longs, the reported values may wrap around zero and thus be inaccurate.
1072 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1073 #endif /* NO_MALLINFO */
1076 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1078 independent_calloc is similar to calloc, but instead of returning a
1079 single cleared space, it returns an array of pointers to n_elements
1080 independent elements that can hold contents of size elem_size, each
1081 of which starts out cleared, and can be independently freed,
1082 realloc'ed etc. The elements are guaranteed to be adjacently
1083 allocated (this is not guaranteed to occur with multiple callocs or
1084 mallocs), which may also improve cache locality in some applications.
1087 The "chunks" argument is optional (i.e., may be null, which is
1088 probably the most typical usage). If it is null, the returned array
1089 is itself dynamically allocated and should also be freed when it is
1090 no longer needed. Otherwise, the chunks array must be of at least
1091 n_elements in length. It is filled in with the pointers to the chunks.
1094 In either case, independent_calloc returns this pointer array, or
1095 null if the allocation failed. If n_elements is zero and "chunks"
1096 is null, it returns a chunk representing an array with zero elements
1097 (which should be freed if not wanted).
1099 Each element must be freed when it is no longer needed. This can be
1100 done all at once using bulk_free.
1102 independent_calloc simplifies and speeds up implementations of many
1103 kinds of pools. It may also be useful when constructing large data
1104 structures that initially have a fixed number of fixed-sized nodes,
1105 but the number is not known at compile time, and some of the nodes
1106 may later need to be freed. For example:
1108 struct Node { int item; struct Node* next; };
1110 struct Node* build_list() {
struct Node** pool;
1112 int n = read_number_of_nodes_needed();
int i;
1113 if (n <= 0) return 0;
1114 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
1115 if (pool == 0) die();
1116 // organize into a linked list...
1117 struct Node* first = pool[0];
1118 for (i = 0; i < n-1; ++i)
1119 pool[i]->next = pool[i+1];
1120 free(pool); // Can now free the array (or not, if it is needed later)
return first;
}
1124 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1127 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1129 independent_comalloc allocates, all at once, a set of n_elements
1130 chunks with sizes indicated in the "sizes" array. It returns
1131 an array of pointers to these elements, each of which can be
1132 independently freed, realloc'ed etc. The elements are guaranteed to
1133 be adjacently allocated (this is not guaranteed to occur with
1134 multiple callocs or mallocs), which may also improve cache locality
1135 in some applications.
1137 The "chunks" argument is optional (i.e., may be null). If it is null
1138 the returned array is itself dynamically allocated and should also
1139 be freed when it is no longer needed. Otherwise, the chunks array
1140 must be of at least n_elements in length. It is filled in with the
1141 pointers to the chunks.
1143 In either case, independent_comalloc returns this pointer array, or
1144 null if the allocation failed. If n_elements is zero and chunks is
1145 null, it returns a chunk representing an array with zero elements
1146 (which should be freed if not wanted).
1148 Each element must be freed when it is no longer needed. This can be
1149 done all at once using bulk_free.
1151 independent_comalloc differs from independent_calloc in that each
1152 element may have a different size, and also that it does not
1153 automatically clear elements.
1155 independent_comalloc can be used to speed up allocation in cases
1156 where several structs or objects must always be allocated at the
1157 same time. For example:
struct Head { ... }
struct Foot { ... }
1162 void send_message(char* msg) {
1163 int msglen = strlen(msg);
1164 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
void* chunks[3];
1166 if (independent_comalloc(3, sizes, chunks) == 0)
die();
1168 struct Head* head = (struct Head*)(chunks[0]);
1169 char* body = (char*)(chunks[1]);
1170 struct Foot* foot = (struct Foot*)(chunks[2]);
// ... use head, body, and foot ...
}
1174 In general though, independent_comalloc is worth using only for
1175 larger values of n_elements. For small values, you probably won't
1176 detect enough difference from series of malloc calls to bother.
1178 Overuse of independent_comalloc can increase overall memory usage,
1179 since it cannot reuse existing noncontiguous small chunks that
1180 might be available for some of the elements.
1182 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1185 bulk_free(void* array[], size_t n_elements)
1186 Frees and clears (sets to null) each non-null pointer in the given
1187 array. This is likely to be faster than freeing them one-by-one.
1188 If footers are used, pointers that have been allocated in different
1189 mspaces are not freed or cleared, and the count of all such pointers
1190 is returned. For large arrays of pointers with poor locality, it
1191 may be worthwhile to sort this array before calling bulk_free.
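For example, a sketch that releases a batch of chunks in one call:
    void* ptrs[100];
    size_t i;
    for (i = 0; i < 100; ++i)
      ptrs[i] = malloc(32);
    bulk_free(ptrs, 100);      /* returns the count of pointers it could not free */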
1193 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
pvalloc(size_t n);
1197 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1198 round up n to nearest pagesize.
1200 DLMALLOC_EXPORT void* dlpvalloc(size_t);
1203 malloc_trim(size_t pad);
1205 If possible, gives memory back to the system (via negative arguments
1206 to sbrk) if there is unused memory at the `high' end of the malloc
1207 pool or in unused MMAP segments. You can call this after freeing
1208 large blocks of memory to potentially reduce the system-level memory
1209 requirements of a program. However, it cannot guarantee to reduce
1210 memory. Under some allocation patterns, some large free blocks of
1211 memory will be locked between two used chunks, so they cannot be
1212 given back to the system.
1214 The `pad' argument to malloc_trim represents the amount of free
1215 trailing space to leave untrimmed. If this argument is zero, only
1216 the minimum amount of memory to maintain internal data structures
1217 will be left. Non-zero arguments can be supplied to maintain enough
1218 trailing space to service future expected allocations without having
1219 to re-obtain memory from the system.
1221 Malloc_trim returns 1 if it actually released any memory, else 0.
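For example, a sketch of trimming after releasing a large allocation (big is a hypothetical pointer obtained earlier from malloc):
    free(big);
    malloc_trim(0);            /* returns 1 if any memory was given back to the system */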
1223 DLMALLOC_EXPORT int dlmalloc_trim(size_t);
malloc_stats();
1227 Prints on stderr the amount of space obtained from the system (both
1228 via sbrk and mmap), the maximum amount (which may be more than
1229 current if malloc_trim and/or munmap got called), and the current
1230 number of bytes allocated via malloc (or realloc, etc) but not yet
1231 freed. Note that this is the number of bytes allocated, not the
1232 number requested. It will be larger than the number requested
1233 because of alignment and bookkeeping overhead. Because it includes
1234 alignment wastage as being in use, this figure may be greater than
1235 zero even when no user-level chunks are allocated.
1237 The reported current and maximum system memory can be inaccurate if
1238 a program makes other calls to system memory allocation functions
1239 (normally sbrk) outside of malloc.
1241 malloc_stats prints only the most commonly interesting statistics.
1242 More information can be obtained by calling mallinfo.
1244 DLMALLOC_EXPORT void dlmalloc_stats(void);
1247 malloc_usable_size(void* p);
1249 Returns the number of bytes you can actually use in
1250 an allocated chunk, which may be more than you requested (although
1251 often not) due to alignment and minimum size constraints.
1252 You can use this many bytes without worrying about
1253 overwriting other allocated objects. This is not a particularly great
1254 programming practice. malloc_usable_size can be more useful in
1255 debugging and assertions, for example:
p = malloc(n);
1258 assert(malloc_usable_size(p) >= 256);
1260 size_t dlmalloc_usable_size(void*);
1262 #endif /* ONLY_MSPACES */
1267 mspace is an opaque type representing an independent
1268 region of space that supports mspace_malloc, etc.
1270 typedef void* mspace;
1273 create_mspace creates and returns a new independent space with the
1274 given initial capacity, or, if 0, the default granularity size. It
1275 returns null if there is no system memory available to create the
1276 space. If argument locked is non-zero, the space uses a separate
1277 lock to control access. The capacity of the space will grow
1278 dynamically as needed to service mspace_malloc requests. You can
1279 control the sizes of incremental increases of this space by
1280 compiling with a different DEFAULT_GRANULARITY or dynamically
1281 setting with mallopt(M_GRANULARITY, value).
1283 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1286 destroy_mspace destroys the given space, and attempts to return all
1287 of its memory back to the system, returning the total number of
1288 bytes freed. After destruction, the results of access to all memory
1289 used by the space become undefined.
1291 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
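For example, a sketch of a short-lived arena torn down in a single call:
    mspace arena = create_mspace(0, 0);  /* default capacity, no locking */
    void* p = mspace_malloc(arena, 128);
    /* ... use p ... */
    destroy_mspace(arena);               /* releases p and the rest of the arena */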
1294 create_mspace_with_base uses the memory supplied as the initial base
1295 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1296 space is used for bookkeeping, so the capacity must be at least this
1297 large. (Otherwise 0 is returned.) When this initial space is
1298 exhausted, additional memory will be obtained from the system.
1299 Destroying this space will deallocate all additionally allocated
1300 space (if possible) but not the initial base.
1302 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1305 mspace_track_large_chunks controls whether requests for large chunks
1306 are allocated in their own untracked mmapped regions, separate from
1307 others in this mspace. By default large chunks are not tracked,
1308 which reduces fragmentation. However, such chunks are not
1309 necessarily released to the system upon destroy_mspace. Enabling
1310 tracking by setting to true may increase fragmentation, but avoids
1311 leakage when relying on destroy_mspace to release all memory
1312 allocated using this space. The function returns the previous setting.
1315 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1319 mspace_malloc behaves as malloc, but operates within the given space.
1322 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1325 mspace_free behaves as free, but operates within the given space.
1328 If compiled with FOOTERS==1, mspace_free is not actually needed.
1329 free may be called instead of mspace_free because freed chunks from
1330 any space are handled by their originating spaces.
1332 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1335 mspace_realloc behaves as realloc, but operates within the given space.
1338 If compiled with FOOTERS==1, mspace_realloc is not actually
1339 needed. realloc may be called instead of mspace_realloc because
1340 realloced chunks from any space are handled by their originating spaces.
1343 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1346 mspace_calloc behaves as calloc, but operates within the given space.
1349 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1352 mspace_memalign behaves as memalign, but operates within the given space.
1355 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1358 mspace_independent_calloc behaves as independent_calloc, but
1359 operates within the given space.
1361 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements, size_t elem_size, void* chunks[]);
1365 mspace_independent_comalloc behaves as independent_comalloc, but
1366 operates within the given space.
1368 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements, size_t sizes[], void* chunks[]);
1372 mspace_footprint() returns the number of bytes obtained from the
1373 system for this space.
1375 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1378 mspace_max_footprint() returns the peak number of bytes obtained from the
1379 system for this space.
1381 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
#if !NO_MALLINFO
1386 mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
1389 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1390 #endif /* NO_MALLINFO */
1393 mspace_usable_size(void* p) behaves the same as malloc_usable_size;
1395 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1398 mspace_malloc_stats behaves as malloc_stats, but reports
1399 properties of the given space.
1401 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1404 mspace_trim behaves as malloc_trim, but
1405 operates within the given space.
1407 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1410 An alias for mallopt.
1412 DLMALLOC_EXPORT
int mspace_mallopt(int, int);
1414 #endif /* MSPACES */
1417 } /* end of extern "C" */
1418 #endif /* __cplusplus */
1421 ========================================================================
1422 To make a fully customizable malloc.h header file, cut everything
1423 above this line, put into file malloc.h, edit to suit, and #include it
1424 on the next line, as well as in programs that use this malloc.
1425 ========================================================================
1428 /* #include "malloc.h" */
1430 /*------------------------------ internal #includes ---------------------- */
#ifdef _MSC_VER
1433 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1434 #endif /* _MSC_VER */
1435 #if !NO_MALLOC_STATS
1436 #include <stdio.h> /* for printing in malloc_stats */
1437 #endif /* NO_MALLOC_STATS */
1438 #ifndef LACKS_ERRNO_H
1439 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1440 #endif /* LACKS_ERRNO_H */
1442 #if ABORT_ON_ASSERT_FAILURE
#undef assert
1444 #define assert(x) if(!(x)) ABORT
1445 #else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
1447 #endif /* ABORT_ON_ASSERT_FAILURE */
1454 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1455 #include <time.h> /* for magic initialization */
#endif /* WIN32 */
1457 #ifndef LACKS_STDLIB_H
1458 #include <stdlib.h> /* for abort() */
1459 #endif /* LACKS_STDLIB_H */
1460 #ifndef LACKS_STRING_H
1461 #include <string.h> /* for memset etc */
1462 #endif /* LACKS_STRING_H */
#if !USE_BUILTIN_FFS
1464 #ifndef LACKS_STRINGS_H
1465 #include <strings.h> /* for ffs */
1466 #endif /* LACKS_STRINGS_H */
1467 #endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
1469 #ifndef LACKS_SYS_MMAN_H
1470 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1471 #if (defined(linux) && !defined(__USE_GNU))
#define __USE_GNU 1
1473 #include <sys/mman.h> /* for mmap */
#undef __USE_GNU
#else
1476 #include <sys/mman.h> /* for mmap */
#endif /* linux */
1478 #endif /* LACKS_SYS_MMAN_H */
1479 #ifndef LACKS_FCNTL_H
#include <fcntl.h>
1481 #endif /* LACKS_FCNTL_H */
1482 #endif /* HAVE_MMAP */
1483 #ifndef LACKS_UNISTD_H
1484 #include <unistd.h> /* for sbrk, sysconf */
1485 #else /* LACKS_UNISTD_H */
1486 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1487 extern void* sbrk(ptrdiff_t);
1488 #endif /* FreeBSD etc */
1489 #endif /* LACKS_UNISTD_H */
1491 /* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
1494 #if defined (__SVR4) && defined (__sun) /* solaris */
#include <thread.h>
1496 #elif !defined(LACKS_SCHED_H)
#include <sched.h>
1498 #endif /* solaris or LACKS_SCHED_H */
1499 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1500 #include <pthread.h>
1501 #endif /* USE_RECURSIVE_LOCKS ... */
1502 #elif defined(_MSC_VER)
#ifndef _M_AMD64
1504 /* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
1507 #endif /* __cplusplus */
1508 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1509 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#ifdef __cplusplus
}
1512 #endif /* __cplusplus */
1513 #endif /* _M_AMD64 */
1514 #pragma intrinsic (_InterlockedCompareExchange)
1515 #pragma intrinsic (_InterlockedExchange)
1516 #define interlockedcompareexchange _InterlockedCompareExchange
1517 #define interlockedexchange _InterlockedExchange
1518 #elif defined(WIN32) && defined(__GNUC__)
1519 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1520 #define interlockedexchange __sync_lock_test_and_set
1522 #else /* USE_LOCKS */
1523 #endif /* USE_LOCKS */
1525 #ifndef LOCK_AT_FORK
1526 #define LOCK_AT_FORK 0
#endif
1529 /* Declarations for bit scanning on win32 */
1530 #if defined(_MSC_VER) && _MSC_VER>=1300
1531 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
1534 #endif /* __cplusplus */
1535 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1536 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
1539 #endif /* __cplusplus */
1541 #define BitScanForward _BitScanForward
1542 #define BitScanReverse _BitScanReverse
1543 #pragma intrinsic(_BitScanForward)
1544 #pragma intrinsic(_BitScanReverse)
1545 #endif /* BitScanForward */
1546 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1549 #ifndef malloc_getpagesize
1550 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1551 # ifndef _SC_PAGE_SIZE
1552 # define _SC_PAGE_SIZE _SC_PAGESIZE
1555 # ifdef _SC_PAGE_SIZE
1556 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1558 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1559 extern size_t getpagesize();
1560 # define malloc_getpagesize getpagesize()
1562 # ifdef WIN32 /* use supplied emulation of getpagesize */
1563 # define malloc_getpagesize getpagesize()
1565 # ifndef LACKS_SYS_PARAM_H
1566 # include <sys/param.h>
1568 # ifdef EXEC_PAGESIZE
1569 # define malloc_getpagesize EXEC_PAGESIZE
1573 # define malloc_getpagesize NBPG
1575 # define malloc_getpagesize (NBPG * CLSIZE)
1579 # define malloc_getpagesize NBPC
1582 # define malloc_getpagesize PAGESIZE
1583 # else /* just guess */
1584 # define malloc_getpagesize ((size_t)4096U)
1595 /* ------------------- size_t and alignment properties -------------------- */
1597 /* The byte and bit size of a size_t */
1598 #define SIZE_T_SIZE (sizeof(size_t))
1599 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1601 /* Some constants coerced to size_t */
1602 /* Annoying but necessary to avoid errors on some platforms */
1603 #define SIZE_T_ZERO ((size_t)0)
1604 #define SIZE_T_ONE ((size_t)1)
1605 #define SIZE_T_TWO ((size_t)2)
1606 #define SIZE_T_FOUR ((size_t)4)
1607 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1608 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1609 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1610 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1612 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1613 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1615 /* True if address a has acceptable alignment */
1616 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1618 /* the number of bytes to offset an address to align it */
1619 #define align_offset(A)\
1620 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1621 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
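/*
  Illustrative arithmetic (assuming MALLOC_ALIGNMENT == 8, so
  CHUNK_ALIGN_MASK == 7): for A == 0x1004, (size_t)A & 7 == 4, so
  align_offset(A) == (8 - 4) & 7 == 4 and adding 4 bytes yields the aligned
  address 0x1008.  For an already-aligned A == 0x1008 the offset is 0.
*/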
1623 /* -------------------------- MMAP preliminaries ------------------------- */
1626 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1627 checks to fail so compiler optimizer can delete code rather than
1628 using so many "#if"s.
1632 /* MORECORE and MMAP must return MFAIL on failure */
1633 #define MFAIL ((void*)(MAX_SIZE_T))
1634 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1639 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1640 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1641 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1642 #define MAP_ANONYMOUS MAP_ANON
1643 #endif /* MAP_ANON */
1644 #ifdef MAP_ANONYMOUS
1645 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1646 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1647 #else /* MAP_ANONYMOUS */
1649 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1650 is unlikely to be needed, but is supplied just in case.
1652 #define MMAP_FLAGS (MAP_PRIVATE)
1653 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1654 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1655 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1656 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1657 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1658 #endif /* MAP_ANONYMOUS */
1660 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1664 /* Win32 MMAP via VirtualAlloc */
1665 static FORCEINLINE void* win32mmap(size_t size) {
1666   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1667   return (ptr != 0)? ptr : MFAIL;
}
1670 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1671 static FORCEINLINE void* win32direct_mmap(size_t size) {
1672   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
1674   return (ptr != 0)? ptr : MFAIL;
}
1677 /* This function supports releasing coalesced segments */
1678 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1679   MEMORY_BASIC_INFORMATION minfo;
1680   char* cptr = (char*)ptr;
  while (size) {
1682     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
1684     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1685         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
1687     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
1689     cptr += minfo.RegionSize;
1690     size -= minfo.RegionSize;
  }
  return 0;
}
1695 #define MMAP_DEFAULT(s) win32mmap(s)
1696 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1697 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1699 #endif /* HAVE_MMAP */
1703 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1705 #endif /* HAVE_MREMAP */
1708 * Define CALL_MORECORE
1712 #define CALL_MORECORE(S) MORECORE(S)
1713 #else /* MORECORE */
1714 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1715 #endif /* MORECORE */
1716 #else /* HAVE_MORECORE */
1717 #define CALL_MORECORE(S) MFAIL
1718 #endif /* HAVE_MORECORE */
1721 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1724 #define USE_MMAP_BIT (SIZE_T_ONE)
1727 #define CALL_MMAP(s) MMAP(s)
1729 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1732 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1734 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1737 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1738 #else /* DIRECT_MMAP */
1739 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1740 #endif /* DIRECT_MMAP */
1741 #else /* HAVE_MMAP */
1742 #define USE_MMAP_BIT (SIZE_T_ZERO)
1744 #define MMAP(s) MFAIL
1745 #define MUNMAP(a, s) (-1)
1746 #define DIRECT_MMAP(s) MFAIL
1747 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1748 #define CALL_MMAP(s) MMAP(s)
1749 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1750 #endif /* HAVE_MMAP */
1753 * Define CALL_MREMAP
1755 #if HAVE_MMAP && HAVE_MREMAP
1757 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1759 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1761 #else /* HAVE_MMAP && HAVE_MREMAP */
1762 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1763 #endif /* HAVE_MMAP && HAVE_MREMAP */
1765 /* mstate bit set if contiguous morecore disabled or failed */
1766 #define USE_NONCONTIGUOUS_BIT (4U)
1768 /* segment bit set in create_mspace_with_base */
1769 #define EXTERN_BIT (8U)
1772 /* --------------------------- Lock preliminaries ------------------------ */
1775 When locks are defined, there is one global lock, plus
1776 one per-mspace lock.
1778 The global lock ensures that mparams.magic and other unique
1779 mparams values are initialized only once. It also protects
1780 sequences of calls to MORECORE. In many cases sys_alloc requires
1781 two calls that should not be interleaved with calls by other
1782 threads. This does not protect against direct calls to MORECORE
1783 by other threads not using this lock, so there is still code to
1784 cope as best we can with interference.
1786 Per-mspace locks surround calls to malloc, free, etc.
1787 By default, locks are simple non-reentrant mutexes.
1789 Because lock-protected regions generally have bounded times, it is
1790 OK to use the supplied simple spinlocks. Spinlocks are likely to
1791 improve performance for lightly contended applications, but worsen
1792 performance under heavy contention.
1794 If USE_LOCKS is > 1, the definitions of lock routines here are
1795 bypassed, in which case you will need to define the type MLOCK_T,
1796 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1797 and TRY_LOCK. You must also declare a
1798 static MLOCK_T malloc_global_mutex = { initialization values };.
1803 #define USE_LOCK_BIT (0U)
1804 #define INITIAL_LOCK(l) (0)
1805 #define DESTROY_LOCK(l) (0)
1806 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1807 #define RELEASE_MALLOC_GLOBAL_LOCK()
1811 /* ----------------------- User-defined locks ------------------------ */
1812 /* Define your own lock implementation here */
1813 /* #define INITIAL_LOCK(lk) ... */
1814 /* #define DESTROY_LOCK(lk) ... */
1815 /* #define ACQUIRE_LOCK(lk) ... */
1816 /* #define RELEASE_LOCK(lk) ... */
1817 /* #define TRY_LOCK(lk) ... */
1818 /* static MLOCK_T malloc_global_mutex = ... */
1820 #elif USE_SPIN_LOCKS
1822 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1823 /* Note CAS_LOCK defined to return 0 on success */
1825 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1826 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1827 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1829 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1830 /* Custom spin locks for older gcc on x86 */
1831 static FORCEINLINE int x86_cas_lock(int *sl) {
  int ret;
  int val = 1;
  int cmp = 0;
1835   __asm__ __volatile__ ("lock; cmpxchgl %1, %2"
                        : "=a" (ret)
1837                         : "r" (val), "m" (*(sl)), "0"(cmp)
                        : "memory", "cc");
  return ret;
}

1842 static FORCEINLINE void x86_clear_lock(int* sl) {
  assert(*sl != 0);
  int prev = 0;
  int ret;
1846   __asm__ __volatile__ ("lock; xchgl %0, %1"
                        : "=r" (ret)
1848                         : "m" (*(sl)), "0"(prev)
                        : "memory");
}
1852 #define CAS_LOCK(sl) x86_cas_lock(sl)
1853 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1855 #else /* Win32 MSC */
1856 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1857 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1859 #endif /* ... gcc spin locks ... */
1861 /* How to yield for a spin lock */
1862 #define SPINS_PER_YIELD 63
1863 #if defined(_MSC_VER)
1864 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1865 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1866 #elif defined (__SVR4) && defined (__sun) /* solaris */
1867 #define SPIN_LOCK_YIELD thr_yield();
1868 #elif !defined(LACKS_SCHED_H)
1869 #define SPIN_LOCK_YIELD sched_yield();
1871 #define SPIN_LOCK_YIELD
1872 #endif /* ... yield ... */
1874 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1875 /* Plain spin locks use single word (embedded in malloc_states) */
1876 static int spin_acquire_lock(int *sl) {
  int spins = 0;
1878   while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1879     if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
  return 0;
}

#define MLOCK_T int
1887 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1888 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1889 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1890 #define INITIAL_LOCK(sl) (*sl = 0)
1891 #define DESTROY_LOCK(sl) (0)
1892 static MLOCK_T malloc_global_mutex = 0;
1894 #else /* USE_RECURSIVE_LOCKS */
1895 /* types for lock owners */
1897 #define THREAD_ID_T DWORD
1898 #define CURRENT_THREAD GetCurrentThreadId()
1899 #define EQ_OWNER(X,Y) ((X) == (Y))
1902 Note: the following assume that pthread_t is a type that can be
1903 initialized to (casted) zero. If this is not the case, you will need to
1904 somehow redefine these or not use spin locks.
1906 #define THREAD_ID_T pthread_t
1907 #define CURRENT_THREAD pthread_self()
1908 #define EQ_OWNER(X,Y) pthread_equal(X, Y)
1911 struct malloc_recursive_lock {
  int sl;
  unsigned int c;
1914   THREAD_ID_T threadid;
};

1917 #define MLOCK_T struct malloc_recursive_lock
1918 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1920 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1921   assert(lk->sl != 0);
  if (--lk->c == 0) {
1923     CLEAR_LOCK(&lk->sl);
  }
}
1927 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1928   THREAD_ID_T mythreadid = CURRENT_THREAD;
  int spins = 0;
  for (;;) {
1931     if (*((volatile int *)(&lk->sl)) == 0) {
1932       if (!CAS_LOCK(&lk->sl)) {
1933         lk->threadid = mythreadid;
        lk->c = 1;
        return 0;
      }
    }
1938     else if (EQ_OWNER(lk->threadid, mythreadid)) {
      ++lk->c;
      return 0;
    }
1942     if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
}
1948 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1949   THREAD_ID_T mythreadid = CURRENT_THREAD;
1950   if (*((volatile int *)(&lk->sl)) == 0) {
1951     if (!CAS_LOCK(&lk->sl)) {
1952       lk->threadid = mythreadid;
      lk->c = 1;
      return 1;
    }
  }
1957   else if (EQ_OWNER(lk->threadid, mythreadid)) {
    ++lk->c;
    return 1;
  }
  return 0;
}
1964 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1965 #define TRY_LOCK(lk) recursive_try_lock(lk)
1966 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1967 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1968 #define DESTROY_LOCK(lk) (0)
1969 #endif /* USE_RECURSIVE_LOCKS */
1971 #elif defined(WIN32) /* Win32 critical sections */
1972 #define MLOCK_T CRITICAL_SECTION
1973 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1974 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1975 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1976 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1977 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1978 #define NEED_GLOBAL_LOCK_INIT
1980 static MLOCK_T malloc_global_mutex;
1981 static volatile LONG malloc_global_mutex_status;
1983 /* Use spin loop to initialize global lock */
1984 static void init_malloc_global_mutex() {
  for (;;) {
1986     long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
1989     /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
1991         interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
1992       InitializeCriticalSection(&malloc_global_mutex);
1993       interlockedexchange(&malloc_global_mutex_status, (LONG)1);
      return;
    }
    SleepEx(0, FALSE);
  }
}
2000 #else /* pthreads-based locks */
2001 #define MLOCK_T pthread_mutex_t
2002 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2003 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2004 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2005 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2006 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2008 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2009 /* Cope with old-style linux recursive lock initialization by adding */
2010 /* skipped internal declaration from pthread.h */
2011 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                               int __kind));
2013 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2014 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2015 #endif /* USE_RECURSIVE_LOCKS ... */
2017 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2019 static int pthread_init_lock (MLOCK_T *lk) {
2020   pthread_mutexattr_t attr;
2021   if (pthread_mutexattr_init(&attr)) return 1;
2022 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2023   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
#endif
2025   if (pthread_mutex_init(lk, &attr)) return 1;
2026   if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}
2030 #endif /* ... lock types ... */
2032 /* Common code for all lock types */
2033 #define USE_LOCK_BIT (2U)
2035 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2036 #define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex);
#endif

2039 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2040 #define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex);
#endif
2043 #endif /* USE_LOCKS */
2045 /* ----------------------- Chunk representations ------------------------ */
2048 (The following includes lightly edited explanations by Colin Plumb.)
2050 The malloc_chunk declaration below is misleading (but accurate and
2051 necessary). It declares a "view" into memory allowing access to
2052 necessary fields at known offsets from a given base.
2054 Chunks of memory are maintained using a `boundary tag' method as
2055 originally described by Knuth. (See the paper by Paul Wilson
2056 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2057 techniques.) Sizes of free chunks are stored both in the front of
2058 each chunk and at the end. This makes consolidating fragmented
2059 chunks into bigger chunks fast. The head fields also hold bits
2060 representing whether chunks are free or in use.
2062 Here are some pictures to make it clearer. They are "exploded" to
2063 show that the state of a chunk can be thought of as extending from
2064 the high 31 bits of the head field of its header through the
2065 prev_foot and PINUSE_BIT bit of the following chunk header.
2067 A chunk that's in use looks like:
2069 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2070 | Size of previous chunk (if P = 0) |
2071 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2072 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2073 | Size of this chunk 1| +-+
2074 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2080 +- size - sizeof(size_t) available payload bytes -+
2084 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2085 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2086 | Size of next chunk (may or may not be in use) | +-+
2087 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2089 And if it's free, it looks like this:
2092 | User payload (must be in use, or we would have merged!) |
2093 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2094 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2095 | Size of this chunk 0| +-+
2096 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2098 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2100 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2102 +- size - sizeof(struct chunk) unused bytes -+
2104 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2105 | Size of this chunk |
2106 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2107 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2108 | Size of next chunk (must be in use, or we would have merged)| +-+
2109 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2113 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116 Note that since we always merge adjacent free chunks, the chunks
2117 adjacent to a free chunk must be in use.
2119 Given a pointer to a chunk (which can be derived trivially from the
2120 payload pointer) we can, in O(1) time, find out whether the adjacent
2121 chunks are free, and if so, unlink them from the lists that they
2122 are on and merge them with the current chunk.
2124 Chunks always begin on even word boundaries, so the mem portion
2125 (which is returned to the user) is also on an even word boundary, and
2126 thus at least double-word aligned.
2128 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2129 chunk size (which is always a multiple of two words), is an in-use
2130 bit for the *previous* chunk. If that bit is *clear*, then the
2131 word before the current chunk size contains the previous chunk
2132 size, and can be used to find the front of the previous chunk.
2133 The very first chunk allocated always has this bit set, preventing
2134 access to non-existent (or non-owned) memory. If pinuse is set for
2135 any given chunk, then you CANNOT determine the size of the
2136 previous chunk, and might even get a memory addressing fault when trying to do so.
2139 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2140 the chunk size redundantly records whether the current chunk is
2141 inuse (unless the chunk is mmapped). This redundancy enables usage
2142 checks within free and realloc, and reduces indirection when freeing
2143 and consolidating chunks.
2145 Each freshly allocated chunk must have both cinuse and pinuse set.
2146 That is, each allocated chunk borders either a previously allocated
2147 and still in-use chunk, or the base of its memory arena. This is
2148 ensured by making all allocations from the `lowest' part of any
2149 found chunk. Further, no free chunk physically borders another one,
2150 so each free chunk is known to be preceded and followed by either
2151 inuse chunks or the ends of memory.
2153 Note that the `foot' of the current chunk is actually represented
2154 as the prev_foot of the NEXT chunk. This makes it easier to
2155 deal with alignments etc but can be very confusing when trying
2156 to extend or adapt this code.
2158 The exceptions to all this are
2160 1. The special chunk `top' is the top-most available chunk (i.e.,
2161 the one bordering the end of available memory). It is treated
2162 specially. Top is never included in any bin, is used only if
2163 no other chunk is available, and is released back to the
2164 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2165 the top chunk is treated as larger (and thus less well
2166 fitting) than any other available chunk. The top chunk
2167 doesn't update its trailing size field since there is no next
2168 contiguous chunk that would have to index off it. However,
2169 space is still allocated for it (TOP_FOOT_SIZE) to enable
2170 separation or merging when space is extended.
2172 3. Chunks allocated via mmap have both cinuse and pinuse bits
2173 cleared in their head fields. Because they are allocated
2174 one-by-one, each must carry its own prev_foot field, which is
2175 also used to hold the offset this chunk has within its mmapped
2176 region, which is needed to preserve alignment. Each mmapped
2177 chunk is trailed by the first two fields of a fake next-chunk
2178 for sake of usage checks.
2182 struct malloc_chunk {
2183   size_t               prev_foot;  /* Size of previous chunk (if free). */
2184   size_t               head;       /* Size and inuse bits. */
2185   struct malloc_chunk* fd;         /* double links -- used only if free. */
2186   struct malloc_chunk* bk;
};

2189 typedef struct malloc_chunk  mchunk;
2190 typedef struct malloc_chunk* mchunkptr;
2191 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2192 typedef unsigned int bindex_t;         /* Described below */
2193 typedef unsigned int binmap_t;         /* Described below */
2194 typedef unsigned int flag_t;           /* The type of various bit flag sets */
2196 /* ------------------- Chunks sizes and alignments ----------------------- */
2198 #define MCHUNK_SIZE (sizeof(mchunk))
2201 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2203 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2204 #endif /* FOOTERS */
2206 /* MMapped chunks need a second word of overhead ... */
2207 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2208 /* ... and additional padding for fake next-chunk at foot */
2209 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2211 /* The smallest size we can malloc is an aligned minimal chunk */
2212 #define MIN_CHUNK_SIZE\
2213 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2215 /* conversion from malloc headers to user pointers, and back */
2216 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2217 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2218 /* chunk associated with aligned address A */
2219 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2221 /* Bounds on request (not chunk) sizes. */
2222 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2223 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2225 /* pad request bytes into a usable size */
2226 #define pad_request(req) \
2227 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2229 /* pad request, checking for minimum (but not maximum) */
2230 #define request2size(req) \
2231 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
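/*
  Illustrative arithmetic (assuming 4-byte size_t, MALLOC_ALIGNMENT == 8 and
  FOOTERS == 0, so CHUNK_OVERHEAD == 4, MIN_CHUNK_SIZE == 16, MIN_REQUEST == 11):
    pad_request(20)  == (20 + 4 + 7) & ~7 == 24   -- one overhead word, no padding
    pad_request(21)  == (21 + 4 + 7) & ~7 == 32   -- rounded up to the next multiple of 8
    request2size(3)  == MIN_CHUNK_SIZE   == 16    -- below MIN_REQUEST, rounded up
  The chunk size always includes the hidden head word, which is why the usable
  payload reported by malloc_usable_size may exceed the original request.
*/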
2234 /* ------------------ Operations on head and foot fields ----------------- */
2237 The head field of a chunk is or'ed with PINUSE_BIT when previous
2238 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2239 use, unless mmapped, in which case both bits are cleared.
2241 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2244 #define PINUSE_BIT (SIZE_T_ONE)
2245 #define CINUSE_BIT (SIZE_T_TWO)
2246 #define FLAG4_BIT (SIZE_T_FOUR)
2247 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2248 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2250 /* Head value for fenceposts */
2251 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2253 /* extraction of fields from head words */
2254 #define cinuse(p) ((p)->head & CINUSE_BIT)
2255 #define pinuse(p) ((p)->head & PINUSE_BIT)
2256 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2257 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2258 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2260 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2262 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2263 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2264 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
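/*
  Illustrative head-field values (assuming the defaults above): an in-use
  48-byte chunk whose previous neighbor is also in use has
  head == 48 | CINUSE_BIT | PINUSE_BIT == 51, so chunksize(p) == 48,
  cinuse(p) and pinuse(p) are nonzero, and is_mmapped(p) is false.  A
  directly mmapped chunk of the same size has head == 48 with both bits clear,
  so is_mmapped(p) is true.
*/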
2266 /* Treat space at ptr +/- offset as a chunk */
2267 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2268 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2270 /* Ptr to next or previous physical malloc_chunk. */
2271 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2272 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2274 /* extract next chunk's pinuse bit */
2275 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2277 /* Get/set size at footer */
2278 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2279 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2281 /* Set size, pinuse bit, and foot */
2282 #define set_size_and_pinuse_of_free_chunk(p, s)\
2283 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2285 /* Set size, pinuse bit, foot, and clear next pinuse */
2286 #define set_free_with_pinuse(p, s, n)\
2287 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2289 /* Get the internal overhead associated with chunk p */
2290 #define overhead_for(p)\
2291 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2293 /* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
2295 #define calloc_must_clear(p) (!is_mmapped(p))
2296 #else /* MMAP_CLEARS */
2297 #define calloc_must_clear(p) (1)
2298 #endif /* MMAP_CLEARS */
2300 /* ---------------------- Overlaid data structures ----------------------- */
2303 When chunks are not in use, they are treated as nodes of either
lists or trees.

2306 "Small" chunks are stored in circular doubly-linked lists, and look
like this:
2309 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2310 | Size of previous chunk |
2311 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2312 `head:' | Size of chunk, in bytes |P|
2313 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2314 | Forward pointer to next chunk in list |
2315 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2316 | Back pointer to previous chunk in list |
2317 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2318 | Unused space (may be 0 bytes long) .
2321 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2322 `foot:' | Size of chunk, in bytes |
2323 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2325 Larger chunks are kept in a form of bitwise digital trees (aka
2326 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2327 free chunks greater than 256 bytes, their size doesn't impose any
2328 constraints on user chunk sizes. Each node looks like:
2330 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2331 | Size of previous chunk |
2332 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333 `head:' | Size of chunk, in bytes |P|
2334 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335 | Forward pointer to next chunk of same size |
2336 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2337 | Back pointer to previous chunk of same size |
2338 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2339 | Pointer to left child (child[0]) |
2340 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341 | Pointer to right child (child[1]) |
2342 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343 | Pointer to parent |
2344 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2345 | bin index of this chunk |
2346 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2349 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 `foot:' | Size of chunk, in bytes |
2351 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2353 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2354 of the same size are arranged in a circularly-linked list, with only
2355 the oldest chunk (the next to be used, in our FIFO ordering)
2356 actually in the tree. (Tree members are distinguished by a non-null
2357 parent pointer.) If a chunk of the same size as an existing node
2358 is inserted, it is linked off the existing node using pointers that
2359 work in the same way as fd/bk pointers of small chunks.
2361 Each tree contains a power of 2 sized range of chunk sizes (the
2362 smallest is 0x100 <= x < 0x180), which is divided in half at each
2363 tree level, with the chunks in the smaller half of the range (0x100
2364 <= x < 0x140 for the top node) in the left subtree and the larger
2365 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2366 done by inspecting individual bits.
2368 Using these rules, each node's left subtree contains all smaller
2369 sizes than its right subtree. However, the node at the root of each
2370 subtree has no particular ordering relationship to either. (The
2371 dividing line between the subtree sizes is based on trie relation.)
2372 If we remove the last chunk of a given size from the interior of the
2373 tree, we need to replace it with a leaf node. The tree ordering
2374 rules permit a node to be replaced by any leaf below it.
2376 The smallest chunk in a tree (a common operation in a best-fit
2377 allocator) can be found by walking a path to the leftmost leaf in
2378 the tree. Unlike a usual binary tree, where we follow left child
2379 pointers until we reach a null, here we follow the right child
2380 pointer any time the left one is null, until we reach a leaf with
2381 both child pointers null. The smallest chunk in the tree will be
2382 somewhere along that path.
2384 The worst case number of steps to add, find, or remove a node is
2385 bounded by the number of bits differentiating chunks within
2386 bins. Under current bin calculations, this ranges from 6 up to 21
2387 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2388 is of course much better.
2391 struct malloc_tree_chunk {
2392   /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
2395   struct malloc_tree_chunk* fd;
2396   struct malloc_tree_chunk* bk;

2398   struct malloc_tree_chunk* child[2];
2399   struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

2403 typedef struct malloc_tree_chunk  tchunk;
2404 typedef struct malloc_tree_chunk* tchunkptr;
2405 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2407 /* A little helper macro for trees */
2408 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
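/*
  A minimal sketch of the "leftmost path" walk described above (illustrative
  only; the helper name is hypothetical and not used by this malloc, which
  performs the equivalent loop inline):

    static tchunkptr smallest_in_tree(tchunkptr t) {
      tchunkptr least = t;
      while (t != 0) {
        if (chunksize(t) < chunksize(least))
          least = t;               -- smallest chunk lies somewhere on this path
        t = leftmost_child(t);     -- right child is taken when left is null
      }
      return least;
    }
*/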
2410 /* ----------------------------- Segments -------------------------------- */
2413 Each malloc space may include non-contiguous segments, held in a
2414 list headed by an embedded malloc_segment record representing the
2415 top-most space. Segments also include flags holding properties of
2416 the space. Large chunks that are directly allocated by mmap are not
2417 included in this list. They are instead independently created and
2418 destroyed without otherwise keeping track of them.
2420 Segment management mainly comes into play for spaces allocated by
2421 MMAP. Any call to MMAP might or might not return memory that is
2422 adjacent to an existing segment. MORECORE normally contiguously
2423 extends the current space, so this space is almost always adjacent,
2424 which is simpler and faster to deal with. (This is why MORECORE is
2425 used preferentially to MMAP when both are available -- see
2426 sys_alloc.) When allocating using MMAP, we don't use any of the
2427 hinting mechanisms (inconsistently) supported in various
2428 implementations of unix mmap, or distinguish reserving from
2429 committing memory. Instead, we just ask for space, and exploit
2430 contiguity when we get it. It is probably possible to do
2431 better than this on some systems, but no general scheme seems
2432 to be significantly better.
2434 Management entails a simpler variant of the consolidation scheme
2435 used for chunks to reduce fragmentation -- new adjacent memory is
2436 normally prepended or appended to an existing segment. However,
2437 there are limitations compared to chunk consolidation that mostly
2438 reflect the fact that segment processing is relatively infrequent
2439 (occurring only when getting memory from system) and that we
2440 don't expect to have huge numbers of segments:
2442 * Segments are not indexed, so traversal requires linear scans. (It
2443 would be possible to index these, but is not worth the extra
2444 overhead and complexity for most programs on most platforms.)
2445 * New segments are only appended to old ones when holding top-most
2446 memory; if they cannot be prepended to others, they are held in
2449 Except for the top-most segment of an mstate, each segment record
2450 is kept at the tail of its segment. Segments are added by pushing
2451 segment records onto the list headed by &mstate.seg for the containing mstate.
2454 Segment flags control allocation/merge/deallocation policies:
2455 * If EXTERN_BIT set, then we did not allocate this segment,
2456 and so should not try to deallocate or merge with others.
2457 (This currently holds only for the initial segment passed
2458 into create_mspace_with_base.)
2459 * If USE_MMAP_BIT set, the segment may be merged with
2460 other surrounding mmapped segments and trimmed/de-allocated using munmap.
2462 * If neither bit is set, then the segment was obtained using
2463 MORECORE so can be merged with surrounding MORECORE'd segments
2464 and deallocated/trimmed using MORECORE with negative arguments.
2467 struct malloc_segment {
2468   char*        base;             /* base address */
2469   size_t       size;             /* allocated size */
2470   struct malloc_segment* next;   /* ptr to next segment */
2471   flag_t       sflags;           /* mmap and extern flag */
};

2474 #define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
2475 #define is_extern_segment(S)  ((S)->sflags & EXTERN_BIT)

2477 typedef struct malloc_segment  msegment;
2478 typedef struct malloc_segment* msegmentptr;
2480 /* ---------------------------- malloc_state ----------------------------- */
2483 A malloc_state holds all of the bookkeeping for a space.
2484 The main fields are:
2487 The topmost chunk of the currently active segment. Its size is
2488 cached in topsize. The actual size of topmost space is
2489 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2490 fenceposts and segment records if necessary when getting more
2491 space from the system. The size at which to autotrim top is
2492 cached from mparams in trim_check, except that it is disabled if an autotrim fails.
2495 Designated victim (dv)
2496 This is the preferred chunk for servicing small requests that
2497 don't have exact fits. It is normally the chunk split off most
2498 recently to service another small request. Its size is cached in
2499 dvsize. The link fields of this chunk are not maintained since it
2500 is not kept in a bin.
2503 An array of bin headers for free chunks. These bins hold chunks
2504 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2505 chunks of all the same size, spaced 8 bytes apart. To simplify
2506 use in double-linked lists, each bin header acts as a malloc_chunk
2507 pointing to the real first node, if it exists (else pointing to
2508 itself). This avoids special-casing for headers. But to avoid
2509 waste, we allocate only the fd/bk pointers of bins, and then use
2510 repositioning tricks to treat these as the fields of a chunk.
2513 Treebins are pointers to the roots of trees holding a range of
2514 sizes. There are 2 equally spaced treebins for each power of two
2515 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything larger.
2519 There is one bit map for small bins ("smallmap") and one for
2520 treebins ("treemap). Each bin sets its bit when non-empty, and
2521 clears the bit when empty. Bit operations are then used to avoid
2522 bin-by-bin searching -- nearly all "search" is done without ever
2523 looking at bins that won't be selected. The bit maps
2524 conservatively use 32 bits per map word, even on 64-bit systems.
2525 For a good description of some of the bit-based techniques used
2526 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2527 supplement at http://hackersdelight.org/). Many of these are
2528 intended to reduce the branchiness of paths through malloc etc, as
2529 well as to reduce the number of memory locations read or written.
2532 A list of segments headed by an embedded malloc_segment record
2533 representing the initial space.
2535 Address check support
2536 The least_addr field is the least address ever obtained from
2537 MORECORE or MMAP. Attempted frees and reallocs of any address less
2538 than this are trapped (unless INSECURE is defined).
2541 A cross-check field that should always hold the same value as mparams.magic.
2543 Max allowed footprint
2544 The maximum allowed bytes to allocate from system (zero means no limit)
2547 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2550 Each space keeps track of current and maximum system memory
2551 obtained via MORECORE or MMAP.
2554 Fields holding the amount of unused topmost memory that should trigger
2555 trimming, and a counter to force periodic scanning to release unused
2556 non-topmost segments.
2559 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2560 around every public call using this mspace.
2563 A void* pointer and a size_t field that can be used to help implement
2564 extensions to this malloc.
2567 /* Bin types, widths and sizes */
2568 #define NSMALLBINS (32U)
2569 #define NTREEBINS (32U)
2570 #define SMALLBIN_SHIFT (3U)
2571 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2572 #define TREEBIN_SHIFT (8U)
2573 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2574 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2575 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2577 struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
2586   size_t     release_checks;
  size_t     magic;
2588   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2589   tbinptr    treebins[NTREEBINS];
  size_t     footprint;
2591   size_t     max_footprint;
2592   size_t     footprint_limit; /* zero means no limit */
  flag_t     mflags;
#if USE_LOCKS
2595   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2596 #endif /* USE_LOCKS */
  msegment   seg;
2598   void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};

2602 typedef struct malloc_state* mstate;
2604 /* ------------- Global malloc_state and malloc_params ------------------- */
2607 malloc_params holds global properties, including those that can be
2608 dynamically set using mallopt. There is a single instance, mparams,
2609 initialized in init_mparams. Note that the non-zeroness of "magic"
2610 also serves as an initialization flag.
2613 struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
2617   size_t mmap_threshold;
2618   size_t trim_threshold;
2619   flag_t default_mflags;
};

2622 static struct malloc_params mparams;
2624 /* Ensure mparams initialized */
2625 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2629 /* The global malloc_state used for all non-"mspace" calls */
2630 static struct malloc_state _gm_;
#define gm (&_gm_)
2632 #define is_global(M) ((M) == &_gm_)
2634 #endif /* !ONLY_MSPACES */
2636 #define is_initialized(M) ((M)->top != 0)
2638 /* -------------------------- system alloc setup ------------------------- */
2640 /* Operations on mflags */
2642 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2643 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
#if USE_LOCKS
2645 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
#else
2647 #define disable_lock(M)
#endif

2650 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2651 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
#if HAVE_MMAP
2653 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
#else
2655 #define disable_mmap(M)
#endif
2658 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2659 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2661 #define set_lock(M,L)\
2662 ((M)->mflags = (L)?\
2663 ((M)->mflags | USE_LOCK_BIT) :\
2664 ((M)->mflags & ~USE_LOCK_BIT))
2666 /* page-align a size */
2667 #define page_align(S)\
2668 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2670 /* granularity-align a size */
2671 #define granularity_align(S)\
2672 (((S) + (mparams.granularity - SIZE_T_ONE))\
2673 & ~(mparams.granularity - SIZE_T_ONE))
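/*
  Illustrative arithmetic (assuming mparams.page_size == 4096):
    page_align(4096) == 4096
    page_align(4097) == 8192
  i.e. sizes are rounded up to the next page boundary before being requested
  from the system; granularity_align behaves the same way with
  mparams.granularity as the unit.
*/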
2676 /* For mmap, use granularity alignment on windows, else page-align */
#ifdef WIN32
2678 #define mmap_align(S) granularity_align(S)
#else
2680 #define mmap_align(S) page_align(S)
#endif
2683 /* For sys_alloc, enough padding to ensure can malloc request on success */
2684 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2686 #define is_page_aligned(S)\
2687 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2688 #define is_granularity_aligned(S)\
2689 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2691 /* True if segment S holds address A */
2692 #define segment_holds(S, A)\
2693 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2695 /* Return segment holding given address */
2696 static msegmentptr segment_holding(mstate m, char* addr) {
2697   msegmentptr sp = &m->seg;
  for (;;) {
2699     if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
2701     if ((sp = sp->next) == 0)
      return 0;
  }
}
2706 /* Return true if segment contains a segment link */
2707 static int has_segment_link(mstate m, msegmentptr ss) {
2708   msegmentptr sp = &m->seg;
  for (;;) {
2710     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
2712     if ((sp = sp->next) == 0)
      return 0;
  }
}
2717 #ifndef MORECORE_CANNOT_TRIM
2718 #define should_trim(M,s) ((s) > (M)->trim_check)
2719 #else /* MORECORE_CANNOT_TRIM */
2720 #define should_trim(M,s) (0)
2721 #endif /* MORECORE_CANNOT_TRIM */
2724 TOP_FOOT_SIZE is padding at the end of a segment, including space
2725 that may be needed to place segment records and fenceposts when new
2726 noncontiguous segments are added.
2728 #define TOP_FOOT_SIZE\
2729 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2732 /* ------------------------------- Hooks -------------------------------- */
2735 PREACTION should be defined to return 0 on success, and nonzero on
2736 failure. If you are not using locking, you can redefine these to do
anything you like.

#if USE_LOCKS
2741 #define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2742 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2743 #else /* USE_LOCKS */

#ifndef PREACTION
2746 #define PREACTION(M) (0)
2747 #endif /* PREACTION */

#ifndef POSTACTION
2750 #define POSTACTION(M)
2751 #endif /* POSTACTION */
2753 #endif /* USE_LOCKS */
2756 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2757 USAGE_ERROR_ACTION is triggered on detected bad frees and
2758 reallocs. The argument p is an address that might have triggered the
2759 fault. It is ignored by the two predefined actions, but might be
2760 useful in custom actions that try to help diagnose errors.
2763 #if PROCEED_ON_ERROR
2765 /* A count of the number of corruption errors causing resets */
2766 int malloc_corruption_error_count
;
2768 /* default corruption action */
2769 static void reset_on_error(mstate m
);
2771 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2772 #define USAGE_ERROR_ACTION(m, p)
2774 #else /* PROCEED_ON_ERROR */
2776 #ifndef CORRUPTION_ERROR_ACTION
2777 #define CORRUPTION_ERROR_ACTION(m) ABORT
2778 #endif /* CORRUPTION_ERROR_ACTION */
2780 #ifndef USAGE_ERROR_ACTION
2781 #define USAGE_ERROR_ACTION(m,p) ABORT
2782 #endif /* USAGE_ERROR_ACTION */
2784 #endif /* PROCEED_ON_ERROR */
2787 /* -------------------------- Debugging setup ---------------------------- */
#if ! DEBUG
2791 #define check_free_chunk(M,P)
2792 #define check_inuse_chunk(M,P)
2793 #define check_malloced_chunk(M,P,N)
2794 #define check_mmapped_chunk(M,P)
2795 #define check_malloc_state(M)
2796 #define check_top_chunk(M,P)
#else /* DEBUG */
2799 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2800 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2801 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2802 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2803 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2804 #define check_malloc_state(M) do_check_malloc_state(M)
2806 static void   do_check_any_chunk(mstate m, mchunkptr p);
2807 static void   do_check_top_chunk(mstate m, mchunkptr p);
2808 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2809 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2810 static void   do_check_free_chunk(mstate m, mchunkptr p);
2811 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2812 static void   do_check_tree(mstate m, tchunkptr t);
2813 static void   do_check_treebin(mstate m, bindex_t i);
2814 static void   do_check_smallbin(mstate m, bindex_t i);
2815 static void   do_check_malloc_state(mstate m);
2816 static int    bin_find(mstate m, mchunkptr x);
2817 static size_t traverse_and_check(mstate m);
#endif /* DEBUG */
2820 /* ---------------------------- Indexing Bins ---------------------------- */
2822 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2823 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2824 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2825 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2827 /* addressing by index. See above about smallbin repositioning */
2828 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2829 #define treebin_at(M,i) (&((M)->treebins[i]))
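/*
  Illustrative arithmetic (assuming 16-byte MIN_CHUNK_SIZE, i.e. 4-byte
  size_t): a 40-byte chunk is "small" (40 >> 3 == 5 < 32), lives in smallbin
  index 5, and small_index2size(5) == 40 recovers its size.
  MIN_SMALL_INDEX == small_index(16) == 2.
*/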
2831 /* assign tree index for size S to variable I. Use x86 asm if possible */
2832 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2833 #define compute_tree_index(S, I)\
{\
2835   unsigned int X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
2838   else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
2841     unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2842     I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
2846 #elif defined (__INTEL_COMPILER)
2847 #define compute_tree_index(S, I)\
2849 size_t X = S >> TREEBIN_SHIFT;\
2852 else if (X > 0xFFFF)\
2855 unsigned int K = _bit_scan_reverse (X); \
2856 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2860 #elif defined(_MSC_VER) && _MSC_VER>=1300
2861 #define compute_tree_index(S, I)\
2863 size_t X = S >> TREEBIN_SHIFT;\
2866 else if (X > 0xFFFF)\
2870 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2871 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2876 #define compute_tree_index(S, I)\
2878 size_t X = S >> TREEBIN_SHIFT;\
2881 else if (X > 0xFFFF)\
2884 unsigned int Y = (unsigned int)X;\
2885 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2886 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2888 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2889 K = 14 - N + ((Y <<= K) >> 15);\
2890 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2895 /* Bit representing maximum resolved size in a treebin at i */
2896 #define bit_for_tree_index(i) \
2897 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2899 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2900 #define leftshift_for_tree_index(i) \
2901 ((i == NTREEBINS-1)? 0 : \
2902 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2904 /* The size of the smallest chunk held in bin with index i */
2905 #define minsize_for_tree_index(i) \
2906 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2907 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
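/*
  Worked example (TREEBIN_SHIFT == 8): for a 768-byte chunk, X = 768 >> 8 = 3
  and K = 1 (index of the highest set bit of X), so
  I = (1 << 1) + ((768 >> 8) & 1) = 3.  Consistently,
  minsize_for_tree_index(3) == 768 and minsize_for_tree_index(4) == 1024,
  so treebin 3 holds sizes 768..1023.
*/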
2910 /* ------------------------ Operations on bin maps ----------------------- */
2912 /* bit corresponding to given index */
2913 #define idx2bit(i) ((binmap_t)(1) << (i))
2915 /* Mark/Clear bits with given index */
2916 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2917 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2918 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2920 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2921 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2922 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2924 /* isolate the least set bit of a bitmap */
2925 #define least_bit(x) ((x) & -(x))
2927 /* mask with all bits to left of least bit of x on */
2928 #define left_bits(x) ((x<<1) | -(x<<1))
2930 /* mask with all bits to left of or equal to least bit of x on */
2931 #define same_or_left_bits(x) ((x) | -(x))
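/*
  Illustrative bit tricks: for a map x == 0x68 (bits 3, 5 and 6 set),
  least_bit(x) == 0x08 isolates the lowest set bit; left_bits(0x08) == ~0x0F
  selects every bin strictly above index 3, and same_or_left_bits(0x08) == ~0x07
  also includes index 3 itself.  This is how a search jumps straight to the
  first non-empty bin at least as large as a request, without scanning bins.
*/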
2933 /* index corresponding to given bit. Use x86 asm if possible */
2935 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2936 #define compute_bit2idx(X, I)\
2939 J = __builtin_ctz(X); \
2943 #elif defined (__INTEL_COMPILER)
2944 #define compute_bit2idx(X, I)\
2947 J = _bit_scan_forward (X); \
2951 #elif defined(_MSC_VER) && _MSC_VER>=1300
2952 #define compute_bit2idx(X, I)\
2955 _BitScanForward((DWORD *) &J, X);\
2959 #elif USE_BUILTIN_FFS
2960 #define compute_bit2idx(X, I) I = ffs(X)-1
2963 #define compute_bit2idx(X, I)\
2965 unsigned int Y = X - 1;\
2966 unsigned int K = Y >> (16-4) & 16;\
2967 unsigned int N = K; Y >>= K;\
2968 N += K = Y >> (8-3) & 8; Y >>= K;\
2969 N += K = Y >> (4-2) & 4; Y >>= K;\
2970 N += K = Y >> (2-1) & 2; Y >>= K;\
2971 N += K = Y >> (1-0) & 1; Y >>= K;\
2972 I = (bindex_t)(N + Y);\
2977 /* ----------------------- Runtime Check Support ------------------------- */
2980 For security, the main invariant is that malloc/free/etc never
2981 writes to a static address other than malloc_state, unless static
2982 malloc_state itself has been corrupted, which cannot occur via
2983 malloc (because of these checks). In essence this means that we
2984 believe all pointers, sizes, maps etc held in malloc_state, but
2985 check all of those linked or offsetted from other embedded data
2986 structures. These checks are interspersed with main code in a way
2987 that tends to minimize their run-time cost.
2989 When FOOTERS is defined, in addition to range checking, we also
2990 verify footer fields of inuse chunks, which can be used to guarantee
2991 that the mstate controlling malloc/free is intact. This is a
2992 streamlined version of the approach described by William Robertson
2993 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2994 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2995 of an inuse chunk holds the xor of its mstate and a random seed,
2996 that is checked upon calls to free() and realloc(). This is
2997 (probabilistically) unguessable from outside the program, but can be
2998 computed by any code successfully malloc'ing any chunk, so does not
2999 itself provide protection against code that has already broken
3000 security through some other means. Unlike Robertson et al, we
3001 always dynamically check addresses of all offset chunks (previous,
3002 next, etc). This turns out to be cheaper than relying on hashes.
3006 /* Check if address a is at least as high as any from MORECORE or MMAP */
3007 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3008 /* Check if address of next chunk n is higher than base chunk p */
3009 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3010 /* Check if p has inuse status */
3011 #define ok_inuse(p) is_inuse(p)
3012 /* Check if p has its pinuse bit on */
3013 #define ok_pinuse(p) pinuse(p)
3015 #else /* !INSECURE */
3016 #define ok_address(M, a) (1)
3017 #define ok_next(b, n) (1)
3018 #define ok_inuse(p) (1)
3019 #define ok_pinuse(p) (1)
3020 #endif /* !INSECURE */
3022 #if (FOOTERS && !INSECURE)
3023 /* Check if (alleged) mstate m has expected magic field */
3024 #define ok_magic(M) ((M)->magic == mparams.magic)
3025 #else /* (FOOTERS && !INSECURE) */
3026 #define ok_magic(M) (1)
3027 #endif /* (FOOTERS && !INSECURE) */
3029 /* In gcc, use __builtin_expect to minimize impact of checks */
3031 #if defined(__GNUC__) && __GNUC__ >= 3
3032 #define RTCHECK(e) __builtin_expect(e, 1)
3034 #define RTCHECK(e) (e)
3036 #else /* !INSECURE */
3037 #define RTCHECK(e) (1)
3038 #endif /* !INSECURE */
3040 /* macros to set up inuse chunks with or without footers */
3044 #define mark_inuse_foot(M,p,s)
3046 /* Macros for setting head/foot of non-mmapped chunks */
3048 /* Set cinuse bit and pinuse bit of next chunk */
3049 #define set_inuse(M,p,s)\
3050 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3051 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3053 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3054 #define set_inuse_and_pinuse(M,p,s)\
3055 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3056 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3058 /* Set size, cinuse and pinuse bit of this chunk */
3059 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3060 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3064 /* Set foot of inuse chunk to be xor of mstate and seed */
3065 #define mark_inuse_foot(M,p,s)\
3066 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3068 #define get_mstate_for(p)\
3069 ((mstate)(((mchunkptr)((char*)(p) +\
3070 (chunksize(p))))->prev_foot ^ mparams.magic))
3072 #define set_inuse(M,p,s)\
3073 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3074 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3075 mark_inuse_foot(M,p,s))
3077 #define set_inuse_and_pinuse(M,p,s)\
3078 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3079 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3080 mark_inuse_foot(M,p,s))
3082 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3083 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3084 mark_inuse_foot(M, p, s))
3086 #endif /* !FOOTERS */
3088 /* ---------------------------- setting mparams -------------------------- */
3091 static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
3092 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3093 static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
3094 #endif /* LOCK_AT_FORK */
3096 /* Initialize mparams */
3097 static int init_mparams(void) {
3098 #ifdef NEED_GLOBAL_LOCK_INIT
3099   if (malloc_global_mutex_status <= 0)
3100 init_malloc_global_mutex();
3103 ACQUIRE_MALLOC_GLOBAL_LOCK();
3104   if (mparams.magic == 0) {
3110     psize = malloc_getpagesize;
3111     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3114     SYSTEM_INFO system_info;
3115     GetSystemInfo(&system_info);
3116     psize = system_info.dwPageSize;
3117     gsize = ((DEFAULT_GRANULARITY != 0)?
3118              DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3122 /* Sanity-check configuration:
3123 size_t must be unsigned and as wide as pointer type.
3124 ints must be at least 4 bytes.
3125 alignment must be at least 8.
3126 Alignment, min chunk size, and page size must all be powers of 2.
3128 if ((sizeof(size_t) != sizeof(char*)) ||
3129         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3130         (sizeof(int) < 4)  ||
3131         (MALLOC_ALIGNMENT < (size_t)8U) ||
3132         ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3133         ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
3134         ((gsize & (gsize-SIZE_T_ONE)) != 0) ||
3135         ((psize & (psize-SIZE_T_ONE)) != 0))
3137     mparams.granularity = gsize;
3138     mparams.page_size = psize;
3139     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3140     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3141 #if MORECORE_CONTIGUOUS
3142     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3143 #else  /* MORECORE_CONTIGUOUS */
3144     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3145 #endif /* MORECORE_CONTIGUOUS */
3148 /* Set up lock for main malloc area */
3149     gm->mflags = mparams.default_mflags;
3150     (void)INITIAL_LOCK(&gm->mutex);
3153     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3159       unsigned char buf[sizeof(size_t)];
3160       /* Try to use /dev/urandom, else fall back on using time */
3161       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3162           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3163         magic = *((size_t *) buf);
3167 #endif /* USE_DEV_RANDOM */
3169       magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3170 #elif defined(LACKS_TIME_H)
3171       magic = (size_t)&magic ^ (size_t)0x55555555U;
3173       magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3175       magic |= (size_t)8U;    /* ensure nonzero */
3176       magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3177       /* Until memory modes commonly available, use volatile-write */
3178       (*(volatile size_t *)(&(mparams.magic))) = magic;
3182 RELEASE_MALLOC_GLOBAL_LOCK();
3186 /* support for mallopt */
3187 static int change_mparam(int param_number, int value) {
3189   ensure_initialization();
3190   val = (value == -1)? MAX_SIZE_T : (size_t)value;
3191   switch(param_number) {
3192   case M_TRIM_THRESHOLD:
3193     mparams.trim_threshold = val;
3196     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3197       mparams.granularity = val;
3202   case M_MMAP_THRESHOLD:
3203     mparams.mmap_threshold = val;
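/*
  Illustrative usage sketch (hypothetical values): the exported dlmallopt,
  defined near the end of this file, relays to change_mparam above, so the
  tunable parameters can be adjusted at runtime, e.g.:

    dlmallopt(M_TRIM_THRESHOLD, 128*1024);  // release top space above 128K
    dlmallopt(M_MMAP_THRESHOLD, 256*1024);  // directly mmap requests >= 256K
    dlmallopt(M_GRANULARITY,    64*1024);   // power of two, >= the page size

  Each call returns 1 if the new value is accepted and 0 otherwise.
*/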
3211 /* ------------------------- Debugging Support --------------------------- */
3213 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3214 static void do_check_any_chunk(mstate m, mchunkptr p) {
3215   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3216   assert(ok_address(m, p));
3219 /* Check properties of top chunk */
3220 static void do_check_top_chunk(mstate m, mchunkptr p) {
3221   msegmentptr sp = segment_holding(m, (char*)p);
3222   size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3224   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3225   assert(ok_address(m, p));
3226   assert(sz == m->topsize);
3228   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3230   assert(!pinuse(chunk_plus_offset(p, sz)));
3233 /* Check properties of (inuse) mmapped chunks */
3234 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3235   size_t sz = chunksize(p);
3236   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3237   assert(is_mmapped(p));
3238   assert(use_mmap(m));
3239   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3240   assert(ok_address(m, p));
3241   assert(!is_small(sz));
3242   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3243   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3244   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3247 /* Check properties of inuse chunks */
3248 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3249   do_check_any_chunk(m, p);
3250   assert(is_inuse(p));
3251   assert(next_pinuse(p));
3252   /* If not pinuse and not mmapped, previous chunk has OK offset */
3253   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3255     do_check_mmapped_chunk(m, p);
3258 /* Check properties of free chunks */
3259 static void do_check_free_chunk(mstate m
, mchunkptr p
) {
3260 size_t sz
= chunksize(p
);
3261 mchunkptr next
= chunk_plus_offset(p
, sz
);
3262 do_check_any_chunk(m
, p
);
3263 assert(!is_inuse(p
));
3264 assert(!next_pinuse(p
));
3265 assert (!is_mmapped(p
));
3266 if (p
!= m
->dv
&& p
!= m
->top
) {
3267 if (sz
>= MIN_CHUNK_SIZE
) {
3268 assert((sz
& CHUNK_ALIGN_MASK
) == 0);
3269 assert(is_aligned(chunk2mem(p
)));
3270 assert(next
->prev_foot
== sz
);
3272 assert (next
== m
->top
|| is_inuse(next
));
3273 assert(p
->fd
->bk
== p
);
3274 assert(p
->bk
->fd
== p
);
3276 else /* markers are always of size SIZE_T_SIZE */
3277 assert(sz
== SIZE_T_SIZE
);
3281 /* Check properties of malloced chunks at the point they are malloced */
3282 static void do_check_malloced_chunk(mstate m
, void* mem
, size_t s
) {
3284 mchunkptr p
= mem2chunk(mem
);
3285 size_t sz
= p
->head
& ~INUSE_BITS
;
3286 do_check_inuse_chunk(m
, p
);
3287 assert((sz
& CHUNK_ALIGN_MASK
) == 0);
3288 assert(sz
>= MIN_CHUNK_SIZE
);
3290 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3291 assert(is_mmapped(p
) || sz
< (s
+ MIN_CHUNK_SIZE
));
3295 /* Check a tree and its subtrees. */
3296 static void do_check_tree(mstate m
, tchunkptr t
) {
3299 bindex_t tindex
= t
->index
;
3300 size_t tsize
= chunksize(t
);
3302 compute_tree_index(tsize
, idx
);
3303 assert(tindex
== idx
);
3304 assert(tsize
>= MIN_LARGE_SIZE
);
3305 assert(tsize
>= minsize_for_tree_index(idx
));
3306 assert((idx
== NTREEBINS
-1) || (tsize
< minsize_for_tree_index((idx
+1))));
3308 do { /* traverse through chain of same-sized nodes */
3309 do_check_any_chunk(m
, ((mchunkptr
)u
));
3310 assert(u
->index
== tindex
);
3311 assert(chunksize(u
) == tsize
);
3312 assert(!is_inuse(u
));
3313 assert(!next_pinuse(u
));
3314 assert(u
->fd
->bk
== u
);
3315 assert(u
->bk
->fd
== u
);
3316 if (u
->parent
== 0) {
3317 assert(u
->child
[0] == 0);
3318 assert(u
->child
[1] == 0);
3321 assert(head
== 0); /* only one node on chain has parent */
3323 assert(u
->parent
!= u
);
3324 assert (u
->parent
->child
[0] == u
||
3325 u
->parent
->child
[1] == u
||
3326 *((tbinptr
*)(u
->parent
)) == u
);
3327 if (u
->child
[0] != 0) {
3328 assert(u
->child
[0]->parent
== u
);
3329 assert(u
->child
[0] != u
);
3330 do_check_tree(m
, u
->child
[0]);
3332 if (u
->child
[1] != 0) {
3333 assert(u
->child
[1]->parent
== u
);
3334 assert(u
->child
[1] != u
);
3335 do_check_tree(m
, u
->child
[1]);
3337 if (u
->child
[0] != 0 && u
->child
[1] != 0) {
3338 assert(chunksize(u
->child
[0]) < chunksize(u
->child
[1]));
3346 /* Check all the chunks in a treebin. */
3347 static void do_check_treebin(mstate m
, bindex_t i
) {
3348 tbinptr
* tb
= treebin_at(m
, i
);
3350 int empty
= (m
->treemap
& (1U << i
)) == 0;
3354 do_check_tree(m
, t
);
3357 /* Check all the chunks in a smallbin. */
3358 static void do_check_smallbin(mstate m
, bindex_t i
) {
3359 sbinptr b
= smallbin_at(m
, i
);
3360 mchunkptr p
= b
->bk
;
3361 unsigned int empty
= (m
->smallmap
& (1U << i
)) == 0;
3365 for (; p
!= b
; p
= p
->bk
) {
3366 size_t size
= chunksize(p
);
3368 /* each chunk claims to be free */
3369 do_check_free_chunk(m
, p
);
3370 /* chunk belongs in bin */
3371 assert(small_index(size
) == i
);
3372 assert(p
->bk
== b
|| chunksize(p
->bk
) == chunksize(p
));
3373 /* chunk is followed by an inuse chunk */
3375 if (q
->head
!= FENCEPOST_HEAD
)
3376 do_check_inuse_chunk(m
, q
);
3381 /* Find x in a bin. Used in other check functions. */
3382 static int bin_find(mstate m
, mchunkptr x
) {
3383 size_t size
= chunksize(x
);
3384 if (is_small(size
)) {
3385 bindex_t sidx
= small_index(size
);
3386 sbinptr b
= smallbin_at(m
, sidx
);
3387 if (smallmap_is_marked(m
, sidx
)) {
3392 } while ((p
= p
->fd
) != b
);
3397 compute_tree_index(size
, tidx
);
3398 if (treemap_is_marked(m
, tidx
)) {
3399 tchunkptr t
= *treebin_at(m
, tidx
);
3400 size_t sizebits
= size
<< leftshift_for_tree_index(tidx
);
3401 while (t
!= 0 && chunksize(t
) != size
) {
3402 t
= t
->child
[(sizebits
>> (SIZE_T_BITSIZE
-SIZE_T_ONE
)) & 1];
3408 if (u
== (tchunkptr
)x
)
3410 } while ((u
= u
->fd
) != t
);
3417 /* Traverse each chunk and check it; return total */
3418 static size_t traverse_and_check(mstate m
) {
3420 if (is_initialized(m
)) {
3421 msegmentptr s
= &m
->seg
;
3422 sum
+= m
->topsize
+ TOP_FOOT_SIZE
;
3424 mchunkptr q
= align_as_chunk(s
->base
);
3425 mchunkptr lastq
= 0;
3427 while (segment_holds(s
, q
) &&
3428 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3429 sum
+= chunksize(q
);
3431 assert(!bin_find(m
, q
));
3432 do_check_inuse_chunk(m
, q
);
3435 assert(q
== m
->dv
|| bin_find(m
, q
));
3436 assert(lastq
== 0 || is_inuse(lastq
)); /* Not 2 consecutive free */
3437 do_check_free_chunk(m
, q
);
3449 /* Check all properties of malloc_state. */
3450 static void do_check_malloc_state(mstate m
) {
3454 for (i
= 0; i
< NSMALLBINS
; ++i
)
3455 do_check_smallbin(m
, i
);
3456 for (i
= 0; i
< NTREEBINS
; ++i
)
3457 do_check_treebin(m
, i
);
3459 if (m
->dvsize
!= 0) { /* check dv chunk */
3460 do_check_any_chunk(m
, m
->dv
);
3461 assert(m
->dvsize
== chunksize(m
->dv
));
3462 assert(m
->dvsize
>= MIN_CHUNK_SIZE
);
3463 assert(bin_find(m
, m
->dv
) == 0);
3466 if (m
->top
!= 0) { /* check top chunk */
3467 do_check_top_chunk(m
, m
->top
);
3468 /*assert(m->topsize == chunksize(m->top)); redundant */
3469 assert(m
->topsize
> 0);
3470 assert(bin_find(m
, m
->top
) == 0);
3473 total
= traverse_and_check(m
);
3474 assert(total
<= m
->footprint
);
3475 assert(m
->footprint
<= m
->max_footprint
);
3479 /* ----------------------------- statistics ------------------------------ */
3482 static struct mallinfo
internal_mallinfo(mstate m
) {
3483 struct mallinfo nm
= { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3484 ensure_initialization();
3485 if (!PREACTION(m
)) {
3486 check_malloc_state(m
);
3487 if (is_initialized(m
)) {
3488 size_t nfree
= SIZE_T_ONE
; /* top always free */
3489 size_t mfree
= m
->topsize
+ TOP_FOOT_SIZE
;
3491 msegmentptr s
= &m
->seg
;
3493 mchunkptr q
= align_as_chunk(s
->base
);
3494 while (segment_holds(s
, q
) &&
3495 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3496 size_t sz
= chunksize(q
);
3509 nm
.hblkhd
= m
->footprint
- sum
;
3510 nm
.usmblks
= m
->max_footprint
;
3511 nm
.uordblks
= m
->footprint
- mfree
;
3512 nm
.fordblks
= mfree
;
3513 nm
.keepcost
= m
->topsize
;
3520 #endif /* !NO_MALLINFO */
3522 #if !NO_MALLOC_STATS
3523 static void internal_malloc_stats(mstate m
) {
3524 ensure_initialization();
3525 if (!PREACTION(m
)) {
3529 check_malloc_state(m
);
3530 if (is_initialized(m
)) {
3531 msegmentptr s
= &m
->seg
;
3532 maxfp
= m
->max_footprint
;
3534 used
= fp
- (m
->topsize
+ TOP_FOOT_SIZE
);
3537 mchunkptr q
= align_as_chunk(s
->base
);
3538 while (segment_holds(s
, q
) &&
3539 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3541 used
-= chunksize(q
);
3547 POSTACTION(m
); /* drop lock */
3548 fprintf(stderr
, "max system bytes = %10lu\n", (unsigned long)(maxfp
));
3549 fprintf(stderr
, "system bytes = %10lu\n", (unsigned long)(fp
));
3550 fprintf(stderr
, "in use bytes = %10lu\n", (unsigned long)(used
));
3553 #endif /* NO_MALLOC_STATS */
3555 /* ----------------------- Operations on smallbins ----------------------- */
3558 Various forms of linking and unlinking are defined as macros. Even
3559 the ones for trees, which are very long but have very short typical
3560 paths. This is ugly but reduces reliance on inlining support of compilers. */
3564 /* Link a free chunk into a smallbin */
3565 #define insert_small_chunk(M, P, S) {\
3566 bindex_t I = small_index(S);\
3567 mchunkptr B = smallbin_at(M, I);\
3569 assert(S >= MIN_CHUNK_SIZE);\
3570 if (!smallmap_is_marked(M, I))\
3571 mark_smallmap(M, I);\
3572 else if (RTCHECK(ok_address(M, B->fd)))\
3575 CORRUPTION_ERROR_ACTION(M);\
3583 /* Unlink a chunk from a smallbin */
3584 #define unlink_small_chunk(M, P, S) {\
3585 mchunkptr F = P->fd;\
3586 mchunkptr B = P->bk;\
3587 bindex_t I = small_index(S);\
3590 assert(chunksize(P) == small_index2size(I));\
3591 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3593 clear_smallmap(M, I);\
3595 else if (RTCHECK(B == smallbin_at(M,I) ||\
3596 (ok_address(M, B) && B->fd == P))) {\
3601 CORRUPTION_ERROR_ACTION(M);\
3605 CORRUPTION_ERROR_ACTION(M);\
3609 /* Unlink the first chunk from a smallbin */
3610 #define unlink_first_small_chunk(M, B, P, I) {\
3611 mchunkptr F = P->fd;\
3614 assert(chunksize(P) == small_index2size(I));\
3616 clear_smallmap(M, I);\
3618 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3623 CORRUPTION_ERROR_ACTION(M);\
3627 /* Replace dv node, binning the old one */
3628 /* Used only when dvsize known to be small */
3629 #define replace_dv(M, P, S) {\
3630 size_t DVS = M->dvsize;\
3631 assert(is_small(DVS));\
3633 mchunkptr DV = M->dv;\
3634 insert_small_chunk(M, DV, DVS);\
3640 /* ------------------------- Operations on trees ------------------------- */
3642 /* Insert chunk into tree */
3643 #define insert_large_chunk(M, X, S) {\
3646 compute_tree_index(S, I);\
3647 H = treebin_at(M, I);\
3649 X->child[0] = X->child[1] = 0;\
3650 if (!treemap_is_marked(M, I)) {\
3651 mark_treemap(M, I);\
3653 X->parent = (tchunkptr)H;\
3658 size_t K = S << leftshift_for_tree_index(I);\
3660 if (chunksize(T) != S) {\
3661 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3665 else if (RTCHECK(ok_address(M, C))) {\
3672 CORRUPTION_ERROR_ACTION(M);\
3677 tchunkptr F = T->fd;\
3678 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3686 CORRUPTION_ERROR_ACTION(M);\
3697 1. If x is a chained node, unlink it from its same-sized fd/bk links
3698 and choose its bk node as its replacement.
3699 2. If x was the last node of its size, but not a leaf node, it must
3700 be replaced with a leaf node (not merely one with an open left or
3701 right), to make sure that lefts and rights of descendants
3702 correspond properly to bit masks. We use the rightmost descendant
3703 of x. We could use any other leaf, but this is easy to locate and
3704 tends to counteract removal of leftmosts elsewhere, and so keeps
3705 paths shorter than minimally guaranteed. This doesn't loop much
3706 because on average a node in a tree is near the bottom.
3707 3. If x is the base of a chain (i.e., has parent links) relink
3708 x's parent and children to x's replacement (or null if none).
3711 #define unlink_large_chunk(M, X) {\
3712 tchunkptr XP = X->parent;\
3715 tchunkptr F = X->fd;\
3717 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3722 CORRUPTION_ERROR_ACTION(M);\
3727 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3728 ((R = *(RP = &(X->child[0]))) != 0)) {\
3730 while ((*(CP = &(R->child[1])) != 0) ||\
3731 (*(CP = &(R->child[0])) != 0)) {\
3734 if (RTCHECK(ok_address(M, RP)))\
3737 CORRUPTION_ERROR_ACTION(M);\
3742 tbinptr* H = treebin_at(M, X->index);\
3744 if ((*H = R) == 0) \
3745 clear_treemap(M, X->index);\
3747 else if (RTCHECK(ok_address(M, XP))) {\
3748 if (XP->child[0] == X) \
3754 CORRUPTION_ERROR_ACTION(M);\
3756 if (RTCHECK(ok_address(M, R))) {\
3759 if ((C0 = X->child[0]) != 0) {\
3760 if (RTCHECK(ok_address(M, C0))) {\
3765 CORRUPTION_ERROR_ACTION(M);\
3767 if ((C1 = X->child[1]) != 0) {\
3768 if (RTCHECK(ok_address(M, C1))) {\
3773 CORRUPTION_ERROR_ACTION(M);\
3777 CORRUPTION_ERROR_ACTION(M);\
3782 /* Relays to large vs small bin operations */
3784 #define insert_chunk(M, P, S)\
3785 if (is_small(S)) insert_small_chunk(M, P, S)\
3786 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3788 #define unlink_chunk(M, P, S)\
3789 if (is_small(S)) unlink_small_chunk(M, P, S)\
3790 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3793 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3796 #define internal_malloc(m, b) mspace_malloc(m, b)
3797 #define internal_free(m, mem) mspace_free(m,mem);
3798 #else /* ONLY_MSPACES */
3800 #define internal_malloc(m, b)\
3801 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3802 #define internal_free(m, mem)\
3803 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3805 #define internal_malloc(m, b) dlmalloc(b)
3806 #define internal_free(m, mem) dlfree(mem)
3807 #endif /* MSPACES */
3808 #endif /* ONLY_MSPACES */
3810 /* ----------------------- Direct-mmapping chunks ----------------------- */
3813 Directly mmapped chunks are set up with an offset to the start of
3814 the mmapped region stored in the prev_foot field of the chunk. This
3815 allows reconstruction of the required argument to MUNMAP when freed,
3816 and also allows adjustment of the returned chunk to meet alignment
3817 requirements (especially in memalign).
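/*
  Illustrative sketch (mirrors the mmapped-chunk path of dlfree below): the
  stored offset lets free() recover the original mmap base and length:

    size_t offset = p->prev_foot;                 // bytes from mmap base to chunk
    size_t len    = chunksize(p) + offset + MMAP_FOOT_PAD;
    if (CALL_MUNMAP((char*)p - offset, len) == 0)
      m->footprint -= len;
*/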
3820 /* Malloc using mmap */
3821 static void* mmap_alloc(mstate m
, size_t nb
) {
3822 size_t mmsize
= mmap_align(nb
+ SIX_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3823 if (m
->footprint_limit
!= 0) {
3824 size_t fp
= m
->footprint
+ mmsize
;
3825 if (fp
<= m
->footprint
|| fp
> m
->footprint_limit
)
3828 if (mmsize
> nb
) { /* Check for wrap around 0 */
3829 char* mm
= (char*)(CALL_DIRECT_MMAP(mmsize
));
3831 size_t offset
= align_offset(chunk2mem(mm
));
3832 size_t psize
= mmsize
- offset
- MMAP_FOOT_PAD
;
3833 mchunkptr p
= (mchunkptr
)(mm
+ offset
);
3834 p
->prev_foot
= offset
;
3836 mark_inuse_foot(m
, p
, psize
);
3837 chunk_plus_offset(p
, psize
)->head
= FENCEPOST_HEAD
;
3838 chunk_plus_offset(p
, psize
+SIZE_T_SIZE
)->head
= 0;
3840 if (m
->least_addr
== 0 || mm
< m
->least_addr
)
3842 if ((m
->footprint
+= mmsize
) > m
->max_footprint
)
3843 m
->max_footprint
= m
->footprint
;
3844 assert(is_aligned(chunk2mem(p
)));
3845 check_mmapped_chunk(m
, p
);
3846 return chunk2mem(p
);
3852 /* Realloc using mmap */
3853 static mchunkptr
mmap_resize(mstate m
, mchunkptr oldp
, size_t nb
, int flags
) {
3854 size_t oldsize
= chunksize(oldp
);
3855 (void)flags
; /* placate people compiling -Wunused */
3856 if (is_small(nb
)) /* Can't shrink mmap regions below small size */
3858 /* Keep old chunk if big enough but not too big */
3859 if (oldsize
>= nb
+ SIZE_T_SIZE
&&
3860 (oldsize
- nb
) <= (mparams
.granularity
<< 1))
3863 size_t offset
= oldp
->prev_foot
;
3864 size_t oldmmsize
= oldsize
+ offset
+ MMAP_FOOT_PAD
;
3865 size_t newmmsize
= mmap_align(nb
+ SIX_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3866 char* cp
= (char*)CALL_MREMAP((char*)oldp
- offset
,
3867 oldmmsize
, newmmsize
, flags
);
3869 mchunkptr newp
= (mchunkptr
)(cp
+ offset
);
3870 size_t psize
= newmmsize
- offset
- MMAP_FOOT_PAD
;
3872 mark_inuse_foot(m
, newp
, psize
);
3873 chunk_plus_offset(newp
, psize
)->head
= FENCEPOST_HEAD
;
3874 chunk_plus_offset(newp
, psize
+SIZE_T_SIZE
)->head
= 0;
3876 if (cp
< m
->least_addr
)
3878 if ((m
->footprint
+= newmmsize
- oldmmsize
) > m
->max_footprint
)
3879 m
->max_footprint
= m
->footprint
;
3880 check_mmapped_chunk(m
, newp
);
3888 /* -------------------------- mspace management -------------------------- */
3890 /* Initialize top chunk and its size */
3891 static void init_top(mstate m, mchunkptr p, size_t psize) {
3892   /* Ensure alignment */
3893   size_t offset = align_offset(chunk2mem(p));
3894   p = (mchunkptr)((char*)p + offset);
3899   p->head = psize | PINUSE_BIT;
3900   /* set size of fake trailing chunk holding overhead space only once */
3901   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3902   m->trim_check = mparams.trim_threshold; /* reset on each update */
3905 /* Initialize bins for a new mstate that is otherwise zeroed out */
3906 static void init_bins(mstate m) {
3907   /* Establish circular links for smallbins */
3909   for (i = 0; i < NSMALLBINS; ++i) {
3910     sbinptr bin = smallbin_at(m,i);
3911     bin->fd = bin->bk = bin;
3915 #if PROCEED_ON_ERROR
3917 /* default corruption action */
3918 static void reset_on_error(mstate m
) {
3920 ++malloc_corruption_error_count
;
3921 /* Reinitialize fields to forget about all memory */
3922 m
->smallmap
= m
->treemap
= 0;
3923 m
->dvsize
= m
->topsize
= 0;
3928 for (i
= 0; i
< NTREEBINS
; ++i
)
3929 *treebin_at(m
, i
) = 0;
3932 #endif /* PROCEED_ON_ERROR */
3934 /* Allocate chunk and prepend remainder with chunk in successor base. */
3935 static void* prepend_alloc(mstate m
, char* newbase
, char* oldbase
,
3937 mchunkptr p
= align_as_chunk(newbase
);
3938 mchunkptr oldfirst
= align_as_chunk(oldbase
);
3939 size_t psize
= (char*)oldfirst
- (char*)p
;
3940 mchunkptr q
= chunk_plus_offset(p
, nb
);
3941 size_t qsize
= psize
- nb
;
3942 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
3944 assert((char*)oldfirst
> (char*)q
);
3945 assert(pinuse(oldfirst
));
3946 assert(qsize
>= MIN_CHUNK_SIZE
);
3948 /* consolidate remainder with first chunk of old base */
3949 if (oldfirst
== m
->top
) {
3950 size_t tsize
= m
->topsize
+= qsize
;
3952 q
->head
= tsize
| PINUSE_BIT
;
3953 check_top_chunk(m
, q
);
3955 else if (oldfirst
== m
->dv
) {
3956 size_t dsize
= m
->dvsize
+= qsize
;
3958 set_size_and_pinuse_of_free_chunk(q
, dsize
);
3961 if (!is_inuse(oldfirst
)) {
3962 size_t nsize
= chunksize(oldfirst
);
3963 unlink_chunk(m
, oldfirst
, nsize
);
3964 oldfirst
= chunk_plus_offset(oldfirst
, nsize
);
3967 set_free_with_pinuse(q
, qsize
, oldfirst
);
3968 insert_chunk(m
, q
, qsize
);
3969 check_free_chunk(m
, q
);
3972 check_malloced_chunk(m
, chunk2mem(p
), nb
);
3973 return chunk2mem(p
);
3976 /* Add a segment to hold a new noncontiguous region */
3977 static void add_segment(mstate m
, char* tbase
, size_t tsize
, flag_t mmapped
) {
3978 /* Determine locations and sizes of segment, fenceposts, old top */
3979 char* old_top
= (char*)m
->top
;
3980 msegmentptr oldsp
= segment_holding(m
, old_top
);
3981 char* old_end
= oldsp
->base
+ oldsp
->size
;
3982 size_t ssize
= pad_request(sizeof(struct malloc_segment
));
3983 char* rawsp
= old_end
- (ssize
+ FOUR_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3984 size_t offset
= align_offset(chunk2mem(rawsp
));
3985 char* asp
= rawsp
+ offset
;
3986 char* csp
= (asp
< (old_top
+ MIN_CHUNK_SIZE
))? old_top
: asp
;
3987 mchunkptr sp
= (mchunkptr
)csp
;
3988 msegmentptr ss
= (msegmentptr
)(chunk2mem(sp
));
3989 mchunkptr tnext
= chunk_plus_offset(sp
, ssize
);
3990 mchunkptr p
= tnext
;
3993 /* reset top to new space */
3994 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
3996 /* Set up segment record */
3997 assert(is_aligned(ss
));
3998 set_size_and_pinuse_of_inuse_chunk(m
, sp
, ssize
);
3999 *ss
= m
->seg
; /* Push current record */
4000 m
->seg
.base
= tbase
;
4001 m
->seg
.size
= tsize
;
4002 m
->seg
.sflags
= mmapped
;
4005 /* Insert trailing fenceposts */
4007 mchunkptr nextp
= chunk_plus_offset(p
, SIZE_T_SIZE
);
4008 p
->head
= FENCEPOST_HEAD
;
4010 if ((char*)(&(nextp
->head
)) < old_end
)
4015 assert(nfences
>= 2);
4017 /* Insert the rest of old top into a bin as an ordinary free chunk */
4018 if (csp
!= old_top
) {
4019 mchunkptr q
= (mchunkptr
)old_top
;
4020 size_t psize
= csp
- old_top
;
4021 mchunkptr tn
= chunk_plus_offset(q
, psize
);
4022 set_free_with_pinuse(q
, psize
, tn
);
4023 insert_chunk(m
, q
, psize
);
4026 check_top_chunk(m
, m
->top
);
4029 /* -------------------------- System allocation -------------------------- */
4031 /* Get memory from system using MORECORE or MMAP */
4032 static void* sys_alloc(mstate m
, size_t nb
) {
4033 char* tbase
= CMFAIL
;
4035 flag_t mmap_flag
= 0;
4036 size_t asize
; /* allocation size */
4038 ensure_initialization();
4040 /* Directly map large chunks, but only if already initialized */
4041 if (use_mmap(m
) && nb
>= mparams
.mmap_threshold
&& m
->topsize
!= 0) {
4042 void* mem
= mmap_alloc(m
, nb
);
4047 asize
= granularity_align(nb
+ SYS_ALLOC_PADDING
);
4049 return 0; /* wraparound */
4050 if (m
->footprint_limit
!= 0) {
4051 size_t fp
= m
->footprint
+ asize
;
4052 if (fp
<= m
->footprint
|| fp
> m
->footprint_limit
)
4057 Try getting memory in any of three ways (in most-preferred to
4058 least-preferred order):
4059 1. A call to MORECORE that can normally contiguously extend memory.
4060 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4061 or main space is mmapped or a previous contiguous call failed)
4062 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4063 Note that under the default settings, if MORECORE is unable to
4064 fulfill a request, and HAVE_MMAP is true, then mmap is
4065 used as a noncontiguous system allocator. This is a useful backup
4066 strategy for systems with holes in address spaces -- in this case
4067 sbrk cannot contiguously expand the heap, but mmap may be able to
4069 3. A call to MORECORE that cannot usually contiguously extend memory.
4070 (disabled if not HAVE_MORECORE)
4072 In all cases, we need to request enough bytes from system to ensure
4073 we can malloc nb bytes upon success, so pad with enough space for
4074 top_foot, plus alignment-pad to make sure we don't lose bytes if
4075 not on boundary, and round this up to a granularity unit.
4078 if (MORECORE_CONTIGUOUS
&& !use_noncontiguous(m
)) {
4080 size_t ssize
= asize
; /* sbrk call size */
4081 msegmentptr ss
= (m
->top
== 0)? 0 : segment_holding(m
, (char*)m
->top
);
4082 ACQUIRE_MALLOC_GLOBAL_LOCK();
4084 if (ss
== 0) { /* First time through or recovery */
4085 char* base
= (char*)CALL_MORECORE(0);
4086 if (base
!= CMFAIL
) {
4088 /* Adjust to end on a page boundary */
4089 if (!is_page_aligned(base
))
4090 ssize
+= (page_align((size_t)base
) - (size_t)base
);
4091 fp
= m
->footprint
+ ssize
; /* recheck limits */
4092 if (ssize
> nb
&& ssize
< HALF_MAX_SIZE_T
&&
4093 (m
->footprint_limit
== 0 ||
4094 (fp
> m
->footprint
&& fp
<= m
->footprint_limit
)) &&
4095 (br
= (char*)(CALL_MORECORE(ssize
))) == base
) {
4102 /* Subtract out existing available top space from MORECORE request. */
4103 ssize
= granularity_align(nb
- m
->topsize
+ SYS_ALLOC_PADDING
);
4104 /* Use mem here only if it did continuously extend old space */
4105 if (ssize
< HALF_MAX_SIZE_T
&&
4106 (br
= (char*)(CALL_MORECORE(ssize
))) == ss
->base
+ss
->size
) {
4112 if (tbase
== CMFAIL
) { /* Cope with partial failure */
4113 if (br
!= CMFAIL
) { /* Try to use/extend the space we did get */
4114 if (ssize
< HALF_MAX_SIZE_T
&&
4115 ssize
< nb
+ SYS_ALLOC_PADDING
) {
4116 size_t esize
= granularity_align(nb
+ SYS_ALLOC_PADDING
- ssize
);
4117 if (esize
< HALF_MAX_SIZE_T
) {
4118 char* end
= (char*)CALL_MORECORE(esize
);
4121 else { /* Can't use; try to release */
4122 (void) CALL_MORECORE(-ssize
);
4128 if (br
!= CMFAIL
) { /* Use the space we did get */
4133 disable_contiguous(m
); /* Don't try contiguous path in the future */
4136 RELEASE_MALLOC_GLOBAL_LOCK();
4139 if (HAVE_MMAP
&& tbase
== CMFAIL
) { /* Try MMAP */
4140 char* mp
= (char*)(CALL_MMAP(asize
));
4144 mmap_flag
= USE_MMAP_BIT
;
4148 if (HAVE_MORECORE
&& tbase
== CMFAIL
) { /* Try noncontiguous MORECORE */
4149 if (asize
< HALF_MAX_SIZE_T
) {
4152 ACQUIRE_MALLOC_GLOBAL_LOCK();
4153 br
= (char*)(CALL_MORECORE(asize
));
4154 end
= (char*)(CALL_MORECORE(0));
4155 RELEASE_MALLOC_GLOBAL_LOCK();
4156 if (br
!= CMFAIL
&& end
!= CMFAIL
&& br
< end
) {
4157 size_t ssize
= end
- br
;
4158 if (ssize
> nb
+ TOP_FOOT_SIZE
) {
4166 if (tbase
!= CMFAIL
) {
4168 if ((m
->footprint
+= tsize
) > m
->max_footprint
)
4169 m
->max_footprint
= m
->footprint
;
4171 if (!is_initialized(m
)) { /* first-time initialization */
4172 if (m
->least_addr
== 0 || tbase
< m
->least_addr
)
4173 m
->least_addr
= tbase
;
4174 m
->seg
.base
= tbase
;
4175 m
->seg
.size
= tsize
;
4176 m
->seg
.sflags
= mmap_flag
;
4177 m
->magic
= mparams
.magic
;
4178 m
->release_checks
= MAX_RELEASE_CHECK_RATE
;
4182 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
4186 /* Offset top by embedded malloc_state */
4187 mchunkptr mn
= next_chunk(mem2chunk(m
));
4188 init_top(m
, mn
, (size_t)((tbase
+ tsize
) - (char*)mn
) -TOP_FOOT_SIZE
);
4193 /* Try to merge with an existing segment */
4194 msegmentptr sp
= &m
->seg
;
4195 /* Only consider most recent segment if traversal suppressed */
4196 while (sp
!= 0 && tbase
!= sp
->base
+ sp
->size
)
4197 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4199 !is_extern_segment(sp
) &&
4200 (sp
->sflags
& USE_MMAP_BIT
) == mmap_flag
&&
4201 segment_holds(sp
, m
->top
)) { /* append */
4203 init_top(m
, m
->top
, m
->topsize
+ tsize
);
4206 if (tbase
< m
->least_addr
)
4207 m
->least_addr
= tbase
;
4209 while (sp
!= 0 && sp
->base
!= tbase
+ tsize
)
4210 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4212 !is_extern_segment(sp
) &&
4213 (sp
->sflags
& USE_MMAP_BIT
) == mmap_flag
) {
4214 char* oldbase
= sp
->base
;
4217 return prepend_alloc(m
, tbase
, oldbase
, nb
);
4220 add_segment(m
, tbase
, tsize
, mmap_flag
);
4224 if (nb
< m
->topsize
) { /* Allocate from new or extended top space */
4225 size_t rsize
= m
->topsize
-= nb
;
4226 mchunkptr p
= m
->top
;
4227 mchunkptr r
= m
->top
= chunk_plus_offset(p
, nb
);
4228 r
->head
= rsize
| PINUSE_BIT
;
4229 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
4230 check_top_chunk(m
, m
->top
);
4231 check_malloced_chunk(m
, chunk2mem(p
), nb
);
4232 return chunk2mem(p
);
4236 MALLOC_FAILURE_ACTION
;
4240 /* ----------------------- system deallocation -------------------------- */
4242 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4243 static size_t release_unused_segments(mstate m
) {
4244 size_t released
= 0;
4246 msegmentptr pred
= &m
->seg
;
4247 msegmentptr sp
= pred
->next
;
4249 char* base
= sp
->base
;
4250 size_t size
= sp
->size
;
4251 msegmentptr next
= sp
->next
;
4253 if (is_mmapped_segment(sp
) && !is_extern_segment(sp
)) {
4254 mchunkptr p
= align_as_chunk(base
);
4255 size_t psize
= chunksize(p
);
4256 /* Can unmap if first chunk holds entire segment and not pinned */
4257 if (!is_inuse(p
) && (char*)p
+ psize
>= base
+ size
- TOP_FOOT_SIZE
) {
4258 tchunkptr tp
= (tchunkptr
)p
;
4259 assert(segment_holds(sp
, (char*)sp
));
4265 unlink_large_chunk(m
, tp
);
4267 if (CALL_MUNMAP(base
, size
) == 0) {
4269 m
->footprint
-= size
;
4270 /* unlink obsoleted record */
4274 else { /* back out if cannot unmap */
4275 insert_large_chunk(m
, tp
, psize
);
4279 if (NO_SEGMENT_TRAVERSAL
) /* scan only first segment */
4284 /* Reset check counter */
4285 m
->release_checks
= (((size_t) nsegs
> (size_t) MAX_RELEASE_CHECK_RATE
)?
4286 (size_t) nsegs
: (size_t) MAX_RELEASE_CHECK_RATE
);
4290 static int sys_trim(mstate m
, size_t pad
) {
4291 size_t released
= 0;
4292 ensure_initialization();
4293 if (pad
< MAX_REQUEST
&& is_initialized(m
)) {
4294 pad
+= TOP_FOOT_SIZE
; /* ensure enough room for segment overhead */
4296 if (m
->topsize
> pad
) {
4297 /* Shrink top space in granularity-size units, keeping at least one */
4298 size_t unit
= mparams
.granularity
;
4299 size_t extra
= ((m
->topsize
- pad
+ (unit
- SIZE_T_ONE
)) / unit
-
4301 msegmentptr sp
= segment_holding(m
, (char*)m
->top
);
4303 if (!is_extern_segment(sp
)) {
4304 if (is_mmapped_segment(sp
)) {
4306 sp
->size
>= extra
&&
4307 !has_segment_link(m
, sp
)) { /* can't shrink if pinned */
4308 size_t newsize
= sp
->size
- extra
;
4309 (void)newsize
; /* placate people compiling -Wunused-variable */
4310 /* Prefer mremap, fall back to munmap */
4311 if ((CALL_MREMAP(sp
->base
, sp
->size
, newsize
, 0) != MFAIL
) ||
4312 (CALL_MUNMAP(sp
->base
+ newsize
, extra
) == 0)) {
4317 else if (HAVE_MORECORE
) {
4318 if (extra
>= HALF_MAX_SIZE_T
) /* Avoid wrapping negative */
4319 extra
= (HALF_MAX_SIZE_T
) + SIZE_T_ONE
- unit
;
4320 ACQUIRE_MALLOC_GLOBAL_LOCK();
4322 /* Make sure end of memory is where we last set it. */
4323 char* old_br
= (char*)(CALL_MORECORE(0));
4324 if (old_br
== sp
->base
+ sp
->size
) {
4325 char* rel_br
= (char*)(CALL_MORECORE(-extra
));
4326 char* new_br
= (char*)(CALL_MORECORE(0));
4327 if (rel_br
!= CMFAIL
&& new_br
< old_br
)
4328 released
= old_br
- new_br
;
4331 RELEASE_MALLOC_GLOBAL_LOCK();
4335 if (released
!= 0) {
4336 sp
->size
-= released
;
4337 m
->footprint
-= released
;
4338 init_top(m
, m
->top
, m
->topsize
- released
);
4339 check_top_chunk(m
, m
->top
);
4343 /* Unmap any unused mmapped segments */
4345 released
+= release_unused_segments(m
);
4347 /* On failure, disable autotrim to avoid repeated failed future calls */
4348 if (released
== 0 && m
->topsize
> m
->trim_check
)
4349 m
->trim_check
= MAX_SIZE_T
;
4352 return (released
!= 0)? 1 : 0;
4355 /* Consolidate and bin a chunk. Differs from exported versions
4356 of free mainly in that the chunk need not be marked as inuse.
4358 static void dispose_chunk(mstate m
, mchunkptr p
, size_t psize
) {
4359 mchunkptr next
= chunk_plus_offset(p
, psize
);
4362 size_t prevsize
= p
->prev_foot
;
4363 if (is_mmapped(p
)) {
4364 psize
+= prevsize
+ MMAP_FOOT_PAD
;
4365 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
4366 m
->footprint
-= psize
;
4369 prev
= chunk_minus_offset(p
, prevsize
);
4372 if (RTCHECK(ok_address(m
, prev
))) { /* consolidate backward */
4374 unlink_chunk(m
, p
, prevsize
);
4376 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
4378 set_free_with_pinuse(p
, psize
, next
);
4383 CORRUPTION_ERROR_ACTION(m
);
4387 if (RTCHECK(ok_address(m
, next
))) {
4388 if (!cinuse(next
)) { /* consolidate forward */
4389 if (next
== m
->top
) {
4390 size_t tsize
= m
->topsize
+= psize
;
4392 p
->head
= tsize
| PINUSE_BIT
;
4399 else if (next
== m
->dv
) {
4400 size_t dsize
= m
->dvsize
+= psize
;
4402 set_size_and_pinuse_of_free_chunk(p
, dsize
);
4406 size_t nsize
= chunksize(next
);
4408 unlink_chunk(m
, next
, nsize
);
4409 set_size_and_pinuse_of_free_chunk(p
, psize
);
4417 set_free_with_pinuse(p
, psize
, next
);
4419 insert_chunk(m
, p
, psize
);
4422 CORRUPTION_ERROR_ACTION(m
);
4426 /* ---------------------------- malloc --------------------------- */
4428 /* allocate a large request from the best fitting chunk in a treebin */
4429 static void* tmalloc_large(mstate m
, size_t nb
) {
4431 size_t rsize
= -nb
; /* Unsigned negation */
4434 compute_tree_index(nb
, idx
);
4435 if ((t
= *treebin_at(m
, idx
)) != 0) {
4436 /* Traverse tree for this bin looking for node with size == nb */
4437 size_t sizebits
= nb
<< leftshift_for_tree_index(idx
);
4438 tchunkptr rst
= 0; /* The deepest untaken right subtree */
4441 size_t trem
= chunksize(t
) - nb
;
4444 if ((rsize
= trem
) == 0)
4448 t
= t
->child
[(sizebits
>> (SIZE_T_BITSIZE
-SIZE_T_ONE
)) & 1];
4449 if (rt
!= 0 && rt
!= t
)
4452 t
= rst
; /* set t to least subtree holding sizes > nb */
4458 if (t
== 0 && v
== 0) { /* set t to root of next non-empty treebin */
4459 binmap_t leftbits
= left_bits(idx2bit(idx
)) & m
->treemap
;
4460 if (leftbits
!= 0) {
4462 binmap_t leastbit
= least_bit(leftbits
);
4463 compute_bit2idx(leastbit
, i
);
4464 t
= *treebin_at(m
, i
);
4468 while (t
!= 0) { /* find smallest of tree or subtree */
4469 size_t trem
= chunksize(t
) - nb
;
4474 t
= leftmost_child(t
);
4477 /* If dv is a better fit, return 0 so malloc will use it */
4478 if (v
!= 0 && rsize
< (size_t)(m
->dvsize
- nb
)) {
4479 if (RTCHECK(ok_address(m
, v
))) { /* split */
4480 mchunkptr r
= chunk_plus_offset(v
, nb
);
4481 assert(chunksize(v
) == rsize
+ nb
);
4482 if (RTCHECK(ok_next(v
, r
))) {
4483 unlink_large_chunk(m
, v
);
4484 if (rsize
< MIN_CHUNK_SIZE
)
4485 set_inuse_and_pinuse(m
, v
, (rsize
+ nb
));
4487 set_size_and_pinuse_of_inuse_chunk(m
, v
, nb
);
4488 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4489 insert_chunk(m
, r
, rsize
);
4491 return chunk2mem(v
);
4494 CORRUPTION_ERROR_ACTION(m
);
4499 /* allocate a small request from the best fitting chunk in a treebin */
4500 static void* tmalloc_small(mstate m, size_t nb) {
4504   binmap_t leastbit = least_bit(m->treemap);
4505   compute_bit2idx(leastbit, i);
4506   v = t = *treebin_at(m, i);
4507   rsize = chunksize(t) - nb;
4509   while ((t = leftmost_child(t)) != 0) {
4510     size_t trem = chunksize(t) - nb;
4517   if (RTCHECK(ok_address(m, v))) {
4518     mchunkptr r = chunk_plus_offset(v, nb);
4519     assert(chunksize(v) == rsize + nb);
4520     if (RTCHECK(ok_next(v, r))) {
4521       unlink_large_chunk(m, v);
4522       if (rsize < MIN_CHUNK_SIZE)
4523         set_inuse_and_pinuse(m, v, (rsize + nb));
4525         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4526         set_size_and_pinuse_of_free_chunk(r, rsize);
4527         replace_dv(m, r, rsize);
4529       return chunk2mem(v);
4533   CORRUPTION_ERROR_ACTION(m);
4539 void* dlmalloc(size_t bytes
) {
4542 If a small request (< 256 bytes minus per-chunk overhead):
4543 1. If one exists, use a remainderless chunk in associated smallbin.
4544 (Remainderless means that there are too few excess bytes to
4545 represent as a chunk.)
4546 2. If it is big enough, use the dv chunk, which is normally the
4547 chunk adjacent to the one used for the most recent small request.
4548 3. If one exists, split the smallest available chunk in a bin,
4549 saving remainder in dv.
4550 4. If it is big enough, use the top chunk.
4551 5. If available, get memory from system and use it
4552 Otherwise, for a large request:
4553 1. Find the smallest available binned chunk that fits, and use it
4554 if it is better fitting than dv chunk, splitting if necessary.
4555 2. If better fitting than any binned chunk, use the dv chunk.
4556 3. If it is big enough, use the top chunk.
4557 4. If request size >= mmap threshold, try to directly mmap this chunk.
4558 5. If available, get memory from system and use it
4560 The ugly goto's here ensure that postaction occurs along all paths.
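/*
  Worked example (illustrative; assumes 8-byte size_t, the default 16-byte
  MALLOC_ALIGNMENT, and no FOOTERS, so CHUNK_OVERHEAD == 8): a request of
  bytes == 100 becomes nb = pad_request(100) = (100 + 8 + 15) & ~15 = 112,
  which is a small request served from smallbin index small_index(112) ==
  112 >> 3 == 14.  Requests smaller than MIN_REQUEST are simply rounded up
  to MIN_CHUNK_SIZE.
*/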
4564 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4567 if (!PREACTION(gm
)) {
4570 if (bytes
<= MAX_SMALL_REQUEST
) {
4573 nb
= (bytes
< MIN_REQUEST
)? MIN_CHUNK_SIZE
: pad_request(bytes
);
4574 idx
= small_index(nb
);
4575 smallbits
= gm
->smallmap
>> idx
;
4577 if ((smallbits
& 0x3U
) != 0) { /* Remainderless fit to a smallbin. */
4579 idx
+= ~smallbits
& 1; /* Uses next bin if idx empty */
4580 b
= smallbin_at(gm
, idx
);
4582 assert(chunksize(p
) == small_index2size(idx
));
4583 unlink_first_small_chunk(gm
, b
, p
, idx
);
4584 set_inuse_and_pinuse(gm
, p
, small_index2size(idx
));
4586 check_malloced_chunk(gm
, mem
, nb
);
4590 else if (nb
> gm
->dvsize
) {
4591 if (smallbits
!= 0) { /* Use chunk in next nonempty smallbin */
4595 binmap_t leftbits
= (smallbits
<< idx
) & left_bits(idx2bit(idx
));
4596 binmap_t leastbit
= least_bit(leftbits
);
4597 compute_bit2idx(leastbit
, i
);
4598 b
= smallbin_at(gm
, i
);
4600 assert(chunksize(p
) == small_index2size(i
));
4601 unlink_first_small_chunk(gm
, b
, p
, i
);
4602 rsize
= small_index2size(i
) - nb
;
4603 /* Fit here cannot be remainderless if 4byte sizes */
4604 if (SIZE_T_SIZE
!= 4 && rsize
< MIN_CHUNK_SIZE
)
4605 set_inuse_and_pinuse(gm
, p
, small_index2size(i
));
4607 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4608 r
= chunk_plus_offset(p
, nb
);
4609 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4610 replace_dv(gm
, r
, rsize
);
4613 check_malloced_chunk(gm
, mem
, nb
);
4617 else if (gm
->treemap
!= 0 && (mem
= tmalloc_small(gm
, nb
)) != 0) {
4618 check_malloced_chunk(gm
, mem
, nb
);
4623 else if (bytes
>= MAX_REQUEST
)
4624 nb
= MAX_SIZE_T
; /* Too big to allocate. Force failure (in sys alloc) */
4626 nb
= pad_request(bytes
);
4627 if (gm
->treemap
!= 0 && (mem
= tmalloc_large(gm
, nb
)) != 0) {
4628 check_malloced_chunk(gm
, mem
, nb
);
4633 if (nb
<= gm
->dvsize
) {
4634 size_t rsize
= gm
->dvsize
- nb
;
4635 mchunkptr p
= gm
->dv
;
4636 if (rsize
>= MIN_CHUNK_SIZE
) { /* split dv */
4637 mchunkptr r
= gm
->dv
= chunk_plus_offset(p
, nb
);
4639 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4640 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4642 else { /* exhaust dv */
4643 size_t dvs
= gm
->dvsize
;
4646 set_inuse_and_pinuse(gm
, p
, dvs
);
4649 check_malloced_chunk(gm
, mem
, nb
);
4653 else if (nb
< gm
->topsize
) { /* Split top */
4654 size_t rsize
= gm
->topsize
-= nb
;
4655 mchunkptr p
= gm
->top
;
4656 mchunkptr r
= gm
->top
= chunk_plus_offset(p
, nb
);
4657 r
->head
= rsize
| PINUSE_BIT
;
4658 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4660 check_top_chunk(gm
, gm
->top
);
4661 check_malloced_chunk(gm
, mem
, nb
);
4665 mem
= sys_alloc(gm
, nb
);
4675 /* ---------------------------- free --------------------------- */
4677 void dlfree(void* mem
) {
4679 Consolidate freed chunks with preceding or succeeding bordering
4680 free chunks, if they exist, and then place in a bin. Intermixed
4681 with special cases for top, dv, mmapped chunks, and usage errors.
4685 mchunkptr p
= mem2chunk(mem
);
4687 mstate fm
= get_mstate_for(p
);
4688 if (!ok_magic(fm
)) {
4689 USAGE_ERROR_ACTION(fm
, p
);
4694 #endif /* FOOTERS */
4695 if (!PREACTION(fm
)) {
4696 check_inuse_chunk(fm
, p
);
4697 if (RTCHECK(ok_address(fm
, p
) && ok_inuse(p
))) {
4698 size_t psize
= chunksize(p
);
4699 mchunkptr next
= chunk_plus_offset(p
, psize
);
4701 size_t prevsize
= p
->prev_foot
;
4702 if (is_mmapped(p
)) {
4703 psize
+= prevsize
+ MMAP_FOOT_PAD
;
4704 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
4705 fm
->footprint
-= psize
;
4709 mchunkptr prev
= chunk_minus_offset(p
, prevsize
);
4712 if (RTCHECK(ok_address(fm
, prev
))) { /* consolidate backward */
4714 unlink_chunk(fm
, p
, prevsize
);
4716 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
4718 set_free_with_pinuse(p
, psize
, next
);
4727 if (RTCHECK(ok_next(p
, next
) && ok_pinuse(next
))) {
4728 if (!cinuse(next
)) { /* consolidate forward */
4729 if (next
== fm
->top
) {
4730 size_t tsize
= fm
->topsize
+= psize
;
4732 p
->head
= tsize
| PINUSE_BIT
;
4737 if (should_trim(fm
, tsize
))
4741 else if (next
== fm
->dv
) {
4742 size_t dsize
= fm
->dvsize
+= psize
;
4744 set_size_and_pinuse_of_free_chunk(p
, dsize
);
4748 size_t nsize
= chunksize(next
);
4750 unlink_chunk(fm
, next
, nsize
);
4751 set_size_and_pinuse_of_free_chunk(p
, psize
);
4759 set_free_with_pinuse(p
, psize
, next
);
4761 if (is_small(psize
)) {
4762 insert_small_chunk(fm
, p
, psize
);
4763 check_free_chunk(fm
, p
);
4766 tchunkptr tp
= (tchunkptr
)p
;
4767 insert_large_chunk(fm
, tp
, psize
);
4768 check_free_chunk(fm
, p
);
4769 if (--fm
->release_checks
== 0)
4770 release_unused_segments(fm
);
4776 USAGE_ERROR_ACTION(fm
, p
);
4783 #endif /* FOOTERS */
4786 void* dlcalloc(size_t n_elements
, size_t elem_size
) {
4789 if (n_elements
!= 0) {
4790 req
= n_elements
* elem_size
;
4791 if (((n_elements
| elem_size
) & ~(size_t)0xffff) &&
4792 (req
/ n_elements
!= elem_size
))
4793 req
= MAX_SIZE_T
; /* force downstream failure on overflow */
4795 mem
= dlmalloc(req
);
4796 if (mem
!= 0 && calloc_must_clear(mem2chunk(mem
)))
4797 memset(mem
, 0, req
);
4801 #endif /* !ONLY_MSPACES */
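/*
  Worked example of the overflow guard in dlcalloc above (illustrative,
  32-bit size_t assumed): with n_elements = 0x10000 and elem_size = 0x10001,
  the product wraps to req = 0x10000, but req / n_elements == 1 != elem_size,
  so req is forced to MAX_SIZE_T and the downstream dlmalloc call fails
  instead of returning an undersized block.  The cheap pre-test
  (n_elements | elem_size) & ~(size_t)0xffff skips the division whenever
  both operands are small enough that the product cannot overflow.
*/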
4803 /* ------------ Internal support for realloc, memalign, etc -------------- */
4805 /* Try to realloc; only in-place unless can_move true */
4806 static mchunkptr
try_realloc_chunk(mstate m
, mchunkptr p
, size_t nb
,
4809 size_t oldsize
= chunksize(p
);
4810 mchunkptr next
= chunk_plus_offset(p
, oldsize
);
4811 if (RTCHECK(ok_address(m
, p
) && ok_inuse(p
) &&
4812 ok_next(p
, next
) && ok_pinuse(next
))) {
4813 if (is_mmapped(p
)) {
4814 newp
= mmap_resize(m
, p
, nb
, can_move
);
4816 else if (oldsize
>= nb
) { /* already big enough */
4817 size_t rsize
= oldsize
- nb
;
4818 if (rsize
>= MIN_CHUNK_SIZE
) { /* split off remainder */
4819 mchunkptr r
= chunk_plus_offset(p
, nb
);
4820 set_inuse(m
, p
, nb
);
4821 set_inuse(m
, r
, rsize
);
4822 dispose_chunk(m
, r
, rsize
);
4826 else if (next
== m
->top
) { /* extend into top */
4827 if (oldsize
+ m
->topsize
> nb
) {
4828 size_t newsize
= oldsize
+ m
->topsize
;
4829 size_t newtopsize
= newsize
- nb
;
4830 mchunkptr newtop
= chunk_plus_offset(p
, nb
);
4831 set_inuse(m
, p
, nb
);
4832 newtop
->head
= newtopsize
|PINUSE_BIT
;
4834 m
->topsize
= newtopsize
;
4838 else if (next
== m
->dv
) { /* extend into dv */
4839 size_t dvs
= m
->dvsize
;
4840 if (oldsize
+ dvs
>= nb
) {
4841 size_t dsize
= oldsize
+ dvs
- nb
;
4842 if (dsize
>= MIN_CHUNK_SIZE
) {
4843 mchunkptr r
= chunk_plus_offset(p
, nb
);
4844 mchunkptr n
= chunk_plus_offset(r
, dsize
);
4845 set_inuse(m
, p
, nb
);
4846 set_size_and_pinuse_of_free_chunk(r
, dsize
);
4851 else { /* exhaust dv */
4852 size_t newsize
= oldsize
+ dvs
;
4853 set_inuse(m
, p
, newsize
);
4860 else if (!cinuse(next
)) { /* extend into next free chunk */
4861 size_t nextsize
= chunksize(next
);
4862 if (oldsize
+ nextsize
>= nb
) {
4863 size_t rsize
= oldsize
+ nextsize
- nb
;
4864 unlink_chunk(m
, next
, nextsize
);
4865 if (rsize
< MIN_CHUNK_SIZE
) {
4866 size_t newsize
= oldsize
+ nextsize
;
4867 set_inuse(m
, p
, newsize
);
4870 mchunkptr r
= chunk_plus_offset(p
, nb
);
4871 set_inuse(m
, p
, nb
);
4872 set_inuse(m
, r
, rsize
);
4873 dispose_chunk(m
, r
, rsize
);
4880 USAGE_ERROR_ACTION(m
, chunk2mem(p
));
4885 static void* internal_memalign(mstate m
, size_t alignment
, size_t bytes
) {
4887 if (alignment
< MIN_CHUNK_SIZE
) /* must be at least a minimum chunk size */
4888 alignment
= MIN_CHUNK_SIZE
;
4889 if ((alignment
& (alignment
-SIZE_T_ONE
)) != 0) {/* Ensure a power of 2 */
4890 size_t a
= MALLOC_ALIGNMENT
<< 1;
4891 while (a
< alignment
) a
<<= 1;
4894 if (bytes
>= MAX_REQUEST
- alignment
) {
4895 if (m
!= 0) { /* Test isn't needed but avoids compiler warning */
4896 MALLOC_FAILURE_ACTION
;
4900 size_t nb
= request2size(bytes
);
4901 size_t req
= nb
+ alignment
+ MIN_CHUNK_SIZE
- CHUNK_OVERHEAD
;
4902 mem
= internal_malloc(m
, req
);
4904 mchunkptr p
= mem2chunk(mem
);
4907 if ((((size_t)(mem
)) & (alignment
- 1)) != 0) { /* misaligned */
4909 Find an aligned spot inside chunk. Since we need to give
4910 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4911 the first calculation places us at a spot with less than
4912 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4913 We've allocated enough total room so that this is always possible.
4916 char* br
= (char*)mem2chunk((size_t)(((size_t)((char*)mem
+ alignment
-
4919 char* pos
= ((size_t)(br
- (char*)(p
)) >= MIN_CHUNK_SIZE
)?
4921 mchunkptr newp
= (mchunkptr
)pos
;
4922 size_t leadsize
= pos
- (char*)(p
);
4923 size_t newsize
= chunksize(p
) - leadsize
;
4925 if (is_mmapped(p
)) { /* For mmapped chunks, just adjust offset */
4926 newp
->prev_foot
= p
->prev_foot
+ leadsize
;
4927 newp
->head
= newsize
;
4929 else { /* Otherwise, give back leader, use the rest */
4930 set_inuse(m
, newp
, newsize
);
4931 set_inuse(m
, p
, leadsize
);
4932 dispose_chunk(m
, p
, leadsize
);
4937 /* Give back spare room at the end */
4938 if (!is_mmapped(p
)) {
4939 size_t size
= chunksize(p
);
4940 if (size
> nb
+ MIN_CHUNK_SIZE
) {
4941 size_t remainder_size
= size
- nb
;
4942 mchunkptr remainder
= chunk_plus_offset(p
, nb
);
4943 set_inuse(m
, p
, nb
);
4944 set_inuse(m
, remainder
, remainder_size
);
4945 dispose_chunk(m
, remainder
, remainder_size
);
4950 assert (chunksize(p
) >= nb
);
4951 assert(((size_t)mem
& (alignment
- 1)) == 0);
4952 check_inuse_chunk(m
, p
);
4960 Common support for independent_X routines, handling
4961 all of the combinations that can result.
4963 bit 0 set if all elements are same size (using sizes[0])
4964 bit 1 set if elements should be zeroed
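  For reference (taken from the exported wrappers near the end of this file):
  dlindependent_calloc calls ialloc with opts == 3 (same size, zeroed), and
  dlindependent_comalloc calls it with opts == 0 (per-element sizes, no clear).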
4966 static void** ialloc(mstate m
,
4972 size_t element_size
; /* chunksize of each element, if all same */
4973 size_t contents_size
; /* total size of elements */
4974 size_t array_size
; /* request size of pointer array */
4975 void* mem
; /* malloced aggregate space */
4976 mchunkptr p
; /* corresponding chunk */
4977 size_t remainder_size
; /* remaining bytes while splitting */
4978 void** marray
; /* either "chunks" or malloced ptr array */
4979 mchunkptr array_chunk
; /* chunk for malloced ptr array */
4980 flag_t was_enabled
; /* to disable mmap */
4984 ensure_initialization();
4985 /* compute array length, if needed */
4987 if (n_elements
== 0)
4988 return chunks
; /* nothing to do */
4993 /* if empty req, must still return chunk representing empty array */
4994 if (n_elements
== 0)
4995 return (void**)internal_malloc(m
, 0);
4997 array_size
= request2size(n_elements
* (sizeof(void*)));
5000 /* compute total element size */
5001 if (opts
& 0x1) { /* all-same-size */
5002 element_size
= request2size(*sizes
);
5003 contents_size
= n_elements
* element_size
;
5005 else { /* add up all the sizes */
5008 for (i
= 0; i
!= n_elements
; ++i
)
5009 contents_size
+= request2size(sizes
[i
]);
5012 size
= contents_size
+ array_size
;
5015 Allocate the aggregate chunk. First disable direct-mmapping so
5016 malloc won't use it, since we would not be able to later
5017 free/realloc space internal to a segregated mmap region.
5019 was_enabled
= use_mmap(m
);
5021 mem
= internal_malloc(m
, size
- CHUNK_OVERHEAD
);
5027 if (PREACTION(m
)) return 0;
5029 remainder_size
= chunksize(p
);
5031 assert(!is_mmapped(p
));
5033 if (opts
& 0x2) { /* optionally clear the elements */
5034 memset((size_t*)mem
, 0, remainder_size
- SIZE_T_SIZE
- array_size
);
5037 /* If not provided, allocate the pointer array as final part of chunk */
5039 size_t array_chunk_size
;
5040 array_chunk
= chunk_plus_offset(p
, contents_size
);
5041 array_chunk_size
= remainder_size
- contents_size
;
5042 marray
= (void**) (chunk2mem(array_chunk
));
5043 set_size_and_pinuse_of_inuse_chunk(m
, array_chunk
, array_chunk_size
);
5044 remainder_size
= contents_size
;
5047 /* split out elements */
5048 for (i
= 0; ; ++i
) {
5049 marray
[i
] = chunk2mem(p
);
5050 if (i
!= n_elements
-1) {
5051 if (element_size
!= 0)
5052 size
= element_size
;
5054 size
= request2size(sizes
[i
]);
5055 remainder_size
-= size
;
5056 set_size_and_pinuse_of_inuse_chunk(m
, p
, size
);
5057 p
= chunk_plus_offset(p
, size
);
5059 else { /* the final element absorbs any overallocation slop */
5060 set_size_and_pinuse_of_inuse_chunk(m
, p
, remainder_size
);
5066 if (marray
!= chunks
) {
5067 /* final element must have exactly exhausted chunk */
5068 if (element_size
!= 0) {
5069 assert(remainder_size
== element_size
);
5072 assert(remainder_size
== request2size(sizes
[i
]));
5074 check_inuse_chunk(m
, mem2chunk(marray
));
5076 for (i
= 0; i
!= n_elements
; ++i
)
5077 check_inuse_chunk(m
, mem2chunk(marray
[i
]));
5085 /* Try to free all pointers in the given array.
5086 Note: this could be made faster, by delaying consolidation,
5087 at the price of disabling some user integrity checks. We
5088 still optimize some consolidations by combining adjacent
5089 chunks before freeing, which will occur often if allocated
5090 with ialloc or the array is sorted.
5092 static size_t internal_bulk_free(mstate m
, void* array
[], size_t nelem
) {
5094 if (!PREACTION(m
)) {
5096 void** fence
= &(array
[nelem
]);
5097 for (a
= array
; a
!= fence
; ++a
) {
5100 mchunkptr p
= mem2chunk(mem
);
5101 size_t psize
= chunksize(p
);
5103 if (get_mstate_for(p
) != m
) {
5108 check_inuse_chunk(m
, p
);
5110 if (RTCHECK(ok_address(m
, p
) && ok_inuse(p
))) {
5111 void ** b
= a
+ 1; /* try to merge with next chunk */
5112 mchunkptr next
= next_chunk(p
);
5113 if (b
!= fence
&& *b
== chunk2mem(next
)) {
5114 size_t newsize
= chunksize(next
) + psize
;
5115 set_inuse(m
, p
, newsize
);
5119 dispose_chunk(m
, p
, psize
);
5122 CORRUPTION_ERROR_ACTION(m
);
5127 if (should_trim(m
, m
->topsize
))
5135 #if MALLOC_INSPECT_ALL
5136 static void internal_inspect_all(mstate m
,
5137 void(*handler
)(void *start
,
5140 void* callback_arg
),
5142 if (is_initialized(m
)) {
5143 mchunkptr top
= m
->top
;
5145 for (s
= &m
->seg
; s
!= 0; s
= s
->next
) {
5146 mchunkptr q
= align_as_chunk(s
->base
);
5147 while (segment_holds(s
, q
) && q
->head
!= FENCEPOST_HEAD
) {
5148 mchunkptr next
= next_chunk(q
);
5149 size_t sz
= chunksize(q
);
5153 used
= sz
- CHUNK_OVERHEAD
; /* must not be mmapped */
5154 start
= chunk2mem(q
);
5158 if (is_small(sz
)) { /* offset by possible bookkeeping */
5159 start
= (void*)((char*)q
+ sizeof(struct malloc_chunk
));
5162 start
= (void*)((char*)q
+ sizeof(struct malloc_tree_chunk
));
5165 if (start
< (void*)next
) /* skip if all space is bookkeeping */
5166 handler(start
, next
, used
, arg
);
5174 #endif /* MALLOC_INSPECT_ALL */
5176 /* ------------------ Exported realloc, memalign, etc -------------------- */
5180 void* dlrealloc(void* oldmem
, size_t bytes
) {
5183 mem
= dlmalloc(bytes
);
5185 else if (bytes
>= MAX_REQUEST
) {
5186 MALLOC_FAILURE_ACTION
;
5188 #ifdef REALLOC_ZERO_BYTES_FREES
5189 else if (bytes
== 0) {
5192 #endif /* REALLOC_ZERO_BYTES_FREES */
5194 size_t nb
= request2size(bytes
);
5195 mchunkptr oldp
= mem2chunk(oldmem
);
5199 mstate m
= get_mstate_for(oldp
);
5201 USAGE_ERROR_ACTION(m
, oldmem
);
5204 #endif /* FOOTERS */
5205 if (!PREACTION(m
)) {
5206 mchunkptr newp
= try_realloc_chunk(m
, oldp
, nb
, 1);
5209 check_inuse_chunk(m
, newp
);
5210 mem
= chunk2mem(newp
);
5213 mem
= internal_malloc(m
, bytes
);
5215 size_t oc
= chunksize(oldp
) - overhead_for(oldp
);
5216 memcpy(mem
, oldmem
, (oc
< bytes
)? oc
: bytes
);
5217 internal_free(m
, oldmem
);
void* dlrealloc_in_place(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = gm;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
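/*
  Illustrative sketch, not part of the original sources: the in-place
  variant above either resizes oldmem without moving it or returns 0 and
  leaves the chunk untouched, so a caller can fall back to a moving
  realloc explicitly.  The helper name grow_preferably_in_place is
  hypothetical.

  static void* grow_preferably_in_place(void* p, size_t newsize) {
    if (p != 0 && dlrealloc_in_place(p, newsize) != 0)
      return p;                     // extended (or trimmed) where it sits
    return dlrealloc(p, newsize);   // may move; returns 0 on failure
  }
*/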
void* dlmemalign(size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT) {
    return dlmalloc(bytes);
  }
  return internal_memalign(gm, alignment, bytes);
}
int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment == MALLOC_ALIGNMENT)
    mem = dlmalloc(bytes);
  else {
    size_t d = alignment / sizeof(void*);
    size_t r = alignment % sizeof(void*);
    if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
      return EINVAL;
    else if (bytes <= MAX_REQUEST - alignment) {
      if (alignment < MIN_CHUNK_SIZE)
        alignment = MIN_CHUNK_SIZE;
      mem = internal_memalign(gm, alignment, bytes);
    }
  }
  if (mem == 0)
    return ENOMEM;
  else {
    *pp = mem;
    return 0;
  }
}
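/*
  Illustrative sketch, not part of the original sources: caller-side use of
  dlposix_memalign.  The alignment must be a power of two and a multiple of
  sizeof(void*); on success the function returns 0 and stores the pointer
  in *pp, otherwise an errno-style code such as EINVAL or ENOMEM is
  returned.  The helper name posix_memalign_demo is hypothetical.

  static void posix_memalign_demo(void) {
    void* p = 0;
    if (dlposix_memalign(&p, 64, 1000) == 0) {
      memset(p, 0, 1000);           // p is 64-byte aligned here
      dlfree(p);
    }
  }
*/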
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
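/*
  Worked example (not part of the original sources) of the rounding
  expression used by dlpvalloc above, assuming a 4096-byte page: a request
  of 5000 bytes becomes (5000 + 4095) & ~4095 = 8192, i.e. the smallest
  multiple of the page size that can hold the request.
*/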
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
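/*
  Illustrative sketch, not part of the original sources: using
  dlindependent_comalloc to carve several differently sized objects out of
  one system request.  Passing 0 for the chunk array asks the allocator to
  malloc the pointer array itself, so that array must eventually be freed
  too.  The names Header and comalloc_demo are hypothetical.

  struct Header { int refcount; };

  static void comalloc_demo(size_t payload_bytes) {
    size_t sizes[2];
    void** parts;
    sizes[0] = sizeof(struct Header);
    sizes[1] = payload_bytes;
    parts = dlindependent_comalloc(2, sizes, 0);
    if (parts != 0) {
      struct Header* h = (struct Header*)parts[0];
      h->refcount = 1;
      dlfree(parts[0]);             // each element may be freed on its own
      dlfree(parts[1]);
      dlfree(parts);                // the malloced pointer array
    }
  }
*/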
size_t dlbulk_free(void* array[], size_t nelem) {
  return internal_bulk_free(gm, array, nelem);
}
#if MALLOC_INSPECT_ALL
void dlmalloc_inspect_all(void(*handler)(void *start,
                                         void *end,
                                         size_t used_bytes,
                                         void* callback_arg),
                          void* arg) {
  ensure_initialization();
  if (!PREACTION(gm)) {
    internal_inspect_all(gm, handler, arg);
    POSTACTION(gm);
  }
}
#endif /* MALLOC_INSPECT_ALL */
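/*
  Illustrative sketch, not part of the original sources: a handler suitable
  for dlmalloc_inspect_all.  For each traversed region, start and end bound
  the space and used_bytes is 0 when the underlying chunk is free.  The
  handler runs while the allocator lock is held, so it should not itself
  call back into malloc or free.  The names count_used and total_in_use are
  hypothetical.

  static void count_used(void* start, void* end, size_t used_bytes, void* arg) {
    (void)start; (void)end;
    *(size_t*)arg += used_bytes;
  }

  static size_t total_in_use(void) {
    size_t total = 0;
    dlmalloc_inspect_all(count_used, &total);
    return total;
  }
*/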
int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}
size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

size_t dlmalloc_footprint_limit(void) {
  size_t maf = gm->footprint_limit;
  return maf == 0 ? MAX_SIZE_T : maf;
}

size_t dlmalloc_set_footprint_limit(size_t bytes) {
  size_t result;  /* invert sense of 0 */
  if (bytes == 0)
    result = granularity_align(1); /* Use minimal size */
  if (bytes == MAX_SIZE_T)
    result = 0;                    /* disable */
  else
    result = granularity_align(bytes);
  return gm->footprint_limit = result;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

#if !NO_MALLOC_STATS
void dlmalloc_stats() {
  internal_malloc_stats(gm);
}
#endif /* NO_MALLOC_STATS */

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

#endif /* !ONLY_MSPACES */
/* ----------------------------- user mspaces ---------------------------- */
static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  (void)INITIAL_LOCK(&m->mutex);
  msp->head = (msize|INUSE_BITS);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}
mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = USE_MMAP_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}
int mspace_track_large_chunks(mspace msp, int enable) {
  int ret = 0;
  mstate ms = (mstate)msp;
  if (!PREACTION(ms)) {
    if (!use_mmap(ms))
      ret = 1;
    if (!enable)
      enable_mmap(ms);
    else
      disable_mmap(ms);
    POSTACTION(ms);
  }
  return ret;
}
size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      (void)base; /* placate people compiling -Wunused-variable */
      sp = sp->next;
      if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return freed;
}
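/*
  Illustrative sketch, not part of the original sources: a typical
  create/use/destroy cycle for an independent mspace.  The helper name
  mspace_demo is hypothetical.  Passing capacity 0 requests the default
  granularity; the second argument selects locking.

  static void mspace_demo(void) {
    mspace msp = create_mspace(0, 0);
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      if (p != 0)
        mspace_free(msp, p);
      destroy_mspace(msp);          // releases all segments at once
    }
  }
*/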
/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}
void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    (void)msp; /* placate people compiling -Wunused */
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = mspace_malloc(msp, bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    mspace_free(msp, oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = (mstate)msp;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = mspace_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          mspace_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}
void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = (mstate)msp;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      (void)msp; /* placate people compiling -Wunused */
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  if (alignment <= MALLOC_ALIGNMENT)
    return mspace_malloc(msp, bytes);
  return internal_memalign(ms, alignment, bytes);
}
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}
size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
  return internal_bulk_free((mstate)msp, array, nelem);
}
#if MALLOC_INSPECT_ALL
void mspace_inspect_all(mspace msp,
                        void(*handler)(void *start,
                                       void *end,
                                       size_t used_bytes,
                                       void* callback_arg),
                        void* arg) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      internal_inspect_all(ms, handler, arg);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
}
#endif /* MALLOC_INSPECT_ALL */
int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}
#if !NO_MALLOC_STATS
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
}
#endif /* NO_MALLOC_STATS */
size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}

size_t mspace_footprint_limit(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    size_t maf = ms->footprint_limit;
    result = (maf == 0) ? MAX_SIZE_T : maf;
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return result;
}
, size_t bytes
) {
5929 mstate ms
= (mstate
)msp
;
5932 result
= granularity_align(1); /* Use minimal size */
5933 if (bytes
== MAX_SIZE_T
)
5934 result
= 0; /* disable */
5936 result
= granularity_align(bytes
);
5937 ms
->footprint_limit
= result
;
5940 USAGE_ERROR_ACTION(ms
,ms
);
#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */
size_t mspace_usable_size(const void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
/* -------------------- Alternative MORECORE functions ------------------- */
/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from the previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  A minimal sketch of this calling contract appears just below, ahead of
  the contributed macOS example.
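  As that minimal sketch (not from the original sources), a bump allocator
  over a fixed arena might look like the following; the names my_arena,
  my_brk and my_morecore are hypothetical.

    static char my_arena[1 << 20];
    static size_t my_brk = 0;

    void* my_morecore(int size) {
      if (size == 0)
        return my_arena + my_brk;          // one past the end of prior memory
      if (size < 0)
        return (void *) MFAIL;             // behave as if MORECORE_CANNOT_TRIM
      if (my_brk + (size_t)size > sizeof(my_arena))
        return (void *) MFAIL;
      my_brk += (size_t)size;
      return my_arena + (my_brk - (size_t)size);
    }

    #define MORECORE my_morecore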
  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines.

  #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.
  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
        return (void *) MFAIL;
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
      return sbrk_top;
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver
  void osCleanupMem(void)
  {
    void **ptr;
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
/* -----------------------------------------------------------------------
History:
    v2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
      * fix bad comparison in dlposix_memalign
      * don't reuse adjusted asize in sys_alloc
      * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
      * reduce compiler warnings -- thanks to all who reported/suggested these

    v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
      * Always perform unlink checks unless INSECURE
      * Add posix_memalign.
      * Improve realloc to expand in more cases; expose realloc_in_place.
        Thanks to Peter Buhr for the suggestion.
      * Add footprint_limit, inspect_all, bulk_free. Thanks
        to Barry Hayes and others for the suggestions.
      * Internal refactorings to avoid calls while holding locks
      * Use non-reentrant locks by default. Thanks to Roland McGrath
      * Small fixes to mspace_destroy, reset_on_error.
      * Various configuration extensions/changes. Thanks
        to all who contributed these.

    V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
      * Update Creative Commons URL

    V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
      * Use zeros instead of prev foot for is_mmapped
      * Add mspace_track_large_chunks; thanks to Jean Brouwers
      * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc; thanks to Wolfram Gloger.
      * Reentrant spin locks; thanks to Earl Chew and others
      * Win32 improvements; thanks to Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add 'pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/