1 /* ---------- To make a malloc.h, start cutting here ------------ */
/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu
8 * VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://g.oswego.edu/pub/misc/malloc.c
12 Check before installing!
14 * Why use this malloc?
16 This is not the fastest, most space-conserving, most portable, or
17 most tunable malloc ever written. However it is among the fastest
18 while also being among the most space-conserving, portable and tunable.
19 Consistent balance across these factors results in a good general-purpose
20 allocator. For a high-level description, see
21 http://g.oswego.edu/dl/html/malloc.html
23 * Synopsis of public routines
25 (Much fuller descriptions are contained in the program documentation below.)
  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
32 realloc(Void_t* p, size_t n);
33 Return a pointer to a chunk of size n that contains the same data
34 as does chunk p up to the minimum of (n, p's size) bytes, or null
35 if no space is available. The returned pointer may or may not be
36 the same as p. If p is null, equivalent to malloc. Unless the
37 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
38 size argument of zero (re)allocates a minimum-sized chunk.
39 memalign(size_t alignment, size_t n);
40 Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
45 size of the system (or as near to this as can be figured out from
46 all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
49 round up n to nearest pagesize.
50 calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
55 malloc_trim(size_t pad);
56 Release all but pad bytes of freed top-most memory back
57 to the system. Return 1 if successful, else 0.
58 malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
60 chunk p. This may or may not report more bytes than were requested,
61 due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
66 mallopt(int parameter_number, int parameter_value)
67 Changes one of the tunable parameters described below. Returns
68 1 if successful in changing the parameter, else 0.
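
  As a quick illustration (an example sketch, not part of this file),
  typical use of the routines above, assuming the default unprefixed
  names:

      #include <stdlib.h>
      int example(void)
      {
        char* p = (char*) malloc(100);   // at least 100 usable bytes
        char* q;
        if (p == 0) return -1;
        q = (char*) realloc(p, 200);     // may move; p must not be reused
        if (q == 0) { free(p); return -1; }
        free(q);
        return 0;
      }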
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.
76 Assumed pointer representation: 4 or 8 bytes
77 Code for 8-byte pointers is untested by me but has worked
78 reliably by Wolfram Gloger, who contributed most of the
79 changes supporting this.
81 Assumed size_t representation: 4 or 8 bytes
82 Note that size_t is allowed to be 4 bytes even if pointers are 8.
84 Minimum overhead per allocated chunk: 4 or 8 bytes
85 Each malloced chunk has a hidden overhead of 4 bytes holding size
86 and status information.
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
91 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
92 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
93 needed; 4 (8) for a trailing size field
94 and 8 (16) bytes for free list pointers. Thus, the minimum
95 allocatable size is 16/24/32 bytes.
97 Even a request for zero bytes (i.e., malloc(0)) returns a
98 pointer to something of the minimum allocatable size.
100 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
101 8-byte size_t: 2^63 - 16 bytes
103 It is assumed that (possibly signed) size_t bit values suffice to
104 represent chunk sizes. `Possibly signed' is due to the fact
105 that `size_t' may be defined on a system as either a signed or
106 an unsigned type. To be conservative, values that would appear
107 as negative numbers are avoided.
108 Requests for sizes with a negative sign bit when the request
  size is treated as a long will return null.
111 Maximum overhead wastage per allocated chunk: normally 15 bytes
  Alignment demands, plus the minimum allocatable size restriction
  make the normal worst-case wastage 15 bytes (i.e., up to 15
  more bytes will be allocated than were requested in malloc), with
  two exceptions:
117 1. Because requests for zero bytes allocate non-zero space,
118 the worst case wastage for a request of zero bytes is 24 bytes.
119 2. For requests >= mmap_threshold that are serviced via
120 mmap(), the worst case wastage is 8 bytes plus the remainder
121 from a system page (the minimal mmap unit); typically 4096 bytes.
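
  For example, with a 4-byte size_t: a request of 20 bytes becomes a
  24-byte chunk (20 bytes plus the 4-byte header is already a multiple
  of 8), of which 20 are usable; a request of 1 byte gets the 16-byte
  minimum chunk, i.e., 15 more bytes than requested -- the normal worst
  case cited above.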
125 Here are some features that are NOT currently supported
127 * No user-definable hooks for callbacks and the like.
128 * No automated mechanism for fully checking that all accesses
129 to malloced memory stay within their bounds.
130 * No support for compaction.
132 * Synopsis of compile-time options:
134 People have reported using previous versions of this malloc on all
135 versions of Unix, sometimes by tweaking some of the defines
136 below. It has been tested most extensively on Solaris and
137 Linux. It is also reported to work on WIN32 platforms.
138 People have also reported adapting this malloc for use in
139 stand-alone embedded systems.
141 The implementation is in straight, hand-tuned ANSI C. Among other
142 consequences, it uses a lot of macros. Because of this, to be at
143 all usable, this code should be compiled using an optimizing compiler
  (for example gcc -O2) that can simplify expressions and control flow.
147 __STD_C (default: derived from C compiler defines)
148 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
149 a C compiler sufficiently close to ANSI to get away with it.
150 DEBUG (default: NOT defined)
151 Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
154 SEPARATE_OBJECTS (default: NOT defined)
155 Define this to compile into separate .o files. You must then
156 compile malloc.c several times, defining a DEFINE_* macro each
157 time. The list of DEFINE_* macros appears below.
158 MALLOC_LOCK (default: NOT defined)
159 MALLOC_UNLOCK (default: NOT defined)
160 Define these to C expressions which are run to lock and unlock
161 the malloc data structures. Calls may be nested; that is,
162 MALLOC_LOCK may be called more than once before the corresponding
163 MALLOC_UNLOCK calls. MALLOC_LOCK must avoid waiting for a lock
164 that it already holds.
165 MALLOC_ALIGNMENT (default: NOT defined)
     Define this as 16 if you need 16-byte alignment instead of the
     normal default of 8-byte alignment.
168 SIZE_T_SMALLER_THAN_LONG (default: NOT defined)
     Define this when the platform you are compiling for has
     sizeof(long) > sizeof(size_t).
170 The option causes some extra code to be generated to handle operations
171 that use size_t operands and have long results.
172 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
173 Define this if you think that realloc(p, 0) should be equivalent
174 to free(p). Otherwise, since malloc returns a unique pointer for
175 malloc(0), so does realloc(p, 0).
176 HAVE_MEMCPY (default: defined)
177 Define if you are not otherwise using ANSI STD C, but still
178 have memcpy and memset in your C library and want to use them.
179 Otherwise, simple internal versions are supplied.
180 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
181 Define as 1 if you want the C library versions of memset and
182 memcpy called in realloc and calloc (otherwise macro versions are used).
183 At least on some platforms, the simple macro versions usually
184 outperform libc versions.
185 HAVE_MMAP (default: defined as 1)
186 Define to non-zero to optionally make malloc() use mmap() to
187 allocate very large blocks.
188 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
189 Define to non-zero to optionally make realloc() use mremap() to
190 reallocate very large blocks.
191 malloc_getpagesize (default: derived from system #includes)
192 Either a constant or routine call returning the system page size.
193 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
194 Optionally define if you are on a system with a /usr/include/malloc.h
195 that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but doing so will ensure consistency.
197 INTERNAL_SIZE_T (default: size_t)
198 Define to a 32-bit type (probably `unsigned int') if you are on a
199 64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very large chunks.
202 INTERNAL_LINUX_C_LIB (default: NOT defined)
203 Defined only when compiled as part of Linux libc.
204 Also note that there is some odd internal name-mangling via defines
205 (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     matter.
208 INTERNAL_NEWLIB (default: NOT defined)
     Defined only when compiled as part of the Cygnus newlib
     distribution.
211 WIN32 (default: undefined)
212 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
213 LACKS_UNISTD_H (default: undefined if not WIN32)
214 Define this if your system does not have a <unistd.h>.
215 LACKS_SYS_PARAM_H (default: undefined if not WIN32)
216 Define this if your system does not have a <sys/param.h>.
217 MORECORE (default: sbrk)
218 The name of the routine to call to obtain more memory from the system.
219 MORECORE_FAILURE (default: -1)
220 The value returned upon failure of MORECORE.
221 MORECORE_CLEARS (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
228 Default values of tunable parameters (described in detail below)
229 controlling interaction with host system routines (sbrk, mmap, etc).
230 These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
233 USE_DL_PREFIX (default: undefined)
234 Prefix all public routines with the string 'dl'. Useful to
235 quickly avoid procedure declaration conflicts and linker symbol
236 conflicts with existing memory allocation routines.
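
  For instance (an illustrative invocation, not a requirement), a
  standalone debug build with prefixed names might be compiled as

      gcc -O2 -DDEBUG -DUSE_DL_PREFIX -c malloc.c

  after which dlmalloc/dlfree/dlrealloc and friends link alongside the
  system allocator without symbol clashes.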
*/


/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */
/*
  Special defines for Cygnus newlib distribution.
*/
290 #ifdef INTERNAL_NEWLIB
292 #include <sys/config.h>
/*
   In newlib, all the publicly visible routines take a reentrancy
   pointer.  We don't currently do anything much with it, but we do
   pass it to the lock routine.
*/
302 #define POINTER_UINT unsigned _POINTER_INT
303 #define SEPARATE_OBJECTS
305 #define MORECORE(size) _sbrk_r(reent_ptr, (size))
306 #define MORECORE_CLEARS 0
307 #define MALLOC_LOCK __malloc_lock(reent_ptr)
308 #define MALLOC_UNLOCK __malloc_unlock(reent_ptr)
#ifdef SMALL_MEMORY
#define malloc_getpagesize (128)
#else
#define malloc_getpagesize (4096)
#endif
#if __STD_C
extern void __malloc_lock(struct _reent *);
extern void __malloc_unlock(struct _reent *);
#else
extern void __malloc_lock();
extern void __malloc_unlock();
#endif
#if __STD_C
#define RARG struct _reent *reent_ptr,
#define RONEARG struct _reent *reent_ptr
#else
#define RARG reent_ptr,
#define RONEARG reent_ptr
#define RDECL struct _reent *reent_ptr;
#endif
335 #define RCALL reent_ptr,
336 #define RONECALL reent_ptr
338 #else /* ! INTERNAL_NEWLIB */
#define POINTER_UINT unsigned long

#define RARG
#define RONEARG
#define RDECL
#define RCALL
#define RONECALL

#endif /* ! INTERNAL_NEWLIB */
/*
  Debugging:

  Because freed chunks may be overwritten with link fields, this
353 malloc will often die when freed memory is overwritten by user
354 programs. This can be very effective (albeit in an annoying way)
355 in helping track down dangling pointers.
357 If you compile with -DDEBUG, a number of assertion checks are
358 enabled that will catch more memory errors. You probably won't be
359 able to make much sense of the actual assertion errors, but they
360 should help you locate incorrectly overwritten memory. The
361 checking is fairly extensive, and will slow down execution
362 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
363 attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries.  (By nature, mmapped regions
365 cannot be checked very much automatically.)
367 Setting DEBUG may also be helpful if you are trying to modify
368 this code. The assertions in the check routines spell out in more
369 detail the assumptions and invariants underlying the algorithms.
*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif
/*
  SEPARATE_OBJECTS should be defined if you want each function to go
382 into a separate .o file. You must then compile malloc.c once per
  function, defining the appropriate DEFINE_ macro.  See below for the
  list.
*/
387 #ifndef SEPARATE_OBJECTS
#define DEFINE_MALLOC
#define DEFINE_FREE
#define DEFINE_REALLOC
#define DEFINE_CALLOC
#define DEFINE_CFREE
#define DEFINE_MEMALIGN
394 #define DEFINE_VALLOC
395 #define DEFINE_PVALLOC
396 #define DEFINE_MALLINFO
397 #define DEFINE_MALLOC_STATS
398 #define DEFINE_MALLOC_USABLE_SIZE
399 #define DEFINE_MALLOPT
#define STATIC static
#else
#define STATIC
#endif
/*
   Define MALLOC_LOCK and MALLOC_UNLOCK to C expressions to run to
   lock and unlock the malloc data structures.  MALLOC_LOCK may be
   called recursively.
 */

#ifndef MALLOC_LOCK
#define MALLOC_LOCK
#endif

#ifndef MALLOC_UNLOCK
#define MALLOC_UNLOCK
#endif
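
/*
  For example (a sketch only; nothing below is supplied by this file),
  a POSIX threads port could route both hooks through one recursive
  mutex, which satisfies the rule that MALLOC_LOCK may be reacquired
  while already held:

    #include <pthread.h>
    static pthread_mutex_t malloc_mutex =
      PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;    // GNU extension
    #define MALLOC_LOCK    pthread_mutex_lock(&malloc_mutex)
    #define MALLOC_UNLOCK  pthread_mutex_unlock(&malloc_mutex)
*/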
421 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
422 of chunk sizes. On a 64-bit machine, you can reduce malloc
423 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
424 at the expense of not being able to handle requests greater than
425 2^31. This limitation is hardly ever a concern; you are encouraged
426 to set this. However, the default version is the same as size_t.
429 #ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif
/*
  The following is needed on implementations where long > size_t.
435 The problem is caused because the code performs subtractions of
436 size_t values and stores the result in long values. In the case
437 where long > size_t and the first value is actually less than
438 the second value, the resultant value is positive. For example,
439 (long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF
  which is 2^32 - 1 instead of 0xFFFFFFFFFFFFFFFF.  This is due to the
  fact that assignment from unsigned to signed won't sign extend.
*/
444 #ifdef SIZE_T_SMALLER_THAN_LONG
#define long_sub_size_t(x, y) ( (x < y) ? -((long)(y - x)) : (x - y) )
#else
#define long_sub_size_t(x, y) ( (long)(x - y) )
#endif
/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
452 realloc with zero bytes should be the same as a call to free.
453 Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/
458 /* #define REALLOC_ZERO_BYTES_FREES */
/*
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
*/

#ifdef WIN32

#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

#endif /* WIN32 */
/*
  HAVE_MEMCPY should be defined if you are not otherwise using
489 ANSI STD C, but still have memcpy and memset in your C library
490 and want to use them in calloc and realloc. Otherwise simple
491 macro versions are defined here.
493 USE_MEMCPY should be defined as 1 if you actually want to
494 have memset and memcpy called. People report that the macro
495 versions are often enough faster than libc versions on many
  systems that it is better to use them.
*/

#ifndef HAVE_MEMCPY
#define HAVE_MEMCPY
#endif

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif
#if (__STD_C || defined(HAVE_MEMCPY))

#ifndef WIN32
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
// On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
// 'windows.h'
#endif

#else
void* memset();
void* memcpy();
#endif
#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0;                                   \
                                 *mz++ = 0;                                   \
      if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0;                                 \
                                   *mz++ = 0;                                 \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
    *mz++ = 0;                                                                \
    *mz++ = 0;                                                                \
    *mz   = 0;                                                                \
  } else memset((charp), 0, mzsz);                                            \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                         \
                                 *mcdst++ = *mcsrc++;                         \
      if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                       \
                                   *mcdst++ = *mcsrc++;                       \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
    *mcdst++ = *mcsrc++;                                                      \
    *mcdst++ = *mcsrc++;                                                      \
    *mcdst   = *mcsrc  ;                                                      \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
567 #else /* !USE_MEMCPY */
569 /* Use Duff's device for good zeroing/copying performance. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif /* ! USE_MEMCPY */
/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif
#if HAVE_MMAP

#include <fcntl.h>
#include <sys/mman.h>
639 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif
643 #endif /* HAVE_MMAP */
/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
/*
  This version of malloc supports the standard SVID/XPG mallinfo
707 routine that returns a struct containing the same kind of
708 information you can get from malloc_stats. It should work on
709 any SVID/XPG compliant system that has a /usr/include/malloc.h
710 defining struct mallinfo. (If you'd like to install such a thing
711 yourself, cut out the preliminary declarations as described above
712 and below and save them in a malloc.h file. But there's no
713 compelling reason to bother to do this.)
715 The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
717 bunch of fields, most of which are not even meaningful in this
  version of malloc.  Some of these fields are instead filled by
719 mallinfo() with other numbers that might possibly be of interest.
721 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
722 /usr/include/malloc.h file that includes a declaration of struct
723 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.
*/
729 /* #define HAVE_USR_INCLUDE_MALLOC_H */
731 #if HAVE_USR_INCLUDE_MALLOC_H
732 #include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
739 int ordblks; /* number of non-inuse chunks */
740 int smblks; /* unused -- always zero */
741 int hblks; /* number of mmapped regions */
742 int hblkhd; /* total space in mmapped regions */
743 int usmblks; /* unused -- always zero */
744 int fsmblks; /* unused -- always zero */
745 int uordblks; /* total allocated space */
746 int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};
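
/*
  For example (illustrative only), a program can snapshot these
  statistics with:

    struct mallinfo mi = mallinfo();
    printf("arena %d, in use %d, free %d, trimmable %d\n",
           mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
*/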
750 /* SVID2/XPG mallopt options */
752 #define M_MXFAST 1 /* UNUSED in this malloc */
753 #define M_NLBLKS 2 /* UNUSED in this malloc */
754 #define M_GRAIN 3 /* UNUSED in this malloc */
#define M_KEEP      4    /* UNUSED in this malloc */

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
759 /* mallopt options that actually do something */
#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4
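
/*
  For example (illustrative only), a long-lived server might tighten
  these at startup:

    mallopt(M_TRIM_THRESHOLD, 64 * 1024);    // trim unused top sooner
    mallopt(M_MMAP_THRESHOLD, 256 * 1024);   // mmap only larger requests
    mallopt(M_MMAP_MAX, 16);                 // cap simultaneous mmaps
*/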
768 #ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
#endif
/*
  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
774 to keep before releasing via malloc_trim in free().
776 Automatic trimming is mainly useful in long-lived programs.
777 Because trimming via sbrk can be slow on some systems, and can
778 sometimes be wasteful (in cases where programs immediately
779 afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing this much memory.
783 The trim threshold and the mmap control parameters (see below)
784 can be traded off with one another. Trimming and mmapping are
785 two different ways of releasing unused memory back to the
786 system. Between these two, it is often possible to keep
787 system-level demands of a long-lived program down to a bare
788 minimum. For example, in one test suite of sessions measuring
789 the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.
793 If you are using this malloc in a long-lived program, it should
794 pay to experiment with these values. As a rough guide, you
795 might set to a value close to the average size of a process
796 (program) running on your system. Releasing this much memory
797 would allow such a process to run in memory. Generally, it's
  worth it to tune for trimming rather than memory mapping when a
799 program undergoes phases where several large chunks are
800 allocated and released in ways that can reuse each other's
801 storage, perhaps mixed with phases where there are no such
802 chunks at all. And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.
806 However, in most programs, these parameters serve mainly as
807 protection against the system-level effects of carrying around
808 massive amounts of unneeded memory. Since frequent calls to
809 sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.
813 The default trim value is high enough to cause trimming only in
814 fairly extreme (by current memory consumption standards) cases.
815 It must be greater than page size to have any useful effect. To
  disable trimming completely, you can set it to (unsigned long)(-1);
*/
822 #ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD (0)
#endif
/*
  M_TOP_PAD is the amount of extra `padding' space to allocate or
828 retain whenever sbrk is called. It is used in two ways internally:
830 * When sbrk is called to extend the top of the arena to satisfy
    a new malloc request, this much padding is added to the sbrk
    request.
834 * When malloc_trim is called automatically from free(),
835 it is used as the `pad' argument.
837 In both cases, the actual amount of padding is rounded
838 so that the end of the arena is always a system page boundary.
840 The main reason for using padding is to avoid calling sbrk so
841 often. Having even a small pad greatly reduces the likelihood
842 that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes time.
846 Automatic rounding-up to page-size units is normally sufficient
847 to avoid measurable overhead, so the default is 0. However, in
848 systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than is
  actually needed.
*/
855 #ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif
/*
  M_MMAP_THRESHOLD is the request size threshold for using mmap()
862 to service a request. Requests of at least this size that cannot
863 be allocated using already-existing space will be serviced via mmap.
864 (If enough normal freed space already exists it is used instead.)
866 Using mmap segregates relatively large chunks of memory so that
867 they can be individually obtained and released from the host
868 system. A request serviced through mmap is never reused by any
869 other request (at least not directly; the system may just so
870 happen to remap successive requests to the same locations).
872 Segregating space in this way has the benefit that mmapped space
873 can ALWAYS be individually released back to the system, which
874 helps keep the system level memory demands of a long-lived
875 program low. Mapped memory can never become `locked' between
876 other chunks, as can happen with normally allocated chunks, which
  means that even trimming via malloc_trim would not release them.
879 However, it has the disadvantages that:
881 1. The space cannot be reclaimed, consolidated, and then
882 used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements.
885 3. It causes malloc performance to be more dependent on host
886 system memory management support routines which may vary in
887 implementation quality and may impose arbitrary
888 limitations. Generally, servicing a request via normal
889 malloc steps is faster than going through a system's mmap.
891 All together, these considerations should lead you to use mmap
  only for relatively large requests.
*/
899 #ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif
/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
909 service using mmap. This parameter exists because:
   1. Some systems have a limited number of internal tables for
      use by mmap.
   2. In most systems, overreliance on mmap can degrade overall
      performance.
915 3. If a program allocates many large regions, it is probably
916 better off using normal sbrk-based allocation routines that
917 can reclaim and reallocate normal heap memory. Using a
918 small value allows transition into this mode after the
919 first few allocations.
921 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
922 the default value is 0, and attempts to set it to non-zero values
  in mallopt will fail.
*/
/*
  USE_DL_PREFIX will prefix all public routines with the string 'dl'.
931 Useful to quickly avoid procedure declaration conflicts and linker
  symbol conflicts with existing memory allocation routines.
*/
936 /* #define USE_DL_PREFIX */
/*
  Special defines for Linux libc
945 Except when compiled using these special defines for Linux libc
946 using weak aliases, this malloc is NOT designed to work in
947 multithreaded applications. No semaphores or other concurrency
948 control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
950 semaphore could be used across malloc, realloc, and free (which is
951 essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.
*/
957 #ifdef INTERNAL_LINUX_C_LIB
#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif
971 #define MORECORE (*__morecore)
972 #define MORECORE_FAILURE 0
973 #define MORECORE_CLEARS 1
975 #else /* INTERNAL_LINUX_C_LIB */
977 #ifndef INTERNAL_NEWLIB
#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif
#endif /* ! INTERNAL_NEWLIB */

#ifndef MORECORE
#define MORECORE sbrk
#endif
989 #ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif
997 #endif /* INTERNAL_LINUX_C_LIB */
999 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
1001 #define cALLOc __libc_calloc
1002 #define fREe __libc_free
1003 #define mALLOc __libc_malloc
1004 #define mEMALIGn __libc_memalign
1005 #define rEALLOc __libc_realloc
1006 #define vALLOc __libc_valloc
1007 #define pvALLOc __libc_pvalloc
1008 #define mALLINFo __libc_mallinfo
1009 #define mALLOPt __libc_mallopt
1011 #pragma weak calloc = __libc_calloc
1012 #pragma weak free = __libc_free
1013 #pragma weak cfree = __libc_free
1014 #pragma weak malloc = __libc_malloc
1015 #pragma weak memalign = __libc_memalign
1016 #pragma weak realloc = __libc_realloc
1017 #pragma weak valloc = __libc_valloc
1018 #pragma weak pvalloc = __libc_pvalloc
1019 #pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt
#endif /* INTERNAL_LINUX_C_LIB && __ELF__ */
1024 #ifdef INTERNAL_NEWLIB
1026 #define cALLOc _calloc_r
1027 #define fREe _free_r
1028 #define mALLOc _malloc_r
1029 #define mEMALIGn _memalign_r
1030 #define rEALLOc _realloc_r
1031 #define vALLOc _valloc_r
1032 #define pvALLOc _pvalloc_r
1033 #define mALLINFo _mallinfo_r
1034 #define mALLOPt _mallopt_r
1036 #define malloc_stats _malloc_stats_r
1037 #define malloc_trim _malloc_trim_r
1038 #define malloc_usable_size _malloc_usable_size_r
1040 #define malloc_update_mallinfo __malloc_update_mallinfo
1042 #define malloc_av_ __malloc_av_
1043 #define malloc_current_mallinfo __malloc_current_mallinfo
1044 #define malloc_max_sbrked_mem __malloc_max_sbrked_mem
1045 #define malloc_max_total_mem __malloc_max_total_mem
1046 #define malloc_sbrk_base __malloc_sbrk_base
1047 #define malloc_top_pad __malloc_top_pad
1048 #define malloc_trim_threshold __malloc_trim_threshold
1050 #else /* ! INTERNAL_NEWLIB */
1052 #ifdef USE_DL_PREFIX
#define cALLOc      dlcalloc
#define fREe        dlfree
1055 #define mALLOc dlmalloc
1056 #define mEMALIGn dlmemalign
1057 #define rEALLOc dlrealloc
1058 #define vALLOc dlvalloc
1059 #define pvALLOc dlpvalloc
1060 #define mALLINFo dlmallinfo
1061 #define mALLOPt dlmallopt
1062 #else /* USE_DL_PREFIX */
#define cALLOc      calloc
#define fREe        free
1065 #define mALLOc malloc
1066 #define mEMALIGn memalign
1067 #define rEALLOc realloc
1068 #define vALLOc valloc
1069 #define pvALLOc pvalloc
1070 #define mALLINFo mallinfo
1071 #define mALLOPt mallopt
1072 #endif /* USE_DL_PREFIX */
1074 #endif /* ! INTERNAL_NEWLIB */
/* Public routines */

#if __STD_C
1081 Void_t* mALLOc(RARG size_t);
1082 void fREe(RARG Void_t*);
1083 Void_t* rEALLOc(RARG Void_t*, size_t);
1084 Void_t* mEMALIGn(RARG size_t, size_t);
1085 Void_t* vALLOc(RARG size_t);
1086 Void_t* pvALLOc(RARG size_t);
1087 Void_t* cALLOc(RARG size_t, size_t);
1088 void cfree(Void_t*);
1089 int malloc_trim(RARG size_t);
1090 size_t malloc_usable_size(RARG Void_t*);
1091 void malloc_stats(RONEARG);
1092 int mALLOPt(RARG int, int);
1093 struct mallinfo mALLINFo(RONEARG);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif
#ifdef __cplusplus
};  /* end of extern "C" */
#endif
1115 /* ---------- To make a malloc.h, end cutting here ------------ */
/*
  Emulation of sbrk for WIN32.
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/

#ifdef WIN32
1128 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
1129 ~(malloc_getpagesize-1))
1130 #define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
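
/* For example: with a 4096-byte page, AlignPage (5000) rounds up to
   8192, and AlignPage64K (5000) rounds up to 0x10000 (65536). */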
/* reserve 64MB to ensure large contiguous space */
1133 #define RESERVED_SIZE (1024*1024*64)
1134 #define NEXT_SIZE (2048*1024)
1135 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
1137 struct GmListElement;
1138 typedef struct GmListElement GmListElement;
struct GmListElement
{
	GmListElement* next;
	void* base;
};
1146 static GmListElement* head = 0;
1147 static unsigned int gNextAddress = 0;
1148 static unsigned int gAddressBase = 0;
1149 static unsigned int gAllocatedSize = 0;
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}
void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
							gNextAddress - gAddressBase,
							MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
1191 if (size >= TOP_MEMORY) return NULL;
1193 while ((unsigned long)start_address + size < TOP_MEMORY)
1195 VirtualQuery (start_address, &info, sizeof (info));
1196 if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			// Requested region is not available so see if the
			// next region is available.  Set 'start_address'
			// to the next region and call 'VirtualQuery()'
			// again.
1205 start_address = (char*)info.BaseAddress + info.RegionSize;
1207 // Make sure we start looking for the next region
1208 // on the *next* 64K boundary. Otherwise, even if
1209 // the new region is free according to
1210 // 'VirtualQuery()', the subsequent call to
1211 // 'VirtualAlloc()' (which follows the call to
1212 // this routine in 'wsbrk()') will round *down*
1213 // the requested address to a 64K boundary which
1214 // we already know is an address in the
1215 // unavailable region. Thus, the subsequent call
1216 // to 'VirtualAlloc()' will fail and bring us back
1217 // here, causing us to go into an infinite loop.
			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;
}
void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
1235 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1236 gNextAddress = gAddressBase =
1237 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1238 MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
					gAllocatedSize))
		{
1242 long new_size = max (NEXT_SIZE, AlignPage (size));
1243 void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*) -1;
1251 gAddressBase = gNextAddress =
1252 (unsigned int)VirtualAlloc (new_address, new_size,
1253 MEM_RESERVE, PAGE_NOACCESS);
1254 // repeat in case of race condition
1255 // The region that we found has been snagged
1256 // by another thread
			}
			while (gAddressBase == 0);
1260 assert (new_address == (void*)gAddressBase);
1262 gAllocatedSize = new_size;
			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1271 (size + gNextAddress -
1272 AlignPage (gNextAddress)),
								MEM_COMMIT, PAGE_READWRITE);
			if (res == NULL)
				return (void*) -1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return (void*)tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
1284 /* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
						 MEM_DECOMMIT);
1289 gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
						 MEM_DECOMMIT);
1296 gNextAddress = gAddressBase;
		}
	}
	return (void*)gNextAddress;
}

#endif /* WIN32 */
struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1318 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1319 struct malloc_chunk* fd; /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;
/*
   malloc_chunk details:
1329 (The following includes lightly edited explanations by Colin Plumb.)
1331 Chunks of memory are maintained using a `boundary tag' method as
1332 described in e.g., Knuth or Standish. (See the paper by Paul
1333 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1334 survey of such techniques.) Sizes of free chunks are stored both
1335 in the front of each chunk and at the end. This makes
1336 consolidating fragmented chunks into bigger chunks very fast. The
1337 size fields also hold bits representing whether chunks are free or
1340 An allocated chunk looks like this:
1343 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1344 | Size of previous chunk, if allocated | |
1345 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1346 | Size of chunk, in bytes |P|
1347 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1348 | User data starts here... .
1350 . (malloc_usable_space() bytes) .
1352 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1354 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1357 Where "chunk" is the front of the chunk for the purpose of most of
1358 the malloc code, but "mem" is the pointer that is returned to the
1359 user. "Nextchunk" is the beginning of the next contiguous chunk.
    Chunks always begin on even word boundaries, so the mem portion
1362 (which is returned to the user) is also on an even word boundary, and
1363 thus double-word aligned.
1365 Free chunks are stored in circular doubly-linked lists, and look like this:
1367 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1368 | Size of previous chunk |
1369 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1370 `head:' | Size of chunk, in bytes |P|
1371 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1372 | Forward pointer to next chunk in list |
1373 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1374 | Back pointer to previous chunk in list |
1375 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1376 | Unused space (may be 0 bytes long) .
1379 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1380 `foot:' | Size of chunk, in bytes |
1381 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1383 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1384 chunk size (which is always a multiple of two words), is an in-use
1385 bit for the *previous* chunk. If that bit is *clear*, then the
1386 word before the current chunk size contains the previous chunk
1387 size, and can be used to find the front of the previous chunk.
1388 (The very first chunk allocated always has this bit set,
1389 preventing access to non-existent (or non-owned) memory.)
1391 Note that the `foot' of the current chunk is actually represented
1392 as the prev_size of the NEXT chunk. (This makes it easier to
1393 deal with alignments etc).
1395 The two exceptions to all this are
1397 1. The special chunk `top', which doesn't bother using the
1398 trailing size field since there is no
1399 next contiguous chunk that would have to index off it. (After
1400 initialization, `top' is forced to always exist. If it would
       become less than MINSIZE bytes long, it is replenished via
       malloc_extend_top.)
1404 2. Chunks allocated via mmap, which have the second-lowest-order
1405 bit (IS_MMAPPED) set in their size fields. Because they are
1406 never merged or traversed from any other chunk, they have no
1407 foot size or inuse information.
1409 Available chunks are kept in any of several places (all declared below):
1411 * `av': An array of chunks serving as bin headers for consolidated
1412 chunks. Each bin is doubly linked. The bins are approximately
1413 proportionally (log) spaced. There are a lot of these bins
1414 (128). This may look excessive, but works very well in
1415 practice. All procedures maintain the invariant that no
1416 consolidated chunk physically borders another one. Chunks in
1417 bins are kept in size order, with ties going to the
1418 approximately least recently used chunk.
1420 The chunks in each bin are maintained in decreasing sorted order by
1421 size. This is irrelevant for the small bins, which all contain
1422 the same-sized chunks, but facilitates best-fit allocation for
1423 larger chunks. (These lists are just sequential. Keeping them in
1424 order almost never requires enough traversal to warrant using
1425 fancier ordered data structures.) Chunks of the same size are
1426 linked with the most recently freed at the front, and allocations
1427 are taken from the back. This results in LRU or FIFO allocation
1428 order, which tends to give each chunk an equal opportunity to be
1429 consolidated with adjacent freed chunks, resulting in larger free
1430 chunks and less fragmentation.
1432 * `top': The top-most available chunk (i.e., the one bordering the
1433 end of available memory) is treated specially. It is never
1434 included in any bin, is used only if no other chunk is
1435 available, and is released back to the system if it is very
1436 large (see M_TRIM_THRESHOLD).
1438 * `last_remainder': A bin holding only the remainder of the
1439 most recently split (non-top) chunk. This bin is checked
1440 before other non-fitting chunks, so as to provide better
1441 locality for runs of sequentially allocated chunks.
1443 * Implicitly, through the host system's memory mapping tables.
1444 If supported, requests greater than a threshold are usually
      serviced via calls to mmap, and then later released via munmap.
*/
1454 /* sizes, alignments */
1456 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1457 #ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGN           8
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#else
#define MALLOC_ALIGN           MALLOC_ALIGNMENT
#endif
1463 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1464 #define MINSIZE (sizeof(struct malloc_chunk))
1466 /* conversion from malloc headers to user pointers, and back */
1468 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1469 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1471 /* pad request bytes into a usable size */
1473 #define request2size(req) \
1474 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1475 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
1476 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1478 /* Check if m has acceptable alignment */
1480 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
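
/* For example, with 4-byte INTERNAL_SIZE_T (SIZE_SZ == 4, so
   MALLOC_ALIGN_MASK == 7): request2size(20) == (20+4+7) & ~7 == 24,
   and request2size(1) == 16 (clamped up to MINSIZE).  In both cases
   the mem pointer handed back by chunk2mem satisfies aligned_OK. */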
/*
  Physical chunk operations
*/
1490 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1492 #define PREV_INUSE 0x1
1494 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1496 #define IS_MMAPPED 0x2
1498 /* Bits to mask off when extracting size */
1500 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1503 /* Ptr to next physical malloc_chunk. */
1505 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1507 /* Ptr to previous physical malloc_chunk */
1509 #define prev_chunk(p)\
1510 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1513 /* Treat space at ptr + offset as a chunk */
1515 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
/*
  Dealing with use bits
*/
/* extract p's inuse bit */

#define inuse(p) \
 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1529 /* extract inuse bit of previous chunk */
1531 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1533 /* check for mmap()'ed chunk */
1535 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1537 /* set/clear chunk as in use without otherwise disturbing */
1539 #define set_inuse(p)\
1540 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1542 #define clear_inuse(p)\
1543 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1545 /* check/set/clear inuse bits in known places */
1547 #define inuse_bit_at_offset(p, s)\
1548 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1550 #define set_inuse_bit_at_offset(p, s)\
1551 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1553 #define clear_inuse_bit_at_offset(p, s)\
1554 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
/*
  Dealing with size fields
*/
1563 /* Get size, ignoring use bits */
1565 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1567 /* Set size at head, without disturbing its use bit */
1569 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1571 /* Set size/use ignoring previous bits in header */
1573 #define set_head(p, s) ((p)->size = (s))
1575 /* Set size at footer (only when chunk is not in use) */
1577 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
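
/*
  Taken together, these macros let the allocator walk physical chunks
  with no separate index.  An illustrative fragment (a sketch, not code
  from this file) of the consolidation test made during free():

    mchunkptr p    = mem2chunk(mem);     // header of the freed block
    mchunkptr next = next_chunk(p);      // forward physical neighbor
    if (!inuse(next))                    // neighbor free? merge sizes
      ... consolidate p with next ...
    if (!prev_inuse(p))                  // backward neighbor free?
      ... consolidate prev_chunk(p) with p, found via its foot ...
*/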
/*
   The bins, `av_', are an array of pairs of pointers serving as the
1587 heads of (initially empty) doubly-linked lists of chunks, laid out
1588 in a way so that each pair can be treated as if it were in a
1589 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1590 and chunks are the same).
1592 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1593 8 bytes apart. Larger bins are approximately logarithmically
1594 spaced. (See the table below.) The `av_' array is never mentioned
   directly in the code, but instead via bin access macros.

   Bin layout:

     64 bins of size       8
     32 bins of size      64
     16 bins of size     512
      8 bins of size    4096
      4 bins of size   32768
      2 bins of size  262144
      1 bin  of size what's left
1607 There is actually a little bit of slop in the numbers in bin_index
1608 for the sake of speed. This makes no difference elsewhere.
1610 The special chunks `top' and `last_remainder' get their own bins,
1611 (this is implemented via yet more trickery with the av_ array),
1612 although `top' is never properly linked to its bin since it is
   always handled specially.
*/
1617 #ifdef SEPARATE_OBJECTS
#define av_ malloc_av_
#endif
1621 #define NAV 128 /* number of bins */
1623 typedef struct malloc_chunk* mbinptr;
1627 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1628 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1629 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/
1637 #define top (bin_at(0)->fd) /* The topmost chunk */
1638 #define last_remainder (bin_at(1)) /* remainder from last split */
/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet.  But we still need to in malloc_extend_top.
*/
1648 #define initial_top ((mchunkptr)(bin_at(0)))
1650 /* Helper macro to initialize bins */
1652 #define IAV(i) bin_at(i), bin_at(i)
1654 #ifdef DEFINE_MALLOC
1655 STATIC mbinptr av_[NAV * 2 + 2] = {
1657 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1658 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1659 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1660 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1661 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1662 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1663 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1664 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1665 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1666 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1667 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1668 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1669 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1670 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1671 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
  IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};
#else
extern mbinptr av_[NAV * 2 + 2];
#endif
1680 /* field-extraction macros */
1682 #define first(b) ((b)->fd)
1683 #define last(b) ((b)->bk)
#define bin_index(sz)                                                         \
(((((unsigned long)(sz)) >> 9) ==    0) ?  (((unsigned long)(sz)) >>  3):     \
 ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >> 6): \
 ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >> 9): \
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12):\
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15):\
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
                                          126)
/*
  bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/
1702 #define MAX_SMALLBIN_SIZE 512
1703 #define SMALLBIN_WIDTH 8
1704 #define SMALLBIN_WIDTH_BITS 3
1705 #define MAX_SMALLBIN (MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1
1707 #define smallbin_index(sz) (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)
/*
   Requests are `small' if both the corresponding and the next bin are small
*/
1713 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
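
/* Examples (illustrative): a free chunk of 40 bytes lands in
   smallbin_index(40) == 40 >> 3 == bin 5, which holds only 40-byte
   chunks; a 600-byte chunk has 600 >> 9 == 1, so bin_index gives
   56 + (600 >> 6) == 65, a logarithmically spaced large bin. */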
/*
   To help compensate for the large number of bins, a one-level index
1719 structure is used for bin-by-bin searching. `binblocks' is a
1720 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1721 have any (possibly) non-empty bins, so they can be skipped over
   all at once during traversals.  The bits are NOT always
1723 cleared as soon as all bins in a block are empty, but instead only
   when all are noticed to be empty during traversal in malloc.
*/
1727 #define BINBLOCKWIDTH 4 /* bins per block */
1729 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1731 /* bin<->block macros */
1733 #define idx2binblock(ix) ((unsigned long)1 << (ix / BINBLOCKWIDTH))
1734 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1735 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
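
/* For example, bin index 37 belongs to block 37/BINBLOCKWIDTH == 9,
   so mark_binblock(37) sets bit (1 << 9) in binblocks, and the whole
   4-bin group can be skipped in one test whenever that bit is clear. */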
1741 /* Other static bookkeeping data */
1743 #ifdef SEPARATE_OBJECTS
1744 #define trim_threshold malloc_trim_threshold
1745 #define top_pad malloc_top_pad
1746 #define n_mmaps_max malloc_n_mmaps_max
1747 #define mmap_threshold malloc_mmap_threshold
1748 #define sbrk_base malloc_sbrk_base
1749 #define max_sbrked_mem malloc_max_sbrked_mem
1750 #define max_total_mem malloc_max_total_mem
1751 #define current_mallinfo malloc_current_mallinfo
1752 #define n_mmaps malloc_n_mmaps
1753 #define max_n_mmaps malloc_max_n_mmaps
1754 #define mmapped_mem malloc_mmapped_mem
#define max_mmapped_mem malloc_max_mmapped_mem
#endif
1758 /* variables holding tunable values */
1760 #ifdef DEFINE_MALLOC
1762 STATIC unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1763 STATIC unsigned long top_pad = DEFAULT_TOP_PAD;
#if HAVE_MMAP
STATIC unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
STATIC unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
#endif
1769 /* The first value returned from sbrk */
1770 STATIC char* sbrk_base = (char*)(-1);
1772 /* The maximum memory obtained from system via sbrk */
1773 STATIC unsigned long max_sbrked_mem = 0;
1775 /* The maximum via either sbrk or mmap */
1776 STATIC unsigned long max_total_mem = 0;
1778 /* internal working copy of mallinfo */
1779 STATIC struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1783 /* Tracking mmaps */
#if HAVE_MMAP
STATIC unsigned int n_mmaps = 0;
STATIC unsigned int max_n_mmaps = 0;
STATIC unsigned long mmapped_mem = 0;
STATIC unsigned long max_mmapped_mem = 0;
#endif
1792 #else /* ! DEFINE_MALLOC */
1794 extern unsigned long trim_threshold;
1795 extern unsigned long top_pad;
#if HAVE_MMAP
extern unsigned int n_mmaps_max;
extern unsigned long mmap_threshold;
#endif
1800 extern char* sbrk_base;
1801 extern unsigned long max_sbrked_mem;
1802 extern unsigned long max_total_mem;
1803 extern struct mallinfo current_mallinfo;
#if HAVE_MMAP
extern unsigned int n_mmaps;
extern unsigned int max_n_mmaps;
extern unsigned long mmapped_mem;
extern unsigned long max_mmapped_mem;
#endif
1811 #endif /* ! DEFINE_MALLOC */
1813 /* The total memory obtained from system via sbrk */
1814 #define sbrked_mem (current_mallinfo.arena)
/*
  These routines make a number of assertions about the states
1827 of data structures that should be true at all times. If any
1828 are not true, it's very likely that a user program has somehow
  trashed memory.  (It's also possible that there is a coding error
  in malloc, in which case please report it!)
*/
#if DEBUG
#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
1839 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1841 /* No checkable chunk is mmapped */
1842 assert(!chunk_is_mmapped(p));
1844 /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);
}
#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
1860 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1861 mchunkptr next = chunk_at_offset(p, sz);
  do_check_chunk(p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));
1868 /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
1871 assert((sz & MALLOC_ALIGN_MASK) == 0);
1872 assert(aligned_OK(chunk2mem(p)));
1873 /* ... matching footer field */
1874 assert(next->prev_size == sz);
1875 /* ... and is fully consolidated */
1876 assert(prev_inuse(p));
1877 assert (next == top || inuse(next));
1879 /* ... and has minimally sane links */
1880 assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}
#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mchunkptr next = next_chunk(p);
  do_check_chunk(p);

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  /* ... and is surrounded by OK chunks.
    Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }
  if (next == top)
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);
}
#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = long_sub_size_t(sz, s);

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((long)sz >= (long)MINSIZE);
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));

  /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));
}
1946 #define check_free_chunk(P) do_check_free_chunk(P)
1947 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1948 #define check_chunk(P) do_check_chunk(P)
1949 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif
/*
  Macro-based internal utilities
*/
/*
  Linking chunks in bin lists.
  Call these only with variables, not arbitrary expressions, as arguments.
*/
1970 Place chunk p of size s in its bin, in size order,
1971 putting it ahead of others of same size.
1975 #define frontlink(P, S, IDX, BK, FD) \
1977 if (S < MAX_SMALLBIN_SIZE) \
1979 IDX = smallbin_index(S); \
1980 mark_binblock(IDX); \
1985 FD->bk = BK->fd = P; \
1989 IDX = bin_index(S); \
1992 if (FD == BK) mark_binblock(IDX); \
1995 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
2000 FD->bk = BK->fd = P; \
2005 /* take a chunk off a list */
2007 #define unlink(P, BK, FD) \
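/* The body elided above is, in conventional dlmalloc form, the
   standard doubly-linked-list delete (a sketch, assuming the usual
   fd/bk link fields; check against your copy of the source):

     BK = P->bk;
     FD = P->fd;
     FD->bk = BK;
     BK->fd = FD;
*/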
2015 /* Place p as the last remainder */
2017 #define link_last_remainder(P) \
2019 last_remainder->fd = last_remainder->bk = P; \
2020 P->fd = P->bk = last_remainder; \
2023 /* Clear the last_remainder bin */
2025 #define clear_last_remainder \
2026 (last_remainder->fd = last_remainder->bk = last_remainder)
2033 /* Routines dealing with mmap(). */
2037 #ifdef DEFINE_MALLOC
2040 static mchunkptr mmap_chunk(size_t size)
2042 static mchunkptr mmap_chunk(size) size_t size;
2045 size_t page_mask = malloc_getpagesize - 1;
2048 #ifndef MAP_ANONYMOUS
2052 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
2054 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
2055 * there is no following chunk whose prev_size field could be used.
2057 size = (size + SIZE_SZ + page_mask) & ~page_mask;
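/* E.g. with 4096-byte pages and SIZE_SZ == 4, a request of 10000
 * bytes becomes (10000 + 4 + 4095) & ~4095 == 12288, i.e. three pages. */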
2059 #ifdef MAP_ANONYMOUS
2060 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
2061 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
2062 #else /* !MAP_ANONYMOUS */
2065 fd = open("/dev/zero", O_RDWR);
2066 if(fd < 0) return 0;
2068 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
2071 if(p == (mchunkptr)-1) return 0;
2074 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
2076 /* We demand that eight bytes into a page must be 8-byte aligned. */
2077 assert(aligned_OK(chunk2mem(p)));
2079 /* The offset to the start of the mmapped region is stored
2080 * in the prev_size field of the chunk; normally it is zero,
2081 * but that can be changed in memalign().
2084 set_head(p, size|IS_MMAPPED);
2086 mmapped_mem += size;
2087 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2088 max_mmapped_mem = mmapped_mem;
2089 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2090 max_total_mem = mmapped_mem + sbrked_mem;
2094 #endif /* DEFINE_MALLOC */
2096 #ifdef SEPARATE_OBJECTS
2097 #define munmap_chunk malloc_munmap_chunk
2103 STATIC void munmap_chunk(mchunkptr p)
2105 STATIC void munmap_chunk(p) mchunkptr p;
2108 INTERNAL_SIZE_T size = chunksize(p);
2111 assert (chunk_is_mmapped(p));
2112 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2113 assert((n_mmaps > 0));
2114 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2117 mmapped_mem -= (size + p->prev_size);
2119 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2121 /* munmap returns non-zero on failure */
2125 #else /* ! DEFINE_FREE */
2128 extern void munmap_chunk(mchunkptr);
2130 extern void munmap_chunk();
2133 #endif /* ! DEFINE_FREE */
2137 #ifdef DEFINE_REALLOC
2140 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2142 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2145 size_t page_mask = malloc_getpagesize - 1;
2146 INTERNAL_SIZE_T offset = p->prev_size;
2147 INTERNAL_SIZE_T size = chunksize(p);
2150 assert (chunk_is_mmapped(p));
2151 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2152 assert((n_mmaps > 0));
2153 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2155 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
2156 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
2158 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2160 if (cp == (char *)-1) return 0;
2162 p = (mchunkptr)(cp + offset);
2164 assert(aligned_OK(chunk2mem(p)));
2166 assert((p->prev_size == offset));
2167 set_head(p, (new_size - offset)|IS_MMAPPED);
2169 mmapped_mem -= size + offset;
2170 mmapped_mem += new_size;
2171 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2172 max_mmapped_mem = mmapped_mem;
2173 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2174 max_total_mem = mmapped_mem + sbrked_mem;
2178 #endif /* DEFINE_REALLOC */
2180 #endif /* HAVE_MREMAP */
2182 #endif /* HAVE_MMAP */
2187 #ifdef DEFINE_MALLOC
2190 Extend the top-most chunk by obtaining memory from system.
2191 Main interface to sbrk (but see also malloc_trim).
2195 static void malloc_extend_top(RARG INTERNAL_SIZE_T nb)
2197 static void malloc_extend_top(RARG nb) RDECL INTERNAL_SIZE_T nb;
2200 char* brk; /* return value from sbrk */
2201 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2202 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2203 char* new_brk; /* return of 2nd sbrk call */
2204 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2206 mchunkptr old_top = top; /* Record state of old top */
2207 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2208 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2210 /* Pad request with top_pad plus minimal overhead */
2212 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2213 unsigned long pagesz = malloc_getpagesize;
2215 /* If not the first time through, round to preserve page boundary */
2216 /* Otherwise, we need to correct to a page size below anyway. */
2217 /* (We also correct below if there was an intervening foreign sbrk call.) */
2219 if (sbrk_base != (char*)(-1))
2220 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2222 brk = (char*)(MORECORE (sbrk_size));
2224 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2225 if (brk == (char*)(MORECORE_FAILURE) ||
2226 (brk < old_end && old_top != initial_top))
2229 sbrked_mem += sbrk_size;
2231 if (brk == old_end) /* can just add bytes to current top */
2233 top_size = sbrk_size + old_top_size;
2234 set_head(top, top_size | PREV_INUSE);
2238 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2240 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2241 sbrked_mem += brk - (char*)old_end;
2243 /* Guarantee alignment of first new chunk made from this space */
2244 front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2245 if (front_misalign > 0)
2247 correction = (MALLOC_ALIGNMENT) - front_misalign;
2253 /* Guarantee the next brk will be at a page boundary */
2254 correction += (((((POINTER_UINT)(brk + sbrk_size))+(pagesz-1)) &
2255 ~(pagesz - 1)) - ((POINTER_UINT)(brk + sbrk_size));
2257 /* Allocate correction */
2258 new_brk = (char*)(MORECORE (correction));
2259 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2261 sbrked_mem += correction;
2263 top = (mchunkptr)brk;
2264 top_size = new_brk - brk + correction;
2265 set_head(top, top_size | PREV_INUSE);
2267 if (old_top != initial_top)
2270 /* There must have been an intervening foreign sbrk call. */
2271 /* A double fencepost is necessary to prevent consolidation */
2273 /* If not enough space to do this, then user did something very wrong */
2274 if (old_top_size < MINSIZE)
2276 set_head(top, PREV_INUSE); /* will force null return from malloc */
2280 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2281 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2282 set_head_size(old_top, old_top_size);
2283 chunk_at_offset(old_top, old_top_size )->size = SIZE_SZ|PREV_INUSE;
2285 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size = SIZE_SZ|PREV_INUSE;
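      /* Resulting layout (sketch): old_top is shrunk by 3*SIZE_SZ and
         followed by two minimal "chunks" whose size words are
         SIZE_SZ|PREV_INUSE.  Neither fencepost can ever be consolidated
         across, so the old region stays sealed off from the new,
         non-contiguous space just obtained. */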
2287 /* If possible, release the rest. */
2288 if (old_top_size >= MINSIZE)
2289 fREe(RCALL chunk2mem(old_top));
2293 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2294 max_sbrked_mem = sbrked_mem;
2296 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2297 max_total_mem = mmapped_mem + sbrked_mem;
2299 if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem)
2300 max_total_mem = sbrked_mem;
2303 /* We always land on a page boundary */
2304 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2307 #endif /* DEFINE_MALLOC */
2310 /* Main public routines */
2312 #ifdef DEFINE_MALLOC
2317 The requested size is first converted into a usable form, `nb'.
2318 This currently means to add 4 bytes overhead plus possibly more to
2319 obtain 8-byte alignment and/or to obtain a size of at least
2320 MINSIZE (currently 16 bytes), the smallest allocatable size.
2321 (All fits are considered `exact' if they are within MINSIZE bytes.)
2323 From there, the first of the following steps that succeeds is taken:
2325 1. The bin corresponding to the request size is scanned, and if
2326 a chunk of exactly the right size is found, it is taken.
2328 2. The most recently remaindered chunk is used if it is big
2329 enough. This is a form of (roving) first fit, used only in
2330 the absence of exact fits. Runs of consecutive requests use
2331 the remainder of the chunk used for the previous such request
2332 whenever possible. This limited use of a first-fit style
2333 allocation strategy tends to give contiguous chunks
2334 coextensive lifetimes, which improves locality and can reduce
2335 fragmentation in the long run.
2337 3. Other bins are scanned in increasing size order, using a
2338 chunk big enough to fulfill the request, and splitting off
2339 any remainder. This search is strictly by best-fit; i.e.,
2340 the smallest (with ties going to approximately the least
2341 recently used) chunk that fits is selected.
2343 4. If large enough, the chunk bordering the end of memory
2344 (`top') is split off. (This use of `top' is in accord with
2345 the best-fit search rule. In effect, `top' is treated as
2346 larger (and thus less well fitting) than any other available
2347 chunk since it can be extended to be as large as necessary
2348 (up to system limitations).
2350 5. If the request size meets the mmap threshold and the
2351 system supports mmap, and there are few enough currently
2352 allocated mmapped regions, and a call to mmap succeeds,
2353 the request is allocated via direct memory mapping.
2355 6. Otherwise, the top of memory is extended by
2356 obtaining more space from the system (normally using sbrk,
2357 but definable to anything else via the MORECORE macro).
2358 Memory is gathered from the system (in system page-sized
2359 units) in a way that allows chunks obtained across different
2360 sbrk calls to be consolidated, but does not require
2361 contiguous memory. Thus, it should be safe to intersperse
2362 mallocs with other sbrk calls.
2365 All allocations are made from the `lowest' part of any found
2366 chunk. (The implementation invariant is that prev_inuse is
2367 always true of any allocated chunk; i.e., that each allocated
2368 chunk borders either a previously allocated and still in-use chunk,
2369 or the base of its memory arena.)
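    For a concrete feel of the `nb' conversion (using the 4-byte
    overhead, 8-byte alignment, and 16-byte MINSIZE stated above):
    a request of 20 bytes becomes 20 + 4 = 24, already 8-aligned, so
    nb == 24; a request of 1 byte rounds all the way up to MINSIZE,
    so nb == 16.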
2374 Void_t* mALLOc(RARG size_t bytes)
2376 Void_t* mALLOc(RARG bytes) RDECL size_t bytes;
2379 #ifdef MALLOC_PROVIDED
2385 mchunkptr victim; /* inspected/selected chunk */
2386 INTERNAL_SIZE_T victim_size; /* its size */
2387 int idx; /* index for bin traversal */
2388 mbinptr bin; /* associated bin */
2389 mchunkptr remainder; /* remainder from a split */
2390 long remainder_size; /* its size */
2391 int remainder_index; /* its bin index */
2392 unsigned long block; /* block traverser bit */
2393 int startidx; /* first bin of a traversed block */
2394 mchunkptr fwd; /* misc temp for linking */
2395 mchunkptr bck; /* misc temp for linking */
2396 mbinptr q; /* misc temp */
2400 if ((long)bytes < 0) return 0;
2402 nb = request2size(bytes); /* padded request size; */
2406 /* Check for exact match in a bin */
2408 if (is_small_request(nb)) /* Faster version for small requests */
2410 idx = smallbin_index(nb);
2412 /* No traversal or size check necessary for small bins. */
2417 #if MALLOC_ALIGN != 16
2418 /* Also scan the next one, since it would have a remainder < MINSIZE */
2427 victim_size = chunksize(victim);
2428 unlink(victim, bck, fwd);
2429 set_inuse_bit_at_offset(victim, victim_size);
2430 check_malloced_chunk(victim, nb);
2432 return chunk2mem(victim);
2435 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2440 idx = bin_index(nb);
2443 for (victim = last(bin); victim != bin; victim = victim->bk)
2445 victim_size = chunksize(victim);
2446 remainder_size = long_sub_size_t(victim_size, nb);
2448 if (remainder_size >= (long)MINSIZE) /* too big */
2450 --idx; /* adjust to rescan below after checking last remainder */
2454 else if (remainder_size >= 0) /* exact fit */
2456 unlink(victim, bck, fwd);
2457 set_inuse_bit_at_offset(victim, victim_size);
2458 check_malloced_chunk(victim, nb);
2460 return chunk2mem(victim);
2468 /* Try to use the last split-off remainder */
2470 if ( (victim = last_remainder->fd) != last_remainder)
2472 victim_size = chunksize(victim);
2473 remainder_size = long_sub_size_t(victim_size, nb);
2475 if (remainder_size >= (long)MINSIZE) /* re-split */
2477 remainder = chunk_at_offset(victim, nb);
2478 set_head(victim, nb | PREV_INUSE);
2479 link_last_remainder(remainder);
2480 set_head(remainder, remainder_size | PREV_INUSE);
2481 set_foot(remainder, remainder_size);
2482 check_malloced_chunk(victim, nb);
2484 return chunk2mem(victim);
2487 clear_last_remainder;
2489 if (remainder_size >= 0) /* exhaust */
2491 set_inuse_bit_at_offset(victim, victim_size);
2492 check_malloced_chunk(victim, nb);
2494 return chunk2mem(victim);
2497 /* Else place in bin */
2499 frontlink(victim, victim_size, remainder_index, bck, fwd);
2503 If there are any possibly nonempty big-enough blocks,
2504 search for best fitting chunk by scanning bins in blockwidth units.
2507 if ( (block = idx2binblock(idx)) <= binblocks)
2510 /* Get to the first marked block */
2512 if ( (block & binblocks) == 0)
2514 /* force to an even block boundary */
2515 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2517 while ((block & binblocks) == 0)
2519 idx += BINBLOCKWIDTH;
2524 /* For each possibly nonempty block ... */
2527 startidx = idx; /* (track incomplete blocks) */
2528 q = bin = bin_at(idx);
2530 /* For each bin in this block ... */
2533 /* Find and use first big enough chunk ... */
2535 for (victim = last(bin); victim != bin; victim = victim->bk)
2537 victim_size = chunksize(victim);
2538 remainder_size = long_sub_size_t(victim_size, nb);
2540 if (remainder_size >= (long)MINSIZE) /* split */
2542 remainder = chunk_at_offset(victim, nb);
2543 set_head(victim, nb | PREV_INUSE);
2544 unlink(victim, bck, fwd);
2545 link_last_remainder(remainder);
2546 set_head(remainder, remainder_size | PREV_INUSE);
2547 set_foot(remainder, remainder_size);
2548 check_malloced_chunk(victim, nb);
2550 return chunk2mem(victim);
2553 else if (remainder_size >= 0) /* take */
2555 set_inuse_bit_at_offset(victim, victim_size);
2556 unlink(victim, bck, fwd);
2557 check_malloced_chunk(victim, nb);
2559 return chunk2mem(victim);
2564 bin = next_bin(bin);
2566 #if MALLOC_ALIGN == 16
2567 if (idx < MAX_SMALLBIN)
2569 bin = next_bin(bin);
2573 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2575 /* Clear out the block bit. */
2577 do /* Possibly backtrack to try to clear a partial block */
2579 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2581 binblocks &= ~block;
2586 } while (first(q) == q);
2588 /* Get to the next possibly nonempty block */
2590 if ( (block <<= 1) <= binblocks && (block != 0) )
2592 while ((block & binblocks) == 0)
2594 idx += BINBLOCKWIDTH;
2604 /* Try to use top chunk */
2606 /* Require that there be a remainder, ensuring top always exists */
2607 remainder_size = long_sub_size_t(chunksize(top), nb);
2608 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2612 /* If big and would otherwise need to extend, try to use mmap instead */
2613 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2614 (victim = mmap_chunk(nb)) != 0)
2617 return chunk2mem(victim);
2622 malloc_extend_top(RCALL nb);
2623 remainder_size = long_sub_size_t(chunksize(top), nb);
2624 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2627 return 0; /* propagate failure */
2632 set_head(victim, nb | PREV_INUSE);
2633 top = chunk_at_offset(victim, nb);
2634 set_head(top, remainder_size | PREV_INUSE);
2635 check_malloced_chunk(victim, nb);
2637 return chunk2mem(victim);
2639 #endif /* MALLOC_PROVIDED */
2642 #endif /* DEFINE_MALLOC */
2652 1. free(0) has no effect.
2654 2. If the chunk was allocated via mmap, it is released via munmap().
2656 3. If a returned chunk borders the current high end of memory,
2657 it is consolidated into the top, and if the total unused
2658 topmost memory exceeds the trim threshold, malloc_trim is
2661 4. Other chunks are consolidated as they arrive, and
2662 placed in corresponding bins. (This includes the case of
2663 consolidating with the current `last_remainder').
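    The backward half of this consolidation is the boundary-tag idiom,
    in sketch form (the same macros the code below uses, assuming the
    preceding chunk is free):

      prevsz = p->prev_size;                    /* size from our footer */
      p = chunk_at_offset(p, -((long) prevsz)); /* step back over it    */
      sz += prevsz;
      unlink(p, bck, fwd);                      /* off its old bin      */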
2669 void fREe(RARG Void_t* mem)
2671 void fREe(RARG mem) RDECL Void_t* mem;
2674 #ifdef MALLOC_PROVIDED
2680 mchunkptr p; /* chunk corresponding to mem */
2681 INTERNAL_SIZE_T hd; /* its head field */
2682 INTERNAL_SIZE_T sz; /* its size */
2683 int idx; /* its bin index */
2684 mchunkptr next; /* next contiguous chunk */
2685 INTERNAL_SIZE_T nextsz; /* its size */
2686 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2687 mchunkptr bck; /* misc temp for linking */
2688 mchunkptr fwd; /* misc temp for linking */
2689 int islr; /* track whether merging with last_remainder */
2691 if (mem == 0) /* free(0) has no effect */
2700 if (hd & IS_MMAPPED) /* release mmapped memory. */
2708 check_inuse_chunk(p);
2710 sz = hd & ~PREV_INUSE;
2711 next = chunk_at_offset(p, sz);
2712 nextsz = chunksize(next);
2714 if (next == top) /* merge with top */
2718 if (!(hd & PREV_INUSE)) /* consolidate backward */
2720 prevsz = p->prev_size;
2721 p = chunk_at_offset(p, -((long) prevsz));
2723 unlink(p, bck, fwd);
2726 set_head(p, sz | PREV_INUSE);
2728 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2729 malloc_trim(RCALL top_pad);
2734 set_head(next, nextsz); /* clear inuse bit */
2738 if (!(hd & PREV_INUSE)) /* consolidate backward */
2740 prevsz = p->prev_size;
2741 p = chunk_at_offset(p, -((long) prevsz));
2744 if (p->fd == last_remainder) /* keep as last_remainder */
2747 unlink(p, bck, fwd);
2750 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2754 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2757 link_last_remainder(p);
2760 unlink(next, bck, fwd);
2764 set_head(p, sz | PREV_INUSE);
2767 frontlink(p, sz, idx, bck, fwd);
2771 #endif /* MALLOC_PROVIDED */
2774 #endif /* DEFINE_FREE */
2776 #ifdef DEFINE_REALLOC
2782 Chunks that were obtained via mmap cannot be extended or shrunk
2783 unless HAVE_MREMAP is defined, in which case mremap is used.
2784 Otherwise, if their reallocation is for additional space, they are
2785 copied. If for less, they are just left alone.
2787 Otherwise, if the reallocation is for additional space, and the
2788 chunk can be extended, it is, else a malloc-copy-free sequence is
2789 taken. There are several different ways that a chunk could be
2790 extended. All are tried:
2792 * Extending forward into following adjacent free chunk.
2793 * Shifting backwards, joining preceding adjacent space
2794 * Both shifting backwards and extending forward.
2795 * Extending into newly sbrked space
2797 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2798 size argument of zero (re)allocates a minimum-sized chunk.
2800 If the reallocation is for less space, and the new request is for
2801 a `small' (<512 bytes) size, then the newly unused space is lopped
2804 The old unix realloc convention of allowing the last-free'd chunk
2805 to be used as an argument to realloc is no longer supported.
2806 I don't know of any programs still relying on this feature,
2807 and allowing it would also allow too many other incorrect
2808 usages of realloc to be sensible.
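    Caller-level example (ordinary ANSI usage, nothing specific to
    this implementation):

      int* a = (int*) malloc(10 * sizeof(int));
      int* b = (int*) realloc(a, 100 * sizeof(int)); /* may move */
      if (b == 0)
        free(a);      /* on failure the old block is still valid */
      else
        a = b;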
2815 Void_t* rEALLOc(RARG Void_t* oldmem, size_t bytes)
2817 Void_t* rEALLOc(RARG oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
2820 #ifdef MALLOC_PROVIDED
2822 return realloc (oldmem, bytes);
2826 INTERNAL_SIZE_T nb; /* padded request size */
2828 mchunkptr oldp; /* chunk corresponding to oldmem */
2829 INTERNAL_SIZE_T oldsize; /* its size */
2831 mchunkptr newp; /* chunk to return */
2832 INTERNAL_SIZE_T newsize; /* its size */
2833 Void_t* newmem; /* corresponding user mem */
2835 mchunkptr next; /* next contiguous chunk after oldp */
2836 INTERNAL_SIZE_T nextsize; /* its size */
2838 mchunkptr prev; /* previous contiguous chunk before oldp */
2839 INTERNAL_SIZE_T prevsize; /* its size */
2841 mchunkptr remainder; /* holds split off extra space from newp */
2842 INTERNAL_SIZE_T remainder_size; /* its size */
2844 mchunkptr bck; /* misc temp for linking */
2845 mchunkptr fwd; /* misc temp for linking */
2847 #ifdef REALLOC_ZERO_BYTES_FREES
2848 if (bytes == 0) { fREe(RCALL oldmem); return 0; }
2851 if ((long)bytes < 0) return 0;
2853 /* realloc of null is supposed to be same as malloc */
2854 if (oldmem == 0) return mALLOc(RCALL bytes);
2858 newp = oldp = mem2chunk(oldmem);
2859 newsize = oldsize = chunksize(oldp);
2862 nb = request2size(bytes);
2865 if (chunk_is_mmapped(oldp))
2868 newp = mremap_chunk(oldp, nb);
2872 return chunk2mem(newp);
2875 /* Note the extra SIZE_SZ overhead. */
2876 if(oldsize - SIZE_SZ >= nb)
2879 return oldmem; /* do nothing */
2881 /* Must alloc, copy, free. */
2882 newmem = mALLOc(RCALL bytes);
2886 return 0; /* propagate failure */
2888 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2895 check_inuse_chunk(oldp);
2897 if ((long)(oldsize) < (long)(nb))
2900 /* Try expanding forward */
2902 next = chunk_at_offset(oldp, oldsize);
2903 if (next == top || !inuse(next))
2905 nextsize = chunksize(next);
2907 /* Forward into top only if a remainder */
2910 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2912 newsize += nextsize;
2913 top = chunk_at_offset(oldp, nb);
2914 set_head(top, (newsize - nb) | PREV_INUSE);
2915 set_head_size(oldp, nb);
2917 return chunk2mem(oldp);
2921 /* Forward into next chunk */
2922 else if (((long)(nextsize + newsize) >= (long)(nb)))
2924 unlink(next, bck, fwd);
2925 newsize += nextsize;
2935 /* Try shifting backwards. */
2937 if (!prev_inuse(oldp))
2939 prev = prev_chunk(oldp);
2940 prevsize = chunksize(prev);
2942 /* try forward + backward first to save a later consolidation */
2949 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2951 unlink(prev, bck, fwd);
2953 newsize += prevsize + nextsize;
2954 newmem = chunk2mem(newp);
2955 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2956 top = chunk_at_offset(newp, nb);
2957 set_head(top, (newsize - nb) | PREV_INUSE);
2958 set_head_size(newp, nb);
2964 /* into next chunk */
2965 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2967 unlink(next, bck, fwd);
2968 unlink(prev, bck, fwd);
2970 newsize += nextsize + prevsize;
2971 newmem = chunk2mem(newp);
2972 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2978 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2980 unlink(prev, bck, fwd);
2982 newsize += prevsize;
2983 newmem = chunk2mem(newp);
2984 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2991 newmem = mALLOc (RCALL bytes);
2993 if (newmem == 0) /* propagate failure */
2999 /* Avoid copy if newp is next chunk after oldp. */
3000 /* (This can only happen when new chunk is sbrk'ed.) */
3002 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
3004 newsize += chunksize(newp);
3009 /* Otherwise copy, free, and exit */
3010 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
3017 split: /* split off extra room in old or expanded chunk */
3019 remainder_size = long_sub_size_t(newsize, nb);
3021 if (remainder_size >= (long)MINSIZE) /* split off remainder */
3023 remainder = chunk_at_offset(newp, nb);
3024 set_head_size(newp, nb);
3025 set_head(remainder, remainder_size | PREV_INUSE);
3026 set_inuse_bit_at_offset(remainder, remainder_size);
3027 fREe(RCALL chunk2mem(remainder)); /* let free() deal with it */
3031 set_head_size(newp, newsize);
3032 set_inuse_bit_at_offset(newp, newsize);
3035 check_inuse_chunk(newp);
3037 return chunk2mem(newp);
3039 #endif /* MALLOC_PROVIDED */
3042 #endif /* DEFINE_REALLOC */
3044 #ifdef DEFINE_MEMALIGN
3050 memalign requests more than enough space from malloc, finds a spot
3051 within that chunk that meets the alignment request, and then
3052 possibly frees the leading and trailing space.
3054 The alignment argument must be a power of two. This property is not
3055 checked by memalign, so misuse may result in random runtime errors.
3057 8-byte alignment is guaranteed by normal malloc calls, so don't
3058 bother calling memalign with an argument of 8 or less.
3060 Overreliance on memalign is a sure way to fragment space.
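    Example (illustrative): to get 1000 usable bytes on a 64-byte
    boundary,

      char* p = (char*) memalign(64, 1000);  /* 64 is a power of two */
      assert(p == 0 || ((unsigned long) p % 64) == 0);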
3066 Void_t* mEMALIGn(RARG size_t alignment, size_t bytes)
3068 Void_t* mEMALIGn(RARG alignment, bytes) RDECL size_t alignment; size_t bytes;
3071 INTERNAL_SIZE_T nb; /* padded request size */
3072 char* m; /* memory returned by malloc call */
3073 mchunkptr p; /* corresponding chunk */
3074 char* brk; /* alignment point within p */
3075 mchunkptr newp; /* chunk to return */
3076 INTERNAL_SIZE_T newsize; /* its size */
3077 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3078 mchunkptr remainder; /* spare room at end to split off */
3079 long remainder_size; /* its size */
3081 if ((long)bytes < 0) return 0;
3083 /* If need less alignment than we give anyway, just relay to malloc */
3085 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(RCALL bytes);
3087 /* Otherwise, ensure that it is at least a minimum chunk size */
3089 if (alignment < MINSIZE) alignment = MINSIZE;
3091 /* Call malloc with worst case padding to hit alignment. */
3093 nb = request2size(bytes);
3094 m = (char*)(mALLOc(RCALL nb + alignment + MINSIZE));
3096 if (m == 0) return 0; /* propagate failure */
3102 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3105 if(chunk_is_mmapped(p))
3108 return chunk2mem(p); /* nothing more to do */
3112 else /* misaligned */
3115 Find an aligned spot inside chunk.
3116 Since we need to give back leading space in a chunk of at
3117 least MINSIZE, if the first calculation places us at
3118 a spot with less than MINSIZE leader, we can move to the
3119 next aligned spot -- we've allocated enough total room so that
3120 this is always possible.
3123 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
3124 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
3126 newp = (mchunkptr)brk;
3127 leadsize = brk - (char*)(p);
3128 newsize = chunksize(p) - leadsize;
3131 if(chunk_is_mmapped(p))
3133 newp->prev_size = p->prev_size + leadsize;
3134 set_head(newp, newsize|IS_MMAPPED);
3136 return chunk2mem(newp);
3140 /* give back leader, use the rest */
3142 set_head(newp, newsize | PREV_INUSE);
3143 set_inuse_bit_at_offset(newp, newsize);
3144 set_head_size(p, leadsize);
3145 fREe(RCALL chunk2mem(p));
3148 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3151 /* Also give back spare room at the end */
3153 remainder_size = long_sub_size_t(chunksize(p), nb);
3155 if (remainder_size >= (long)MINSIZE)
3157 remainder = chunk_at_offset(p, nb);
3158 set_head(remainder, remainder_size | PREV_INUSE);
3159 set_head_size(p, nb);
3160 fREe(RCALL chunk2mem(remainder));
3163 check_inuse_chunk(p);
3165 return chunk2mem(p);
3169 #endif /* DEFINE_MEMALIGN */
3171 #ifdef DEFINE_VALLOC
3174 valloc just invokes memalign with alignment argument equal
3175 to the page size of the system (or as near to this as can
3176 be figured out from all the includes/defines above.)
3180 Void_t* vALLOc(RARG size_t bytes)
3182 Void_t* vALLOc(RARG bytes) RDECL size_t bytes;
3185 return mEMALIGn (RCALL malloc_getpagesize, bytes);
3188 #endif /* DEFINE_VALLOC */
3190 #ifdef DEFINE_PVALLOC
3193 pvalloc just invokes valloc for the nearest pagesize
3194 that will accommodate the request
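    (E.g. with 4096-byte pages, pvalloc(1) behaves like
    memalign(4096, 4096): one whole page, page-aligned.)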
3199 Void_t* pvALLOc(RARG size_t bytes)
3201 Void_t* pvALLOc(RARG bytes) RDECL size_t bytes;
3204 size_t pagesize = malloc_getpagesize;
3205 return mEMALIGn (RCALL pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3208 #endif /* DEFINE_PVALLOC */
3210 #ifdef DEFINE_CALLOC
3214 calloc calls malloc, then zeroes out the allocated chunk.
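    Note that the multiplication below (n * elem_size) is not checked
    for overflow in this version; a caller-side guard (illustrative
    only) would be:

      if (elem_size != 0 && n > (size_t) -1 / elem_size)
        return 0;   /* n * elem_size would wrap around */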
3219 Void_t* cALLOc(RARG size_t n, size_t elem_size)
3221 Void_t* cALLOc(RARG n, elem_size) RDECL size_t n; size_t elem_size;
3225 INTERNAL_SIZE_T csz;
3227 INTERNAL_SIZE_T sz = n * elem_size;
3231 INTERNAL_SIZE_T oldtopsize;
3236 /* check if expand_top called, in which case don't need to clear */
3240 oldtopsize = chunksize(top);
3243 if ((long)n < 0) return 0; /* validate n before allocating */
3245 mem = mALLOc (RCALL sz);
3258 /* Two optional cases in which clearing not necessary */
3262 if (chunk_is_mmapped(p))
3274 if (p == oldtop && csz > oldtopsize)
3276 /* clear only the bytes from non-freshly-sbrked memory */
3282 MALLOC_ZERO(mem, csz - SIZE_SZ);
3287 #endif /* DEFINE_CALLOC */
3293 cfree just calls free. It is needed/defined on some systems
3294 that pair it with calloc, presumably for odd historical reasons.
3298 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3299 #if !defined(INTERNAL_NEWLIB) || !defined(_REENT_ONLY)
3301 void cfree(Void_t *mem)
3303 void cfree(mem) Void_t *mem;
3306 #ifdef INTERNAL_NEWLIB
3315 #endif /* DEFINE_CFREE */
3321 Malloc_trim gives memory back to the system (via negative
3322 arguments to sbrk) if there is unused memory at the `high' end of
3323 the malloc pool. You can call this after freeing large blocks of
3324 memory to potentially reduce the system-level memory requirements
3325 of a program. However, it cannot guarantee to reduce memory. Under
3326 some allocation patterns, some large free blocks of memory will be
3327 locked between two used chunks, so they cannot be given back to
3330 The `pad' argument to malloc_trim represents the amount of free
3331 trailing space to leave untrimmed. If this argument is zero,
3332 only the minimum amount of memory to maintain internal data
3333 structures will be left (one page or less). Non-zero arguments
3334 can be supplied to maintain enough trailing space to service
3335 future expected allocations without having to re-obtain memory
3338 Malloc_trim returns 1 if it actually released any memory, else 0.
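    Example (illustrative): after freeing a burst of large blocks,

      if (malloc_trim(0))
        /* some unused topmost memory was returned to the system */ ;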
3343 int malloc_trim(RARG size_t pad)
3345 int malloc_trim(RARG pad) RDECL size_t pad;
3348 long top_size; /* Amount of top-most memory */
3349 long extra; /* Amount to release */
3350 char* current_brk; /* address returned by pre-check sbrk call */
3351 char* new_brk; /* address returned by negative sbrk call */
3353 unsigned long pagesz = malloc_getpagesize;
3357 top_size = chunksize(top);
3358 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3360 if (extra < (long)pagesz) /* Not enough memory to release */
3368 /* Test to make sure no one else called sbrk */
3369 current_brk = (char*)(MORECORE (0));
3370 if (current_brk != (char*)(top) + top_size)
3373 return 0; /* Apparently we don't own memory; must fail */
3378 new_brk = (char*)(MORECORE (-extra));
3380 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3382 /* Try to figure out what we have */
3383 current_brk = (char*)(MORECORE (0));
3384 top_size = current_brk - (char*)top;
3385 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3387 sbrked_mem = current_brk - sbrk_base;
3388 set_head(top, top_size | PREV_INUSE);
3397 /* Success. Adjust top accordingly. */
3398 set_head(top, (top_size - extra) | PREV_INUSE);
3399 sbrked_mem -= extra;
3408 #endif /* DEFINE_FREE */
3410 #ifdef DEFINE_MALLOC_USABLE_SIZE
3415 This routine tells you how many bytes you can actually use in an
3416 allocated chunk, which may be more than you requested (although
3417 often not). You can use this many bytes without worrying about
3418 overwriting other allocated objects. Not a particularly great
3419 programming practice, but still sometimes useful.
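    Example (illustrative):

      char* p = (char*) malloc(10);
      size_t cap = malloc_usable_size(p);  /* >= 10 if p != 0 */
      /* all cap bytes starting at p may be written safely */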
3424 size_t malloc_usable_size(RARG Void_t* mem)
3426 size_t malloc_usable_size(RARG mem) RDECL Void_t* mem;
3435 if(!chunk_is_mmapped(p))
3437 if (!inuse(p)) return 0;
3440 check_inuse_chunk(p);
3443 return chunksize(p) - SIZE_SZ;
3445 return chunksize(p) - 2*SIZE_SZ;
3449 #endif /* DEFINE_MALLOC_USABLE_SIZE */
3451 #ifdef DEFINE_MALLINFO
3453 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3455 STATIC void malloc_update_mallinfo()
3464 INTERNAL_SIZE_T avail = chunksize(top);
3465 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3467 for (i = 1; i < NAV; ++i)
3470 for (p = last(b); p != b; p = p->bk)
3473 check_free_chunk(p);
3474 for (q = next_chunk(p);
3475 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3477 check_inuse_chunk(q);
3479 avail += chunksize(p);
3484 current_mallinfo.ordblks = navail;
3485 current_mallinfo.uordblks = sbrked_mem - avail;
3486 current_mallinfo.fordblks = avail;
3488 current_mallinfo.hblks = n_mmaps;
3489 current_mallinfo.hblkhd = mmapped_mem;
3491 current_mallinfo.keepcost = chunksize(top);
3495 #else /* ! DEFINE_MALLINFO */
3498 extern void malloc_update_mallinfo(void);
3500 extern void malloc_update_mallinfo();
3503 #endif /* ! DEFINE_MALLINFO */
3505 #ifdef DEFINE_MALLOC_STATS
3511 Prints on stderr the amount of space obtained from the system (both
3512 via sbrk and mmap), the maximum amount (which may be more than
3513 current if malloc_trim and/or munmap got called), the maximum
3514 number of simultaneous mmap regions used, and the current number
3515 of bytes allocated via malloc (or realloc, etc) but not yet
3516 freed. (Note that this is the number of bytes allocated, not the
3517 number requested. It will be larger than the number requested
3518 because of alignment and bookkeeping overhead.)
3523 void malloc_stats(RONEARG)
3525 void malloc_stats(RONEARG) RDECL
3528 unsigned long local_max_total_mem;
3529 int local_sbrked_mem;
3530 struct mallinfo local_mallinfo;
3532 unsigned long local_mmapped_mem, local_max_n_mmaps;
3537 malloc_update_mallinfo();
3538 local_max_total_mem = max_total_mem;
3539 local_sbrked_mem = sbrked_mem;
3540 local_mallinfo = current_mallinfo;
3542 local_mmapped_mem = mmapped_mem;
3543 local_max_n_mmaps = max_n_mmaps;
3547 #ifdef INTERNAL_NEWLIB
3548 fp = _stderr_r(reent_ptr);
3549 #define fprintf fiprintf
3554 fprintf(fp, "max system bytes = %10u\n",
3555 (unsigned int)(local_max_total_mem));
3557 fprintf(fp, "system bytes = %10u\n",
3558 (unsigned int)(local_sbrked_mem + local_mmapped_mem));
3559 fprintf(fp, "in use bytes = %10u\n",
3560 (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
3562 fprintf(fp, "system bytes = %10u\n",
3563 (unsigned int)local_sbrked_mem);
3564 fprintf(fp, "in use bytes = %10u\n",
3565 (unsigned int)local_mallinfo.uordblks);
3568 fprintf(fp, "max mmap regions = %10u\n",
3569 (unsigned int)local_max_n_mmaps);
3573 #endif /* DEFINE_MALLOC_STATS */
3575 #ifdef DEFINE_MALLINFO
3578 mallinfo returns a copy of updated current mallinfo.
3582 struct mallinfo mALLINFo(RONEARG)
3584 struct mallinfo mALLINFo(RONEARG) RDECL
3587 struct mallinfo ret;
3590 malloc_update_mallinfo();
3591 ret = current_mallinfo;
3596 #endif /* DEFINE_MALLINFO */
3598 #ifdef DEFINE_MALLOPT
3603 mallopt is the general SVID/XPG interface to tunable parameters.
3604 The format is to provide a (parameter-number, parameter-value) pair.
3605 mallopt then sets the corresponding parameter to the argument
3606 value if it can (i.e., so long as the value is meaningful),
3607 and returns 1 if successful else 0.
3609 See descriptions of tunable parameters above.
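    Example (illustrative, using the parameter names handled below):

      mallopt(M_TRIM_THRESHOLD, 256*1024);  /* release top less eagerly */
      mallopt(M_MMAP_THRESHOLD, 1024*1024); /* mmap only 1MB+ requests  */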
3614 int mALLOPt(RARG int param_number, int value)
3616 int mALLOPt(RARG param_number, value) RDECL int param_number; int value;
3620 switch(param_number)
3622 case M_TRIM_THRESHOLD:
3623 trim_threshold = value; MALLOC_UNLOCK; return 1;
3625 top_pad = value; MALLOC_UNLOCK; return 1;
3626 case M_MMAP_THRESHOLD:
3628 mmap_threshold = value;
3634 n_mmaps_max = value; MALLOC_UNLOCK; return 1;
3636 MALLOC_UNLOCK; return value == 0;
3645 #endif /* DEFINE_MALLOPT */
3651 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
3652 * return null for negative arguments
3653 * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3654 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3655 (e.g. WIN32 platforms)
3656 * Cleanup up header file inclusion for WIN32 platforms
3657 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3658 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3659 memory allocation routines
3660 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3661 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3662 usage of 'assert' in non-WIN32 code
3663 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3665 * Always call 'fREe()' rather than 'free()'
3667 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3668 * Fixed ordering problem with boundary-stamping
3670 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3671 * Added pvalloc, as recommended by H.J. Liu
3672 * Added 64bit pointer support mainly from Wolfram Gloger
3673 * Added anonymously donated WIN32 sbrk emulation
3674 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3675 * malloc_extend_top: fix mask error that caused wastage after
3677 * Add linux mremap support code from HJ Liu
3679 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3680 * Integrated most documentation with the code.
3681 * Add support for mmap, with help from
3682 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3683 * Use last_remainder in more cases.
3684 * Pack bins using idea from colin@nyx10.cs.du.edu
3685 * Use ordered bins instead of best-fit threshold
3686 * Eliminate block-local decls to simplify tracing and debugging.
3687 * Support another case of realloc via move into top
3688 * Fix error occurring when initial sbrk_base not word-aligned.
3689 * Rely on page size for units instead of SBRK_UNIT to
3690 avoid surprises about sbrk alignment conventions.
3691 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3692 (raymond@es.ele.tue.nl) for the suggestion.
3693 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3694 * More precautions for cases where other routines call sbrk,
3695 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3696 * Added macros etc., allowing use in linux libc from
3697 H.J. Lu (hjl@gnu.ai.mit.edu)
3698 * Inverted this history list
3700 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3701 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3702 * Removed all preallocation code since under current scheme
3703 the work required to undo bad preallocations exceeds
3704 the work saved in good cases for most test programs.
3705 * No longer use return list or unconsolidated bins since
3706 no scheme using them consistently outperforms those that don't
3707 given above changes.
3708 * Use best fit for very large chunks to prevent some worst-cases.
3709 * Added some support for debugging
3711 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3712 * Removed footers when chunks are in use. Thanks to
3713 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3715 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3716 * Added malloc_trim, with help from Wolfram Gloger
3717 (wmglo@Dent.MED.Uni-Muenchen.DE).
3719 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3721 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3722 * realloc: try to expand in both directions
3723 * malloc: swap order of clean-bin strategy;
3724 * realloc: only conditionally expand backwards
3725 * Try not to scavenge used bins
3726 * Use bin counts as a guide to preallocation
3727 * Occasionally bin return list chunks in first scan
3728 * Add a few optimizations from colin@nyx10.cs.du.edu
3730 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3731 * faster bin computation & slightly different binning
3732 * merged all consolidations to one part of malloc proper
3733 (eliminating old malloc_find_space & malloc_clean_bin)
3734 * Scan 2 returns chunks (not just 1)
3735 * Propagate failure in realloc if malloc returns 0
3736 * Add stuff to allow compilation on non-ANSI compilers
3737 from kpv@research.att.com
3739 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3740 * removed potential for odd address access in prev_chunk
3741 * removed dependency on getpagesize.h
3742 * misc cosmetics and a bit more internal documentation
3743 * anticosmetics: mangled names in macros to evade debugger strangeness
3744 * tested on sparc, hp-700, dec-mips, rs6000
3745 with gcc & native cc (hp, dec only) allowing
3746 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3748 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3749 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3750 structure of old version, but most details differ.)