1 /* ---------- To make a malloc.h, start cutting here ------------ */
/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain. Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu
8 * VERSION 2.6.4 Thu Nov 28 07:54:55 1996 Doug Lea (dl at gee)
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://g.oswego.edu/pub/misc/malloc.c
12 Check before installing!
14 * Why use this malloc?
16 This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However, it is among the fastest
18 while also being among the most space-conserving, portable and tunable.
19 Consistent balance across these factors results in a good general-purpose
20 allocator. For a high-level description, see
21 http://g.oswego.edu/dl/html/malloc.html
23 * Synopsis of public routines
25 (Much fuller descriptions are contained in the program documentation below.)
  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
29 if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
32 realloc(Void_t* p, size_t n);
33 Return a pointer to a chunk of size n that contains the same data
34 as does chunk p up to the minimum of (n, p's size) bytes, or null
35 if no space is available. The returned pointer may or may not be
36 the same as p. If p is null, equivalent to malloc. Unless the
37 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
38 size argument of zero (re)allocates a minimum-sized chunk.
39 memalign(size_t alignment, size_t n);
40 Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
45 size of the system (or as near to this as can be figured out from
46 all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
49 round up n to nearest pagesize.
50 calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
55 malloc_trim(size_t pad);
56 Release all but pad bytes of freed top-most memory back
57 to the system. Return 1 if successful, else 0.
58 malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
60 chunk p. This may or may not report more bytes than were requested,
61 due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
66 mallopt(int parameter_number, int parameter_value)
67 Changes one of the tunable parameters described below. Returns
68 1 if successful in changing the parameter, else 0.
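  As a brief usage sketch (illustrative only; not part of the original
  documentation), a program linked against this malloc could do:

      void* p = malloc(100);
      if (p != 0)
      {
        void* q = realloc(p, 200);
        if (q != 0) p = q;
      }
      free(p);
      malloc_trim(0);

  On realloc failure the original block is left intact, so the sketch
  keeps p valid; free(p) is safe even if p is null, and the final
  malloc_trim(0) asks that all unused top-most memory be given back to
  the system.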
* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design. This
74 seems to suffice for all current machines and C compilers.
76 Assumed pointer representation: 4 or 8 bytes
77 Code for 8-byte pointers is untested by me but has worked
78 reliably by Wolfram Gloger, who contributed most of the
79 changes supporting this.
81 Assumed size_t representation: 4 or 8 bytes
82 Note that size_t is allowed to be 4 bytes even if pointers are 8.
84 Minimum overhead per allocated chunk: 4 or 8 bytes
85 Each malloced chunk has a hidden overhead of 4 bytes holding size
86 and status information.
88 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
91 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
92 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
93 needed; 4 (8) for a trailing size field
94 and 8 (16) bytes for free list pointers. Thus, the minimum
95 allocatable size is 16/24/32 bytes.
97 Even a request for zero bytes (i.e., malloc(0)) returns a
98 pointer to something of the minimum allocatable size.
100 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
101 8-byte size_t: 2^63 - 16 bytes
103 It is assumed that (possibly signed) size_t bit values suffice to
104 represent chunk sizes. `Possibly signed' is due to the fact
105 that `size_t' may be defined on a system as either a signed or
106 an unsigned type. To be conservative, values that would appear
107 as negative numbers are avoided.
       Requests for sizes with a negative sign bit will return a
       minimum-sized chunk.
111 Maximum overhead wastage per allocated chunk: normally 15 bytes
       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
117 1. Because requests for zero bytes allocate non-zero space,
118 the worst case wastage for a request of zero bytes is 24 bytes.
119 2. For requests >= mmap_threshold that are serviced via
120 mmap(), the worst case wastage is 8 bytes plus the remainder
121 from a system page (the minimal mmap unit); typically 4096 bytes.
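  To make the normal case concrete (illustrative arithmetic, assuming
  4-byte size_t): malloc(1) needs 1 byte plus 4 bytes of overhead, which
  rounds up to the 16-byte minimum chunk, wasting 15 bytes; malloc(20)
  needs 20 + 4 = 24 bytes, already 8-byte aligned, so only 4 extra bytes
  are consumed.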
125 Here are some features that are NOT currently supported
127 * No user-definable hooks for callbacks and the like.
128 * No automated mechanism for fully checking that all accesses
129 to malloced memory stay within their bounds.
130 * No support for compaction.
132 * Synopsis of compile-time options:
134 People have reported using previous versions of this malloc on all
135 versions of Unix, sometimes by tweaking some of the defines
136 below. It has been tested most extensively on Solaris and
137 Linux. It is also reported to work on WIN32 platforms.
138 People have also reported adapting this malloc for use in
139 stand-alone embedded systems.
141 The implementation is in straight, hand-tuned ANSI C. Among other
142 consequences, it uses a lot of macros. Because of this, to be at
143 all usable, this code should be compiled using an optimizing compiler
  (for example gcc -O2) that can simplify expressions and control
  paths.
147 __STD_C (default: derived from C compiler defines)
148 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
149 a C compiler sufficiently close to ANSI to get away with it.
150 DEBUG (default: NOT defined)
151 Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
154 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
155 Define this if you think that realloc(p, 0) should be equivalent
156 to free(p). Otherwise, since malloc returns a unique pointer for
157 malloc(0), so does realloc(p, 0).
158 HAVE_MEMCPY (default: defined)
159 Define if you are not otherwise using ANSI STD C, but still
160 have memcpy and memset in your C library and want to use them.
161 Otherwise, simple internal versions are supplied.
162 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
163 Define as 1 if you want the C library versions of memset and
164 memcpy called in realloc and calloc (otherwise macro versions are used).
165 At least on some platforms, the simple macro versions usually
166 outperform libc versions.
167 HAVE_MMAP (default: defined as 1)
168 Define to non-zero to optionally make malloc() use mmap() to
169 allocate very large blocks.
170 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
171 Define to non-zero to optionally make realloc() use mremap() to
172 reallocate very large blocks.
173 malloc_getpagesize (default: derived from system #includes)
174 Either a constant or routine call returning the system page size.
175 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
176 Optionally define if you are on a system with a /usr/include/malloc.h
177 that declares struct mallinfo. It is not at all necessary to
178 define this even if you do, but will ensure consistency.
179 INTERNAL_SIZE_T (default: size_t)
180 Define to a 32-bit type (probably `unsigned int') if you are on a
181 64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very large chunks.
184 INTERNAL_LINUX_C_LIB (default: NOT defined)
185 Defined only when compiled as part of Linux libc.
186 Also note that there is some odd internal name-mangling via defines
187 (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
190 WIN32 (default: undefined)
191 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
192 LACKS_UNISTD_H (default: undefined)
193 Define this if your system does not have a <unistd.h>.
194 MORECORE (default: sbrk)
195 The name of the routine to call to obtain more memory from the system.
196 MORECORE_FAILURE (default: -1)
197 The value returned upon failure of MORECORE.
198 MORECORE_CLEARS (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
205 Default values of tunable parameters (described in detail below)
206 controlling interaction with host system routines (sbrk, mmap, etc).
207 These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
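  For example, a standalone build might look like the following (an
  illustrative command line, not mandated by this file):

      gcc -O2 -DINTERNAL_SIZE_T="unsigned int" -c malloc.c

  which also selects the space-saving 32-bit internal size type
  described above.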
*/

#ifdef __cplusplus
extern "C" {
#endif /*__cplusplus*/
240 #include <stddef.h> /* for size_t */
242 #include <sys/types.h>
249 #include <stdio.h> /* needed for malloc_stats */
260 Because freed chunks may be overwritten with link fields, this
261 malloc will often die when freed memory is overwritten by user
262 programs. This can be very effective (albeit in an annoying way)
263 in helping track down dangling pointers.
265 If you compile with -DDEBUG, a number of assertion checks are
266 enabled that will catch more memory errors. You probably won't be
267 able to make much sense of the actual assertion errors, but they
268 should help you locate incorrectly overwritten memory. The
269 checking is fairly extensive, and will slow down execution
270 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
271 attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
273 cannot be checked very much automatically.)
275 Setting DEBUG may also be helpful if you are trying to modify
276 this code. The assertions in the check routines spell out in more
277 detail the assumptions and invariants underlying the algorithms.
#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif
289 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
290 of chunk sizes. On a 64-bit machine, you can reduce malloc
291 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
292 at the expense of not being able to handle requests greater than
293 2^31. This limitation is hardly ever a concern; you are encouraged
294 to set this. However, the default version is the same as size_t.
297 #ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif
302 REALLOC_ZERO_BYTES_FREES should be set if a call to
303 realloc with zero bytes should be the same as a call to free.
304 Some people think it should. Otherwise, since this malloc
305 returns a unique pointer for malloc(0), so does realloc(p, 0).
309 /* #define REALLOC_ZERO_BYTES_FREES */
313 WIN32 causes an emulation of sbrk to be compiled in
314 mmap-based options are not currently supported in WIN32.
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0
#endif
325 HAVE_MEMCPY should be defined if you are not otherwise using
326 ANSI STD C, but still have memcpy and memset in your C library
327 and want to use them in calloc and realloc. Otherwise simple
328 macro versions are defined here.
330 USE_MEMCPY should be defined as 1 if you actually want to
331 have memset and memcpy called. People report that the macro
332 versions are often enough faster than libc versions on many
333 systems that it is better to use them.
#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
360 /* The following macros are only invoked with (2n+1)-multiples of
361 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
362 for fast inline execution when n is small. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
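/*
  An illustrative trace (not part of the original sources): with 4-byte
  INTERNAL_SIZE_T, MALLOC_COPY(d, s, 28) has mcsz == 7*sizeof(mcsz), so
  the 5* and 7* branches each copy two words, the 9* branch is skipped,
  and the three unconditional copies finish the job: 7 words moved in
  straight-line code with no loop overhead.
*/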
399 #else /* !USE_MEMCPY */
401 /* Use Duff's device for good zeroing/copying performance. */
#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)
#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)
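/*
  An illustrative trace of the device (not from the original sources),
  again with 4-byte INTERNAL_SIZE_T: MALLOC_COPY(d, s, 44) gives
  mctmp = 11, hence mcn = 1 and mctmp becomes 3. The switch jumps into
  the loop at case 3, copying 3 words; the for(;;) then wraps around
  once, copying a full run of 8 words before breaking, for 11 words in
  all. The switch supplies the odd remainder, the loop the full runs.
*/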
442 Define HAVE_MMAP to optionally make malloc() use mmap() to
443 allocate very large blocks. These will be returned to the
444 operating system immediately after a free().
452 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
453 large blocks. This is currently only possible on Linux with
454 kernel versions newer than 1.3.77.
458 #ifdef INTERNAL_LINUX_C_LIB
459 #define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#if HAVE_MMAP

#include <fcntl.h>
#include <sys/mman.h>
471 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif
475 #endif /* HAVE_MMAP */
478 Access to system page size. To the extent possible, this malloc
479 manages memory from the system in page-size units.
481 The following mechanics for getpagesize were adapted from
482 bsd/gnu getpagesize.h
#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  elif defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
     extern size_t getpagesize();
#    define malloc_getpagesize getpagesize()
#  else
#    include <sys/param.h>
#    ifdef EXEC_PAGESIZE
#      define malloc_getpagesize EXEC_PAGESIZE
#    elif defined(NBPG)
#      ifndef CLSIZE
#        define malloc_getpagesize NBPG
#      else
#        define malloc_getpagesize (NBPG * CLSIZE)
#      endif
#    elif defined(NBPC)
#      define malloc_getpagesize NBPC
#    elif defined(PAGESIZE)
#      define malloc_getpagesize PAGESIZE
#    else
#      define malloc_getpagesize (4096) /* just guess */
#    endif
#  endif
#endif
532 This version of malloc supports the standard SVID/XPG mallinfo
533 routine that returns a struct containing the same kind of
534 information you can get from malloc_stats. It should work on
535 any SVID/XPG compliant system that has a /usr/include/malloc.h
536 defining struct mallinfo. (If you'd like to install such a thing
537 yourself, cut out the preliminary declarations as described above
538 and below and save them in a malloc.h file. But there's no
539 compelling reason to bother to do this.)
541 The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
543 bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
545 mallinfo() with other numbers that might possibly be of interest.
547 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
548 /usr/include/malloc.h file that includes a declaration of struct
549 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.
555 /* #define HAVE_USR_INCLUDE_MALLOC_H */
557 #if HAVE_USR_INCLUDE_MALLOC_H
558 #include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
564 int arena; /* total space allocated from system */
565 int ordblks; /* number of non-inuse chunks */
566 int smblks; /* unused -- always zero */
567 int hblks; /* number of mmapped regions */
568 int hblkhd; /* total space in mmapped regions */
569 int usmblks; /* unused -- always zero */
570 int fsmblks; /* unused -- always zero */
571 int uordblks; /* total allocated space */
572 int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
576 /* SVID2/XPG mallopt options */
578 #define M_MXFAST 1 /* UNUSED in this malloc */
579 #define M_NLBLKS 2 /* UNUSED in this malloc */
580 #define M_GRAIN 3 /* UNUSED in this malloc */
581 #define M_KEEP 4 /* UNUSED in this malloc */
585 /* mallopt options that actually do something */
#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
589 #define M_MMAP_THRESHOLD -3
590 #define M_MMAP_MAX -4
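/*
  An illustrative tuning sequence (not part of the original sources):

      mallopt(M_TRIM_THRESHOLD,  256*1024);
      mallopt(M_TOP_PAD,          64*1024);
      mallopt(M_MMAP_THRESHOLD, 1024*1024);
      mallopt(M_MMAP_MAX,        32);

  Each call returns 1 if the parameter was changed, else 0.
*/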
594 #ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif
599 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
600 to keep before releasing via malloc_trim in free().
602 Automatic trimming is mainly useful in long-lived programs.
603 Because trimming via sbrk can be slow on some systems, and can
604 sometimes be wasteful (in cases where programs immediately
605 afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing this much memory.
609 The trim threshold and the mmap control parameters (see below)
610 can be traded off with one another. Trimming and mmapping are
611 two different ways of releasing unused memory back to the
612 system. Between these two, it is often possible to keep
613 system-level demands of a long-lived program down to a bare
614 minimum. For example, in one test suite of sessions measuring
615 the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.
619 If you are using this malloc in a long-lived program, it should
620 pay to experiment with these values. As a rough guide, you
621 might set to a value close to the average size of a process
622 (program) running on your system. Releasing this much memory
623 would allow such a process to run in memory. Generally, it's
      worth it to tune for trimming rather than memory mapping when a
625 program undergoes phases where several large chunks are
626 allocated and released in ways that can reuse each other's
627 storage, perhaps mixed with phases where there are no such
628 chunks at all. And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      routines can lead to similar results.
632 However, in most programs, these parameters serve mainly as
633 protection against the system-level effects of carrying around
634 massive amounts of unneeded memory. Since frequent calls to
635 sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.
639 The default trim value is high enough to cause trimming only in
640 fairly extreme (by current memory consumption standards) cases.
641 It must be greater than page size to have any useful effect. To
642 disable trimming completely, you can set to (unsigned long)(-1);
648 #ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif
653 M_TOP_PAD is the amount of extra `padding' space to allocate or
654 retain whenever sbrk is called. It is used in two ways internally:
656 * When sbrk is called to extend the top of the arena to satisfy
657 a new malloc request, this much padding is added to the sbrk
660 * When malloc_trim is called automatically from free(),
661 it is used as the `pad' argument.
663 In both cases, the actual amount of padding is rounded
664 so that the end of the arena is always a system page boundary.
666 The main reason for using padding is to avoid calling sbrk so
667 often. Having even a small pad greatly reduces the likelihood
668 that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      cycles.
672 Automatic rounding-up to page-size units is normally sufficient
673 to avoid measurable overhead, so the default is 0. However, in
674 systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      is actually needed.
681 #ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif
687 M_MMAP_THRESHOLD is the request size threshold for using mmap()
688 to service a request. Requests of at least this size that cannot
689 be allocated using already-existing space will be serviced via mmap.
690 (If enough normal freed space already exists it is used instead.)
692 Using mmap segregates relatively large chunks of memory so that
693 they can be individually obtained and released from the host
694 system. A request serviced through mmap is never reused by any
695 other request (at least not directly; the system may just so
696 happen to remap successive requests to the same locations).
698 Segregating space in this way has the benefit that mmapped space
699 can ALWAYS be individually released back to the system, which
700 helps keep the system level memory demands of a long-lived
701 program low. Mapped memory can never become `locked' between
702 other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.
705 However, it has the disadvantages that:
707 1. The space cannot be reclaimed, consolidated, and then
708 used to service later requests, as happens with normal chunks.
       2. It can lead to more wastage because of mmap page alignment
          requirements.
711 3. It causes malloc performance to be more dependent on host
712 system memory management support routines which may vary in
713 implementation quality and may impose arbitrary
714 limitations. Generally, servicing a request via normal
715 malloc steps is faster than going through a system's mmap.
      Altogether, these considerations should lead you to use mmap
718 only for relatively large requests.
#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif
734 M_MMAP_MAX is the maximum number of requests to simultaneously
735 service using mmap. This parameter exists because:
       1. Some systems have a limited number of internal tables for
          use by mmap.
       2. In most systems, overreliance on mmap can degrade overall
          performance.
741 3. If a program allocates many large regions, it is probably
742 better off using normal sbrk-based allocation routines that
743 can reclaim and reallocate normal heap memory. Using a
744 small value allows transition into this mode after the
745 first few allocations.
747 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
748 the default value is 0, and attempts to set it to non-zero values
749 in mallopt will fail.
757 Special defines for linux libc
759 Except when compiled using these special defines for Linux libc
760 using weak aliases, this malloc is NOT designed to work in
761 multithreaded applications. No semaphores or other concurrency
762 control are provided to ensure that multiple malloc or free calls
      don't run at the same time, which could be disastrous. A single
764 semaphore could be used across malloc, realloc, and free (which is
765 essentially the effect of the linux weak alias approach). It would
766 be hard to obtain finer granularity.
771 #ifdef INTERNAL_LINUX_C_LIB
#if __STD_C
Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
#else
Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;
#endif
785 #define MORECORE (*__morecore)
786 #define MORECORE_FAILURE 0
787 #define MORECORE_CLEARS 1
789 #else /* INTERNAL_LINUX_C_LIB */
#if __STD_C
extern Void_t* sbrk(ptrdiff_t);
#else
extern Void_t* sbrk();
#endif
#ifndef MORECORE
#define MORECORE sbrk
#endif
801 #ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif
805 #ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif
809 #endif /* INTERNAL_LINUX_C_LIB */
811 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
813 #define cALLOc __libc_calloc
814 #define fREe __libc_free
815 #define mALLOc __libc_malloc
816 #define mEMALIGn __libc_memalign
817 #define rEALLOc __libc_realloc
818 #define vALLOc __libc_valloc
819 #define pvALLOc __libc_pvalloc
820 #define mALLINFo __libc_mallinfo
821 #define mALLOPt __libc_mallopt
823 #pragma weak calloc = __libc_calloc
824 #pragma weak free = __libc_free
825 #pragma weak cfree = __libc_free
826 #pragma weak malloc = __libc_malloc
827 #pragma weak memalign = __libc_memalign
828 #pragma weak realloc = __libc_realloc
829 #pragma weak valloc = __libc_valloc
830 #pragma weak pvalloc = __libc_pvalloc
831 #pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else
#define cALLOc    calloc
#define fREe      free
839 #define mALLOc malloc
840 #define mEMALIGn memalign
841 #define rEALLOc realloc
842 #define vALLOc valloc
843 #define pvALLOc pvalloc
844 #define mALLINFo mallinfo
#define mALLOPt   mallopt

#endif
849 /* Public routines */
#if __STD_C
Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif
#ifdef __cplusplus
};  /* end of extern "C" */
#endif
887 /* ---------- To make a malloc.h, end cutting here ------------ */
891 Emulation of sbrk for WIN32
892 All code within the ifdef WIN32 is untested by me.
#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
                        ~(malloc_getpagesize-1))
/* reserve 64MB to ensure large contiguous space */
902 #define RESERVED_SIZE (1024*1024*64)
903 #define NEXT_SIZE (2048*1024)
904 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
906 struct GmListElement;
907 typedef struct GmListElement GmListElement;
915 static GmListElement* head = 0;
916 static unsigned int gNextAddress = 0;
917 static unsigned int gAddressBase = 0;
918 static unsigned int gAllocatedSize = 0;
921 GmListElement* makeGmListElement (void* bas)
924 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
938 ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
939 if (gAddressBase && (gNextAddress - gAddressBase))
941 rval = VirtualFree ((void*)gAddressBase,
942 gNextAddress - gAddressBase,
948 GmListElement* next = head->next;
949 rval = VirtualFree (head->base, 0, MEM_RELEASE);
957 void* findRegion (void* start_address, unsigned long size)
959 MEMORY_BASIC_INFORMATION info;
960 while ((unsigned long)start_address < TOP_MEMORY)
962 VirtualQuery (start_address, &info, sizeof (info));
963 if (info.State != MEM_FREE)
964 start_address = (char*)info.BaseAddress + info.RegionSize;
965 else if (info.RegionSize >= size)
966 return start_address;
968 start_address = (char*)info.BaseAddress + info.RegionSize;
975 void* wsbrk (long size)
980 if (gAddressBase == 0)
982 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
983 gNextAddress = gAddressBase =
984 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
985 MEM_RESERVE, PAGE_NOACCESS);
986 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
989 long new_size = max (NEXT_SIZE, AlignPage (size));
990 void* new_address = (void*)(gAddressBase+gAllocatedSize);
993 new_address = findRegion (new_address, new_size);
995 if (new_address == 0)
998 gAddressBase = gNextAddress =
999 (unsigned int)VirtualAlloc (new_address, new_size,
1000 MEM_RESERVE, PAGE_NOACCESS);
1001 // repeat in case of race condition
1002 // The region that we found has been snagged
1003 // by another thread
1005 while (gAddressBase == 0);
1007 ASSERT (new_address == (void*)gAddressBase);
1009 gAllocatedSize = new_size;
1011 if (!makeGmListElement ((void*)gAddressBase))
1014 if ((size + gNextAddress) > AlignPage (gNextAddress))
1017 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1018 (size + gNextAddress -
1019 AlignPage (gNextAddress)),
1020 MEM_COMMIT, PAGE_READWRITE);
1024 tmp = (void*)gNextAddress;
1025 gNextAddress = (unsigned int)tmp + size;
1030 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1031 /* Trim by releasing the virtual memory */
1032 if (alignedGoal >= gAddressBase)
1034 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1036 gNextAddress = gNextAddress + size;
1037 return (void*)gNextAddress;
1041 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1043 gNextAddress = gAddressBase;
1049 return (void*)gNextAddress;
struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1065 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1066 struct malloc_chunk* fd; /* double links -- used only if free. */
  struct malloc_chunk* bk;
};
1070 typedef struct malloc_chunk* mchunkptr;
1074 malloc_chunk details:
1076 (The following includes lightly edited explanations by Colin Plumb.)
1078 Chunks of memory are maintained using a `boundary tag' method as
1079 described in e.g., Knuth or Standish. (See the paper by Paul
1080 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1081 survey of such techniques.) Sizes of free chunks are stored both
1082 in the front of each chunk and at the end. This makes
1083 consolidating fragmented chunks into bigger chunks very fast. The
1084 size fields also hold bits representing whether chunks are free or
1087 An allocated chunk looks like this:
1090 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1091 | Size of previous chunk, if allocated | |
1092 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1093 | Size of chunk, in bytes |P|
1094 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1095 | User data starts here... .
            .             (malloc_usable_size() bytes)                     .
1099 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1101 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1104 Where "chunk" is the front of the chunk for the purpose of most of
1105 the malloc code, but "mem" is the pointer that is returned to the
1106 user. "Nextchunk" is the beginning of the next contiguous chunk.
    Chunks always begin on even word boundaries, so the mem portion
1109 (which is returned to the user) is also on an even word boundary, and
1110 thus double-word aligned.
1112 Free chunks are stored in circular doubly-linked lists, and look like this:
1114 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1115 | Size of previous chunk |
1116 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1117 `head:' | Size of chunk, in bytes |P|
1118 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1119 | Forward pointer to next chunk in list |
1120 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1121 | Back pointer to previous chunk in list |
1122 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1123 | Unused space (may be 0 bytes long) .
1126 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1127 `foot:' | Size of chunk, in bytes |
1128 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1130 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1131 chunk size (which is always a multiple of two words), is an in-use
1132 bit for the *previous* chunk. If that bit is *clear*, then the
1133 word before the current chunk size contains the previous chunk
1134 size, and can be used to find the front of the previous chunk.
1135 (The very first chunk allocated always has this bit set,
1136 preventing access to non-existent (or non-owned) memory.)
1138 Note that the `foot' of the current chunk is actually represented
1139 as the prev_size of the NEXT chunk. (This makes it easier to
1140 deal with alignments etc).
1142 The two exceptions to all this are
1144 1. The special chunk `top', which doesn't bother using the
1145 trailing size field since there is no
1146 next contiguous chunk that would have to index off it. (After
1147 initialization, `top' is forced to always exist. If it would
	become less than MINSIZE bytes long, it is replenished via
	malloc_extend_top.)
1151 2. Chunks allocated via mmap, which have the second-lowest-order
1152 bit (IS_MMAPPED) set in their size fields. Because they are
1153 never merged or traversed from any other chunk, they have no
1154 foot size or inuse information.
1156 Available chunks are kept in any of several places (all declared below):
1158 * `av': An array of chunks serving as bin headers for consolidated
1159 chunks. Each bin is doubly linked. The bins are approximately
1160 proportionally (log) spaced. There are a lot of these bins
1161 (128). This may look excessive, but works very well in
1162 practice. All procedures maintain the invariant that no
1163 consolidated chunk physically borders another one. Chunks in
1164 bins are kept in size order, with ties going to the
1165 approximately least recently used chunk.
1167 The chunks in each bin are maintained in decreasing sorted order by
1168 size. This is irrelevant for the small bins, which all contain
1169 the same-sized chunks, but facilitates best-fit allocation for
1170 larger chunks. (These lists are just sequential. Keeping them in
1171 order almost never requires enough traversal to warrant using
1172 fancier ordered data structures.) Chunks of the same size are
1173 linked with the most recently freed at the front, and allocations
1174 are taken from the back. This results in LRU or FIFO allocation
1175 order, which tends to give each chunk an equal opportunity to be
1176 consolidated with adjacent freed chunks, resulting in larger free
1177 chunks and less fragmentation.
1179 * `top': The top-most available chunk (i.e., the one bordering the
1180 end of available memory) is treated specially. It is never
1181 included in any bin, is used only if no other chunk is
1182 available, and is released back to the system if it is very
1183 large (see M_TRIM_THRESHOLD).
1185 * `last_remainder': A bin holding only the remainder of the
1186 most recently split (non-top) chunk. This bin is checked
1187 before other non-fitting chunks, so as to provide better
1188 locality for runs of sequentially allocated chunks.
1190 * Implicitly, through the host system's memory mapping tables.
1191 If supported, requests greater than a threshold are usually
1192 serviced via calls to mmap, and then later released via munmap.
1201 /* sizes, alignments */
1203 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1204 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1205 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1206 #define MINSIZE (sizeof(struct malloc_chunk))
1208 /* conversion from malloc headers to user pointers, and back */
1210 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1211 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1213 /* pad request bytes into a usable size */
1215 #define request2size(req) \
1216 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1217 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1218 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
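/*
  Illustrative values (assuming 4-byte SIZE_SZ, so MALLOC_ALIGN_MASK is 7
  and MINSIZE is 16): request2size(0) and request2size(10) pad to under
  MINSIZE + MALLOC_ALIGN_MASK and so yield the minimum 16;
  request2size(13) computes 13 + 4 + 7 = 24 and masks to 24;
  request2size(21) computes 21 + 4 + 7 = 32, already 8-byte aligned.
*/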
1220 /* Check if m has acceptable alignment */
1222 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1228 Physical chunk operations
1232 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1234 #define PREV_INUSE 0x1
1236 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1238 #define IS_MMAPPED 0x2
1240 /* Bits to mask off when extracting size */
1242 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1245 /* Ptr to next physical malloc_chunk. */
1247 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1249 /* Ptr to previous physical malloc_chunk */
1251 #define prev_chunk(p)\
1252 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1255 /* Treat space at ptr + offset as a chunk */
1257 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1263 Dealing with use bits
1266 /* extract p's inuse bit */
#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1271 /* extract inuse bit of previous chunk */
1273 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1275 /* check for mmap()'ed chunk */
1277 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1279 /* set/clear chunk as in use without otherwise disturbing */
1281 #define set_inuse(p)\
1282 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1284 #define clear_inuse(p)\
1285 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1287 /* check/set/clear inuse bits in known places */
1289 #define inuse_bit_at_offset(p, s)\
1290 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1292 #define set_inuse_bit_at_offset(p, s)\
1293 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1295 #define clear_inuse_bit_at_offset(p, s)\
1296 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1302 Dealing with size fields
1305 /* Get size, ignoring use bits */
1307 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1309 /* Set size at head, without disturbing its use bit */
1311 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1313 /* Set size/use ignoring previous bits in header */
1315 #define set_head(p, s) ((p)->size = (s))
1317 /* Set size at footer (only when chunk is not in use) */
1319 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
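/*
  For instance (an illustrative fragment, not from the original code),
  splitting a 64-byte chunk p that is being allocated, leaving a 32-byte
  free remainder r, could be written:

      mchunkptr r = chunk_at_offset(p, 32);
      set_head(p, 32 | PREV_INUSE);
      set_head(r, 32 | PREV_INUSE);
      set_foot(r, 32);

  after which chunksize(p) == chunksize(r) == 32, next_chunk(p) == r,
  and r's trailing size is recorded in the prev_size field of the chunk
  that follows it.
*/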
1328 The bins, `av_' are an array of pairs of pointers serving as the
1329 heads of (initially empty) doubly-linked lists of chunks, laid out
1330 in a way so that each pair can be treated as if it were in a
1331 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1332 and chunks are the same).
1334 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1335 8 bytes apart. Larger bins are approximately logarithmically
1336 spaced. (See the table below.) The `av_' array is never mentioned
1337 directly in the code, but instead via bin access macros.
       64 bins of size       8
       32 bins of size      64
       16 bins of size     512
        8 bins of size    4096
        4 bins of size   32768
1346 2 bins of size 262144
1347 1 bin of size what's left
1349 There is actually a little bit of slop in the numbers in bin_index
1350 for the sake of speed. This makes no difference elsewhere.
1352 The special chunks `top' and `last_remainder' get their own bins,
1353 (this is implemented via yet more trickery with the av_ array),
1354 although `top' is never properly linked to its bin since it is
1355 always handled specially.
1359 #define NAV 128 /* number of bins */
1361 typedef struct malloc_chunk* mbinptr;
1365 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1366 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1367 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1370 The first 2 bins are never indexed. The corresponding av_ cells are instead
1371 used for bookkeeping. This is not to save space, but to simplify
1372 indexing, maintain locality, and avoid some initialization tests.
1375 #define top (bin_at(0)->fd) /* The topmost chunk */
1376 #define last_remainder (bin_at(1)) /* remainder from last split */
1380 Because top initially points to its own bin with initial
1381 zero size, thus forcing extension on the first malloc request,
1382 we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to check in malloc_extend_top.
1386 #define initial_top ((mchunkptr)(bin_at(0)))
1388 /* Helper macro to initialize bins */
1390 #define IAV(i) bin_at(i), bin_at(i)
1392 static mbinptr av_[NAV * 2 + 2] = {
1394 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1395 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1396 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1397 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1398 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1399 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1400 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1401 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1402 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1403 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1404 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1405 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1406 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1407 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1408 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1409 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1414 /* field-extraction macros */
1416 #define first(b) ((b)->fd)
1417 #define last(b) ((b)->bk)
1423 #define bin_index(sz) \
1424 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1425 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1426 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1427 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1428 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
                                          126)
1432 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1433 identically sized chunks. This is exploited in malloc.
1436 #define MAX_SMALLBIN 63
1437 #define MAX_SMALLBIN_SIZE 512
1438 #define SMALLBIN_WIDTH 8
1440 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1443 Requests are `small' if both the corresponding and the next bin are small
1446 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
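/*
  Illustrative indices: a padded size of 40 is small, so
  smallbin_index(40) == 40 >> 3 == 5, and every chunk in that bin has
  size exactly 40. A size of 1300 has 1300 >> 9 == 2 <= 4, so
  bin_index(1300) == 56 + (1300 >> 6) == 76, one of the 64-byte-spaced
  bins.
*/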
1451 To help compensate for the large number of bins, a one-level index
1452 structure is used for bin-by-bin searching. `binblocks' is a
1453 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1454 have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals. The bits are NOT always
1456 cleared as soon as all bins in a block are empty, but instead only
1457 when all are noticed to be empty during traversal in malloc.
1460 #define BINBLOCKWIDTH 4 /* bins per block */
1462 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1464 /* bin<->block macros */
1466 #define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
1467 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1468 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
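/*
  For example (illustrative): bin 37 lies in block 37/4 == 9, so
  idx2binblock(37) is the bit 1<<9. mark_binblock(37) sets that bit in
  binblocks; a traversal that later finds all of bins 36..39 empty can
  clear it again via clear_binblock(37).
*/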
1474 /* Other static bookkeeping data */
1476 /* variables holding tunable values */
1478 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1479 static unsigned long top_pad = DEFAULT_TOP_PAD;
1480 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1481 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1483 /* The first value returned from sbrk */
1484 static char* sbrk_base = (char*)(-1);
1486 /* The maximum memory obtained from system via sbrk */
1487 static unsigned long max_sbrked_mem = 0;
1489 /* The maximum via either sbrk or mmap */
1490 static unsigned long max_total_mem = 0;
1492 /* internal working copy of mallinfo */
1493 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1495 /* The total memory obtained from system via sbrk */
1496 #define sbrked_mem (current_mallinfo.arena)
1498 /* Tracking mmaps */
1500 static unsigned int n_mmaps = 0;
1501 static unsigned int max_n_mmaps = 0;
1502 static unsigned long mmapped_mem = 0;
1503 static unsigned long max_mmapped_mem = 0;
1515 These routines make a number of assertions about the states
1516 of data structures that should be true at all times. If any
1517 are not true, it's very likely that a user program has somehow
1518 trashed memory. (It's also possible that there is a coding error
   in malloc, in which case please report it!)
#if DEBUG

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
1528 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1530 /* No checkable chunk is mmapped */
1531 assert(!chunk_is_mmapped(p));
1533 /* Check for legal address ... */
1534 assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);
}
#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
1549 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1550 mchunkptr next = chunk_at_offset(p, sz);
  /* Check whether it claims to be free ... */
  assert(!inuse(p));
1557 /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
1560 assert((sz & MALLOC_ALIGN_MASK) == 0);
1561 assert(aligned_OK(chunk2mem(p)));
1562 /* ... matching footer field */
1563 assert(next->prev_size == sz);
1564 /* ... and is fully consolidated */
1565 assert(prev_inuse(p));
1566 assert (next == top || inuse(next));
1568 /* ... and has minimally sane links */
1569 assert(p->fd->bk == p);
1570 assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}
#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
1582 mchunkptr next = next_chunk(p);
  /* Check whether it claims to be in use ... */
  assert(inuse(p));
1588 /* ... and is surrounded by OK chunks.
1589 Since more things can be checked with free chunks than inuse ones,
    if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p))
  {
1594 mchunkptr prv = prev_chunk(p);
1595 assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }
  if (next == top)
  {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
1603 else if (!inuse(next))
    do_check_free_chunk(next);
}
#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  long room = sz - s;
1617 do_check_inuse_chunk(p);
1619 /* Legal size ... */
1620 assert((long)sz >= (long)MINSIZE);
1621 assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert(room >= 0);
  assert(room < (long)MINSIZE);
1625 /* ... and alignment */
1626 assert(aligned_OK(chunk2mem(p)));
1629 /* ... and was allocated at front of an available chunk */
  assert(prev_inuse(p));
}
1635 #define check_free_chunk(P) do_check_free_chunk(P)
1636 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1637 #define check_chunk(P) do_check_chunk(P)
#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
#else
1640 #define check_free_chunk(P)
1641 #define check_inuse_chunk(P)
1642 #define check_chunk(P)
#define check_malloced_chunk(P,N)
#endif
1649 Macro-based internal utilities
1654 Linking chunks in bin lists.
1655 Call these only with variables, not arbitrary expressions, as arguments.
1659 Place chunk p of size s in its bin, in size order,
1660 putting it ahead of others of same size.
#define frontlink(P, S, IDX, BK, FD)                                          \
{                                                                             \
  if (S < MAX_SMALLBIN_SIZE)                                                  \
  {                                                                           \
    IDX = smallbin_index(S);                                                  \
    mark_binblock(IDX);                                                       \
    BK = bin_at(IDX); FD = BK->fd;                                            \
    P->bk = BK; P->fd = FD;                                                   \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
  else                                                                        \
  {                                                                           \
    IDX = bin_index(S);                                                       \
    BK = bin_at(IDX); FD = BK->fd;                                            \
    if (FD == BK) mark_binblock(IDX);                                         \
    else { while (FD != BK && S < chunksize(FD)) FD = FD->fd; BK = FD->bk; }  \
    P->bk = BK; P->fd = FD;                                                   \
    FD->bk = BK->fd = P;                                                      \
  }                                                                           \
}
1694 /* take a chunk off a list */
#define unlink(P, BK, FD)                                                     \
{                                                                             \
  BK = P->bk; FD = P->fd;                                                     \
  FD->bk = BK; BK->fd = FD;                                                   \
}
1704 /* Place p as the last remainder */
#define link_last_remainder(P)                                                \
{                                                                             \
  last_remainder->fd = last_remainder->bk = P;                                \
  P->fd = P->bk = last_remainder;                                             \
}
1712 /* Clear the last_remainder bin */
1714 #define clear_last_remainder \
1715 (last_remainder->fd = last_remainder->bk = last_remainder)
1722 /* Routines dealing with mmap(). */
#if __STD_C
static mchunkptr mmap_chunk(size_t size)
#else
static mchunkptr mmap_chunk(size) size_t size;
#endif
{
  mchunkptr p;
1732 size_t page_mask = malloc_getpagesize - 1;
#ifndef MAP_ANONYMOUS
  static int fd = -1;
#endif
1739 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1741 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
   * there is no following chunk whose prev_size field could be used.
   */
1744 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1746 #ifdef MAP_ANONYMOUS
1747 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1748 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1749 #else /* !MAP_ANONYMOUS */
  if (fd < 0)
  {
    fd = open("/dev/zero", O_RDWR);
    if(fd < 0) return 0;
  }
  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
#endif
1758 if(p == (mchunkptr)-1) return 0;
  n_mmaps++;
  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1763 /* We demand that eight bytes into a page must be 8-byte aligned. */
1764 assert(aligned_OK(chunk2mem(p)));
1766 /* The offset to the start of the mmapped region is stored
1767 * in the prev_size field of the chunk; normally it is zero,
   * but that can be changed in memalign().
   */
  p->prev_size = 0;
1771 set_head(p, size|IS_MMAPPED);
1773 mmapped_mem += size;
1774 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1775 max_mmapped_mem = mmapped_mem;
1776 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;

  return p;
}
#if __STD_C
static void munmap_chunk(mchunkptr p)
#else
static void munmap_chunk(p) mchunkptr p;
#endif
{
  int ret;
1787 INTERNAL_SIZE_T size = chunksize(p);
1790 assert (chunk_is_mmapped(p));
1791 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1792 assert((n_mmaps > 0));
1793 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
  n_mmaps--;
  mmapped_mem -= (size + p->prev_size);
1798 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
  /* munmap returns non-zero on failure */
  assert(ret == 0);
}
#if HAVE_MREMAP

#if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
#else
static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
#endif
{
  char* cp;
1812 size_t page_mask = malloc_getpagesize - 1;
1813 INTERNAL_SIZE_T offset = p->prev_size;
1814 INTERNAL_SIZE_T size = chunksize(p);
1817 assert (chunk_is_mmapped(p));
1818 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1819 assert((n_mmaps > 0));
1820 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1822 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1823 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1825 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1827 if (cp == (char *)-1) return 0;
1829 p = (mchunkptr)(cp + offset);
1831 assert(aligned_OK(chunk2mem(p)));
1833 assert((p->prev_size == offset));
1834 set_head(p, (new_size - offset)|IS_MMAPPED);
1836 mmapped_mem -= size + offset;
1837 mmapped_mem += new_size;
1838 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1839 max_mmapped_mem = mmapped_mem;
1840 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
    max_total_mem = mmapped_mem + sbrked_mem;

  return p;
}
1845 #endif /* HAVE_MREMAP */
1847 #endif /* HAVE_MMAP */
1853 Extend the top-most chunk by obtaining memory from system.
1854 Main interface to sbrk (but see also malloc_trim).
#if __STD_C
static void malloc_extend_top(INTERNAL_SIZE_T nb)
#else
static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
#endif
{
1863 char* brk; /* return value from sbrk */
1864 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1865 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1866 char* new_brk; /* return of 2nd sbrk call */
1867 INTERNAL_SIZE_T top_size; /* new size of top chunk */
1869 mchunkptr old_top = top; /* Record state of old top */
1870 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1871 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
1873 /* Pad request with top_pad plus minimal overhead */
1875 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
1876 unsigned long pagesz = malloc_getpagesize;
1878 /* If not the first time through, round to preserve page boundary */
1879 /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call occurs.) */
1882 if (sbrk_base != (char*)(-1))
1883 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1885 brk = (char*)(MORECORE (sbrk_size));
1887 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
1888 if (brk == (char*)(MORECORE_FAILURE) ||
      (brk < old_end && old_top != initial_top))
    return;
1892 sbrked_mem += sbrk_size;
  if (brk == old_end) /* can just add bytes to current top */
  {
    top_size = sbrk_size + old_top_size;
    set_head(top, top_size | PREV_INUSE);
  }
  else
  {
    if (sbrk_base == (char*)(-1)) /* First time through. Record base */
      sbrk_base = brk;
1903 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
1904 sbrked_mem += brk - (char*)old_end;
1906 /* Guarantee alignment of first new chunk made from this space */
1907 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
    if (front_misalign > 0)
    {
      correction = (MALLOC_ALIGNMENT) - front_misalign;
      brk += correction;
    }
    else
      correction = 0;
1916 /* Guarantee the next brk will be at a page boundary */
1917 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
1919 /* Allocate correction */
1920 new_brk = (char*)(MORECORE (correction));
1921 if (new_brk == (char*)(MORECORE_FAILURE)) return;
1923 sbrked_mem += correction;
1925 top = (mchunkptr)brk;
1926 top_size = new_brk - brk + correction;
1927 set_head(top, top_size | PREV_INUSE);
    if (old_top != initial_top)
    {
1932 /* There must have been an intervening foreign sbrk call. */
1933 /* A double fencepost is necessary to prevent consolidation */
1935 /* If not enough space to do this, then user did something very wrong */
      if (old_top_size < MINSIZE)
      {
        set_head(top, PREV_INUSE); /* will force null return from malloc */
        return;
      }
1942 /* Also keep size a multiple of MALLOC_ALIGNMENT */
1943 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
      chunk_at_offset(old_top, old_top_size          )->size =
        SIZE_SZ|PREV_INUSE;
      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
        SIZE_SZ|PREV_INUSE;
1948 set_head_size(old_top, old_top_size);
1949 /* If possible, release the rest. */
      if (old_top_size >= MINSIZE)
        fREe(chunk2mem(old_top));
    }
  }
1955 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
1956 max_sbrked_mem = sbrked_mem;
1957 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1958 max_total_mem = mmapped_mem + sbrked_mem;
1960 /* We always land on a page boundary */
  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
}
1967 /* Main public routines */
1973 The requested size is first converted into a usable form, `nb'.
1974 This currently means to add 4 bytes overhead plus possibly more to
1975 obtain 8-byte alignment and/or to obtain a size of at least
1976 MINSIZE (currently 16 bytes), the smallest allocatable size.
1977 (All fits are considered `exact' if they are within MINSIZE bytes.)
1979 From there, the first successful of the following steps is taken:
1981 1. The bin corresponding to the request size is scanned, and if
1982 a chunk of exactly the right size is found, it is taken.
1984 2. The most recently remaindered chunk is used if it is big
1985 enough. This is a form of (roving) first fit, used only in
1986 the absence of exact fits. Runs of consecutive requests use
1987 the remainder of the chunk used for the previous such request
1988 whenever possible. This limited use of a first-fit style
1989 allocation strategy tends to give contiguous chunks
1990 coextensive lifetimes, which improves locality and can reduce
1991 fragmentation in the long run.
1993 3. Other bins are scanned in increasing size order, using a
1994 chunk big enough to fulfill the request, and splitting off
1995 any remainder. This search is strictly by best-fit; i.e.,
1996 the smallest (with ties going to approximately the least
1997 recently used) chunk that fits is selected.
1999 4. If large enough, the chunk bordering the end of memory
2000 (`top') is split off. (This use of `top' is in accord with
2001 the best-fit search rule. In effect, `top' is treated as
2002 larger (and thus less well fitting) than any other available
2003 chunk since it can be extended to be as large as necessary
2004 (up to system limitations).)
2006 5. If the request size meets the mmap threshold and the
2007 system supports mmap, and there are few enough currently
2008 allocated mmapped regions, and a call to mmap succeeds,
2009 the request is allocated via direct memory mapping.
2011 6. Otherwise, the top of memory is extended by
2012 obtaining more space from the system (normally using sbrk,
2013 but definable to anything else via the MORECORE macro).
2014 Memory is gathered from the system (in system page-sized
2015 units) in a way that allows chunks obtained across different
2016 sbrk calls to be consolidated, but does not require
2017 contiguous memory. Thus, it should be safe to intersperse
2018 mallocs with other sbrk calls.
2021 All allocations are made from the `lowest' part of any found
2022 chunk. (The implementation invariant is that prev_inuse is
2023 always true of any allocated chunk; i.e., that each allocated
2024 chunk borders either a previously allocated and still in-use chunk,
2025 or the base of its memory arena.)
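To make the size normalization described above concrete, here is a minimal
stand-alone sketch of the rounding arithmetic. The constants assume 4-byte
size fields and 8-byte alignment (the defaults discussed earlier); the
request2size macro actually used in this file is written differently but
has the same effect for ordinary request sizes:

  #include <stdio.h>

  #define SKETCH_SIZE_SZ   4ul   /* assumed size-field overhead  */
  #define SKETCH_ALIGN     8ul   /* assumed MALLOC_ALIGNMENT     */
  #define SKETCH_MINSIZE  16ul   /* assumed smallest chunk size  */

  /* Add the size-field overhead, round up to the alignment, and
     enforce the minimum chunk size. */
  static unsigned long sketch_request2size(unsigned long req)
  {
    unsigned long nb = (req + SKETCH_SIZE_SZ + SKETCH_ALIGN - 1)
                       & ~(SKETCH_ALIGN - 1);
    return (nb < SKETCH_MINSIZE) ? SKETCH_MINSIZE : nb;
  }

  int main()
  {
    printf("%lu\n", sketch_request2size(1));   /* -> 16 */
    printf("%lu\n", sketch_request2size(13));  /* -> 24 */
    printf("%lu\n", sketch_request2size(24));  /* -> 32 */
    return 0;
  }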
#if __STD_C
2030 Void_t* mALLOc(size_t bytes)
#else
2032 Void_t* mALLOc(bytes) size_t bytes;
#endif
2035 mchunkptr victim; /* inspected/selected chunk */
2036 INTERNAL_SIZE_T victim_size; /* its size */
2037 int idx; /* index for bin traversal */
2038 mbinptr bin; /* associated bin */
2039 mchunkptr remainder; /* remainder from a split */
2040 long remainder_size; /* its size */
2041 int remainder_index; /* its bin index */
2042 unsigned long block; /* block traverser bit */
2043 int startidx; /* first bin of a traversed block */
2044 mchunkptr fwd; /* misc temp for linking */
2045 mchunkptr bck; /* misc temp for linking */
2046 mbinptr q; /* misc temp */
2048 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2050 /* Check for exact match in a bin */
2052 if (is_small_request(nb)) /* Faster version for small requests */
2054 idx = smallbin_index(nb);
2056 /* No traversal or size check necessary for small bins. */
2061 /* Also scan the next one, since it would have a remainder < MINSIZE */
2069 victim_size = chunksize(victim);
2070 unlink(victim, bck, fwd);
2071 set_inuse_bit_at_offset(victim, victim_size);
2072 check_malloced_chunk(victim, nb);
2073 return chunk2mem(victim);
2076 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2081 idx = bin_index(nb);
bin = bin_at(idx);
2084 for (victim = last(bin); victim != bin; victim = victim->bk)
2086 victim_size = chunksize(victim);
2087 remainder_size = victim_size - nb;
2089 if (remainder_size >= (long)MINSIZE) /* too big */
2091 --idx; /* adjust to rescan below after checking last remainder */
break;
2095 else if (remainder_size >= 0) /* exact fit */
2097 unlink(victim, bck, fwd);
2098 set_inuse_bit_at_offset(victim, victim_size);
2099 check_malloced_chunk(victim, nb);
2100 return chunk2mem(victim);
2108 /* Try to use the last split-off remainder */
2110 if ( (victim = last_remainder->fd) != last_remainder)
2112 victim_size = chunksize(victim);
2113 remainder_size = victim_size - nb;
2115 if (remainder_size >= (long)MINSIZE) /* re-split */
2117 remainder = chunk_at_offset(victim, nb);
2118 set_head(victim, nb | PREV_INUSE);
2119 link_last_remainder(remainder);
2120 set_head(remainder, remainder_size | PREV_INUSE);
2121 set_foot(remainder, remainder_size);
2122 check_malloced_chunk(victim, nb);
2123 return chunk2mem(victim);
2126 clear_last_remainder;
2128 if (remainder_size >= 0) /* exhaust */
2130 set_inuse_bit_at_offset(victim, victim_size);
2131 check_malloced_chunk(victim, nb);
2132 return chunk2mem(victim);
2135 /* Else place in bin */
2137 frontlink(victim, victim_size, remainder_index, bck, fwd);
2141 If there are any possibly nonempty big-enough blocks,
2142 search for best fitting chunk by scanning bins in blockwidth units.
2145 if ( (block = idx2binblock(idx)) <= binblocks)
2148 /* Get to the first marked block */
2150 if ( (block & binblocks) == 0)
2152 /* force to an even block boundary */
2153 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
block <<= 1;
2155 while ((block & binblocks) == 0)
2157 idx += BINBLOCKWIDTH;
block <<= 1;
2162 /* For each possibly nonempty block ... */
2165 startidx = idx; /* (track incomplete blocks) */
2166 q = bin = bin_at(idx);
2168 /* For each bin in this block ... */
2171 /* Find and use first big enough chunk ... */
2173 for (victim = last(bin); victim != bin; victim = victim->bk)
2175 victim_size = chunksize(victim);
2176 remainder_size = victim_size - nb;
2178 if (remainder_size >= (long)MINSIZE) /* split */
2180 remainder = chunk_at_offset(victim, nb);
2181 set_head(victim, nb | PREV_INUSE);
2182 unlink(victim, bck, fwd);
2183 link_last_remainder(remainder);
2184 set_head(remainder, remainder_size | PREV_INUSE);
2185 set_foot(remainder, remainder_size);
2186 check_malloced_chunk(victim, nb);
2187 return chunk2mem(victim);
2190 else if (remainder_size >= 0) /* take */
2192 set_inuse_bit_at_offset(victim, victim_size);
2193 unlink(victim, bck, fwd);
2194 check_malloced_chunk(victim, nb);
2195 return chunk2mem(victim);
2200 bin = next_bin(bin);
2202 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2204 /* Clear out the block bit. */
2206 do /* Possibly backtrack to try to clear a partial block */
2208 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
{
2210 binblocks &= ~block;
break;
}
--startidx;
q = prev_bin(q);
2215 } while (first(q) == q);
2217 /* Get to the next possibly nonempty block */
2219 if ( (block <<= 1) <= binblocks && (block != 0) )
2221 while ((block & binblocks) == 0)
2223 idx += BINBLOCKWIDTH;
block <<= 1;
2233 /* Try to use top chunk */
2235 /* Require that there be a remainder, ensuring top always exists */
2236 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2240 /* If big and would otherwise need to extend, try to use mmap instead */
2241 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2242 (victim = mmap_chunk(nb)) != 0)
2243 return chunk2mem(victim);
2247 malloc_extend_top(nb);
2248 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2249 return 0; /* propagate failure */
victim = top;
2253 set_head(victim, nb | PREV_INUSE);
2254 top = chunk_at_offset(victim, nb);
2255 set_head(top, remainder_size | PREV_INUSE);
2256 check_malloced_chunk(victim, nb);
2257 return chunk2mem(victim);
2270 1. free(0) has no effect.
2272 2. If the chunk was allocated via mmap, it is released via munmap().
2274 3. If a returned chunk borders the current high end of memory,
2275 it is consolidated into the top, and if the total unused
2276 topmost memory exceeds the trim threshold, malloc_trim is called.
2279 4. Other chunks are consolidated as they arrive, and
2280 placed in corresponding bins. (This includes the case of
2281 consolidating with the current `last_remainder').
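A tiny caller-side illustration of these rules (which rule fires for the
large block depends on the mmap and trim thresholds in effect):

  #include <stdlib.h>

  int main()
  {
    char* big = (char*)malloc(200000); /* mmapped or sbrk'ed,         */
                                       /* depending on mmap_threshold */
    free(0);                           /* rule 1: no effect           */
    free(big);                         /* rule 2: munmap'ed; or rules */
                                       /* 3-4: consolidated, possibly */
                                       /* trimming the top            */
    return 0;
  }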
#if __STD_C
2287 void fREe(Void_t* mem)
#else
2289 void fREe(mem) Void_t* mem;
#endif
2292 mchunkptr p; /* chunk corresponding to mem */
2293 INTERNAL_SIZE_T hd; /* its head field */
2294 INTERNAL_SIZE_T sz; /* its size */
2295 int idx; /* its bin index */
2296 mchunkptr next; /* next contiguous chunk */
2297 INTERNAL_SIZE_T nextsz; /* its size */
2298 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2299 mchunkptr bck; /* misc temp for linking */
2300 mchunkptr fwd; /* misc temp for linking */
2301 int islr; /* track whether merging with last_remainder */
2303 if (mem == 0) /* free(0) has no effect */
return;
p = mem2chunk(mem);
hd = p->size;
2310 if (hd & IS_MMAPPED) /* release mmapped memory. */
{ munmap_chunk(p); return; }
2317 check_inuse_chunk(p);
2319 sz = hd & ~PREV_INUSE;
2320 next = chunk_at_offset(p, sz);
2321 nextsz = chunksize(next);
2323 if (next == top) /* merge with top */
sz += nextsz;
2327 if (!(hd & PREV_INUSE)) /* consolidate backward */
2329 prevsz = p->prev_size;
2330 p = chunk_at_offset(p, -prevsz);
sz += prevsz;
2332 unlink(p, bck, fwd);
2335 set_head(p, sz | PREV_INUSE);
top = p;
2337 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2338 malloc_trim(top_pad);
return;
2342 set_head(next, nextsz); /* clear inuse bit */
islr = 0;
2346 if (!(hd & PREV_INUSE)) /* consolidate backward */
2348 prevsz = p->prev_size;
2349 p = chunk_at_offset(p, -prevsz);
sz += prevsz;
2352 if (p->fd == last_remainder) /* keep as last_remainder */
islr = 1;
else
2355 unlink(p, bck, fwd);
2358 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
sz += nextsz;
2362 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
islr = 1;
2365 link_last_remainder(p);
else
2368 unlink(next, bck, fwd);
2372 set_head(p, sz | PREV_INUSE);
set_foot(p, sz);
if (!islr)
2375 frontlink(p, sz, idx, bck, fwd);
2386 Chunks that were obtained via mmap cannot be extended or shrunk
2387 unless HAVE_MREMAP is defined, in which case mremap is used.
2388 Otherwise, if their reallocation is for additional space, they are
2389 copied. If for less, they are just left alone.
2391 Otherwise, if the reallocation is for additional space, and the
2392 chunk can be extended, it is, else a malloc-copy-free sequence is
2393 taken. There are several different ways that a chunk could be
2394 extended. All are tried:
2396 * Extending forward into following adjacent free chunk.
2397 * Shifting backwards, joining preceding adjacent space
2398 * Both shifting backwards and extending forward.
2399 * Extending into newly sbrked space
2401 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2402 size argument of zero (re)allocates a minimum-sized chunk.
2404 If the reallocation is for less space, and the new request is for
2405 a `small' (<512 bytes) size, then the newly unused space is lopped off and freed.
2408 The old unix realloc convention of allowing the last-free'd chunk
2409 to be used as an argument to realloc is no longer supported.
2410 I don't know of any programs still relying on this feature,
2411 and allowing it would also allow too many other incorrect
2412 usages of realloc to be sensible.
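Because the returned pointer may differ from the argument, and a null
return leaves the old block intact and still allocated, callers should
assign through a temporary. A usage sketch (not part of this file):

  #include <stdlib.h>

  /* Grow *pbuf to new_size. Returns 0 on success, -1 on failure;
     on failure *pbuf is untouched and must still be freed later. */
  static int grow_buffer(void** pbuf, size_t new_size)
  {
    void* tmp = realloc(*pbuf, new_size);  /* may move the data */
    if (tmp == 0) return -1;
    *pbuf = tmp;
    return 0;
  }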
#if __STD_C
2419 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
#else
2421 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
#endif
2424 INTERNAL_SIZE_T nb; /* padded request size */
2426 mchunkptr oldp; /* chunk corresponding to oldmem */
2427 INTERNAL_SIZE_T oldsize; /* its size */
2429 mchunkptr newp; /* chunk to return */
2430 INTERNAL_SIZE_T newsize; /* its size */
2431 Void_t* newmem; /* corresponding user mem */
2433 mchunkptr next; /* next contiguous chunk after oldp */
2434 INTERNAL_SIZE_T nextsize; /* its size */
2436 mchunkptr prev; /* previous contiguous chunk before oldp */
2437 INTERNAL_SIZE_T prevsize; /* its size */
2439 mchunkptr remainder; /* holds split off extra space from newp */
2440 INTERNAL_SIZE_T remainder_size; /* its size */
2442 mchunkptr bck; /* misc temp for linking */
2443 mchunkptr fwd; /* misc temp for linking */
2445 #ifdef REALLOC_ZERO_BYTES_FREES
2446 if (bytes == 0) { fREe(oldmem); return 0; }
2450 /* realloc of null is supposed to be same as malloc */
2451 if (oldmem == 0) return mALLOc(bytes);
2453 newp = oldp = mem2chunk(oldmem);
2454 newsize = oldsize = chunksize(oldp);
2457 nb = request2size(bytes);
2460 if (chunk_is_mmapped(oldp))
2463 newp = mremap_chunk(oldp, nb);
2464 if(newp) return chunk2mem(newp);
2466 /* Note the extra SIZE_SZ overhead. */
2467 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2468 /* Must alloc, copy, free. */
2469 newmem = mALLOc(bytes);
2470 if (newmem == 0) return 0; /* propagate failure */
2471 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
munmap_chunk(oldp);
return newmem;
2477 check_inuse_chunk(oldp);
2479 if ((long)(oldsize) < (long)(nb))
2482 /* Try expanding forward */
2484 next = chunk_at_offset(oldp, oldsize);
2485 if (next == top || !inuse(next))
2487 nextsize = chunksize(next);
2489 /* Forward into top only if a remainder */
2492 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2494 newsize += nextsize;
2495 top = chunk_at_offset(oldp, nb);
2496 set_head(top, (newsize - nb) | PREV_INUSE);
2497 set_head_size(oldp, nb);
2498 return chunk2mem(oldp);
2502 /* Forward into next chunk */
2503 else if (((long)(nextsize + newsize) >= (long)(nb)))
2505 unlink(next, bck, fwd);
2506 newsize += nextsize;
goto split;
2516 /* Try shifting backwards. */
2518 if (!prev_inuse(oldp))
2520 prev = prev_chunk(oldp);
2521 prevsize = chunksize(prev);
2523 /* try forward + backward first to save a later consolidation */
2530 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2532 unlink(prev, bck, fwd);
newp = prev;
2534 newsize += prevsize + nextsize;
2535 newmem = chunk2mem(newp);
2536 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2537 top = chunk_at_offset(newp, nb);
2538 set_head(top, (newsize - nb) | PREV_INUSE);
2539 set_head_size(newp, nb);
return newmem;
2544 /* into next chunk */
2545 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2547 unlink(next, bck, fwd);
2548 unlink(prev, bck, fwd);
newp = prev;
2550 newsize += nextsize + prevsize;
2551 newmem = chunk2mem(newp);
2552 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
goto split;
2558 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2560 unlink(prev, bck, fwd);
newp = prev;
2562 newsize += prevsize;
2563 newmem = chunk2mem(newp);
2564 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
goto split;
2571 newmem = mALLOc (bytes);
2573 if (newmem == 0) /* propagate failure */
return 0;
2576 /* Avoid copy if newp is next chunk after oldp. */
2577 /* (This can only happen when new chunk is sbrk'ed.) */
2579 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2581 newsize += chunksize(newp);
newp = oldp;
goto split;
2586 /* Otherwise copy, free, and exit */
2587 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
fREe(oldmem);
return newmem;
2593 split: /* split off extra room in old or expanded chunk */
2595 if (newsize - nb >= MINSIZE) /* split off remainder */
2597 remainder = chunk_at_offset(newp, nb);
2598 remainder_size = newsize - nb;
2599 set_head_size(newp, nb);
2600 set_head(remainder, remainder_size | PREV_INUSE);
2601 set_inuse_bit_at_offset(remainder, remainder_size);
2602 fREe(chunk2mem(remainder)); /* let free() deal with it */
2606 set_head_size(newp, newsize);
2607 set_inuse_bit_at_offset(newp, newsize);
2610 check_inuse_chunk(newp);
2611 return chunk2mem(newp);
2621 memalign requests more than enough space from malloc, finds a spot
2622 within that chunk that meets the alignment request, and then
2623 possibly frees the leading and trailing space.
2625 The alignment argument must be a power of two. This property is not
2626 checked by memalign, so misuse may result in random runtime errors.
2628 8-byte alignment is guaranteed by normal malloc calls, so don't
2629 bother calling memalign with an argument of 8 or less.
2631 Overreliance on memalign is a sure way to fragment space.
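A usage sketch (the extern declaration stands in for however a program
makes this file's memalign visible; Void_t is plain void here):

  #include <stdio.h>
  #include <stdlib.h>

  extern void* memalign(size_t alignment, size_t n);

  int main()
  {
    void* p = memalign(4096, 10000);          /* 4096 is a power of 2 */
    if (p == 0) return 1;
    printf("%lu\n", (unsigned long)p % 4096); /* prints 0             */
    free(p);                                  /* freed like any chunk */
    return 0;
  }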
#if __STD_C
2637 Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
2639 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
2642 INTERNAL_SIZE_T nb; /* padded request size */
2643 char* m; /* memory returned by malloc call */
2644 mchunkptr p; /* corresponding chunk */
2645 char* brk; /* alignment point within p */
2646 mchunkptr newp; /* chunk to return */
2647 INTERNAL_SIZE_T newsize; /* its size */
2648 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
2649 mchunkptr remainder; /* spare room at end to split off */
2650 long remainder_size; /* its size */
2652 /* If need less alignment than we give anyway, just relay to malloc */
2654 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2656 /* Otherwise, ensure that it is at least a minimum chunk size */
2658 if (alignment < MINSIZE) alignment = MINSIZE;
2660 /* Call malloc with worst case padding to hit alignment. */
2662 nb = request2size(bytes);
2663 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2665 if (m == 0) return 0; /* propagate failure */
p = mem2chunk(m);
2669 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2672 if(chunk_is_mmapped(p))
2673 return chunk2mem(p); /* nothing more to do */
2676 else /* misaligned */
2679 Find an aligned spot inside chunk.
2680 Since we need to give back leading space in a chunk of at
2681 least MINSIZE, if the first calculation places us at
2682 a spot with less than MINSIZE leader, we can move to the
2683 next aligned spot -- we've allocated enough total room so that
2684 this is always possible.
2687 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
2688 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2690 newp = (mchunkptr)brk;
2691 leadsize = brk - (char*)(p);
2692 newsize = chunksize(p) - leadsize;
2695 if(chunk_is_mmapped(p))
2697 newp->prev_size = p->prev_size + leadsize;
2698 set_head(newp, newsize|IS_MMAPPED);
2699 return chunk2mem(newp);
2703 /* give back leader, use the rest */
2705 set_head(newp, newsize | PREV_INUSE);
2706 set_inuse_bit_at_offset(newp, newsize);
2707 set_head_size(p, leadsize);
2711 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2714 /* Also give back spare room at the end */
2716 remainder_size = chunksize(p) - nb;
2718 if (remainder_size >= (long)MINSIZE)
2720 remainder = chunk_at_offset(p, nb);
2721 set_head(remainder, remainder_size | PREV_INUSE);
2722 set_head_size(p, nb);
2723 fREe(chunk2mem(remainder));
2726 check_inuse_chunk(p);
2727 return chunk2mem(p);
2735 valloc just invokes memalign with alignment argument equal
2736 to the page size of the system (or as near to this as can
2737 be figured out from all the includes/defines above.)
#if __STD_C
2741 Void_t* vALLOc(size_t bytes)
#else
2743 Void_t* vALLOc(bytes) size_t bytes;
#endif
2746 return mEMALIGn (malloc_getpagesize, bytes);
2750 pvalloc just invokes valloc for the nearest pagesize
2751 that will accommodate the request.
#if __STD_C
2756 Void_t* pvALLOc(size_t bytes)
#else
2758 Void_t* pvALLOc(bytes) size_t bytes;
#endif
2761 size_t pagesize = malloc_getpagesize;
2762 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
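For example, with 4096-byte pages the mask arithmetic above rounds a
5000-byte request up to 8192; a quick stand-alone check (the page size
is assumed for the demo):

  #include <stdio.h>

  int main()
  {
    unsigned long pagesize = 4096;  /* assumed for this example */
    unsigned long bytes = 5000;
    printf("%lu\n", (bytes + pagesize - 1) & ~(pagesize - 1)); /* 8192 */
    return 0;
  }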
2767 calloc calls malloc, then zeroes out the allocated chunk.
#if __STD_C
2772 Void_t* cALLOc(size_t n, size_t elem_size)
#else
2774 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
#endif
2778 INTERNAL_SIZE_T csz;
2780 INTERNAL_SIZE_T sz = n * elem_size;
2782 /* check if malloc_extend_top was called, in which case there's no need to clear */
2784 mchunkptr oldtop = top;
2785 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2787 Void_t* mem = mALLOc (sz);
2795 /* Two optional cases in which clearing is not necessary */
2799 if (chunk_is_mmapped(p)) return mem;
2805 if (p == oldtop && csz > oldtopsize)
2807 /* clear only the bytes from non-freshly-sbrked memory */
2812 MALLOC_ZERO(mem, csz - SIZE_SZ);
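One caution: the sz = n * elem_size product above is computed without an
overflow check, so a huge n times a huge elem_size can silently wrap. A
defensive caller-side wrapper (a sketch, not part of this file):

  #include <stdlib.h>

  /* Reject element counts whose product would overflow size_t. */
  static void* checked_calloc(size_t n, size_t elem_size)
  {
    if (elem_size != 0 && n > (size_t)-1 / elem_size)
      return 0;                       /* product would wrap around */
    return calloc(n, elem_size);
  }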
2819 cfree just calls free. It is needed/defined on some systems
2820 that pair it with calloc, presumably for odd historical reasons.
2824 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
#if __STD_C
2826 void cfree(Void_t *mem)
#else
2828 void cfree(mem) Void_t *mem;
#endif
fREe(mem);
2839 Malloc_trim gives memory back to the system (via negative
2840 arguments to sbrk) if there is unused memory at the `high' end of
2841 the malloc pool. You can call this after freeing large blocks of
2842 memory to potentially reduce the system-level memory requirements
2843 of a program. However, it cannot guarantee to reduce memory. Under
2844 some allocation patterns, some large free blocks of memory will be
2845 locked between two used chunks, so they cannot be given back to the system.
2848 The `pad' argument to malloc_trim represents the amount of free
2849 trailing space to leave untrimmed. If this argument is zero,
2850 only the minimum amount of memory to maintain internal data
2851 structures will be left (one page or less). Non-zero arguments
2852 can be supplied to maintain enough trailing space to service
2853 future expected allocations without having to re-obtain memory from the system.
2856 Malloc_trim returns 1 if it actually released any memory, else 0.
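Typical use after releasing a large working set (a usage sketch; the
64 KB pad is an arbitrary illustrative value):

  #include <stdlib.h>

  extern int malloc_trim(size_t pad);

  void release_tables(void** tables, int n)
  {
    int i;
    for (i = 0; i < n; ++i)
      free(tables[i]);
    /* Return freed top-most memory, keeping 64 KB for future use. */
    (void)malloc_trim(64 * 1024);
  }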
#if __STD_C
2861 int malloc_trim(size_t pad)
#else
2863 int malloc_trim(pad) size_t pad;
#endif
2866 long top_size; /* Amount of top-most memory */
2867 long extra; /* Amount to release */
2868 char* current_brk; /* address returned by pre-check sbrk call */
2869 char* new_brk; /* address returned by negative sbrk call */
2871 unsigned long pagesz = malloc_getpagesize;
2873 top_size = chunksize(top);
2874 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2876 if (extra < (long)pagesz) /* Not enough memory to release */
return 0;
2881 /* Test to make sure no one else called sbrk */
2882 current_brk = (char*)(MORECORE (0));
2883 if (current_brk != (char*)(top) + top_size)
2884 return 0; /* Apparently we don't own memory; must fail */
2888 new_brk = (char*)(MORECORE (-extra));
2890 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
2892 /* Try to figure out what we have */
2893 current_brk = (char*)(MORECORE (0));
2894 top_size = current_brk - (char*)top;
2895 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
2897 sbrked_mem = current_brk - sbrk_base;
2898 set_head(top, top_size | PREV_INUSE);
2906 /* Success. Adjust top accordingly. */
2907 set_head(top, (top_size - extra) | PREV_INUSE);
2908 sbrked_mem -= extra;
return 1;
2921 This routine tells you how many bytes you can actually use in an
2922 allocated chunk, which may be more than you requested (although
2923 often not). You can use this many bytes without worrying about
2924 overwriting other allocated objects. Not a particularly great
2925 programming practice, but still sometimes useful.
#if __STD_C
2930 size_t malloc_usable_size(Void_t* mem)
#else
2932 size_t malloc_usable_size(mem) Void_t* mem;
#endif
if (mem == 0) return 0;
p = mem2chunk(mem);
2941 if(!chunk_is_mmapped(p))
2943 if (!inuse(p)) return 0;
2944 check_inuse_chunk(p);
2945 return chunksize(p) - SIZE_SZ;
2947 return chunksize(p) - 2*SIZE_SZ;
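A usage sketch exploiting the slack instead of reallocating (the extern
declaration stands in for however the program sees this file's API):

  #include <stdlib.h>

  extern size_t malloc_usable_size(void* p);

  int main()
  {
    char* buf;
    size_t cap;
    buf = (char*)malloc(100);
    if (buf == 0) return 1;
    cap = malloc_usable_size(buf); /* >= 100 due to rounding    */
    buf[cap - 1] = 0;              /* every usable byte is safe */
    free(buf);
    return 0;
  }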
2954 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
2956 static void malloc_update_mallinfo()
2965 INTERNAL_SIZE_T avail = chunksize(top);
2966 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
2968 for (i = 1; i < NAV; ++i)
2971 for (p = last(b); p != b; p = p->bk)
2974 check_free_chunk(p);
2975 for (q = next_chunk(p);
2976 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
q = next_chunk(q))
2978 check_inuse_chunk(q);
2980 avail += chunksize(p);
navail++;
2985 current_mallinfo.ordblks = navail;
2986 current_mallinfo.uordblks = sbrked_mem - avail;
2987 current_mallinfo.fordblks = avail;
2988 current_mallinfo.hblks = n_mmaps;
2989 current_mallinfo.hblkhd = mmapped_mem;
2990 current_mallinfo.keepcost = chunksize(top);
3000 Prints on stderr the amount of space obtained from the system (both
3001 via sbrk and mmap), the maximum amount (which may be more than
3002 current if malloc_trim and/or munmap got called), the maximum
3003 number of simultaneous mmap regions used, and the current number
3004 of bytes allocated via malloc (or realloc, etc) but not yet
3005 freed. (Note that this is the number of bytes allocated, not the
3006 number requested. It will be larger than the number requested
3007 because of alignment and bookkeeping overhead.)
3013 malloc_update_mallinfo();
3014 fprintf(stderr, "max system bytes = %10u\n",
3015 (unsigned int)(max_total_mem));
3016 fprintf(stderr, "system bytes = %10u\n",
3017 (unsigned int)(sbrked_mem + mmapped_mem));
3018 fprintf(stderr, "in use bytes = %10u\n",
3019 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3021 fprintf(stderr, "max mmap regions = %10u\n",
3022 (unsigned int)max_n_mmaps);
3027 mallinfo returns a copy of the updated current mallinfo.
3030 struct mallinfo mALLINFo()
3032 malloc_update_mallinfo();
3033 return current_mallinfo;
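A caller might report the figures like this (a usage sketch; it assumes
the struct mallinfo declaration from earlier in this file is in scope):

  #include <stdio.h>

  void report_heap()
  {
    struct mallinfo mi = mallinfo();
    fprintf(stderr, "free chunks:     %d\n", mi.ordblks);
    fprintf(stderr, "in-use bytes:    %d\n", mi.uordblks);
    fprintf(stderr, "free bytes:      %d\n", mi.fordblks);
    fprintf(stderr, "mmapped regions: %d\n", mi.hblks);
    fprintf(stderr, "trimmable bytes: %d\n", mi.keepcost);
  }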
3042 mallopt is the general SVID/XPG interface to tunable parameters.
3043 The format is to provide a (parameter-number, parameter-value) pair.
3044 mallopt then sets the corresponding parameter to the argument
3045 value if it can (i.e., so long as the value is meaningful),
3046 and returns 1 if successful else 0.
3048 See descriptions of tunable parameters above.
#if __STD_C
3053 int mALLOPt(int param_number, int value)
#else
3055 int mALLOPt(param_number, value) int param_number; int value;
#endif
3058 switch(param_number)
3060 case M_TRIM_THRESHOLD:
3061 trim_threshold = value; return 1;
case M_TOP_PAD:
3063 top_pad = value; return 1;
3064 case M_MMAP_THRESHOLD:
3065 mmap_threshold = value; return 1;
case M_MMAP_MAX:
#if HAVE_MMAP
3068 n_mmaps_max = value; return 1;
#else
3070 if (value != 0) return 0; else n_mmaps_max = value; return 1;
#endif
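For example, a long-running program that prefers to keep memory rather
than repeatedly trim and re-extend might do (a usage sketch; the values
are illustrative, and the M_* constants are the ones defined earlier in
this file):

  void tune_allocator()
  {
    mallopt(M_TRIM_THRESHOLD, 1024 * 1024); /* trim only past 1 MB      */
    mallopt(M_TOP_PAD,        128 * 1024);  /* keep extra sbrk slack    */
    mallopt(M_MMAP_THRESHOLD, 256 * 1024);  /* mmap only large requests */
  }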
3082 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3083 * Added pvalloc, as recommended by H.J. Lu
3084 * Added 64bit pointer support mainly from Wolfram Gloger
3085 * Added anonymously donated WIN32 sbrk emulation
3086 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3087 * malloc_extend_top: fix mask error that caused wastage after foreign sbrks
3089 * Add linux mremap support code from H.J. Lu
3091 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3092 * Integrated most documentation with the code.
3093 * Add support for mmap, with help from
3094 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3095 * Use last_remainder in more cases.
3096 * Pack bins using idea from colin@nyx10.cs.du.edu
3097 * Use ordered bins instead of best-fit threshold
3098 * Eliminate block-local decls to simplify tracing and debugging.
3099 * Support another case of realloc via move into top
3100 * Fix error occurring when initial sbrk_base is not word-aligned.
3101 * Rely on page size for units instead of SBRK_UNIT to
3102 avoid surprises about sbrk alignment conventions.
3103 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3104 (raymond@es.ele.tue.nl) for the suggestion.
3105 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3106 * More precautions for cases where other routines call sbrk,
3107 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3108 * Added macros etc., allowing use in linux libc from
3109 H.J. Lu (hjl@gnu.ai.mit.edu)
3110 * Inverted this history list
3112 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3113 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3114 * Removed all preallocation code since under current scheme
3115 the work required to undo bad preallocations exceeds
3116 the work saved in good cases for most test programs.
3117 * No longer use return list or unconsolidated bins, since no
3118 scheme using them consistently outperforms those that don't,
3119 given the above changes.
3120 * Use best fit for very large chunks to prevent some worst-cases.
3121 * Added some support for debugging
3123 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3124 * Removed footers when chunks are in use. Thanks to
3125 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3127 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3128 * Added malloc_trim, with help from Wolfram Gloger
3129 (wmglo@Dent.MED.Uni-Muenchen.DE).
3131 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3133 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3134 * realloc: try to expand in both directions
3135 * malloc: swap order of clean-bin strategy;
3136 * realloc: only conditionally expand backwards
3137 * Try not to scavenge used bins
3138 * Use bin counts as a guide to preallocation
3139 * Occasionally bin return list chunks in first scan
3140 * Add a few optimizations from colin@nyx10.cs.du.edu
3142 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3143 * faster bin computation & slightly different binning
3144 * merged all consolidations to one part of malloc proper
3145 (eliminating old malloc_find_space & malloc_clean_bin)
3146 * Scan 2 returns chunks (not just 1)
3147 * Propagate failure in realloc if malloc returns 0
3148 * Add stuff to allow compilation on non-ANSI compilers
3149 from kpv@research.att.com
3151 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3152 * removed potential for odd address access in prev_chunk
3153 * removed dependency on getpagesize.h
3154 * misc cosmetics and a bit more internal documentation
3155 * anticosmetics: mangled names in macros to evade debugger strangeness
3156 * tested on sparc, hp-700, dec-mips, rs6000
3157 with gcc & native cc (hp, dec only) allowing
3158 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3160 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3161 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3162 structure of old version, but most details differ.)