1 /*
2   This is a version (aka dlmalloc) of malloc/free/realloc written by
3   Doug Lea and released to the public domain.  Use, modify, and
4   redistribute this code without permission or acknowledgement in any
5   way you wish.  Send questions, comments, complaints, performance
6   data, etc to dl@cs.oswego.edu
7 
8 * VERSION 2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
9 
10    Note: There may be an updated version of this malloc obtainable at
11            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
12          Check before installing!
13 
14 * Quickstart
15 
16   This library is all in one file to simplify the most common usage:
17   ftp it, compile it (-O), and link it into another program. All
18   of the compile-time options default to reasonable values for use on
19   most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
20   You might later want to step through various compile-time and dynamic
21   tuning options.
22 
23   For convenience, an include file for code using this malloc is at:
24      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.7.1.h
25   You don't really need this .h file unless you call functions not
26   defined in your system include files.  The .h file contains only the
27   excerpts from this file needed for using this malloc on ANSI C/C++
28   systems, so long as you haven't changed compile-time options about
29   naming and tuning parameters.  If you do, then you can create your
30   own malloc.h that does include all settings by cutting at the point
31   indicated below.
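
  As a minimal illustration (a sketch only; the dl-prefixed names assume
  the file was compiled with -DUSE_DL_PREFIX so the system malloc is left
  alone):

    #include <stddef.h>
    void* dlmalloc(size_t);     // prototypes normally come from the .h file above
    void  dlfree(void*);

    int main(void) {
      char* p = (char*) dlmalloc(100);   // ask for 100 bytes
      if (p == 0) return 1;              // allocation failed
      p[0] = 'x';
      dlfree(p);
      return 0;
    }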
32 
33 * Why use this malloc?
34 
35   This is not the fastest, most space-conserving, most portable, or
36   most tunable malloc ever written. However it is among the fastest
37   while also being among the most space-conserving, portable and tunable.
38   Consistent balance across these factors results in a good general-purpose
39   allocator for malloc-intensive programs.
40 
41   The main properties of the algorithms are:
42   * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
43     with ties normally decided via FIFO (i.e. least recently used).
44   * For small (<= 64 bytes by default) requests, it is a caching
45     allocator, that maintains pools of quickly recycled chunks.
46   * In between, and for combinations of large and small requests, it does
47     the best it can trying to meet both goals at once.
48   * For very large requests (>= 128KB by default), it relies on system
49     memory mapping facilities, if supported.
50 
51   For a longer but slightly out of date high-level description, see
52      http://gee.cs.oswego.edu/dl/html/malloc.html
53 
  You may already, by default, be using a C library containing a malloc
  that is based on some version of this malloc (for example in
56   linux). You might still want to use the one in this file in order to
57   customize settings or to avoid overheads associated with library
58   versions.
59 
60 * Contents, described in more detail in "description of public routines" below.
61 
62   Standard (ANSI/SVID/...)  functions:
63     malloc(size_t n);
64     calloc(size_t n_elements, size_t element_size);
65     free(Void_t* p);
66     realloc(Void_t* p, size_t n);
67     memalign(size_t alignment, size_t n);
68     valloc(size_t n);
69     mallinfo()
70     mallopt(int parameter_number, int parameter_value)
71 
72   Additional functions:
73     independent_calloc(size_t n_elements, size_t size, Void_t* chunks[]);
74     independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
75     pvalloc(size_t n);
76     cfree(Void_t* p);
77     malloc_trim(size_t pad);
78     malloc_usable_size(Void_t* p);
79     malloc_stats();
80 
81 * Vital statistics:
82 
83   Supported pointer representation:       4 or 8 bytes
84   Supported size_t  representation:       4 or 8 bytes
85        Note that size_t is allowed to be 4 bytes even if pointers are 8.
86        You can adjust this by defining INTERNAL_SIZE_T
87 
88   Alignment:                              2 * sizeof(size_t) (default)
       (i.e., 8-byte alignment with 4-byte size_t). This suffices for
90        nearly all current machines and C compilers. However, you can
91        define MALLOC_ALIGNMENT to be wider than this if necessary.
92 
93   Minimum overhead per allocated chunk:   4 or 8 bytes
94        Each malloced chunk has a hidden word of overhead holding size
95        and status information.
96 
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
99 
100        When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
101        ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
102        needed; 4 (8) for a trailing size field and 8 (16) bytes for
103        free list pointers. Thus, the minimum allocatable size is
104        16/24/32 bytes.
105 
106        Even a request for zero bytes (i.e., malloc(0)) returns a
107        pointer to something of the minimum allocatable size.
108 
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
111        to the minimum size, except for requests >= mmap_threshold that
112        are serviced via mmap(), where the worst case wastage is 2 *
113        sizeof(size_t) bytes plus the remainder from a system page (the
114        minimal mmap unit); typically 4096 or 8192 bytes.
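
       As a worked illustration (assuming 4-byte size_t and the default
       8-byte alignment): malloc(25) needs 25 + 4 bytes of overhead = 29
       bytes, which rounds up to a 32-byte chunk, so at most 7 bytes are
       wasted and malloc_usable_size would typically report 28.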
115 
116   Maximum allocated size:  4-byte size_t: 2^32 minus about two pages
117                            8-byte size_t: 2^64 minus about two pages
118 
119        It is assumed that (possibly signed) size_t values suffice to
120        represent chunk sizes. `Possibly signed' is due to the fact
121        that `size_t' may be defined on a system as either a signed or
122        an unsigned type. The ISO C standard says that it must be
123        unsigned, but a few systems are known not to adhere to this.
124        Additionally, even when size_t is unsigned, sbrk (which is by
125        default used to obtain memory from system) accepts signed
126        arguments, and may not be able to handle size_t-wide arguments
127        with negative sign bit.  Generally, values that would
128        appear as negative after accounting for overhead and alignment
129        are supported only via mmap(), which does not have this
130        limitation.
131 
132        Requests for sizes outside the allowed range will perform an optional
       failure action and then return null. (Requests may also
       fail because a system is out of memory.)
135 
136   Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
137 
138        When USE_MALLOC_LOCK is defined, wrappers are created to
139        surround every public call with either a pthread mutex or
140        a win32 spinlock (depending on WIN32). This is not
141        especially fast, and can be a major bottleneck.
142        It is designed only to provide minimal protection
143        in concurrent environments, and to provide a basis for
144        extensions.  If you are using malloc in a concurrent program,
145        you would be far better off obtaining ptmalloc, which is
146        derived from a version of this malloc, and is well-tuned for
147        concurrent programs. (See http://www.malloc.de) Note that
       even when USE_MALLOC_LOCK is defined, you can guarantee
149        full thread-safety only if no threads acquire memory through
150        direct calls to MORECORE or other system-level allocators.
151 
152   Compliance: I believe it is compliant with the 1997 Single Unix Specification
153        (See http://www.opennc.org). Also SVID/XPG, ANSI C, and probably
154        others as well.
155 
156 * Synopsis of compile-time options:
157 
158     People have reported using previous versions of this malloc on all
159     versions of Unix, sometimes by tweaking some of the defines
160     below. It has been tested most extensively on Solaris and
161     Linux. It is also reported to work on WIN32 platforms.
162     People also report using it in stand-alone embedded systems.
163 
164     The implementation is in straight, hand-tuned ANSI C.  It is not
165     at all modular. (Sorry!)  It uses a lot of macros.  To be at all
166     usable, this code should be compiled using an optimizing compiler
167     (for example gcc -O3) that can simplify expressions and control
168     paths. (FAQ: some macros import variables as arguments rather than
169     declare locals because people reported that some debuggers
170     otherwise get confused.)
171 
172     OPTION                     DEFAULT VALUE
173 
174     Compilation Environment options:
175 
176     __STD_C                    derived from C compiler defines
177     WIN32                      NOT defined
178     HAVE_MEMCPY                defined
179     USE_MEMCPY                 1 if HAVE_MEMCPY is defined
180     HAVE_MMAP                  defined as 1
181     MMAP_CLEARS                1
182     HAVE_MREMAP                0 unless linux defined
183     malloc_getpagesize         derived from system #includes, or 4096 if not
184     HAVE_USR_INCLUDE_MALLOC_H  NOT defined
185     LACKS_UNISTD_H             NOT defined unless WIN32
186     LACKS_SYS_PARAM_H          NOT defined unless WIN32
187     LACKS_SYS_MMAN_H           NOT defined unless WIN32
188     LACKS_FCNTL_H              NOT defined
189 
190     Changing default word sizes:
191 
192     INTERNAL_SIZE_T            size_t
193     MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
194     PTR_UINT                   unsigned long
195     CHUNK_SIZE_T               unsigned long
196 
197     Configuration and functionality options:
198 
199     USE_DL_PREFIX              NOT defined
200     USE_PUBLIC_MALLOC_WRAPPERS NOT defined
201     USE_MALLOC_LOCK            NOT defined
202     DEBUG                      NOT defined
203     REALLOC_ZERO_BYTES_FREES   NOT defined
204     MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
205     TRIM_FASTBINS              0
206     FIRST_SORTED_BIN_SIZE      512
207 
208     Options for customizing MORECORE:
209 
210     MORECORE                   sbrk
211     MORECORE_CONTIGUOUS        1
212     MORECORE_CANNOT_TRIM       NOT defined
213     MMAP_AS_MORECORE_SIZE      (1024 * 1024)
214 
215     Tuning options that are also dynamically changeable via mallopt:
216 
217     DEFAULT_MXFAST             64
218     DEFAULT_TRIM_THRESHOLD     256 * 1024
219     DEFAULT_TOP_PAD            0
220     DEFAULT_MMAP_THRESHOLD     256 * 1024
221     DEFAULT_MMAP_MAX           65536
222 
223     There are several other #defined constants and macros that you
224     probably don't want to touch unless you are extending or adapting malloc.
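
    For instance, a build for a system without mmap might select options
    on the compiler command line (an illustrative sketch; my_morecore
    stands for a hypothetical user-supplied routine):

      cc -O3 -DUSE_DL_PREFIX -DHAVE_MMAP=0 -DMORECORE=my_morecore -c malloc.c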
225 */
226 
227 /*
228   WIN32 sets up defaults for MS environment and compilers.
229   Otherwise defaults are for unix.
230 */
231 
232 /* #define WIN32 */
233 
234 #ifdef WIN32
235 
236 #define WIN32_LEAN_AND_MEAN
237 #include <windows.h>
238 
239 /* Win32 doesn't supply or need the following headers */
240 /* #define LACKS_UNISTD_H */
241 /* #define LACKS_SYS_PARAM_H */
242 #define LACKS_SYS_MMAN_H
243 
244 /* Use the supplied emulation of sbrk */
245 /* #define MORECORE sbrk */
246 /* #define MORECORE_CONTIGUOUS 1 */
247 /* #define MORECORE_FAILURE    ((void*)(-1)) */
248 
249 /* Use the supplied emulation of mmap and munmap */
250 /* #define HAVE_MMAP 1 */
251 /* #define MUNMAP_FAILURE  (-1) */
252 /* #define MMAP_CLEARS 1 */
253 
254 /* These values don't really matter in windows mmap emulation */
255 #define MAP_PRIVATE 1
256 #define MAP_ANONYMOUS 2
257 #define PROT_READ 1
258 #define PROT_WRITE 2
259 
260 /* Emulation functions defined at the end of this file */
261 
262 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
263 #ifdef USE_MALLOC_LOCK
264 static int slwait(int *sl);
265 static int slrelease(int *sl);
266 #endif
267 
268 static long getpagesize(void);
269 static long getregionsize(void);
270 static void *sbrk(long size);
271 
272 static void vminfo (unsigned long*free, unsigned long*reserved, unsigned long*committed);
273 static int cpuinfo (int whole, unsigned long*kernel, unsigned long*user);
274 
275 #endif
276 
277 /*
278   __STD_C should be nonzero if using ANSI-standard C compiler, a C++
279   compiler, or a C compiler sufficiently close to ANSI to get away
280   with it.
281 */
282 
283 #ifndef __STD_C
#if defined(__STDC__) || defined(__cplusplus)
285 #define __STD_C     1
286 #else
287 #define __STD_C     0
288 #endif
289 #endif /*__STD_C*/
290 
291 
292 /*
293   Void_t* is the pointer type that malloc should say it returns
294 */
295 
296 #ifndef Void_t
297 #if (__STD_C || defined(WIN32))
298 #define Void_t      void
299 #else
300 #define Void_t      char
301 #endif
302 #endif /*Void_t*/
303 
304 #if __STD_C
305 #include <stddef.h>   /* for size_t */
306 #else
307 #include <sys/types.h>
308 #endif
309 
310 #ifdef __cplusplus
311 extern "C" {
312 #endif
313 
314 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
315 
316 /* #define  LACKS_UNISTD_H */
317 
318 #ifndef LACKS_UNISTD_H
319 #include <unistd.h>
320 #endif
321 
322 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
323 
324 /* #define  LACKS_SYS_PARAM_H */
325 
326 
327 #include <stdio.h>    /* needed for malloc_stats */
328 #include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
329 
330 
331 /*
332   Debugging:
333 
334   Because freed chunks may be overwritten with bookkeeping fields, this
335   malloc will often die when freed memory is overwritten by user
336   programs.  This can be very effective (albeit in an annoying way)
337   in helping track down dangling pointers.
338 
339   If you compile with -DDEBUG, a number of assertion checks are
340   enabled that will catch more memory errors. You probably won't be
341   able to make much sense of the actual assertion errors, but they
342   should help you locate incorrectly overwritten memory.  The
343   checking is fairly extensive, and will slow down execution
344   noticeably. Calling malloc_stats or mallinfo with DEBUG set will
345   attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
347   cannot be checked very much automatically.)
348 
349   Setting DEBUG may also be helpful if you are trying to modify
350   this code. The assertions in the check routines spell out in more
351   detail the assumptions and invariants underlying the algorithms.
352 
353   Setting DEBUG does NOT provide an automated mechanism for checking
354   that all accesses to malloced memory stay within their
355   bounds. However, there are several add-ons and adaptations of this
356   or other mallocs available that do this.
357 */
358 
359 #if DEBUG
360 #include <assert.h>
361 #else
362 #define assert(x) ((void)0)
363 #endif
364 
365 /*
366   The unsigned integer type used for comparing any two chunk sizes.
367   This should be at least as wide as size_t, but should not be signed.
368 */
369 
370 #ifndef CHUNK_SIZE_T
371 #define CHUNK_SIZE_T unsigned long
372 #endif
373 
374 /*
  The unsigned integer type used to hold addresses when they are
  manipulated as integers. Except that it is not defined on all
377   systems, intptr_t would suffice.
378 */
379 #ifndef PTR_UINT
380 #define PTR_UINT unsigned long
381 #endif
382 
383 
384 /*
385   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
386   of chunk sizes.
387 
388   The default version is the same as size_t.
389 
390   While not strictly necessary, it is best to define this as an
391   unsigned type, even if size_t is a signed type. This may avoid some
392   artificial size limitations on some systems.
393 
394   On a 64-bit machine, you may be able to reduce malloc overhead by
395   defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
396   expense of not being able to handle more than 2^32 of malloced
397   space. If this limitation is acceptable, you are encouraged to set
398   this unless you are on a platform requiring 16byte alignments. In
399   this case the alignment requirements turn out to negate any
400   potential advantages of decreasing size_t word size.
401 
402   Implementors: Beware of the possible combinations of:
403      - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
404        and might be the same width as int or as long
     - size_t might have a different width and signedness than INTERNAL_SIZE_T
406      - int and long might be 32 or 64 bits, and might be the same width
407   To deal with this, most comparisons and difference computations
408   among INTERNAL_SIZE_Ts should cast them to CHUNK_SIZE_T, being
409   aware of the fact that casting an unsigned int to a wider long does
410   not sign-extend. (This also makes checking for negative numbers
411   awkward.) Some of these casts result in harmless compiler warnings
412   on some systems.
413 */
414 
415 #ifndef INTERNAL_SIZE_T
416 #define INTERNAL_SIZE_T size_t
417 #endif
418 
419 /* The corresponding word size */
420 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
421 
422 
423 
424 /*
425   MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
426   It must be a power of two at least 2 * SIZE_SZ, even on machines
427   for which smaller alignments would suffice. It may be defined as
428   larger than this though. Note however that code and data structures
429   are optimized for the case of 8-byte alignment.
430 */
431 
432 
433 #ifndef MALLOC_ALIGNMENT
434 #define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
435 #endif
436 
437 /* The corresponding bit mask value */
438 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
439 
440 
441 
442 /*
443   REALLOC_ZERO_BYTES_FREES should be set if a call to
444   realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, because this malloc
  returns a unique pointer for malloc(0), realloc(p, 0) does too.
447 */
448 
449 /*   #define REALLOC_ZERO_BYTES_FREES */
450 
451 /*
452   TRIM_FASTBINS controls whether free() of a very small chunk can
453   immediately lead to trimming. Setting to true (1) can reduce memory
454   footprint, but will almost always slow down programs that use a lot
455   of small chunks.
456 
457   Define this only if you are willing to give up some speed to more
458   aggressively reduce system-level memory footprint when releasing
459   memory in programs that use many small chunks.  You can get
460   essentially the same effect by setting MXFAST to 0, but this can
461   lead to even greater slowdowns in programs using many small chunks.
  TRIM_FASTBINS is an in-between compile-time option that prevents
  only those chunks bordering topmost memory from being placed in
  fastbins.
465 */
466 
467 #ifndef TRIM_FASTBINS
468 #define TRIM_FASTBINS  0
469 #endif
470 
471 
472 /*
473   USE_DL_PREFIX will prefix all public routines with the string 'dl'.
474   This is necessary when you only want to use this malloc in one part
475   of a program, using your regular system malloc elsewhere.
476 */
477 
478 /* #define USE_DL_PREFIX */
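
/*
  For example (an illustrative sketch, not part of this malloc): compiling
  this file with -DUSE_DL_PREFIX lets one part of a program use this
  allocator via the dl-prefixed names while the rest keeps the system
  malloc:

    void* p = dlmalloc(1024);   // served by this file
    void* q = malloc(1024);     // served by the system C library
    dlfree(p);
    free(q);
*/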
479 
480 
481 /*
482   USE_MALLOC_LOCK causes wrapper functions to surround each
483   callable routine with pthread mutex lock/unlock.
484 
485   USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
486 */
487 
488 
489 /* #define USE_MALLOC_LOCK */
490 
491 
492 /*
493   If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
494   actually a wrapper function that first calls MALLOC_PREACTION, then
495   calls the internal routine, and follows it with
496   MALLOC_POSTACTION. This is needed for locking, but you can also use
497   this, without USE_MALLOC_LOCK, for purposes of interception,
498   instrumentation, etc. It is a sad fact that using wrappers often
499   noticeably degrades performance of malloc-intensive programs.
500 */
501 
502 #ifdef USE_MALLOC_LOCK
503 #define USE_PUBLIC_MALLOC_WRAPPERS
504 #else
505 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
506 #endif
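
/*
  As an illustrative sketch (these hook definitions are not part of this
  file), MALLOC_PREACTION and MALLOC_POSTACTION can be predefined, along
  with USE_PUBLIC_MALLOC_WRAPPERS, to add simple instrumentation; as
  described at the end of this file, they must evaluate to 0 on success:

    static unsigned long malloc_call_count = 0;
    #define MALLOC_PREACTION   (malloc_call_count++, 0)
    #define MALLOC_POSTACTION  (0)
*/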
507 
508 
509 /*
510    Two-phase name translation.
511    All of the actual routines are given mangled names.
512    When wrappers are used, they become the public callable versions.
   When USE_DL_PREFIX is used, the callable names are prefixed.
514 */
515 
516 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
517 #define cALLOc      public_cALLOc
518 #define fREe        public_fREe
519 #define cFREe       public_cFREe
520 #define mALLOc      public_mALLOc
521 #define mEMALIGn    public_mEMALIGn
522 #define rEALLOc     public_rEALLOc
523 #define vALLOc      public_vALLOc
524 #define pVALLOc     public_pVALLOc
525 #define mALLINFo    public_mALLINFo
526 #define mALLOPt     public_mALLOPt
527 #define mTRIm       public_mTRIm
528 #define mSTATs      public_mSTATs
529 #define mUSABLe     public_mUSABLe
530 #define iCALLOc     public_iCALLOc
531 #define iCOMALLOc   public_iCOMALLOc
532 #endif
533 
534 #ifdef USE_DL_PREFIX
535 #define public_cALLOc    dlcalloc
536 #define public_fREe      dlfree
537 #define public_cFREe     dlcfree
538 #define public_mALLOc    dlmalloc
539 #define public_mEMALIGn  dlmemalign
540 #define public_rEALLOc   dlrealloc
541 #define public_vALLOc    dlvalloc
542 #define public_pVALLOc   dlpvalloc
543 #define public_mALLINFo  dlmallinfo
544 #define public_mALLOPt   dlmallopt
545 #define public_mTRIm     dlmalloc_trim
546 #define public_mSTATs    dlmalloc_stats
547 #define public_mUSABLe   dlmalloc_usable_size
548 #define public_iCALLOc   dlindependent_calloc
549 #define public_iCOMALLOc dlindependent_comalloc
550 #else /* USE_DL_PREFIX */
551 #define public_cALLOc    calloc
552 #define public_fREe      free
553 #define public_cFREe     cfree
554 #define public_mALLOc    malloc
555 #define public_mEMALIGn  memalign
556 #define public_rEALLOc   realloc
557 #define public_vALLOc    valloc
558 #define public_pVALLOc   pvalloc
559 #define public_mALLINFo  mallinfo
560 #define public_mALLOPt   mallopt
561 #define public_mTRIm     malloc_trim
562 #define public_mSTATs    malloc_stats
563 #define public_mUSABLe   malloc_usable_size
564 #define public_iCALLOc   independent_calloc
565 #define public_iCOMALLOc independent_comalloc
566 #endif /* USE_DL_PREFIX */
567 
568 
569 /*
570   HAVE_MEMCPY should be defined if you are not otherwise using
571   ANSI STD C, but still have memcpy and memset in your C library
572   and want to use them in calloc and realloc. Otherwise simple
573   macro versions are defined below.
574 
575   USE_MEMCPY should be defined as 1 if you actually want to
576   have memset and memcpy called. People report that the macro
577   versions are faster than libc versions on some systems.
578 
579   Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
580   (of <= 36 bytes) are manually unrolled in realloc and calloc.
581 */
582 
583 #define HAVE_MEMCPY
584 
585 #ifndef USE_MEMCPY
586 #ifdef HAVE_MEMCPY
587 #define USE_MEMCPY 1
588 #else
589 #define USE_MEMCPY 0
590 #endif
591 #endif
592 
593 
594 #if (__STD_C || defined(HAVE_MEMCPY))
595 
596 #ifdef WIN32
597 /* On Win32 memset and memcpy are already declared in windows.h */
598 #else
599 #if __STD_C
600 void* memset(void*, int, size_t);
601 void* memcpy(void*, const void*, size_t);
602 #else
603 Void_t* memset();
604 Void_t* memcpy();
605 #endif
606 #endif
607 #endif
608 
609 /*
  MALLOC_FAILURE_ACTION is the action to take before "return 0" when
  malloc is unable to return memory, either because memory is
  exhausted or because of illegal arguments.
613 
614   By default, sets errno if running on STD_C platform, else does nothing.
615 */
616 
617 #ifndef MALLOC_FAILURE_ACTION
618 #if __STD_C
619 #define MALLOC_FAILURE_ACTION \
620    errno = ENOMEM;
621 
622 #else
623 #define MALLOC_FAILURE_ACTION
624 #endif
625 #endif
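
/*
  For example (illustrative only), a program that wants a diagnostic on
  every failed request could compile with something like:

    #define MALLOC_FAILURE_ACTION \
       fprintf(stderr, "malloc: request failed\n");
*/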
626 
627 /*
628   MORECORE-related declarations. By default, rely on sbrk
629 */
630 
631 
632 #ifdef LACKS_UNISTD_H
633 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
634 #if __STD_C
635 extern Void_t*     sbrk(ptrdiff_t);
636 #else
637 extern Void_t*     sbrk();
638 #endif
639 #endif
640 #endif
641 
642 /*
643   MORECORE is the name of the routine to call to obtain more memory
644   from the system.  See below for general guidance on writing
645   alternative MORECORE functions, as well as a version for WIN32 and a
646   sample version for pre-OSX macos.
647 */
648 
649 #ifndef MORECORE
650 #define MORECORE sbrk
651 #endif
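
/*
  As an illustrative sketch (not part of this file), a replacement
  MORECORE for a system without sbrk can hand out pieces of a static
  arena, returning MORECORE_FAILURE when it runs out; consecutive
  positive requests return contiguous increasing addresses, as assumed
  when MORECORE_CONTIGUOUS is 1:

    #define MY_HEAP_SIZE (1024 * 1024)
    static char my_heap[MY_HEAP_SIZE];
    static size_t my_heap_top = 0;

    void* my_morecore(ptrdiff_t increment) {
      if (increment <= 0)                     // sbrk(0) returns the current break
        return (void*)(my_heap + my_heap_top);
      if (my_heap_top + (size_t)increment > MY_HEAP_SIZE)
        return (void*)MORECORE_FAILURE;       // out of arena space
      my_heap_top += (size_t)increment;
      return (void*)(my_heap + my_heap_top - (size_t)increment);
    }

  It would then be configured with -DMORECORE=my_morecore and, since the
  arena can never be shrunk, -DMORECORE_CANNOT_TRIM.
*/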
652 
653 /*
654   MORECORE_FAILURE is the value returned upon failure of MORECORE
655   as well as mmap. Since it cannot be an otherwise valid memory address,
656   and must reflect values of standard sys calls, you probably ought not
657   try to redefine it.
658 */
659 
660 #ifndef MORECORE_FAILURE
661 #define MORECORE_FAILURE (-1)
662 #endif
663 
664 /*
665   If MORECORE_CONTIGUOUS is true, take advantage of fact that
666   consecutive calls to MORECORE with positive arguments always return
667   contiguous increasing addresses.  This is true of unix sbrk.  Even
668   if not defined, when regions happen to be contiguous, malloc will
669   permit allocations spanning regions obtained from different
670   calls. But defining this when applicable enables some stronger
671   consistency checks and space efficiencies.
672 */
673 
674 #ifndef MORECORE_CONTIGUOUS
675 #define MORECORE_CONTIGUOUS 1
676 #endif
677 
678 /*
679   Define MORECORE_CANNOT_TRIM if your version of MORECORE
680   cannot release space back to the system when given negative
681   arguments. This is generally necessary only if you are using
682   a hand-crafted MORECORE function that cannot handle negative arguments.
683 */
684 
685 /* #define MORECORE_CANNOT_TRIM */
686 
687 
688 /*
689   Define HAVE_MMAP as true to optionally make malloc() use mmap() to
690   allocate very large blocks.  These will be returned to the
691   operating system immediately after a free(). Also, if mmap
692   is available, it is used as a backup strategy in cases where
693   MORECORE fails to provide space from system.
694 
695   This malloc is best tuned to work with mmap for large requests.
696   If you do not have mmap, operations involving very large chunks (1MB
697   or so) may be slower than you'd like.
698 */
699 
700 #ifndef HAVE_MMAP
701   /* #define HAVE_MMAP 1 */
702 #endif
703 
704 #if HAVE_MMAP
705 /*
706    Standard unix mmap using /dev/zero clears memory so calloc doesn't
707    need to.
708 */
709 
710 #ifndef MMAP_CLEARS
711 #define MMAP_CLEARS 1
712 #endif
713 
714 #else /* no mmap */
715 #ifndef MMAP_CLEARS
716 #define MMAP_CLEARS 0
717 #endif
718 #endif
719 
720 
721 /*
722    MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
723    sbrk fails, and mmap is used as a backup (which is done only if
724    HAVE_MMAP).  The value must be a multiple of page size.  This
725    backup strategy generally applies only when systems have "holes" in
726    address space, so sbrk cannot perform contiguous expansion, but
727    there is still space available on system.  On systems for which
728    this is known to be useful (i.e. most linux kernels), this occurs
729    only when programs allocate huge amounts of memory.  Between this,
730    and the fact that mmap regions tend to be limited, the size should
731    be large, to avoid too many mmap calls and thus avoid running out
732    of kernel resources.
733 */
734 
735 #ifndef MMAP_AS_MORECORE_SIZE
736 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
737 #endif
738 
739 /*
740   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
741   large blocks.  This is currently only possible on Linux with
742   kernel versions newer than 1.3.77.
743 */
744 
745 #ifndef HAVE_MREMAP
746 #ifdef linux
747 #define HAVE_MREMAP 1
748 #else
749 #define HAVE_MREMAP 0
750 #endif
751 
#endif /* HAVE_MREMAP */
753 
754 
755 /*
756   The system page size. To the extent possible, this malloc manages
757   memory from the system in page-size units.  Note that this value is
758   cached during initialization into a field of malloc_state. So even
759   if malloc_getpagesize is a function, it is only called once.
760 
761   The following mechanics for getpagesize were adapted from bsd/gnu
762   getpagesize.h. If none of the system-probes here apply, a value of
763   4096 is used, which should be OK: If they don't apply, then using
764   the actual value probably doesn't impact performance.
765 */
766 
767 
768 #ifndef malloc_getpagesize
769 
770 #ifndef LACKS_UNISTD_H
771 #  include <unistd.h>
772 #endif
773 
774 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
775 #    ifndef _SC_PAGE_SIZE
776 #      define _SC_PAGE_SIZE _SC_PAGESIZE
777 #    endif
778 #  endif
779 
780 #  ifdef _SC_PAGE_SIZE
781 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
782 #  else
783 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
784        extern size_t getpagesize();
785 #      define malloc_getpagesize getpagesize()
786 #    else
787 #      ifdef WIN32 /* use supplied emulation of getpagesize */
788 #        define malloc_getpagesize getpagesize()
789 #      else
790 #        ifndef LACKS_SYS_PARAM_H
791 #          include <sys/param.h>
792 #        endif
793 #        ifdef EXEC_PAGESIZE
794 #          define malloc_getpagesize EXEC_PAGESIZE
795 #        else
796 #          ifdef NBPG
797 #            ifndef CLSIZE
798 #              define malloc_getpagesize NBPG
799 #            else
800 #              define malloc_getpagesize (NBPG * CLSIZE)
801 #            endif
802 #          else
803 #            ifdef NBPC
804 #              define malloc_getpagesize NBPC
805 #            else
806 #              ifdef PAGESIZE
807 #                define malloc_getpagesize PAGESIZE
808 #              else /* just guess */
809 #                define malloc_getpagesize (4096)
810 #              endif
811 #            endif
812 #          endif
813 #        endif
814 #      endif
815 #    endif
816 #  endif
817 #endif
818 
819 /*
820   This version of malloc supports the standard SVID/XPG mallinfo
821   routine that returns a struct containing usage properties and
822   statistics. It should work on any SVID/XPG compliant system that has
823   a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
824   install such a thing yourself, cut out the preliminary declarations
825   as described above and below and save them in a malloc.h file. But
826   there's no compelling reason to bother to do this.)
827 
828   The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields that are not even meaningful in this version of
  malloc.  These fields are instead filled by mallinfo() with
832   other numbers that might be of interest.
833 
834   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
835   /usr/include/malloc.h file that includes a declaration of struct
836   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
837   version is declared below.  These must be precisely the same for
838   mallinfo() to work.  The original SVID version of this struct,
839   defined on most systems with mallinfo, declares all fields as
  ints. But some others define them as unsigned long. If your system
841   defines the fields using a type of different width than listed here,
842   you must #include your system version and #define
843   HAVE_USR_INCLUDE_MALLOC_H.
844 */
845 
846 /* #define HAVE_USR_INCLUDE_MALLOC_H */
847 
848 #ifdef HAVE_USR_INCLUDE_MALLOC_H
849 #include "/usr/include/malloc.h"
850 #else
851 
852 /* SVID2/XPG mallinfo structure */
853 
854 struct mallinfo {
855   int arena;    /* non-mmapped space allocated from system */
856   int ordblks;  /* number of free chunks */
857   int smblks;   /* number of fastbin blocks */
858   int hblks;    /* number of mmapped regions */
859   int hblkhd;   /* space in mmapped regions */
860   int usmblks;  /* maximum total allocated space */
861   int fsmblks;  /* space available in freed fastbin blocks */
862   int uordblks; /* total allocated space */
863   int fordblks; /* total free space */
864   int keepcost; /* top-most, releasable (via malloc_trim) space */
865 };
866 
867 /*
868   SVID/XPG defines four standard parameter numbers for mallopt,
869   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
870   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
871   so setting them has no effect. But this malloc also supports other
872   options in mallopt described below.
873 */
874 #endif
875 
876 
877 /* ---------- description of public routines ------------ */
878 
879 /*
880   malloc(size_t n)
881   Returns a pointer to a newly allocated chunk of at least n bytes, or null
882   if no space is available. Additionally, on failure, errno is
883   set to ENOMEM on ANSI C systems.
884 
  If n is zero, malloc returns a minimum-sized chunk. (The minimum
886   size is 16 bytes on most 32bit systems, and 24 or 32 bytes on 64bit
887   systems.)  On most systems, size_t is an unsigned type, so calls
888   with negative arguments are interpreted as requests for huge amounts
889   of space, which will often fail. The maximum supported value of n
890   differs across systems, but is in all cases less than the maximum
891   representable value of a size_t.
892 */
893 #if __STD_C
894 Void_t*  public_mALLOc(size_t);
895 #else
896 Void_t*  public_mALLOc();
897 #endif
898 
899 /*
900   free(Void_t* p)
901   Releases the chunk of memory pointed to by p, that had been previously
902   allocated using malloc or a related routine such as realloc.
903   It has no effect if p is null. It can have arbitrary (i.e., bad!)
904   effects if p has already been freed.
905 
  Unless disabled (using mallopt), freeing very large spaces will,
  when possible, automatically trigger operations that give
908   back unused memory to the system, thus reducing program footprint.
909 */
910 #if __STD_C
911 void     public_fREe(Void_t*);
912 #else
913 void     public_fREe();
914 #endif
915 
916 /*
917   calloc(size_t n_elements, size_t element_size);
918   Returns a pointer to n_elements * element_size bytes, with all locations
919   set to zero.
920 */
921 #if __STD_C
922 Void_t*  public_cALLOc(size_t, size_t);
923 #else
924 Void_t*  public_cALLOc();
925 #endif
926 
927 /*
928   realloc(Void_t* p, size_t n)
929   Returns a pointer to a chunk of size n that contains the same data
930   as does chunk p up to the minimum of (n, p's size) bytes, or null
931   if no space is available.
932 
933   The returned pointer may or may not be the same as p. The algorithm
934   prefers extending p when possible, otherwise it employs the
935   equivalent of a malloc-copy-free sequence.
936 
937   If p is null, realloc is equivalent to malloc.
938 
939   If space is not available, realloc returns null, errno is set (if on
940   ANSI) and p is NOT freed.
941 
  If n is for fewer bytes than already held by p, the newly unused
943   space is lopped off and freed if possible.  Unless the #define
944   REALLOC_ZERO_BYTES_FREES is set, realloc with a size argument of
945   zero (re)allocates a minimum-sized chunk.
946 
947   Large chunks that were internally obtained via mmap will always
948   be reallocated using malloc-copy-free sequences unless
949   the system supports MREMAP (currently only linux).
950 
951   The old unix realloc convention of allowing the last-free'd chunk
952   to be used as an argument to realloc is not supported.
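
  For example, since a failed realloc leaves p intact, a safe growth
  pattern is (p and newsize being the caller's existing pointer and
  desired size):

    void* q = realloc(p, newsize);
    if (q == 0) {
      // handle the failure; p is still valid and must eventually be freed
    }
    else
      p = q;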
953 */
954 #if __STD_C
955 Void_t*  public_rEALLOc(Void_t*, size_t);
956 #else
957 Void_t*  public_rEALLOc();
958 #endif
959 
960 /*
961   memalign(size_t alignment, size_t n);
962   Returns a pointer to a newly allocated chunk of n bytes, aligned
963   in accord with the alignment argument.
964 
965   The alignment argument should be a power of two. If the argument is
966   not a power of two, the nearest greater power is used.
967   8-byte alignment is guaranteed by normal malloc calls, so don't
968   bother calling memalign with an argument of 8 or less.
969 
970   Overreliance on memalign is a sure way to fragment space.
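
  For instance:

    // a hypothetical 64-byte-aligned buffer, e.g. for cache-line alignment
    void* buf = memalign(64, 1000);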
971 */
972 #if __STD_C
973 Void_t*  public_mEMALIGn(size_t, size_t);
974 #else
975 Void_t*  public_mEMALIGn();
976 #endif
977 
978 /*
979   valloc(size_t n);
980   Equivalent to memalign(pagesize, n), where pagesize is the page
981   size of the system. If the pagesize is unknown, 4096 is used.
982 */
983 #if __STD_C
984 Void_t*  public_vALLOc(size_t);
985 #else
986 Void_t*  public_vALLOc();
987 #endif
988 
989 
990 
991 /*
992   mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
994   (parameter-number, parameter-value) pair.  mallopt then sets the
995   corresponding parameter to the argument value if it can (i.e., so
996   long as the value is meaningful), and returns 1 if successful else
997   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
998   normally defined in malloc.h.  Only one of these (M_MXFAST) is used
999   in this malloc. The others (M_NLBLKS, M_GRAIN, M_KEEP) don't apply,
1000   so setting them has no effect. But this malloc also supports four
1001   other options in mallopt. See below for details.  Briefly, supported
1002   parameters are as follows (listed defaults are for "typical"
1003   configurations).
1004 
1005   Symbol            param #   default    allowed param values
1006   M_MXFAST          1         64         0-80  (0 disables fastbins)
1007   M_TRIM_THRESHOLD -1         256*1024   any   (-1U disables trimming)
1008   M_TOP_PAD        -2         0          any
1009   M_MMAP_THRESHOLD -3         256*1024   any   (or 0 if no MMAP support)
1010   M_MMAP_MAX       -4         65536      any   (0 disables use of mmap)
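
  For example:

    mallopt(M_MXFAST, 0);                  // disable fastbins entirely
    mallopt(M_TRIM_THRESHOLD, 128*1024);   // trim more eagerly
    mallopt(M_MMAP_MAX, 0);                // never service requests via mmap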
1011 */
1012 #if __STD_C
1013 int      public_mALLOPt(int, int);
1014 #else
1015 int      public_mALLOPt();
1016 #endif
1017 
1018 
1019 /*
1020   mallinfo()
1021   Returns (by copy) a struct containing various summary statistics:
1022 
1023   arena:     current total non-mmapped bytes allocated from system
1024   ordblks:   the number of free chunks
1025   smblks:    the number of fastbin blocks (i.e., small chunks that
               have been freed but not reused or consolidated)
1027   hblks:     current number of mmapped regions
1028   hblkhd:    total bytes held in mmapped regions
1029   usmblks:   the maximum total allocated space. This will be greater
1030                 than current total if trimming has occurred.
1031   fsmblks:   total bytes held in fastbin blocks
1032   uordblks:  current total allocated space (normal or mmapped)
1033   fordblks:  total free space
1034   keepcost:  the maximum number of bytes that could ideally be released
1035                back to system via malloc_trim. ("ideally" means that
1036                it ignores page restrictions etc.)
1037 
1038   Because these fields are ints, but internal bookkeeping may
1039   be kept as longs, the reported values may wrap around zero and
1040   thus be inaccurate.
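
  For example:

    struct mallinfo mi = mallinfo();
    printf("allocated: %d bytes, free: %d bytes\n", mi.uordblks, mi.fordblks);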
1041 */
1042 #if __STD_C
1043 struct mallinfo public_mALLINFo(void);
1044 #else
1045 struct mallinfo public_mALLINFo();
1046 #endif
1047 
1048 /*
1049   independent_calloc(size_t n_elements, size_t element_size, Void_t* chunks[]);
1050 
1051   independent_calloc is similar to calloc, but instead of returning a
1052   single cleared space, it returns an array of pointers to n_elements
1053   independent elements that can hold contents of size elem_size, each
1054   of which starts out cleared, and can be independently freed,
1055   realloc'ed etc. The elements are guaranteed to be adjacently
1056   allocated (this is not guaranteed to occur with multiple callocs or
1057   mallocs), which may also improve cache locality in some
1058   applications.
1059 
1060   The "chunks" argument is optional (i.e., may be null, which is
1061   probably the most typical usage). If it is null, the returned array
1062   is itself dynamically allocated and should also be freed when it is
1063   no longer needed. Otherwise, the chunks array must be of at least
1064   n_elements in length. It is filled in with the pointers to the
1065   chunks.
1066 
1067   In either case, independent_calloc returns this pointer array, or
1068   null if the allocation failed.  If n_elements is zero and "chunks"
1069   is null, it returns a chunk representing an array with zero elements
1070   (which should be freed if not wanted).
1071 
1072   Each element must be individually freed when it is no longer
1073   needed. If you'd like to instead be able to free all at once, you
1074   should instead use regular calloc and assign pointers into this
1075   space to represent elements.  (In this case though, you cannot
1076   independently free elements.)
1077 
1078   independent_calloc simplifies and speeds up implementations of many
1079   kinds of pools.  It may also be useful when constructing large data
1080   structures that initially have a fixed number of fixed-sized nodes,
1081   but the number is not known at compile time, and some of the nodes
1082   may later need to be freed. For example:
1083 
1084   struct Node { int item; struct Node* next; };
1085 
  struct Node* build_list() {
    struct Node** pool;
    struct Node* first;
    int i;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
    if (pool == 0) die();
    // organize into a linked list...
    first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
1099 */
1100 #if __STD_C
1101 Void_t** public_iCALLOc(size_t, size_t, Void_t**);
1102 #else
1103 Void_t** public_iCALLOc();
1104 #endif
1105 
1106 /*
1107   independent_comalloc(size_t n_elements, size_t sizes[], Void_t* chunks[]);
1108 
1109   independent_comalloc allocates, all at once, a set of n_elements
1110   chunks with sizes indicated in the "sizes" array.    It returns
1111   an array of pointers to these elements, each of which can be
1112   independently freed, realloc'ed etc. The elements are guaranteed to
1113   be adjacently allocated (this is not guaranteed to occur with
1114   multiple callocs or mallocs), which may also improve cache locality
1115   in some applications.
1116 
1117   The "chunks" argument is optional (i.e., may be null). If it is null
1118   the returned array is itself dynamically allocated and should also
1119   be freed when it is no longer needed. Otherwise, the chunks array
1120   must be of at least n_elements in length. It is filled in with the
1121   pointers to the chunks.
1122 
1123   In either case, independent_comalloc returns this pointer array, or
1124   null if the allocation failed.  If n_elements is zero and chunks is
1125   null, it returns a chunk representing an array with zero elements
1126   (which should be freed if not wanted).
1127 
1128   Each element must be individually freed when it is no longer
1129   needed. If you'd like to instead be able to free all at once, you
1130   should instead use a single regular malloc, and assign pointers at
1131   particular offsets in the aggregate space. (In this case though, you
1132   cannot independently free elements.)
1133 
  independent_comalloc differs from independent_calloc in that each
1135   element may have a different size, and also that it does not
1136   automatically clear elements.
1137 
1138   independent_comalloc can be used to speed up allocation in cases
1139   where several structs or objects must always be allocated at the
1140   same time.  For example:
1141 
  struct Head { ... };
  struct Foot { ... };
1144 
1145   void send_message(char* msg) {
1146     int msglen = strlen(msg);
1147     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1148     void* chunks[3];
1149     if (independent_comalloc(3, sizes, chunks) == 0)
1150       die();
1151     struct Head* head = (struct Head*)(chunks[0]);
1152     char*        body = (char*)(chunks[1]);
1153     struct Foot* foot = (struct Foot*)(chunks[2]);
1154     // ...
1155   }
1156 
1157   In general though, independent_comalloc is worth using only for
1158   larger values of n_elements. For small values, you probably won't
  detect enough difference from a series of malloc calls to bother.
1160 
1161   Overuse of independent_comalloc can increase overall memory usage,
1162   since it cannot reuse existing noncontiguous small chunks that
1163   might be available for some of the elements.
1164 */
1165 #if __STD_C
1166 Void_t** public_iCOMALLOc(size_t, size_t*, Void_t**);
1167 #else
1168 Void_t** public_iCOMALLOc();
1169 #endif
1170 
1171 
1172 /*
1173   pvalloc(size_t n);
1174   Equivalent to valloc(minimum-page-that-holds(n)), that is,
1175   round up n to nearest pagesize.
1176  */
1177 #if __STD_C
1178 Void_t*  public_pVALLOc(size_t);
1179 #else
1180 Void_t*  public_pVALLOc();
1181 #endif
1182 
1183 /*
1184   cfree(Void_t* p);
1185   Equivalent to free(p).
1186 
1187   cfree is needed/defined on some systems that pair it with calloc,
1188   for odd historical reasons (such as: cfree is used in example
1189   code in the first edition of K&R).
1190 */
1191 #if __STD_C
1192 void     public_cFREe(Void_t*);
1193 #else
1194 void     public_cFREe();
1195 #endif
1196 
1197 /*
1198   malloc_trim(size_t pad);
1199 
1200   If possible, gives memory back to the system (via negative
1201   arguments to sbrk) if there is unused memory at the `high' end of
1202   the malloc pool. You can call this after freeing large blocks of
1203   memory to potentially reduce the system-level memory requirements
1204   of a program. However, it cannot guarantee to reduce memory. Under
1205   some allocation patterns, some large free blocks of memory will be
1206   locked between two used chunks, so they cannot be given back to
1207   the system.
1208 
1209   The `pad' argument to malloc_trim represents the amount of free
1210   trailing space to leave untrimmed. If this argument is zero,
1211   only the minimum amount of memory to maintain internal data
1212   structures will be left (one page or less). Non-zero arguments
1213   can be supplied to maintain enough trailing space to service
1214   future expected allocations without having to re-obtain memory
1215   from the system.
1216 
1217   Malloc_trim returns 1 if it actually released any memory, else 0.
1218   On systems that do not support "negative sbrks", it will always
  return 0.
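
  For example, after releasing a large working set:

    free(big_buffer);              // big_buffer: a hypothetical large allocation
    (void) malloc_trim(64*1024);   // keep 64KB of slack, return the rest if possible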
1220 */
1221 #if __STD_C
1222 int      public_mTRIm(size_t);
1223 #else
1224 int      public_mTRIm();
1225 #endif
1226 
1227 /*
1228   malloc_usable_size(Void_t* p);
1229 
1230   Returns the number of bytes you can actually use in
1231   an allocated chunk, which may be more than you requested (although
1232   often not) due to alignment and minimum size constraints.
1233   You can use this many bytes without worrying about
1234   overwriting other allocated objects. This is not a particularly great
1235   programming practice. malloc_usable_size can be more useful in
1236   debugging and assertions, for example:
1237 
1238   p = malloc(n);
1239   assert(malloc_usable_size(p) >= 256);
1240 
1241 */
1242 #if __STD_C
1243 size_t   public_mUSABLe(Void_t*);
1244 #else
1245 size_t   public_mUSABLe();
1246 #endif
1247 
1248 /*
1249   malloc_stats();
1250   Prints on stderr the amount of space obtained from the system (both
1251   via sbrk and mmap), the maximum amount (which may be more than
1252   current if malloc_trim and/or munmap got called), and the current
1253   number of bytes allocated via malloc (or realloc, etc) but not yet
1254   freed. Note that this is the number of bytes allocated, not the
1255   number requested. It will be larger than the number requested
1256   because of alignment and bookkeeping overhead. Because it includes
1257   alignment wastage as being in use, this figure may be greater than
1258   zero even when no user-level chunks are allocated.
1259 
1260   The reported current and maximum system memory can be inaccurate if
1261   a program makes other calls to system memory allocation functions
1262   (normally sbrk) outside of malloc.
1263 
1264   malloc_stats prints only the most commonly interesting statistics.
1265   More information can be obtained by calling mallinfo.
1266 
1267 */
1268 #if __STD_C
1269 void     public_mSTATs(void);
1270 #else
1271 void     public_mSTATs();
1272 #endif
1273 
1274 /* mallopt tuning options */
1275 
1276 /*
1277   M_MXFAST is the maximum request size used for "fastbins", special bins
1278   that hold returned chunks without consolidating their spaces. This
1279   enables future requests for chunks of the same size to be handled
1280   very quickly, but can increase fragmentation, and thus increase the
1281   overall memory footprint of a program.
1282 
1283   This malloc manages fastbins very conservatively yet still
1284   efficiently, so fragmentation is rarely a problem for values less
1285   than or equal to the default.  The maximum supported value of MXFAST
1286   is 80. You wouldn't want it any higher than this anyway.  Fastbins
1287   are designed especially for use with many small structs, objects or
1288   strings -- the default handles structs/objects/arrays with sizes up
1289   to 16 4byte fields, or small strings representing words, tokens,
1290   etc. Using fastbins for larger objects normally worsens
1291   fragmentation without improving speed.
1292 
1293   M_MXFAST is set in REQUEST size units. It is internally used in
1294   chunksize units, which adds padding and alignment.  You can reduce
1295   M_MXFAST to 0 to disable all use of fastbins.  This causes the malloc
1296   algorithm to be a closer approximation of fifo-best-fit in all cases,
1297   not just for larger requests, but will generally cause it to be
1298   slower.
1299 */
1300 
1301 
1302 /* M_MXFAST is a standard SVID/XPG tuning option, usually listed in malloc.h */
1303 #ifndef M_MXFAST
1304 #define M_MXFAST            1
1305 #endif
1306 
1307 #ifndef DEFAULT_MXFAST
1308 #define DEFAULT_MXFAST     64
1309 #endif
1310 
1311 
1312 /*
1313   M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
1314   to keep before releasing via malloc_trim in free().
1315 
1316   Automatic trimming is mainly useful in long-lived programs.
1317   Because trimming via sbrk can be slow on some systems, and can
1318   sometimes be wasteful (in cases where programs immediately
1319   afterward allocate more large chunks) the value should be high
1320   enough so that your overall system performance would improve by
1321   releasing this much memory.
1322 
1323   The trim threshold and the mmap control parameters (see below)
1324   can be traded off with one another. Trimming and mmapping are
1325   two different ways of releasing unused memory back to the
1326   system. Between these two, it is often possible to keep
1327   system-level demands of a long-lived program down to a bare
1328   minimum. For example, in one test suite of sessions measuring
1329   the XF86 X server on Linux, using a trim threshold of 128K and a
1330   mmap threshold of 192K led to near-minimal long term resource
1331   consumption.
1332 
1333   If you are using this malloc in a long-lived program, it should
1334   pay to experiment with these values.  As a rough guide, you
1335   might set to a value close to the average size of a process
1336   (program) running on your system.  Releasing this much memory
1337   would allow such a process to run in memory.  Generally, it's
  worth it to tune for trimming rather than memory mapping when a
1339   program undergoes phases where several large chunks are
1340   allocated and released in ways that can reuse each other's
1341   storage, perhaps mixed with phases where there are no such
1342   chunks at all.  And in well-behaved long-lived programs,
1343   controlling release of large blocks via trimming versus mapping
1344   is usually faster.
1345 
1346   However, in most programs, these parameters serve mainly as
1347   protection against the system-level effects of carrying around
1348   massive amounts of unneeded memory. Since frequent calls to
1349   sbrk, mmap, and munmap otherwise degrade performance, the default
1350   parameters are set to relatively high values that serve only as
1351   safeguards.
1352 
1353   The trim value must be greater than page size to have any useful
  effect.  To disable trimming completely, you can set it to
  (unsigned long)(-1).
1356 
1357   Trim settings interact with fastbin (MXFAST) settings: Unless
1358   TRIM_FASTBINS is defined, automatic trimming never takes place upon
1359   freeing a chunk with size less than or equal to MXFAST. Trimming is
1360   instead delayed until subsequent freeing of larger chunks. However,
1361   you can still force an attempted trim by calling malloc_trim.
1362 
1363   Also, trimming is not generally possible in cases where
1364   the main arena is obtained via mmap.
1365 
1366   Note that the trick some people use of mallocing a huge space and
1367   then freeing it at program startup, in an attempt to reserve system
1368   memory, doesn't have the intended effect under automatic trimming,
1369   since that memory will immediately be returned to the system.
1370 */
1371 
1372 #define M_TRIM_THRESHOLD       -1
1373 
1374 #ifndef DEFAULT_TRIM_THRESHOLD
1375 #define DEFAULT_TRIM_THRESHOLD (256 * 1024)
1376 #endif
1377 
1378 /*
1379   M_TOP_PAD is the amount of extra `padding' space to allocate or
1380   retain whenever sbrk is called. It is used in two ways internally:
1381 
1382   * When sbrk is called to extend the top of the arena to satisfy
1383   a new malloc request, this much padding is added to the sbrk
1384   request.
1385 
1386   * When malloc_trim is called automatically from free(),
1387   it is used as the `pad' argument.
1388 
1389   In both cases, the actual amount of padding is rounded
1390   so that the end of the arena is always a system page boundary.
1391 
1392   The main reason for using padding is to avoid calling sbrk so
1393   often. Having even a small pad greatly reduces the likelihood
1394   that nearly every malloc request during program start-up (or
1395   after trimming) will invoke sbrk, which needlessly wastes
1396   time.
1397 
1398   Automatic rounding-up to page-size units is normally sufficient
1399   to avoid measurable overhead, so the default is 0.  However, in
1400   systems where sbrk is relatively slow, it can pay to increase
1401   this value, at the expense of carrying around more memory than
1402   the program needs.
1403 */
1404 
1405 #define M_TOP_PAD              -2
1406 
1407 #ifndef DEFAULT_TOP_PAD
1408 #define DEFAULT_TOP_PAD        (0)
1409 #endif
1410 
1411 /*
1412   M_MMAP_THRESHOLD is the request size threshold for using mmap()
1413   to service a request. Requests of at least this size that cannot
1414   be allocated using already-existing space will be serviced via mmap.
1415   (If enough normal freed space already exists it is used instead.)
1416 
1417   Using mmap segregates relatively large chunks of memory so that
1418   they can be individually obtained and released from the host
1419   system. A request serviced through mmap is never reused by any
1420   other request (at least not directly; the system may just so
1421   happen to remap successive requests to the same locations).
1422 
1423   Segregating space in this way has the benefits that:
1424 
1425    1. Mmapped space can ALWAYS be individually released back
1426       to the system, which helps keep the system level memory
1427       demands of a long-lived program low.
1428    2. Mapped memory can never become `locked' between
1429       other chunks, as can happen with normally allocated chunks, which
1430       means that even trimming via malloc_trim would not release them.
1431    3. On some systems with "holes" in address spaces, mmap can obtain
1432       memory that sbrk cannot.
1433 
1434   However, it has the disadvantages that:
1435 
1436    1. The space cannot be reclaimed, consolidated, and then
1437       used to service later requests, as happens with normal chunks.
1438    2. It can lead to more wastage because of mmap page alignment
1439       requirements
1440    3. It causes malloc performance to be more dependent on host
1441       system memory management support routines which may vary in
1442       implementation quality and may impose arbitrary
1443       limitations. Generally, servicing a request via normal
1444       malloc steps is faster than going through a system's mmap.
1445 
1446   The advantages of mmap nearly always outweigh disadvantages for
1447   "large" chunks, but the value of "large" varies across systems.  The
1448   default is an empirically derived value that works well in most
1449   systems.
1450 */
1451 
1452 #define M_MMAP_THRESHOLD      -3
1453 
1454 #ifndef DEFAULT_MMAP_THRESHOLD
1455 #define DEFAULT_MMAP_THRESHOLD (256 * 1024)
1456 #endif
1457 
1458 /*
1459   M_MMAP_MAX is the maximum number of requests to simultaneously
1460   service using mmap. This parameter exists because some
1461   systems have a limited number of internal tables for
1462   use by mmap, and using more than a few of them may degrade
1463   performance.
1464 
1465   The default is set to a value that serves only as a safeguard.
1466   Setting to 0 disables use of mmap for servicing large requests.  If
1467   HAVE_MMAP is not set, the default value is 0, and attempts to set it
1468   to non-zero values in mallopt will fail.
1469 */
1470 
1471 #define M_MMAP_MAX             -4
1472 
1473 #ifndef DEFAULT_MMAP_MAX
1474 #if HAVE_MMAP
1475 #define DEFAULT_MMAP_MAX       (65536)
1476 #else
1477 #define DEFAULT_MMAP_MAX       (0)
1478 #endif
1479 #endif
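
/*
  Illustrative sketch (not part of this malloc): the four parameters
  above are normally adjusted at run time via mallopt, which returns
  1 on success and 0 on failure.  The function name and the values
  below are arbitrary, chosen only to show the calls:

    void tune_allocator()
    {
      mallopt(M_TRIM_THRESHOLD, 128 * 1024);  // trim sooner
      mallopt(M_TOP_PAD,         64 * 1024);  // keep extra sbrk slack
      mallopt(M_MMAP_THRESHOLD, 512 * 1024);  // mmap only very large requests
      mallopt(M_MMAP_MAX,             1024);  // cap simultaneous mmapped chunks
    }
*/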
1480 
1481 #ifdef __cplusplus
1482 };  /* end of extern "C" */
1483 #endif
1484 
1485 /*
1486   ========================================================================
1487   To make a fully customizable malloc.h header file, cut everything
1488   above this line, put into file malloc.h, edit to suit, and #include it
1489   on the next line, as well as in programs that use this malloc.
1490   ========================================================================
1491 */
1492 
1493 /* #include "malloc.h" */
1494 
1495 /* --------------------- public wrappers ---------------------- */
1496 
1497 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
1498 
1499 /* Declare all routines as internal */
1500 #if __STD_C
1501 static Void_t*  mALLOc(size_t);
1502 static void     fREe(Void_t*);
1503 static Void_t*  rEALLOc(Void_t*, size_t);
1504 static Void_t*  mEMALIGn(size_t, size_t);
1505 static Void_t*  vALLOc(size_t);
1506 static Void_t*  pVALLOc(size_t);
1507 static Void_t*  cALLOc(size_t, size_t);
1508 static Void_t** iCALLOc(size_t, size_t, Void_t**);
1509 static Void_t** iCOMALLOc(size_t, size_t*, Void_t**);
1510 static void     cFREe(Void_t*);
1511 static int      mTRIm(size_t);
1512 static size_t   mUSABLe(Void_t*);
1513 static void     mSTATs();
1514 static int      mALLOPt(int, int);
1515 static struct mallinfo mALLINFo(void);
1516 #else
1517 static Void_t*  mALLOc();
1518 static void     fREe();
1519 static Void_t*  rEALLOc();
1520 static Void_t*  mEMALIGn();
1521 static Void_t*  vALLOc();
1522 static Void_t*  pVALLOc();
1523 static Void_t*  cALLOc();
1524 static Void_t** iCALLOc();
1525 static Void_t** iCOMALLOc();
1526 static void     cFREe();
1527 static int      mTRIm();
1528 static size_t   mUSABLe();
1529 static void     mSTATs();
1530 static int      mALLOPt();
1531 static struct mallinfo mALLINFo();
1532 #endif
1533 
1534 /*
1535   MALLOC_PREACTION and MALLOC_POSTACTION should be
1536   defined to return 0 on success, and nonzero on failure.
1537   The return value of MALLOC_POSTACTION is currently ignored
1538   in wrapper functions since there is no reasonable default
1539   action to take on failure.
1540 */
1541 
1542 
1543 #ifdef USE_MALLOC_LOCK
1544 
1545 #ifdef WIN32
1546 
1547 static int mALLOC_MUTEx;
1548 #define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
1549 #define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)
1550 
1551 #else
1552 
1553 #include <pthread.h>
1554 
1555 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
1556 
1557 #define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
1558 #define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)
1559 
1560 #endif /* WIN32 */
1561 
1562 #else
1563 
1564 /* Substitute anything you like for these */
1565 
1566 #define MALLOC_PREACTION   (0)
1567 #define MALLOC_POSTACTION  (0)
1568 
1569 #endif
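
/*
  For example (an illustrative sketch only; my_lock, my_acquire, and
  my_release are hypothetical names for some other threads package),
  the substitutes must return 0 on success, matching the contract
  described above:

    extern int my_acquire(struct my_lock*);
    extern int my_release(struct my_lock*);
    static struct my_lock malloc_lock;

    #define MALLOC_PREACTION   my_acquire(&malloc_lock)
    #define MALLOC_POSTACTION  my_release(&malloc_lock)
*/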
1570 
1571 Void_t* public_mALLOc(size_t bytes) {
1572   Void_t* m;
1573   if (MALLOC_PREACTION != 0) {
1574     return 0;
1575   }
1576   m = mALLOc(bytes);
1577   if (MALLOC_POSTACTION != 0) {
1578   }
1579   return m;
1580 }
1581 
1582 void public_fREe(Void_t* m) {
1583   if (MALLOC_PREACTION != 0) {
1584     return;
1585   }
1586   fREe(m);
1587   if (MALLOC_POSTACTION != 0) {
1588   }
1589 }
1590 
1591 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
1592   if (MALLOC_PREACTION != 0) {
1593     return 0;
1594   }
1595   m = rEALLOc(m, bytes);
1596   if (MALLOC_POSTACTION != 0) {
1597   }
1598   return m;
1599 }
1600 
1601 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
1602   Void_t* m;
1603   if (MALLOC_PREACTION != 0) {
1604     return 0;
1605   }
1606   m = mEMALIGn(alignment, bytes);
1607   if (MALLOC_POSTACTION != 0) {
1608   }
1609   return m;
1610 }
1611 
1612 Void_t* public_vALLOc(size_t bytes) {
1613   Void_t* m;
1614   if (MALLOC_PREACTION != 0) {
1615     return 0;
1616   }
1617   m = vALLOc(bytes);
1618   if (MALLOC_POSTACTION != 0) {
1619   }
1620   return m;
1621 }
1622 
1623 Void_t* public_pVALLOc(size_t bytes) {
1624   Void_t* m;
1625   if (MALLOC_PREACTION != 0) {
1626     return 0;
1627   }
1628   m = pVALLOc(bytes);
1629   if (MALLOC_POSTACTION != 0) {
1630   }
1631   return m;
1632 }
1633 
1634 Void_t* public_cALLOc(size_t n, size_t elem_size) {
1635   Void_t* m;
1636   if (MALLOC_PREACTION != 0) {
1637     return 0;
1638   }
1639   m = cALLOc(n, elem_size);
1640   if (MALLOC_POSTACTION != 0) {
1641   }
1642   return m;
1643 }
1644 
1645 
1646 Void_t** public_iCALLOc(size_t n, size_t elem_size, Void_t** chunks) {
1647   Void_t** m;
1648   if (MALLOC_PREACTION != 0) {
1649     return 0;
1650   }
1651   m = iCALLOc(n, elem_size, chunks);
1652   if (MALLOC_POSTACTION != 0) {
1653   }
1654   return m;
1655 }
1656 
1657 Void_t** public_iCOMALLOc(size_t n, size_t sizes[], Void_t** chunks) {
1658   Void_t** m;
1659   if (MALLOC_PREACTION != 0) {
1660     return 0;
1661   }
1662   m = iCOMALLOc(n, sizes, chunks);
1663   if (MALLOC_POSTACTION != 0) {
1664   }
1665   return m;
1666 }
1667 
1668 void public_cFREe(Void_t* m) {
1669   if (MALLOC_PREACTION != 0) {
1670     return;
1671   }
1672   cFREe(m);
1673   if (MALLOC_POSTACTION != 0) {
1674   }
1675 }
1676 
1677 int public_mTRIm(size_t s) {
1678   int result;
1679   if (MALLOC_PREACTION != 0) {
1680     return 0;
1681   }
1682   result = mTRIm(s);
1683   if (MALLOC_POSTACTION != 0) {
1684   }
1685   return result;
1686 }
1687 
1688 size_t public_mUSABLe(Void_t* m) {
1689   size_t result;
1690   if (MALLOC_PREACTION != 0) {
1691     return 0;
1692   }
1693   result = mUSABLe(m);
1694   if (MALLOC_POSTACTION != 0) {
1695   }
1696   return result;
1697 }
1698 
1699 void public_mSTATs() {
1700   if (MALLOC_PREACTION != 0) {
1701     return;
1702   }
1703   mSTATs();
1704   if (MALLOC_POSTACTION != 0) {
1705   }
1706 }
1707 
1708 struct mallinfo public_mALLINFo() {
1709   struct mallinfo m;
1710   if (MALLOC_PREACTION != 0) {
1711     struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1712     return nm;
1713   }
1714   m = mALLINFo();
1715   if (MALLOC_POSTACTION != 0) {
1716   }
1717   return m;
1718 }
1719 
1720 int public_mALLOPt(int p, int v) {
1721   int result;
1722   if (MALLOC_PREACTION != 0) {
1723     return 0;
1724   }
1725   result = mALLOPt(p, v);
1726   if (MALLOC_POSTACTION != 0) {
1727   }
1728   return result;
1729 }
1730 
1731 #endif
1732 
1733 
1734 
1735 /* ------------- Optional versions of memcopy ---------------- */
1736 
1737 
1738 #if USE_MEMCPY
1739 
1740 /*
1741   Note: memcpy is ONLY invoked with non-overlapping regions,
1742   so the (usually slower) memmove is not needed.
1743 */
1744 
1745 #define MALLOC_COPY(dest, src, nbytes)  memcpy(dest, src, nbytes)
1746 #define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
1747 
1748 #else /* !USE_MEMCPY */
1749 
1750 /* Use Duff's device for good zeroing/copying performance. */
1751 
1752 #define MALLOC_ZERO(charp, nbytes)                                            \
1753 do {                                                                          \
1754   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
1755   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
1756   long mcn;                                                                   \
1757   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
1758   switch (mctmp) {                                                            \
1759     case 0: for(;;) { *mzp++ = 0;                                             \
1760     case 7:           *mzp++ = 0;                                             \
1761     case 6:           *mzp++ = 0;                                             \
1762     case 5:           *mzp++ = 0;                                             \
1763     case 4:           *mzp++ = 0;                                             \
1764     case 3:           *mzp++ = 0;                                             \
1765     case 2:           *mzp++ = 0;                                             \
1766     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
1767   }                                                                           \
1768 } while(0)
1769 
1770 #define MALLOC_COPY(dest,src,nbytes)                                          \
1771 do {                                                                          \
1772   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
1773   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
1774   CHUNK_SIZE_T  mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T);                     \
1775   long mcn;                                                                   \
1776   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
1777   switch (mctmp) {                                                            \
1778     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
1779     case 7:           *mcdst++ = *mcsrc++;                                    \
1780     case 6:           *mcdst++ = *mcsrc++;                                    \
1781     case 5:           *mcdst++ = *mcsrc++;                                    \
1782     case 4:           *mcdst++ = *mcsrc++;                                    \
1783     case 3:           *mcdst++ = *mcsrc++;                                    \
1784     case 2:           *mcdst++ = *mcsrc++;                                    \
1785     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
1786   }                                                                           \
1787 } while(0)
1788 
1789 #endif
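
/*
  Both versions are invoked the same way (a sketch; the variable names
  are illustrative).  Note that the Duff's-device version copies or
  clears nbytes/sizeof(INTERNAL_SIZE_T) whole words, so callers must
  not rely on any trailing partial word being processed.

    MALLOC_COPY(newmem, oldmem, copysize);  // e.g. when realloc must move data
    MALLOC_ZERO(mem, clearsize);            // e.g. when calloc clears a chunk
*/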
1790 
1791 /* ------------------ MMAP support ------------------  */
1792 
1793 
1794 #if HAVE_MMAP
1795 
1796 #ifndef LACKS_FCNTL_H
1797 #include <fcntl.h>
1798 #endif
1799 
1800 #ifndef LACKS_SYS_MMAN_H
1801 #include <sys/mman.h>
1802 #endif
1803 
1804 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1805 #define MAP_ANONYMOUS MAP_ANON
1806 #endif
1807 
1808 /*
1809    Nearly all versions of mmap support MAP_ANONYMOUS,
1810    so the following is unlikely to be needed, but is
1811    supplied just in case.
1812 */
1813 
1814 #ifndef MAP_ANONYMOUS
1815 
1816 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1817 
1818 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
1819  (dev_zero_fd = open("/dev/zero", O_RDWR), \
1820   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
1821    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
1822 
1823 #else
1824 
1825 #define MMAP(addr, size, prot, flags) \
1826  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
1827 
1828 #endif
1829 
1830 
1831 #endif /* HAVE_MMAP */
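
/*
  With either definition, a typical anonymous mapping request in this
  file looks like the following sketch (size is assumed to be a
  page-rounded byte count; failure is checked against MORECORE_FAILURE,
  as the callers further below do):

    char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
    if (mm != (char*)(MORECORE_FAILURE)) {
      // use the new region
    }
*/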
1832 
1833 
1834 /*
1835   -----------------------  Chunk representations -----------------------
1836 */
1837 
1838 
1839 /*
1840   This struct declaration is misleading (but accurate and necessary).
1841   It declares a "view" into memory allowing access to necessary
1842   fields at known offsets from a given base. See explanation below.
1843 */
1844 
1845 struct malloc_chunk {
1846 
1847   INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
1848   INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */
1849 
1850   struct malloc_chunk* fd;         /* double links -- used only if free. */
1851   struct malloc_chunk* bk;
1852 };
1853 
1854 
1855 typedef struct malloc_chunk* mchunkptr;
1856 
1857 /*
1858    malloc_chunk details:
1859 
1860     (The following includes lightly edited explanations by Colin Plumb.)
1861 
1862     Chunks of memory are maintained using a `boundary tag' method as
1863     described in e.g., Knuth or Standish.  (See the paper by Paul
1864     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1865     survey of such techniques.)  Sizes of free chunks are stored both
1866     in the front of each chunk and at the end.  This makes
1867     consolidating fragmented chunks into bigger chunks very fast.  The
1868     size fields also hold bits representing whether chunks are free or
1869     in use.
1870 
1871     An allocated chunk looks like this:
1872 
1873 
1874     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1875             |             Size of previous chunk, if allocated            | |
1876             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1877             |             Size of chunk, in bytes                         |P|
1878       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1879             |             User data starts here...                          .
1880             .                                                               .
1881             .             (malloc_usable_space() bytes)                     .
1882             .                                                               |
1883 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1884             |             Size of chunk                                     |
1885             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1886 
1887 
1888     Where "chunk" is the front of the chunk for the purpose of most of
1889     the malloc code, but "mem" is the pointer that is returned to the
1890     user.  "Nextchunk" is the beginning of the next contiguous chunk.
1891 
1892     Chunks always begin on even word boundaries, so the mem portion
1893     (which is returned to the user) is also on an even word boundary, and
1894     thus at least double-word aligned.
1895 
1896     Free chunks are stored in circular doubly-linked lists, and look like this:
1897 
1898     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1899             |             Size of previous chunk                            |
1900             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1901     `head:' |             Size of chunk, in bytes                         |P|
1902       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1903             |             Forward pointer to next chunk in list             |
1904             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1905             |             Back pointer to previous chunk in list            |
1906             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1907             |             Unused space (may be 0 bytes long)                .
1908             .                                                               .
1909             .                                                               |
1910 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1911     `foot:' |             Size of chunk, in bytes                           |
1912             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1913 
1914     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1915     chunk size (which is always a multiple of two words), is an in-use
1916     bit for the *previous* chunk.  If that bit is *clear*, then the
1917     word before the current chunk size contains the previous chunk
1918     size, and can be used to find the front of the previous chunk.
1919     The very first chunk allocated always has this bit set,
1920     preventing access to non-existent (or non-owned) memory. If
1921     prev_inuse is set for any given chunk, then you CANNOT determine
1922     the size of the previous chunk, and might even get a memory
1923     addressing fault when trying to do so.
1924 
1925     Note that the `foot' of the current chunk is actually represented
1926     as the prev_size of the NEXT chunk. This makes it easier to
1927     deal with alignments etc but can be very confusing when trying
1928     to extend or adapt this code.
1929 
1930     The two exceptions to all this are
1931 
1932      1. The special chunk `top' doesn't bother using the
1933         trailing size field since there is no next contiguous chunk
1934         that would have to index off it. After initialization, `top'
1935         is forced to always exist.  If it would become less than
1936         MINSIZE bytes long, it is replenished.
1937 
1938      2. Chunks allocated via mmap, which have the second-lowest-order
1939         bit (IS_MMAPPED) set in their size fields.  Because they are
1940         allocated one-by-one, each must contain its own trailing size field.
1941 
1942 */
1943 
1944 /*
1945   ---------- Size and alignment checks and conversions ----------
1946 */
1947 
1948 /* conversion from malloc headers to user pointers, and back */
1949 
1950 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1951 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1952 
1953 /* The smallest possible chunk */
1954 #define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))
1955 
1956 /* The smallest size we can malloc is an aligned minimal chunk */
1957 
1958 #define MINSIZE  \
1959   (CHUNK_SIZE_T)(((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK))
1960 
1961 /* Check if m has acceptable alignment */
1962 
1963 #define aligned_OK(m)  (((PTR_UINT)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1964 
1965 
1966 /*
1967    Check if a request is so large that it would wrap around zero when
1968    padded and aligned. To simplify some other code, the bound is made
1969    low enough so that adding MINSIZE will also not wrap around zero.
1970 */
1971 
1972 #define REQUEST_OUT_OF_RANGE(req)                                 \
1973   ((CHUNK_SIZE_T)(req) >=                                        \
1974    (CHUNK_SIZE_T)(INTERNAL_SIZE_T)(-2 * MINSIZE))
1975 
1976 /* pad request bytes into a usable size -- internal version */
1977 
1978 #define request2size(req)                                         \
1979   (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?             \
1980    MINSIZE :                                                      \
1981    ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
1982 
1983 /*  Same, except also perform argument check */
1984 
1985 #define checked_request2size(req, sz)                             \
1986   if (REQUEST_OUT_OF_RANGE(req)) {                                \
1987     MALLOC_FAILURE_ACTION;                                        \
1988     return 0;                                                     \
1989   }                                                               \
1990   (sz) = request2size(req);
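
/*
  Worked example (assuming 4-byte INTERNAL_SIZE_T and 8-byte
  MALLOC_ALIGNMENT, so SIZE_SZ == 4, MALLOC_ALIGN_MASK == 7 and
  MINSIZE == 16):

    request2size(1)   == 16     1 + 4 + 7 < 16, so MINSIZE is used
    request2size(20)  == 24     (20 + 4 + 7) & ~7
    request2size(100) == 104    (100 + 4 + 7) & ~7

  Only one SIZE_SZ of overhead is charged because an in-use chunk may
  also use the prev_size field of the following chunk for user data
  (see the chunk diagrams above).
*/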
1991 
1992 /*
1993   --------------- Physical chunk operations ---------------
1994 */
1995 
1996 
1997 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1998 #define PREV_INUSE 0x1
1999 
2000 /* extract inuse bit of previous chunk */
2001 #define prev_inuse(p)       ((p)->size & PREV_INUSE)
2002 
2003 
2004 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
2005 #define IS_MMAPPED 0x2
2006 
2007 /* check for mmap()'ed chunk */
2008 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
2009 
2010 /*
2011   Bits to mask off when extracting size
2012 
2013   Note: IS_MMAPPED is intentionally not masked off from size field in
2014   macros for which mmapped chunks should never be seen. This should
2015   cause helpful core dumps to occur if it is tried by accident by
2016   people extending or adapting this malloc.
2017 */
2018 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
2019 
2020 /* Get size, ignoring use bits */
2021 #define chunksize(p)         ((p)->size & ~(SIZE_BITS))
2022 
2023 
2024 /* Ptr to next physical malloc_chunk. */
2025 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
2026 
2027 /* Ptr to previous physical malloc_chunk */
2028 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
2029 
2030 /* Treat space at ptr + offset as a chunk */
2031 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2032 
2033 /* extract p's inuse bit */
2034 #define inuse(p)\
2035 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
2036 
2037 /* set/clear chunk as being inuse without otherwise disturbing */
2038 #define set_inuse(p)\
2039 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
2040 
2041 #define clear_inuse(p)\
2042 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
2043 
2044 
2045 /* check/set/clear inuse bits in known places */
2046 #define inuse_bit_at_offset(p, s)\
2047  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
2048 
2049 #define set_inuse_bit_at_offset(p, s)\
2050  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
2051 
2052 #define clear_inuse_bit_at_offset(p, s)\
2053  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
2054 
2055 
2056 /* Set size at head, without disturbing its use bit */
2057 #define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))
2058 
2059 /* Set size/use field */
2060 #define set_head(p, s)       ((p)->size = (s))
2061 
2062 /* Set size at footer (only when chunk is not in use) */
2063 #define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
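
/*
  Example of how the operations above combine (a sketch only; mem is
  assumed to be a pointer previously returned by malloc for a
  non-mmapped chunk):

    mchunkptr       p    = mem2chunk(mem);    // header of the user's block
    INTERNAL_SIZE_T sz   = chunksize(p);      // size with flag bits masked off
    mchunkptr       next = next_chunk(p);     // physically adjacent chunk
    if (!prev_inuse(p)) {                     // previous chunk is free, so
      mchunkptr prev = prev_chunk(p);         // p->prev_size holds its size
      // free() consolidates p backward with prev in this situation
    }
*/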
2064 
2065 
2066 /*
2067   -------------------- Internal data structures --------------------
2068 
2069    All internal state is held in an instance of malloc_state defined
2070    below. There are no other static variables, except in two optional
2071    cases:
2072    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
2073    * If HAVE_MMAP is true, but mmap doesn't support
2074      MAP_ANONYMOUS, a dummy file descriptor for mmap.
2075 
2076    Beware of lots of tricks that minimize the total bookkeeping space
2077    requirements. The result is a little over 1K bytes (for 4-byte
2078    pointers and size_t.)
2079 */
2080 
2081 /*
2082   Bins
2083 
2084     An array of bin headers for free chunks. Each bin is doubly
2085     linked.  The bins are approximately proportionally (log) spaced.
2086     There are a lot of these bins (128). This may look excessive, but
2087     works very well in practice.  Most bins hold sizes that are
2088     unusual as malloc request sizes, but are more usual for fragments
2089     and consolidated sets of chunks, which is what these bins hold, so
2090     they can be found quickly.  All procedures maintain the invariant
2091     that no consolidated chunk physically borders another one, so each
2092     chunk in a list is known to be preceded and followed by either
2093     inuse chunks or the ends of memory.
2094 
2095     Chunks in bins are kept in size order, with ties going to the
2096     approximately least recently used chunk. Ordering isn't needed
2097     for the small bins, which all contain the same-sized chunks, but
2098     facilitates best-fit allocation for larger chunks. These lists
2099     are just sequential. Keeping them in order almost never requires
2100     enough traversal to warrant using fancier ordered data
2101     structures.
2102 
2103     Chunks of the same size are linked with the most
2104     recently freed at the front, and allocations are taken from the
2105     back.  This results in LRU (FIFO) allocation order, which tends
2106     to give each chunk an equal opportunity to be consolidated with
2107     adjacent freed chunks, resulting in larger free chunks and less
2108     fragmentation.
2109 
2110     To simplify use in double-linked lists, each bin header acts
2111     as a malloc_chunk. This avoids special-casing for headers.
2112     But to conserve space and improve locality, we allocate
2113     only the fd/bk pointers of bins, and then use repositioning tricks
2114     to treat these as the fields of a malloc_chunk*.
2115 */
2116 
2117 typedef struct malloc_chunk* mbinptr;
2118 
2119 /* addressing -- note that bin_at(0) does not exist */
2120 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
2121 
2122 /* analog of ++bin */
2123 #define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
2124 
2125 /* Reminders about list directionality within bins */
2126 #define first(b)     ((b)->fd)
2127 #define last(b)      ((b)->bk)
2128 
2129 /* Take a chunk off a bin list */
2130 #define unlink(P, BK, FD) {                                            \
2131   FD = P->fd;                                                          \
2132   BK = P->bk;                                                          \
2133   FD->bk = BK;                                                         \
2134   BK->fd = FD;                                                         \
2135 }
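
/*
  Example (a sketch; av, i, and p are assumed locals in a caller):
  taking the least recently used chunk off bin i, and the reverse
  operation of linking a freed chunk in at the front, which is how
  free() and malloc_consolidate place chunks into the unsorted bin.

    mbinptr   bin    = bin_at(av, i);
    mchunkptr victim = last(bin);         // allocations come from the back
    mchunkptr bck, fwd;
    if (victim != bin)                    // bin is empty when last(bin) == bin
      unlink(victim, bck, fwd);

    // insert chunk p at the front (the most recently freed end):
    fwd     = first(bin);
    p->fd   = fwd;
    p->bk   = bin;
    bin->fd = p;
    fwd->bk = p;
*/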
2136 
2137 /*
2138   Indexing
2139 
2140     Bins for sizes < 256 bytes (MIN_LARGE_SIZE) contain chunks of all
2141     the same size, spaced 8 bytes apart.  Larger bins are approximately
2142     logarithmically spaced, with four bins spanning each power of two:
2143 
2144     32 bins of size       8
2145      4 bins of size      64
2146      4 bins of size     128
2147      4 bins of size     256
2148         ...  (widths keep doubling)
2149      1 bin  of size what's left
2150 
2151     The bins top out around 1MB because we expect to service large
2152     requests via mmap.
2153 */
2154 
2155 #define NBINS              96
2156 #define NSMALLBINS         32
2157 #define SMALLBIN_WIDTH      8
2158 #define MIN_LARGE_SIZE    256
2159 
2160 #define in_smallbin_range(sz)  \
2161   ((CHUNK_SIZE_T)(sz) < (CHUNK_SIZE_T)MIN_LARGE_SIZE)
2162 
2163 #define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
2164 
2165 /*
2166   Compute index for size. We expect this to be inlined when
2167   compiled with optimization, else not, which works out well.
2168 */
2169 static int largebin_index(unsigned int sz) {
2170   unsigned int  x = sz >> SMALLBIN_WIDTH;
2171   unsigned int m;            /* bit position of highest set bit of x */
2172 
2173   if (x >= 0x10000) return NBINS-1;
2174 
2175   /* On intel, use BSRL instruction to find highest bit */
2176 #if defined(__GNUC__) && defined(i386)
2177 
2178   __asm__("bsrl %1,%0\n\t"
2179           : "=r" (m)
2180           : "g"  (x));
2181 
2182 #else
2183   {
2184     /*
2185       Based on branch-free nlz algorithm in chapter 5 of Henry
2186       S. Warren Jr's book "Hacker's Delight".
2187     */
2188 
2189     unsigned int n = ((x - 0x100) >> 16) & 8;
2190     x <<= n;
2191     m = ((x - 0x1000) >> 16) & 4;
2192     n += m;
2193     x <<= m;
2194     m = ((x - 0x4000) >> 16) & 2;
2195     n += m;
2196     x = (x << m) >> 14;
2197     m = 13 - n + (x & ~(x>>1));
2198   }
2199 #endif
2200 
2201   /* Use next 2 bits to create finer-granularity bins */
2202   return NSMALLBINS + (m << 2) + ((sz >> (m + 6)) & 3);
2203 }
2204 
2205 #define bin_index(sz) \
2206  ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
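
/*
  Worked examples of the indexing above (chunk sizes are always
  multiples of 8 and at least MINSIZE):

    bin_index(16)  ==  2     small bin:  16 >> 3
    bin_index(248) == 31     last small bin
    bin_index(256) == 32     first large bin (== NSMALLBINS)
    bin_index(512) == 36     large bin: m == 1, plus two finer-grained bits
*/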
2207 
2208 /*
2209   FIRST_SORTED_BIN_SIZE is the chunk size corresponding to the
2210   first bin that is maintained in sorted order. This must
2211   be the smallest size corresponding to a given bin.
2212 
2213   Normally, this should be MIN_LARGE_SIZE. But you can weaken
2214   best fit guarantees to sometimes speed up malloc by increasing this value.
2215   Doing this means that malloc may choose a chunk that is
2216   non-best-fitting by up to the width of the bin.
2217 
2218   Some useful cutoff values:
2219       512 - all bins sorted
2220      2560 - leaves bins <=     64 bytes wide unsorted
2221     12288 - leaves bins <=    512 bytes wide unsorted
2222     65536 - leaves bins <=   4096 bytes wide unsorted
2223    262144 - leaves bins <=  32768 bytes wide unsorted
2224        -1 - no bins sorted (not recommended!)
2225 */
2226 
2227 #define FIRST_SORTED_BIN_SIZE MIN_LARGE_SIZE
2228 /* #define FIRST_SORTED_BIN_SIZE 65536 */
2229 
2230 /*
2231   Unsorted chunks
2232 
2233     All remainders from chunk splits, as well as all returned chunks,
2234     are first placed in the "unsorted" bin. They are then placed
2235     in regular bins after malloc gives them ONE chance to be used before
2236     binning. So, basically, the unsorted_chunks list acts as a queue,
2237     with chunks being placed on it in free (and malloc_consolidate),
2238     and taken off (to be either used or placed in bins) in malloc.
2239 */
2240 
2241 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
2242 #define unsorted_chunks(M)          (bin_at(M, 1))
2243 
2244 /*
2245   Top
2246 
2247     The top-most available chunk (i.e., the one bordering the end of
2248     available memory) is treated specially. It is never included in
2249     any bin, is used only if no other chunk is available, and is
2250     released back to the system if it is very large (see
2251     M_TRIM_THRESHOLD).  Because top initially
2252     points to its own bin with initial zero size, thus forcing
2253     extension on the first malloc request, we avoid having any special
2254     code in malloc to check whether it even exists yet. But we still
2255     need to do so when getting memory from system, so we make
2256     initial_top treat the bin as a legal but unusable chunk during the
2257     interval between initialization and the first call to
2258     sYSMALLOc. (This is somewhat delicate, since it relies on
2259     the 2 preceding words to be zero during this interval as well.)
2260 */
2261 
2262 /* Conveniently, the unsorted bin can be used as dummy top on first call */
2263 #define initial_top(M)              (unsorted_chunks(M))
2264 
2265 /*
2266   Binmap
2267 
2268     To help compensate for the large number of bins, a one-level index
2269     structure is used for bin-by-bin searching.  `binmap' is a
2270     bitvector recording whether bins are definitely empty so they can
2271     be skipped over during traversals.  The bits are NOT always
2272     cleared as soon as bins are empty, but instead only
2273     when they are noticed to be empty during traversal in malloc.
2274 */
2275 
2276 /* Conservatively use 32 bits per map word, even if on 64bit system */
2277 #define BINMAPSHIFT      5
2278 #define BITSPERMAP       (1U << BINMAPSHIFT)
2279 #define BINMAPSIZE       (NBINS / BITSPERMAP)
2280 
2281 #define idx2block(i)     ((i) >> BINMAPSHIFT)
2282 #define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
2283 
2284 #define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
2285 #define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
2286 #define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
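
/*
  Example of the binmap arithmetic (a sketch; av is the malloc_state):
  bin 37 lives in map word 37 >> 5 == 1, at bit 37 & 31 == 5, so

    mark_bin(av, 37);    // av->binmap[1] |=  (1U << 5)
    get_binmap(av, 37);  // av->binmap[1] &   (1U << 5)
    unmark_bin(av, 37);  // av->binmap[1] &= ~(1U << 5)
*/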
2287 
2288 /*
2289   Fastbins
2290 
2291     An array of lists holding recently freed small chunks.  Fastbins
2292     are not doubly linked.  It is faster to single-link them, and
2293     since chunks are never removed from the middles of these lists,
2294     double linking is not necessary. Also, unlike regular bins, they
2295     are not even processed in FIFO order (they use faster LIFO) since
2296     ordering doesn't much matter in the transient contexts in which
2297     fastbins are normally used.
2298 
2299     Chunks in fastbins keep their inuse bit set, so they cannot
2300     be consolidated with other free chunks. malloc_consolidate
2301     releases all chunks in fastbins and consolidates them with
2302     other free chunks.
2303 */
2304 
2305 typedef struct malloc_chunk* mfastbinptr;
2306 
2307 /* offset 2 to use otherwise unindexable first 2 bins */
2308 #define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)
2309 
2310 /* The maximum fastbin request size we support */
2311 #define MAX_FAST_SIZE     80
2312 
2313 #define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
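
/*
  Sketch of how fastbins are used (illustrative only; av, p, and nb
  are assumed locals of the callers).  free() pushes onto the front of
  the singly linked list and malloc() pops from the front (LIFO):

    // push, as when freeing a chunk p of fastbin size:
    mfastbinptr* fb = &(av->fastbins[fastbin_index(chunksize(p))]);
    p->fd = *fb;
    *fb   = p;

    // pop, as when servicing a padded request of size nb:
    mfastbinptr* fb2 = &(av->fastbins[fastbin_index(nb)]);
    mchunkptr victim = *fb2;
    if (victim != 0)
      *fb2 = victim->fd;
*/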
2314 
2315 /*
2316   FASTBIN_CONSOLIDATION_THRESHOLD is the size of a chunk in free()
2317   that triggers automatic consolidation of possibly-surrounding
2318   fastbin chunks. This is a heuristic, so the exact value should not
2319   matter too much. It is defined at half the default trim threshold as a
2320   compromise heuristic to only attempt consolidation if it is likely
2321   to lead to trimming. However, it is not dynamically tunable, since
2322   consolidation reduces fragmentation surrounding large chunks even
2323   if trimming is not used.
2324 */
2325 
2326 #define FASTBIN_CONSOLIDATION_THRESHOLD  \
2327   ((unsigned long)(DEFAULT_TRIM_THRESHOLD) >> 1)
2328 
2329 /*
2330   Since the lowest 2 bits in max_fast don't matter in size comparisons,
2331   they are used as flags.
2332 */
2333 
2334 /*
2335   ANYCHUNKS_BIT held in max_fast indicates that there may be any
2336   freed chunks at all. It is set true when entering a chunk into any
2337   bin.
2338 */
2339 
2340 #define ANYCHUNKS_BIT        (1U)
2341 
2342 #define have_anychunks(M)     (((M)->max_fast &  ANYCHUNKS_BIT))
2343 #define set_anychunks(M)      ((M)->max_fast |=  ANYCHUNKS_BIT)
2344 #define clear_anychunks(M)    ((M)->max_fast &= ~ANYCHUNKS_BIT)
2345 
2346 /*
2347   FASTCHUNKS_BIT held in max_fast indicates that there are probably
2348   some fastbin chunks. It is set true on entering a chunk into any
2349   fastbin, and cleared only in malloc_consolidate.
2350 */
2351 
2352 #define FASTCHUNKS_BIT        (2U)
2353 
2354 #define have_fastchunks(M)   (((M)->max_fast &  FASTCHUNKS_BIT))
2355 #define set_fastchunks(M)    ((M)->max_fast |=  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2356 #define clear_fastchunks(M)  ((M)->max_fast &= ~(FASTCHUNKS_BIT))
2357 
2358 /*
2359    Set value of max_fast.
2360    Use impossibly small value if 0.
2361 */
2362 
2363 #define set_max_fast(M, s) \
2364   (M)->max_fast = (((s) == 0)? SMALLBIN_WIDTH: request2size(s)) | \
2365   ((M)->max_fast &  (FASTCHUNKS_BIT|ANYCHUNKS_BIT))
2366 
2367 #define get_max_fast(M) \
2368   ((M)->max_fast & ~(FASTCHUNKS_BIT | ANYCHUNKS_BIT))
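
/*
  Example (assuming 4-byte INTERNAL_SIZE_T and 8-byte MALLOC_ALIGNMENT,
  so request2size(64) == 72):

    set_max_fast(av, 64);    // max_fast = 72 | (existing flag bits)
    get_max_fast(av);        // == 72
    set_fastchunks(av);      // turns on FASTCHUNKS_BIT and ANYCHUNKS_BIT
    have_fastchunks(av);     // nonzero; the size part is unaffected
*/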
2369 
2370 
2371 /*
2372   morecore_properties is a status word holding dynamically discovered
2373   or controlled properties of the morecore function
2374 */
2375 
2376 #define MORECORE_CONTIGUOUS_BIT  (1U)
2377 
2378 #define contiguous(M) \
2379         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT))
2380 #define noncontiguous(M) \
2381         (((M)->morecore_properties &  MORECORE_CONTIGUOUS_BIT) == 0)
2382 #define set_contiguous(M) \
2383         ((M)->morecore_properties |=  MORECORE_CONTIGUOUS_BIT)
2384 #define set_noncontiguous(M) \
2385         ((M)->morecore_properties &= ~MORECORE_CONTIGUOUS_BIT)
2386 
2387 
2388 /*
2389    ----------- Internal state representation and initialization -----------
2390 */
2391 
2392 struct malloc_state {
2393 
2394   /* The maximum chunk size to be eligible for fastbin */
2395   INTERNAL_SIZE_T  max_fast;   /* low 2 bits used as flags */
2396 
2397   /* Fastbins */
2398   mfastbinptr      fastbins[NFASTBINS];
2399 
2400   /* Base of the topmost chunk -- not otherwise kept in a bin */
2401   mchunkptr        top;
2402 
2403   /* The remainder from the most recent split of a small request */
2404   mchunkptr        last_remainder;
2405 
2406   /* Normal bins packed as described above */
2407   mchunkptr        bins[NBINS * 2];
2408 
2409   /* Bitmap of bins. Trailing zero map handles cases of largest binned size */
2410   unsigned int     binmap[BINMAPSIZE+1];
2411 
2412   /* Tunable parameters */
2413   CHUNK_SIZE_T     trim_threshold;
2414   INTERNAL_SIZE_T  top_pad;
2415   INTERNAL_SIZE_T  mmap_threshold;
2416 
2417   /* Memory map support */
2418   int              n_mmaps;
2419   int              n_mmaps_max;
2420   int              max_n_mmaps;
2421 
2422   /* Cache malloc_getpagesize */
2423   unsigned int     pagesize;
2424 
2425   /* Track properties of MORECORE */
2426   unsigned int     morecore_properties;
2427 
2428   /* Statistics */
2429   INTERNAL_SIZE_T  mmapped_mem;
2430   INTERNAL_SIZE_T  sbrked_mem;
2431   INTERNAL_SIZE_T  max_sbrked_mem;
2432   INTERNAL_SIZE_T  max_mmapped_mem;
2433   INTERNAL_SIZE_T  max_total_mem;
2434 };
2435 
2436 typedef struct malloc_state *mstate;
2437 
2438 /*
2439    There is exactly one instance of this struct in this malloc.
2440    If you are adapting this malloc in a way that does NOT use a static
2441    malloc_state, you MUST explicitly zero-fill it before using. This
2442    malloc relies on the property that malloc_state is initialized to
2443    all zeroes (as is true of C statics).
2444 */
2445 
2446 static struct malloc_state av_;  /* never directly referenced */
2447 
2448 /*
2449    All uses of av_ are via get_malloc_state().
2450    At most one "call" to get_malloc_state is made per invocation of
2451    the public versions of malloc and free, but other routines
2452    that in turn invoke malloc and/or free may call it more than once.
2453    Also, it is called in check* routines if DEBUG is set.
2454 */
2455 
2456 #define get_malloc_state() (&(av_))
2457 
2458 /*
2459   Initialize a malloc_state struct.
2460 
2461   This is called only from within malloc_consolidate, which needs to
2462   be called in the same contexts anyway.  It is never called directly
2463   outside of malloc_consolidate because some optimizing compilers try
2464   to inline it at all call points, which turns out not to be an
2465   optimization at all. (Inlining it in malloc_consolidate is fine though.)
2466 */
2467 
2468 #if __STD_C
2469 static void malloc_init_state(mstate av)
2470 #else
2471 static void malloc_init_state(av) mstate av;
2472 #endif
2473 {
2474   int     i;
2475   mbinptr bin;
2476 
2477   /* Establish circular links for normal bins */
2478   for (i = 1; i < NBINS; ++i) {
2479     bin = bin_at(av,i);
2480     bin->fd = bin->bk = bin;
2481   }
2482 
2483   av->top_pad        = DEFAULT_TOP_PAD;
2484   av->n_mmaps_max    = DEFAULT_MMAP_MAX;
2485   av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2486   av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
2487 
2488 #if MORECORE_CONTIGUOUS
2489   set_contiguous(av);
2490 #else
2491   set_noncontiguous(av);
2492 #endif
2493 
2494 
2495   set_max_fast(av, DEFAULT_MXFAST);
2496 
2497   av->top            = initial_top(av);
2498   av->pagesize       = malloc_getpagesize;
2499 }
2500 
2501 /*
2502    Other internal utilities operating on mstates
2503 */
2504 
2505 #if __STD_C
2506 static Void_t*  sYSMALLOc(INTERNAL_SIZE_T, mstate);
2507 static int      sYSTRIm(size_t, mstate);
2508 static void     malloc_consolidate(mstate);
2509 static Void_t** iALLOc(size_t, size_t*, int, Void_t**);
2510 #else
2511 static Void_t*  sYSMALLOc();
2512 static int      sYSTRIm();
2513 static void     malloc_consolidate();
2514 static Void_t** iALLOc();
2515 #endif
2516 
2517 /*
2518   Debugging support
2519 
2520   These routines make a number of assertions about the states
2521   of data structures that should be true at all times. If any
2522   are not true, it's very likely that a user program has somehow
2523   trashed memory. (It's also possible that there is a coding error
2524   in malloc. In which case, please report it!)
2525 */
2526 
2527 #if ! DEBUG
2528 
2529 #define check_chunk(P)
2530 #define check_free_chunk(P)
2531 #define check_inuse_chunk(P)
2532 #define check_remalloced_chunk(P,N)
2533 #define check_malloced_chunk(P,N)
2534 #define check_malloc_state()
2535 
2536 #else
2537 #define check_chunk(P)              do_check_chunk(P)
2538 #define check_free_chunk(P)         do_check_free_chunk(P)
2539 #define check_inuse_chunk(P)        do_check_inuse_chunk(P)
2540 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
2541 #define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
2542 #define check_malloc_state()        do_check_malloc_state()
2543 
2544 /*
2545   Properties of all chunks
2546 */
2547 
2548 #if __STD_C
2549 static void do_check_chunk(mchunkptr p)
2550 #else
2551 static void do_check_chunk(p) mchunkptr p;
2552 #endif
2553 {
2554   mstate av = get_malloc_state();
2555   CHUNK_SIZE_T  sz = chunksize(p);
2556   /* min and max possible addresses assuming contiguous allocation */
2557   char* max_address = (char*)(av->top) + chunksize(av->top);
2558   char* min_address = max_address - av->sbrked_mem;
2559 
2560   if (!chunk_is_mmapped(p)) {
2561 
2562     /* Has legal address ... */
2563     if (p != av->top) {
2564       if (contiguous(av)) {
2565         assert(((char*)p) >= min_address);
2566         assert(((char*)p + sz) <= ((char*)(av->top)));
2567       }
2568     }
2569     else {
2570       /* top size is always at least MINSIZE */
2571       assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2572       /* top predecessor always marked inuse */
2573       assert(prev_inuse(p));
2574     }
2575 
2576   }
2577   else {
2578 #if HAVE_MMAP
2579     /* address is outside main heap  */
2580     if (contiguous(av) && av->top != initial_top(av)) {
2581       assert(((char*)p) < min_address || ((char*)p) > max_address);
2582     }
2583     /* chunk is page-aligned */
2584     assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
2585     /* mem is aligned */
2586     assert(aligned_OK(chunk2mem(p)));
2587 #else
2588     /* force an appropriate assert violation if debug set */
2589     assert(!chunk_is_mmapped(p));
2590 #endif
2591   }
2592 }
2593 
2594 /*
2595   Properties of free chunks
2596 */
2597 
2598 #if __STD_C
2599 static void do_check_free_chunk(mchunkptr p)
2600 #else
2601 static void do_check_free_chunk(p) mchunkptr p;
2602 #endif
2603 {
2604   mstate av = get_malloc_state();
2605 
2606   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2607   mchunkptr next = chunk_at_offset(p, sz);
2608 
2609   do_check_chunk(p);
2610 
2611   /* Chunk must claim to be free ... */
2612   assert(!inuse(p));
2613   assert (!chunk_is_mmapped(p));
2614 
2615   /* Unless a special marker, must have OK fields */
2616   if ((CHUNK_SIZE_T)(sz) >= MINSIZE)
2617   {
2618     assert((sz & MALLOC_ALIGN_MASK) == 0);
2619     assert(aligned_OK(chunk2mem(p)));
2620     /* ... matching footer field */
2621     assert(next->prev_size == sz);
2622     /* ... and is fully consolidated */
2623     assert(prev_inuse(p));
2624     assert (next == av->top || inuse(next));
2625 
2626     /* ... and has minimally sane links */
2627     assert(p->fd->bk == p);
2628     assert(p->bk->fd == p);
2629   }
2630   else /* markers are always of size SIZE_SZ */
2631     assert(sz == SIZE_SZ);
2632 }
2633 
2634 /*
2635   Properties of inuse chunks
2636 */
2637 
2638 #if __STD_C
2639 static void do_check_inuse_chunk(mchunkptr p)
2640 #else
2641 static void do_check_inuse_chunk(p) mchunkptr p;
2642 #endif
2643 {
2644   mstate av = get_malloc_state();
2645   mchunkptr next;
2646   do_check_chunk(p);
2647 
2648   if (chunk_is_mmapped(p))
2649     return; /* mmapped chunks have no next/prev */
2650 
2651   /* Check whether it claims to be in use ... */
2652   assert(inuse(p));
2653 
2654   next = next_chunk(p);
2655 
2656   /* ... and is surrounded by OK chunks.
2657     Since more things can be checked with free chunks than inuse ones,
2658     if an inuse chunk borders them and debug is on, it's worth doing them.
2659   */
2660   if (!prev_inuse(p))  {
2661     /* Note that we cannot even look at prev unless it is not inuse */
2662     mchunkptr prv = prev_chunk(p);
2663     assert(next_chunk(prv) == p);
2664     do_check_free_chunk(prv);
2665   }
2666 
2667   if (next == av->top) {
2668     assert(prev_inuse(next));
2669     assert(chunksize(next) >= MINSIZE);
2670   }
2671   else if (!inuse(next))
2672     do_check_free_chunk(next);
2673 }
2674 
2675 /*
2676   Properties of chunks recycled from fastbins
2677 */
2678 
2679 #if __STD_C
2680 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2681 #else
2682 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2683 #endif
2684 {
2685   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
2686 
2687   do_check_inuse_chunk(p);
2688 
2689   /* Legal size ... */
2690   assert((sz & MALLOC_ALIGN_MASK) == 0);
2691   assert((CHUNK_SIZE_T)(sz) >= MINSIZE);
2692   /* ... and alignment */
2693   assert(aligned_OK(chunk2mem(p)));
2694   /* chunk is less than MINSIZE more than request */
2695   assert((long)(sz) - (long)(s) >= 0);
2696   assert((long)(sz) - (long)(s + MINSIZE) < 0);
2697 }
2698 
2699 /*
2700   Properties of nonrecycled chunks at the point they are malloced
2701 */
2702 
2703 #if __STD_C
2704 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
2705 #else
2706 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
2707 #endif
2708 {
2709   /* same as recycled case ... */
2710   do_check_remalloced_chunk(p, s);
2711 
2712   /*
2713     ... plus,  must obey implementation invariant that prev_inuse is
2714     always true of any allocated chunk; i.e., that each allocated
2715     chunk borders either a previously allocated and still in-use
2716     chunk, or the base of its memory arena. This is ensured
2717     by making all allocations from the `lowest' part of any found
2718     chunk.  This does not necessarily hold however for chunks
2719     recycled via fastbins.
2720   */
2721 
2722   assert(prev_inuse(p));
2723 }
2724 
2725 
2726 /*
2727   Properties of malloc_state.
2728 
2729   This may be useful for debugging malloc, as well as detecting user
2730   programming errors that somehow write into malloc_state.
2731 
2732   If you are extending or experimenting with this malloc, you can
2733   probably figure out how to hack this routine to print out or
2734   display chunk addresses, sizes, bins, and other instrumentation.
2735 */
2736 
2737 static void do_check_malloc_state()
2738 {
2739   mstate av = get_malloc_state();
2740   int i;
2741   mchunkptr p;
2742   mchunkptr q;
2743   mbinptr b;
2744   unsigned int binbit;
2745   int empty;
2746   unsigned int idx;
2747   INTERNAL_SIZE_T size;
2748   CHUNK_SIZE_T  total = 0;
2749   int max_fast_bin;
2750 
2751   /* internal size_t must be no wider than pointer type */
2752   assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
2753 
2754   /* alignment is a power of 2 */
2755   assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
2756 
2757   /* cannot run remaining checks until fully initialized */
2758   if (av->top == 0 || av->top == initial_top(av))
2759     return;
2760 
2761   /* pagesize is a power of 2 */
2762   assert((av->pagesize & (av->pagesize-1)) == 0);
2763 
2764   /* properties of fastbins */
2765 
2766   /* max_fast is in allowed range */
2767   assert(get_max_fast(av) <= request2size(MAX_FAST_SIZE));
2768 
2769   max_fast_bin = fastbin_index(av->max_fast);
2770 
2771   for (i = 0; i < NFASTBINS; ++i) {
2772     p = av->fastbins[i];
2773 
2774     /* all bins past max_fast are empty */
2775     if (i > max_fast_bin)
2776       assert(p == 0);
2777 
2778     while (p != 0) {
2779       /* each chunk claims to be inuse */
2780       do_check_inuse_chunk(p);
2781       total += chunksize(p);
2782       /* chunk belongs in this bin */
2783       assert(fastbin_index(chunksize(p)) == i);
2784       p = p->fd;
2785     }
2786   }
2787 
2788   if (total != 0)
2789     assert(have_fastchunks(av));
2790   else if (!have_fastchunks(av))
2791     assert(total == 0);
2792 
2793   /* check normal bins */
2794   for (i = 1; i < NBINS; ++i) {
2795     b = bin_at(av,i);
2796 
2797     /* binmap is accurate (except for bin 1 == unsorted_chunks) */
2798     if (i >= 2) {
2799       binbit = get_binmap(av,i);
2800       empty = last(b) == b;
2801       if (!binbit)
2802         assert(empty);
2803       else if (!empty)
2804         assert(binbit);
2805     }
2806 
2807     for (p = last(b); p != b; p = p->bk) {
2808       /* each chunk claims to be free */
2809       do_check_free_chunk(p);
2810       size = chunksize(p);
2811       total += size;
2812       if (i >= 2) {
2813         /* chunk belongs in bin */
2814         idx = bin_index(size);
2815         assert(idx == i);
2816         /* lists are sorted */
2817         if ((CHUNK_SIZE_T) size >= (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
2818           assert(p->bk == b ||
2819                  (CHUNK_SIZE_T)chunksize(p->bk) >=
2820                  (CHUNK_SIZE_T)chunksize(p));
2821         }
2822       }
2823       /* chunk is followed by a legal chain of inuse chunks */
2824       for (q = next_chunk(p);
2825            (q != av->top && inuse(q) &&
2826              (CHUNK_SIZE_T)(chunksize(q)) >= MINSIZE);
2827            q = next_chunk(q))
2828         do_check_inuse_chunk(q);
2829     }
2830   }
2831 
2832   /* top chunk is OK */
2833   check_chunk(av->top);
2834 
2835   /* sanity checks for statistics */
2836 
2837   assert(total <= (CHUNK_SIZE_T)(av->max_total_mem));
2838   assert(av->n_mmaps >= 0);
2839   assert(av->n_mmaps <= av->max_n_mmaps);
2840 
2841   assert((CHUNK_SIZE_T)(av->sbrked_mem) <=
2842          (CHUNK_SIZE_T)(av->max_sbrked_mem));
2843 
2844   assert((CHUNK_SIZE_T)(av->mmapped_mem) <=
2845          (CHUNK_SIZE_T)(av->max_mmapped_mem));
2846 
2847   assert((CHUNK_SIZE_T)(av->max_total_mem) >=
2848          (CHUNK_SIZE_T)(av->mmapped_mem) + (CHUNK_SIZE_T)(av->sbrked_mem));
2849 }
2850 #endif
2851 
2852 
2853 /* ----------- Routines dealing with system allocation -------------- */
2854 
2855 /*
2856   sysmalloc handles malloc cases requiring more memory from the system.
2857   On entry, it is assumed that av->top does not have enough
2858   space to service request for nb bytes, thus requiring that av->top
2859   be extended or replaced.
2860 */
2861 
2862 #if __STD_C
2863 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
2864 #else
2865 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
2866 #endif
2867 {
2868   mchunkptr       old_top;        /* incoming value of av->top */
2869   INTERNAL_SIZE_T old_size;       /* its size */
2870   char*           old_end;        /* its end address */
2871 
2872   long            size;           /* arg to first MORECORE or mmap call */
2873   char*           brk;            /* return value from MORECORE */
2874 
2875   long            correction;     /* arg to 2nd MORECORE call */
2876   char*           snd_brk;        /* 2nd return val */
2877 
2878   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
2879   INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
2880   char*           aligned_brk;    /* aligned offset into brk */
2881 
2882   mchunkptr       p;              /* the allocated/returned chunk */
2883   mchunkptr       remainder;      /* remainder from allocation */
2884   CHUNK_SIZE_T    remainder_size; /* its size */
2885 
2886   CHUNK_SIZE_T    sum;            /* for updating stats */
2887 
2888   size_t          pagemask  = av->pagesize - 1;
2889 
2890   /*
2891     If there is space available in fastbins, consolidate and retry
2892     malloc from scratch rather than getting memory from system.  This
2893     can occur only if nb is in smallbin range so we didn't consolidate
2894     upon entry to malloc. It is much easier to handle this case here
2895     than in malloc proper.
2896   */
2897 
2898   if (have_fastchunks(av)) {
2899     assert(in_smallbin_range(nb));
2900     malloc_consolidate(av);
2901     return mALLOc(nb - MALLOC_ALIGN_MASK);
2902   }
2903 
2904 
2905 #if HAVE_MMAP
2906 
2907   /*
2908     If have mmap, and the request size meets the mmap threshold, and
2909     the system supports mmap, and there are few enough currently
2910     allocated mmapped regions, try to directly map this request
2911     rather than expanding top.
2912   */
2913 
2914   if ((CHUNK_SIZE_T)(nb) >= (CHUNK_SIZE_T)(av->mmap_threshold) &&
2915       (av->n_mmaps < av->n_mmaps_max)) {
2916 
2917     char* mm;             /* return value from mmap call*/
2918 
2919     /*
2920       Round up size to nearest page.  For mmapped chunks, the overhead
2921       is one SIZE_SZ unit larger than for normal chunks, because there
2922       is no following chunk whose prev_size field could be used.
2923     */
2924     size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
2925 
2926     /* Don't try if size wraps around 0 */
2927     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
2928 
2929       mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
2930 
2931       if (mm != (char*)(MORECORE_FAILURE)) {
2932 
2933         /*
2934           The offset to the start of the mmapped region is stored
2935           in the prev_size field of the chunk. This allows us to adjust
2936           returned start address to meet alignment requirements here
2937           and in memalign(), and still be able to compute proper
2938           address argument for later munmap in free() and realloc().
2939         */
2940 
2941         front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
2942         if (front_misalign > 0) {
2943           correction = MALLOC_ALIGNMENT - front_misalign;
2944           p = (mchunkptr)(mm + correction);
2945           p->prev_size = correction;
2946           set_head(p, (size - correction) |IS_MMAPPED);
2947         }
2948         else {
2949           p = (mchunkptr)mm;
2950           p->prev_size = 0;
2951           set_head(p, size|IS_MMAPPED);
2952         }
2953 
2954         /* update statistics */
2955 
2956         if (++av->n_mmaps > av->max_n_mmaps)
2957           av->max_n_mmaps = av->n_mmaps;
2958 
2959         sum = av->mmapped_mem += size;
2960         if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
2961           av->max_mmapped_mem = sum;
2962         sum += av->sbrked_mem;
2963         if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
2964           av->max_total_mem = sum;
2965 
2966         check_chunk(p);
2967 
2968         return chunk2mem(p);
2969       }
2970     }
2971   }
2972 #endif
2973 
2974   /* Record incoming configuration of top */
2975 
2976   old_top  = av->top;
2977   old_size = chunksize(old_top);
2978   old_end  = (char*)(chunk_at_offset(old_top, old_size));
2979 
2980   brk = snd_brk = (char*)(MORECORE_FAILURE);
2981 
2982   /*
2983      If not the first time through, we require old_size to be
2984      at least MINSIZE and to have prev_inuse set.
2985   */
2986 
2987   assert((old_top == initial_top(av) && old_size == 0) ||
2988          ((CHUNK_SIZE_T) (old_size) >= MINSIZE &&
2989           prev_inuse(old_top)));
2990 
2991   /* Precondition: not enough current space to satisfy nb request */
2992   assert((CHUNK_SIZE_T)(old_size) < (CHUNK_SIZE_T)(nb + MINSIZE));
2993 
2994   /* Precondition: all fastbins are consolidated */
2995   assert(!have_fastchunks(av));
2996 
2997 
2998   /* Request enough space for nb + pad + overhead */
2999 
3000   size = nb + av->top_pad + MINSIZE;
3001 
3002   /*
3003     If contiguous, we can subtract out existing space that we hope to
3004     combine with new space. We add it back later only if
3005     we don't actually get contiguous space.
3006   */
3007 
3008   if (contiguous(av))
3009     size -= old_size;
3010 
3011   /*
3012     Round to a multiple of page size.
3013     If MORECORE is not contiguous, this ensures that we only call it
3014     with whole-page arguments.  And if MORECORE is contiguous and
3015     this is not first time through, this preserves page-alignment of
3016     previous calls. Otherwise, we correct to page-align below.
3017   */
3018 
3019   size = (size + pagemask) & ~pagemask;
3020 
3021   /*
3022     Don't try to call MORECORE if argument is so big as to appear
3023     negative. Note that since mmap takes size_t arg, it may succeed
3024     below even if we cannot call MORECORE.
3025   */
3026 
3027   if (size > 0)
3028     brk = (char*)(MORECORE(size));
3029 
3030   /*
3031     If mmap is available, try using it as a backup when MORECORE fails or
3032     cannot be used. This is worth doing on systems that have "holes" in
3033     address space, so sbrk cannot extend to give contiguous space, but
3034     space is available elsewhere.  Note that we ignore mmap max count
3035     and threshold limits, since the space will not be used as a
3036     segregated mmap region.
3037   */
3038 
3039 #if HAVE_MMAP
3040   if (brk == (char*)(MORECORE_FAILURE)) {
3041 
3042     /* Cannot merge with old top, so add its size back in */
3043     if (contiguous(av))
3044       size = (size + old_size + pagemask) & ~pagemask;
3045 
3046     /* If we are relying on mmap as backup, then use larger units */
3047     if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(MMAP_AS_MORECORE_SIZE))
3048       size = MMAP_AS_MORECORE_SIZE;
3049 
3050     /* Don't try if size wraps around 0 */
3051     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb)) {
3052 
3053       brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
3054 
3055       if (brk != (char*)(MORECORE_FAILURE)) {
3056 
3057         /* We do not need, and cannot use, another sbrk call to find end */
3058         snd_brk = brk + size;
3059 
3060         /*
3061            Record that we no longer have a contiguous sbrk region.
3062            After the first time mmap is used as backup, we do not
3063            ever rely on contiguous space since this could incorrectly
3064            bridge regions.
3065         */
3066         set_noncontiguous(av);
3067       }
3068     }
3069   }
3070 #endif
3071 
3072   if (brk != (char*)(MORECORE_FAILURE)) {
3073     av->sbrked_mem += size;
3074 
3075     /*
3076       If MORECORE extends previous space, we can likewise extend top size.
3077     */
3078 
3079     if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
3080       set_head(old_top, (size + old_size) | PREV_INUSE);
3081     }
3082 
3083     /*
3084       Otherwise, make adjustments:
3085 
3086       * If the first time through or noncontiguous, we need to call sbrk
3087         just to find out where the end of memory lies.
3088 
3089       * We need to ensure that all returned chunks from malloc will meet
3090         MALLOC_ALIGNMENT
3091 
3092       * If there was an intervening foreign sbrk, we need to adjust sbrk
3093         request size to account for fact that we will not be able to
3094         combine new space with existing space in old_top.
3095 
3096       * Almost all systems internally allocate whole pages at a time, in
3097         which case we might as well use the whole last page of request.
3098         So we allocate enough more memory to hit a page boundary now,
3099         which in turn causes future contiguous calls to page-align.
3100     */
3101 
3102     else {
3103       front_misalign = 0;
3104       end_misalign = 0;
3105       correction = 0;
3106       aligned_brk = brk;
3107 
3108       /*
3109         If MORECORE returns an address lower than we have seen before,
3110         we know it isn't really contiguous.  This and some subsequent
3111         checks help cope with non-conforming MORECORE functions and
3112         the presence of "foreign" calls to MORECORE from outside of
3113         malloc or by other threads.  We cannot guarantee to detect
3114         these in all cases, but cope with the ones we do detect.
3115       */
3116       if (contiguous(av) && old_size != 0 && brk < old_end) {
3117         set_noncontiguous(av);
3118       }
3119 
3120       /* handle contiguous cases */
3121       if (contiguous(av)) {
3122 
3123         /*
3124            We can tolerate forward non-contiguities here (usually due
3125            to foreign calls) but treat them as part of our space for
3126            stats reporting.
3127         */
3128         if (old_size != 0)
3129           av->sbrked_mem += brk - old_end;
3130 
3131         /* Guarantee alignment of first new chunk made from this space */
3132 
3133         front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
3134         if (front_misalign > 0) {
3135 
3136           /*
3137             Skip over some bytes to arrive at an aligned position.
3138             We don't need to specially mark these wasted front bytes.
3139             They will never be accessed anyway because
3140             prev_inuse of av->top (and any chunk created from its start)
3141             is always true after initialization.
3142           */
3143 
3144           correction = MALLOC_ALIGNMENT - front_misalign;
3145           aligned_brk += correction;
3146         }
3147 
3148         /*
3149           If this isn't adjacent to existing space, then we will not
3150           be able to merge with old_top space, so must add to 2nd request.
3151         */
3152 
3153         correction += old_size;
3154 
3155         /* Extend the end address to hit a page boundary */
3156         end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
3157         correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
3158 
3159         assert(correction >= 0);
3160         snd_brk = (char*)(MORECORE(correction));
3161 
3162         if (snd_brk == (char*)(MORECORE_FAILURE)) {
3163           /*
3164             If can't allocate correction, try to at least find out current
3165             brk.  It might be enough to proceed without failing.
3166           */
3167           correction = 0;
3168           snd_brk = (char*)(MORECORE(0));
3169         }
3170         else if (snd_brk < brk) {
3171           /*
3172             If the second call gives noncontiguous space even though
3173             it says it won't, the only course of action is to ignore
3174             results of second call, and conservatively estimate where
3175             the first call left us. Also set noncontiguous, so this
3176             won't happen again, leaving at most one hole.
3177 
3178             Note that this check is intrinsically incomplete.  Because
3179             MORECORE is allowed to give more space than we ask for,
3180             there is no reliable way to detect a noncontiguity
3181             producing a forward gap for the second call.
3182           */
3183           snd_brk = brk + size;
3184           correction = 0;
3185           set_noncontiguous(av);
3186         }
3187 
3188       }
3189 
3190       /* handle non-contiguous cases */
3191       else {
3192         /* MORECORE/mmap must correctly align */
3193         assert(aligned_OK(chunk2mem(brk)));
3194 
3195         /* Find out current end of memory */
3196         if (snd_brk == (char*)(MORECORE_FAILURE)) {
3197           snd_brk = (char*)(MORECORE(0));
3198           av->sbrked_mem += snd_brk - brk - size;
3199         }
3200       }
3201 
3202       /* Adjust top based on results of second sbrk */
3203       if (snd_brk != (char*)(MORECORE_FAILURE)) {
3204         av->top = (mchunkptr)aligned_brk;
3205         set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
3206         av->sbrked_mem += correction;
3207 
3208         /*
3209           If not the first time through, we either have a
3210           gap due to foreign sbrk or a non-contiguous region.  Insert a
3211           double fencepost at old_top to prevent consolidation with space
3212           we don't own. These fenceposts are artificial chunks that are
3213           marked as inuse and are in any case too small to use.  We need
3214           two to make sizes and alignments work out.
3215         */
3216 
3217         if (old_size != 0) {
3218           /*
3219              Shrink old_top to insert fenceposts, keeping size a
3220              multiple of MALLOC_ALIGNMENT. We know there is at least
3221              enough space in old_top to do this.
3222           */
3223           old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
3224           set_head(old_top, old_size | PREV_INUSE);
3225 
3226           /*
3227             Note that the following assignments completely overwrite
3228             old_top when old_size was previously MINSIZE.  This is
3229             intentional. We need the fencepost, even if old_top otherwise gets
3230             lost.
3231           */
3232           chunk_at_offset(old_top, old_size          )->size =
3233             SIZE_SZ|PREV_INUSE;
3234 
3235           chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
3236             SIZE_SZ|PREV_INUSE;
3237 
3238           /*
3239              If possible, release the rest, suppressing trimming.
3240           */
3241           if (old_size >= MINSIZE) {
3242             INTERNAL_SIZE_T tt = av->trim_threshold;
3243             av->trim_threshold = (INTERNAL_SIZE_T)(-1);
3244             fREe(chunk2mem(old_top));
3245             av->trim_threshold = tt;
3246           }
3247         }
3248       }
3249     }
3250 
3251     /* Update statistics */
3252     sum = av->sbrked_mem;
3253     if (sum > (CHUNK_SIZE_T)(av->max_sbrked_mem))
3254       av->max_sbrked_mem = sum;
3255 
3256     sum += av->mmapped_mem;
3257     if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
3258       av->max_total_mem = sum;
3259 
3260     check_malloc_state();
3261 
3262     /* finally, do the allocation */
3263 
3264     p = av->top;
3265     size = chunksize(p);
3266 
3267     /* check that one of the above allocation paths succeeded */
3268     if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3269       remainder_size = size - nb;
3270       remainder = chunk_at_offset(p, nb);
3271       av->top = remainder;
3272       set_head(p, nb | PREV_INUSE);
3273       set_head(remainder, remainder_size | PREV_INUSE);
3274       check_malloced_chunk(p, nb);
3275       return chunk2mem(p);
3276     }
3277 
3278   }
3279 
3280   /* catch all failure paths */
3281   MALLOC_FAILURE_ACTION;
3282   return 0;
3283 }
3284 
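/*
  Aside (illustration only, not part of the allocator): the rounding
  idiom used repeatedly above, size = (size + pagemask) & ~pagemask,
  rounds up to the next multiple of the page size because pagemask is
  pagesize - 1 and pagesize is a power of two.  A self-contained check,
  assuming a 4096-byte page; the helper name is made up:

    #include <assert.h>

    static void round_up_demo(void)
    {
      unsigned long pagemask = 4096 - 1;                  // assumed page size
      assert(((4096UL + pagemask) & ~pagemask) == 4096);
      assert(((4097UL + pagemask) & ~pagemask) == 8192);
      assert(((1UL    + pagemask) & ~pagemask) == 4096);
    }
*/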
3285 
3286 
3287 
3288 /*
3289   sYSTRIm is an inverse of sorts to sYSMALLOc.  It gives memory back
3290   to the system (via negative arguments to sbrk) if there is unused
3291   memory at the `high' end of the malloc pool. It is called
3292   automatically by free() when top space exceeds the trim
3293   threshold. It is also called by the public malloc_trim routine.  It
3294   returns 1 if it actually released any memory, else 0.
3295 */
3296 
3297 #if __STD_C
3298 static int sYSTRIm(size_t pad, mstate av)
3299 #else
3300 static int sYSTRIm(pad, av) size_t pad; mstate av;
3301 #endif
3302 {
3303   long  top_size;        /* Amount of top-most memory */
3304   long  extra;           /* Amount to release */
3305   long  released;        /* Amount actually released */
3306   char* current_brk;     /* address returned by pre-check sbrk call */
3307   char* new_brk;         /* address returned by post-check sbrk call */
3308   size_t pagesz;
3309 
3310   pagesz = av->pagesize;
3311   top_size = chunksize(av->top);
3312 
3313   /* Release in pagesize units, keeping at least one page */
3314   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3315 
3316   if (extra > 0) {
3317 
3318     /*
3319       Only proceed if end of memory is where we last set it.
3320       This avoids problems if there were foreign sbrk calls.
3321     */
3322     current_brk = (char*)(MORECORE(0));
3323     if (current_brk == (char*)(av->top) + top_size) {
3324 
3325       /*
3326         Attempt to release memory. We ignore MORECORE return value,
3327         and instead call again to find out where new end of memory is.
3328         This avoids problems if the first call releases less than we asked,
3329         or if failure somehow altered the brk value. (We could still
3330         encounter problems if it altered brk in some very bad way,
3331         but the only thing we can do is adjust anyway, which will cause
3332         some downstream failure.)
3333       */
3334 
3335       MORECORE(-extra);
3336       new_brk = (char*)(MORECORE(0));
3337 
3338       if (new_brk != (char*)MORECORE_FAILURE) {
3339         released = (long)(current_brk - new_brk);
3340 
3341         if (released != 0) {
3342           /* Success. Adjust top. */
3343           av->sbrked_mem -= released;
3344           set_head(av->top, (top_size - released) | PREV_INUSE);
3345           check_malloc_state();
3346           return 1;
3347         }
3348       }
3349     }
3350   }
3351   return 0;
3352 }
3353 
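/*
  Worked example (illustration only) of the release computation in
  sYSTRIm above, assuming a 4096-byte page, a MINSIZE of 16, and a pad
  of 0: a 20000-byte top chunk lets four whole pages go back to the
  system while keeping pad + MINSIZE plus part of a page in top.  The
  helper name is made up for this sketch.

    #include <assert.h>

    static void trim_arith_demo(void)
    {
      long pagesz   = 4096;     // assumed page size
      long minsize  = 16;       // assumed MINSIZE, for illustration
      long pad      = 0;
      long top_size = 20000;    // hypothetical chunksize(av->top)

      long extra = ((top_size - pad - minsize + (pagesz-1)) / pagesz - 1) * pagesz;

      assert(extra == 16384);                     // four pages are releasable
      assert(top_size - extra >= pad + minsize);  // enough remains in top
    }
*/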
3354 /*
3355   ------------------------------ malloc ------------------------------
3356 */
3357 
3358 
3359 #if __STD_C
3360 Void_t* mALLOc(size_t bytes)
3361 #else
3362   Void_t* mALLOc(bytes) size_t bytes;
3363 #endif
3364 {
3365   mstate av = get_malloc_state();
3366 
3367   INTERNAL_SIZE_T nb;               /* normalized request size */
3368   unsigned int    idx;              /* associated bin index */
3369   mbinptr         bin;              /* associated bin */
3370   mfastbinptr*    fb;               /* associated fastbin */
3371 
3372   mchunkptr       victim;           /* inspected/selected chunk */
3373   INTERNAL_SIZE_T size;             /* its size */
3374   int             victim_index;     /* its bin index */
3375 
3376   mchunkptr       remainder;        /* remainder from a split */
3377   CHUNK_SIZE_T    remainder_size;   /* its size */
3378 
3379   unsigned int    block;            /* bit map traverser */
3380   unsigned int    bit;              /* bit map traverser */
3381   unsigned int    map;              /* current word of binmap */
3382 
3383   mchunkptr       fwd;              /* misc temp for linking */
3384   mchunkptr       bck;              /* misc temp for linking */
3385 
3386   /*
3387     Convert request size to internal form by adding SIZE_SZ bytes
3388     overhead plus possibly more to obtain necessary alignment and/or
3389     to obtain a size of at least MINSIZE, the smallest allocatable
3390     size. Also, checked_request2size traps (returning 0) request sizes
3391     that are so large that they wrap around zero when padded and
3392     aligned.
3393   */
3394 
3395   checked_request2size(bytes, nb);
3396 
3397   /*
3398     Bypass search if no frees yet
3399    */
3400   if (!have_anychunks(av)) {
3401     if (av->max_fast == 0) /* initialization check */
3402       malloc_consolidate(av);
3403     goto use_top;
3404   }
3405 
3406   /*
3407     If the size qualifies as a fastbin, first check corresponding bin.
3408   */
3409 
3410   if ((CHUNK_SIZE_T)(nb) <= (CHUNK_SIZE_T)(av->max_fast)) {
3411     fb = &(av->fastbins[(fastbin_index(nb))]);
3412     if ( (victim = *fb) != 0) {
3413       *fb = victim->fd;
3414       check_remalloced_chunk(victim, nb);
3415       return chunk2mem(victim);
3416     }
3417   }
3418 
3419   /*
3420     If a small request, check regular bin.  Since these "smallbins"
3421     hold one size each, no searching within bins is necessary.
3422     (For a large request, we need to wait until unsorted chunks are
3423     processed to find best fit. But for small ones, fits are exact
3424     anyway, so we can check now, which is faster.)
3425   */
3426 
3427   if (in_smallbin_range(nb)) {
3428     idx = smallbin_index(nb);
3429     bin = bin_at(av,idx);
3430 
3431     if ( (victim = last(bin)) != bin) {
3432       bck = victim->bk;
3433       set_inuse_bit_at_offset(victim, nb);
3434       bin->bk = bck;
3435       bck->fd = bin;
3436 
3437       check_malloced_chunk(victim, nb);
3438       return chunk2mem(victim);
3439     }
3440   }
3441 
3442   /*
3443      If this is a large request, consolidate fastbins before continuing.
3444      While it might look excessive to kill all fastbins before
3445      even seeing if there is space available, this avoids
3446      fragmentation problems normally associated with fastbins.
3447      Also, in practice, programs tend to have runs of either small or
3448      large requests, but less often mixtures, so consolidation is not
3449      invoked all that often in most programs. And the programs in
3450      which it is called frequently otherwise tend to fragment.
3451   */
3452 
3453   else {
3454     idx = largebin_index(nb);
3455     if (have_fastchunks(av))
3456       malloc_consolidate(av);
3457   }
3458 
3459   /*
3460     Process recently freed or remaindered chunks, taking one only if
3461     it is an exact fit, or, if this is a small request, the chunk is the remainder from
3462     the most recent non-exact fit.  Place other traversed chunks in
3463     bins.  Note that this step is the only place in any routine where
3464     chunks are placed in bins.
3465   */
3466 
3467   while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
3468     bck = victim->bk;
3469     size = chunksize(victim);
3470 
3471     /*
3472        If a small request, try to use last remainder if it is the
3473        only chunk in unsorted bin.  This helps promote locality for
3474        runs of consecutive small requests. This is the only
3475        exception to best-fit, and applies only when there is
3476        no exact fit for a small chunk.
3477     */
3478 
3479     if (in_smallbin_range(nb) &&
3480         bck == unsorted_chunks(av) &&
3481         victim == av->last_remainder &&
3482         (CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
3483 
3484       /* split and reattach remainder */
3485       remainder_size = size - nb;
3486       remainder = chunk_at_offset(victim, nb);
3487       unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3488       av->last_remainder = remainder;
3489       remainder->bk = remainder->fd = unsorted_chunks(av);
3490 
3491       set_head(victim, nb | PREV_INUSE);
3492       set_head(remainder, remainder_size | PREV_INUSE);
3493       set_foot(remainder, remainder_size);
3494 
3495       check_malloced_chunk(victim, nb);
3496       return chunk2mem(victim);
3497     }
3498 
3499     /* remove from unsorted list */
3500     unsorted_chunks(av)->bk = bck;
3501     bck->fd = unsorted_chunks(av);
3502 
3503     /* Take now instead of binning if exact fit */
3504 
3505     if (size == nb) {
3506       set_inuse_bit_at_offset(victim, size);
3507       check_malloced_chunk(victim, nb);
3508       return chunk2mem(victim);
3509     }
3510 
3511     /* place chunk in bin */
3512 
3513     if (in_smallbin_range(size)) {
3514       victim_index = smallbin_index(size);
3515       bck = bin_at(av, victim_index);
3516       fwd = bck->fd;
3517     }
3518     else {
3519       victim_index = largebin_index(size);
3520       bck = bin_at(av, victim_index);
3521       fwd = bck->fd;
3522 
3523       if (fwd != bck) {
3524         /* if smaller than smallest, place first */
3525         if ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(bck->bk->size)) {
3526           fwd = bck;
3527           bck = bck->bk;
3528         }
3529         else if ((CHUNK_SIZE_T)(size) >=
3530                  (CHUNK_SIZE_T)(FIRST_SORTED_BIN_SIZE)) {
3531 
3532           /* maintain large bins in sorted order */
3533           size |= PREV_INUSE; /* Or with inuse bit to speed comparisons */
3534           while ((CHUNK_SIZE_T)(size) < (CHUNK_SIZE_T)(fwd->size))
3535             fwd = fwd->fd;
3536           bck = fwd->bk;
3537         }
3538       }
3539     }
3540 
3541     mark_bin(av, victim_index);
3542     victim->bk = bck;
3543     victim->fd = fwd;
3544     fwd->bk = victim;
3545     bck->fd = victim;
3546   }
3547 
3548   /*
3549     If a large request, scan through the chunks of current bin to
3550     find one that fits.  (This will be the smallest that fits unless
3551     FIRST_SORTED_BIN_SIZE has been changed from default.)  This is
3552     the only step where an unbounded number of chunks might be
3553     scanned without doing anything useful with them. However the
3554     lists tend to be short.
3555   */
3556 
3557   if (!in_smallbin_range(nb)) {
3558     bin = bin_at(av, idx);
3559 
3560     for (victim = last(bin); victim != bin; victim = victim->bk) {
3561       size = chunksize(victim);
3562 
3563       if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb)) {
3564         remainder_size = size - nb;
3565         unlink(victim, bck, fwd);
3566 
3567         /* Exhaust */
3568         if (remainder_size < MINSIZE)  {
3569           set_inuse_bit_at_offset(victim, size);
3570           check_malloced_chunk(victim, nb);
3571           return chunk2mem(victim);
3572         }
3573         /* Split */
3574         else {
3575           remainder = chunk_at_offset(victim, nb);
3576           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3577           remainder->bk = remainder->fd = unsorted_chunks(av);
3578           set_head(victim, nb | PREV_INUSE);
3579           set_head(remainder, remainder_size | PREV_INUSE);
3580           set_foot(remainder, remainder_size);
3581           check_malloced_chunk(victim, nb);
3582           return chunk2mem(victim);
3583         }
3584       }
3585     }
3586   }
3587 
3588   /*
3589     Search for a chunk by scanning bins, starting with next largest
3590     bin. This search is strictly by best-fit; i.e., the smallest
3591     (with ties going to approximately the least recently used) chunk
3592     that fits is selected.
3593 
3594     The bitmap lets us skip most empty bins without checking each one.
3595   */
3596 
3597   ++idx;
3598   bin = bin_at(av,idx);
3599   block = idx2block(idx);
3600   map = av->binmap[block];
3601   bit = idx2bit(idx);
3602 
3603   for (;;) {
3604 
3605     /* Skip rest of block if there are no more set bits in this block.  */
3606     if (bit > map || bit == 0) {
3607       do {
3608         if (++block >= BINMAPSIZE)  /* out of bins */
3609           goto use_top;
3610       } while ( (map = av->binmap[block]) == 0);
3611 
3612       bin = bin_at(av, (block << BINMAPSHIFT));
3613       bit = 1;
3614     }
3615 
3616     /* Advance to bin with set bit. There must be one. */
3617     while ((bit & map) == 0) {
3618       bin = next_bin(bin);
3619       bit <<= 1;
3620       assert(bit != 0);
3621     }
3622 
3623     /* Inspect the bin. It is likely to be non-empty */
3624     victim = last(bin);
3625 
3626     /*  If a false alarm (empty bin), clear the bit. */
3627     if (victim == bin) {
3628       av->binmap[block] = map &= ~bit; /* Write through */
3629       bin = next_bin(bin);
3630       bit <<= 1;
3631     }
3632 
3633     else {
3634       size = chunksize(victim);
3635 
3636       /*  We know the first chunk in this bin is big enough to use. */
3637       assert((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb));
3638 
3639       remainder_size = size - nb;
3640 
3641       /* unlink */
3642       bck = victim->bk;
3643       bin->bk = bck;
3644       bck->fd = bin;
3645 
3646       /* Exhaust */
3647       if (remainder_size < MINSIZE) {
3648         set_inuse_bit_at_offset(victim, size);
3649         check_malloced_chunk(victim, nb);
3650         return chunk2mem(victim);
3651       }
3652 
3653       /* Split */
3654       else {
3655         remainder = chunk_at_offset(victim, nb);
3656 
3657         unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
3658         remainder->bk = remainder->fd = unsorted_chunks(av);
3659         /* advertise as last remainder */
3660         if (in_smallbin_range(nb))
3661           av->last_remainder = remainder;
3662 
3663         set_head(victim, nb | PREV_INUSE);
3664         set_head(remainder, remainder_size | PREV_INUSE);
3665         set_foot(remainder, remainder_size);
3666         check_malloced_chunk(victim, nb);
3667         return chunk2mem(victim);
3668       }
3669     }
3670   }
3671 
3672   use_top:
3673   /*
3674     If large enough, split off the chunk bordering the end of memory
3675     (held in av->top). Note that this is in accord with the best-fit
3676     search rule.  In effect, av->top is treated as larger (and thus
3677     less well fitting) than any other available chunk since it can
3678     be extended to be as large as necessary (up to system
3679     limitations).
3680 
3681     We require that av->top always exists (i.e., has size >=
3682     MINSIZE) after initialization, so if it would otherwise be
3683     exhausted by the current request, it is replenished. (The main
3684     reason for ensuring it exists is that we may need MINSIZE space
3685     to put in fenceposts in sysmalloc.)
3686   */
3687 
3688   victim = av->top;
3689   size = chunksize(victim);
3690 
3691   if ((CHUNK_SIZE_T)(size) >= (CHUNK_SIZE_T)(nb + MINSIZE)) {
3692     remainder_size = size - nb;
3693     remainder = chunk_at_offset(victim, nb);
3694     av->top = remainder;
3695     set_head(victim, nb | PREV_INUSE);
3696     set_head(remainder, remainder_size | PREV_INUSE);
3697 
3698     check_malloced_chunk(victim, nb);
3699     return chunk2mem(victim);
3700   }
3701 
3702   /*
3703      If no space in top, relay to handle system-dependent cases
3704   */
3705   return sYSMALLOc(nb, av);
3706 }
3707 
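/*
  Aside (illustration only): the binmap scan in mALLOc above is, at
  heart, a "find the next set bit at or after a given index" walk over
  an array of 32-bit words (BINMAPSHIFT is 5 in the default
  configuration).  A standalone sketch of the same technique; the
  helper name and BITSPERWORD constant are made up here:

    #include <stddef.h>

    #define BITSPERWORD 32

    // Return the index of the first set bit at or after 'start',
    // or -1 if no bit is set in the remaining words.
    static int next_set_bit(const unsigned int* map, size_t nwords, size_t start)
    {
      size_t block = start / BITSPERWORD;
      size_t bitpos = 0;
      unsigned int word;

      if (block >= nwords)
        return -1;

      // Mask off bits below 'start' in the first word, then skip zero words.
      word = map[block] & ~((1U << (start % BITSPERWORD)) - 1U);
      while (word == 0) {
        if (++block >= nwords)
          return -1;            // out of words; mALLOc falls back to use_top
        word = map[block];
      }

      // Walk up to the lowest set bit within that word.
      while ((word & 1U) == 0) {
        word >>= 1;
        ++bitpos;
      }
      return (int)(block * BITSPERWORD + bitpos);
    }
*/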
3708 /*
3709   ------------------------------ free ------------------------------
3710 */
3711 
3712 #if __STD_C
3713 void fREe(Void_t* mem)
3714 #else
3715 void fREe(mem) Void_t* mem;
3716 #endif
3717 {
3718   mstate av = get_malloc_state();
3719 
3720   mchunkptr       p;           /* chunk corresponding to mem */
3721   INTERNAL_SIZE_T size;        /* its size */
3722   mfastbinptr*    fb;          /* associated fastbin */
3723   mchunkptr       nextchunk;   /* next contiguous chunk */
3724   INTERNAL_SIZE_T nextsize;    /* its size */
3725   int             nextinuse;   /* true if nextchunk is used */
3726   INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
3727   mchunkptr       bck;         /* misc temp for linking */
3728   mchunkptr       fwd;         /* misc temp for linking */
3729 
3730   /* free(0) has no effect */
3731   if (mem != 0) {
3732     p = mem2chunk(mem);
3733     size = chunksize(p);
3734 
3735     check_inuse_chunk(p);
3736 
3737     /*
3738       If eligible, place chunk on a fastbin so it can be found
3739       and used quickly in malloc.
3740     */
3741 
3742     if ((CHUNK_SIZE_T)(size) <= (CHUNK_SIZE_T)(av->max_fast)
3743 
3744 #if TRIM_FASTBINS
3745         /*
3746            If TRIM_FASTBINS set, don't place chunks
3747            bordering top into fastbins
3748         */
3749         && (chunk_at_offset(p, size) != av->top)
3750 #endif
3751         ) {
3752 
3753       set_fastchunks(av);
3754       fb = &(av->fastbins[fastbin_index(size)]);
3755       p->fd = *fb;
3756       *fb = p;
3757     }
3758 
3759     /*
3760        Consolidate other non-mmapped chunks as they arrive.
3761     */
3762 
3763     else if (!chunk_is_mmapped(p)) {
3764       set_anychunks(av);
3765 
3766       nextchunk = chunk_at_offset(p, size);
3767       nextsize = chunksize(nextchunk);
3768 
3769       /* consolidate backward */
3770       if (!prev_inuse(p)) {
3771         prevsize = p->prev_size;
3772         size += prevsize;
3773         p = chunk_at_offset(p, -((long) prevsize));
3774         unlink(p, bck, fwd);
3775       }
3776 
3777       if (nextchunk != av->top) {
3778         /* get and clear inuse bit */
3779         nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3780         set_head(nextchunk, nextsize);
3781 
3782         /* consolidate forward */
3783         if (!nextinuse) {
3784           unlink(nextchunk, bck, fwd);
3785           size += nextsize;
3786         }
3787 
3788         /*
3789           Place the chunk in unsorted chunk list. Chunks are
3790           not placed into regular bins until after they have
3791           been given one chance to be used in malloc.
3792         */
3793 
3794         bck = unsorted_chunks(av);
3795         fwd = bck->fd;
3796         p->bk = bck;
3797         p->fd = fwd;
3798         bck->fd = p;
3799         fwd->bk = p;
3800 
3801         set_head(p, size | PREV_INUSE);
3802         set_foot(p, size);
3803 
3804         check_free_chunk(p);
3805       }
3806 
3807       /*
3808          If the chunk borders the current high end of memory,
3809          consolidate into top
3810       */
3811 
3812       else {
3813         size += nextsize;
3814         set_head(p, size | PREV_INUSE);
3815         av->top = p;
3816         check_chunk(p);
3817       }
3818 
3819       /*
3820         If freeing a large space, consolidate possibly-surrounding
3821         chunks. Then, if the total unused topmost memory exceeds trim
3822         threshold, ask malloc_trim to reduce top.
3823 
3824         Unless max_fast is 0, we don't know if there are fastbins
3825         bordering top, so we cannot tell for sure whether threshold
3826         has been reached unless fastbins are consolidated.  But we
3827         don't want to consolidate on each free.  As a compromise,
3828         consolidation is performed if FASTBIN_CONSOLIDATION_THRESHOLD
3829         is reached.
3830       */
3831 
3832       if ((CHUNK_SIZE_T)(size) >= FASTBIN_CONSOLIDATION_THRESHOLD) {
3833         if (have_fastchunks(av))
3834           malloc_consolidate(av);
3835 
3836 #ifndef MORECORE_CANNOT_TRIM
3837         if ((CHUNK_SIZE_T)(chunksize(av->top)) >=
3838             (CHUNK_SIZE_T)(av->trim_threshold))
3839           sYSTRIm(av->top_pad, av);
3840 #endif
3841       }
3842 
3843     }
3844     /*
3845       If the chunk was allocated via mmap, release via munmap()
3846       Note that if HAVE_MMAP is false but chunk_is_mmapped is
3847       true, then the user must have overwritten memory. There's nothing
3848       we can do to catch this error unless DEBUG is set, in which case
3849       check_inuse_chunk (above) will have triggered an error.
3850     */
3851 
3852     else {
3853 #if HAVE_MMAP
3854       int ret;
3855       INTERNAL_SIZE_T offset = p->prev_size;
3856       av->n_mmaps--;
3857       av->mmapped_mem -= (size + offset);
3858       ret = munmap((char*)p - offset, size + offset);
3859       /* munmap returns non-zero on failure */
3860       assert(ret == 0);
3861 #endif
3862     }
3863   }
3864 }
3865 
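/*
  Aside (illustration only): the fastbin placement in fREe above is a
  plain LIFO push onto a singly-linked list threaded through the fd
  fields, and the matching code in mALLOc is a pop.  The same
  discipline in miniature, with a made-up node type and bin variable:

    struct node { struct node* fd; };

    static struct node* bin = 0;        // stand-in for av->fastbins[i]

    static void push(struct node* p)    // what fREe does
    {
      p->fd = bin;
      bin = p;
    }

    static struct node* pop(void)       // what mALLOc does
    {
      struct node* p = bin;
      if (p != 0)
        bin = p->fd;
      return p;
    }
*/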
3866 /*
3867   ------------------------- malloc_consolidate -------------------------
3868 
3869   malloc_consolidate is a specialized version of free() that tears
3870   down chunks held in fastbins.  Free itself cannot be used for this
3871   purpose since, among other things, it might place chunks back onto
3872   fastbins.  So, instead, we need to use a minor variant of the same
3873   code.
3874 
3875   Also, because this routine needs to be called the first time through
3876   malloc anyway, it turns out to be the perfect place to trigger
3877   initialization code.
3878 */
3879 
3880 #if __STD_C
3881 static void malloc_consolidate(mstate av)
3882 #else
3883 static void malloc_consolidate(av) mstate av;
3884 #endif
3885 {
3886   mfastbinptr*    fb;                 /* current fastbin being consolidated */
3887   mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
3888   mchunkptr       p;                  /* current chunk being consolidated */
3889   mchunkptr       nextp;              /* next chunk to consolidate */
3890   mchunkptr       unsorted_bin;       /* bin header */
3891   mchunkptr       first_unsorted;     /* chunk to link to */
3892 
3893   /* These have same use as in free() */
3894   mchunkptr       nextchunk;
3895   INTERNAL_SIZE_T size;
3896   INTERNAL_SIZE_T nextsize;
3897   INTERNAL_SIZE_T prevsize;
3898   int             nextinuse;
3899   mchunkptr       bck;
3900   mchunkptr       fwd;
3901 
3902   /*
3903     If max_fast is 0, we know that av hasn't
3904     yet been initialized, in which case do so below
3905   */
3906 
3907   if (av->max_fast != 0) {
3908     clear_fastchunks(av);
3909 
3910     unsorted_bin = unsorted_chunks(av);
3911 
3912     /*
3913       Remove each chunk from fast bin and consolidate it, placing it
3914       then in unsorted bin. Among other reasons for doing this,
3915       placing in unsorted bin avoids needing to calculate actual bins
3916       until malloc is sure that chunks aren't immediately going to be
3917       reused anyway.
3918     */
3919 
3920     maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
3921     fb = &(av->fastbins[0]);
3922     do {
3923       if ( (p = *fb) != 0) {
3924         *fb = 0;
3925 
3926         do {
3927           check_inuse_chunk(p);
3928           nextp = p->fd;
3929 
3930           /* Slightly streamlined version of consolidation code in free() */
3931           size = p->size & ~PREV_INUSE;
3932           nextchunk = chunk_at_offset(p, size);
3933           nextsize = chunksize(nextchunk);
3934 
3935           if (!prev_inuse(p)) {
3936             prevsize = p->prev_size;
3937             size += prevsize;
3938             p = chunk_at_offset(p, -((long) prevsize));
3939             unlink(p, bck, fwd);
3940           }
3941 
3942           if (nextchunk != av->top) {
3943             nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
3944             set_head(nextchunk, nextsize);
3945 
3946             if (!nextinuse) {
3947               size += nextsize;
3948               unlink(nextchunk, bck, fwd);
3949             }
3950 
3951             first_unsorted = unsorted_bin->fd;
3952             unsorted_bin->fd = p;
3953             first_unsorted->bk = p;
3954 
3955             set_head(p, size | PREV_INUSE);
3956             p->bk = unsorted_bin;
3957             p->fd = first_unsorted;
3958             set_foot(p, size);
3959           }
3960 
3961           else {
3962             size += nextsize;
3963             set_head(p, size | PREV_INUSE);
3964             av->top = p;
3965           }
3966 
3967         } while ( (p = nextp) != 0);
3968 
3969       }
3970     } while (fb++ != maxfb);
3971   }
3972   else {
3973     malloc_init_state(av);
3974     check_malloc_state();
3975   }
3976 }
3977 
3978 /*
3979   ------------------------------ realloc ------------------------------
3980 */
3981 
3982 
3983 #if __STD_C
3984 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
3985 #else
3986 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
3987 #endif
3988 {
3989   mstate av = get_malloc_state();
3990 
3991   INTERNAL_SIZE_T  nb;              /* padded request size */
3992 
3993   mchunkptr        oldp;            /* chunk corresponding to oldmem */
3994   INTERNAL_SIZE_T  oldsize;         /* its size */
3995 
3996   mchunkptr        newp;            /* chunk to return */
3997   INTERNAL_SIZE_T  newsize;         /* its size */
3998   Void_t*          newmem;          /* corresponding user mem */
3999 
4000   mchunkptr        next;            /* next contiguous chunk after oldp */
4001 
4002   mchunkptr        remainder;       /* extra space at end of newp */
4003   CHUNK_SIZE_T     remainder_size;  /* its size */
4004 
4005   mchunkptr        bck;             /* misc temp for linking */
4006   mchunkptr        fwd;             /* misc temp for linking */
4007 
4008   CHUNK_SIZE_T     copysize;        /* bytes to copy */
4009   unsigned int     ncopies;         /* INTERNAL_SIZE_T words to copy */
4010   INTERNAL_SIZE_T* s;               /* copy source */
4011   INTERNAL_SIZE_T* d;               /* copy destination */
4012 
4013 
4014 #ifdef REALLOC_ZERO_BYTES_FREES
4015   if (bytes == 0) {
4016     fREe(oldmem);
4017     return 0;
4018   }
4019 #endif
4020 
4021   /* realloc of null is supposed to be same as malloc */
4022   if (oldmem == 0) return mALLOc(bytes);
4023 
4024   checked_request2size(bytes, nb);
4025 
4026   oldp    = mem2chunk(oldmem);
4027   oldsize = chunksize(oldp);
4028 
4029   check_inuse_chunk(oldp);
4030 
4031   if (!chunk_is_mmapped(oldp)) {
4032 
4033     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb)) {
4034       /* already big enough; split below */
4035       newp = oldp;
4036       newsize = oldsize;
4037     }
4038 
4039     else {
4040       next = chunk_at_offset(oldp, oldsize);
4041 
4042       /* Try to expand forward into top */
4043       if (next == av->top &&
4044           (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4045           (CHUNK_SIZE_T)(nb + MINSIZE)) {
4046         set_head_size(oldp, nb);
4047         av->top = chunk_at_offset(oldp, nb);
4048         set_head(av->top, (newsize - nb) | PREV_INUSE);
4049         return chunk2mem(oldp);
4050       }
4051 
4052       /* Try to expand forward into next chunk;  split off remainder below */
4053       else if (next != av->top &&
4054                !inuse(next) &&
4055                (CHUNK_SIZE_T)(newsize = oldsize + chunksize(next)) >=
4056                (CHUNK_SIZE_T)(nb)) {
4057         newp = oldp;
4058         unlink(next, bck, fwd);
4059       }
4060 
4061       /* allocate, copy, free */
4062       else {
4063         newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4064         if (newmem == 0)
4065           return 0; /* propagate failure */
4066 
4067         newp = mem2chunk(newmem);
4068         newsize = chunksize(newp);
4069 
4070         /*
4071           Avoid copy if newp is next chunk after oldp.
4072         */
4073         if (newp == next) {
4074           newsize += oldsize;
4075           newp = oldp;
4076         }
4077         else {
4078           /*
4079             Unroll copy of <= 36 bytes (72 if 8byte sizes)
4080             We know that contents have an odd number of
4081             INTERNAL_SIZE_T-sized words; minimally 3.
4082           */
4083 
4084           copysize = oldsize - SIZE_SZ;
4085           s = (INTERNAL_SIZE_T*)(oldmem);
4086           d = (INTERNAL_SIZE_T*)(newmem);
4087           ncopies = copysize / sizeof(INTERNAL_SIZE_T);
4088           assert(ncopies >= 3);
4089 
4090           if (ncopies > 9)
4091             MALLOC_COPY(d, s, copysize);
4092 
4093           else {
4094             *(d+0) = *(s+0);
4095             *(d+1) = *(s+1);
4096             *(d+2) = *(s+2);
4097             if (ncopies > 4) {
4098               *(d+3) = *(s+3);
4099               *(d+4) = *(s+4);
4100               if (ncopies > 6) {
4101                 *(d+5) = *(s+5);
4102                 *(d+6) = *(s+6);
4103                 if (ncopies > 8) {
4104                   *(d+7) = *(s+7);
4105                   *(d+8) = *(s+8);
4106                 }
4107               }
4108             }
4109           }
4110 
4111           fREe(oldmem);
4112           check_inuse_chunk(newp);
4113           return chunk2mem(newp);
4114         }
4115       }
4116     }
4117 
4118     /* If possible, free extra space in old or extended chunk */
4119 
4120     assert((CHUNK_SIZE_T)(newsize) >= (CHUNK_SIZE_T)(nb));
4121 
4122     remainder_size = newsize - nb;
4123 
4124     if (remainder_size < MINSIZE) { /* not enough extra to split off */
4125       set_head_size(newp, newsize);
4126       set_inuse_bit_at_offset(newp, newsize);
4127     }
4128     else { /* split remainder */
4129       remainder = chunk_at_offset(newp, nb);
4130       set_head_size(newp, nb);
4131       set_head(remainder, remainder_size | PREV_INUSE);
4132       /* Mark remainder as inuse so free() won't complain */
4133       set_inuse_bit_at_offset(remainder, remainder_size);
4134       fREe(chunk2mem(remainder));
4135     }
4136 
4137     check_inuse_chunk(newp);
4138     return chunk2mem(newp);
4139   }
4140 
4141   /*
4142     Handle mmap cases
4143   */
4144 
4145   else {
4146 #if HAVE_MMAP
4147 
4148 #if HAVE_MREMAP
4149     INTERNAL_SIZE_T offset = oldp->prev_size;
4150     size_t pagemask = av->pagesize - 1;
4151     char *cp;
4152     CHUNK_SIZE_T  sum;
4153 
4154     /* Note the extra SIZE_SZ overhead */
4155     newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
4156 
4157     /* don't need to remap if still within same page */
4158     if (oldsize == newsize - offset)
4159       return oldmem;
4160 
4161     cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
4162 
4163     if (cp != (char*)MORECORE_FAILURE) {
4164 
4165       newp = (mchunkptr)(cp + offset);
4166       set_head(newp, (newsize - offset)|IS_MMAPPED);
4167 
4168       assert(aligned_OK(chunk2mem(newp)));
4169       assert((newp->prev_size == offset));
4170 
4171       /* update statistics */
4172       sum = av->mmapped_mem += newsize - oldsize;
4173       if (sum > (CHUNK_SIZE_T)(av->max_mmapped_mem))
4174         av->max_mmapped_mem = sum;
4175       sum += av->sbrked_mem;
4176       if (sum > (CHUNK_SIZE_T)(av->max_total_mem))
4177         av->max_total_mem = sum;
4178 
4179       return chunk2mem(newp);
4180     }
4181 #endif
4182 
4183     /* Note the extra SIZE_SZ overhead. */
4184     if ((CHUNK_SIZE_T)(oldsize) >= (CHUNK_SIZE_T)(nb + SIZE_SZ))
4185       newmem = oldmem; /* do nothing */
4186     else {
4187       /* Must alloc, copy, free. */
4188       newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
4189       if (newmem != 0) {
4190         MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
4191         fREe(oldmem);
4192       }
4193     }
4194     return newmem;
4195 
4196 #else
4197     /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
4198     check_malloc_state();
4199     MALLOC_FAILURE_ACTION;
4200     return 0;
4201 #endif
4202   }
4203 }
4204 
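/*
  Usage note (caller-side sketch, not part of the allocator): because
  realloc returns 0 on failure and leaves the old block intact, callers
  should avoid overwriting their only pointer with the result.  The
  helper name here is made up:

    #include <stdlib.h>

    // Grow *buf to newsize bytes; returns 0 on success, -1 on failure.
    static int grow(void** buf, size_t newsize)
    {
      void* tmp = realloc(*buf, newsize);
      if (tmp == 0)
        return -1;              // *buf is still valid and unchanged
      *buf = tmp;
      return 0;
    }
*/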
4205 /*
4206   ------------------------------ memalign ------------------------------
4207 */
4208 
4209 #if __STD_C
4210 Void_t* mEMALIGn(size_t alignment, size_t bytes)
4211 #else
4212 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
4213 #endif
4214 {
4215   INTERNAL_SIZE_T nb;             /* padded  request size */
4216   char*           m;              /* memory returned by malloc call */
4217   mchunkptr       p;              /* corresponding chunk */
4218   char*           brk;            /* alignment point within p */
4219   mchunkptr       newp;           /* chunk to return */
4220   INTERNAL_SIZE_T newsize;        /* its size */
4221   INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
4222   mchunkptr       remainder;      /* spare room at end to split off */
4223   CHUNK_SIZE_T    remainder_size; /* its size */
4224   INTERNAL_SIZE_T size;
4225 
4226   /* If need less alignment than we give anyway, just relay to malloc */
4227 
4228   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
4229 
4230   /* Otherwise, ensure that it is at least a minimum chunk size */
4231 
4232   if (alignment <  MINSIZE) alignment = MINSIZE;
4233 
4234   /* Make sure alignment is power of 2 (in case MINSIZE is not).  */
4235   if ((alignment & (alignment - 1)) != 0) {
4236     size_t a = MALLOC_ALIGNMENT * 2;
4237     while ((CHUNK_SIZE_T)a < (CHUNK_SIZE_T)alignment) a <<= 1;
4238     alignment = a;
4239   }
4240 
4241   checked_request2size(bytes, nb);
4242 
4243   /*
4244     Strategy: find a spot within that chunk that meets the alignment
4245     request, and then possibly free the leading and trailing space.
4246   */
4247 
4248 
4249   /* Call malloc with worst case padding to hit alignment. */
4250 
4251   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
4252 
4253   if (m == 0) return 0; /* propagate failure */
4254 
4255   p = mem2chunk(m);
4256 
4257   if ((((PTR_UINT)(m)) % alignment) != 0) { /* misaligned */
4258 
4259     /*
4260       Find an aligned spot inside chunk.  Since we need to give back
4261       leading space in a chunk of at least MINSIZE, if the first
4262       calculation places us at a spot with less than MINSIZE leader,
4263       we can move to the next aligned spot -- we've allocated enough
4264       total room so that this is always possible.
4265     */
4266 
4267     brk = (char*)mem2chunk((PTR_UINT)(((PTR_UINT)(m + alignment - 1)) &
4268                            -((signed long) alignment)));
4269     if ((CHUNK_SIZE_T)(brk - (char*)(p)) < MINSIZE)
4270       brk += alignment;
4271 
4272     newp = (mchunkptr)brk;
4273     leadsize = brk - (char*)(p);
4274     newsize = chunksize(p) - leadsize;
4275 
4276     /* For mmapped chunks, just adjust offset */
4277     if (chunk_is_mmapped(p)) {
4278       newp->prev_size = p->prev_size + leadsize;
4279       set_head(newp, newsize|IS_MMAPPED);
4280       return chunk2mem(newp);
4281     }
4282 
4283     /* Otherwise, give back leader, use the rest */
4284     set_head(newp, newsize | PREV_INUSE);
4285     set_inuse_bit_at_offset(newp, newsize);
4286     set_head_size(p, leadsize);
4287     fREe(chunk2mem(p));
4288     p = newp;
4289 
4290     assert (newsize >= nb &&
4291             (((PTR_UINT)(chunk2mem(p))) % alignment) == 0);
4292   }
4293 
4294   /* Also give back spare room at the end */
4295   if (!chunk_is_mmapped(p)) {
4296     size = chunksize(p);
4297     if ((CHUNK_SIZE_T)(size) > (CHUNK_SIZE_T)(nb + MINSIZE)) {
4298       remainder_size = size - nb;
4299       remainder = chunk_at_offset(p, nb);
4300       set_head(remainder, remainder_size | PREV_INUSE);
4301       set_head_size(p, nb);
4302       fREe(chunk2mem(remainder));
4303     }
4304   }
4305 
4306   check_inuse_chunk(p);
4307   return chunk2mem(p);
4308 }
4309 
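/*
  Usage sketch (illustration only): requesting storage whose address is
  a multiple of a power of two, for example a 64-byte-aligned buffer
  for cache-line-sized records.  The helper name is made up; memalign
  is declared in <malloc.h> on many systems.

    #include <malloc.h>

    static void* make_cacheline_buffer(void)
    {
      void* p = memalign(64, 1024);   // 1024 bytes at a 64-byte-aligned address
      return p;                       // release with free(p)
    }
*/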
4310 /*
4311   ------------------------------ calloc ------------------------------
4312 */
4313 
4314 #if __STD_C
4315 Void_t* cALLOc(size_t n_elements, size_t elem_size)
4316 #else
4317 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
4318 #endif
4319 {
4320   mchunkptr p;
4321   CHUNK_SIZE_T  clearsize;
4322   CHUNK_SIZE_T  nclears;
4323   INTERNAL_SIZE_T* d;
4324 
4325   Void_t* mem = mALLOc(n_elements * elem_size);
4326 
4327   if (mem != 0) {
4328     p = mem2chunk(mem);
4329 
4330     if (!chunk_is_mmapped(p))
4331     {
4332       /*
4333         Unroll clear of <= 36 bytes (72 if 8byte sizes)
4334         We know that contents have an odd number of
4335         INTERNAL_SIZE_T-sized words; minimally 3.
4336       */
4337 
4338       d = (INTERNAL_SIZE_T*)mem;
4339       clearsize = chunksize(p) - SIZE_SZ;
4340       nclears = clearsize / sizeof(INTERNAL_SIZE_T);
4341       assert(nclears >= 3);
4342 
4343       if (nclears > 9)
4344         MALLOC_ZERO(d, clearsize);
4345 
4346       else {
4347         *(d+0) = 0;
4348         *(d+1) = 0;
4349         *(d+2) = 0;
4350         if (nclears > 4) {
4351           *(d+3) = 0;
4352           *(d+4) = 0;
4353           if (nclears > 6) {
4354             *(d+5) = 0;
4355             *(d+6) = 0;
4356             if (nclears > 8) {
4357               *(d+7) = 0;
4358               *(d+8) = 0;
4359             }
4360           }
4361         }
4362       }
4363     }
4364 #if ! MMAP_CLEARS
4365     else
4366     {
4367       d = (INTERNAL_SIZE_T*)mem;
4368       /*
4369         Note the additional SIZE_SZ
4370       */
4371       clearsize = chunksize(p) - 2*SIZE_SZ;
4372       MALLOC_ZERO(d, clearsize);
4373     }
4374 #endif
4375   }
4376   return mem;
4377 }
4378 
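/*
  Caller-side note (sketch, an assumption about caller code rather than
  about this file): the element count and size are multiplied above, so
  callers working with untrusted counts may want to reject products
  that would wrap around size_t before calling calloc.  The helper name
  is made up:

    #include <stdlib.h>

    static void* checked_calloc(size_t n, size_t sz)
    {
      if (sz != 0 && n > (size_t)-1 / sz)
        return 0;               // n * sz would overflow size_t
      return calloc(n, sz);
    }
*/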
4379 /*
4380   ------------------------------ cfree ------------------------------
4381 */
4382 
4383 #if __STD_C
4384 void cFREe(Void_t *mem)
4385 #else
4386 void cFREe(mem) Void_t *mem;
4387 #endif
4388 {
4389   fREe(mem);
4390 }
4391 
4392 /*
4393   ------------------------- independent_calloc -------------------------
4394 */
4395 
4396 #if __STD_C
4397 Void_t** iCALLOc(size_t n_elements, size_t elem_size, Void_t* chunks[])
4398 #else
4399 Void_t** iCALLOc(n_elements, elem_size, chunks) size_t n_elements; size_t elem_size; Void_t* chunks[];
4400 #endif
4401 {
4402   size_t sz = elem_size; /* serves as 1-element array */
4403   /* opts arg of 3 means all elements are same size, and should be cleared */
4404   return iALLOc(n_elements, &sz, 3, chunks);
4405 }
4406 
4407 /*
4408   ------------------------- independent_comalloc -------------------------
4409 */
4410 
4411 #if __STD_C
4412 Void_t** iCOMALLOc(size_t n_elements, size_t sizes[], Void_t* chunks[])
4413 #else
4414 Void_t** iCOMALLOc(n_elements, sizes, chunks) size_t n_elements; size_t sizes[]; Void_t* chunks[];
4415 #endif
4416 {
4417   return iALLOc(n_elements, sizes, 0, chunks);
4418 }
4419 
4420 
4421 /*
4422   ------------------------------ ialloc ------------------------------
4423   ialloc provides common support for independent_X routines, handling all of
4424   the combinations that can result.
4425 
4426   The opts arg has:
4427     bit 0 set if all elements are same size (using sizes[0])
4428     bit 1 set if elements should be zeroed
4429 */
4430 
4431 
4432 #if __STD_C
4433 static Void_t** iALLOc(size_t n_elements,
4434                        size_t* sizes,
4435                        int opts,
4436                        Void_t* chunks[])
4437 #else
4438 static Void_t** iALLOc(n_elements, sizes, opts, chunks) size_t n_elements; size_t* sizes; int opts; Void_t* chunks[];
4439 #endif
4440 {
4441   mstate av = get_malloc_state();
4442   INTERNAL_SIZE_T element_size;   /* chunksize of each element, if all same */
4443   INTERNAL_SIZE_T contents_size;  /* total size of elements */
4444   INTERNAL_SIZE_T array_size;     /* request size of pointer array */
4445   Void_t*         mem;            /* malloced aggregate space */
4446   mchunkptr       p;              /* corresponding chunk */
4447   INTERNAL_SIZE_T remainder_size; /* remaining bytes while splitting */
4448   Void_t**        marray;         /* either "chunks" or malloced ptr array */
4449   mchunkptr       array_chunk;    /* chunk for malloced ptr array */
4450   int             mmx;            /* to disable mmap */
4451   INTERNAL_SIZE_T size;
4452   size_t          i;
4453 
4454   /* Ensure initialization */
4455   if (av->max_fast == 0) malloc_consolidate(av);
4456 
4457   /* compute array length, if needed */
4458   if (chunks != 0) {
4459     if (n_elements == 0)
4460       return chunks; /* nothing to do */
4461     marray = chunks;
4462     array_size = 0;
4463   }
4464   else {
4465     /* if empty req, must still return chunk representing empty array */
4466     if (n_elements == 0)
4467       return (Void_t**) mALLOc(0);
4468     marray = 0;
4469     array_size = request2size(n_elements * (sizeof(Void_t*)));
4470   }
4471 
4472   /* compute total element size */
4473   if (opts & 0x1) { /* all-same-size */
4474     element_size = request2size(*sizes);
4475     contents_size = n_elements * element_size;
4476   }
4477   else { /* add up all the sizes */
4478     element_size = 0;
4479     contents_size = 0;
4480     for (i = 0; i != n_elements; ++i)
4481       contents_size += request2size(sizes[i]);
4482   }
4483 
4484   /* subtract out alignment bytes from total to minimize overallocation */
4485   size = contents_size + array_size - MALLOC_ALIGN_MASK;
4486 
4487   /*
4488      Allocate the aggregate chunk.
4489      But first disable mmap so malloc won't use it, since
4490      we would not be able to later free/realloc space internal
4491      to a segregated mmap region.
4492  */
4493   mmx = av->n_mmaps_max;   /* disable mmap */
4494   av->n_mmaps_max = 0;
4495   mem = mALLOc(size);
4496   av->n_mmaps_max = mmx;   /* reset mmap */
4497   if (mem == 0)
4498     return 0;
4499 
4500   p = mem2chunk(mem);
4501   assert(!chunk_is_mmapped(p));
4502   remainder_size = chunksize(p);
4503 
4504   if (opts & 0x2) {       /* optionally clear the elements */
4505     MALLOC_ZERO(mem, remainder_size - SIZE_SZ - array_size);
4506   }
4507 
4508   /* If not provided, allocate the pointer array as final part of chunk */
4509   if (marray == 0) {
4510     array_chunk = chunk_at_offset(p, contents_size);
4511     marray = (Void_t**) (chunk2mem(array_chunk));
4512     set_head(array_chunk, (remainder_size - contents_size) | PREV_INUSE);
4513     remainder_size = contents_size;
4514   }
4515 
4516   /* split out elements */
4517   for (i = 0; ; ++i) {
4518     marray[i] = chunk2mem(p);
4519     if (i != n_elements-1) {
4520       if (element_size != 0)
4521         size = element_size;
4522       else
4523         size = request2size(sizes[i]);
4524       remainder_size -= size;
4525       set_head(p, size | PREV_INUSE);
4526       p = chunk_at_offset(p, size);
4527     }
4528     else { /* the final element absorbs any overallocation slop */
4529       set_head(p, remainder_size | PREV_INUSE);
4530       break;
4531     }
4532   }
4533 
4534 #if DEBUG
4535   if (marray != chunks) {
4536     /* final element must have exactly exhausted chunk */
4537     if (element_size != 0)
4538       assert(remainder_size == element_size);
4539     else
4540       assert(remainder_size == request2size(sizes[i]));
4541     check_inuse_chunk(mem2chunk(marray));
4542   }
4543 
4544   for (i = 0; i != n_elements; ++i)
4545     check_inuse_chunk(mem2chunk(marray[i]));
4546 #endif
4547 
4548   return marray;
4549 }
4550 
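/*
  Usage sketch (illustration only) for the two public wrappers above:
  independent_calloc hands back an array of pointers to same-sized,
  zeroed elements, each of which may later be passed to free on its
  own; passing 0 for chunks makes it allocate the pointer array too,
  and that array should also be freed when no longer needed.  The
  struct and helper names are made up:

    #include <stdlib.h>

    struct point { double x, y; };

    static struct point** make_points(size_t n)
    {
      // n zeroed elements, each individually freeable with free();
      // the returned pointer array must be freed as well.
      return (struct point**) independent_calloc(n, sizeof(struct point), 0);
    }
*/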
4551 
4552 /*
4553   ------------------------------ valloc ------------------------------
4554 */
4555 
4556 #if __STD_C
4557 Void_t* vALLOc(size_t bytes)
4558 #else
4559 Void_t* vALLOc(bytes) size_t bytes;
4560 #endif
4561 {
4562   /* Ensure initialization */
4563   mstate av = get_malloc_state();
4564   if (av->max_fast == 0) malloc_consolidate(av);
4565   return mEMALIGn(av->pagesize, bytes);
4566 }
4567 
4568 /*
4569   ------------------------------ pvalloc ------------------------------
4570 */
4571 
4572 
4573 #if __STD_C
4574 Void_t* pVALLOc(size_t bytes)
4575 #else
4576 Void_t* pVALLOc(bytes) size_t bytes;
4577 #endif
4578 {
4579   mstate av = get_malloc_state();
4580   size_t pagesz;
4581 
4582   /* Ensure initialization */
4583   if (av->max_fast == 0) malloc_consolidate(av);
4584   pagesz = av->pagesize;
4585   return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
4586 }
4587 
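/*
  Sketch (illustration only) of the difference between the two page
  wrappers above, assuming a 4096-byte page: valloc(100) returns at
  least 100 usable bytes at a page-aligned address, while pvalloc(100)
  first rounds the request itself up to a whole page:

    size_t pagesz  = 4096;                                // assumed page size
    size_t rounded = (100 + pagesz - 1) & ~(pagesz - 1);  // == 4096, as in pVALLOc
*/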
4588 
4589 /*
4590   ------------------------------ malloc_trim ------------------------------
4591 */
4592 
4593 #if __STD_C
4594 int mTRIm(size_t pad)
4595 #else
4596 int mTRIm(pad) size_t pad;
4597 #endif
4598 {
4599   mstate av = get_malloc_state();
4600   /* Ensure initialization/consolidation */
4601   malloc_consolidate(av);
4602 
4603 #ifndef MORECORE_CANNOT_TRIM
4604   return sYSTRIm(pad, av);
4605 #else
4606   return 0;
4607 #endif
4608 }
4609 
4610 
4611 /*
4612   ------------------------- malloc_usable_size -------------------------
4613 */
4614 
4615 #if __STD_C
4616 size_t mUSABLe(Void_t* mem)
4617 #else
4618 size_t mUSABLe(mem) Void_t* mem;
4619 #endif
4620 {
4621   mchunkptr p;
4622   if (mem != 0) {
4623     p = mem2chunk(mem);
4624     if (chunk_is_mmapped(p))
4625       return chunksize(p) - 2*SIZE_SZ;
4626     else if (inuse(p))
4627       return chunksize(p) - SIZE_SZ;
4628   }
4629   return 0;
4630 }
4631 
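/*
  Usage sketch (illustration only): malloc_usable_size reports the
  bytes actually usable in a block, which may exceed the requested size
  because of padding and alignment; it is not a substitute for tracking
  the requested size.  The helper name is made up; the function is
  declared in <malloc.h> on many systems.

    #include <stdlib.h>
    #include <malloc.h>
    #include <assert.h>

    static void usable_demo(void)
    {
      void* p = malloc(100);
      if (p != 0) {
        assert(malloc_usable_size(p) >= 100);   // never less than requested
        free(p);
      }
    }
*/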
4632 /*
4633   ------------------------------ mallinfo ------------------------------
4634 */
4635 
4636 struct mallinfo mALLINFo()
4637 {
4638   mstate av = get_malloc_state();
4639   struct mallinfo mi;
4640   int i;
4641   mbinptr b;
4642   mchunkptr p;
4643   INTERNAL_SIZE_T avail;
4644   INTERNAL_SIZE_T fastavail;
4645   int nblocks;
4646   int nfastblocks;
4647 
4648   /* Ensure initialization */
4649   if (av->top == 0)  malloc_consolidate(av);
4650 
4651   check_malloc_state();
4652 
4653   /* Account for top */
4654   avail = chunksize(av->top);
4655   nblocks = 1;  /* top always exists */
4656 
4657   /* traverse fastbins */
4658   nfastblocks = 0;
4659   fastavail = 0;
4660 
4661   for (i = 0; i < (int)NFASTBINS; ++i) {
4662     for (p = av->fastbins[i]; p != 0; p = p->fd) {
4663       ++nfastblocks;
4664       fastavail += chunksize(p);
4665     }
4666   }
4667 
4668   avail += fastavail;
4669 
4670   /* traverse regular bins */
4671   for (i = 1; i < NBINS; ++i) {
4672     b = bin_at(av, i);
4673     for (p = last(b); p != b; p = p->bk) {
4674       ++nblocks;
4675       avail += chunksize(p);
4676     }
4677   }
4678 
4679   mi.smblks = nfastblocks;
4680   mi.ordblks = nblocks;
4681   mi.fordblks = avail;
4682   mi.uordblks = av->sbrked_mem - avail;
4683   mi.arena = av->sbrked_mem;
4684   mi.hblks = av->n_mmaps;
4685   mi.hblkhd = av->mmapped_mem;
4686   mi.fsmblks = fastavail;
4687   mi.keepcost = chunksize(av->top);
4688   mi.usmblks = av->max_total_mem;
4689   return mi;
4690 }
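
/*
  Usage sketch (default public name assumed), printing a few of the
  fields filled in above, in the same spirit as malloc_stats below:

      struct mallinfo mi = mallinfo();
      fprintf(stderr, "sbrked bytes     = %10lu\n", (unsigned long)mi.arena);
      fprintf(stderr, "in-use bytes     = %10lu\n", (unsigned long)mi.uordblks);
      fprintf(stderr, "free bytes       = %10lu\n", (unsigned long)mi.fordblks);
      fprintf(stderr, "trimmable (top)  = %10lu\n", (unsigned long)mi.keepcost);
*/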
4691 
4692 /*
4693   ------------------------------ malloc_stats ------------------------------
4694 */
4695 
4696 void mSTATs(void)
4697 {
4698   struct mallinfo mi = mALLINFo();
4699 
4700 #ifdef WIN32
4701   {
4702     CHUNK_SIZE_T  free, reserved, committed;
4703     vminfo (&free, &reserved, &committed);
4704     fprintf(stderr, "free bytes       = %10lu\n",
4705             free);
4706     fprintf(stderr, "reserved bytes   = %10lu\n",
4707             reserved);
4708     fprintf(stderr, "committed bytes  = %10lu\n",
4709             committed);
4710   }
4711 #endif
4712 
4713 
4714   fprintf(stderr, "max system bytes = %10lu\n",
4715           (CHUNK_SIZE_T)(mi.usmblks));
4716   fprintf(stderr, "system bytes     = %10lu\n",
4717           (CHUNK_SIZE_T)(mi.arena + mi.hblkhd));
4718   fprintf(stderr, "in use bytes     = %10lu\n",
4719           (CHUNK_SIZE_T)(mi.uordblks + mi.hblkhd));
4720 
4721 #ifdef WIN32
4722   {
4723     CHUNK_SIZE_T  kernel, user;
4724     if (cpuinfo (TRUE, &kernel, &user)) {
4725       fprintf(stderr, "kernel ms        = %10lu\n",
4726               kernel);
4727       fprintf(stderr, "user ms          = %10lu\n",
4728               user);
4729     }
4730   }
4731 #endif
4732 }
4733 
4734 
4735 /*
4736   ------------------------------ mallopt ------------------------------
4737 */
4738 
4739 #if __STD_C
4740 int mALLOPt(int param_number, int value)
4741 #else
4742 int mALLOPt(param_number, value) int param_number; int value;
4743 #endif
4744 {
4745   mstate av = get_malloc_state();
4746   /* Ensure initialization/consolidation */
4747   malloc_consolidate(av);
4748 
4749   switch(param_number) {
4750   case M_MXFAST:
4751     if (value >= 0 && value <= MAX_FAST_SIZE) {
4752       set_max_fast(av, value);
4753       return 1;
4754     }
4755     else
4756       return 0;
4757 
4758   case M_TRIM_THRESHOLD:
4759     av->trim_threshold = value;
4760     return 1;
4761 
4762   case M_TOP_PAD:
4763     av->top_pad = value;
4764     return 1;
4765 
4766   case M_MMAP_THRESHOLD:
4767     av->mmap_threshold = value;
4768     return 1;
4769 
4770   case M_MMAP_MAX:
4771 #if !HAVE_MMAP
4772     if (value != 0)
4773       return 0;
4774 #endif
4775     av->n_mmaps_max = value;
4776     return 1;
4777 
4778   default:
4779     return 0;
4780   }
4781 }
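
/*
  Usage sketch (default public name assumed; the M_* parameter numbers
  are the ones handled in the switch above). mallopt returns 1 on
  success and 0 for unsupported parameters or out-of-range values:

      mallopt(M_MXFAST, 0);                    // disable fastbin caching
      mallopt(M_TRIM_THRESHOLD, 128 * 1024);   // trim more eagerly
      mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // mmap only very large requests
      mallopt(M_MMAP_MAX, 0);                  // never use mmap for requests
*/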
4782 
4783 
4784 /*
4785   -------------------- Alternative MORECORE functions --------------------
4786 */
4787 
4788 
4789 /*
4790   General Requirements for MORECORE.
4791 
4792   The MORECORE function must have the following properties:
4793 
4794   If MORECORE_CONTIGUOUS is false:
4795 
4796     * MORECORE must allocate in multiples of pagesize. It will
4797       only be called with arguments that are multiples of pagesize.
4798 
4799     * MORECORE(0) must return an address that is at least
4800       MALLOC_ALIGNMENT aligned. (Page-aligning always suffices.)
4801 
4802   else (i.e. If MORECORE_CONTIGUOUS is true):
4803 
4804     * Consecutive calls to MORECORE with positive arguments
4805       return increasing addresses, indicating that space has been
4806       contiguously extended.
4807 
4808     * MORECORE need not allocate in multiples of pagesize.
4809       Calls to MORECORE need not have args of multiples of pagesize.
4810 
4811     * MORECORE need not page-align.
4812 
4813   In either case:
4814 
4815     * MORECORE may allocate more memory than requested. (Or even less,
4816       but this will generally result in a malloc failure.)
4817 
4818     * MORECORE must not allocate memory when given argument zero, but
4819       instead return one past the end address of memory from previous
4820       nonzero call. This malloc does NOT call MORECORE(0)
4821       until at least one call with positive arguments is made, so
4822       the initial value returned is not important.
4823 
4824     * Even though consecutive calls to MORECORE need not return contiguous
4825       addresses, it must be OK for malloc'ed chunks to span multiple
4826       regions in those cases where they do happen to be contiguous.
4827 
4828     * MORECORE need not handle negative arguments -- it may instead
4829       just return MORECORE_FAILURE when given negative arguments.
4830       Negative arguments are always multiples of pagesize. MORECORE
4831       must not misinterpret negative args as large positive unsigned
4832       args. You can suppress all such calls from even occurring by defining
4833       MORECORE_CANNOT_TRIM.
4834 
4835   There is some variation across systems about the type of the
4836   argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
4837   actually be size_t, because sbrk supports negative args, so it is
4838   normally the signed type of the same width as size_t (sometimes
4839   declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
4840   matter though. Internally, we use "long" as arguments, which should
4841   work across all reasonable possibilities.
4842 
4843   Additionally, if MORECORE ever returns failure for a positive
4844   request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
4845   system allocator. This is a useful backup strategy for systems with
4846   holes in address spaces -- in this case sbrk cannot contiguously
4847   expand the heap, but mmap may be able to map noncontiguous space.
4848 
4849   If you'd like mmap to ALWAYS be used, you can define MORECORE to be
4850   a function that always returns MORECORE_FAILURE.
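
  For example, assuming HAVE_MMAP is enabled, a minimal sketch of such an
  always-failing MORECORE (the name failingMoreCore is made up here) is:

      static void* failingMoreCore(long size)
      {
        return (void *) MORECORE_FAILURE;  // every request falls back to mmap
      }

      #define MORECORE failingMoreCore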
4851 
4852   Malloc only has limited ability to detect failures of MORECORE
4853   to supply contiguous space when it says it can. In particular,
4854   multithreaded programs that do not use locks may cause
4855   race conditions across calls to MORECORE that result in gaps
4856   that cannot be detected as such, and subsequent corruption.
4857 
4858   If you are using this malloc with something other than sbrk (or its
4859   emulation) to supply memory regions, you probably want to set
4860   MORECORE_CONTIGUOUS as false.  As an example, here is a custom
4861   allocator kindly contributed for pre-OSX macOS.  It uses virtually
4862   but not necessarily physically contiguous non-paged memory (locked
4863   in, present and won't get swapped out).  You can use it by
4864   uncommenting this section, adding some #includes, and setting up the
4865   appropriate defines above:
4866 
4867       #define MORECORE osMoreCore
4868       #define MORECORE_CONTIGUOUS 0
4869 
4870   There is also a shutdown routine that should somehow be called for
4871   cleanup upon program exit.
4872 
4873   #define MAX_POOL_ENTRIES 100
4874   #define MINIMUM_MORECORE_SIZE  (64 * 1024)
4875   static int next_os_pool;
4876   void *our_os_pools[MAX_POOL_ENTRIES];
4877 
4878   void *osMoreCore(int size)
4879   {
4880     void *ptr = 0;
4881     static void *sbrk_top = 0;
4882 
4883     if (size > 0)
4884     {
4885       if (size < MINIMUM_MORECORE_SIZE)
4886          size = MINIMUM_MORECORE_SIZE;
4887       if (CurrentExecutionLevel() == kTaskLevel)
4888          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4889       if (ptr == 0)
4890       {
4891         return (void *) MORECORE_FAILURE;
4892       }
4893       // save ptrs so they can be freed during cleanup
4894       our_os_pools[next_os_pool] = ptr;
4895       next_os_pool++;
4896       ptr = (void *) ((((CHUNK_SIZE_T) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4897       sbrk_top = (char *) ptr + size;
4898       return ptr;
4899     }
4900     else if (size < 0)
4901     {
4902       // we don't currently support shrink behavior
4903       return (void *) MORECORE_FAILURE;
4904     }
4905     else
4906     {
4907       return sbrk_top;
4908     }
4909   }
4910 
4911   // cleanup any allocated memory pools
4912   // called as last thing before shutting down driver
4913 
4914   void osCleanupMem(void)
4915   {
4916     void **ptr;
4917 
4918     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4919       if (*ptr)
4920       {
4921          PoolDeallocate(*ptr);
4922          *ptr = 0;
4923       }
4924   }
4925 
4926 */
4927 
4928 
4929 /*
4930   --------------------------------------------------------------
4931 
4932   Emulation of sbrk for win32.
4933   Donated by J. Walter <Walter@GeNeSys-e.de>.
4934   For additional information about this code, and malloc on Win32, see
4935      http://www.genesys-e.de/jwalter/
4936 */
4937 
4938 
4939 #ifdef WIN32
4940 
4941 #ifdef _DEBUG
4942 /* #define TRACE */
4943 #endif
4944 
4945 /* Support for USE_MALLOC_LOCK */
4946 #ifdef USE_MALLOC_LOCK
4947 
4948 /* Wait for spin lock */
4949 static int slwait (int *sl) {
4950     while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
4951 	    Sleep (0);
4952     return 0;
4953 }
4954 
4955 /* Release spin lock */
4956 static int slrelease (int *sl) {
4957     InterlockedExchange (sl, 0);
4958     return 0;
4959 }
4960 
4961 #ifdef NEEDED
4962 /* Spin lock for emulation code */
4963 static int g_sl;
4964 #endif
4965 
4966 #endif /* USE_MALLOC_LOCK */
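
/*
   Illustrative use of the spin lock pair above (a sketch; the sbrk
   emulation below takes the lock this way only when both USE_MALLOC_LOCK
   and NEEDED are defined):

       slwait (&g_sl);      // spin, yielding with Sleep(0), until acquired
       // ... critical section ...
       slrelease (&g_sl);   // allow other threads to proceed
*/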
4967 
4968 /* getpagesize for windows */
4969 static long getpagesize (void) {
4970     static long g_pagesize = 0;
4971     if (! g_pagesize) {
4972         SYSTEM_INFO system_info;
4973         GetSystemInfo (&system_info);
4974         g_pagesize = system_info.dwPageSize;
4975     }
4976     return g_pagesize;
4977 }
4978 static long getregionsize (void) {
4979     static long g_regionsize = 0;
4980     if (! g_regionsize) {
4981         SYSTEM_INFO system_info;
4982         GetSystemInfo (&system_info);
4983         g_regionsize = system_info.dwAllocationGranularity;
4984     }
4985     return g_regionsize;
4986 }
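
/* On typical Win32 systems the page size is 4KB and the allocation
   granularity returned above is 64KB, so the emulation below reserves
   address space in (at least) 64KB units and commits it in (at least)
   4KB units; the actual values are always queried rather than assumed. */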
4987 
4988 /* A region list entry */
4989 typedef struct _region_list_entry {
4990     void *top_allocated;
4991     void *top_committed;
4992     void *top_reserved;
4993     long reserve_size;
4994     struct _region_list_entry *previous;
4995 } region_list_entry;
4996 
4997 /* Allocate and link a region entry in the region list */
4998 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
4999     region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
5000     if (! next)
5001         return FALSE;
5002     next->top_allocated = (char *) base_reserved;
5003     next->top_committed = (char *) base_reserved;
5004     next->top_reserved = (char *) base_reserved + reserve_size;
5005     next->reserve_size = reserve_size;
5006     next->previous = *last;
5007     *last = next;
5008     return TRUE;
5009 }
5010 /* Free and unlink the last region entry from the region list */
5011 static int region_list_remove (region_list_entry **last) {
5012     region_list_entry *previous = (*last)->previous;
5013     if (! HeapFree (GetProcessHeap (), 0, *last)) /* second arg is dwFlags, not a size */
5014         return FALSE;
5015     *last = previous;
5016     return TRUE;
5017 }
5018 
5019 #define CEIL(size,to)	(((size)+(to)-1)&~((to)-1))
5020 #define FLOOR(size,to)	((size)&~((to)-1))
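
/* Both macros assume `to` is a power of two, e.g.
   CEIL(5000, 4096) == 8192 and FLOOR(5000, 4096) == 4096. */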
5021 
5022 #define SBRK_SCALE  0
5023 /* #define SBRK_SCALE  1 */
5024 /* #define SBRK_SCALE  2 */
5025 /* #define SBRK_SCALE  4  */
5026 
5027 /* sbrk for windows */
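/*
   The emulation keeps a linked list of reserved regions (g_last is the
   most recently added one). Address space is reserved with VirtualAlloc
   in allocation-granularity units and committed in page-sized units as
   the simulated break advances; for negative sizes, whole regions that
   fall below the new break are released and remaining pages are
   decommitted. sbrk(0) returns the current simulated break.
*/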
5028 static void *sbrk (long size) {
5029     static long g_pagesize, g_my_pagesize;
5030     static long g_regionsize, g_my_regionsize;
5031     static region_list_entry *g_last;
5032     void *result = (void *) MORECORE_FAILURE;
5033 #ifdef TRACE
5034     printf ("sbrk %d\n", size);
5035 #endif
5036 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5037     /* Wait for spin lock */
5038     slwait (&g_sl);
5039 #endif
5040     /* First time initialization */
5041     if (! g_pagesize) {
5042         g_pagesize = getpagesize ();
5043         g_my_pagesize = g_pagesize << SBRK_SCALE;
5044     }
5045     if (! g_regionsize) {
5046         g_regionsize = getregionsize ();
5047         g_my_regionsize = g_regionsize << SBRK_SCALE;
5048     }
5049     if (! g_last) {
5050         if (! region_list_append (&g_last, 0, 0))
5051            goto sbrk_exit;
5052     }
5053     /* Assert invariants */
5054     assert (g_last);
5055     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5056             g_last->top_allocated <= g_last->top_committed);
5057     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5058             g_last->top_committed <= g_last->top_reserved &&
5059             (unsigned) g_last->top_committed % g_pagesize == 0);
5060     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5061     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5062     /* Allocation requested? */
5063     if (size >= 0) {
5064         /* Allocation size is the requested size */
5065         long allocate_size = size;
5066         /* Compute the size to commit */
5067         long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5068         /* Do we reach the commit limit? */
5069         if (to_commit > 0) {
5070             /* Round size to commit */
5071             long commit_size = CEIL (to_commit, g_my_pagesize);
5072             /* Compute the size to reserve */
5073             long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
5074             /* Do we reach the reserve limit? */
5075             if (to_reserve > 0) {
5076                 /* Compute the remaining size to commit in the current region */
5077                 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
5078                 if (remaining_commit_size > 0) {
5079                     /* Assert preconditions */
5080                     assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5081                     assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
5082                         /* Commit this */
5083                         void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
5084 							                                 MEM_COMMIT, PAGE_READWRITE);
5085                         /* Check returned pointer for consistency */
5086                         if (base_committed != g_last->top_committed)
5087                             goto sbrk_exit;
5088                         /* Assert postconditions */
5089                         assert ((unsigned) base_committed % g_pagesize == 0);
5090 #ifdef TRACE
5091                         printf ("Commit %p %d\n", base_committed, remaining_commit_size);
5092 #endif
5093                         /* Adjust the regions commit top */
5094                         g_last->top_committed = (char *) base_committed + remaining_commit_size;
5095                     }
5096                 } {
5097                     /* Now we are going to search and reserve. */
5098                     int contiguous = -1;
5099                     int found = FALSE;
5100                     MEMORY_BASIC_INFORMATION memory_info;
5101                     void *base_reserved;
5102                     long reserve_size;
5103                     do {
5104                         /* Assume contiguous memory */
5105                         contiguous = TRUE;
5106                         /* Round size to reserve */
5107                         reserve_size = CEIL (to_reserve, g_my_regionsize);
5108                         /* Start with the current region's top */
5109                         memory_info.BaseAddress = g_last->top_reserved;
5110                         /* Assert preconditions */
5111                         assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5112                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5113                         while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
5114                             /* Assert postconditions */
5115                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5116 #ifdef TRACE
5117                             printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
5118                                     memory_info.State == MEM_FREE ? "FREE":
5119                                     (memory_info.State == MEM_RESERVE ? "RESERVED":
5120                                      (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
5121 #endif
5122                             /* Region is free, well aligned and big enough: we are done */
5123                             if (memory_info.State == MEM_FREE &&
5124                                 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
5125                                 memory_info.RegionSize >= (unsigned) reserve_size) {
5126                                 found = TRUE;
5127                                 break;
5128                             }
5129                             /* From now on we can't get contiguous memory! */
5130                             contiguous = FALSE;
5131                             /* Recompute size to reserve */
5132                             reserve_size = CEIL (allocate_size, g_my_regionsize);
5133                             memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
5134                             /* Assert preconditions */
5135                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
5136                             assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5137                         }
5138                         /* Search failed? */
5139                         if (! found)
5140                             goto sbrk_exit;
5141                         /* Assert preconditions */
5142                         assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
5143                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
5144                         /* Try to reserve this */
5145                         base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
5146 					                                  MEM_RESERVE, PAGE_NOACCESS);
5147                         if (! base_reserved) {
5148                             int rc = GetLastError ();
5149                             if (rc != ERROR_INVALID_ADDRESS)
5150                                 goto sbrk_exit;
5151                         }
5152                         /* A null pointer signals (hopefully) a race condition with another thread. */
5153                         /* In this case, we try again. */
5154                     } while (! base_reserved);
5155                     /* Check returned pointer for consistency */
5156                     if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
5157                         goto sbrk_exit;
5158                     /* Assert postconditions */
5159                     assert ((unsigned) base_reserved % g_regionsize == 0);
5160 #ifdef TRACE
5161                     printf ("Reserve %p %d\n", base_reserved, reserve_size);
5162 #endif
5163                     /* Did we get contiguous memory? */
5164                     if (contiguous) {
5165                         long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
5166                         /* Adjust allocation size */
5167                         allocate_size -= start_size;
5168                         /* Adjust the regions allocation top */
5169                         g_last->top_allocated = g_last->top_committed;
5170                         /* Recompute the size to commit */
5171                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5172                         /* Round size to commit */
5173                         commit_size = CEIL (to_commit, g_my_pagesize);
5174                     }
5175                     /* Append the new region to the list */
5176                     if (! region_list_append (&g_last, base_reserved, reserve_size))
5177                         goto sbrk_exit;
5178                     /* Didn't we get contiguous memory? */
5179                     if (! contiguous) {
5180                         /* Recompute the size to commit */
5181                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
5182                         /* Round size to commit */
5183                         commit_size = CEIL (to_commit, g_my_pagesize);
5184                     }
5185                 }
5186             }
5187             /* Assert preconditions */
5188             assert ((unsigned) g_last->top_committed % g_pagesize == 0);
5189             assert (0 < commit_size && commit_size % g_pagesize == 0); {
5190                 /* Commit this */
5191                 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
5192 				    			                     MEM_COMMIT, PAGE_READWRITE);
5193                 /* Check returned pointer for consistency */
5194                 if (base_committed != g_last->top_committed)
5195                     goto sbrk_exit;
5196                 /* Assert postconditions */
5197                 assert ((unsigned) base_committed % g_pagesize == 0);
5198 #ifdef TRACE
5199                 printf ("Commit %p %d\n", base_committed, commit_size);
5200 #endif
5201                 /* Adjust the regions commit top */
5202                 g_last->top_committed = (char *) base_committed + commit_size;
5203             }
5204         }
5205         /* Adjust the regions allocation top */
5206         g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
5207         result = (char *) g_last->top_allocated - size;
5208     /* Deallocation requested? */
5209     } else if (size < 0) {
5210         long deallocate_size = - size;
5211         /* As long as we have a region to release */
5212         while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
5213             /* Get the size to release */
5214             long release_size = g_last->reserve_size;
5215             /* Get the base address */
5216             void *base_reserved = (char *) g_last->top_reserved - release_size;
5217             /* Assert preconditions */
5218             assert ((unsigned) base_reserved % g_regionsize == 0);
5219             assert (0 < release_size && release_size % g_regionsize == 0); {
5220                 /* Release this */
5221                 int rc = VirtualFree (base_reserved, 0,
5222                                       MEM_RELEASE);
5223                 /* Check returned code for consistency */
5224                 if (! rc)
5225                     goto sbrk_exit;
5226 #ifdef TRACE
5227                 printf ("Release %p %d\n", base_reserved, release_size);
5228 #endif
5229             }
5230             /* Adjust deallocation size */
5231             deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
5232             /* Remove the old region from the list */
5233             if (! region_list_remove (&g_last))
5234                 goto sbrk_exit;
5235         } {
5236             /* Compute the size to decommit */
5237             long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
5238             if (to_decommit >= g_my_pagesize) {
5239                 /* Compute the size to decommit */
5240                 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
5241                 /*  Compute the base address */
5242                 void *base_committed = (char *) g_last->top_committed - decommit_size;
5243                 /* Assert preconditions */
5244                 assert ((unsigned) base_committed % g_pagesize == 0);
5245                 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
5246                     /* Decommit this */
5247                     int rc = VirtualFree ((char *) base_committed, decommit_size,
5248                                           MEM_DECOMMIT);
5249                     /* Check returned code for consistency */
5250                     if (! rc)
5251                         goto sbrk_exit;
5252 #ifdef TRACE
5253                     printf ("Decommit %p %d\n", base_committed, decommit_size);
5254 #endif
5255                 }
5256                 /* Adjust deallocation size and regions commit and allocate top */
5257                 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
5258                 g_last->top_committed = base_committed;
5259                 g_last->top_allocated = base_committed;
5260             }
5261         }
5262         /* Adjust regions allocate top */
5263         g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
5264         /* Check for underflow */
5265         if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
5266             g_last->top_allocated > g_last->top_committed) {
5267             /* Adjust regions allocate top */
5268             g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
5269             goto sbrk_exit;
5270         }
5271         result = g_last->top_allocated;
5272     }
5273     /* Assert invariants */
5274     assert (g_last);
5275     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
5276             g_last->top_allocated <= g_last->top_committed);
5277     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
5278             g_last->top_committed <= g_last->top_reserved &&
5279             (unsigned) g_last->top_committed % g_pagesize == 0);
5280     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
5281     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
5282 
5283 sbrk_exit:
5284 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
5285     /* Release spin lock */
5286     slrelease (&g_sl);
5287 #endif
5288     return result;
5289 }
5290 
5291 static void vminfo (CHUNK_SIZE_T  *free, CHUNK_SIZE_T  *reserved, CHUNK_SIZE_T  *committed) {
5292     MEMORY_BASIC_INFORMATION memory_info;
5293     memory_info.BaseAddress = 0;
5294     *free = *reserved = *committed = 0;
5295     while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
5296         switch (memory_info.State) {
5297         case MEM_FREE:
5298             *free += memory_info.RegionSize;
5299             break;
5300         case MEM_RESERVE:
5301             *reserved += memory_info.RegionSize;
5302             break;
5303         case MEM_COMMIT:
5304             *committed += memory_info.RegionSize;
5305             break;
5306         }
5307         memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
5308     }
5309 }
5310 
5311 static int cpuinfo (int whole, CHUNK_SIZE_T  *kernel, CHUNK_SIZE_T  *user) {
5312     if (whole) {
5313         __int64 creation64, exit64, kernel64, user64;
5314         int rc = GetProcessTimes (GetCurrentProcess (),
5315                                   (FILETIME *) &creation64,
5316                                   (FILETIME *) &exit64,
5317                                   (FILETIME *) &kernel64,
5318                                   (FILETIME *) &user64);
5319         if (! rc) {
5320             *kernel = 0;
5321             *user = 0;
5322             return FALSE;
5323         }
5324         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5325         *user = (CHUNK_SIZE_T) (user64 / 10000);
5326         return TRUE;
5327     } else {
5328         __int64 creation64, exit64, kernel64, user64;
5329         int rc = GetThreadTimes (GetCurrentThread (),
5330                                  (FILETIME *) &creation64,
5331                                  (FILETIME *) &exit64,
5332                                  (FILETIME *) &kernel64,
5333                                  (FILETIME *) &user64);
5334         if (! rc) {
5335             *kernel = 0;
5336             *user = 0;
5337             return FALSE;
5338         }
5339         *kernel = (CHUNK_SIZE_T) (kernel64 / 10000);
5340         *user = (CHUNK_SIZE_T) (user64 / 10000);
5341         return TRUE;
5342     }
5343 }
5344 
5345 #endif /* WIN32 */
5346 
5347 /* ------------------------------------------------------------
5348 History:
5349     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
5350       * Fix malloc_state bitmap array misdeclaration
5351 
5352     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
5353       * Allow tuning of FIRST_SORTED_BIN_SIZE
5354       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5355       * Better detection and support for non-contiguousness of MORECORE.
5356         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5357       * Bypass most of malloc if no frees. Thanks to Emery Berger.
5358       * Fix freeing of old top non-contiguous chunk in sysmalloc.
5359       * Raised default trim and map thresholds to 256K.
5360       * Fix mmap-related #defines. Thanks to Lubos Lunak.
5361       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5362       * Branch-free bin calculation
5363       * Default trim and mmap thresholds now 256K.
5364 
5365     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
5366       * Introduce independent_comalloc and independent_calloc.
5367         Thanks to Michael Pachos for motivation and help.
5368       * Make optional .h file available
5369       * Allow > 2GB requests on 32bit systems.
5370       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5371         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5372         and Anonymous.
5373       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5374         helping test this.)
5375       * memalign: check alignment arg
5376       * realloc: don't try to shift chunks backwards, since this
5377         leads to  more fragmentation in some programs and doesn't
5378         seem to help in any others.
5379       * Collect all cases in malloc requiring system memory into sYSMALLOc
5380       * Use mmap as backup to sbrk
5381       * Place all internal state in malloc_state
5382       * Introduce fastbins (although similar to 2.5.1)
5383       * Many minor tunings and cosmetic improvements
5384       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5385       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5386         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5387       * Include errno.h to support default failure action.
5388 
5389     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
5390       * return null for negative arguments
5391       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5392          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5393           (e.g. WIN32 platforms)
5394          * Cleanup header file inclusion for WIN32 platforms
5395          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5396          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5397            memory allocation routines
5398          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5399          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5400            usage of 'assert' in non-WIN32 code
5401          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5402            avoid infinite loop
5403       * Always call 'fREe()' rather than 'free()'
5404 
5405     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
5406       * Fixed ordering problem with boundary-stamping
5407 
5408     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
5409       * Added pvalloc, as recommended by H.J. Liu
5410       * Added 64bit pointer support mainly from Wolfram Gloger
5411       * Added anonymously donated WIN32 sbrk emulation
5412       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5413       * malloc_extend_top: fix mask error that caused wastage after
5414         foreign sbrks
5415       * Add linux mremap support code from HJ Liu
5416 
5417     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
5418       * Integrated most documentation with the code.
5419       * Add support for mmap, with help from
5420         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5421       * Use last_remainder in more cases.
5422       * Pack bins using idea from colin@nyx10.cs.du.edu
5423       * Use ordered bins instead of best-fit threshold
5424       * Eliminate block-local decls to simplify tracing and debugging.
5425       * Support another case of realloc via move into top
5426       * Fix error occurring when initial sbrk_base not word-aligned.
5427       * Rely on page size for units instead of SBRK_UNIT to
5428         avoid surprises about sbrk alignment conventions.
5429       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5430         (raymond@es.ele.tue.nl) for the suggestion.
5431       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5432       * More precautions for cases where other routines call sbrk,
5433         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5434       * Added macros etc., allowing use in linux libc from
5435         H.J. Lu (hjl@gnu.ai.mit.edu)
5436       * Inverted this history list
5437 
5438     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
5439       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5440       * Removed all preallocation code since under current scheme
5441         the work required to undo bad preallocations exceeds
5442         the work saved in good cases for most test programs.
5443       * No longer use return list or unconsolidated bins since
5444         no scheme using them consistently outperforms those that don't
5445         given above changes.
5446       * Use best fit for very large chunks to prevent some worst-cases.
5447       * Added some support for debugging
5448 
5449     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
5450       * Removed footers when chunks are in use. Thanks to
5451         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5452 
5453     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
5454       * Added malloc_trim, with help from Wolfram Gloger
5455         (wmglo@Dent.MED.Uni-Muenchen.DE).
5456 
5457     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
5458 
5459     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
5460       * realloc: try to expand in both directions
5461       * malloc: swap order of clean-bin strategy;
5462       * realloc: only conditionally expand backwards
5463       * Try not to scavenge used bins
5464       * Use bin counts as a guide to preallocation
5465       * Occasionally bin return list chunks in first scan
5466       * Add a few optimizations from colin@nyx10.cs.du.edu
5467 
5468     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
5469       * faster bin computation & slightly different binning
5470       * merged all consolidations to one part of malloc proper
5471          (eliminating old malloc_find_space & malloc_clean_bin)
5472       * Scan 2 returns chunks (not just 1)
5473       * Propagate failure in realloc if malloc returns 0
5474       * Add stuff to allow compilation on non-ANSI compilers
5475           from kpv@research.att.com
5476 
5477     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
5478       * removed potential for odd address access in prev_chunk
5479       * removed dependency on getpagesize.h
5480       * misc cosmetics and a bit more internal documentation
5481       * anticosmetics: mangled names in macros to evade debugger strangeness
5482       * tested on sparc, hp-700, dec-mips, rs6000
5483           with gcc & native cc (hp, dec only) allowing
5484           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5485 
5486     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
5487       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5488          structure of old version,  but most details differ.)
5489 
5490 */
5491