1 #ifdef MALLOC_PROVIDED
2 int _dummy_mallocr = 1;
3 #else
4 /* ---------- To make a malloc.h, start cutting here ------------ */
5
6 /*
7 A version of malloc/free/realloc written by Doug Lea and released to the
8 public domain. Send questions/comments/complaints/performance data
9 to dl@cs.oswego.edu
10
11 * VERSION 2.6.4 Thu Nov 28 07:54:55 1996 Doug Lea (dl at gee)
12
13 Note: There may be an updated version of this malloc obtainable at
14 ftp://g.oswego.edu/pub/misc/malloc.c
15 Check before installing!
16
17 * Why use this malloc?
18
19 This is not the fastest, most space-conserving, most portable, or
20 most tunable malloc ever written. However it is among the fastest
21 while also being among the most space-conserving, portable and tunable.
22 Consistent balance across these factors results in a good general-purpose
23 allocator. For a high-level description, see
24 http://g.oswego.edu/dl/html/malloc.html
25
26 * Synopsis of public routines
27
28 (Much fuller descriptions are contained in the program documentation below.)
29
30 malloc(size_t n);
31 Return a pointer to a newly allocated chunk of at least n bytes, or null
32 if no space is available.
33 free(Void_t* p);
34 Release the chunk of memory pointed to by p, or no effect if p is null.
35 realloc(Void_t* p, size_t n);
36 Return a pointer to a chunk of size n that contains the same data
37 as does chunk p up to the minimum of (n, p's size) bytes, or null
38 if no space is available. The returned pointer may or may not be
39 the same as p. If p is null, equivalent to malloc. Unless the
40 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
41 size argument of zero (re)allocates a minimum-sized chunk.
42 memalign(size_t alignment, size_t n);
43 Return a pointer to a newly allocated chunk of n bytes, aligned
44 in accord with the alignment argument, which must be a power of
45 two.
46 valloc(size_t n);
47 Equivalent to memalign(pagesize, n), where pagesize is the page
48 size of the system (or as near to this as can be figured out from
49 all the includes/defines below.)
50 pvalloc(size_t n);
51 Equivalent to valloc(minimum-page-that-holds(n)), that is,
52 round up n to nearest pagesize.
53 calloc(size_t unit, size_t quantity);
54 Returns a pointer to quantity * unit bytes, with all locations
55 set to zero.
56 cfree(Void_t* p);
57 Equivalent to free(p).
58 malloc_trim(size_t pad);
59 Release all but pad bytes of freed top-most memory back
60 to the system. Return 1 if successful, else 0.
61 malloc_usable_size(Void_t* p);
62 Report the number of usable allocated bytes associated with allocated
63 chunk p. This may or may not report more bytes than were requested,
64 due to alignment and minimum size constraints.
65 malloc_stats();
66 Prints brief summary statistics on stderr.
67 mallinfo()
68 Returns (by copy) a struct containing various summary statistics.
69 mallopt(int parameter_number, int parameter_value)
70 Changes one of the tunable parameters described below. Returns
71 1 if successful in changing the parameter, else 0.
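
 Example of typical use (an illustrative sketch, not part of this
 file's code):

 int* p = (int*) malloc(100 * sizeof(int));
 if (p != 0) {
 int* q = (int*) realloc(p, 200 * sizeof(int));
 if (q != 0) p = q;
 free(p);
 }

 If the realloc succeeds, the first 100 ints are preserved and p is
 updated to the (possibly moved) chunk; if it fails, the original
 chunk is untouched and is still freed via p.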
72
73 * Vital statistics:
74
75 Alignment: 8-byte
76 8 byte alignment is currently hardwired into the design. This
77 seems to suffice for all current machines and C compilers.
78
79 Assumed pointer representation: 4 or 8 bytes
80 Code for 8-byte pointers is untested by me but is reported
81 to work reliably by Wolfram Gloger, who contributed most of
82 the changes supporting this.
83
84 Assumed size_t representation: 4 or 8 bytes
85 Note that size_t is allowed to be 4 bytes even if pointers are 8.
86
87 Minimum overhead per allocated chunk: 4 or 8 bytes
88 Each malloced chunk has a hidden overhead of 4 bytes holding size
89 and status information.
90
91 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
92 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
93
94 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
95 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
96 needed; 4 (8) for a trailing size field
97 and 8 (16) bytes for free list pointers. Thus, the minimum
98 allocatable size is 16/24/32 bytes.
99
100 Even a request for zero bytes (i.e., malloc(0)) returns a
101 pointer to something of the minimum allocatable size.
102
103 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
104 8-byte size_t: 2^63 - 16 bytes
105
106 It is assumed that (possibly signed) size_t bit values suffice to
107 represent chunk sizes. `Possibly signed' is due to the fact
108 that `size_t' may be defined on a system as either a signed or
109 an unsigned type. To be conservative, values that would appear
110 as negative numbers are avoided.
111 Requests for sizes with a negative sign bit will return a
112 minimum-sized chunk.
113
114 Maximum overhead wastage per allocated chunk: normally 15 bytes
115
116 Alignment demands, plus the minimum allocatable size restriction
117 make the normal worst-case wastage 15 bytes (i.e., up to 15
118 more bytes will be allocated than were requested in malloc), with
119 two exceptions:
120 1. Because requests for zero bytes allocate non-zero space,
121 the worst case wastage for a request of zero bytes is 24 bytes.
122 2. For requests >= mmap_threshold that are serviced via
123 mmap(), the worst case wastage is 8 bytes plus the remainder
124 from a system page (the minimal mmap unit); typically 4096 bytes.
125
126 * Limitations
127
128 Here are some features that are NOT currently supported
129
130 * No user-definable hooks for callbacks and the like.
131 * No automated mechanism for fully checking that all accesses
132 to malloced memory stay within their bounds.
133 * No support for compaction.
134
135 * Synopsis of compile-time options:
136
137 People have reported using previous versions of this malloc on all
138 versions of Unix, sometimes by tweaking some of the defines
139 below. It has been tested most extensively on Solaris and
140 Linux. It is also reported to work on WIN32 platforms.
141 People have also reported adapting this malloc for use in
142 stand-alone embedded systems.
143
144 The implementation is in straight, hand-tuned ANSI C. Among other
145 consequences, it uses a lot of macros. Because of this, to be at
146 all usable, this code should be compiled using an optimizing compiler
147 (for example gcc -O2) that can simplify expressions and control
148 paths.
149
150 __STD_C (default: derived from C compiler defines)
151 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
152 a C compiler sufficiently close to ANSI to get away with it.
153 DEBUG (default: NOT defined)
154 Define to enable debugging. Adds fairly extensive assertion-based
155 checking to help track down memory errors, but noticeably slows down
156 execution.
157 SEPARATE_OBJECTS (default: NOT defined)
158 Define this to compile into separate .o files. You must then
159 compile malloc.c several times, defining a DEFINE_* macro each
160 time. The list of DEFINE_* macros appears below.
161 MALLOC_LOCK (default: NOT defined)
162 MALLOC_UNLOCK (default: NOT defined)
163 Define these to C expressions which are run to lock and unlock
164 the malloc data structures. Calls may be nested; that is,
165 MALLOC_LOCK may be called more than once before the corresponding
166 MALLOC_UNLOCK calls. MALLOC_LOCK must avoid waiting for a lock
167 that it already holds.
168 MALLOC_ALIGNMENT (default: NOT defined)
169 Define this to 16 if you need 16 byte alignment instead of 8 byte alignment
170 which is the normal default.
171 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
172 Define this if you think that realloc(p, 0) should be equivalent
173 to free(p). Otherwise, since malloc returns a unique pointer for
174 malloc(0), so does realloc(p, 0).
175 HAVE_MEMCPY (default: defined)
176 Define if you are not otherwise using ANSI STD C, but still
177 have memcpy and memset in your C library and want to use them.
178 Otherwise, simple internal versions are supplied.
179 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
180 Define as 1 if you want the C library versions of memset and
181 memcpy called in realloc and calloc (otherwise macro versions are used).
182 At least on some platforms, the simple macro versions usually
183 outperform libc versions.
184 HAVE_MMAP (default: defined as 1)
185 Define to non-zero to optionally make malloc() use mmap() to
186 allocate very large blocks.
187 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
188 Define to non-zero to optionally make realloc() use mremap() to
189 reallocate very large blocks.
190 malloc_getpagesize (default: derived from system #includes)
191 Either a constant or routine call returning the system page size.
192 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
193 Optionally define if you are on a system with a /usr/include/malloc.h
194 that declares struct mallinfo. It is never strictly necessary to
195 define this even if you have such a file, but doing so will ensure consistency.
196 INTERNAL_SIZE_T (default: size_t)
197 Define to a 32-bit type (probably `unsigned int') if you are on a
198 64-bit machine, yet do not want or need to allow malloc requests of
199 greater than 2^31 to be handled. This saves space, especially for
200 very small chunks.
201 INTERNAL_LINUX_C_LIB (default: NOT defined)
202 Defined only when compiled as part of Linux libc.
203 Also note that there is some odd internal name-mangling via defines
204 (for example, internally, `malloc' is named `mALLOc') needed
205 when compiling in this case. These look funny but don't otherwise
206 affect anything.
207 INTERNAL_NEWLIB (default: NOT defined)
208 Defined only when compiled as part of the Cygnus newlib
209 distribution.
210 WIN32 (default: undefined)
211 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
212 LACKS_UNISTD_H (default: undefined)
213 Define this if your system does not have a <unistd.h>.
214 MORECORE (default: sbrk)
215 The name of the routine to call to obtain more memory from the system.
216 MORECORE_FAILURE (default: -1)
217 The value returned upon failure of MORECORE.
218 MORECORE_CLEARS (default 1)
219 True (1) if the routine mapped to MORECORE zeroes out memory (which
220 holds for sbrk).
221 DEFAULT_TRIM_THRESHOLD
222 DEFAULT_TOP_PAD
223 DEFAULT_MMAP_THRESHOLD
224 DEFAULT_MMAP_MAX
225 Default values of tunable parameters (described in detail below)
226 controlling interaction with host system routines (sbrk, mmap, etc).
227 These values may also be changed dynamically via mallopt(). The
228 preset defaults are those that give best performance for typical
229 programs/systems.
230
231
232 */
233
234
235
236
237 /* Preliminaries */
238
239 #ifndef __STD_C
240 #ifdef __STDC__
241 #define __STD_C 1
242 #else
243 #if __cplusplus
244 #define __STD_C 1
245 #else
246 #define __STD_C 0
247 #endif /*__cplusplus*/
248 #endif /*__STDC__*/
249 #endif /*__STD_C*/
250
251 #ifndef Void_t
252 #if __STD_C
253 #define Void_t void
254 #else
255 #define Void_t char
256 #endif
257 #endif /*Void_t*/
258
259 #if __STD_C
260 #include <stddef.h> /* for size_t */
261 #else
262 #include <sys/types.h>
263 #endif
264
265 #ifdef __cplusplus
266 extern "C" {
267 #endif
268
269 #include <stdio.h> /* needed for malloc_stats */
270 #include <limits.h> /* needed for overflow checks */
271
272 #ifdef WIN32
273 #define WIN32_LEAN_AND_MEAN
274 #include <windows.h>
275 #endif
276
277 /*
278 Compile-time options
279 */
280
281
282 /*
283
284 Special defines for Cygnus newlib distribution.
285
286 */
287
288 #ifdef INTERNAL_NEWLIB
289
290 #include <sys/config.h>
291
292 /*
293 In newlib, all the publicly visible routines take a reentrancy
294 pointer. We don't currently do anything much with it, but we do
295 pass it to the lock routine.
296 */
297
298 #include <reent.h>
299
300 #define POINTER_UINT unsigned _POINTER_INT
301 #define SEPARATE_OBJECTS
302 #define HAVE_MMAP 0
303 #define MORECORE(size) _sbrk_r(reent_ptr, (size))
304 #define MORECORE_CLEARS 0
305 #define MALLOC_LOCK __malloc_lock(reent_ptr)
306 #define MALLOC_UNLOCK __malloc_unlock(reent_ptr)
307
308 #ifdef __CYGWIN__
309 # undef _WIN32
310 # undef WIN32
311 #endif
312
313 #ifndef _WIN32
314 #ifdef SMALL_MEMORY
315 #define malloc_getpagesize (128)
316 #else
317 #define malloc_getpagesize (4096)
318 #endif
319 #endif
320
321 #if __STD_C
322 extern void __malloc_lock(struct _reent *);
323 extern void __malloc_unlock(struct _reent *);
324 #else
325 extern void __malloc_lock();
326 extern void __malloc_unlock();
327 #endif
328
329 #if __STD_C
330 #define RARG struct _reent *reent_ptr,
331 #define RONEARG struct _reent *reent_ptr
332 #else
333 #define RARG reent_ptr
334 #define RONEARG reent_ptr
335 #define RDECL struct _reent *reent_ptr;
336 #endif
337
338 #define RCALL reent_ptr,
339 #define RONECALL reent_ptr
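
/*
   For example, with INTERNAL_NEWLIB and an ANSI compiler, the public
   declaration

     Void_t* mALLOc(RARG size_t);

   expands (together with the mALLOc -> _malloc_r renaming below) to

     Void_t* _malloc_r(struct _reent *reent_ptr, size_t);

   and internal calls forward the reentrancy pointer via RCALL/RONECALL.
*/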
340
341 #else /* ! INTERNAL_NEWLIB */
342
343 #define POINTER_UINT unsigned long
344 #define RARG
345 #define RONEARG
346 #define RDECL
347 #define RCALL
348 #define RONECALL
349
350 #endif /* ! INTERNAL_NEWLIB */
351
352 /*
353 Debugging:
354
355 Because freed chunks may be overwritten with link fields, this
356 malloc will often die when freed memory is overwritten by user
357 programs. This can be very effective (albeit in an annoying way)
358 in helping track down dangling pointers.
359
360 If you compile with -DDEBUG, a number of assertion checks are
361 enabled that will catch more memory errors. You probably won't be
362 able to make much sense of the actual assertion errors, but they
363 should help you locate incorrectly overwritten memory. The
364 checking is fairly extensive, and will slow down execution
365 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
366 attempt to check every non-mmapped allocated and free chunk in the
367 course of computing the summaries. (By nature, mmapped regions
368 cannot be checked very much automatically.)
369
370 Setting DEBUG may also be helpful if you are trying to modify
371 this code. The assertions in the check routines spell out in more
372 detail the assumptions and invariants underlying the algorithms.
373
374 */
375
376 #if DEBUG
377 #include <assert.h>
378 #else
379 #define assert(x) ((void)0)
380 #endif
381
382
383 /*
384 SEPARATE_OBJECTS should be defined if you want each function to go
385 into a separate .o file. You must then compile malloc.c once per
386 function, defining the appropriate DEFINE_ macro. See below for the
387 list of macros.
388 */
389
390 #ifndef SEPARATE_OBJECTS
391 #define DEFINE_MALLOC
392 #define DEFINE_FREE
393 #define DEFINE_REALLOC
394 #define DEFINE_CALLOC
395 #define DEFINE_CFREE
396 #define DEFINE_MEMALIGN
397 #define DEFINE_VALLOC
398 #define DEFINE_PVALLOC
399 #define DEFINE_MALLINFO
400 #define DEFINE_MALLOC_STATS
401 #define DEFINE_MALLOC_USABLE_SIZE
402 #define DEFINE_MALLOPT
403
404 #define STATIC static
405 #else
406 #define STATIC
407 #endif
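
/*
   For example, one way (a sketch only, not the official build recipe)
   to drive a SEPARATE_OBJECTS build is to compile this file once per
   DEFINE_* macro:

     gcc -O2 -DSEPARATE_OBJECTS -DDEFINE_MALLOC -c malloc.c -o mallocr.o
     gcc -O2 -DSEPARATE_OBJECTS -DDEFINE_FREE -c malloc.c -o freer.o

   and so on for each of the DEFINE_* macros listed above.
*/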
408
409 /*
410 Define MALLOC_LOCK and MALLOC_UNLOCK to C expressions to run to
411 lock and unlock the malloc data structures. MALLOC_LOCK may be
412 called recursively.
413 */
414
415 #ifndef MALLOC_LOCK
416 #define MALLOC_LOCK
417 #endif
418
419 #ifndef MALLOC_UNLOCK
420 #define MALLOC_UNLOCK
421 #endif
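
/*
   A minimal sketch of supplying these for a multithreaded build,
   assuming a recursive POSIX mutex named __malloc_recursive_mutex
   that is defined and initialized elsewhere (it is not part of this
   file):

     #define MALLOC_LOCK pthread_mutex_lock(&__malloc_recursive_mutex)
     #define MALLOC_UNLOCK pthread_mutex_unlock(&__malloc_recursive_mutex)

   The mutex must be of the recursive kind (PTHREAD_MUTEX_RECURSIVE),
   since MALLOC_LOCK may be entered again before the matching
   MALLOC_UNLOCK runs.
*/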
422
423 /*
424 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
425 of chunk sizes. On a 64-bit machine, you can reduce malloc
426 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
427 at the expense of not being able to handle requests greater than
428 2^31. This limitation is hardly ever a concern; you are encouraged
429 to set this. However, the default version is the same as size_t.
430 */
431
432 #ifndef INTERNAL_SIZE_T
433 #define INTERNAL_SIZE_T size_t
434 #endif
435
436 /*
437 The following is needed on implementations where long is wider than
438 size_t. The problem is caused because the code performs subtractions of
439 size_t values and stores the result in long values. In the case
440 where long > size_t and the first value is actually less than
441 the second value, the resultant value is positive. For example,
442 (long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF
443 which is 2^32 - 1 instead of 0xFFFFFFFFFFFFFFFF, which is -1. This is
444 because conversion from unsigned to signed does not sign extend.
445 */
446
447 #define long_sub_size_t(x, y) \
448 (sizeof (long) > sizeof (INTERNAL_SIZE_T) && (x) < (y) \
449 ? -(long) ((y) - (x)) \
450 : (long) ((x) - (y)))
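
/*
   For instance, with a 4-byte INTERNAL_SIZE_T and an 8-byte long,
   long_sub_size_t(0, 1) evaluates to -1, whereas the naive
   (long)(x - y) would yield 0x00000000FFFFFFFF (4294967295).
*/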
451
452 /*
453 REALLOC_ZERO_BYTES_FREES should be set if a call to
454 realloc with zero bytes should be the same as a call to free.
455 Some people think it should. Otherwise, since this malloc
456 returns a unique pointer for malloc(0), so does realloc(p, 0).
457 */
458
459
460 /* #define REALLOC_ZERO_BYTES_FREES */
461
462
463 /*
464 WIN32 causes an emulation of sbrk to be compiled in;
465 mmap-based options are not currently supported in WIN32.
466 */
467
468 /* #define WIN32 */
469 #ifdef WIN32
470 #define MORECORE wsbrk
471 #define HAVE_MMAP 0
472 #endif
473
474
475 /*
476 HAVE_MEMCPY should be defined if you are not otherwise using
477 ANSI STD C, but still have memcpy and memset in your C library
478 and want to use them in calloc and realloc. Otherwise simple
479 macro versions are defined here.
480
481 USE_MEMCPY should be defined as 1 if you actually want to
482 have memset and memcpy called. People report that the macro
483 versions are often enough faster than libc versions on many
484 systems that it is better to use them.
485
486 */
487
488 #define HAVE_MEMCPY
489
490 #ifndef USE_MEMCPY
491 #ifdef HAVE_MEMCPY
492 #define USE_MEMCPY 1
493 #else
494 #define USE_MEMCPY 0
495 #endif
496 #endif
497
498 #if (__STD_C || defined(HAVE_MEMCPY))
499
500 #if __STD_C
501 void* memset(void*, int, size_t);
502 void* memcpy(void*, const void*, size_t);
503 #else
504 Void_t* memset();
505 Void_t* memcpy();
506 #endif
507 #endif
508
509 #if USE_MEMCPY
510
511 /* The following macros are only invoked with (2n+1)-multiples of
512 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
513 for fast inline execution when n is small. */
514
515 #define MALLOC_ZERO(charp, nbytes) \
516 do { \
517 INTERNAL_SIZE_T mzsz = (nbytes); \
518 if(mzsz <= 9*sizeof(mzsz)) { \
519 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
520 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
521 *mz++ = 0; \
522 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
523 *mz++ = 0; \
524 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
525 *mz++ = 0; }}} \
526 *mz++ = 0; \
527 *mz++ = 0; \
528 *mz = 0; \
529 } else memset((charp), 0, mzsz); \
530 } while(0)
531
532 #define MALLOC_COPY(dest,src,nbytes) \
533 do { \
534 INTERNAL_SIZE_T mcsz = (nbytes); \
535 if(mcsz <= 9*sizeof(mcsz)) { \
536 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
537 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
538 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
539 *mcdst++ = *mcsrc++; \
540 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
541 *mcdst++ = *mcsrc++; \
542 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
543 *mcdst++ = *mcsrc++; }}} \
544 *mcdst++ = *mcsrc++; \
545 *mcdst++ = *mcsrc++; \
546 *mcdst = *mcsrc ; \
547 } else memcpy(dest, src, mcsz); \
548 } while(0)
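
/*
   For example, a call with nbytes == 5*sizeof(INTERNAL_SIZE_T) takes
   the first unrolled branch and stores exactly five words (two in the
   ">= 5" case plus the final three); sizes above nine words fall
   through to the library memset/memcpy.
*/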
549
550 #else /* !USE_MEMCPY */
551
552 /* Use Duff's device for good zeroing/copying performance. */
553
554 #define MALLOC_ZERO(charp, nbytes) \
555 do { \
556 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
557 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
558 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
559 switch (mctmp) { \
560 case 0: for(;;) { *mzp++ = 0; \
561 case 7: *mzp++ = 0; \
562 case 6: *mzp++ = 0; \
563 case 5: *mzp++ = 0; \
564 case 4: *mzp++ = 0; \
565 case 3: *mzp++ = 0; \
566 case 2: *mzp++ = 0; \
567 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
568 } \
569 } while(0)
570
571 #define MALLOC_COPY(dest,src,nbytes) \
572 do { \
573 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
574 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
575 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
576 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
577 switch (mctmp) { \
578 case 0: for(;;) { *mcdst++ = *mcsrc++; \
579 case 7: *mcdst++ = *mcsrc++; \
580 case 6: *mcdst++ = *mcsrc++; \
581 case 5: *mcdst++ = *mcsrc++; \
582 case 4: *mcdst++ = *mcsrc++; \
583 case 3: *mcdst++ = *mcsrc++; \
584 case 2: *mcdst++ = *mcsrc++; \
585 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
586 } \
587 } while(0)
588
589 #endif
590
591
592 /*
593 Define HAVE_MMAP to optionally make malloc() use mmap() to
594 allocate very large blocks. These will be returned to the
595 operating system immediately after a free().
596 */
597
598 #ifndef HAVE_MMAP
599 #define HAVE_MMAP 1
600 #endif
601
602 /*
603 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
604 large blocks. This is currently only possible on Linux with
605 kernel versions newer than 1.3.77.
606 */
607
608 #ifndef HAVE_MREMAP
609 #ifdef INTERNAL_LINUX_C_LIB
610 #define HAVE_MREMAP 1
611 #else
612 #define HAVE_MREMAP 0
613 #endif
614 #endif
615
616 #if HAVE_MMAP
617
618 #include <unistd.h>
619 #include <fcntl.h>
620 #include <sys/mman.h>
621
622 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
623 #define MAP_ANONYMOUS MAP_ANON
624 #endif
625
626 #endif /* HAVE_MMAP */
627
628 /*
629 Access to system page size. To the extent possible, this malloc
630 manages memory from the system in page-size units.
631
632 The following mechanics for getpagesize were adapted from
633 bsd/gnu getpagesize.h
634 */
635
636 #ifndef LACKS_UNISTD_H
637 # include <unistd.h>
638 #endif
639
640 #ifndef malloc_getpagesize
641 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
642 # ifndef _SC_PAGE_SIZE
643 # define _SC_PAGE_SIZE _SC_PAGESIZE
644 # endif
645 # endif
646 # ifdef _SC_PAGE_SIZE
647 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
648 # else
649 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
650 extern size_t getpagesize();
651 # define malloc_getpagesize getpagesize()
652 # else
653 # include <sys/param.h>
654 # ifdef EXEC_PAGESIZE
655 # define malloc_getpagesize EXEC_PAGESIZE
656 # else
657 # ifdef NBPG
658 # ifndef CLSIZE
659 # define malloc_getpagesize NBPG
660 # else
661 # define malloc_getpagesize (NBPG * CLSIZE)
662 # endif
663 # else
664 # ifdef NBPC
665 # define malloc_getpagesize NBPC
666 # else
667 # ifdef PAGESIZE
668 # define malloc_getpagesize PAGESIZE
669 # else
670 # define malloc_getpagesize (4096) /* just guess */
671 # endif
672 # endif
673 # endif
674 # endif
675 # endif
676 # endif
677 #endif
678
679
680
681 /*
682
683 This version of malloc supports the standard SVID/XPG mallinfo
684 routine that returns a struct containing the same kind of
685 information you can get from malloc_stats. It should work on
686 any SVID/XPG compliant system that has a /usr/include/malloc.h
687 defining struct mallinfo. (If you'd like to install such a thing
688 yourself, cut out the preliminary declarations as described above
689 and below and save them in a malloc.h file. But there's no
690 compelling reason to bother to do this.)
691
692 The main declaration needed is the mallinfo struct that is returned
693 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
694 bunch of fields, most of which are not even meaningful in this
695 version of malloc. Some of these fields are instead filled by
696 mallinfo() with other numbers that might possibly be of interest.
697
698 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
699 /usr/include/malloc.h file that includes a declaration of struct
700 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
701 version is declared below. These must be precisely the same for
702 mallinfo() to work.
703
704 */
705
706 /* #define HAVE_USR_INCLUDE_MALLOC_H */
707
708 #if HAVE_USR_INCLUDE_MALLOC_H
709 #include "/usr/include/malloc.h"
710 #else
711
712 /* SVID2/XPG mallinfo structure */
713
714 struct mallinfo {
715 int arena; /* total space allocated from system */
716 int ordblks; /* number of non-inuse chunks */
717 int smblks; /* unused -- always zero */
718 int hblks; /* number of mmapped regions */
719 int hblkhd; /* total space in mmapped regions */
720 int usmblks; /* unused -- always zero */
721 int fsmblks; /* unused -- always zero */
722 int uordblks; /* total allocated space */
723 int fordblks; /* total non-inuse space */
724 int keepcost; /* top-most, releasable (via malloc_trim) space */
725 };
726
727 /* SVID2/XPG mallopt options */
728
729 #define M_MXFAST 1 /* UNUSED in this malloc */
730 #define M_NLBLKS 2 /* UNUSED in this malloc */
731 #define M_GRAIN 3 /* UNUSED in this malloc */
732 #define M_KEEP 4 /* UNUSED in this malloc */
733
734 #endif
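
/*
   A short illustrative use of mallinfo() (a sketch; field meanings
   are annotated in the struct above):

     struct mallinfo mi = mallinfo();
     fprintf(stderr, "arena %d, in use %d, free %d\n",
             mi.arena, mi.uordblks, mi.fordblks);
*/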
735
736 /* mallopt options that actually do something */
737
738 #define M_TRIM_THRESHOLD -1
739 #define M_TOP_PAD -2
740 #define M_MMAP_THRESHOLD -3
741 #define M_MMAP_MAX -4
742
743
744
745 #ifndef DEFAULT_TRIM_THRESHOLD
746 #define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
747 #endif
748
749 /*
750 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
751 to keep before releasing via malloc_trim in free().
752
753 Automatic trimming is mainly useful in long-lived programs.
754 Because trimming via sbrk can be slow on some systems, and can
755 sometimes be wasteful (in cases where programs immediately
756 afterward allocate more large chunks) the value should be high
757 enough so that your overall system performance would improve by
758 releasing.
759
760 The trim threshold and the mmap control parameters (see below)
761 can be traded off with one another. Trimming and mmapping are
762 two different ways of releasing unused memory back to the
763 system. Between these two, it is often possible to keep
764 system-level demands of a long-lived program down to a bare
765 minimum. For example, in one test suite of sessions measuring
766 the XF86 X server on Linux, using a trim threshold of 128K and a
767 mmap threshold of 192K led to near-minimal long term resource
768 consumption.
769
770 If you are using this malloc in a long-lived program, it should
771 pay to experiment with these values. As a rough guide, you
772 might set it to a value close to the average size of a process
773 (program) running on your system. Releasing this much memory
774 would allow such a process to run in memory. Generally, it's
775 worth it to tune for trimming rather than memory mapping when a
776 program undergoes phases where several large chunks are
777 allocated and released in ways that can reuse each other's
778 storage, perhaps mixed with phases where there are no such
779 chunks at all. And in well-behaved long-lived programs,
780 controlling release of large blocks via trimming versus mapping
781 is usually faster.
782
783 However, in most programs, these parameters serve mainly as
784 protection against the system-level effects of carrying around
785 massive amounts of unneeded memory. Since frequent calls to
786 sbrk, mmap, and munmap otherwise degrade performance, the default
787 parameters are set to relatively high values that serve only as
788 safeguards.
789
790 The default trim value is high enough to cause trimming only in
791 fairly extreme (by current memory consumption standards) cases.
792 It must be greater than page size to have any useful effect. To
793 disable trimming completely, you can set it to (unsigned long)(-1).
794
795
796 */
797
798
799 #ifndef DEFAULT_TOP_PAD
800 #define DEFAULT_TOP_PAD (0)
801 #endif
802
803 /*
804 M_TOP_PAD is the amount of extra `padding' space to allocate or
805 retain whenever sbrk is called. It is used in two ways internally:
806
807 * When sbrk is called to extend the top of the arena to satisfy
808 a new malloc request, this much padding is added to the sbrk
809 request.
810
811 * When malloc_trim is called automatically from free(),
812 it is used as the `pad' argument.
813
814 In both cases, the actual amount of padding is rounded
815 so that the end of the arena is always a system page boundary.
816
817 The main reason for using padding is to avoid calling sbrk so
818 often. Having even a small pad greatly reduces the likelihood
819 that nearly every malloc request during program start-up (or
820 after trimming) will invoke sbrk, which needlessly wastes
821 time.
822
823 Automatic rounding-up to page-size units is normally sufficient
824 to avoid measurable overhead, so the default is 0. However, in
825 systems where sbrk is relatively slow, it can pay to increase
826 this value, at the expense of carrying around more memory than
827 the program needs.
828
829 */
830
831
832 #ifndef DEFAULT_MMAP_THRESHOLD
833 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
834 #endif
835
836 /*
837
838 M_MMAP_THRESHOLD is the request size threshold for using mmap()
839 to service a request. Requests of at least this size that cannot
840 be allocated using already-existing space will be serviced via mmap.
841 (If enough normal freed space already exists it is used instead.)
842
843 Using mmap segregates relatively large chunks of memory so that
844 they can be individually obtained and released from the host
845 system. A request serviced through mmap is never reused by any
846 other request (at least not directly; the system may just so
847 happen to remap successive requests to the same locations).
848
849 Segregating space in this way has the benefit that mmapped space
850 can ALWAYS be individually released back to the system, which
851 helps keep the system level memory demands of a long-lived
852 program low. Mapped memory can never become `locked' between
853 other chunks, as can happen with normally allocated chunks, which
854 means that even trimming via malloc_trim would not release them.
855
856 However, it has the disadvantages that:
857
858 1. The space cannot be reclaimed, consolidated, and then
859 used to service later requests, as happens with normal chunks.
860 2. It can lead to more wastage because of mmap page alignment
861 requirements
862 3. It causes malloc performance to be more dependent on host
863 system memory management support routines which may vary in
864 implementation quality and may impose arbitrary
865 limitations. Generally, servicing a request via normal
866 malloc steps is faster than going through a system's mmap.
867
868 All together, these considerations should lead you to use mmap
869 only for relatively large requests.
870
871
872 */
873
874
875
876 #ifndef DEFAULT_MMAP_MAX
877 #if HAVE_MMAP
878 #define DEFAULT_MMAP_MAX (64)
879 #else
880 #define DEFAULT_MMAP_MAX (0)
881 #endif
882 #endif
883
884 /*
885 M_MMAP_MAX is the maximum number of requests to simultaneously
886 service using mmap. This parameter exists because:
887
888 1. Some systems have a limited number of internal tables for
889 use by mmap.
890 2. In most systems, overreliance on mmap can degrade overall
891 performance.
892 3. If a program allocates many large regions, it is probably
893 better off using normal sbrk-based allocation routines that
894 can reclaim and reallocate normal heap memory. Using a
895 small value allows transition into this mode after the
896 first few allocations.
897
898 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
899 the default value is 0, and attempts to set it to non-zero values
900 in mallopt will fail.
901 */
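
/*
   An illustrative tuning sketch using mallopt(); the values are
   examples only, not recommendations:

     mallopt(M_TRIM_THRESHOLD, 256 * 1024);
     mallopt(M_TOP_PAD, 64 * 1024);
     mallopt(M_MMAP_THRESHOLD, 256 * 1024);
     mallopt(M_MMAP_MAX, 16);

   Each call returns 1 on success and 0 on failure (for example,
   setting M_MMAP_MAX to a non-zero value when HAVE_MMAP is 0).
*/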
902
903
904
905
906 /*
907
908 Special defines for linux libc
909
910 Except when compiled using these special defines for Linux libc
911 using weak aliases, this malloc is NOT designed to work in
912 multithreaded applications. No semaphores or other concurrency
913 control are provided to ensure that multiple malloc or free calls
914 don't run at the same time, which could be disastrous. A single
915 semaphore could be used across malloc, realloc, and free (which is
916 essentially the effect of the linux weak alias approach). It would
917 be hard to obtain finer granularity.
918
919 */
920
921
922 #ifdef INTERNAL_LINUX_C_LIB
923
924 #if __STD_C
925
926 Void_t * __default_morecore_init (ptrdiff_t);
927 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
928
929 #else
930
931 Void_t * __default_morecore_init ();
932 Void_t *(*__morecore)() = __default_morecore_init;
933
934 #endif
935
936 #define MORECORE (*__morecore)
937 #define MORECORE_FAILURE 0
938 #define MORECORE_CLEARS 1
939
940 #else /* INTERNAL_LINUX_C_LIB */
941
942 #ifndef INTERNAL_NEWLIB
943 #if __STD_C
944 extern Void_t* sbrk(ptrdiff_t);
945 #else
946 extern Void_t* sbrk();
947 #endif
948 #endif
949
950 #ifndef MORECORE
951 #define MORECORE sbrk
952 #endif
953
954 #ifndef MORECORE_FAILURE
955 #define MORECORE_FAILURE -1
956 #endif
957
958 #ifndef MORECORE_CLEARS
959 #define MORECORE_CLEARS 1
960 #endif
961
962 #endif /* INTERNAL_LINUX_C_LIB */
963
964 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
965
966 #define cALLOc __libc_calloc
967 #define fREe __libc_free
968 #define mALLOc __libc_malloc
969 #define mEMALIGn __libc_memalign
970 #define rEALLOc __libc_realloc
971 #define vALLOc __libc_valloc
972 #define pvALLOc __libc_pvalloc
973 #define mALLINFo __libc_mallinfo
974 #define mALLOPt __libc_mallopt
975
976 #pragma weak calloc = __libc_calloc
977 #pragma weak free = __libc_free
978 #pragma weak cfree = __libc_free
979 #pragma weak malloc = __libc_malloc
980 #pragma weak memalign = __libc_memalign
981 #pragma weak realloc = __libc_realloc
982 #pragma weak valloc = __libc_valloc
983 #pragma weak pvalloc = __libc_pvalloc
984 #pragma weak mallinfo = __libc_mallinfo
985 #pragma weak mallopt = __libc_mallopt
986
987 #else
988
989 #ifdef INTERNAL_NEWLIB
990
991 #define cALLOc _calloc_r
992 #define fREe _free_r
993 #define mALLOc _malloc_r
994 #define mEMALIGn _memalign_r
995 #define rEALLOc _realloc_r
996 #define vALLOc _valloc_r
997 #define pvALLOc _pvalloc_r
998 #define mALLINFo _mallinfo_r
999 #define mALLOPt _mallopt_r
1000
1001 #define malloc_stats _malloc_stats_r
1002 #define malloc_trim _malloc_trim_r
1003 #define malloc_usable_size _malloc_usable_size_r
1004
1005 #define malloc_update_mallinfo __malloc_update_mallinfo
1006
1007 #define malloc_av_ __malloc_av_
1008 #define malloc_current_mallinfo __malloc_current_mallinfo
1009 #define malloc_max_sbrked_mem __malloc_max_sbrked_mem
1010 #define malloc_max_total_mem __malloc_max_total_mem
1011 #define malloc_sbrk_base __malloc_sbrk_base
1012 #define malloc_top_pad __malloc_top_pad
1013 #define malloc_trim_threshold __malloc_trim_threshold
1014
1015 #else /* ! INTERNAL_NEWLIB */
1016
1017 #define cALLOc calloc
1018 #define fREe free
1019 #define mALLOc malloc
1020 #define mEMALIGn memalign
1021 #define rEALLOc realloc
1022 #define vALLOc valloc
1023 #define pvALLOc pvalloc
1024 #define mALLINFo mallinfo
1025 #define mALLOPt mallopt
1026
1027 #endif /* ! INTERNAL_NEWLIB */
1028 #endif
1029
1030 /* Public routines */
1031
1032 #if __STD_C
1033
1034 Void_t* mALLOc(RARG size_t);
1035 void fREe(RARG Void_t*);
1036 Void_t* rEALLOc(RARG Void_t*, size_t);
1037 Void_t* mEMALIGn(RARG size_t, size_t);
1038 Void_t* vALLOc(RARG size_t);
1039 Void_t* pvALLOc(RARG size_t);
1040 Void_t* cALLOc(RARG size_t, size_t);
1041 void cfree(Void_t*);
1042 int malloc_trim(RARG size_t);
1043 size_t malloc_usable_size(RARG Void_t*);
1044 void malloc_stats(RONEARG);
1045 int mALLOPt(RARG int, int);
1046 struct mallinfo mALLINFo(RONEARG);
1047 #else
1048 Void_t* mALLOc();
1049 void fREe();
1050 Void_t* rEALLOc();
1051 Void_t* mEMALIGn();
1052 Void_t* vALLOc();
1053 Void_t* pvALLOc();
1054 Void_t* cALLOc();
1055 void cfree();
1056 int malloc_trim();
1057 size_t malloc_usable_size();
1058 void malloc_stats();
1059 int mALLOPt();
1060 struct mallinfo mALLINFo();
1061 #endif
1062
1063
1064 #ifdef __cplusplus
1065 }; /* end of extern "C" */
1066 #endif
1067
1068 /* ---------- To make a malloc.h, end cutting here ------------ */
1069
1070
1071 /*
1072 Emulation of sbrk for WIN32
1073 All code within the ifdef WIN32 is untested by me.
1074 */
1075
1076
1077 #ifdef WIN32
1078
1079 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
1080 ~(malloc_getpagesize-1))
1081
1082 /* reserve 64MB to ensure large contiguous space */
1083 #define RESERVED_SIZE (1024*1024*64)
1084 #define NEXT_SIZE (2048*1024)
1085 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
1086
1087 struct GmListElement;
1088 typedef struct GmListElement GmListElement;
1089
1090 struct GmListElement
1091 {
1092 GmListElement* next;
1093 void* base;
1094 };
1095
1096 static GmListElement* head = 0;
1097 static unsigned int gNextAddress = 0;
1098 static unsigned int gAddressBase = 0;
1099 static unsigned int gAllocatedSize = 0;
1100
1101 static
1102 GmListElement* makeGmListElement (void* bas)
1103 {
1104 GmListElement* this;
1105 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
1106 ASSERT (this);
1107 if (this)
1108 {
1109 this->base = bas;
1110 this->next = head;
1111 head = this;
1112 }
1113 return this;
1114 }
1115
1116 void gcleanup ()
1117 {
1118 BOOL rval;
1119 ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
1120 if (gAddressBase && (gNextAddress - gAddressBase))
1121 {
1122 rval = VirtualFree ((void*)gAddressBase,
1123 gNextAddress - gAddressBase,
1124 MEM_DECOMMIT);
1125 ASSERT (rval);
1126 }
1127 while (head)
1128 {
1129 GmListElement* next = head->next;
1130 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1131 ASSERT (rval);
1132 LocalFree (head);
1133 head = next;
1134 }
1135 }
1136
1137 static
1138 void* findRegion (void* start_address, unsigned long size)
1139 {
1140 MEMORY_BASIC_INFORMATION info;
1141 while ((unsigned long)start_address < TOP_MEMORY)
1142 {
1143 VirtualQuery (start_address, &info, sizeof (info));
1144 if (info.State != MEM_FREE)
1145 start_address = (char*)info.BaseAddress + info.RegionSize;
1146 else if (info.RegionSize >= size)
1147 return start_address;
1148 else
1149 start_address = (char*)info.BaseAddress + info.RegionSize;
1150 }
1151 return NULL;
1152
1153 }
1154
1155
1156 void* wsbrk (long size)
1157 {
1158 void* tmp;
1159 if (size > 0)
1160 {
1161 if (gAddressBase == 0)
1162 {
1163 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1164 gNextAddress = gAddressBase =
1165 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1166 MEM_RESERVE, PAGE_NOACCESS);
1167 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1168 gAllocatedSize))
1169 {
1170 long new_size = max (NEXT_SIZE, AlignPage (size));
1171 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1172 do
1173 {
1174 new_address = findRegion (new_address, new_size);
1175
1176 if (new_address == 0)
1177 return (void*)-1;
1178
1179 gAddressBase = gNextAddress =
1180 (unsigned int)VirtualAlloc (new_address, new_size,
1181 MEM_RESERVE, PAGE_NOACCESS);
1182 /* repeat in case of race condition: */
1183 /* the region that we found has been snagged */
1184 /* by another thread */
1185 }
1186 while (gAddressBase == 0);
1187
1188 ASSERT (new_address == (void*)gAddressBase);
1189
1190 gAllocatedSize = new_size;
1191
1192 if (!makeGmListElement ((void*)gAddressBase))
1193 return (void*)-1;
1194 }
1195 if ((size + gNextAddress) > AlignPage (gNextAddress))
1196 {
1197 void* res;
1198 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1199 (size + gNextAddress -
1200 AlignPage (gNextAddress)),
1201 MEM_COMMIT, PAGE_READWRITE);
1202 if (res == 0)
1203 return (void*)-1;
1204 }
1205 tmp = (void*)gNextAddress;
1206 gNextAddress = (unsigned int)tmp + size;
1207 return tmp;
1208 }
1209 else if (size < 0)
1210 {
1211 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1212 /* Trim by releasing the virtual memory */
1213 if (alignedGoal >= gAddressBase)
1214 {
1215 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1216 MEM_DECOMMIT);
1217 gNextAddress = gNextAddress + size;
1218 return (void*)gNextAddress;
1219 }
1220 else
1221 {
1222 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1223 MEM_DECOMMIT);
1224 gNextAddress = gAddressBase;
1225 return (void*)-1;
1226 }
1227 }
1228 else
1229 {
1230 return (void*)gNextAddress;
1231 }
1232 }
1233
1234 #endif
1235
1236
1237
1238 /*
1239 Type declarations
1240 */
1241
1242
1243 struct malloc_chunk
1244 {
1245 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1246 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1247 struct malloc_chunk* fd; /* double links -- used only if free. */
1248 struct malloc_chunk* bk;
1249 };
1250
1251 typedef struct malloc_chunk* mchunkptr;
1252
1253 /*
1254
1255 malloc_chunk details:
1256
1257 (The following includes lightly edited explanations by Colin Plumb.)
1258
1259 Chunks of memory are maintained using a `boundary tag' method as
1260 described in e.g., Knuth or Standish. (See the paper by Paul
1261 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1262 survey of such techniques.) Sizes of free chunks are stored both
1263 in the front of each chunk and at the end. This makes
1264 consolidating fragmented chunks into bigger chunks very fast. The
1265 size fields also hold bits representing whether chunks are free or
1266 in use.
1267
1268 An allocated chunk looks like this:
1269
1270
1271 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1272 | Size of previous chunk, if allocated | |
1273 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1274 | Size of chunk, in bytes |P|
1275 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1276 | User data starts here... .
1277 . .
1278 . (malloc_usable_size() bytes) .
1279 . |
1280 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1281 | Size of chunk |
1282 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1283
1284
1285 Where "chunk" is the front of the chunk for the purpose of most of
1286 the malloc code, but "mem" is the pointer that is returned to the
1287 user. "Nextchunk" is the beginning of the next contiguous chunk.
1288
1289 Chunks always begin on even word boundaries, so the mem portion
1290 (which is returned to the user) is also on an even word boundary, and
1291 thus double-word aligned.
1292
1293 Free chunks are stored in circular doubly-linked lists, and look like this:
1294
1295 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1296 | Size of previous chunk |
1297 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1298 `head:' | Size of chunk, in bytes |P|
1299 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1300 | Forward pointer to next chunk in list |
1301 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1302 | Back pointer to previous chunk in list |
1303 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1304 | Unused space (may be 0 bytes long) .
1305 . .
1306 . |
1307 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1308 `foot:' | Size of chunk, in bytes |
1309 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1310
1311 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1312 chunk size (which is always a multiple of two words), is an in-use
1313 bit for the *previous* chunk. If that bit is *clear*, then the
1314 word before the current chunk size contains the previous chunk
1315 size, and can be used to find the front of the previous chunk.
1316 (The very first chunk allocated always has this bit set,
1317 preventing access to non-existent (or non-owned) memory.)
1318
1319 Note that the `foot' of the current chunk is actually represented
1320 as the prev_size of the NEXT chunk. (This makes it easier to
1321 deal with alignments etc).
1322
1323 The two exceptions to all this are
1324
1325 1. The special chunk `top', which doesn't bother using the
1326 trailing size field since there is no
1327 next contiguous chunk that would have to index off it. (After
1328 initialization, `top' is forced to always exist. If it would
1329 become less than MINSIZE bytes long, it is replenished via
1330 malloc_extend_top.)
1331
1332 2. Chunks allocated via mmap, which have the second-lowest-order
1333 bit (IS_MMAPPED) set in their size fields. Because they are
1334 never merged or traversed from any other chunk, they have no
1335 foot size or inuse information.
1336
1337 Available chunks are kept in any of several places (all declared below):
1338
1339 * `av': An array of chunks serving as bin headers for consolidated
1340 chunks. Each bin is doubly linked. The bins are approximately
1341 proportionally (log) spaced. There are a lot of these bins
1342 (128). This may look excessive, but works very well in
1343 practice. All procedures maintain the invariant that no
1344 consolidated chunk physically borders another one. Chunks in
1345 bins are kept in size order, with ties going to the
1346 approximately least recently used chunk.
1347
1348 The chunks in each bin are maintained in decreasing sorted order by
1349 size. This is irrelevant for the small bins, which all contain
1350 the same-sized chunks, but facilitates best-fit allocation for
1351 larger chunks. (These lists are just sequential. Keeping them in
1352 order almost never requires enough traversal to warrant using
1353 fancier ordered data structures.) Chunks of the same size are
1354 linked with the most recently freed at the front, and allocations
1355 are taken from the back. This results in LRU or FIFO allocation
1356 order, which tends to give each chunk an equal opportunity to be
1357 consolidated with adjacent freed chunks, resulting in larger free
1358 chunks and less fragmentation.
1359
1360 * `top': The top-most available chunk (i.e., the one bordering the
1361 end of available memory) is treated specially. It is never
1362 included in any bin, is used only if no other chunk is
1363 available, and is released back to the system if it is very
1364 large (see M_TRIM_THRESHOLD).
1365
1366 * `last_remainder': A bin holding only the remainder of the
1367 most recently split (non-top) chunk. This bin is checked
1368 before other non-fitting chunks, so as to provide better
1369 locality for runs of sequentially allocated chunks.
1370
1371 * Implicitly, through the host system's memory mapping tables.
1372 If supported, requests greater than a threshold are usually
1373 serviced via calls to mmap, and then later released via munmap.
1374
1375 */
1376
1377
1378
1379
1380
1381
1382 /* sizes, alignments */
1383
1384 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1385 #ifndef MALLOC_ALIGNMENT
1386 #define MALLOC_ALIGN 8
1387 #define MALLOC_ALIGNMENT (SIZE_SZ < 4 ? 8 : (SIZE_SZ + SIZE_SZ))
1388 #else
1389 #define MALLOC_ALIGN MALLOC_ALIGNMENT
1390 #endif
1391 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1392 #define MINSIZE (sizeof(struct malloc_chunk))
1393
1394 /* conversion from malloc headers to user pointers, and back */
1395
1396 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1397 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1398
1399 /* pad request bytes into a usable size */
1400
1401 #define request2size(req) \
1402 (((unsigned long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1403 (unsigned long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
1404 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
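
/*
   For example, with a 4-byte INTERNAL_SIZE_T and 4-byte pointers
   (so SIZE_SZ == 4, 8-byte alignment, MINSIZE == 16):

     request2size(0) == 16 (the minimum-sized chunk)
     request2size(20) == 24 ((20 + 4 + 7) & ~7)
     request2size(28) == 32

   That is, SIZE_SZ bytes of size/status overhead are added, the total
   is rounded up to a multiple of the alignment, and MINSIZE is the floor.
*/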
1405
1406 /* Check if m has acceptable alignment */
1407
1408 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1409
1410
1411
1412
1413 /*
1414 Physical chunk operations
1415 */
1416
1417
1418 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1419
1420 #define PREV_INUSE 0x1
1421
1422 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1423
1424 #define IS_MMAPPED 0x2
1425
1426 /* Bits to mask off when extracting size */
1427
1428 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1429
1430
1431 /* Ptr to next physical malloc_chunk. */
1432
1433 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1434
1435 /* Ptr to previous physical malloc_chunk */
1436
1437 #define prev_chunk(p)\
1438 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1439
1440
1441 /* Treat space at ptr + offset as a chunk */
1442
1443 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
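
/*
   For example, for a normally allocated (non-mmapped) chunk p,

     next_chunk(p) == chunk_at_offset(p, chunksize(p))

   and, when the preceding chunk is free (prev_inuse(p) is clear),

     next_chunk(prev_chunk(p)) == p

   (chunksize() is defined under "Dealing with size fields" below.)
*/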
1444
1445
1446
1447
1448 /*
1449 Dealing with use bits
1450 */
1451
1452 /* extract p's inuse bit */
1453
1454 #define inuse(p)\
1455 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1456
1457 /* extract inuse bit of previous chunk */
1458
1459 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1460
1461 /* check for mmap()'ed chunk */
1462
1463 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1464
1465 /* set/clear chunk as in use without otherwise disturbing */
1466
1467 #define set_inuse(p)\
1468 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1469
1470 #define clear_inuse(p)\
1471 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1472
1473 /* check/set/clear inuse bits in known places */
1474
1475 #define inuse_bit_at_offset(p, s)\
1476 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1477
1478 #define set_inuse_bit_at_offset(p, s)\
1479 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1480
1481 #define clear_inuse_bit_at_offset(p, s)\
1482 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1483
1484
1485
1486
1487 /*
1488 Dealing with size fields
1489 */
1490
1491 /* Get size, ignoring use bits */
1492
1493 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1494
1495 /* Set size at head, without disturbing its use bit */
1496
1497 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1498
1499 /* Set size/use ignoring previous bits in header */
1500
1501 #define set_head(p, s) ((p)->size = (s))
1502
1503 /* Set size at footer (only when chunk is not in use) */
1504
1505 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1506
1507
1508
1509
1510
1511 /*
1512 Bins
1513
1514 The bins, `av_' are an array of pairs of pointers serving as the
1515 heads of (initially empty) doubly-linked lists of chunks, laid out
1516 in a way so that each pair can be treated as if it were in a
1517 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1518 and chunks are the same).
1519
1520 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1521 8 bytes apart. Larger bins are approximately logarithmically
1522 spaced. (See the table below.) The `av_' array is never mentioned
1523 directly in the code, but instead via bin access macros.
1524
1525 Bin layout:
1526
1527 64 bins of size 8
1528 32 bins of size 64
1529 16 bins of size 512
1530 8 bins of size 4096
1531 4 bins of size 32768
1532 2 bins of size 262144
1533 1 bin of size what's left
1534
1535 There is actually a little bit of slop in the numbers in bin_index
1536 for the sake of speed. This makes no difference elsewhere.
1537
1538 The special chunks `top' and `last_remainder' get their own bins,
1539 (this is implemented via yet more trickery with the av_ array),
1540 although `top' is never properly linked to its bin since it is
1541 always handled specially.
1542
1543 */
1544
1545 #ifdef SEPARATE_OBJECTS
1546 #define av_ malloc_av_
1547 #endif
1548
1549 #define NAV 128 /* number of bins */
1550
1551 typedef struct malloc_chunk* mbinptr;
1552
1553 /* access macros */
1554
1555 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1556 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1557 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1558
1559 /*
1560 The first 2 bins are never indexed. The corresponding av_ cells are instead
1561 used for bookkeeping. This is not to save space, but to simplify
1562 indexing, maintain locality, and avoid some initialization tests.
1563 */
1564
1565 #define top (bin_at(0)->fd) /* The topmost chunk */
1566 #define last_remainder (bin_at(1)) /* remainder from last split */
1567
1568
1569 /*
1570 Because top initially points to its own bin with initial
1571 zero size, thus forcing extension on the first malloc request,
1572 we avoid having any special code in malloc to check whether
1573 it even exists yet. But we still need to check in malloc_extend_top.
1574 */
1575
1576 #define initial_top ((mchunkptr)(bin_at(0)))
1577
1578 /* Helper macro to initialize bins */
1579
1580 #define IAV(i) bin_at(i), bin_at(i)
1581
1582 #ifdef DEFINE_MALLOC
1583 STATIC mbinptr av_[NAV * 2 + 2] = {
1584 0, 0,
1585 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1586 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1587 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1588 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1589 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1590 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1591 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1592 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1593 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1594 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1595 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1596 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1597 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1598 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1599 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1600 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1601 };
1602 #else
1603 extern mbinptr av_[NAV * 2 + 2];
1604 #endif
1605
1606
1607
1608 /* field-extraction macros */
1609
1610 #define first(b) ((b)->fd)
1611 #define last(b) ((b)->bk)
1612
1613 /*
1614 Indexing into bins
1615 */
1616
1617 #define bin_index(sz) \
1618 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1619 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1620 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1621 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1622 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1623 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1624 126)
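
/*
   For example (sizes in bytes):

     bin_index(40) == 5 (40 >> 3; a small bin)
     bin_index(600) == 65 (56 + (600 >> 6))
     bin_index(5000) == 100 (91 + (5000 >> 9))
*/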
1625 /*
1626 bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
1627 identically sized chunks. This is exploited in malloc.
1628 */
1629
1630 #define MAX_SMALLBIN_SIZE 512
1631 #define SMALLBIN_WIDTH 8
1632 #define SMALLBIN_WIDTH_BITS 3
1633 #define MAX_SMALLBIN ((MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1)
1634
1635 #define smallbin_index(sz) (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)
1636
1637 /*
1638 Requests are `small' if both the corresponding and the next bin are small
1639 */
1640
1641 #define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1642
1643
1644
1645 /*
1646 To help compensate for the large number of bins, a one-level index
1647 structure is used for bin-by-bin searching. `binblocks' is a
1648 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1649 have any (possibly) non-empty bins, so they can be skipped over
1650 all at once during traversals. The bits are NOT always
1651 cleared as soon as all bins in a block are empty, but instead only
1652 when all are noticed to be empty during traversal in malloc.
1653 */
1654
1655 #define BINBLOCKWIDTH 4 /* bins per block */
1656
1657 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1658
1659 /* bin<->block macros */
1660
1661 #define idx2binblock(ix) ((unsigned long)1 << ((ix) / BINBLOCKWIDTH))
1662 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1663 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
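
/*
   For example, bin index 37 lies in block 37 / BINBLOCKWIDTH == 9, so
   mark_binblock(37) sets bit ((unsigned long)1 << 9) of binblocks.
*/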
1664
1665
1666
1667
1668
1669 /* Other static bookkeeping data */
1670
1671 #ifdef SEPARATE_OBJECTS
1672 #define trim_threshold malloc_trim_threshold
1673 #define top_pad malloc_top_pad
1674 #define n_mmaps_max malloc_n_mmaps_max
1675 #define mmap_threshold malloc_mmap_threshold
1676 #define sbrk_base malloc_sbrk_base
1677 #define max_sbrked_mem malloc_max_sbrked_mem
1678 #define max_total_mem malloc_max_total_mem
1679 #define current_mallinfo malloc_current_mallinfo
1680 #define n_mmaps malloc_n_mmaps
1681 #define max_n_mmaps malloc_max_n_mmaps
1682 #define mmapped_mem malloc_mmapped_mem
1683 #define max_mmapped_mem malloc_max_mmapped_mem
1684 #endif
1685
1686 /* variables holding tunable values */
1687
1688 #ifdef DEFINE_MALLOC
1689
1690 STATIC unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1691 STATIC unsigned long top_pad = DEFAULT_TOP_PAD;
1692 #if HAVE_MMAP
1693 STATIC unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1694 STATIC unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1695 #endif
1696
1697 /* The first value returned from sbrk */
1698 STATIC char* sbrk_base = (char*)(-1);
1699
1700 /* The maximum memory obtained from system via sbrk */
1701 STATIC unsigned long max_sbrked_mem = 0;
1702
1703 /* The maximum via either sbrk or mmap */
1704 STATIC unsigned long max_total_mem = 0;
1705
1706 /* internal working copy of mallinfo */
1707 STATIC struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1708
1709 #if HAVE_MMAP
1710
1711 /* Tracking mmaps */
1712
1713 STATIC unsigned int n_mmaps = 0;
1714 STATIC unsigned int max_n_mmaps = 0;
1715 STATIC unsigned long mmapped_mem = 0;
1716 STATIC unsigned long max_mmapped_mem = 0;
1717
1718 #endif
1719
1720 #else /* ! DEFINE_MALLOC */
1721
1722 extern unsigned long trim_threshold;
1723 extern unsigned long top_pad;
1724 #if HAVE_MMAP
1725 extern unsigned int n_mmaps_max;
1726 extern unsigned long mmap_threshold;
1727 #endif
1728 extern char* sbrk_base;
1729 extern unsigned long max_sbrked_mem;
1730 extern unsigned long max_total_mem;
1731 extern struct mallinfo current_mallinfo;
1732 #if HAVE_MMAP
1733 extern unsigned int n_mmaps;
1734 extern unsigned int max_n_mmaps;
1735 extern unsigned long mmapped_mem;
1736 extern unsigned long max_mmapped_mem;
1737 #endif
1738
1739 #endif /* ! DEFINE_MALLOC */
1740
1741 /* The total memory obtained from system via sbrk */
1742 #define sbrked_mem (current_mallinfo.arena)
1743
1744
1745
1746 /*
1747 Debugging support
1748 */
1749
1750 #if DEBUG
1751
1752
1753 /*
1754 These routines make a number of assertions about the states
1755 of data structures that should be true at all times. If any
1756 are not true, it's very likely that a user program has somehow
1757 trashed memory. (It's also possible that there is a coding error
in malloc, in which case please report it!)
1759 */
1760
1761 #if __STD_C
static void do_check_chunk(mchunkptr p)
1763 #else
1764 static void do_check_chunk(p) mchunkptr p;
1765 #endif
1766 {
1767 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1768
1769 /* No checkable chunk is mmapped */
1770 assert(!chunk_is_mmapped(p));
1771
1772 /* Check for legal address ... */
1773 assert((char*)p >= sbrk_base);
1774 if (p != top)
1775 assert((char*)p + sz <= (char*)top);
1776 else
1777 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1778
1779 }
1780
1781
1782 #if __STD_C
static void do_check_free_chunk(mchunkptr p)
1784 #else
1785 static void do_check_free_chunk(p) mchunkptr p;
1786 #endif
1787 {
1788 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1789 mchunkptr next = chunk_at_offset(p, sz);
1790
1791 do_check_chunk(p);
1792
1793 /* Check whether it claims to be free ... */
1794 assert(!inuse(p));
1795
1796 /* Unless a special marker, must have OK fields */
1797 if ((long)sz >= (long)MINSIZE)
1798 {
1799 assert((sz & MALLOC_ALIGN_MASK) == 0);
1800 assert(aligned_OK(chunk2mem(p)));
1801 /* ... matching footer field */
1802 assert(next->prev_size == sz);
1803 /* ... and is fully consolidated */
1804 assert(prev_inuse(p));
1805 assert (next == top || inuse(next));
1806
1807 /* ... and has minimally sane links */
1808 assert(p->fd->bk == p);
1809 assert(p->bk->fd == p);
1810 }
1811 else /* markers are always of size SIZE_SZ */
1812 assert(sz == SIZE_SZ);
1813 }
1814
1815 #if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
1817 #else
1818 static void do_check_inuse_chunk(p) mchunkptr p;
1819 #endif
1820 {
1821 mchunkptr next = next_chunk(p);
1822 do_check_chunk(p);
1823
1824 /* Check whether it claims to be in use ... */
1825 assert(inuse(p));
1826
1827 /* ... and is surrounded by OK chunks.
1828 Since more things can be checked with free chunks than inuse ones,
1829 if an inuse chunk borders them and debug is on, it's worth doing them.
1830 */
1831 if (!prev_inuse(p))
1832 {
1833 mchunkptr prv = prev_chunk(p);
1834 assert(next_chunk(prv) == p);
1835 do_check_free_chunk(prv);
1836 }
1837 if (next == top)
1838 {
1839 assert(prev_inuse(next));
1840 assert(chunksize(next) >= MINSIZE);
1841 }
1842 else if (!inuse(next))
1843 do_check_free_chunk(next);
1844
1845 }
1846
1847 #if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1849 #else
1850 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1851 #endif
1852 {
1853 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1854 long room = long_sub_size_t(sz, s);
1855
1856 do_check_inuse_chunk(p);
1857
1858 /* Legal size ... */
1859 assert((long)sz >= (long)MINSIZE);
1860 assert((sz & MALLOC_ALIGN_MASK) == 0);
1861 assert(room >= 0);
1862 assert(room < (long)MINSIZE);
1863
1864 /* ... and alignment */
1865 assert(aligned_OK(chunk2mem(p)));
1866
1867
1868 /* ... and was allocated at front of an available chunk */
1869 assert(prev_inuse(p));
1870
1871 }
1872
1873
1874 #define check_free_chunk(P) do_check_free_chunk(P)
1875 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1876 #define check_chunk(P) do_check_chunk(P)
1877 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1878 #else
1879 #define check_free_chunk(P)
1880 #define check_inuse_chunk(P)
1881 #define check_chunk(P)
1882 #define check_malloced_chunk(P,N)
1883 #endif
1884
1885
1886
1887 /*
1888 Macro-based internal utilities
1889 */
1890
1891
1892 /*
1893 Linking chunks in bin lists.
1894 Call these only with variables, not arbitrary expressions, as arguments.
1895 */
1896
1897 /*
1898 Place chunk p of size s in its bin, in size order,
1899 putting it ahead of others of same size.
1900 */
1901
1902
1903 #define frontlink(P, S, IDX, BK, FD) \
1904 { \
1905 if (S < MAX_SMALLBIN_SIZE) \
1906 { \
1907 IDX = smallbin_index(S); \
1908 mark_binblock(IDX); \
1909 BK = bin_at(IDX); \
1910 FD = BK->fd; \
1911 P->bk = BK; \
1912 P->fd = FD; \
1913 FD->bk = BK->fd = P; \
1914 } \
1915 else \
1916 { \
1917 IDX = bin_index(S); \
1918 BK = bin_at(IDX); \
1919 FD = BK->fd; \
1920 if (FD == BK) mark_binblock(IDX); \
1921 else \
1922 { \
1923 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1924 BK = FD->bk; \
1925 } \
1926 P->bk = BK; \
1927 P->fd = FD; \
1928 FD->bk = BK->fd = P; \
1929 } \
1930 }
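/*
  Usage sketch: free (below) calls frontlink(p, sz, idx, bck, fwd)
  once a chunk is fully consolidated.  For a small chunk this is a
  constant-time push onto the front of its bin; for a larger chunk
  the while loop above finds the insertion point that keeps the bin
  sorted by size, with the new chunk placed ahead of others of the
  same size.
*/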
1931
1932
1933 /* take a chunk off a list */
1934
1935 #define unlink(P, BK, FD) \
1936 { \
1937 BK = P->bk; \
1938 FD = P->fd; \
1939 FD->bk = BK; \
1940 BK->fd = FD; \
1941 } \
1942
1943 /* Place p as the last remainder */
1944
1945 #define link_last_remainder(P) \
1946 { \
1947 last_remainder->fd = last_remainder->bk = P; \
1948 P->fd = P->bk = last_remainder; \
1949 }
1950
1951 /* Clear the last_remainder bin */
1952
1953 #define clear_last_remainder \
1954 (last_remainder->fd = last_remainder->bk = last_remainder)
1955
1956
1957
1958
1959
1960
1961 /* Routines dealing with mmap(). */
1962
1963 #if HAVE_MMAP
1964
1965 #ifdef DEFINE_MALLOC
1966
1967 #if __STD_C
static mchunkptr mmap_chunk(size_t size)
1969 #else
1970 static mchunkptr mmap_chunk(size) size_t size;
1971 #endif
1972 {
1973 size_t page_mask = malloc_getpagesize - 1;
1974 mchunkptr p;
1975
1976 #ifndef MAP_ANONYMOUS
1977 static int fd = -1;
1978 #endif
1979
1980 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1981
1982 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1983 * there is no following chunk whose prev_size field could be used.
1984 */
1985 size = (size + SIZE_SZ + page_mask) & ~page_mask;
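  /* For example (illustrative; assumes a 4096-byte page and 4-byte
     SIZE_SZ): a request for 10000 bytes becomes
     (10000 + 4 + 4095) & ~4095 == 12288, i.e. three whole pages. */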
1986
1987 #ifdef MAP_ANONYMOUS
1988 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1989 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1990 #else /* !MAP_ANONYMOUS */
1991 if (fd < 0)
1992 {
1993 fd = open("/dev/zero", O_RDWR);
1994 if(fd < 0) return 0;
1995 }
1996 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1997 #endif
1998
1999 if(p == (mchunkptr)-1) return 0;
2000
2001 n_mmaps++;
2002 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
2003
2004 /* We demand that eight bytes into a page must be 8-byte aligned. */
2005 assert(aligned_OK(chunk2mem(p)));
2006
2007 /* The offset to the start of the mmapped region is stored
2008 * in the prev_size field of the chunk; normally it is zero,
2009 * but that can be changed in memalign().
2010 */
2011 p->prev_size = 0;
2012 set_head(p, size|IS_MMAPPED);
2013
2014 mmapped_mem += size;
2015 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2016 max_mmapped_mem = mmapped_mem;
2017 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2018 max_total_mem = mmapped_mem + sbrked_mem;
2019 return p;
2020 }
2021
2022 #endif /* DEFINE_MALLOC */
2023
2024 #ifdef SEPARATE_OBJECTS
2025 #define munmap_chunk malloc_munmap_chunk
2026 #endif
2027
2028 #ifdef DEFINE_FREE
2029
2030 #if __STD_C
STATIC void munmap_chunk(mchunkptr p)
2032 #else
2033 STATIC void munmap_chunk(p) mchunkptr p;
2034 #endif
2035 {
2036 INTERNAL_SIZE_T size = chunksize(p);
2037 int ret;
2038
2039 assert (chunk_is_mmapped(p));
2040 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2041 assert((n_mmaps > 0));
2042 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2043
2044 n_mmaps--;
2045 mmapped_mem -= (size + p->prev_size);
2046
2047 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2048
2049 /* munmap returns non-zero on failure */
2050 assert(ret == 0);
2051 }
2052
2053 #else /* ! DEFINE_FREE */
2054
2055 #if __STD_C
2056 extern void munmap_chunk(mchunkptr);
2057 #else
2058 extern void munmap_chunk();
2059 #endif
2060
2061 #endif /* ! DEFINE_FREE */
2062
2063 #if HAVE_MREMAP
2064
2065 #ifdef DEFINE_REALLOC
2066
2067 #if __STD_C
static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2069 #else
2070 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2071 #endif
2072 {
2073 size_t page_mask = malloc_getpagesize - 1;
2074 INTERNAL_SIZE_T offset = p->prev_size;
2075 INTERNAL_SIZE_T size = chunksize(p);
2076 char *cp;
2077
2078 assert (chunk_is_mmapped(p));
2079 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2080 assert((n_mmaps > 0));
2081 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2082
2083 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
2084 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
2085
2086 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2087
2088 if (cp == (char *)-1) return 0;
2089
2090 p = (mchunkptr)(cp + offset);
2091
2092 assert(aligned_OK(chunk2mem(p)));
2093
2094 assert((p->prev_size == offset));
2095 set_head(p, (new_size - offset)|IS_MMAPPED);
2096
2097 mmapped_mem -= size + offset;
2098 mmapped_mem += new_size;
2099 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2100 max_mmapped_mem = mmapped_mem;
2101 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2102 max_total_mem = mmapped_mem + sbrked_mem;
2103 return p;
2104 }
2105
2106 #endif /* DEFINE_REALLOC */
2107
2108 #endif /* HAVE_MREMAP */
2109
2110 #endif /* HAVE_MMAP */
2111
2112
2113
2114
2115 #ifdef DEFINE_MALLOC
2116
2117 /*
2118 Extend the top-most chunk by obtaining memory from system.
2119 Main interface to sbrk (but see also malloc_trim).
2120 */
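/*
  Sketch of the sequence below (exact sizes depend on the page size
  and on whether foreign code also calls sbrk):

    1. sbrk(nb + top_pad + MINSIZE), rounded to a page multiple
       after the first call, yielding brk
    2. if brk contiguously extends the old top, just grow top
    3. otherwise align brk for chunk use, then sbrk a small
       `correction' so the *next* sbrk lands on a page boundary
    4. if a foreign sbrk intervened, seal off the old top with two
       SIZE_SZ fenceposts so it can never be coalesced with the new
       space, and free whatever usable part of it remains
*/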
2121
2122 #if __STD_C
static void malloc_extend_top(RARG INTERNAL_SIZE_T nb)
2124 #else
2125 static void malloc_extend_top(RARG nb) RDECL INTERNAL_SIZE_T nb;
2126 #endif
2127 {
2128 char* brk; /* return value from sbrk */
2129 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2130 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2131 int correction_failed = 0; /* whether we should relax the assertion */
2132 char* new_brk; /* return of 2nd sbrk call */
2133 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2134
2135 mchunkptr old_top = top; /* Record state of old top */
2136 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2137 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2138
2139 /* Pad request with top_pad plus minimal overhead */
2140
2141 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2142 unsigned long pagesz = malloc_getpagesize;
2143
2144 /* If not the first time through, round to preserve page boundary */
2145 /* Otherwise, we need to correct to a page size below anyway. */
2146 /* (We also correct below if an intervening foreign sbrk call.) */
2147
2148 if (sbrk_base != (char*)(-1))
2149 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2150
2151 brk = (char*)(MORECORE (sbrk_size));
2152
2153 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2154 if (brk == (char*)(MORECORE_FAILURE) ||
2155 (brk < old_end && old_top != initial_top))
2156 return;
2157
2158 sbrked_mem += sbrk_size;
2159
2160 if (brk == old_end /* can just add bytes to current top, unless
2161 previous correction failed */
2162 && ((POINTER_UINT)old_end & (pagesz - 1)) == 0)
2163 {
2164 top_size = sbrk_size + old_top_size;
2165 set_head(top, top_size | PREV_INUSE);
2166 }
2167 else
2168 {
2169 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2170 sbrk_base = brk;
2171 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2172 sbrked_mem += brk - (char*)old_end;
2173
2174 /* Guarantee alignment of first new chunk made from this space */
2175 front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2176 if (front_misalign > 0)
2177 {
2178 correction = (MALLOC_ALIGNMENT) - front_misalign;
2179 brk += correction;
2180 }
2181 else
2182 correction = 0;
2183
2184 /* Guarantee the next brk will be at a page boundary */
2185 correction += pagesz - ((POINTER_UINT)(brk + sbrk_size) & (pagesz - 1));
2186
2187 /* Allocate correction */
2188 new_brk = (char*)(MORECORE (correction));
2189 if (new_brk == (char*)(MORECORE_FAILURE))
2190 {
2191 correction = 0;
2192 correction_failed = 1;
2193 new_brk = brk;
2194 }
2195
2196 sbrked_mem += correction;
2197
2198 top = (mchunkptr)brk;
2199 top_size = new_brk - brk + correction;
2200 set_head(top, top_size | PREV_INUSE);
2201
2202 if (old_top != initial_top)
2203 {
2204
2205 /* There must have been an intervening foreign sbrk call. */
2206 /* A double fencepost is necessary to prevent consolidation */
2207
2208 /* If not enough space to do this, then user did something very wrong */
2209 if (old_top_size < MINSIZE)
2210 {
2211 set_head(top, PREV_INUSE); /* will force null return from malloc */
2212 return;
2213 }
2214
2215 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2216 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2217 set_head_size(old_top, old_top_size);
2218 chunk_at_offset(old_top, old_top_size )->size =
2219 SIZE_SZ|PREV_INUSE;
2220 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2221 SIZE_SZ|PREV_INUSE;
2222 /* If possible, release the rest. */
2223 if (old_top_size >= MINSIZE)
2224 fREe(RCALL chunk2mem(old_top));
2225 }
2226 }
2227
2228 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2229 max_sbrked_mem = sbrked_mem;
2230 #if HAVE_MMAP
2231 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2232 max_total_mem = mmapped_mem + sbrked_mem;
2233 #else
2234 if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem)
2235 max_total_mem = sbrked_mem;
2236 #endif
2237
2238 /* We always land on a page boundary */
2239 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0
2240 || correction_failed);
2241 }
2242
2243 #endif /* DEFINE_MALLOC */
2244
2245
2246 /* Main public routines */
2247
2248 #ifdef DEFINE_MALLOC
2249
2250 /*
Malloc Algorithm:
2252
2253 The requested size is first converted into a usable form, `nb'.
2254 This currently means to add 4 bytes overhead plus possibly more to
2255 obtain 8-byte alignment and/or to obtain a size of at least
2256 MINSIZE (currently 16 bytes), the smallest allocatable size.
2257 (All fits are considered `exact' if they are within MINSIZE bytes.)
2258
From there, the first of the following steps that succeeds is taken:
2260
2261 1. The bin corresponding to the request size is scanned, and if
2262 a chunk of exactly the right size is found, it is taken.
2263
2264 2. The most recently remaindered chunk is used if it is big
2265 enough. This is a form of (roving) first fit, used only in
2266 the absence of exact fits. Runs of consecutive requests use
2267 the remainder of the chunk used for the previous such request
2268 whenever possible. This limited use of a first-fit style
2269 allocation strategy tends to give contiguous chunks
2270 coextensive lifetimes, which improves locality and can reduce
2271 fragmentation in the long run.
2272
2273 3. Other bins are scanned in increasing size order, using a
2274 chunk big enough to fulfill the request, and splitting off
2275 any remainder. This search is strictly by best-fit; i.e.,
2276 the smallest (with ties going to approximately the least
2277 recently used) chunk that fits is selected.
2278
2279 4. If large enough, the chunk bordering the end of memory
2280 (`top') is split off. (This use of `top' is in accord with
2281 the best-fit search rule. In effect, `top' is treated as
2282 larger (and thus less well fitting) than any other available
2283 chunk since it can be extended to be as large as necessary
(up to system limitations).)
2285
2286 5. If the request size meets the mmap threshold and the
2287 system supports mmap, and there are few enough currently
2288 allocated mmapped regions, and a call to mmap succeeds,
2289 the request is allocated via direct memory mapping.
2290
2291 6. Otherwise, the top of memory is extended by
2292 obtaining more space from the system (normally using sbrk,
2293 but definable to anything else via the MORECORE macro).
2294 Memory is gathered from the system (in system page-sized
2295 units) in a way that allows chunks obtained across different
2296 sbrk calls to be consolidated, but does not require
2297 contiguous memory. Thus, it should be safe to intersperse
2298 mallocs with other sbrk calls.
2299
2300
All allocations are made from the `lowest' part of any found
2302 chunk. (The implementation invariant is that prev_inuse is
2303 always true of any allocated chunk; i.e., that each allocated
2304 chunk borders either a previously allocated and still in-use chunk,
2305 or the base of its memory arena.)
2306
2307 */
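/*
  A worked size conversion (illustrative; assumes 4-byte SIZE_SZ,
  8-byte alignment, and MINSIZE == 16 as described above):

    malloc(13)  ->  nb = request2size(13) = 24
                    (13 + 4 bytes overhead = 17, rounded up to 24)
    malloc(1)   ->  nb = 16, the minimum chunk

  Any free chunk of 24..39 bytes then counts as an `exact' fit for
  the first request, since the remainder would be under MINSIZE and
  too small to split off.
*/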
2308
2309 #if __STD_C
Void_t* mALLOc(RARG size_t bytes)
2311 #else
2312 Void_t* mALLOc(RARG bytes) RDECL size_t bytes;
2313 #endif
2314 {
2315 #ifdef MALLOC_PROVIDED
2316
  return malloc (bytes);
2318
2319 #else
2320
2321 mchunkptr victim; /* inspected/selected chunk */
2322 INTERNAL_SIZE_T victim_size; /* its size */
2323 int idx; /* index for bin traversal */
2324 mbinptr bin; /* associated bin */
2325 mchunkptr remainder; /* remainder from a split */
2326 long remainder_size; /* its size */
2327 int remainder_index; /* its bin index */
2328 unsigned long block; /* block traverser bit */
2329 int startidx; /* first bin of a traversed block */
2330 mchunkptr fwd; /* misc temp for linking */
2331 mchunkptr bck; /* misc temp for linking */
2332 mbinptr q; /* misc temp */
2333
2334 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2335
2336 /* Check for overflow and just fail, if so. */
2337 if (nb > INT_MAX || nb < bytes)
2338 return 0;
2339
2340 MALLOC_LOCK;
2341
2342 /* Check for exact match in a bin */
2343
2344 if (is_small_request(nb)) /* Faster version for small requests */
2345 {
2346 idx = smallbin_index(nb);
2347
2348 /* No traversal or size check necessary for small bins. */
2349
2350 q = bin_at(idx);
2351 victim = last(q);
2352
2353 #if MALLOC_ALIGN != 16
2354 /* Also scan the next one, since it would have a remainder < MINSIZE */
2355 if (victim == q)
2356 {
2357 q = next_bin(q);
2358 victim = last(q);
2359 }
2360 #endif
2361 if (victim != q)
2362 {
2363 victim_size = chunksize(victim);
2364 unlink(victim, bck, fwd);
2365 set_inuse_bit_at_offset(victim, victim_size);
2366 check_malloced_chunk(victim, nb);
2367 MALLOC_UNLOCK;
2368 return chunk2mem(victim);
2369 }
2370
2371 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2372
2373 }
2374 else
2375 {
2376 idx = bin_index(nb);
2377 bin = bin_at(idx);
2378
2379 for (victim = last(bin); victim != bin; victim = victim->bk)
2380 {
2381 victim_size = chunksize(victim);
2382 remainder_size = long_sub_size_t(victim_size, nb);
2383
2384 if (remainder_size >= (long)MINSIZE) /* too big */
2385 {
2386 --idx; /* adjust to rescan below after checking last remainder */
2387 break;
2388 }
2389
2390 else if (remainder_size >= 0) /* exact fit */
2391 {
2392 unlink(victim, bck, fwd);
2393 set_inuse_bit_at_offset(victim, victim_size);
2394 check_malloced_chunk(victim, nb);
2395 MALLOC_UNLOCK;
2396 return chunk2mem(victim);
2397 }
2398 }
2399
2400 ++idx;
2401
2402 }
2403
2404 /* Try to use the last split-off remainder */
2405
2406 if ( (victim = last_remainder->fd) != last_remainder)
2407 {
2408 victim_size = chunksize(victim);
2409 remainder_size = long_sub_size_t(victim_size, nb);
2410
2411 if (remainder_size >= (long)MINSIZE) /* re-split */
2412 {
2413 remainder = chunk_at_offset(victim, nb);
2414 set_head(victim, nb | PREV_INUSE);
2415 link_last_remainder(remainder);
2416 set_head(remainder, remainder_size | PREV_INUSE);
2417 set_foot(remainder, remainder_size);
2418 check_malloced_chunk(victim, nb);
2419 MALLOC_UNLOCK;
2420 return chunk2mem(victim);
2421 }
2422
2423 clear_last_remainder;
2424
2425 if (remainder_size >= 0) /* exhaust */
2426 {
2427 set_inuse_bit_at_offset(victim, victim_size);
2428 check_malloced_chunk(victim, nb);
2429 MALLOC_UNLOCK;
2430 return chunk2mem(victim);
2431 }
2432
2433 /* Else place in bin */
2434
2435 frontlink(victim, victim_size, remainder_index, bck, fwd);
2436 }
2437
2438 /*
2439 If there are any possibly nonempty big-enough blocks,
2440 search for best fitting chunk by scanning bins in blockwidth units.
2441 */
2442
2443 if ( (block = idx2binblock(idx)) <= binblocks)
2444 {
2445
2446 /* Get to the first marked block */
2447
2448 if ( (block & binblocks) == 0)
2449 {
2450 /* force to an even block boundary */
2451 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2452 block <<= 1;
2453 while ((block & binblocks) == 0)
2454 {
2455 idx += BINBLOCKWIDTH;
2456 block <<= 1;
2457 }
2458 }
2459
2460 /* For each possibly nonempty block ... */
2461 for (;;)
2462 {
2463 startidx = idx; /* (track incomplete blocks) */
2464 q = bin = bin_at(idx);
2465
2466 /* For each bin in this block ... */
2467 do
2468 {
2469 /* Find and use first big enough chunk ... */
2470
2471 for (victim = last(bin); victim != bin; victim = victim->bk)
2472 {
2473 victim_size = chunksize(victim);
2474 remainder_size = long_sub_size_t(victim_size, nb);
2475
2476 if (remainder_size >= (long)MINSIZE) /* split */
2477 {
2478 remainder = chunk_at_offset(victim, nb);
2479 set_head(victim, nb | PREV_INUSE);
2480 unlink(victim, bck, fwd);
2481 link_last_remainder(remainder);
2482 set_head(remainder, remainder_size | PREV_INUSE);
2483 set_foot(remainder, remainder_size);
2484 check_malloced_chunk(victim, nb);
2485 MALLOC_UNLOCK;
2486 return chunk2mem(victim);
2487 }
2488
2489 else if (remainder_size >= 0) /* take */
2490 {
2491 set_inuse_bit_at_offset(victim, victim_size);
2492 unlink(victim, bck, fwd);
2493 check_malloced_chunk(victim, nb);
2494 MALLOC_UNLOCK;
2495 return chunk2mem(victim);
2496 }
2497
2498 }
2499
2500 bin = next_bin(bin);
2501
2502 #if MALLOC_ALIGN == 16
2503 if (idx < MAX_SMALLBIN)
2504 {
2505 bin = next_bin(bin);
2506 ++idx;
2507 }
2508 #endif
2509 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2510
2511 /* Clear out the block bit. */
2512
2513 do /* Possibly backtrack to try to clear a partial block */
2514 {
2515 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2516 {
2517 binblocks &= ~block;
2518 break;
2519 }
2520 --startidx;
2521 q = prev_bin(q);
2522 } while (first(q) == q);
2523
2524 /* Get to the next possibly nonempty block */
2525
2526 if ( (block <<= 1) <= binblocks && (block != 0) )
2527 {
2528 while ((block & binblocks) == 0)
2529 {
2530 idx += BINBLOCKWIDTH;
2531 block <<= 1;
2532 }
2533 }
2534 else
2535 break;
2536 }
2537 }
2538
2539
2540 /* Try to use top chunk */
2541
2542 /* Require that there be a remainder, ensuring top always exists */
2543 remainder_size = long_sub_size_t(chunksize(top), nb);
2544 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2545 {
2546
2547 #if HAVE_MMAP
2548 /* If big and would otherwise need to extend, try to use mmap instead */
2549 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2550 (victim = mmap_chunk(nb)) != 0)
2551 {
2552 MALLOC_UNLOCK;
2553 return chunk2mem(victim);
2554 }
2555 #endif
2556
2557 /* Try to extend */
2558 malloc_extend_top(RCALL nb);
2559 remainder_size = long_sub_size_t(chunksize(top), nb);
2560 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2561 {
2562 MALLOC_UNLOCK;
2563 return 0; /* propagate failure */
2564 }
2565 }
2566
2567 victim = top;
2568 set_head(victim, nb | PREV_INUSE);
2569 top = chunk_at_offset(victim, nb);
2570 set_head(top, remainder_size | PREV_INUSE);
2571 check_malloced_chunk(victim, nb);
2572 MALLOC_UNLOCK;
2573 return chunk2mem(victim);
2574
2575 #endif /* MALLOC_PROVIDED */
2576 }
2577
2578 #endif /* DEFINE_MALLOC */
2579
2580 #ifdef DEFINE_FREE
2581
2582 /*
2583
free() algorithm:
2585
2586 cases:
2587
2588 1. free(0) has no effect.
2589
2. If the chunk was allocated via mmap, it is released via munmap().
2591
2592 3. If a returned chunk borders the current high end of memory,
2593 it is consolidated into the top, and if the total unused
2594 topmost memory exceeds the trim threshold, malloc_trim is
2595 called.
2596
2597 4. Other chunks are consolidated as they arrive, and
2598 placed in corresponding bins. (This includes the case of
2599 consolidating with the current `last_remainder').
2600
2601 */
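/*
  Coalescing example (illustrative): let chunks A, B, C be adjacent,
  with A free, B being freed, and C in use.  free(B) sees
  !prev_inuse(B), unlinks A from its bin, merges A and B into one
  chunk, and frontlinks the result; C's PREV_INUSE bit is cleared
  and its prev_size footer set so that a later free(C) can continue
  consolidating.  Had B bordered `top' instead, A and B would simply
  be folded into top, possibly triggering malloc_trim.
*/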
2602
2603
2604 #if __STD_C
void fREe(RARG Void_t* mem)
2606 #else
2607 void fREe(RARG mem) RDECL Void_t* mem;
2608 #endif
2609 {
2610 #ifdef MALLOC_PROVIDED
2611
2612 free (mem);
2613
2614 #else
2615
2616 mchunkptr p; /* chunk corresponding to mem */
2617 INTERNAL_SIZE_T hd; /* its head field */
2618 INTERNAL_SIZE_T sz; /* its size */
2619 int idx; /* its bin index */
2620 mchunkptr next; /* next contiguous chunk */
2621 INTERNAL_SIZE_T nextsz; /* its size */
2622 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2623 mchunkptr bck; /* misc temp for linking */
2624 mchunkptr fwd; /* misc temp for linking */
2625 int islr; /* track whether merging with last_remainder */
2626
2627 if (mem == 0) /* free(0) has no effect */
2628 return;
2629
2630 MALLOC_LOCK;
2631
2632 p = mem2chunk(mem);
2633 hd = p->size;
2634
2635 #if HAVE_MMAP
2636 if (hd & IS_MMAPPED) /* release mmapped memory. */
2637 {
2638 munmap_chunk(p);
2639 MALLOC_UNLOCK;
2640 return;
2641 }
2642 #endif
2643
2644 check_inuse_chunk(p);
2645
2646 sz = hd & ~PREV_INUSE;
2647 next = chunk_at_offset(p, sz);
2648 nextsz = chunksize(next);
2649
2650 if (next == top) /* merge with top */
2651 {
2652 sz += nextsz;
2653
2654 if (!(hd & PREV_INUSE)) /* consolidate backward */
2655 {
2656 prevsz = p->prev_size;
2657 p = chunk_at_offset(p, -prevsz);
2658 sz += prevsz;
2659 unlink(p, bck, fwd);
2660 }
2661
2662 set_head(p, sz | PREV_INUSE);
2663 top = p;
2664 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2665 malloc_trim(RCALL top_pad);
2666 MALLOC_UNLOCK;
2667 return;
2668 }
2669
2670 set_head(next, nextsz); /* clear inuse bit */
2671
2672 islr = 0;
2673
2674 if (!(hd & PREV_INUSE)) /* consolidate backward */
2675 {
2676 prevsz = p->prev_size;
2677 p = chunk_at_offset(p, -prevsz);
2678 sz += prevsz;
2679
2680 if (p->fd == last_remainder) /* keep as last_remainder */
2681 islr = 1;
2682 else
2683 unlink(p, bck, fwd);
2684 }
2685
2686 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2687 {
2688 sz += nextsz;
2689
2690 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2691 {
2692 islr = 1;
2693 link_last_remainder(p);
2694 }
2695 else
2696 unlink(next, bck, fwd);
2697 }
2698
2699
2700 set_head(p, sz | PREV_INUSE);
2701 set_foot(p, sz);
2702 if (!islr)
2703 frontlink(p, sz, idx, bck, fwd);
2704
2705 MALLOC_UNLOCK;
2706
2707 #endif /* MALLOC_PROVIDED */
2708 }
2709
2710 #endif /* DEFINE_FREE */
2711
2712 #ifdef DEFINE_REALLOC
2713
2714 /*
2715
2716 Realloc algorithm:
2717
2718 Chunks that were obtained via mmap cannot be extended or shrunk
2719 unless HAVE_MREMAP is defined, in which case mremap is used.
2720 Otherwise, if their reallocation is for additional space, they are
2721 copied. If for less, they are just left alone.
2722
2723 Otherwise, if the reallocation is for additional space, and the
2724 chunk can be extended, it is, else a malloc-copy-free sequence is
2725 taken. There are several different ways that a chunk could be
2726 extended. All are tried:
2727
2728 * Extending forward into following adjacent free chunk.
2729 * Shifting backwards, joining preceding adjacent space
2730 * Both shifting backwards and extending forward.
2731 * Extending into newly sbrked space
2732
2733 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2734 size argument of zero (re)allocates a minimum-sized chunk.
2735
2736 If the reallocation is for less space, and the new request is for
2737 a `small' (<512 bytes) size, then the newly unused space is lopped
2738 off and freed.
2739
2740 The old unix realloc convention of allowing the last-free'd chunk
2741 to be used as an argument to realloc is no longer supported.
2742 I don't know of any programs still relying on this feature,
2743 and allowing it would also allow too many other incorrect
2744 usages of realloc to be sensible.
2745
2746
2747 */
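/*
  Illustrative outcomes:

    p = malloc(100);  q = realloc(p, 140);

  If the chunk after p is free (or is `top') and big enough, q == p:
  the neighbor is absorbed and any excess split off again.  Otherwise
  realloc falls back to malloc + copy + free, so callers must always
  use the returned pointer rather than assume in-place growth.
*/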
2748
2749
2750 #if __STD_C
Void_t* rEALLOc(RARG Void_t* oldmem, size_t bytes)
2752 #else
2753 Void_t* rEALLOc(RARG oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
2754 #endif
2755 {
2756 #ifdef MALLOC_PROVIDED
2757
  return realloc (oldmem, bytes);
2759
2760 #else
2761
2762 INTERNAL_SIZE_T nb; /* padded request size */
2763
2764 mchunkptr oldp; /* chunk corresponding to oldmem */
2765 INTERNAL_SIZE_T oldsize; /* its size */
2766
2767 mchunkptr newp; /* chunk to return */
2768 INTERNAL_SIZE_T newsize; /* its size */
2769 Void_t* newmem; /* corresponding user mem */
2770
2771 mchunkptr next; /* next contiguous chunk after oldp */
2772 INTERNAL_SIZE_T nextsize; /* its size */
2773
2774 mchunkptr prev; /* previous contiguous chunk before oldp */
2775 INTERNAL_SIZE_T prevsize; /* its size */
2776
2777 mchunkptr remainder; /* holds split off extra space from newp */
2778 INTERNAL_SIZE_T remainder_size; /* its size */
2779
2780 mchunkptr bck; /* misc temp for linking */
2781 mchunkptr fwd; /* misc temp for linking */
2782
2783 #ifdef REALLOC_ZERO_BYTES_FREES
2784 if (bytes == 0) { fREe(RCALL oldmem); return 0; }
2785 #endif
2786
2787
2788 /* realloc of null is supposed to be same as malloc */
2789 if (oldmem == 0) return mALLOc(RCALL bytes);
2790
2791 MALLOC_LOCK;
2792
2793 newp = oldp = mem2chunk(oldmem);
2794 newsize = oldsize = chunksize(oldp);
2795
2796
2797 nb = request2size(bytes);
2798
2799 /* Check for overflow and just fail, if so. */
  if (nb > INT_MAX || nb < bytes)
    {
      MALLOC_UNLOCK;   /* the lock was taken above; release it before failing */
      return 0;
    }
2802
2803 #if HAVE_MMAP
2804 if (chunk_is_mmapped(oldp))
2805 {
2806 #if HAVE_MREMAP
2807 newp = mremap_chunk(oldp, nb);
2808 if(newp)
2809 {
2810 MALLOC_UNLOCK;
2811 return chunk2mem(newp);
2812 }
2813 #endif
2814 /* Note the extra SIZE_SZ overhead. */
2815 if(oldsize - SIZE_SZ >= nb)
2816 {
2817 MALLOC_UNLOCK;
2818 return oldmem; /* do nothing */
2819 }
2820 /* Must alloc, copy, free. */
2821 newmem = mALLOc(RCALL bytes);
2822 if (newmem == 0)
2823 {
2824 MALLOC_UNLOCK;
2825 return 0; /* propagate failure */
2826 }
2827 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2828 munmap_chunk(oldp);
2829 MALLOC_UNLOCK;
2830 return newmem;
2831 }
2832 #endif
2833
2834 check_inuse_chunk(oldp);
2835
2836 if ((long)(oldsize) < (long)(nb))
2837 {
2838
2839 /* Try expanding forward */
2840
2841 next = chunk_at_offset(oldp, oldsize);
2842 if (next == top || !inuse(next))
2843 {
2844 nextsize = chunksize(next);
2845
2846 /* Forward into top only if a remainder */
2847 if (next == top)
2848 {
2849 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2850 {
2851 newsize += nextsize;
2852 top = chunk_at_offset(oldp, nb);
2853 set_head(top, (newsize - nb) | PREV_INUSE);
2854 set_head_size(oldp, nb);
2855 MALLOC_UNLOCK;
2856 return chunk2mem(oldp);
2857 }
2858 }
2859
2860 /* Forward into next chunk */
2861 else if (((long)(nextsize + newsize) >= (long)(nb)))
2862 {
2863 unlink(next, bck, fwd);
2864 newsize += nextsize;
2865 goto split;
2866 }
2867 }
2868 else
2869 {
2870 next = 0;
2871 nextsize = 0;
2872 }
2873
2874 /* Try shifting backwards. */
2875
2876 if (!prev_inuse(oldp))
2877 {
2878 prev = prev_chunk(oldp);
2879 prevsize = chunksize(prev);
2880
2881 /* try forward + backward first to save a later consolidation */
2882
2883 if (next != 0)
2884 {
2885 /* into top */
2886 if (next == top)
2887 {
2888 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2889 {
2890 unlink(prev, bck, fwd);
2891 newp = prev;
2892 newsize += prevsize + nextsize;
2893 newmem = chunk2mem(newp);
2894 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2895 top = chunk_at_offset(newp, nb);
2896 set_head(top, (newsize - nb) | PREV_INUSE);
2897 set_head_size(newp, nb);
2898 MALLOC_UNLOCK;
2899 return newmem;
2900 }
2901 }
2902
2903 /* into next chunk */
2904 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2905 {
2906 unlink(next, bck, fwd);
2907 unlink(prev, bck, fwd);
2908 newp = prev;
2909 newsize += nextsize + prevsize;
2910 newmem = chunk2mem(newp);
2911 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2912 goto split;
2913 }
2914 }
2915
2916 /* backward only */
2917 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2918 {
2919 unlink(prev, bck, fwd);
2920 newp = prev;
2921 newsize += prevsize;
2922 newmem = chunk2mem(newp);
2923 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2924 goto split;
2925 }
2926 }
2927
2928 /* Must allocate */
2929
2930 newmem = mALLOc (RCALL bytes);
2931
2932 if (newmem == 0) /* propagate failure */
2933 {
2934 MALLOC_UNLOCK;
2935 return 0;
2936 }
2937
2938 /* Avoid copy if newp is next chunk after oldp. */
2939 /* (This can only happen when new chunk is sbrk'ed.) */
2940
2941 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2942 {
2943 newsize += chunksize(newp);
2944 newp = oldp;
2945 goto split;
2946 }
2947
2948 /* Otherwise copy, free, and exit */
2949 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2950 fREe(RCALL oldmem);
2951 MALLOC_UNLOCK;
2952 return newmem;
2953 }
2954
2955
2956 split: /* split off extra room in old or expanded chunk */
2957
2958 remainder_size = long_sub_size_t(newsize, nb);
2959
2960 if (remainder_size >= (long)MINSIZE) /* split off remainder */
2961 {
2962 remainder = chunk_at_offset(newp, nb);
2963 set_head_size(newp, nb);
2964 set_head(remainder, remainder_size | PREV_INUSE);
2965 set_inuse_bit_at_offset(remainder, remainder_size);
2966 fREe(RCALL chunk2mem(remainder)); /* let free() deal with it */
2967 }
2968 else
2969 {
2970 set_head_size(newp, newsize);
2971 set_inuse_bit_at_offset(newp, newsize);
2972 }
2973
2974 check_inuse_chunk(newp);
2975 MALLOC_UNLOCK;
2976 return chunk2mem(newp);
2977
2978 #endif /* MALLOC_PROVIDED */
2979 }
2980
2981 #endif /* DEFINE_REALLOC */
2982
2983 #ifdef DEFINE_MEMALIGN
2984
2985 /*
2986
2987 memalign algorithm:
2988
2989 memalign requests more than enough space from malloc, finds a spot
2990 within that chunk that meets the alignment request, and then
2991 possibly frees the leading and trailing space.
2992
2993 The alignment argument must be a power of two. This property is not
2994 checked by memalign, so misuse may result in random runtime errors.
2995
2996 8-byte alignment is guaranteed by normal malloc calls, so don't
2997 bother calling memalign with an argument of 8 or less.
2998
2999 Overreliance on memalign is a sure way to fragment space.
3000
3001 */
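/*
  Example (illustrative; assumes 4-byte SIZE_SZ): memalign(64, 100)
  computes nb = request2size(100) = 104 and calls
  malloc(104 + 64 + 16).  That worst-case padding guarantees some
  64-aligned spot inside the chunk with either zero or at least
  MINSIZE bytes of leading space; the leader and any trailing excess
  are then given back via free.
*/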
3002
3003
3004 #if __STD_C
Void_t* mEMALIGn(RARG size_t alignment, size_t bytes)
3006 #else
3007 Void_t* mEMALIGn(RARG alignment, bytes) RDECL size_t alignment; size_t bytes;
3008 #endif
3009 {
3010 INTERNAL_SIZE_T nb; /* padded request size */
3011 char* m; /* memory returned by malloc call */
3012 mchunkptr p; /* corresponding chunk */
3013 char* brk; /* alignment point within p */
3014 mchunkptr newp; /* chunk to return */
3015 INTERNAL_SIZE_T newsize; /* its size */
INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3017 mchunkptr remainder; /* spare room at end to split off */
3018 long remainder_size; /* its size */
3019
3020 /* If need less alignment than we give anyway, just relay to malloc */
3021
3022 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(RCALL bytes);
3023
3024 /* Otherwise, ensure that it is at least a minimum chunk size */
3025
3026 if (alignment < MINSIZE) alignment = MINSIZE;
3027
3028 /* Call malloc with worst case padding to hit alignment. */
3029
3030 nb = request2size(bytes);
3031
3032 /* Check for overflow. */
3033 if (nb > INT_MAX || nb < bytes)
3034 return 0;
3035
3036 m = (char*)(mALLOc(RCALL nb + alignment + MINSIZE));
3037
3038 if (m == 0) return 0; /* propagate failure */
3039
3040 MALLOC_LOCK;
3041
3042 p = mem2chunk(m);
3043
3044 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3045 {
3046 #if HAVE_MMAP
3047 if(chunk_is_mmapped(p))
3048 {
3049 MALLOC_UNLOCK;
3050 return chunk2mem(p); /* nothing more to do */
3051 }
3052 #endif
3053 }
3054 else /* misaligned */
3055 {
3056 /*
3057 Find an aligned spot inside chunk.
3058 Since we need to give back leading space in a chunk of at
3059 least MINSIZE, if the first calculation places us at
3060 a spot with less than MINSIZE leader, we can move to the
3061 next aligned spot -- we've allocated enough total room so that
3062 this is always possible.
3063 */
3064
3065 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
3066 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
3067
3068 newp = (mchunkptr)brk;
3069 leadsize = brk - (char*)(p);
3070 newsize = chunksize(p) - leadsize;
3071
3072 #if HAVE_MMAP
3073 if(chunk_is_mmapped(p))
3074 {
3075 newp->prev_size = p->prev_size + leadsize;
3076 set_head(newp, newsize|IS_MMAPPED);
3077 MALLOC_UNLOCK;
3078 return chunk2mem(newp);
3079 }
3080 #endif
3081
3082 /* give back leader, use the rest */
3083
3084 set_head(newp, newsize | PREV_INUSE);
3085 set_inuse_bit_at_offset(newp, newsize);
3086 set_head_size(p, leadsize);
3087 fREe(RCALL chunk2mem(p));
3088 p = newp;
3089
3090 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3091 }
3092
3093 /* Also give back spare room at the end */
3094
3095 remainder_size = long_sub_size_t(chunksize(p), nb);
3096
3097 if (remainder_size >= (long)MINSIZE)
3098 {
3099 remainder = chunk_at_offset(p, nb);
3100 set_head(remainder, remainder_size | PREV_INUSE);
3101 set_head_size(p, nb);
3102 fREe(RCALL chunk2mem(remainder));
3103 }
3104
3105 check_inuse_chunk(p);
3106 MALLOC_UNLOCK;
3107 return chunk2mem(p);
3108
3109 }
3110
3111 #endif /* DEFINE_MEMALIGN */
3112
3113 #ifdef DEFINE_VALLOC
3114
3115 /*
3116 valloc just invokes memalign with alignment argument equal
3117 to the page size of the system (or as near to this as can
3118 be figured out from all the includes/defines above.)
3119 */
3120
3121 #if __STD_C
Void_t* vALLOc(RARG size_t bytes)
3123 #else
3124 Void_t* vALLOc(RARG bytes) RDECL size_t bytes;
3125 #endif
3126 {
3127 return mEMALIGn (RCALL malloc_getpagesize, bytes);
3128 }
3129
3130 #endif /* DEFINE_VALLOC */
3131
3132 #ifdef DEFINE_PVALLOC
3133
3134 /*
pvalloc just invokes valloc with the request rounded up to the
nearest multiple of the system page size
3137 */
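/*
  E.g., with a 4096-byte page, pvalloc(5000) behaves like
  memalign(4096, 8192); the request is rounded up with
  (bytes + pagesize - 1) & ~(pagesize - 1).
*/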
3138
3139
3140 #if __STD_C
Void_t* pvALLOc(RARG size_t bytes)
3142 #else
3143 Void_t* pvALLOc(RARG bytes) RDECL size_t bytes;
3144 #endif
3145 {
3146 size_t pagesize = malloc_getpagesize;
3147 return mEMALIGn (RCALL pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3148 }
3149
3150 #endif /* DEFINE_PVALLOC */
3151
3152 #ifdef DEFINE_CALLOC
3153
3154 /*
3155
3156 calloc calls malloc, then zeroes out the allocated chunk.
3157
3158 */
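/*
  Roughly equivalent (modulo the overflow guard and the
  MORECORE_CLEARS optimization below, which skips zeroing memory
  that is known to be freshly sbrk'ed):

    void* p = malloc(n * elem_size);
    if (p) MALLOC_ZERO(p, n * elem_size);
*/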
3159
3160 #if __STD_C
Void_t* cALLOc(RARG size_t n, size_t elem_size)
3162 #else
3163 Void_t* cALLOc(RARG n, elem_size) RDECL size_t n; size_t elem_size;
3164 #endif
3165 {
3166 mchunkptr p;
3167 INTERNAL_SIZE_T csz;
3168
3169 INTERNAL_SIZE_T sz = n * elem_size;
3170
3171 #if MORECORE_CLEARS
3172 mchunkptr oldtop;
3173 INTERNAL_SIZE_T oldtopsize;
3174 #endif
  Void_t* mem;

  /* Guard against overflow in the n * elem_size multiplication;
     fail the request rather than return an undersized chunk. */
  if (n != 0 && sz / n != elem_size)
    return 0;
3176
/* check if malloc_extend_top was called, in which case we don't need to clear */
3178 #if MORECORE_CLEARS
3179 MALLOC_LOCK;
3180 oldtop = top;
3181 oldtopsize = chunksize(top);
3182 #endif
3183
3184 mem = mALLOc (RCALL sz);
3185
3186 if (mem == 0)
3187 {
3188 #if MORECORE_CLEARS
3189 MALLOC_UNLOCK;
3190 #endif
3191 return 0;
3192 }
3193 else
3194 {
3195 p = mem2chunk(mem);
3196
3197 /* Two optional cases in which clearing not necessary */
3198
3199
3200 #if HAVE_MMAP
3201 if (chunk_is_mmapped(p))
3202 {
3203 #if MORECORE_CLEARS
3204 MALLOC_UNLOCK;
3205 #endif
3206 return mem;
3207 }
3208 #endif
3209
3210 csz = chunksize(p);
3211
3212 #if MORECORE_CLEARS
3213 if (p == oldtop && csz > oldtopsize)
3214 {
3215 /* clear only the bytes from non-freshly-sbrked memory */
3216 csz = oldtopsize;
3217 }
3218 MALLOC_UNLOCK;
3219 #endif
3220
3221 MALLOC_ZERO(mem, csz - SIZE_SZ);
3222 return mem;
3223 }
3224 }
3225
3226 #endif /* DEFINE_CALLOC */
3227
3228 #if defined(DEFINE_CFREE) && !defined(__CYGWIN__)
3229
3230 /*
3231
3232 cfree just calls free. It is needed/defined on some systems
3233 that pair it with calloc, presumably for odd historical reasons.
3234
3235 */
3236
3237 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3238 #if !defined(INTERNAL_NEWLIB) || !defined(_REENT_ONLY)
3239 #if __STD_C
void cfree(Void_t *mem)
3241 #else
3242 void cfree(mem) Void_t *mem;
3243 #endif
3244 {
3245 #ifdef INTERNAL_NEWLIB
3246 fREe(_REENT, mem);
3247 #else
3248 fREe(mem);
3249 #endif
3250 }
3251 #endif
3252 #endif
3253
3254 #endif /* DEFINE_CFREE */
3255
3256 #ifdef DEFINE_FREE
3257
3258 /*
3259
3260 Malloc_trim gives memory back to the system (via negative
3261 arguments to sbrk) if there is unused memory at the `high' end of
3262 the malloc pool. You can call this after freeing large blocks of
3263 memory to potentially reduce the system-level memory requirements
3264 of a program. However, it cannot guarantee to reduce memory. Under
3265 some allocation patterns, some large free blocks of memory will be
3266 locked between two used chunks, so they cannot be given back to
3267 the system.
3268
3269 The `pad' argument to malloc_trim represents the amount of free
3270 trailing space to leave untrimmed. If this argument is zero,
3271 only the minimum amount of memory to maintain internal data
3272 structures will be left (one page or less). Non-zero arguments
3273 can be supplied to maintain enough trailing space to service
3274 future expected allocations without having to re-obtain memory
3275 from the system.
3276
3277 Malloc_trim returns 1 if it actually released any memory, else 0.
3278
3279 */
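/*
  Typical use (illustrative):

    free(big_buffer);        releases space into the malloc pool
    malloc_trim(0);          returns every trimmable page to the OS
    malloc_trim(64*1024);    same, but keeps 64K for future requests

  Each call returns 1 only if sbrk could actually be moved back.
*/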
3280
3281 #if __STD_C
int malloc_trim(RARG size_t pad)
3283 #else
3284 int malloc_trim(RARG pad) RDECL size_t pad;
3285 #endif
3286 {
3287 long top_size; /* Amount of top-most memory */
3288 long extra; /* Amount to release */
3289 char* current_brk; /* address returned by pre-check sbrk call */
3290 char* new_brk; /* address returned by negative sbrk call */
3291
3292 unsigned long pagesz = malloc_getpagesize;
3293
3294 MALLOC_LOCK;
3295
3296 top_size = chunksize(top);
3297 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3298
3299 if (extra < (long)pagesz) /* Not enough memory to release */
3300 {
3301 MALLOC_UNLOCK;
3302 return 0;
3303 }
3304
3305 else
3306 {
3307 /* Test to make sure no one else called sbrk */
3308 current_brk = (char*)(MORECORE (0));
3309 if (current_brk != (char*)(top) + top_size)
3310 {
3311 MALLOC_UNLOCK;
3312 return 0; /* Apparently we don't own memory; must fail */
3313 }
3314
3315 else
3316 {
3317 new_brk = (char*)(MORECORE (-extra));
3318
3319 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3320 {
3321 /* Try to figure out what we have */
3322 current_brk = (char*)(MORECORE (0));
3323 top_size = current_brk - (char*)top;
3324 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3325 {
3326 sbrked_mem = current_brk - sbrk_base;
3327 set_head(top, top_size | PREV_INUSE);
3328 }
3329 check_chunk(top);
3330 MALLOC_UNLOCK;
3331 return 0;
3332 }
3333
3334 else
3335 {
3336 /* Success. Adjust top accordingly. */
3337 set_head(top, (top_size - extra) | PREV_INUSE);
3338 sbrked_mem -= extra;
3339 check_chunk(top);
3340 MALLOC_UNLOCK;
3341 return 1;
3342 }
3343 }
3344 }
3345 }
3346
3347 #endif /* DEFINE_FREE */
3348
3349 #ifdef DEFINE_MALLOC_USABLE_SIZE
3350
3351 /*
3352 malloc_usable_size:
3353
3354 This routine tells you how many bytes you can actually use in an
3355 allocated chunk, which may be more than you requested (although
3356 often not). You can use this many bytes without worrying about
3357 overwriting other allocated objects. Not a particularly great
3358 programming practice, but still sometimes useful.
3359
3360 */
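/*
  Example (illustrative; assumes 4-byte SIZE_SZ): after
  p = malloc(13) the underlying chunk is 24 bytes, so
  malloc_usable_size(p) reports 24 - 4 == 20 bytes -- 7 more than
  were asked for.
*/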
3361
3362 #if __STD_C
size_t malloc_usable_size(RARG Void_t* mem)
3364 #else
3365 size_t malloc_usable_size(RARG mem) RDECL Void_t* mem;
3366 #endif
3367 {
3368 mchunkptr p;
3369 if (mem == 0)
3370 return 0;
3371 else
3372 {
3373 p = mem2chunk(mem);
3374 if(!chunk_is_mmapped(p))
3375 {
3376 if (!inuse(p)) return 0;
3377 #if DEBUG
3378 MALLOC_LOCK;
3379 check_inuse_chunk(p);
3380 MALLOC_UNLOCK;
3381 #endif
3382 return chunksize(p) - SIZE_SZ;
3383 }
3384 return chunksize(p) - 2*SIZE_SZ;
3385 }
3386 }
3387
3388 #endif /* DEFINE_MALLOC_USABLE_SIZE */
3389
3390 #ifdef DEFINE_MALLINFO
3391
3392 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3393
STATIC void malloc_update_mallinfo()
3395 {
3396 int i;
3397 mbinptr b;
3398 mchunkptr p;
3399 #if DEBUG
3400 mchunkptr q;
3401 #endif
3402
3403 INTERNAL_SIZE_T avail = chunksize(top);
3404 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3405
3406 for (i = 1; i < NAV; ++i)
3407 {
3408 b = bin_at(i);
3409 for (p = last(b); p != b; p = p->bk)
3410 {
3411 #if DEBUG
3412 check_free_chunk(p);
3413 for (q = next_chunk(p);
3414 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3415 q = next_chunk(q))
3416 check_inuse_chunk(q);
3417 #endif
3418 avail += chunksize(p);
3419 navail++;
3420 }
3421 }
3422
3423 current_mallinfo.ordblks = navail;
3424 current_mallinfo.uordblks = sbrked_mem - avail;
3425 current_mallinfo.fordblks = avail;
3426 #if HAVE_MMAP
3427 current_mallinfo.hblks = n_mmaps;
3428 current_mallinfo.hblkhd = mmapped_mem;
3429 #endif
3430 current_mallinfo.keepcost = chunksize(top);
3431
3432 }
3433
3434 #else /* ! DEFINE_MALLINFO */
3435
3436 #if __STD_C
3437 extern void malloc_update_mallinfo(void);
3438 #else
3439 extern void malloc_update_mallinfo();
3440 #endif
3441
3442 #endif /* ! DEFINE_MALLINFO */
3443
3444 #ifdef DEFINE_MALLOC_STATS
3445
3446 /*
3447
3448 malloc_stats:
3449
Prints on stderr the amount of space obtained from the system (both
3451 via sbrk and mmap), the maximum amount (which may be more than
3452 current if malloc_trim and/or munmap got called), the maximum
3453 number of simultaneous mmap regions used, and the current number
3454 of bytes allocated via malloc (or realloc, etc) but not yet
3455 freed. (Note that this is the number of bytes allocated, not the
3456 number requested. It will be larger than the number requested
3457 because of alignment and bookkeeping overhead.)
3458
3459 */
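/*
  Sample output (numbers purely illustrative):

    max system bytes =     266240
    system bytes =     135168
    in use bytes =      67392
    max mmap regions =          2
*/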
3460
3461 #if __STD_C
void malloc_stats(RONEARG)
3463 #else
3464 void malloc_stats(RONEARG) RDECL
3465 #endif
3466 {
3467 unsigned long local_max_total_mem;
3468 int local_sbrked_mem;
3469 struct mallinfo local_mallinfo;
3470 #if HAVE_MMAP
3471 unsigned long local_mmapped_mem, local_max_n_mmaps;
3472 #endif
3473 FILE *fp;
3474
3475 MALLOC_LOCK;
3476 malloc_update_mallinfo();
3477 local_max_total_mem = max_total_mem;
3478 local_sbrked_mem = sbrked_mem;
3479 local_mallinfo = current_mallinfo;
3480 #if HAVE_MMAP
3481 local_mmapped_mem = mmapped_mem;
3482 local_max_n_mmaps = max_n_mmaps;
3483 #endif
3484 MALLOC_UNLOCK;
3485
3486 #ifdef INTERNAL_NEWLIB
3487 _REENT_SMALL_CHECK_INIT(_stderr_r (reent_ptr));
3488 fp = _stderr_r(reent_ptr);
3489 #define fprintf fiprintf
3490 #else
3491 fp = stderr;
3492 #endif
3493
3494 fprintf(fp, "max system bytes = %10u\n",
3495 (unsigned int)(local_max_total_mem));
3496 #if HAVE_MMAP
3497 fprintf(fp, "system bytes = %10u\n",
3498 (unsigned int)(local_sbrked_mem + local_mmapped_mem));
3499 fprintf(fp, "in use bytes = %10u\n",
3500 (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
3501 #else
3502 fprintf(fp, "system bytes = %10u\n",
3503 (unsigned int)local_sbrked_mem);
3504 fprintf(fp, "in use bytes = %10u\n",
3505 (unsigned int)local_mallinfo.uordblks);
3506 #endif
3507 #if HAVE_MMAP
3508 fprintf(fp, "max mmap regions = %10u\n",
3509 (unsigned int)local_max_n_mmaps);
3510 #endif
3511 }
3512
3513 #endif /* DEFINE_MALLOC_STATS */
3514
3515 #ifdef DEFINE_MALLINFO
3516
3517 /*
3518 mallinfo returns a copy of updated current mallinfo.
3519 */
3520
3521 #if __STD_C
struct mallinfo mALLINFo(RONEARG)
3523 #else
3524 struct mallinfo mALLINFo(RONEARG) RDECL
3525 #endif
3526 {
3527 struct mallinfo ret;
3528
3529 MALLOC_LOCK;
3530 malloc_update_mallinfo();
3531 ret = current_mallinfo;
3532 MALLOC_UNLOCK;
3533 return ret;
3534 }
3535
3536 #endif /* DEFINE_MALLINFO */
3537
3538 #ifdef DEFINE_MALLOPT
3539
3540 /*
3541 mallopt:
3542
3543 mallopt is the general SVID/XPG interface to tunable parameters.
3544 The format is to provide a (parameter-number, parameter-value) pair.
3545 mallopt then sets the corresponding parameter to the argument
3546 value if it can (i.e., so long as the value is meaningful),
3547 and returns 1 if successful else 0.
3548
3549 See descriptions of tunable parameters above.
3550
3551 */
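/*
  Example (the M_* parameter numbers are defined with the tunable
  parameters earlier in this file):

    mallopt(M_TRIM_THRESHOLD, 256*1024);  trim less eagerly
    mallopt(M_TOP_PAD, 64*1024);          pad each sbrk request
    mallopt(M_MMAP_MAX, 0);               disable use of mmap
*/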
3552
3553 #if __STD_C
int mALLOPt(RARG int param_number, int value)
3555 #else
3556 int mALLOPt(RARG param_number, value) RDECL int param_number; int value;
3557 #endif
3558 {
3559 MALLOC_LOCK;
3560 switch(param_number)
3561 {
3562 case M_TRIM_THRESHOLD:
3563 trim_threshold = value; MALLOC_UNLOCK; return 1;
3564 case M_TOP_PAD:
3565 top_pad = value; MALLOC_UNLOCK; return 1;
3566 case M_MMAP_THRESHOLD:
3567 #if HAVE_MMAP
3568 mmap_threshold = value;
3569 #endif
3570 MALLOC_UNLOCK;
3571 return 1;
3572 case M_MMAP_MAX:
3573 #if HAVE_MMAP
3574 n_mmaps_max = value; MALLOC_UNLOCK; return 1;
3575 #else
3576 MALLOC_UNLOCK; return value == 0;
3577 #endif
3578
3579 default:
3580 MALLOC_UNLOCK;
3581 return 0;
3582 }
3583 }
3584
3585 #endif /* DEFINE_MALLOPT */
3586
3587 /*
3588
3589 History:
3590
3591 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3592 * Added pvalloc, as recommended by H.J. Liu
3593 * Added 64bit pointer support mainly from Wolfram Gloger
3594 * Added anonymously donated WIN32 sbrk emulation
3595 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3596 * malloc_extend_top: fix mask error that caused wastage after
3597 foreign sbrks
3598 * Add linux mremap support code from HJ Liu
3599
3600 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3601 * Integrated most documentation with the code.
3602 * Add support for mmap, with help from
3603 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3604 * Use last_remainder in more cases.
3605 * Pack bins using idea from colin@nyx10.cs.du.edu
* Use ordered bins instead of best-fit threshold
3607 * Eliminate block-local decls to simplify tracing and debugging.
3608 * Support another case of realloc via move into top
* Fix error occurring when initial sbrk_base not word-aligned.
3610 * Rely on page size for units instead of SBRK_UNIT to
3611 avoid surprises about sbrk alignment conventions.
3612 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3613 (raymond@es.ele.tue.nl) for the suggestion.
3614 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3615 * More precautions for cases where other routines call sbrk,
3616 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3617 * Added macros etc., allowing use in linux libc from
3618 H.J. Lu (hjl@gnu.ai.mit.edu)
3619 * Inverted this history list
3620
3621 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3622 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3623 * Removed all preallocation code since under current scheme
3624 the work required to undo bad preallocations exceeds
3625 the work saved in good cases for most test programs.
3626 * No longer use return list or unconsolidated bins since
3627 no scheme using them consistently outperforms those that don't
3628 given above changes.
3629 * Use best fit for very large chunks to prevent some worst-cases.
3630 * Added some support for debugging
3631
3632 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3633 * Removed footers when chunks are in use. Thanks to
3634 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3635
3636 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3637 * Added malloc_trim, with help from Wolfram Gloger
3638 (wmglo@Dent.MED.Uni-Muenchen.DE).
3639
3640 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3641
3642 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3643 * realloc: try to expand in both directions
3644 * malloc: swap order of clean-bin strategy;
3645 * realloc: only conditionally expand backwards
3646 * Try not to scavenge used bins
3647 * Use bin counts as a guide to preallocation
3648 * Occasionally bin return list chunks in first scan
3649 * Add a few optimizations from colin@nyx10.cs.du.edu
3650
3651 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3652 * faster bin computation & slightly different binning
3653 * merged all consolidations to one part of malloc proper
3654 (eliminating old malloc_find_space & malloc_clean_bin)
3655 * Scan 2 returns chunks (not just 1)
3656 * Propagate failure in realloc if malloc returns 0
3657 * Add stuff to allow compilation on non-ANSI compilers
3658 from kpv@research.att.com
3659
3660 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3661 * removed potential for odd address access in prev_chunk
3662 * removed dependency on getpagesize.h
3663 * misc cosmetics and a bit more internal documentation
3664 * anticosmetics: mangled names in macros to evade debugger strangeness
3665 * tested on sparc, hp-700, dec-mips, rs6000
3666 with gcc & native cc (hp, dec only) allowing
3667 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3668
3669 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3670 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3671 structure of old version, but most details differ.)
3672
3673 */
3674 #endif
3675