/*!

**A fast bump allocation arena for Rust.**

[![](https://docs.rs/bumpalo/badge.svg)](https://docs.rs/bumpalo/)
[![](https://img.shields.io/crates/v/bumpalo.svg)](https://crates.io/crates/bumpalo)
[![](https://img.shields.io/crates/d/bumpalo.svg)](https://crates.io/crates/bumpalo)
[![Build Status](https://dev.azure.com/fitzgen/bumpalo/_apis/build/status/fitzgen.bumpalo?branchName=master)](https://dev.azure.com/fitzgen/bumpalo/_build/latest?definitionId=2&branchName=master)

![](https://github.com/fitzgen/bumpalo/raw/master/bumpalo.png)

## Bump Allocation

Bump allocation is a fast but limited approach to allocation. We have a chunk
of memory, and we maintain a pointer within that memory. Whenever we allocate
an object, we do a quick check that we have enough capacity left in our chunk
for the object, and then we update the pointer by the object's size. *That's
it!*
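
As a rough sketch of the core idea (illustration only; bumpalo's real allocator
bumps *downward* from the end of each chunk and also handles alignment and
chunk growth), a bump allocator is little more than:

```
// A toy, upward-bumping allocator over a single fixed buffer.
struct ToyBump {
    chunk: Vec<u8>, // the chunk of memory
    next: usize,    // index of the next free byte
}

impl ToyBump {
    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        // Quick capacity check...
        if size > self.chunk.len() - self.next {
            return None;
        }
        // ...then bump the pointer by the object's size. That's it!
        let start = self.next;
        self.next += size;
        Some(&mut self.chunk[start..start + size])
    }
}
```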

The disadvantage of bump allocation is that there is no general way to
deallocate individual objects or reclaim the memory region for a
no-longer-in-use object.

These trade-offs make bump allocation well-suited for *phase-oriented*
allocations: a group of objects that are all allocated during the same program
phase, used, and then deallocated together as a group.

## Deallocation en Masse, but No `Drop`

To deallocate all the objects in the arena at once, we can simply reset the bump
pointer back to the start of the arena's memory chunk. This makes mass
deallocation *extremely* fast, but allocated objects' `Drop` implementations are
not invoked.

> **However:** [`bumpalo::boxed::Box<T>`][crate::boxed::Box] can be used to wrap
> `T` values allocated in the `Bump` arena, and calls `T`'s `Drop`
> implementation when the `Box<T>` wrapper goes out of scope. This is similar to
> how [`std::boxed::Box`] works, except without deallocating its backing memory.

[`std::boxed::Box`]: https://doc.rust-lang.org/std/boxed/struct.Box.html

## What happens when the memory chunk is full?

This implementation will allocate a new memory chunk from the global allocator
and then start bump allocating into this new memory chunk.
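
For instance (the exact chunk sizes and growth strategy are internal details,
so the numbers below are illustrative):

```
use bumpalo::Bump;

// Start with a deliberately small arena...
let bump = Bump::with_capacity(256);

// ...and allocate more than its initial capacity. Whenever the current chunk
// fills up, the arena transparently grabs a new chunk from the global
// allocator and keeps going.
for i in 0..10_000u64 {
    bump.alloc(i);
}

assert!(bump.allocated_bytes() >= 10_000 * std::mem::size_of::<u64>());
```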

## Example

```
use bumpalo::Bump;
use std::u64;

struct Doggo {
    cuteness: u64,
    age: u8,
    scritches_required: bool,
}

// Create a new arena to bump allocate into.
let bump = Bump::new();

// Allocate values into the arena.
let scooter = bump.alloc(Doggo {
    cuteness: u64::max_value(),
    age: 8,
    scritches_required: true,
});

assert!(scooter.scritches_required);
```

## Collections

When the `"collections"` cargo feature is enabled, forks of some of the `std`
library's collection types are available in the `collections` module. These
collection types are modified to allocate their space inside `bumpalo::Bump`
arenas.

```rust
# #[cfg(feature = "collections")]
# {
use bumpalo::{Bump, collections::Vec};

// Create a new bump arena.
let bump = Bump::new();

// Create a vector of integers whose storage is backed by the bump arena. The
// vector cannot outlive its backing arena, and this property is enforced with
// Rust's lifetime rules.
let mut v = Vec::new_in(&bump);

// Push a bunch of integers onto `v`!
for i in 0..100 {
    v.push(i);
}
# }
```

Eventually [all `std` collection types will be parameterized by an
allocator](https://github.com/rust-lang/rust/issues/42774) and we can remove
this `collections` module and use the `std` versions.

## `bumpalo::boxed::Box`

When the `"boxed"` cargo feature is enabled, a fork of the `std::boxed::Box`
type is available in the `boxed` module. This `Box` type is modified to
allocate its space inside `bumpalo::Bump` arenas.

**A `Box<T>` runs `T`'s drop implementation when the `Box<T>` is dropped.** You
can use this to work around the fact that `Bump` does not drop values allocated
in its space itself.

```rust
# #[cfg(feature = "boxed")]
# {
use bumpalo::{Bump, boxed::Box};
use std::sync::atomic::{AtomicUsize, Ordering};

static NUM_DROPPED: AtomicUsize = AtomicUsize::new(0);

struct CountDrops;

impl Drop for CountDrops {
    fn drop(&mut self) {
        NUM_DROPPED.fetch_add(1, Ordering::SeqCst);
    }
}

// Create a new bump arena.
let bump = Bump::new();

// Create a `CountDrops` inside the bump arena.
let mut c = Box::new_in(CountDrops, &bump);

// No `CountDrops` have been dropped yet.
assert_eq!(NUM_DROPPED.load(Ordering::SeqCst), 0);

// Drop our `Box<CountDrops>`.
drop(c);

// Its `Drop` implementation was run, and so `NUM_DROPPED` has been incremented.
assert_eq!(NUM_DROPPED.load(Ordering::SeqCst), 1);
# }
```

## `#![no_std]` Support

Bumpalo is a `no_std` crate. It depends only on the `alloc` and `core` crates.

 */

#![deny(missing_debug_implementations)]
#![deny(missing_docs)]
#![no_std]

#[doc(hidden)]
pub extern crate alloc as core_alloc;

#[cfg(feature = "boxed")]
pub mod boxed;
#[cfg(feature = "collections")]
pub mod collections;

mod alloc;

use core::cell::Cell;
use core::iter;
use core::marker::PhantomData;
use core::mem;
use core::ptr::{self, NonNull};
use core::slice;
use core::str;
use core_alloc::alloc::{alloc, dealloc, Layout};

/// An arena to bump allocate into.
///
/// ## No `Drop`s
///
/// Objects that are bump-allocated will never have their `Drop` implementation
/// called &mdash; unless you do it manually yourself. This makes it relatively
/// easy to leak memory or other resources.
///
/// If you have a type which internally manages
///
/// * an allocation from the global heap (e.g. `Vec<T>`),
/// * open file descriptors (e.g. `std::fs::File`), or
/// * any other resource that must be cleaned up (e.g. an `mmap`)
///
/// and relies on its `Drop` implementation to clean up the internal resource,
/// then if you allocate that type with a `Bump`, you need to find a new way to
/// clean up after it yourself.
///
/// Potential solutions are:
///
/// * calling [`drop_in_place`][drop_in_place] or using
///   [`std::mem::ManuallyDrop`][manuallydrop] to manually drop these types,
/// * using [`bumpalo::collections::Vec`] instead of [`std::vec::Vec`],
/// * using [`bumpalo::boxed::Box::new_in`] instead of [`Bump::alloc`],
///   which will drop wrapped values similarly to [`std::boxed::Box`], or
/// * simply avoiding allocating these problematic types within a `Bump`.
///
/// Note that not calling `Drop` is memory safe! Destructors are never
/// guaranteed to run in Rust, so you can't rely on them for enforcing memory
/// safety.
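///
/// For example, here is a minimal sketch of manually dropping a bump-allocated
/// value that owns a heap allocation (a `String`):
///
/// ```
/// let bump = bumpalo::Bump::new();
/// let s = bump.alloc(String::from("hello"));
///
/// // ... use `s` ...
///
/// // `Bump` will not run `String`'s destructor for us, so run it manually
/// // once we are done with the value.
/// unsafe {
///     std::ptr::drop_in_place(s);
/// }
/// ```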
///
/// [drop_in_place]: https://doc.rust-lang.org/std/ptr/fn.drop_in_place.html
/// [manuallydrop]: https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html
/// [`bumpalo::collections::Vec`]: ./collections/struct.Vec.html
/// [`std::vec::Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html
/// [`bumpalo::boxed::Box::new_in`]: ./boxed/struct.Box.html#method.new_in
/// [`Bump::alloc`]: ./struct.Bump.html#method.alloc
/// [`std::boxed::Box`]: https://doc.rust-lang.org/std/boxed/struct.Box.html
///
/// ## Example
///
/// ```
/// use bumpalo::Bump;
///
/// // Create a new bump arena.
/// let bump = Bump::new();
///
/// // Allocate values into the arena.
/// let forty_two = bump.alloc(42);
/// assert_eq!(*forty_two, 42);
///
/// // Mutable references are returned from allocation.
/// let mut s = bump.alloc("bumpalo");
/// *s = "the bump allocator; and also is a buffalo";
/// ```
#[derive(Debug)]
pub struct Bump {
    // The current chunk we are bump allocating within.
    current_chunk_footer: Cell<NonNull<ChunkFooter>>,
}
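
// Layout of a single chunk managed by a `Bump`: the bump-allocated data lives
// at the start of the chunk and the `ChunkFooter` sits at the very end.
//
//     | data ...................... | ChunkFooter |
//     ^                    ^        ^
//     chunk start          finger   footer (end of usable space)
//
// Allocation moves the bump finger *downward*, from the footer towards `data`.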

#[repr(C)]
#[derive(Debug)]
struct ChunkFooter {
    // Pointer to the start of this chunk allocation. This footer is always at
    // the end of the chunk.
    data: NonNull<u8>,

    // The layout of this chunk's allocation.
    layout: Layout,

    // Link to the previous chunk, if any.
    prev: Cell<Option<NonNull<ChunkFooter>>>,

    // Bump allocation finger that is always in the range `self.data..=self`.
    ptr: Cell<NonNull<u8>>,
}

impl Default for Bump {
    fn default() -> Bump {
        Bump::new()
    }
}

impl Drop for Bump {
    fn drop(&mut self) {
        unsafe {
            dealloc_chunk_list(Some(self.current_chunk_footer.get()));
        }
    }
}

#[inline]
unsafe fn dealloc_chunk_list(mut footer: Option<NonNull<ChunkFooter>>) {
    while let Some(f) = footer {
        footer = f.as_ref().prev.get();
        dealloc(f.as_ref().data.as_ptr(), f.as_ref().layout);
    }
}

// A `Bump` is safe to send between threads because nothing aliases its owned
// chunks until you start allocating from it. But by the time you allocate from
// it, the returned references to allocations borrow the `Bump` and therefore
// prevent sending the `Bump` across threads until the borrows end.
unsafe impl Send for Bump {}

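// Rounds `n` up to the nearest multiple of `divisor`, which must be a power of
// two; e.g. `round_up_to(13, 8) == Some(16)`. Returns `None` if the addition
// overflows.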
#[inline]
pub(crate) fn round_up_to(n: usize, divisor: usize) -> Option<usize> {
    debug_assert!(divisor > 0);
    debug_assert!(divisor.is_power_of_two());
    Some(n.checked_add(divisor - 1)? & !(divisor - 1))
}

// After this point, we try to hit page boundaries instead of powers of 2
const PAGE_STRATEGY_CUTOFF: usize = 0x1000;

// We only support alignments of up to 16 bytes for iter_allocated_chunks.
const SUPPORTED_ITER_ALIGNMENT: usize = 16;
const CHUNK_ALIGN: usize = SUPPORTED_ITER_ALIGNMENT;
const FOOTER_SIZE: usize = mem::size_of::<ChunkFooter>();

// Assert that the alignment of `ChunkFooter` is at most the supported chunk
// alignment. This gives a compile-time error if that is not the case.
const _FOOTER_ALIGN_ASSERTION: bool = mem::align_of::<ChunkFooter>() <= CHUNK_ALIGN;
const _: [(); _FOOTER_ALIGN_ASSERTION as usize] = [()];

// Maximum typical overhead per allocation imposed by allocators.
const MALLOC_OVERHEAD: usize = 16;

// This is the overhead from malloc, the footer, and alignment. For instance,
// if we want to request a chunk of memory that has at least X bytes usable for
// allocations (where X is aligned to CHUNK_ALIGN), then we expect that, after
// adding the footer, malloc overhead, and alignment, the chunk of memory the
// allocator actually sets aside for us is X + OVERHEAD rounded up to the
// nearest suitable size boundary.
const OVERHEAD: usize = (MALLOC_OVERHEAD + FOOTER_SIZE + (CHUNK_ALIGN - 1)) & !(CHUNK_ALIGN - 1);

// Choose a relatively small default initial chunk size, since we double chunk
// sizes as we grow bump arenas to amortize costs of hitting the global
// allocator.
const FIRST_ALLOCATION_GOAL: usize = 1 << 9;

// The actual size of the first allocation is going to be a bit smaller
// than the goal. We need to make room for the footer, and we also need to
// take the alignment into account.
const DEFAULT_CHUNK_SIZE_WITHOUT_FOOTER: usize = FIRST_ALLOCATION_GOAL - OVERHEAD;
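
// As a concrete illustration (on a typical 64-bit target, where the footer is
// five words = 40 bytes): OVERHEAD = (16 + 40 + 15) & !15 = 64, so the first
// chunk has FIRST_ALLOCATION_GOAL - OVERHEAD = 512 - 64 = 448 bytes usable for
// bump allocation.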

#[inline]
fn layout_for_array<T>(len: usize) -> Option<Layout> {
    // TODO: use Layout::array once the rust feature `alloc_layout_extra`
    // gets stabilized.
    //
    // According to https://doc.rust-lang.org/reference/type-layout.html#size-and-alignment
    // the size of a value is always a multiple of its alignment. But that does not seem to
    // match with https://doc.rust-lang.org/std/alloc/struct.Layout.html#method.from_size_align
    //
    // Let's be on the safe side and round the size up to the alignment in any case.
    //
    // An interesting question is whether there needs to be padding at the end of
    // the last object in the array. Again, we take the safe approach and include it.
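    //
    // For example (illustration only), `layout_for_array::<u16>(10)` yields a
    // layout with size 20 and alignment 2: size 2 rounded up to alignment 2,
    // times `len`.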

    let layout = Layout::new::<T>();
    let size_rounded_up = round_up_to(layout.size(), layout.align())?;
    let total_size = len.checked_mul(size_rounded_up)?;

    Layout::from_size_align(total_size, layout.align()).ok()
}

/// Wrapper around `Layout::from_size_align` that adds debug assertions.
#[inline]
unsafe fn layout_from_size_align(size: usize, align: usize) -> Layout {
    if cfg!(debug_assertions) {
        Layout::from_size_align(size, align).unwrap()
    } else {
        Layout::from_size_align_unchecked(size, align)
    }
}

#[inline(never)]
fn allocation_size_overflow<T>() -> T {
    panic!("requested allocation size overflowed")
}

impl Bump {
    /// Construct a new arena to bump allocate into.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// # let _ = bump;
    /// ```
    pub fn new() -> Bump {
        Self::with_capacity(0)
    }

    /// Attempt to construct a new arena to bump allocate into.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::try_new();
    /// # let _ = bump.unwrap();
    /// ```
    pub fn try_new() -> Result<Bump, alloc::AllocErr> {
        Bump::try_with_capacity(0)
    }

    /// Construct a new arena with the specified byte capacity to bump allocate into.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::with_capacity(100);
    /// # let _ = bump;
    /// ```
    pub fn with_capacity(capacity: usize) -> Bump {
        Bump::try_with_capacity(capacity).unwrap_or_else(|_| oom())
    }

    /// Attempt to construct a new arena with the specified byte capacity to bump allocate into.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::try_with_capacity(100);
    /// # let _ = bump.unwrap();
    /// ```
    pub fn try_with_capacity(capacity: usize) -> Result<Self, alloc::AllocErr> {
        let chunk_footer = Self::new_chunk(
            None,
            Some(unsafe { layout_from_size_align(capacity, 1) }),
            None,
        )
        .ok_or(alloc::AllocErr {})?;

        Ok(Bump {
            current_chunk_footer: Cell::new(chunk_footer),
        })
    }

    /// Allocate a new chunk and return its initialized footer.
    ///
    /// If given, `old_size_with_footer` is the total size (including the
    /// footer) of the previous chunk, and `requested_layout` is the layout of
    /// the allocation request that triggered us to fall back to allocating a
    /// new chunk of memory.
    fn new_chunk(
        old_size_with_footer: Option<usize>,
        requested_layout: Option<Layout>,
        prev: Option<NonNull<ChunkFooter>>,
    ) -> Option<NonNull<ChunkFooter>> {
        unsafe {
            // As a sane default, we want our new allocation to be about twice
            // as big as the previous allocation.
            let mut new_size_without_footer =
                if let Some(old_size_with_footer) = old_size_with_footer {
                    let old_size_without_footer = old_size_with_footer - FOOTER_SIZE;
                    old_size_without_footer.checked_mul(2)?
                } else {
                    DEFAULT_CHUNK_SIZE_WITHOUT_FOOTER
                };

            // We want to have CHUNK_ALIGN or better alignment
            let mut align = CHUNK_ALIGN;

            // If we already know we need to fulfill some request,
            // make sure we allocate at least enough to satisfy it
            if let Some(requested_layout) = requested_layout {
                align = align.max(requested_layout.align());
                let requested_size = round_up_to(requested_layout.size(), align)
                    .unwrap_or_else(allocation_size_overflow);
                new_size_without_footer = new_size_without_footer.max(requested_size);
            }

            // We want our allocations to play nice with the memory allocator,
            // and waste as little memory as possible. For small allocations,
            // this means that the entire allocation, including the chunk
            // footer and malloc's internal overhead, is as close to a power of
            // two as we can go without going over. For larger allocations, we
            // only need to get close to a page boundary without going over.
            if new_size_without_footer < PAGE_STRATEGY_CUTOFF {
                new_size_without_footer =
                    (new_size_without_footer + OVERHEAD).next_power_of_two() - OVERHEAD;
            } else {
                new_size_without_footer =
                    round_up_to(new_size_without_footer + OVERHEAD, 0x1000)? - OVERHEAD;
            }
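
            // As a concrete illustration (64-bit target, ignoring any larger
            // requested layout): growing from the default 448-byte chunk
            // doubles to 896 usable bytes; 896 + OVERHEAD (64) = 960, whose
            // next power of two is 1024, so we end up asking for 1024 - 64 =
            // 960 usable bytes in the new chunk.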

            debug_assert_eq!(align % CHUNK_ALIGN, 0);
            debug_assert_eq!(new_size_without_footer % CHUNK_ALIGN, 0);
            let size = new_size_without_footer
                .checked_add(FOOTER_SIZE)
                .unwrap_or_else(allocation_size_overflow);
            let layout = layout_from_size_align(size, align);

            debug_assert!(size >= old_size_with_footer.unwrap_or(0) * 2);

            let data = alloc(layout);
            let data = NonNull::new(data)?;

            // The `ChunkFooter` is at the end of the chunk.
            let footer_ptr = data.as_ptr() as usize + new_size_without_footer;
            debug_assert_eq!((data.as_ptr() as usize) % align, 0);
            debug_assert_eq!(footer_ptr % CHUNK_ALIGN, 0);
            let footer_ptr = footer_ptr as *mut ChunkFooter;

            // The bump pointer is initialized to the end of the range we will
            // bump out of.
            let ptr = Cell::new(NonNull::new_unchecked(footer_ptr as *mut u8));

            ptr::write(
                footer_ptr,
                ChunkFooter {
                    data,
                    layout,
                    prev: Cell::new(prev),
                    ptr,
                },
            );

            Some(NonNull::new_unchecked(footer_ptr))
        }
    }

    /// Reset this bump allocator.
    ///
    /// Performs mass deallocation on everything allocated in this arena by
    /// resetting the pointer into the underlying chunk of memory to the start
    /// of the chunk. Does not run any `Drop` implementations on deallocated
    /// objects; see [the `Bump` type's top-level
    /// documentation](./struct.Bump.html) for details.
    ///
    /// If this arena has allocated multiple chunks to bump allocate into, then
    /// the excess chunks are returned to the global allocator.
    ///
    /// ## Example
    ///
    /// ```
    /// let mut bump = bumpalo::Bump::new();
    ///
    /// // Allocate a bunch of things.
    /// {
    ///     for i in 0..100 {
    ///         bump.alloc(i);
    ///     }
    /// }
    ///
    /// // Reset the arena.
    /// bump.reset();
    ///
    /// // Allocate some new things in the space previously occupied by the
    /// // original things.
    /// for j in 200..400 {
    ///     bump.alloc(j);
    /// }
    /// ```
    pub fn reset(&mut self) {
        // Takes `&mut self` so `self` must be unique and there can't be any
        // borrows active that would get invalidated by resetting.
        unsafe {
            let cur_chunk = self.current_chunk_footer.get();

            // Deallocate all chunks except the current one
            let prev_chunk = cur_chunk.as_ref().prev.replace(None);
            dealloc_chunk_list(prev_chunk);

            // Reset the bump finger to the end of the chunk.
            cur_chunk.as_ref().ptr.set(cur_chunk.cast());

            debug_assert!(
                self.current_chunk_footer
                    .get()
                    .as_ref()
                    .prev
                    .get()
                    .is_none(),
                "We should only have a single chunk"
            );
            debug_assert_eq!(
                self.current_chunk_footer.get().as_ref().ptr.get(),
                self.current_chunk_footer.get().cast(),
                "Our chunk's bump finger should be reset to the start of its allocation"
            );
        }
    }

    /// Allocate an object in this `Bump` and return an exclusive reference to
    /// it.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for `T` would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc("hello");
    /// assert_eq!(*x, "hello");
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc<T>(&self, val: T) -> &mut T {
        self.alloc_with(|| val)
    }

    /// Pre-allocates space for an object in this `Bump`, initializes it using
    /// the closure, then returns an exclusive reference to it.
    ///
    /// Calling `bump.alloc(x)` is essentially equivalent to calling
    /// `bump.alloc_with(|| x)`. However, if you use `alloc_with`, then the
    /// closure will not be invoked until after allocating space for storing
    /// `x` on the heap.
    ///
    /// This can be useful in certain edge-cases related to compiler
    /// optimizations. When evaluating `bump.alloc(x)`, semantically `x` is
    /// first put on the stack and then moved onto the heap. In some cases,
    /// the compiler is able to optimize this into constructing `x` directly
    /// on the heap, however in many cases it does not.
    ///
    /// The function `alloc_with` tries to help the compiler be smarter. In
    /// most cases doing `bump.alloc_with(|| x)` in release mode will be
    /// enough to help the compiler realize that this optimization is valid
    /// and to construct `x` directly onto the heap.
    ///
    /// ## Warning
    ///
    /// This function critically depends on compiler optimizations to achieve
    /// its desired effect. This means that it is not an effective tool when
    /// compiling without optimizations on.
    ///
    /// Even when optimizations are on, this function does not **guarantee**
    /// that the value is constructed on the heap. To the best of our
    /// knowledge no such guarantee can be made in stable Rust as of 1.33.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for `T` would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc_with(|| "hello");
    /// assert_eq!(*x, "hello");
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_with<F, T>(&self, f: F) -> &mut T
    where
        F: FnOnce() -> T,
    {
        #[inline(always)]
        unsafe fn inner_writer<T, F>(ptr: *mut T, f: F)
        where
            F: FnOnce() -> T,
        {
            // This function is translated as:
            // - allocate space for a T on the stack
            // - call f() with the return value being put onto this stack space
            // - memcpy from the stack to the heap
            //
            // Ideally we want LLVM to always realize that doing a stack
            // allocation is unnecessary and optimize the code so it writes
            // directly into the heap instead. It seems we get it to realize
            // this most consistently if we put this critical line into its
            // own function instead of inlining it into the surrounding code.
            ptr::write(ptr, f())
        }

        let layout = Layout::new::<T>();

        unsafe {
            let p = self.alloc_layout(layout);
            let p = p.as_ptr() as *mut T;
            inner_writer(p, f);
            &mut *p
        }
    }

    /// `Copy` a slice into this `Bump` and return an exclusive reference to
    /// the copy.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc_slice_copy(&[1, 2, 3]);
    /// assert_eq!(x, &[1, 2, 3]);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_copy<T>(&self, src: &[T]) -> &mut [T]
    where
        T: Copy,
    {
        let layout = Layout::for_value(src);
        let dst = self.alloc_layout(layout).cast::<T>();

        unsafe {
            ptr::copy_nonoverlapping(src.as_ptr(), dst.as_ptr(), src.len());
            slice::from_raw_parts_mut(dst.as_ptr(), src.len())
        }
    }

    /// `Clone` a slice into this `Bump` and return an exclusive reference to
    /// the clone. Prefer `alloc_slice_copy` if `T` is `Copy`.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// #[derive(Clone, Debug, Eq, PartialEq)]
    /// struct Sheep {
    ///     name: String,
    /// }
    ///
    /// let originals = vec![
    ///     Sheep { name: "Alice".into() },
    ///     Sheep { name: "Bob".into() },
    ///     Sheep { name: "Cathy".into() },
    /// ];
    ///
    /// let bump = bumpalo::Bump::new();
    /// let clones = bump.alloc_slice_clone(&originals);
    /// assert_eq!(originals, clones);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_clone<T>(&self, src: &[T]) -> &mut [T]
    where
        T: Clone,
    {
        let layout = Layout::for_value(src);
        let dst = self.alloc_layout(layout).cast::<T>();

        unsafe {
            for (i, val) in src.iter().cloned().enumerate() {
                ptr::write(dst.as_ptr().add(i), val);
            }

            slice::from_raw_parts_mut(dst.as_ptr(), src.len())
        }
    }

    /// `Copy` a string slice into this `Bump` and return an exclusive reference to it.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the string would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let hello = bump.alloc_str("hello world");
    /// assert_eq!("hello world", hello);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_str(&self, src: &str) -> &mut str {
        let buffer = self.alloc_slice_copy(src.as_bytes());
        unsafe {
            // This is OK because the buffer came in as a `str`, so it is
            // guaranteed to be valid UTF-8.
            str::from_utf8_unchecked_mut(buffer)
        }
    }

    /// Allocates a new slice of length `len` into this `Bump` and returns an
    /// exclusive reference to it.
    ///
    /// The elements of the slice are initialized using the supplied closure.
    /// The closure argument is the position in the slice.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc_slice_fill_with(5, |i| 5*(i+1));
    /// assert_eq!(x, &[5, 10, 15, 20, 25]);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_with<T, F>(&self, len: usize, mut f: F) -> &mut [T]
    where
        F: FnMut(usize) -> T,
    {
        let layout = layout_for_array::<T>(len).unwrap_or_else(|| oom());
        let dst = self.alloc_layout(layout).cast::<T>();

        unsafe {
            for i in 0..len {
                ptr::write(dst.as_ptr().add(i), f(i));
            }

            let result = slice::from_raw_parts_mut(dst.as_ptr(), len);
            debug_assert_eq!(Layout::for_value(result), layout);
            result
        }
    }

    /// Allocates a new slice of length `len` into this `Bump` and returns an
    /// exclusive reference to it.
    ///
    /// All elements of the slice are initialized to `value`.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc_slice_fill_copy(5, 42);
    /// assert_eq!(x, &[42, 42, 42, 42, 42]);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_copy<T: Copy>(&self, len: usize, value: T) -> &mut [T] {
        self.alloc_slice_fill_with(len, |_| value)
    }

    /// Allocates a new slice of length `len` into this `Bump` and returns an
    /// exclusive reference to it.
    ///
    /// All elements of the slice are initialized to `value.clone()`.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let s: String = "Hello Bump!".to_string();
    /// let x: &[String] = bump.alloc_slice_fill_clone(2, &s);
    /// assert_eq!(x.len(), 2);
    /// assert_eq!(&x[0], &s);
    /// assert_eq!(&x[1], &s);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_clone<T: Clone>(&self, len: usize, value: &T) -> &mut [T] {
        self.alloc_slice_fill_with(len, |_| value.clone())
    }

    /// Allocates a new slice into this `Bump`, with a length equal to the
    /// supplied iterator's length, and returns an exclusive reference to it.
    ///
    /// The elements are initialized using the supplied iterator.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow, or if the supplied
    /// iterator returns fewer elements than it promised.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x: &[i32] = bump.alloc_slice_fill_iter([2, 3, 5].iter().cloned().map(|i| i * i));
    /// assert_eq!(x, [4, 9, 25]);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_iter<T, I>(&self, iter: I) -> &mut [T]
    where
        I: IntoIterator<Item = T>,
        I::IntoIter: ExactSizeIterator,
    {
        let mut iter = iter.into_iter();
        self.alloc_slice_fill_with(iter.len(), |_| {
            iter.next().expect("Iterator supplied too few elements")
        })
    }

    /// Allocates a new slice of length `len` into this `Bump` and returns an
    /// exclusive reference to it.
    ///
    /// All elements of the slice are initialized to `T::default()`.
    ///
    /// ## Panics
    ///
    /// Panics if reserving space for the slice would cause an overflow.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let x = bump.alloc_slice_fill_default::<u32>(5);
    /// assert_eq!(x, &[0, 0, 0, 0, 0]);
    /// ```
    #[inline(always)]
    #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_default<T: Default>(&self, len: usize) -> &mut [T] {
        self.alloc_slice_fill_with(len, |_| T::default())
    }

    /// Allocate space for an object with the given `Layout`.
    ///
    /// The returned pointer points at uninitialized memory, and should be
    /// initialized with
    /// [`std::ptr::write`](https://doc.rust-lang.org/std/ptr/fn.write.html).
    #[inline(always)]
    pub fn alloc_layout(&self, layout: Layout) -> NonNull<u8> {
        self.try_alloc_layout(layout).unwrap_or_else(|_| oom())
    }

    /// Attempts to allocate space for an object with the given `Layout` or else returns
    /// an `Err`.
    ///
    /// The returned pointer points at uninitialized memory, and should be
    /// initialized with
    /// [`std::ptr::write`](https://doc.rust-lang.org/std/ptr/fn.write.html).
    #[inline(always)]
    pub fn try_alloc_layout(&self, layout: Layout) -> Result<NonNull<u8>, alloc::AllocErr> {
        if let Some(p) = self.try_alloc_layout_fast(layout) {
            Ok(p)
        } else {
            self.alloc_layout_slow(layout).ok_or(alloc::AllocErr {})
        }
    }

    #[inline(always)]
    fn try_alloc_layout_fast(&self, layout: Layout) -> Option<NonNull<u8>> {
        // We don't need to check for ZSTs here since they will automatically
        // be handled properly: the pointer will be bumped by zero bytes,
        // modulo alignment. This keeps the fast path optimized for non-ZSTs,
        // which are much more common.
        unsafe {
            let footer = self.current_chunk_footer.get();
            let footer = footer.as_ref();
            let ptr = footer.ptr.get().as_ptr() as usize;
            let start = footer.data.as_ptr() as usize;
            debug_assert!(start <= ptr);
            debug_assert!(ptr <= footer as *const _ as usize);

            let ptr = ptr.checked_sub(layout.size())?;
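            // Round the candidate pointer down to the required alignment.
            // `layout.align()` is a power of two, so masking off its low bits
            // rounds down to a multiple of the alignment; the `aligned_ptr >=
            // start` check below makes sure we stayed inside the chunk.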
            let aligned_ptr = ptr & !(layout.align() - 1);

            if aligned_ptr >= start {
                let aligned_ptr = NonNull::new_unchecked(aligned_ptr as *mut u8);
                footer.ptr.set(aligned_ptr);
                Some(aligned_ptr)
            } else {
                None
            }
        }
    }

    /// Gets the remaining capacity in the current chunk (in bytes).
    ///
    /// ## Example
    ///
    /// ```
    /// use bumpalo::Bump;
    ///
    /// let bump = Bump::with_capacity(100);
    ///
    /// let capacity = bump.chunk_capacity();
    /// assert!(capacity >= 100);
    /// ```
    pub fn chunk_capacity(&self) -> usize {
        let current_footer = self.current_chunk_footer.get();
        let current_footer = unsafe { current_footer.as_ref() };

        current_footer as *const _ as usize - current_footer.data.as_ptr() as usize
    }

    /// Slow path allocation for when we need to allocate a new chunk from the
    /// global allocator because there isn't enough room in our current chunk.
    #[inline(never)]
    fn alloc_layout_slow(&self, layout: Layout) -> Option<NonNull<u8>> {
        unsafe {
            let size = layout.size();

            // Get a new chunk from the global allocator.
            let current_footer = self.current_chunk_footer.get();
            let current_layout = current_footer.as_ref().layout;
            let new_footer = Bump::new_chunk(
                Some(current_layout.size()),
                Some(layout),
                Some(current_footer),
            )?;
            debug_assert_eq!(
                new_footer.as_ref().data.as_ptr() as usize % layout.align(),
                0
            );

            // Set the new chunk as our new current chunk.
            self.current_chunk_footer.set(new_footer);

            let new_footer = new_footer.as_ref();

            // Move the bump ptr finger down to allocate room for the requested
            // value. We know this can't overflow because we successfully
            // allocated a chunk of at least the requested size.
            let ptr = new_footer.ptr.get().as_ptr() as usize - size;
            // Round the pointer down to the requested alignment.
            let ptr = ptr & !(layout.align() - 1);
            debug_assert!(
                ptr <= new_footer as *const _ as usize,
                "{:#x} <= {:#x}",
                ptr,
                new_footer as *const _ as usize
            );
            let ptr = NonNull::new_unchecked(ptr as *mut u8);
            new_footer.ptr.set(ptr);

            // Return a pointer to the freshly allocated region in this chunk.
            Some(ptr)
        }
    }

    /// Returns an iterator over each chunk of allocated memory that
    /// this arena has bump allocated into.
    ///
    /// The chunks are returned ordered by allocation time, with the most
    /// recently allocated chunk being returned first, and the least recently
    /// allocated chunk being returned last.
    ///
    /// The values inside each chunk are also ordered by allocation time, with
    /// the most recent allocation being earlier in the slice, and the least
    /// recent allocation being towards the end of the slice.
    ///
    /// ## Safety
    ///
    /// Because this method takes `&mut self`, we know that the bump arena
    /// reference is unique and therefore there aren't any active references to
    /// any of the objects we've allocated in it either. This potential aliasing
    /// of exclusive references is one common footgun for unsafe code that we
    /// don't need to worry about here.
    ///
    /// However, there could be regions of uninitialized memory used as padding
    /// between allocations, which is why this iterator has items of type
    /// `[MaybeUninit<u8>]`, instead of simply `[u8]`.
    ///
    /// The only way to guarantee that there is no padding between allocations
    /// or within allocated objects is if all of these properties hold:
    ///
    /// 1. Every object allocated in this arena has the same alignment,
    ///    and that alignment is at most 16.
    /// 2. Every object's size is a multiple of its alignment.
    /// 3. None of the objects allocated in this arena contain any internal
    ///    padding.
    ///
    /// If you want to use this `iter_allocated_chunks` method, it is *your*
    /// responsibility to ensure that these properties hold before calling
    /// `MaybeUninit::assume_init` or otherwise reading the returned values.
    ///
    /// Finally, you must also ensure that any values allocated into the bump
    /// arena have not had their `Drop` implementations called on them,
    /// e.g. after dropping a [`bumpalo::boxed::Box<T>`][crate::boxed::Box].
    ///
    /// ## Example
    ///
    /// ```
    /// let mut bump = bumpalo::Bump::new();
    ///
    /// // Allocate a bunch of `i32`s in this bump arena, potentially causing
    /// // additional memory chunks to be reserved.
    /// for i in 0..10000 {
    ///     bump.alloc(i);
    /// }
    ///
    /// // Iterate over each chunk we've bump allocated into. This is safe
    /// // because we have only allocated `i32`s in this arena, which fulfills
    /// // the above requirements.
    /// for ch in bump.iter_allocated_chunks() {
    ///     println!("Used a chunk that is {} bytes long", ch.len());
    ///     println!("The first byte is {:?}", unsafe {
    ///         ch.get(0).unwrap().assume_init()
    ///     });
    /// }
    ///
    /// // Within a chunk, allocations are ordered from most recent to least
    /// // recent. If we allocated 'a', then 'b', then 'c', when we iterate
    /// // through the chunk's data, we get them in the order 'c', then 'b',
    /// // then 'a'.
    ///
    /// bump.reset();
    /// bump.alloc(b'a');
    /// bump.alloc(b'b');
    /// bump.alloc(b'c');
    ///
    /// assert_eq!(bump.iter_allocated_chunks().count(), 1);
    /// let chunk = bump.iter_allocated_chunks().nth(0).unwrap();
    /// assert_eq!(chunk.len(), 3);
    ///
    /// // Safe because we've only allocated `u8`s in this arena, which
    /// // fulfills the above requirements.
    /// unsafe {
    ///     assert_eq!(chunk[0].assume_init(), b'c');
    ///     assert_eq!(chunk[1].assume_init(), b'b');
    ///     assert_eq!(chunk[2].assume_init(), b'a');
    /// }
    /// ```
    pub fn iter_allocated_chunks(&mut self) -> ChunkIter<'_> {
        ChunkIter {
            footer: Some(self.current_chunk_footer.get()),
            bump: PhantomData,
        }
    }

    /// Calculates the number of bytes currently allocated across all chunks.
    ///
    /// If you allocate types of different alignments or types with
    /// larger-than-typical alignment in the same arena, some padding
    /// bytes might get allocated in the bump arena. Note that those padding
    /// bytes will add to this method's resulting sum, so you cannot rely
    /// on it only counting the sum of the sizes of the things
    /// you've allocated in the arena.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let _x = bump.alloc_slice_fill_default::<u32>(5);
    /// let bytes = bump.allocated_bytes();
    /// assert!(bytes >= core::mem::size_of::<u32>() * 5);
    /// ```
    pub fn allocated_bytes(&self) -> usize {
        let mut footer = Some(self.current_chunk_footer.get());

        let mut bytes = 0;

        while let Some(f) = footer {
            let foot = unsafe { f.as_ref() };

            let ptr = foot.ptr.get().as_ptr() as usize;
            debug_assert!(ptr <= foot as *const _ as usize);

            bytes += foot as *const _ as usize - ptr;

            footer = foot.prev.get();
        }

        bytes
    }

    #[inline]
    unsafe fn is_last_allocation(&self, ptr: NonNull<u8>) -> bool {
        let footer = self.current_chunk_footer.get();
        let footer = footer.as_ref();
        footer.ptr.get() == ptr
    }
}

/// An iterator over each chunk of allocated memory that
/// an arena has bump allocated into.
///
/// The chunks are returned ordered by allocation time, with the most recently
/// allocated chunk being returned first.
///
/// The values inside each chunk are also ordered by allocation time, with the
/// most recent allocation being earlier in the slice.
///
/// This struct is created by the [`iter_allocated_chunks`] method on
/// [`Bump`]. See that method for a safety description regarding reading from
/// the returned items.
///
/// [`Bump`]: ./struct.Bump.html
/// [`iter_allocated_chunks`]: ./struct.Bump.html#method.iter_allocated_chunks
#[derive(Debug)]
pub struct ChunkIter<'a> {
    footer: Option<NonNull<ChunkFooter>>,
    bump: PhantomData<&'a mut Bump>,
}

impl<'a> Iterator for ChunkIter<'a> {
    type Item = &'a [mem::MaybeUninit<u8>];
    fn next(&mut self) -> Option<&'a [mem::MaybeUninit<u8>]> {
        unsafe {
            let foot = self.footer?;
            let foot = foot.as_ref();
            let data = foot.data.as_ptr() as usize;
            let ptr = foot.ptr.get().as_ptr() as usize;
            debug_assert!(data <= ptr);
            debug_assert!(ptr <= foot as *const _ as usize);

            let len = foot as *const _ as usize - ptr;
            let slice = slice::from_raw_parts(ptr as *const mem::MaybeUninit<u8>, len);
            self.footer = foot.prev.get();
            Some(slice)
        }
    }
}

impl<'a> iter::FusedIterator for ChunkIter<'a> {}

#[inline(never)]
#[cold]
fn oom() -> ! {
    panic!("out of memory")
}

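// Note that the allocator trait below is implemented for `&Bump` (a shared
// reference to the arena) rather than for `Bump` itself, so collections can
// allocate through a borrow of the arena while other allocations, which also
// borrow the arena, are still live.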
unsafe impl<'a> alloc::Alloc for &'a Bump {
    #[inline(always)]
    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, alloc::AllocErr> {
        self.try_alloc_layout(layout)
    }

    #[inline]
    unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout) {
        // If the pointer is the last allocation we made, we can reuse the bytes,
        // otherwise they are simply leaked -- at least until somebody calls reset().
        if self.is_last_allocation(ptr) {
            let ptr = NonNull::new_unchecked(ptr.as_ptr().add(layout.size()));
            self.current_chunk_footer.get().as_ref().ptr.set(ptr);
        }
    }

    #[inline]
    unsafe fn realloc(
        &mut self,
        ptr: NonNull<u8>,
        layout: Layout,
        new_size: usize,
    ) -> Result<NonNull<u8>, alloc::AllocErr> {
        let old_size = layout.size();

        if old_size == 0 {
            return self.alloc(layout);
        }

        if new_size <= old_size {
            if self.is_last_allocation(ptr)
                // Only reclaim the excess space (which requires a copy) if it
                // is worth it: we are actually going to recover "enough" space
                // and we can do a non-overlapping copy.
                && new_size <= old_size / 2
            {
                let delta = old_size - new_size;
                let footer = self.current_chunk_footer.get();
                let footer = footer.as_ref();
                footer
                    .ptr
                    .set(NonNull::new_unchecked(footer.ptr.get().as_ptr().add(delta)));
                let new_ptr = footer.ptr.get();
                // NB: we know it is non-overlapping because of the size check
                // in the `if` condition.
                ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), new_size);
                return Ok(new_ptr);
            } else {
                return Ok(ptr);
            }
        }

        if self.is_last_allocation(ptr) {
            // Try to allocate the delta size within this same block so we can
            // reuse the currently allocated space.
            let delta = new_size - old_size;
            if let Some(p) =
                self.try_alloc_layout_fast(layout_from_size_align(delta, layout.align()))
            {
                ptr::copy(ptr.as_ptr(), p.as_ptr(), old_size);
                return Ok(p);
            }
        }

        // Fallback: do a fresh allocation and copy the existing data into it.
        let new_layout = layout_from_size_align(new_size, layout.align());
        let new_ptr = self.try_alloc_layout(new_layout)?;
        ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), old_size);
        Ok(new_ptr)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn chunk_footer_is_five_words() {
        assert_eq!(mem::size_of::<ChunkFooter>(), mem::size_of::<usize>() * 5);
    }

    #[test]
    #[allow(clippy::cognitive_complexity)]
    fn test_realloc() {
        use crate::alloc::Alloc;

        unsafe {
            const CAPACITY: usize = 1024 - OVERHEAD;
            let mut b = Bump::with_capacity(CAPACITY);

            // `realloc` doesn't shrink allocations that aren't "worth it".
            let layout = Layout::from_size_align(100, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 51).unwrap();
            assert_eq!(p, q);
            b.reset();

            // `realloc` will shrink allocations that are "worth it".
            let layout = Layout::from_size_align(100, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 50).unwrap();
            assert!(p != q);
            b.reset();

            // `realloc` will reuse the last allocation when growing.
            let layout = Layout::from_size_align(10, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 11).unwrap();
            assert_eq!(q.as_ptr() as usize, p.as_ptr() as usize - 1);
            b.reset();

            // `realloc` will allocate a new chunk when growing the last
            // allocation, if need be.
            let layout = Layout::from_size_align(1, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, CAPACITY + 1).unwrap();
            assert!(q.as_ptr() as usize != p.as_ptr() as usize - CAPACITY);
            b = Bump::with_capacity(CAPACITY);

            // `realloc` will allocate and copy when reallocating anything that
            // wasn't the last allocation.
            let layout = Layout::from_size_align(1, 1).unwrap();
            let p = b.alloc_layout(layout);
            let _ = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 2).unwrap();
            assert!(q.as_ptr() as usize != p.as_ptr() as usize - 1);
            b.reset();
        }
    }

    #[test]
    fn invalid_read() {
        use alloc::Alloc;

        let mut b = &Bump::new();

        unsafe {
            let l1 = Layout::from_size_align(12000, 4).unwrap();
            let p1 = Alloc::alloc(&mut b, l1).unwrap();

            let l2 = Layout::from_size_align(1000, 4).unwrap();
            Alloc::alloc(&mut b, l2).unwrap();

            let p1 = b.realloc(p1, l1, 24000).unwrap();
            let l3 = Layout::from_size_align(24000, 4).unwrap();
            b.realloc(p1, l3, 48000).unwrap();
        }
    }
}