1 /*!
2 
3 **A fast bump allocation arena for Rust.**
4 
5 [![](https://docs.rs/bumpalo/badge.svg)](https://docs.rs/bumpalo/)
6 [![](https://img.shields.io/crates/v/bumpalo.svg)](https://crates.io/crates/bumpalo)
7 [![](https://img.shields.io/crates/d/bumpalo.svg)](https://crates.io/crates/bumpalo)
8 [![Build Status](https://github.com/fitzgen/bumpalo/workflows/Rust/badge.svg)](https://github.com/fitzgen/bumpalo/actions?query=workflow%3ARust)
9 
10 ![](https://github.com/fitzgen/bumpalo/raw/master/bumpalo.png)
11 
12 ## Bump Allocation
13 
Bump allocation is a fast but limited approach to allocation. We have a chunk
of memory, and we maintain a pointer within that memory. Whenever we allocate
an object, we do a quick check that we have enough capacity left in our chunk
to hold the object, and then update the pointer by the object's size. *That's
it!*
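
For example, here is a toy sketch of the idea (ignoring alignment, and not how
this crate is actually implemented internally; the `ToyBump` type below is
purely illustrative):

```rust
// A bump allocator is just a region plus a cursor: allocation checks the
// remaining capacity and moves the cursor by the requested size.
struct ToyBump {
    capacity: usize,
    cursor: usize, // bytes handed out so far
}

impl ToyBump {
    fn alloc(&mut self, size: usize) -> Option<usize> {
        if self.capacity - self.cursor < size {
            return None; // not enough room left in this chunk
        }
        let offset = self.cursor;
        self.cursor += size;
        Some(offset) // offset of the new allocation within the chunk
    }
}

let mut toy = ToyBump { capacity: 64, cursor: 0 };
assert_eq!(toy.alloc(16), Some(0));
assert_eq!(toy.alloc(16), Some(16));
```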
19 
20 The disadvantage of bump allocation is that there is no general way to
21 deallocate individual objects or reclaim the memory region for a
22 no-longer-in-use object.
23 
These trade-offs make bump allocation well-suited for *phase-oriented*
allocations: a group of objects that are all allocated during the same program
phase, used, and then deallocated together as a group.
27 
28 ## Deallocation en Masse, but No `Drop`
29 
30 To deallocate all the objects in the arena at once, we can simply reset the bump
31 pointer back to the start of the arena's memory chunk. This makes mass
32 deallocation *extremely* fast, but allocated objects' `Drop` implementations are
33 not invoked.
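
For example, [`Bump::reset`] performs exactly this kind of mass deallocation:

```rust
use bumpalo::Bump;

let mut bump = Bump::new();

// Allocate a bunch of short-lived values.
for i in 0..100 {
    bump.alloc(i);
}

// Rewind the bump pointer, deallocating everything at once. No `Drop`
// implementations are run.
bump.reset();
```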
34 
35 > **However:** [`bumpalo::boxed::Box<T>`][crate::boxed::Box] can be used to wrap
36 > `T` values allocated in the `Bump` arena, and calls `T`'s `Drop`
37 > implementation when the `Box<T>` wrapper goes out of scope. This is similar to
38 > how [`std::boxed::Box`] works, except without deallocating its backing memory.
39 
40 [`std::boxed::Box`]: https://doc.rust-lang.org/std/boxed/struct.Box.html
41 
42 ## What happens when the memory chunk is full?
43 
44 This implementation will allocate a new memory chunk from the global allocator
45 and then start bump allocating into this new memory chunk.
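
For example, an arena created with a small initial capacity keeps working after
its first chunk fills up; additional chunks are requested behind the scenes:

```rust
use bumpalo::Bump;

// Start with a deliberately tiny first chunk.
let bump = Bump::with_capacity(16);

// Allocating far more than 16 bytes still succeeds, because the arena
// transparently allocates new chunks from the global allocator as needed.
for i in 0..1_000u64 {
    bump.alloc(i);
}
```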
46 
47 ## Example
48 
49 ```
50 use bumpalo::Bump;
51 use std::u64;
52 
53 struct Doggo {
54     cuteness: u64,
55     age: u8,
56     scritches_required: bool,
57 }
58 
59 // Create a new arena to bump allocate into.
60 let bump = Bump::new();
61 
62 // Allocate values into the arena.
63 let scooter = bump.alloc(Doggo {
64     cuteness: u64::max_value(),
65     age: 8,
66     scritches_required: true,
67 });
68 
69 // Exclusive, mutable references to the just-allocated value are returned.
70 assert!(scooter.scritches_required);
71 scooter.age += 1;
72 ```
73 
74 ## Collections
75 
When the `"collections"` cargo feature is enabled, forks of some of the `std`
library's collection types are available in the `collections` module. These
collection types are modified to allocate their space inside `bumpalo::Bump`
arenas.
80 
81 ```rust
82 # #[cfg(feature = "collections")]
83 # {
84 use bumpalo::{Bump, collections::Vec};
85 
86 // Create a new bump arena.
87 let bump = Bump::new();
88 
89 // Create a vector of integers whose storage is backed by the bump arena. The
90 // vector cannot outlive its backing arena, and this property is enforced with
91 // Rust's lifetime rules.
92 let mut v = Vec::new_in(&bump);
93 
94 // Push a bunch of integers onto `v`!
95 for i in 0..100 {
96     v.push(i);
97 }
98 # }
99 ```
100 
101 Eventually [all `std` collection types will be parameterized by an
102 allocator](https://github.com/rust-lang/rust/issues/42774) and we can remove
103 this `collections` module and use the `std` versions.
104 
105 For unstable, nightly-only support for custom allocators in `std`, see the
106 `allocator_api` section below.
107 
108 ## `bumpalo::boxed::Box`
109 
When the `"boxed"` cargo feature is enabled, a fork of the `std::boxed::Box`
type is available in the `boxed` module. This `Box` type is modified to
allocate its space inside `bumpalo::Bump` arenas.

**A `Box<T>` runs `T`'s drop implementation when the `Box<T>` is dropped.** You
can use this to work around the fact that `Bump` does not itself drop values
allocated in its space.
117 
118 ```rust
119 # #[cfg(feature = "boxed")]
120 # {
121 use bumpalo::{Bump, boxed::Box};
122 use std::sync::atomic::{AtomicUsize, Ordering};
123 
124 static NUM_DROPPED: AtomicUsize = AtomicUsize::new(0);
125 
126 struct CountDrops;
127 
128 impl Drop for CountDrops {
129     fn drop(&mut self) {
130         NUM_DROPPED.fetch_add(1, Ordering::SeqCst);
131     }
132 }
133 
134 // Create a new bump arena.
135 let bump = Bump::new();
136 
137 // Create a `CountDrops` inside the bump arena.
138 let mut c = Box::new_in(CountDrops, &bump);
139 
140 // No `CountDrops` have been dropped yet.
141 assert_eq!(NUM_DROPPED.load(Ordering::SeqCst), 0);
142 
143 // Drop our `Box<CountDrops>`.
144 drop(c);
145 
// Its `Drop` implementation was run, and so `NUM_DROPPED` has been incremented.
147 assert_eq!(NUM_DROPPED.load(Ordering::SeqCst), 1);
148 # }
149 ```
150 
151 ## `#![no_std]` Support
152 
153 Bumpalo is a `no_std` crate. It depends only on the `alloc` and `core` crates.
154 
155 ## Thread support
156 
`Bump` is `!Sync`, which makes it hard to use in certain situations involving
threads, for example with `rayon`.
159 
160 The [`bumpalo-herd`](https://crates.io/crates/bumpalo-herd) crate provides a pool of `Bump`
161 allocators for use in such situations.
162 
163 ## Nightly Rust `feature(allocator_api)` Support
164 
165 The unstable, nightly-only Rust `allocator_api` feature defines an `Allocator`
166 trait and exposes custom allocators for `std` types. Bumpalo has a matching
167 `allocator_api` cargo feature to enable implementing `Allocator` and using
168 `Bump` with `std` collections. Note that, as `feature(allocator_api)` is
169 unstable and only in nightly Rust, Bumpalo's matching `allocator_api` cargo
170 feature should be considered unstable, and will not follow the semver
171 conventions that the rest of the crate does.
172 
173 First, enable the `allocator_api` feature in your `Cargo.toml`:
174 
175 ```toml
176 [dependencies]
177 bumpalo = { version = "3.4.0", features = ["allocator_api"] }
178 ```
179 
180 Next, enable the `allocator_api` nightly Rust feature in your `src/lib.rs` or `src/main.rs`:
181 
182 ```rust
183 # #[cfg(feature = "allocator_api")]
184 # {
185 #![feature(allocator_api)]
186 # }
187 ```
188 
189 Finally, use `std` collections with `Bump`, so that their internal heap
190 allocations are made within the given bump arena:
191 
192 ```
193 # #![cfg_attr(feature = "allocator_api", feature(allocator_api))]
194 # #[cfg(feature = "allocator_api")]
195 # {
196 #![feature(allocator_api)]
197 use bumpalo::Bump;
198 
199 // Create a new bump arena.
200 let bump = Bump::new();
201 
202 // Create a `Vec` whose elements are allocated within the bump arena.
203 let mut v = Vec::new_in(&bump);
204 v.push(0);
205 v.push(1);
206 v.push(2);
207 # }
208 ```
209 
210 ### Minimum Supported Rust Version (MSRV)
211 
212 This crate is guaranteed to compile on stable Rust 1.44 and up. It might compile
213 with older versions but that may change in any new patch release.
214 
We reserve the right to increment the MSRV on minor releases; however, we will
strive to only do so deliberately and for good reasons.
217 
218  */
219 
220 #![deny(missing_debug_implementations)]
221 #![deny(missing_docs)]
222 #![no_std]
223 #![cfg_attr(
224     feature = "allocator_api",
225     feature(allocator_api, nonnull_slice_from_raw_parts)
226 )]
227 
228 #[doc(hidden)]
229 pub extern crate alloc as core_alloc;
230 
231 #[cfg(feature = "boxed")]
232 pub mod boxed;
233 #[cfg(feature = "collections")]
234 pub mod collections;
235 
236 mod alloc;
237 
238 use core::cell::Cell;
239 use core::fmt::Display;
240 use core::iter;
241 use core::marker::PhantomData;
242 use core::mem;
243 use core::ptr::{self, NonNull};
244 use core::slice;
245 use core::str;
246 use core_alloc::alloc::{alloc, dealloc, Layout};
247 #[cfg(feature = "allocator_api")]
248 use core_alloc::alloc::{AllocError, Allocator};
249 
250 /// An error returned from [`Bump::try_alloc_try_with`].
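///
/// ## Example
///
/// A sketch of inspecting which half of the operation failed:
///
/// ```
/// use bumpalo::{AllocOrInitError, Bump};
///
/// let bump = Bump::new();
///
/// // A fallible initializer that fails: the initializer's error comes back
/// // in the `Init` variant.
/// let result: Result<&mut u32, AllocOrInitError<&str>> =
///     bump.try_alloc_try_with(|| Err("nope"));
///
/// match result {
///     Err(AllocOrInitError::Init(e)) => assert_eq!(e, "nope"),
///     _ => unreachable!(),
/// }
/// ```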
251 #[derive(Clone, PartialEq, Eq, Debug)]
252 pub enum AllocOrInitError<E> {
253     /// Indicates that the initial allocation failed.
254     Alloc(alloc::AllocErr),
255     /// Indicates that the initializer failed with the contained error after
256     /// allocation.
257     ///
258     /// It is possible but not guaranteed that the allocated memory has been
259     /// released back to the allocator at this point.
260     Init(E),
261 }
262 impl<E> From<alloc::AllocErr> for AllocOrInitError<E> {
    fn from(e: alloc::AllocErr) -> Self {
264         Self::Alloc(e)
265     }
266 }
267 impl<E: Display> Display for AllocOrInitError<E> {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
269         match self {
270             AllocOrInitError::Alloc(err) => err.fmt(f),
271             AllocOrInitError::Init(err) => write!(f, "initialization failed: {}", err),
272         }
273     }
274 }
275 
276 /// An arena to bump allocate into.
277 ///
278 /// ## No `Drop`s
279 ///
280 /// Objects that are bump-allocated will never have their `Drop` implementation
281 /// called &mdash; unless you do it manually yourself. This makes it relatively
282 /// easy to leak memory or other resources.
283 ///
284 /// If you have a type which internally manages
285 ///
286 /// * an allocation from the global heap (e.g. `Vec<T>`),
287 /// * open file descriptors (e.g. `std::fs::File`), or
288 /// * any other resource that must be cleaned up (e.g. an `mmap`)
289 ///
290 /// and relies on its `Drop` implementation to clean up the internal resource,
291 /// then if you allocate that type with a `Bump`, you need to find a new way to
292 /// clean up after it yourself.
293 ///
294 /// Potential solutions are:
295 ///
/// * Using [`bumpalo::boxed::Box::new_in`] instead of [`Bump::alloc`], which
///   will drop wrapped values similarly to [`std::boxed::Box`]. Note that this
///   requires enabling the `"boxed"` Cargo feature for this crate. **This is
///   often the easiest solution.**
300 ///
301 /// * Calling [`drop_in_place`][drop_in_place] or using
302 ///   [`std::mem::ManuallyDrop`][manuallydrop] to manually drop these types.
303 ///
304 /// * Using [`bumpalo::collections::Vec`] instead of [`std::vec::Vec`].
305 ///
306 /// * Avoiding allocating these problematic types within a `Bump`.
307 ///
/// Note that not calling `Drop` is memory safe! Destructors are never
/// guaranteed to run in Rust, so you can't rely on them to enforce memory
/// safety.
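///
/// As a sketch of the manual approach from the list above (assuming the value
/// was allocated with [`Bump::alloc`]):
///
/// ```
/// use bumpalo::Bump;
///
/// let bump = Bump::new();
/// let s = bump.alloc(String::from("hello"));
///
/// unsafe {
///     // Run the `String`'s destructor by hand, freeing its heap buffer.
///     // `s` must not be used after this point.
///     core::ptr::drop_in_place(s);
/// }
/// ```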
311 ///
312 /// [drop_in_place]: https://doc.rust-lang.org/std/ptr/fn.drop_in_place.html
313 /// [manuallydrop]: https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html
314 /// [`bumpalo::collections::Vec`]: ./collections/struct.Vec.html
315 /// [`std::vec::Vec`]: https://doc.rust-lang.org/std/vec/struct.Vec.html
316 /// [`bumpalo::boxed::Box::new_in`]: ./boxed/struct.Box.html#method.new_in
317 /// [`Bump::alloc`]: ./struct.Bump.html#method.alloc
318 /// [`std::boxed::Box`]: https://doc.rust-lang.org/std/boxed/struct.Box.html
319 ///
320 /// ## Example
321 ///
322 /// ```
323 /// use bumpalo::Bump;
324 ///
325 /// // Create a new bump arena.
326 /// let bump = Bump::new();
327 ///
328 /// // Allocate values into the arena.
329 /// let forty_two = bump.alloc(42);
330 /// assert_eq!(*forty_two, 42);
331 ///
332 /// // Mutable references are returned from allocation.
333 /// let mut s = bump.alloc("bumpalo");
334 /// *s = "the bump allocator; and also is a buffalo";
335 /// ```
336 ///
337 /// ## Allocation Methods Come in Many Flavors
338 ///
339 /// There are various allocation methods on `Bump`, the simplest being
340 /// [`alloc`][Bump::alloc]. The others exist to satisfy some combination of
341 /// fallible allocation and initialization. The allocation methods are
342 /// summarized in the following table:
343 ///
344 /// <table>
345 ///   <thead>
346 ///     <tr>
347 ///       <th></th>
348 ///       <th>Infallible Allocation</th>
349 ///       <th>Fallible Allocation</th>
350 ///     </tr>
351 ///   </thead>
352 ///     <tr>
353 ///       <th>By Value</th>
354 ///       <td><a href="#method.alloc"><code>alloc</code></a></td>
355 ///       <td><a href="#method.try_alloc"><code>try_alloc</code></a></td>
356 ///     </tr>
357 ///     <tr>
358 ///       <th>Infallible Initializer Function</th>
359 ///       <td><a href="#method.alloc_with"><code>alloc_with</code></a></td>
360 ///       <td><a href="#method.try_alloc_with"><code>try_alloc_with</code></a></td>
361 ///     </tr>
362 ///     <tr>
363 ///       <th>Fallible Initializer Function</th>
364 ///       <td><a href="#method.alloc_try_with"><code>alloc_try_with</code></a></td>
365 ///       <td><a href="#method.try_alloc_try_with"><code>try_alloc_try_with</code></a></td>
366 ///     </tr>
367 ///   <tbody>
368 ///   </tbody>
369 /// </table>
370 ///
371 /// ### Fallible Allocation: The `try_alloc_` Method Prefix
372 ///
373 /// These allocation methods let you recover from out-of-memory (OOM)
/// scenarios, rather than raising a panic on OOM.
375 ///
376 /// ```
377 /// use bumpalo::Bump;
378 ///
379 /// let bump = Bump::new();
380 ///
381 /// match bump.try_alloc(MyStruct {
382 ///     // ...
383 /// }) {
384 ///     Ok(my_struct) => {
385 ///         // Allocation succeeded.
386 ///     }
387 ///     Err(e) => {
388 ///         // Out of memory.
389 ///     }
390 /// }
391 ///
392 /// struct MyStruct {
393 ///     // ...
394 /// }
395 /// ```
396 ///
397 /// ### Initializer Functions: The `_with` Method Suffix
398 ///
399 /// Calling one of the generic `…alloc(x)` methods is essentially equivalent to
400 /// the matching [`…alloc_with(|| x)`](?search=alloc_with). However if you use
401 /// `…alloc_with`, then the closure will not be invoked until after allocating
402 /// space for storing `x` on the heap.
403 ///
/// This can be useful in certain edge cases related to compiler optimizations.
/// When evaluating, for example, `bump.alloc(x)`, `x` is semantically first put
/// on the stack and then moved onto the heap. In some cases the compiler is
/// able to optimize this into constructing `x` directly on the heap; in many
/// cases, however, it is not.
///
/// The `*alloc_with` functions try to help the compiler be smarter. In most
/// cases, using for example `bump.try_alloc_with(|| x)` in release mode will be
/// enough for the compiler to realize that this optimization is valid and
/// to construct `x` directly on the heap.
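///
/// For example (a sketch; whether the copy is actually elided depends on the
/// optimizer):
///
/// ```
/// use bumpalo::Bump;
///
/// let bump = Bump::new();
///
/// // The closure runs only after space has been reserved in the arena, which
/// // gives the optimizer a chance to construct this large array directly in
/// // the arena rather than on the stack first.
/// let big = bump.alloc_with(|| [0u8; 4096]);
/// assert_eq!(big.len(), 4096);
/// ```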
414 ///
415 /// #### Warning
416 ///
/// These functions critically depend on compiler optimizations to achieve their
/// desired effect. This means that they are not effective tools when compiling
/// without optimizations.
420 ///
421 /// Even when optimizations are on, these functions do not **guarantee** that
422 /// the value is constructed on the heap. To the best of our knowledge no such
423 /// guarantee can be made in stable Rust as of 1.44.
424 ///
425 /// ### Fallible Initialization: The `_try_with` Method Suffix
426 ///
427 /// The generic [`…alloc_try_with(|| x)`](?search=_try_with) methods behave
428 /// like the purely `_with` suffixed methods explained above. However, they
429 /// allow for fallible initialization by accepting a closure that returns a
430 /// [`Result`] and will attempt to undo the initial allocation if this closure
431 /// returns [`Err`].
432 ///
433 /// #### Warning
434 ///
435 /// If the inner closure returns [`Ok`], space for the entire [`Result`] remains
/// allocated inside `self`. This can be a problem especially if the [`Err`]
/// variant is larger than the [`Ok`] variant, but even otherwise there may be
/// overhead for the [`Result`]'s discriminant.
439 ///
440 /// <p><details><summary>Undoing the allocation in the <code>Err</code> case
441 /// always fails if <code>f</code> successfully made any additional allocations
442 /// in <code>self</code>.</summary>
443 ///
/// For example, the following will always also leak the space for the [`Result`]
445 /// into this `Bump`, even though the inner reference isn't kept and the [`Err`]
446 /// payload is returned semantically by value:
447 ///
448 /// ```rust
449 /// let bump = bumpalo::Bump::new();
450 ///
451 /// let r: Result<&mut [u8; 1000], ()> = bump.alloc_try_with(|| {
452 ///     let _ = bump.alloc(0_u8);
453 ///     Err(())
454 /// });
455 ///
456 /// assert!(r.is_err());
457 /// ```
458 ///
459 ///</details></p>
460 ///
461 /// Since [`Err`] payloads are first placed on the heap and then moved to the
462 /// stack, `bump.…alloc_try_with(|| x)?` is likely to execute more slowly than
463 /// the matching `bump.…alloc(x?)` in case of initialization failure. If this
464 /// happens frequently, using the plain un-suffixed method may perform better.
465 #[derive(Debug)]
466 pub struct Bump {
467     // The current chunk we are bump allocating within.
468     current_chunk_footer: Cell<NonNull<ChunkFooter>>,
469 }
470 
471 #[repr(C)]
472 #[derive(Debug)]
473 struct ChunkFooter {
474     // Pointer to the start of this chunk allocation. This footer is always at
475     // the end of the chunk.
476     data: NonNull<u8>,
477 
478     // The layout of this chunk's allocation.
479     layout: Layout,
480 
481     // Link to the previous chunk, if any.
482     prev: Cell<Option<NonNull<ChunkFooter>>>,
483 
484     // Bump allocation finger that is always in the range `self.data..=self`.
485     ptr: Cell<NonNull<u8>>,
486 }
487 
488 impl ChunkFooter {
489     // Returns the start and length of the currently allocated region of this
490     // chunk.
    fn as_raw_parts(&self) -> (*const u8, usize) {
492         let data = self.data.as_ptr() as usize;
493         let ptr = self.ptr.get().as_ptr() as usize;
494         debug_assert!(data <= ptr);
495         debug_assert!(ptr <= self as *const _ as usize);
496         let len = self as *const _ as usize - ptr;
497         (ptr as *const u8, len)
498     }
499 }
500 
501 impl Default for Bump {
    fn default() -> Bump {
503         Bump::new()
504     }
505 }
506 
507 impl Drop for Bump {
    fn drop(&mut self) {
509         unsafe {
510             dealloc_chunk_list(Some(self.current_chunk_footer.get()));
511         }
512     }
513 }
514 
515 #[inline]
unsafe fn dealloc_chunk_list(mut footer: Option<NonNull<ChunkFooter>>) {
517     while let Some(f) = footer {
518         footer = f.as_ref().prev.get();
519         dealloc(f.as_ref().data.as_ptr(), f.as_ref().layout);
520     }
521 }
522 
// A `Bump` is safe to send between threads because nothing aliases its owned
// chunks until you start allocating from it. And once you allocate from it,
// the returned references to allocations borrow the `Bump` and therefore
// prevent sending the `Bump` across threads until the borrows end.
527 unsafe impl Send for Bump {}
528 
529 #[inline]
pub(crate) fn round_up_to(n: usize, divisor: usize) -> Option<usize> {
531     debug_assert!(divisor > 0);
532     debug_assert!(divisor.is_power_of_two());
533     Some(n.checked_add(divisor - 1)? & !(divisor - 1))
534 }
535 
536 // After this point, we try to hit page boundaries instead of powers of 2
537 const PAGE_STRATEGY_CUTOFF: usize = 0x1000;
538 
539 // We only support alignments of up to 16 bytes for iter_allocated_chunks.
540 const SUPPORTED_ITER_ALIGNMENT: usize = 16;
541 const CHUNK_ALIGN: usize = SUPPORTED_ITER_ALIGNMENT;
542 const FOOTER_SIZE: usize = mem::size_of::<ChunkFooter>();
543 
// Assert that the alignment of `ChunkFooter` is at most the supported chunk
// alignment. This will give a compile-time error if it is not the case.
545 const _FOOTER_ALIGN_ASSERTION: bool = mem::align_of::<ChunkFooter>() <= CHUNK_ALIGN;
546 const _: [(); _FOOTER_ALIGN_ASSERTION as usize] = [()];
547 
548 // Maximum typical overhead per allocation imposed by allocators.
549 const MALLOC_OVERHEAD: usize = 16;
550 
// This is the overhead from malloc, the footer, and alignment. For instance,
// if we want to request a chunk of memory that has at least X bytes usable
// for allocations (where X is aligned to CHUNK_ALIGN), then we expect that,
// after adding the footer, malloc overhead, and alignment, the chunk of
// memory the allocator actually sets aside for us is X+OVERHEAD rounded up
// to the nearest suitable size boundary.
557 const OVERHEAD: usize = (MALLOC_OVERHEAD + FOOTER_SIZE + (CHUNK_ALIGN - 1)) & !(CHUNK_ALIGN - 1);
558 
559 // Choose a relatively small default initial chunk size, since we double chunk
560 // sizes as we grow bump arenas to amortize costs of hitting the global
561 // allocator.
562 const FIRST_ALLOCATION_GOAL: usize = 1 << 9;
563 
564 // The actual size of the first allocation is going to be a bit smaller
565 // than the goal. We need to make room for the footer, and we also need
// to take the alignment into account.
567 const DEFAULT_CHUNK_SIZE_WITHOUT_FOOTER: usize = FIRST_ALLOCATION_GOAL - OVERHEAD;
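
// For example, on a typical 64-bit target `ChunkFooter` is 40 bytes (an 8-byte
// `data` pointer, a 16-byte `Layout`, and two more 8-byte pointer-sized
// fields), so OVERHEAD = (16 + 40 + 15) & !15 = 64 and the first chunk has
// 512 - 64 = 448 usable bytes. These numbers are illustrative only and depend
// on the target.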
568 
569 /// Wrapper around `Layout::from_size_align` that adds debug assertions.
570 #[inline]
unsafe fn layout_from_size_align(size: usize, align: usize) -> Layout {
572     if cfg!(debug_assertions) {
573         Layout::from_size_align(size, align).unwrap()
574     } else {
575         Layout::from_size_align_unchecked(size, align)
576     }
577 }
578 
579 #[inline(never)]
fn allocation_size_overflow<T>() -> T {
581     panic!("requested allocation size overflowed")
582 }
583 
584 impl Bump {
585     /// Construct a new arena to bump allocate into.
586     ///
587     /// ## Example
588     ///
589     /// ```
590     /// let bump = bumpalo::Bump::new();
591     /// # let _ = bump;
592     /// ```
    pub fn new() -> Bump {
594         Self::with_capacity(0)
595     }
596 
597     /// Attempt to construct a new arena to bump allocate into.
598     ///
599     /// ## Example
600     ///
601     /// ```
602     /// let bump = bumpalo::Bump::try_new();
603     /// # let _ = bump.unwrap();
604     /// ```
    pub fn try_new() -> Result<Bump, alloc::AllocErr> {
606         Bump::try_with_capacity(0)
607     }
608 
609     /// Construct a new arena with the specified byte capacity to bump allocate into.
610     ///
611     /// ## Example
612     ///
613     /// ```
614     /// let bump = bumpalo::Bump::with_capacity(100);
615     /// # let _ = bump;
616     /// ```
    pub fn with_capacity(capacity: usize) -> Bump {
618         Bump::try_with_capacity(capacity).unwrap_or_else(|_| oom())
619     }
620 
621     /// Attempt to construct a new arena with the specified byte capacity to bump allocate into.
622     ///
623     /// ## Example
624     ///
625     /// ```
626     /// let bump = bumpalo::Bump::try_with_capacity(100);
627     /// # let _ = bump.unwrap();
628     /// ```
    pub fn try_with_capacity(capacity: usize) -> Result<Self, alloc::AllocErr> {
630         let chunk_footer = Self::new_chunk(
631             None,
632             Some(unsafe { layout_from_size_align(capacity, 1) }),
633             None,
634         )
635         .ok_or(alloc::AllocErr {})?;
636 
637         Ok(Bump {
638             current_chunk_footer: Cell::new(chunk_footer),
639         })
640     }
641 
642     /// Allocate a new chunk and return its initialized footer.
643     ///
644     /// If given, `layouts` is a tuple of the current chunk size and the
645     /// layout of the allocation request that triggered us to fall back to
646     /// allocating a new chunk of memory.
    fn new_chunk(
648         new_size_without_footer: Option<usize>,
649         requested_layout: Option<Layout>,
650         prev: Option<NonNull<ChunkFooter>>,
651     ) -> Option<NonNull<ChunkFooter>> {
652         unsafe {
653             let mut new_size_without_footer =
654                 new_size_without_footer.unwrap_or(DEFAULT_CHUNK_SIZE_WITHOUT_FOOTER);
655 
656             // We want to have CHUNK_ALIGN or better alignment
657             let mut align = CHUNK_ALIGN;
658 
659             // If we already know we need to fulfill some request,
660             // make sure we allocate at least enough to satisfy it
661             if let Some(requested_layout) = requested_layout {
662                 align = align.max(requested_layout.align());
663                 let requested_size = round_up_to(requested_layout.size(), align)
664                     .unwrap_or_else(allocation_size_overflow);
665                 new_size_without_footer = new_size_without_footer.max(requested_size);
666             }
667 
668             // We want our allocations to play nice with the memory allocator,
669             // and waste as little memory as possible.
670             // For small allocations, this means that the entire allocation
            // including the chunk footer and malloc's internal overhead is
672             // as close to a power of two as we can go without going over.
673             // For larger allocations, we only need to get close to a page
674             // boundary without going over.
675             if new_size_without_footer < PAGE_STRATEGY_CUTOFF {
676                 new_size_without_footer =
677                     (new_size_without_footer + OVERHEAD).next_power_of_two() - OVERHEAD;
678             } else {
679                 new_size_without_footer =
680                     round_up_to(new_size_without_footer + OVERHEAD, 0x1000)? - OVERHEAD;
681             }
682 
683             debug_assert_eq!(align % CHUNK_ALIGN, 0);
684             debug_assert_eq!(new_size_without_footer % CHUNK_ALIGN, 0);
685             let size = new_size_without_footer
686                 .checked_add(FOOTER_SIZE)
687                 .unwrap_or_else(allocation_size_overflow);
688             let layout = layout_from_size_align(size, align);
689 
690             debug_assert!(requested_layout.map_or(true, |layout| size >= layout.size()));
691 
692             let data = alloc(layout);
693             let data = NonNull::new(data)?;
694 
695             // The `ChunkFooter` is at the end of the chunk.
696             let footer_ptr = data.as_ptr() as usize + new_size_without_footer;
697             debug_assert_eq!((data.as_ptr() as usize) % align, 0);
698             debug_assert_eq!(footer_ptr % CHUNK_ALIGN, 0);
699             let footer_ptr = footer_ptr as *mut ChunkFooter;
700 
701             // The bump pointer is initialized to the end of the range we will
702             // bump out of.
703             let ptr = Cell::new(NonNull::new_unchecked(footer_ptr as *mut u8));
704 
705             ptr::write(
706                 footer_ptr,
707                 ChunkFooter {
708                     data,
709                     layout,
710                     prev: Cell::new(prev),
711                     ptr,
712                 },
713             );
714 
715             Some(NonNull::new_unchecked(footer_ptr))
716         }
717     }
718 
719     /// Reset this bump allocator.
720     ///
721     /// Performs mass deallocation on everything allocated in this arena by
722     /// resetting the pointer into the underlying chunk of memory to the start
723     /// of the chunk. Does not run any `Drop` implementations on deallocated
724     /// objects; see [the `Bump` type's top-level
725     /// documentation](./struct.Bump.html) for details.
726     ///
727     /// If this arena has allocated multiple chunks to bump allocate into, then
728     /// the excess chunks are returned to the global allocator.
729     ///
730     /// ## Example
731     ///
732     /// ```
733     /// let mut bump = bumpalo::Bump::new();
734     ///
735     /// // Allocate a bunch of things.
736     /// {
737     ///     for i in 0..100 {
738     ///         bump.alloc(i);
739     ///     }
740     /// }
741     ///
742     /// // Reset the arena.
743     /// bump.reset();
744     ///
745     /// // Allocate some new things in the space previously occupied by the
746     /// // original things.
747     /// for j in 200..400 {
748     ///     bump.alloc(j);
749     /// }
750     ///```
    pub fn reset(&mut self) {
752         // Takes `&mut self` so `self` must be unique and there can't be any
753         // borrows active that would get invalidated by resetting.
754         unsafe {
755             let cur_chunk = self.current_chunk_footer.get();
756 
757             // Deallocate all chunks except the current one
758             let prev_chunk = cur_chunk.as_ref().prev.replace(None);
759             dealloc_chunk_list(prev_chunk);
760 
761             // Reset the bump finger to the end of the chunk.
762             cur_chunk.as_ref().ptr.set(cur_chunk.cast());
763 
764             debug_assert!(
765                 self.current_chunk_footer
766                     .get()
767                     .as_ref()
768                     .prev
769                     .get()
770                     .is_none(),
771                 "We should only have a single chunk"
772             );
773             debug_assert_eq!(
774                 self.current_chunk_footer.get().as_ref().ptr.get(),
775                 self.current_chunk_footer.get().cast(),
776                 "Our chunk's bump finger should be reset to the start of its allocation"
777             );
778         }
779     }
780 
781     /// Allocate an object in this `Bump` and return an exclusive reference to
782     /// it.
783     ///
784     /// ## Panics
785     ///
786     /// Panics if reserving space for `T` fails.
787     ///
788     /// ## Example
789     ///
790     /// ```
791     /// let bump = bumpalo::Bump::new();
792     /// let x = bump.alloc("hello");
793     /// assert_eq!(*x, "hello");
794     /// ```
795     #[inline(always)]
796     #[allow(clippy::mut_from_ref)]
    pub fn alloc<T>(&self, val: T) -> &mut T {
798         self.alloc_with(|| val)
799     }
800 
801     /// Try to allocate an object in this `Bump` and return an exclusive
802     /// reference to it.
803     ///
804     /// ## Errors
805     ///
806     /// Errors if reserving space for `T` fails.
807     ///
808     /// ## Example
809     ///
810     /// ```
811     /// let bump = bumpalo::Bump::new();
812     /// let x = bump.try_alloc("hello");
    /// assert_eq!(x, Ok(&mut "hello"));
814     /// ```
815     #[inline(always)]
816     #[allow(clippy::mut_from_ref)]
    pub fn try_alloc<T>(&self, val: T) -> Result<&mut T, alloc::AllocErr> {
818         self.try_alloc_with(|| val)
819     }
820 
    /// Pre-allocates space for an object in this `Bump`, initializes it using
822     /// the closure, then returns an exclusive reference to it.
823     ///
824     /// See [The `_with` Method Suffix](#the-_with-method-suffix) for a
825     /// discussion on the differences between the `_with` suffixed methods and
826     /// those methods without it, their performance characteristics, and when
827     /// you might or might not choose a `_with` suffixed method.
828     ///
829     /// ## Panics
830     ///
831     /// Panics if reserving space for `T` fails.
832     ///
833     /// ## Example
834     ///
835     /// ```
836     /// let bump = bumpalo::Bump::new();
837     /// let x = bump.alloc_with(|| "hello");
838     /// assert_eq!(*x, "hello");
839     /// ```
840     #[inline(always)]
841     #[allow(clippy::mut_from_ref)]
    pub fn alloc_with<F, T>(&self, f: F) -> &mut T
843     where
844         F: FnOnce() -> T,
845     {
846         #[inline(always)]
847         unsafe fn inner_writer<T, F>(ptr: *mut T, f: F)
848         where
849             F: FnOnce() -> T,
850         {
851             // This function is translated as:
852             // - allocate space for a T on the stack
853             // - call f() with the return value being put onto this stack space
854             // - memcpy from the stack to the heap
855             //
856             // Ideally we want LLVM to always realize that doing a stack
857             // allocation is unnecessary and optimize the code so it writes
858             // directly into the heap instead. It seems we get it to realize
            // this most consistently if we put this critical line into its
860             // own function instead of inlining it into the surrounding code.
861             ptr::write(ptr, f())
862         }
863 
864         let layout = Layout::new::<T>();
865 
866         unsafe {
867             let p = self.alloc_layout(layout);
868             let p = p.as_ptr() as *mut T;
869             inner_writer(p, f);
870             &mut *p
871         }
872     }
873 
874     /// Tries to pre-allocate space for an object in this `Bump`, initializes
875     /// it using the closure, then returns an exclusive reference to it.
876     ///
877     /// See [The `_with` Method Suffix](#the-_with-method-suffix) for a
878     /// discussion on the differences between the `_with` suffixed methods and
879     /// those methods without it, their performance characteristics, and when
880     /// you might or might not choose a `_with` suffixed method.
881     ///
882     /// ## Errors
883     ///
884     /// Errors if reserving space for `T` fails.
885     ///
886     /// ## Example
887     ///
888     /// ```
889     /// let bump = bumpalo::Bump::new();
890     /// let x = bump.try_alloc_with(|| "hello");
891     /// assert_eq!(x, Ok(&mut "hello"));
892     /// ```
893     #[inline(always)]
894     #[allow(clippy::mut_from_ref)]
    pub fn try_alloc_with<F, T>(&self, f: F) -> Result<&mut T, alloc::AllocErr>
896     where
897         F: FnOnce() -> T,
898     {
899         #[inline(always)]
900         unsafe fn inner_writer<T, F>(ptr: *mut T, f: F)
901         where
902             F: FnOnce() -> T,
903         {
904             // This function is translated as:
905             // - allocate space for a T on the stack
906             // - call f() with the return value being put onto this stack space
907             // - memcpy from the stack to the heap
908             //
909             // Ideally we want LLVM to always realize that doing a stack
910             // allocation is unnecessary and optimize the code so it writes
911             // directly into the heap instead. It seems we get it to realize
            // this most consistently if we put this critical line into its
913             // own function instead of inlining it into the surrounding code.
914             ptr::write(ptr, f())
915         }
916 
917         //SAFETY: Self-contained:
918         // `p` is allocated for `T` and then a `T` is written.
919         let layout = Layout::new::<T>();
920         let p = self.try_alloc_layout(layout)?;
921         let p = p.as_ptr() as *mut T;
922 
923         unsafe {
924             inner_writer(p, f);
925             Ok(&mut *p)
926         }
927     }
928 
929     /// Pre-allocates space for a [`Result`] in this `Bump`, initializes it using
930     /// the closure, then returns an exclusive reference to its `T` if [`Ok`].
931     ///
932     /// Iff the allocation fails, the closure is not run.
933     ///
934     /// Iff [`Err`], an allocator rewind is *attempted* and the `E` instance is
935     /// moved out of the allocator to be consumed or dropped as normal.
936     ///
937     /// See [The `_with` Method Suffix](#the-_with-method-suffix) for a
938     /// discussion on the differences between the `_with` suffixed methods and
939     /// those methods without it, their performance characteristics, and when
940     /// you might or might not choose a `_with` suffixed method.
941     ///
942     /// For caveats specific to fallible initialization, see
943     /// [The `_try_with` Method Suffix](#the-_try_with-method-suffix).
944     ///
945     /// ## Errors
946     ///
947     /// Iff the allocation succeeds but `f` fails, that error is forwarded by value.
948     ///
949     /// ## Panics
950     ///
951     /// Panics if reserving space for `Result<T, E>` fails.
952     ///
953     /// ## Example
954     ///
955     /// ```
956     /// let bump = bumpalo::Bump::new();
957     /// let x = bump.alloc_try_with(|| Ok("hello"))?;
958     /// assert_eq!(*x, "hello");
959     /// # Result::<_, ()>::Ok(())
960     /// ```
961     #[inline(always)]
962     #[allow(clippy::mut_from_ref)]
    pub fn alloc_try_with<F, T, E>(&self, f: F) -> Result<&mut T, E>
964     where
965         F: FnOnce() -> Result<T, E>,
966     {
967         let rewind_footer = self.current_chunk_footer.get();
968         let rewind_ptr = unsafe { rewind_footer.as_ref() }.ptr.get();
969         let mut inner_result_ptr = NonNull::from(self.alloc_with(f));
970         let inner_result_address = inner_result_ptr.as_ptr() as usize;
971         match unsafe { inner_result_ptr.as_mut() } {
972             Ok(t) => Ok(unsafe {
973                 //SAFETY:
974                 // The `&mut Result<T, E>` returned by `alloc_with` may be
975                 // lifetime-limited by `E`, but the derived `&mut T` still has
976                 // the same validity as in `alloc_with` since the error variant
977                 // is already ruled out here.
978 
979                 // We could conditionally truncate the allocation here, but
980                 // since it grows backwards, it seems unlikely that we'd get
981                 // any more than the `Result`'s discriminant this way, if
982                 // anything at all.
983                 &mut *(t as *mut _)
984             }),
985             Err(e) => unsafe {
986                 // If this result was the last allocation in this arena, we can
987                 // reclaim its space. In fact, sometimes we can do even better
988                 // than simply calling `dealloc` on the result pointer: we can
989                 // reclaim any alignment padding we might have added (which
990                 // `dealloc` cannot do) if we didn't allocate a new chunk for
991                 // this result.
992                 if self.is_last_allocation(NonNull::new_unchecked(inner_result_address as *mut _)) {
993                     let current_footer_p = self.current_chunk_footer.get();
994                     let current_ptr = &current_footer_p.as_ref().ptr;
995                     if current_footer_p == rewind_footer {
996                         // It's still the same chunk, so reset the bump pointer
997                         // to its original value upon entry to this method
998                         // (reclaiming any alignment padding we may have
999                         // added).
1000                         current_ptr.set(rewind_ptr);
1001                     } else {
1002                         // We allocated a new chunk for this result.
1003                         //
1004                         // We know the result is the only allocation in this
1005                         // chunk: Any additional allocations since the start of
1006                         // this method could only have happened when running
1007                         // the initializer function, which is called *after*
1008                         // reserving space for this result. Therefore, since we
1009                         // already determined via the check above that this
1010                         // result was the last allocation, there must not have
1011                         // been any other allocations, and this result is the
1012                         // only allocation in this chunk.
1013                         //
1014                         // Because this is the only allocation in this chunk,
1015                         // we can reset the chunk's bump finger to the start of
1016                         // the chunk.
1017                         current_ptr.set(current_footer_p.as_ref().data);
1018                     }
1019                 }
1020                 //SAFETY:
1021                 // As we received `E` semantically by value from `f`, we can
1022                 // just copy that value here as long as we avoid a double-drop
1023                 // (which can't happen as any specific references to the `E`'s
1024                 // data in `self` are destroyed when this function returns).
1025                 //
1026                 // The order between this and the deallocation doesn't matter
1027                 // because `Self: !Sync`.
1028                 Err(ptr::read(e as *const _))
1029             },
1030         }
1031     }
1032 
    /// Tries to pre-allocate space for a [`Result`] in this `Bump`,
    /// initializes it using the closure, then returns an exclusive reference
    /// to its `T` if all goes [`Ok`].
1036     ///
1037     /// Iff the allocation fails, the closure is not run.
1038     ///
1039     /// Iff the closure returns [`Err`], an allocator rewind is *attempted* and
1040     /// the `E` instance is moved out of the allocator to be consumed or dropped
1041     /// as normal.
1042     ///
1043     /// See [The `_with` Method Suffix](#the-_with-method-suffix) for a
1044     /// discussion on the differences between the `_with` suffixed methods and
1045     /// those methods without it, their performance characteristics, and when
1046     /// you might or might not choose a `_with` suffixed method.
1047     ///
1048     /// For caveats specific to fallible initialization, see
1049     /// [The `_try_with` Method Suffix](#the-_try_with-method-suffix).
1050     ///
1051     /// ## Errors
1052     ///
1053     /// Errors with the [`Alloc`](`AllocOrInitError::Alloc`) variant iff
1054     /// reserving space for `Result<T, E>` fails.
1055     ///
1056     /// Iff the allocation succeeds but `f` fails, that error is forwarded by
1057     /// value inside the [`Init`](`AllocOrInitError::Init`) variant.
1058     ///
1059     /// ## Example
1060     ///
1061     /// ```
1062     /// let bump = bumpalo::Bump::new();
1063     /// let x = bump.try_alloc_try_with(|| Ok("hello"))?;
1064     /// assert_eq!(*x, "hello");
1065     /// # Result::<_, bumpalo::AllocOrInitError<()>>::Ok(())
1066     /// ```
1067     #[inline(always)]
1068     #[allow(clippy::mut_from_ref)]
    pub fn try_alloc_try_with<F, T, E>(&self, f: F) -> Result<&mut T, AllocOrInitError<E>>
1070     where
1071         F: FnOnce() -> Result<T, E>,
1072     {
1073         let rewind_footer = self.current_chunk_footer.get();
1074         let rewind_ptr = unsafe { rewind_footer.as_ref() }.ptr.get();
1075         let mut inner_result_ptr = NonNull::from(self.try_alloc_with(f)?);
1076         let inner_result_address = inner_result_ptr.as_ptr() as usize;
1077         match unsafe { inner_result_ptr.as_mut() } {
1078             Ok(t) => Ok(unsafe {
1079                 //SAFETY:
1080                 // The `&mut Result<T, E>` returned by `alloc_with` may be
1081                 // lifetime-limited by `E`, but the derived `&mut T` still has
1082                 // the same validity as in `alloc_with` since the error variant
1083                 // is already ruled out here.
1084 
1085                 // We could conditionally truncate the allocation here, but
1086                 // since it grows backwards, it seems unlikely that we'd get
1087                 // any more than the `Result`'s discriminant this way, if
1088                 // anything at all.
1089                 &mut *(t as *mut _)
1090             }),
1091             Err(e) => unsafe {
1092                 // If this result was the last allocation in this arena, we can
1093                 // reclaim its space. In fact, sometimes we can do even better
1094                 // than simply calling `dealloc` on the result pointer: we can
1095                 // reclaim any alignment padding we might have added (which
1096                 // `dealloc` cannot do) if we didn't allocate a new chunk for
1097                 // this result.
1098                 if self.is_last_allocation(NonNull::new_unchecked(inner_result_address as *mut _)) {
1099                     let current_footer_p = self.current_chunk_footer.get();
1100                     let current_ptr = &current_footer_p.as_ref().ptr;
1101                     if current_footer_p == rewind_footer {
1102                         // It's still the same chunk, so reset the bump pointer
1103                         // to its original value upon entry to this method
1104                         // (reclaiming any alignment padding we may have
1105                         // added).
1106                         current_ptr.set(rewind_ptr);
1107                     } else {
1108                         // We allocated a new chunk for this result.
1109                         //
1110                         // We know the result is the only allocation in this
1111                         // chunk: Any additional allocations since the start of
1112                         // this method could only have happened when running
1113                         // the initializer function, which is called *after*
1114                         // reserving space for this result. Therefore, since we
1115                         // already determined via the check above that this
1116                         // result was the last allocation, there must not have
1117                         // been any other allocations, and this result is the
1118                         // only allocation in this chunk.
1119                         //
1120                         // Because this is the only allocation in this chunk,
1121                         // we can reset the chunk's bump finger to the start of
1122                         // the chunk.
1123                         current_ptr.set(current_footer_p.as_ref().data);
1124                     }
1125                 }
1126                 //SAFETY:
1127                 // As we received `E` semantically by value from `f`, we can
1128                 // just copy that value here as long as we avoid a double-drop
1129                 // (which can't happen as any specific references to the `E`'s
1130                 // data in `self` are destroyed when this function returns).
1131                 //
1132                 // The order between this and the deallocation doesn't matter
1133                 // because `Self: !Sync`.
1134                 Err(AllocOrInitError::Init(ptr::read(e as *const _)))
1135             },
1136         }
1137     }
1138 
1139     /// `Copy` a slice into this `Bump` and return an exclusive reference to
1140     /// the copy.
1141     ///
1142     /// ## Panics
1143     ///
1144     /// Panics if reserving space for the slice fails.
1145     ///
1146     /// ## Example
1147     ///
1148     /// ```
1149     /// let bump = bumpalo::Bump::new();
1150     /// let x = bump.alloc_slice_copy(&[1, 2, 3]);
1151     /// assert_eq!(x, &[1, 2, 3]);
1152     /// ```
1153     #[inline(always)]
1154     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_copy<T>(&self, src: &[T]) -> &mut [T]
1156     where
1157         T: Copy,
1158     {
1159         let layout = Layout::for_value(src);
1160         let dst = self.alloc_layout(layout).cast::<T>();
1161 
1162         unsafe {
1163             ptr::copy_nonoverlapping(src.as_ptr(), dst.as_ptr(), src.len());
1164             slice::from_raw_parts_mut(dst.as_ptr(), src.len())
1165         }
1166     }
1167 
1168     /// `Clone` a slice into this `Bump` and return an exclusive reference to
1169     /// the clone. Prefer `alloc_slice_copy` if `T` is `Copy`.
1170     ///
1171     /// ## Panics
1172     ///
1173     /// Panics if reserving space for the slice fails.
1174     ///
1175     /// ## Example
1176     ///
1177     /// ```
1178     /// #[derive(Clone, Debug, Eq, PartialEq)]
1179     /// struct Sheep {
1180     ///     name: String,
1181     /// }
1182     ///
1183     /// let originals = vec![
1184     ///     Sheep { name: "Alice".into() },
1185     ///     Sheep { name: "Bob".into() },
1186     ///     Sheep { name: "Cathy".into() },
1187     /// ];
1188     ///
1189     /// let bump = bumpalo::Bump::new();
1190     /// let clones = bump.alloc_slice_clone(&originals);
1191     /// assert_eq!(originals, clones);
1192     /// ```
1193     #[inline(always)]
1194     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_clone<T>(&self, src: &[T]) -> &mut [T]
1196     where
1197         T: Clone,
1198     {
1199         let layout = Layout::for_value(src);
1200         let dst = self.alloc_layout(layout).cast::<T>();
1201 
1202         unsafe {
1203             for (i, val) in src.iter().cloned().enumerate() {
1204                 ptr::write(dst.as_ptr().add(i), val);
1205             }
1206 
1207             slice::from_raw_parts_mut(dst.as_ptr(), src.len())
1208         }
1209     }
1210 
1211     /// `Copy` a string slice into this `Bump` and return an exclusive reference to it.
1212     ///
1213     /// ## Panics
1214     ///
1215     /// Panics if reserving space for the string fails.
1216     ///
1217     /// ## Example
1218     ///
1219     /// ```
1220     /// let bump = bumpalo::Bump::new();
1221     /// let hello = bump.alloc_str("hello world");
1222     /// assert_eq!("hello world", hello);
1223     /// ```
1224     #[inline(always)]
1225     #[allow(clippy::mut_from_ref)]
    pub fn alloc_str(&self, src: &str) -> &mut str {
1227         let buffer = self.alloc_slice_copy(src.as_bytes());
1228         unsafe {
1229             // This is OK, because it already came in as str, so it is guaranteed to be utf8
1230             str::from_utf8_unchecked_mut(buffer)
1231         }
1232     }
1233 
    /// Allocates a new slice of size `len` into this `Bump` and returns an
    /// exclusive reference to it.
1236     ///
1237     /// The elements of the slice are initialized using the supplied closure.
1238     /// The closure argument is the position in the slice.
1239     ///
1240     /// ## Panics
1241     ///
1242     /// Panics if reserving space for the slice fails.
1243     ///
1244     /// ## Example
1245     ///
1246     /// ```
1247     /// let bump = bumpalo::Bump::new();
1248     /// let x = bump.alloc_slice_fill_with(5, |i| 5*(i+1));
1249     /// assert_eq!(x, &[5, 10, 15, 20, 25]);
1250     /// ```
1251     #[inline(always)]
1252     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_with<T, F>(&self, len: usize, mut f: F) -> &mut [T]
1254     where
1255         F: FnMut(usize) -> T,
1256     {
1257         let layout = Layout::array::<T>(len).unwrap_or_else(|_| oom());
1258         let dst = self.alloc_layout(layout).cast::<T>();
1259 
1260         unsafe {
1261             for i in 0..len {
1262                 ptr::write(dst.as_ptr().add(i), f(i));
1263             }
1264 
1265             let result = slice::from_raw_parts_mut(dst.as_ptr(), len);
1266             debug_assert_eq!(Layout::for_value(result), layout);
1267             result
1268         }
1269     }
1270 
    /// Allocates a new slice of size `len` into this `Bump` and returns an
    /// exclusive reference to it.
1273     ///
1274     /// All elements of the slice are initialized to `value`.
1275     ///
1276     /// ## Panics
1277     ///
1278     /// Panics if reserving space for the slice fails.
1279     ///
1280     /// ## Example
1281     ///
1282     /// ```
1283     /// let bump = bumpalo::Bump::new();
1284     /// let x = bump.alloc_slice_fill_copy(5, 42);
1285     /// assert_eq!(x, &[42, 42, 42, 42, 42]);
1286     /// ```
1287     #[inline(always)]
1288     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_copy<T: Copy>(&self, len: usize, value: T) -> &mut [T] {
1290         self.alloc_slice_fill_with(len, |_| value)
1291     }
1292 
    /// Allocates a new slice of size `len` into this `Bump` and returns an
    /// exclusive reference to it.
1295     ///
1296     /// All elements of the slice are initialized to `value.clone()`.
1297     ///
1298     /// ## Panics
1299     ///
1300     /// Panics if reserving space for the slice fails.
1301     ///
1302     /// ## Example
1303     ///
1304     /// ```
1305     /// let bump = bumpalo::Bump::new();
1306     /// let s: String = "Hello Bump!".to_string();
1307     /// let x: &[String] = bump.alloc_slice_fill_clone(2, &s);
1308     /// assert_eq!(x.len(), 2);
1309     /// assert_eq!(&x[0], &s);
1310     /// assert_eq!(&x[1], &s);
1311     /// ```
1312     #[inline(always)]
1313     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_clone<T: Clone>(&self, len: usize, value: &T) -> &mut [T] {
1315         self.alloc_slice_fill_with(len, |_| value.clone())
1316     }
1317 
    /// Allocates a new slice of size `len` into this `Bump` and returns an
    /// exclusive reference to it.
1320     ///
1321     /// The elements are initialized using the supplied iterator.
1322     ///
1323     /// ## Panics
1324     ///
1325     /// Panics if reserving space for the slice fails, or if the supplied
1326     /// iterator returns fewer elements than it promised.
1327     ///
1328     /// ## Example
1329     ///
1330     /// ```
1331     /// let bump = bumpalo::Bump::new();
1332     /// let x: &[i32] = bump.alloc_slice_fill_iter([2, 3, 5].iter().cloned().map(|i| i * i));
1333     /// assert_eq!(x, [4, 9, 25]);
1334     /// ```
1335     #[inline(always)]
1336     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_iter<T, I>(&self, iter: I) -> &mut [T]
1338     where
1339         I: IntoIterator<Item = T>,
1340         I::IntoIter: ExactSizeIterator,
1341     {
1342         let mut iter = iter.into_iter();
1343         self.alloc_slice_fill_with(iter.len(), |_| {
1344             iter.next().expect("Iterator supplied too few elements")
1345         })
1346     }
1347 
    /// Allocates a new slice of size `len` into this `Bump` and returns an
    /// exclusive reference to it.
1350     ///
1351     /// All elements of the slice are initialized to `T::default()`.
1352     ///
1353     /// ## Panics
1354     ///
1355     /// Panics if reserving space for the slice fails.
1356     ///
1357     /// ## Example
1358     ///
1359     /// ```
1360     /// let bump = bumpalo::Bump::new();
1361     /// let x = bump.alloc_slice_fill_default::<u32>(5);
1362     /// assert_eq!(x, &[0, 0, 0, 0, 0]);
1363     /// ```
1364     #[inline(always)]
1365     #[allow(clippy::mut_from_ref)]
    pub fn alloc_slice_fill_default<T: Default>(&self, len: usize) -> &mut [T] {
1367         self.alloc_slice_fill_with(len, |_| T::default())
1368     }
1369 
1370     /// Allocate space for an object with the given `Layout`.
1371     ///
1372     /// The returned pointer points at uninitialized memory, and should be
1373     /// initialized with
1374     /// [`std::ptr::write`](https://doc.rust-lang.org/std/ptr/fn.write.html).
1375     ///
1376     /// # Panics
1377     ///
1378     /// Panics if reserving space matching `layout` fails.
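    ///
    /// ## Example
    ///
    /// A minimal sketch of initializing the returned memory with
    /// `std::ptr::write` (the concrete type and value here are illustrative):
    ///
    /// ```
    /// use std::alloc::Layout;
    ///
    /// let bump = bumpalo::Bump::new();
    /// let layout = Layout::new::<u64>();
    ///
    /// // `alloc_layout` hands back uninitialized, suitably aligned memory.
    /// let p = bump.alloc_layout(layout).cast::<u64>();
    ///
    /// unsafe {
    ///     // Write a value before reading it back.
    ///     std::ptr::write(p.as_ptr(), 42);
    ///     assert_eq!(*p.as_ptr(), 42);
    /// }
    /// ```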
    #[inline(always)]
    pub fn alloc_layout(&self, layout: Layout) -> NonNull<u8> {
        self.try_alloc_layout(layout).unwrap_or_else(|_| oom())
    }

    /// Attempts to allocate space for an object with the given `Layout` or else returns
    /// an `Err`.
    ///
    /// The returned pointer points at uninitialized memory, and should be
    /// initialized with
    /// [`std::ptr::write`](https://doc.rust-lang.org/std/ptr/fn.write.html).
    ///
    /// # Errors
    ///
    /// Errors if reserving space matching `layout` fails.
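    ///
    /// ## Example
    ///
    /// A small sketch of handling the error case explicitly instead of
    /// panicking on allocation failure:
    ///
    /// ```
    /// use std::alloc::Layout;
    ///
    /// let bump = bumpalo::Bump::new();
    /// match bump.try_alloc_layout(Layout::new::<u64>()) {
    ///     // The returned memory is uninitialized but suitably aligned;
    ///     // initialize it with `std::ptr::write` before use.
    ///     Ok(p) => assert_eq!(p.as_ptr() as usize % core::mem::align_of::<u64>(), 0),
    ///     // Reserving a new chunk from the global allocator failed.
    ///     Err(_) => {}
    /// }
    /// ```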
    #[inline(always)]
    pub fn try_alloc_layout(&self, layout: Layout) -> Result<NonNull<u8>, alloc::AllocErr> {
        if let Some(p) = self.try_alloc_layout_fast(layout) {
            Ok(p)
        } else {
            self.alloc_layout_slow(layout).ok_or(alloc::AllocErr {})
        }
    }

    #[inline(always)]
    fn try_alloc_layout_fast(&self, layout: Layout) -> Option<NonNull<u8>> {
        // We don't need to check for ZSTs here since they will automatically
        // be handled properly: the pointer will be bumped by zero bytes,
        // modulo alignment. This keeps the fast path optimized for non-ZSTs,
        // which are much more common.
        unsafe {
            let footer = self.current_chunk_footer.get();
            let footer = footer.as_ref();
            let ptr = footer.ptr.get().as_ptr() as usize;
            let start = footer.data.as_ptr() as usize;
            debug_assert!(start <= ptr);
            debug_assert!(ptr <= footer as *const _ as usize);

            let ptr = ptr.checked_sub(layout.size())?;
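            // This arena bumps downward: subtract the size first, then round
            // the result down to `layout.align()` (a power of two), which
            // aligns the pointer without a separate padding calculation.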
            let aligned_ptr = ptr & !(layout.align() - 1);

            if aligned_ptr >= start {
                let aligned_ptr = NonNull::new_unchecked(aligned_ptr as *mut u8);
                footer.ptr.set(aligned_ptr);
                Some(aligned_ptr)
            } else {
                None
            }
        }
    }

    /// Gets the remaining capacity in the current chunk (in bytes).
    ///
    /// ## Example
    ///
    /// ```
    /// use bumpalo::Bump;
    ///
    /// let bump = Bump::with_capacity(100);
    ///
    /// let capacity = bump.chunk_capacity();
    /// assert!(capacity >= 100);
    /// ```
    pub fn chunk_capacity(&self) -> usize {
        let current_footer = self.current_chunk_footer.get();
        let current_footer = unsafe { current_footer.as_ref() };

        current_footer as *const _ as usize - current_footer.data.as_ptr() as usize
    }

    /// Slow path allocation for when we need to allocate a new chunk from the
    /// parent bump set because there isn't enough room in our current chunk.
    #[inline(never)]
    fn alloc_layout_slow(&self, layout: Layout) -> Option<NonNull<u8>> {
        unsafe {
            let size = layout.size();

            // Get a new chunk from the global allocator.
            let current_footer = self.current_chunk_footer.get();
            let current_layout = current_footer.as_ref().layout;

            // By default, we want our new chunk to be about twice as big
            // as the previous chunk. If the global allocator refuses it,
            // we keep halving that candidate size until an allocation
            // succeeds or the candidate drops below the minimum chunk size
            // we need (the larger of the requested size and the default
            // chunk size).
            let min_new_chunk_size = layout.size().max(DEFAULT_CHUNK_SIZE_WITHOUT_FOOTER);
            let mut base_size = (current_layout.size() - FOOTER_SIZE)
                .checked_mul(2)?
                .max(min_new_chunk_size);
            let sizes = iter::from_fn(|| {
                if base_size >= min_new_chunk_size {
                    let size = base_size;
                    base_size = base_size / 2;
                    Some(size)
                } else {
                    None
                }
            });

            let new_footer = sizes
                .filter_map(|size| Bump::new_chunk(Some(size), Some(layout), Some(current_footer)))
                .next()?;

            debug_assert_eq!(
                new_footer.as_ref().data.as_ptr() as usize % layout.align(),
                0
            );

            // Set the new chunk as our new current chunk.
            self.current_chunk_footer.set(new_footer);

            let new_footer = new_footer.as_ref();

            // Move the bump ptr finger down to allocate room for `val`. We know
            // this can't overflow because we successfully allocated a chunk of
            // at least the requested size.
            let ptr = new_footer.ptr.get().as_ptr() as usize - size;
            // Round the pointer down to the requested alignment.
            let ptr = ptr & !(layout.align() - 1);
            debug_assert!(
                ptr <= new_footer as *const _ as usize,
                "{:#x} <= {:#x}",
                ptr,
                new_footer as *const _ as usize
            );
            let ptr = NonNull::new_unchecked(ptr as *mut u8);
            new_footer.ptr.set(ptr);

            // Return a pointer to the freshly allocated region in this chunk.
            Some(ptr)
        }
    }

    /// Returns an iterator over each chunk of allocated memory that
    /// this arena has bump allocated into.
    ///
    /// The chunks are returned ordered by allocation time, with the most
    /// recently allocated chunk being returned first, and the least recently
    /// allocated chunk being returned last.
    ///
    /// The values inside each chunk are also ordered by allocation time, with
    /// the most recent allocation being earlier in the slice, and the least
    /// recent allocation being towards the end of the slice.
    ///
    /// ## Safety
    ///
    /// Because this method takes `&mut self`, we know that the bump arena
    /// reference is unique and therefore there aren't any active references to
    /// any of the objects we've allocated in it either. This potential aliasing
    /// of exclusive references is one common footgun for unsafe code that we
    /// don't need to worry about here.
    ///
    /// However, there could be regions of uninitialized memory used as padding
    /// between allocations, which is why this iterator has items of type
    /// `[MaybeUninit<u8>]`, instead of simply `[u8]`.
    ///
    /// The only way to guarantee that there is no padding between allocations
    /// or within allocated objects is if all of these properties hold:
    ///
    /// 1. Every object allocated in this arena has the same alignment,
    ///    and that alignment is at most 16.
    /// 2. Every object's size is a multiple of its alignment.
    /// 3. None of the objects allocated in this arena contain any internal
    ///    padding.
    ///
    /// If you want to use this `iter_allocated_chunks` method, it is *your*
    /// responsibility to ensure that these properties hold before calling
    /// `MaybeUninit::assume_init` or otherwise reading the returned values.
    ///
    /// Finally, you must also ensure that any values allocated into the bump
    /// arena have not had their `Drop` implementations called on them,
    /// e.g. after dropping a [`bumpalo::boxed::Box<T>`][crate::boxed::Box].
    ///
    /// ## Example
    ///
    /// ```
    /// let mut bump = bumpalo::Bump::new();
    ///
    /// // Allocate a bunch of `i32`s in this bump arena, potentially causing
    /// // additional memory chunks to be reserved.
    /// for i in 0..10000 {
    ///     bump.alloc(i);
    /// }
    ///
    /// // Iterate over each chunk we've bump allocated into. This is safe
    /// // because we have only allocated `i32`s in this arena, which fulfills
    /// // the above requirements.
    /// for ch in bump.iter_allocated_chunks() {
    ///     println!("Used a chunk that is {} bytes long", ch.len());
    ///     println!("The first byte is {:?}", unsafe {
    ///         ch.get(0).unwrap().assume_init()
    ///     });
    /// }
    ///
    /// // Within a chunk, allocations are ordered from most recent to least
    /// // recent. If we allocated 'a', then 'b', then 'c', when we iterate
    /// // through the chunk's data, we get them in the order 'c', then 'b',
    /// // then 'a'.
    ///
    /// bump.reset();
    /// bump.alloc(b'a');
    /// bump.alloc(b'b');
    /// bump.alloc(b'c');
    ///
    /// assert_eq!(bump.iter_allocated_chunks().count(), 1);
    /// let chunk = bump.iter_allocated_chunks().nth(0).unwrap();
    /// assert_eq!(chunk.len(), 3);
    ///
    /// // Safe because we've only allocated `u8`s in this arena, which
    /// // fulfills the above requirements.
    /// unsafe {
    ///     assert_eq!(chunk[0].assume_init(), b'c');
    ///     assert_eq!(chunk[1].assume_init(), b'b');
    ///     assert_eq!(chunk[2].assume_init(), b'a');
    /// }
    /// ```
    pub fn iter_allocated_chunks(&mut self) -> ChunkIter<'_> {
        // SAFE: Ensured by mutable borrow of `self`.
        let raw = unsafe { self.iter_allocated_chunks_raw() };
        ChunkIter {
            raw,
            bump: PhantomData,
        }
    }

    /// Returns an iterator over raw pointers to chunks of allocated memory that
    /// this arena has bump allocated into.
    ///
    /// This is an unsafe version of [`iter_allocated_chunks()`](Bump::iter_allocated_chunks),
    /// with the caller responsible for safe usage of the returned pointers as
    /// well as ensuring that the iterator is not invalidated by new
    /// allocations.
    ///
    /// ## Safety
    ///
    /// Allocations from this arena must not be performed while the returned
    /// iterator is alive. If reading the chunk data (or casting to a reference)
    /// the caller must ensure that there exist no mutable references to
    /// previously allocated data.
    ///
    /// In addition, all of the caveats when reading the chunk data from
    /// [`iter_allocated_chunks()`](Bump::iter_allocated_chunks) still apply.
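    ///
    /// ## Example
    ///
    /// A rough sketch that only inspects chunk lengths (never the possibly
    /// uninitialized chunk contents) and checks that their total matches
    /// [`allocated_bytes`](Bump::allocated_bytes):
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// for i in 0..100u64 {
    ///     bump.alloc(i);
    /// }
    ///
    /// // Safety: we do not allocate while the iterator is alive, and we never
    /// // read through the returned raw pointers.
    /// let used: usize = unsafe {
    ///     bump.iter_allocated_chunks_raw().map(|(_ptr, len)| len).sum()
    /// };
    /// assert_eq!(used, bump.allocated_bytes());
    /// ```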
    pub unsafe fn iter_allocated_chunks_raw(&self) -> ChunkRawIter<'_> {
        ChunkRawIter {
            footer: Some(self.current_chunk_footer.get()),
            bump: PhantomData,
        }
    }

    /// Calculates the number of bytes currently allocated across all chunks in
    /// this bump arena.
    ///
    /// If you allocate types of different alignments or types with
    /// larger-than-typical alignment in the same arena, some padding
    /// bytes might get allocated in the bump arena. Note that those padding
    /// bytes will add to this method's resulting sum, so you cannot rely
    /// on it only counting the sum of the sizes of the things
    /// you've allocated in the arena.
    ///
    /// ## Example
    ///
    /// ```
    /// let bump = bumpalo::Bump::new();
    /// let _x = bump.alloc_slice_fill_default::<u32>(5);
    /// let bytes = bump.allocated_bytes();
    /// assert!(bytes >= core::mem::size_of::<u32>() * 5);
    /// ```
    pub fn allocated_bytes(&self) -> usize {
        let mut footer = Some(self.current_chunk_footer.get());

        let mut bytes = 0;

        while let Some(f) = footer {
            let foot = unsafe { f.as_ref() };

            let ptr = foot.ptr.get().as_ptr() as usize;
            debug_assert!(ptr <= foot as *const _ as usize);

            bytes += foot as *const _ as usize - ptr;

            footer = foot.prev.get();
        }

        bytes
    }

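    /// Whether `ptr` is the most recent allocation in the current chunk, i.e.
    /// whether it is equal to the chunk's current bump pointer.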
    #[inline]
    unsafe fn is_last_allocation(&self, ptr: NonNull<u8>) -> bool {
        let footer = self.current_chunk_footer.get();
        let footer = footer.as_ref();
        footer.ptr.get() == ptr
    }

    #[inline]
    unsafe fn dealloc(&self, ptr: NonNull<u8>, layout: Layout) {
        // If the pointer is the last allocation we made, we can reuse the bytes,
        // otherwise they are simply leaked -- at least until somebody calls reset().
        if self.is_last_allocation(ptr) {
            let ptr = NonNull::new_unchecked(ptr.as_ptr().add(layout.size()));
            self.current_chunk_footer.get().as_ref().ptr.set(ptr);
        }
    }

    #[inline]
    unsafe fn shrink(
        &self,
        ptr: NonNull<u8>,
        layout: Layout,
        new_size: usize,
    ) -> Result<NonNull<u8>, alloc::AllocErr> {
        let old_size = layout.size();
        if self.is_last_allocation(ptr)
                // Only reclaim the excess space (which requires a copy) if it
                // is worth it: we are actually going to recover "enough" space
                // and we can do a non-overlapping copy.
                && new_size <= old_size / 2
        {
            let delta = old_size - new_size;
            let footer = self.current_chunk_footer.get();
            let footer = footer.as_ref();
            footer
                .ptr
                .set(NonNull::new_unchecked(footer.ptr.get().as_ptr().add(delta)));
            let new_ptr = footer.ptr.get();
            // NB: we know it is non-overlapping because of the size check
            // in the `if` condition.
            ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), new_size);
            return Ok(new_ptr);
        } else {
            return Ok(ptr);
        }
    }

    #[inline]
    unsafe fn grow(
        &self,
        ptr: NonNull<u8>,
        layout: Layout,
        new_size: usize,
    ) -> Result<NonNull<u8>, alloc::AllocErr> {
        let old_size = layout.size();
        if self.is_last_allocation(ptr) {
            // Try to allocate the delta size within this same block so we can
            // reuse the currently allocated space.
            let delta = new_size - old_size;
            if let Some(p) =
                self.try_alloc_layout_fast(layout_from_size_align(delta, layout.align()))
            {
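                // Use `ptr::copy` (memmove semantics) because the new, lower
                // allocation can overlap the old bytes when the arena bumps
                // downward within the same chunk.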
                ptr::copy(ptr.as_ptr(), p.as_ptr(), old_size);
                return Ok(p);
            }
        }

        // Fallback: do a fresh allocation and copy the existing data into it.
        let new_layout = layout_from_size_align(new_size, layout.align());
        let new_ptr = self.try_alloc_layout(new_layout)?;
        ptr::copy_nonoverlapping(ptr.as_ptr(), new_ptr.as_ptr(), old_size);
        Ok(new_ptr)
    }
}

/// An iterator over each chunk of allocated memory that
/// an arena has bump allocated into.
///
/// The chunks are returned ordered by allocation time, with the most recently
/// allocated chunk being returned first.
///
/// The values inside each chunk are also ordered by allocation time, with the
/// most recent allocation being earlier in the slice.
///
/// This struct is created by the [`iter_allocated_chunks`] method on
/// [`Bump`]. See that function for a safety description regarding reading from the returned items.
///
/// [`Bump`]: ./struct.Bump.html
/// [`iter_allocated_chunks`]: ./struct.Bump.html#method.iter_allocated_chunks
#[derive(Debug)]
pub struct ChunkIter<'a> {
    raw: ChunkRawIter<'a>,
    bump: PhantomData<&'a mut Bump>,
}

impl<'a> Iterator for ChunkIter<'a> {
    type Item = &'a [mem::MaybeUninit<u8>];
    fn next(&mut self) -> Option<&'a [mem::MaybeUninit<u8>]> {
        unsafe {
            let (ptr, len) = self.raw.next()?;
            let slice = slice::from_raw_parts(ptr as *const mem::MaybeUninit<u8>, len);
            Some(slice)
        }
    }
}

impl<'a> iter::FusedIterator for ChunkIter<'a> {}

/// An iterator over raw pointers to chunks of allocated memory that this
/// arena has bump allocated into.
///
/// See [`ChunkIter`] for details regarding the returned chunks.
///
/// This struct is created by the [`iter_allocated_chunks_raw`] method on
/// [`Bump`]. See that function for a safety description regarding reading from
/// the returned items.
///
/// [`Bump`]: ./struct.Bump.html
/// [`iter_allocated_chunks_raw`]: ./struct.Bump.html#method.iter_allocated_chunks_raw
#[derive(Debug)]
pub struct ChunkRawIter<'a> {
    footer: Option<NonNull<ChunkFooter>>,
    bump: PhantomData<&'a Bump>,
}

impl Iterator for ChunkRawIter<'_> {
    type Item = (*mut u8, usize);
    fn next(&mut self) -> Option<(*mut u8, usize)> {
        unsafe {
            let foot = self.footer?;
            let foot = foot.as_ref();
            let (ptr, len) = foot.as_raw_parts();
            self.footer = foot.prev.get();
            Some((ptr as *mut u8, len))
        }
    }
}

impl iter::FusedIterator for ChunkRawIter<'_> {}

#[inline(never)]
#[cold]
fn oom() -> ! {
    panic!("out of memory")
}

unsafe impl<'a> alloc::Alloc for &'a Bump {
    #[inline(always)]
    unsafe fn alloc(&mut self, layout: Layout) -> Result<NonNull<u8>, alloc::AllocErr> {
        self.try_alloc_layout(layout)
    }

    #[inline]
    unsafe fn dealloc(&mut self, ptr: NonNull<u8>, layout: Layout) {
        Bump::dealloc(self, ptr, layout)
    }

    #[inline]
    unsafe fn realloc(
        &mut self,
        ptr: NonNull<u8>,
        layout: Layout,
        new_size: usize,
    ) -> Result<NonNull<u8>, alloc::AllocErr> {
        let old_size = layout.size();

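        // A zero-sized old allocation has nothing to copy or shrink, so
        // delegate to a plain allocation.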
        if old_size == 0 {
            return self.try_alloc_layout(layout);
        }

        if new_size <= old_size {
            self.shrink(ptr, layout, new_size)
        } else {
            self.grow(ptr, layout, new_size)
        }
    }
}

#[cfg(feature = "allocator_api")]
unsafe impl<'a> Allocator for &'a Bump {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        self.try_alloc_layout(layout)
            .map(|p| NonNull::slice_from_raw_parts(p, layout.size()))
            .map_err(|_| AllocError)
    }

    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        Bump::dealloc(self, ptr, layout)
    }

    unsafe fn shrink(
        &self,
        ptr: NonNull<u8>,
        old_layout: Layout,
        new_layout: Layout,
    ) -> Result<NonNull<[u8]>, AllocError> {
        let new_size = new_layout.size();
        Bump::shrink(self, ptr, old_layout, new_size)
            .map(|p| NonNull::slice_from_raw_parts(p, new_size))
            .map_err(|_| AllocError)
    }

    unsafe fn grow(
        &self,
        ptr: NonNull<u8>,
        old_layout: Layout,
        new_layout: Layout,
    ) -> Result<NonNull<[u8]>, AllocError> {
        let new_size = new_layout.size();
        Bump::grow(self, ptr, old_layout, new_size)
            .map(|p| NonNull::slice_from_raw_parts(p, new_size))
            .map_err(|_| AllocError)
    }

    unsafe fn grow_zeroed(
        &self,
        ptr: NonNull<u8>,
        old_layout: Layout,
        new_layout: Layout,
    ) -> Result<NonNull<[u8]>, AllocError> {
        let mut ptr = self.grow(ptr, old_layout, new_layout)?;
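        // `grow` copied the old contents; zero only the newly added tail.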
        ptr.as_mut()[old_layout.size()..].fill(0);
        Ok(ptr)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn chunk_footer_is_five_words() {
        assert_eq!(mem::size_of::<ChunkFooter>(), mem::size_of::<usize>() * 5);
    }

    #[test]
    #[allow(clippy::cognitive_complexity)]
    fn test_realloc() {
        use crate::alloc::Alloc;

        unsafe {
            const CAPACITY: usize = 1024 - OVERHEAD;
            let mut b = Bump::with_capacity(CAPACITY);

            // `realloc` doesn't shrink allocations that aren't "worth it".
            let layout = Layout::from_size_align(100, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 51).unwrap();
            assert_eq!(p, q);
            b.reset();

            // `realloc` will shrink allocations that are "worth it".
            let layout = Layout::from_size_align(100, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 50).unwrap();
            assert!(p != q);
            b.reset();

            // `realloc` will reuse the last allocation when growing.
            let layout = Layout::from_size_align(10, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 11).unwrap();
            assert_eq!(q.as_ptr() as usize, p.as_ptr() as usize - 1);
            b.reset();

            // `realloc` will allocate a new chunk when growing the last
            // allocation, if need be.
            let layout = Layout::from_size_align(1, 1).unwrap();
            let p = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, CAPACITY + 1).unwrap();
            assert!(q.as_ptr() as usize != p.as_ptr() as usize - CAPACITY);
            b = Bump::with_capacity(CAPACITY);

            // `realloc` will allocate and copy when reallocating anything that
            // wasn't the last allocation.
            let layout = Layout::from_size_align(1, 1).unwrap();
            let p = b.alloc_layout(layout);
            let _ = b.alloc_layout(layout);
            let q = (&b).realloc(p, layout, 2).unwrap();
            assert!(q.as_ptr() as usize != p.as_ptr() as usize - 1);
            b.reset();
        }
    }

    #[test]
    fn invalid_read() {
        use alloc::Alloc;

        let mut b = &Bump::new();

        unsafe {
            let l1 = Layout::from_size_align(12000, 4).unwrap();
            let p1 = Alloc::alloc(&mut b, l1).unwrap();

            let l2 = Layout::from_size_align(1000, 4).unwrap();
            Alloc::alloc(&mut b, l2).unwrap();

            let p1 = b.realloc(p1, l1, 24000).unwrap();
            let l3 = Layout::from_size_align(24000, 4).unwrap();
            b.realloc(p1, l3, 48000).unwrap();
        }
    }
}