//! A lock-free concurrent slab.
//!
//! Slabs provide pre-allocated storage for many instances of a single data
//! type. When a large number of values of a single type are required,
//! this can be more efficient than allocating each item individually. Since the
//! allocated items are the same size, memory fragmentation is reduced, and
//! creating and removing new items can be very cheap.
//!
//! This crate implements a lock-free concurrent slab, indexed by `usize`s.
//!
//! ## Usage
//!
//! First, add this to your `Cargo.toml`:
//!
//! ```toml
//! sharded-slab = "0.1.1"
//! ```
//!
//! This crate provides two types, [`Slab`] and [`Pool`], which provide
//! slightly different APIs for using a sharded slab.
//!
//! [`Slab`] implements a slab for _storing_ small types, sharing them between
//! threads, and accessing them by index. New entries are allocated by
//! [inserting] data, moving it in by value. Similarly, entries may be
//! deallocated by [taking] from the slab, moving the value out. This API is
//! similar to a `Vec<Option<T>>`, but allows lock-free concurrent insertion
//! and removal.
//!
//! In contrast, the [`Pool`] type provides an [object pool] style API for
//! _reusing storage_. Rather than constructing values and moving them into the
//! pool, as with [`Slab`], [allocating an entry][create] from the pool takes a
//! closure that's provided with a mutable reference to initialize the entry in
//! place. When entries are deallocated, they are [cleared] in place. Types
//! which own a heap allocation can be cleared by dropping any _data_ they
//! store, but retaining any previously-allocated capacity. This means that a
//! [`Pool`] may be used to reuse a set of existing heap allocations, reducing
//! allocator load.
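//!
//! For example, based on the description above, creating and reusing a pooled
//! `String` might look like this (a sketch; see the [`Pool`] documentation for
//! the exact API):
//!
//! ```rust
//! # use sharded_slab::Pool;
//! let pool: Pool<String> = Pool::new();
//!
//! // Initialize a new entry in place via a closure.
//! let key = pool.create(|item| item.push_str("hello world")).unwrap();
//! assert_eq!(pool.get(key).unwrap().as_str(), "hello world");
//!
//! // Clearing the entry drops the string's *data*, but its allocated
//! // capacity may be retained and reused by a subsequent `create`.
//! pool.clear(key);
//! ```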
//!
//! [`Slab`]: struct.Slab.html
//! [inserting]: struct.Slab.html#method.insert
//! [taking]: struct.Slab.html#method.take
//! [`Pool`]: struct.Pool.html
//! [create]: struct.Pool.html#method.create
//! [cleared]: trait.Clear.html
//! [object pool]: https://en.wikipedia.org/wiki/Object_pool_pattern
//!
//! # Examples
//!
//! Inserting an item into the slab, returning an index:
//! ```rust
//! # use sharded_slab::Slab;
//! let slab = Slab::new();
//!
//! let key = slab.insert("hello world").unwrap();
//! assert_eq!(slab.get(key).unwrap(), "hello world");
//! ```
//!
//! To share a slab across threads, it may be wrapped in an `Arc`:
//! ```rust
//! # use sharded_slab::Slab;
//! use std::sync::Arc;
//! let slab = Arc::new(Slab::new());
//!
//! let slab2 = slab.clone();
//! let thread2 = std::thread::spawn(move || {
//!     let key = slab2.insert("hello from thread two").unwrap();
//!     assert_eq!(slab2.get(key).unwrap(), "hello from thread two");
//!     key
//! });
//!
//! let key1 = slab.insert("hello from thread one").unwrap();
//! assert_eq!(slab.get(key1).unwrap(), "hello from thread one");
//!
//! // Wait for thread 2 to complete.
//! let key2 = thread2.join().unwrap();
//!
//! // The item inserted by thread 2 remains in the slab.
//! assert_eq!(slab.get(key2).unwrap(), "hello from thread two");
//! ```
//!
//! If items in the slab must be mutated, a `Mutex` or `RwLock` may be used for
//! each item, providing granular locking of items rather than of the slab:
//!
//! ```rust
//! # use sharded_slab::Slab;
//! use std::sync::{Arc, Mutex};
//! let slab = Arc::new(Slab::new());
//!
//! let key = slab.insert(Mutex::new(String::from("hello world"))).unwrap();
//!
//! let slab2 = slab.clone();
//! let thread2 = std::thread::spawn(move || {
//!     let hello = slab2.get(key).expect("item missing");
//!     let mut hello = hello.lock().expect("mutex poisoned");
//!     *hello = String::from("hello everyone!");
//! });
//!
//! thread2.join().unwrap();
//!
//! let hello = slab.get(key).expect("item missing");
//! let hello = hello.lock().expect("mutex poisoned");
//! assert_eq!(hello.as_str(), "hello everyone!");
//! ```
//!
//! # Configuration
//!
//! For performance reasons, several values used by the slab are calculated as
//! constants. In order to allow users to tune the slab's parameters, we provide
//! a [`Config`] trait which defines these parameters as associated `consts`.
//! The `Slab` type is generic over a `C: Config` parameter.
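//!
//! For instance, a custom configuration might override a single parameter.
//! This is a sketch: it assumes the remaining [`Config`] constants have
//! default values, so only the overridden constant needs to be specified. See
//! the trait documentation for the full set of parameters:
//!
//! ```rust
//! use sharded_slab::{Config, Slab};
//!
//! struct CustomConfig;
//!
//! impl Config for CustomConfig {
//!     // Reserve the topmost bit of each index for application use.
//!     const RESERVED_BITS: usize = 1;
//! }
//!
//! let slab = Slab::new_with_config::<CustomConfig>();
//! let key = slab.insert("hello").unwrap();
//! assert_eq!(slab.get(key).unwrap(), "hello");
//! ```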
//!
//! [`Config`]: trait.Config.html
//!
//! # Comparison with Similar Crates
//!
//! - [`slab`]: Carl Lerche's `slab` crate provides a slab implementation with a
//!   similar API, implemented by storing all data in a single vector.
//!
//!   Unlike `sharded_slab`, inserting and removing elements from the slab
//!   requires mutable access. This means that if the slab is accessed
//!   concurrently by multiple threads, it is necessary for it to be protected
//!   by a `Mutex` or `RwLock`. Items may not be inserted or removed (or
//!   accessed, if a `Mutex` is used) concurrently, even when they are
//!   unrelated. In many cases, the lock can become a significant bottleneck. On
//!   the other hand, this crate allows separate indices in the slab to be
//!   accessed, inserted, and removed concurrently without requiring a global
//!   lock. Therefore, when the slab is shared across multiple threads, this
//!   crate offers significantly better performance than `slab`.
//!
//!   However, the lock-free slab introduces some additional constant-factor
//!   overhead. This means that in use-cases where a slab is _not_ shared by
//!   multiple threads and locking is not required, this crate will likely offer
//!   slightly worse performance.
//!
//!   In summary: `sharded-slab` offers significantly improved performance in
//!   concurrent use-cases, while `slab` should be preferred in single-threaded
//!   use-cases.
//!
//! [`slab`]: https://crates.io/crates/slab
//!
//! # Safety and Correctness
//!
//! Most implementations of lock-free data structures in Rust require some
//! amount of unsafe code, and this crate is not an exception. In order to catch
//! potential bugs in this unsafe code, we make use of [`loom`], a
//! permutation-testing tool for concurrent Rust programs. All `unsafe` blocks
//! in this crate occur in accesses to `loom` `UnsafeCell`s. This means that when
//! those accesses occur in this crate's tests, `loom` will assert that they are
//! valid under the C11 memory model across multiple permutations of concurrent
//! executions of those tests.
//!
//! In order to guard against the [ABA problem][aba], this crate makes use of
//! _generational indices_. Each slot in the slab tracks a generation counter
//! which is incremented every time a value is inserted into that slot, and the
//! indices returned by [`Slab::insert`] include the generation of the slot when
//! the value was inserted, packed into the high-order bits of the index. This
//! ensures that if a value is inserted, removed, and a new value is inserted
//! into the same slot in the slab, the key returned by the first call to
//! `insert` will not map to the new value.
//!
//! Since a fixed number of bits are set aside to use for storing the generation
//! counter, the counter will wrap around after being incremented a number of
//! times. To avoid situations where a returned index lives long enough to see the
//! generation counter wrap around to the same value, it is good to be fairly
//! generous when configuring the allocation of index bits.
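//!
//! The scheme can be illustrated with plain bit arithmetic. This is a
//! simplified, hypothetical layout; the crate's actual bit allocation is
//! determined by its [`Config`] parameters:
//!
//! ```rust
//! // Hypothetical layout: 8 generation bits above 24 slot-index bits.
//! const SLOT_BITS: u32 = 24;
//! const GEN_MASK: usize = (1 << 8) - 1;
//! const SLOT_MASK: usize = (1 << SLOT_BITS) - 1;
//!
//! fn pack(generation: usize, slot: usize) -> usize {
//!     ((generation & GEN_MASK) << SLOT_BITS) | (slot & SLOT_MASK)
//! }
//!
//! fn unpack(key: usize) -> (usize, usize) {
//!     ((key >> SLOT_BITS) & GEN_MASK, key & SLOT_MASK)
//! }
//!
//! // Two keys for the *same* slot at different generations are distinct,
//! // so a stale key can be detected after the slot has been reused.
//! let stale = pack(1, 42);
//! let fresh = pack(2, 42);
//! assert_ne!(stale, fresh);
//! assert_eq!(unpack(stale), (1, 42));
//! assert_eq!(unpack(fresh), (2, 42));
//! ```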
//!
//! [`loom`]: https://crates.io/crates/loom
//! [aba]: https://en.wikipedia.org/wiki/ABA_problem
//! [`Slab::insert`]: struct.Slab.html#method.insert
//!
//! # Performance
//!
//! These graphs were produced by [benchmarks] of the sharded slab implementation,
//! using the [`criterion`] crate.
//!
//! The first shows the results of a benchmark where an increasing number of
//! items are inserted into and then removed from a slab concurrently by five
//! threads. It compares the performance of the sharded slab implementation
//! with a `RwLock<slab::Slab>`:
//!
//! <img width="1124" alt="Screen Shot 2019-10-01 at 5 09 49 PM" src="https://user-images.githubusercontent.com/2796466/66078398-cd6c9f80-e516-11e9-9923-0ed6292e8498.png">
//!
//! The second graph shows the results of a benchmark where an increasing
//! number of items are inserted and then removed by a _single_ thread. It
//! compares the performance of the sharded slab implementation with an
//! `RwLock<slab::Slab>` and a `mut slab::Slab`.
//!
//! <img width="925" alt="Screen Shot 2019-10-01 at 5 13 45 PM" src="https://user-images.githubusercontent.com/2796466/66078469-f0974f00-e516-11e9-95b5-f65f0aa7e494.png">
//!
//! These benchmarks demonstrate that, while the sharded approach introduces
//! a small constant-factor overhead, it offers significantly better
//! performance across concurrent accesses.
//!
//! [benchmarks]: https://github.com/hawkw/sharded-slab/blob/master/benches/bench.rs
//! [`criterion`]: https://crates.io/crates/criterion
//!
//! # Implementation Notes
//!
//! See [this page](implementation/index.html) for details on this crate's design
//! and implementation.
//!
#![doc(html_root_url = "https://docs.rs/sharded-slab/0.1.1")]
#![warn(missing_debug_implementations, missing_docs, missing_doc_code_examples)]

macro_rules! test_println {
    ($($arg:tt)*) => {
        if cfg!(test) && cfg!(slab_print) {
            if std::thread::panicking() {
                // getting the thread ID while panicking doesn't seem to play super nicely with loom's
                // mock lazy_static...
                println!("[PANIC {:>17}:{:<3}] {}", file!(), line!(), format_args!($($arg)*))
            } else {
                println!("[{:?} {:>17}:{:<3}] {}", crate::Tid::<crate::DefaultConfig>::current(), file!(), line!(), format_args!($($arg)*))
            }
        }
    }
}

#[cfg(all(test, loom))]
macro_rules! test_dbg {
    ($e:expr) => {
        match $e {
            e => {
                test_println!("{} = {:?}", stringify!($e), &e);
                e
            }
        }
    };
}

mod clear;
pub mod implementation;
mod page;
pub mod pool;
pub(crate) mod sync;
mod tid;
pub(crate) use tid::Tid;
pub(crate) mod cfg;
mod iter;
mod shard;
use cfg::CfgPrivate;
pub use cfg::{Config, DefaultConfig};
pub use clear::Clear;
#[doc(inline)]
pub use pool::Pool;
use shard::Shard;
use std::ptr;
use std::{fmt, marker::PhantomData, sync::Arc};

/// A sharded slab.
///
/// See the [crate-level documentation](index.html) for details on using this type.
pub struct Slab<T, C: cfg::Config = DefaultConfig> {
    shards: shard::Array<Option<T>, C>,
    _cfg: PhantomData<C>,
}

/// A handle that allows access to an object in a slab.
///
/// While the guard exists, it indicates to the slab that the item the guard
/// references is currently being accessed. If the item is removed from the slab
/// while a guard exists, the removal will be deferred until all guards are dropped.
pub struct Entry<'a, T, C: cfg::Config = DefaultConfig> {
    inner: page::slot::Guard<Option<T>, C>,
    value: ptr::NonNull<T>,
    shard: &'a Shard<Option<T>, C>,
    key: usize,
}

/// A handle to a vacant entry in a `Slab`.
///
/// `VacantEntry` allows constructing values with the key that they will be
/// assigned to.
///
/// # Examples
///
/// ```
/// # use sharded_slab::Slab;
/// let mut slab = Slab::new();
///
/// let hello = {
///     let entry = slab.vacant_entry().unwrap();
///     let key = entry.key();
///
///     entry.insert((key, "hello"));
///     key
/// };
///
/// assert_eq!(hello, slab.get(hello).unwrap().0);
/// assert_eq!("hello", slab.get(hello).unwrap().1);
/// ```
#[derive(Debug)]
pub struct VacantEntry<'a, T, C: cfg::Config = DefaultConfig> {
    inner: page::slot::InitGuard<Option<T>, C>,
    key: usize,
    _lt: PhantomData<&'a ()>,
}

/// An owned guard that allows access to an object in a `Slab`.
///
/// While the guard exists, it indicates to the slab that the item the guard references is
/// currently being accessed. If the item is removed from the slab while the guard exists, the
/// removal will be deferred until all guards are dropped.
///
/// Unlike [`Entry`], which borrows the slab, an `OwnedEntry` clones the `Arc`
/// around the slab. Therefore, it keeps the slab from being dropped until all
/// such guards have been dropped. This means that an `OwnedEntry` may be held for
/// an arbitrary lifetime.
///
/// # Examples
///
/// ```
/// # use sharded_slab::Slab;
/// use std::sync::Arc;
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let value = slab.clone().get_owned(key).unwrap();
///
/// // Now, the original `Arc` clone of the slab may be dropped, but the
/// // returned `OwnedEntry` can still access the value.
/// assert_eq!(value, "hello world");
/// ```
///
/// Unlike [`Entry`], an `OwnedEntry` may be stored in a struct which must live
/// for the `'static` lifetime:
///
/// ```
/// # use sharded_slab::Slab;
/// use sharded_slab::OwnedEntry;
/// use std::sync::Arc;
///
/// pub struct MyStruct {
///     entry: OwnedEntry<&'static str>,
///     // ... other fields ...
/// }
///
/// // Suppose this is some arbitrary function which requires a value that
/// // lives for the 'static lifetime...
/// fn function_requiring_static<T: 'static>(t: &T) {
///     // ... do something extremely important and interesting ...
/// }
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let entry = slab.clone().get_owned(key).unwrap();
/// let my_struct = MyStruct {
///     entry,
///     // ...
/// };
///
/// // We can use `my_struct` anywhere where it is required to have the
/// // `'static` lifetime:
/// function_requiring_static(&my_struct);
/// ```
///
/// `OwnedEntry`s may be sent between threads:
///
/// ```
/// # use sharded_slab::Slab;
/// use std::{thread, sync::Arc};
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let value = slab.clone().get_owned(key).unwrap();
///
/// thread::spawn(move || {
///     assert_eq!(value, "hello world");
///     // ...
/// }).join().unwrap();
/// ```
///
/// [`get`]: Slab::get
/// [`OwnedEntry`]: crate::OwnedEntry
/// [`Entry`]: crate::Entry
pub struct OwnedEntry<T, C = DefaultConfig>
where
    C: cfg::Config,
{
    inner: page::slot::Guard<Option<T>, C>,
    value: ptr::NonNull<T>,
    slab: Arc<Slab<T, C>>,
    key: usize,
}

impl<T> Slab<T> {
    /// Returns a new slab with the default configuration parameters.
    pub fn new() -> Self {
        Self::new_with_config()
    }

    /// Returns a new slab with the provided configuration parameters.
    pub fn new_with_config<C: cfg::Config>() -> Slab<T, C> {
        C::validate();
        Slab {
            shards: shard::Array::new(),
            _cfg: PhantomData,
        }
    }
}

impl<T, C: cfg::Config> Slab<T, C> {
    /// The number of bits in each index which are used by the slab.
    ///
    /// If other data is packed into the `usize` indices returned by
    /// [`Slab::insert`], user code is free to use any bits higher than the
    /// `USED_BITS`-th bit.
    ///
    /// This is determined by the [`Config`] type that configures the slab's
    /// parameters. By default, all bits are used; this can be changed by
    /// overriding the [`Config::RESERVED_BITS`][res] constant.
    ///
    /// [`Config`]: trait.Config.html
    /// [res]: trait.Config.html#associatedconstant.RESERVED_BITS
    /// [`Slab::insert`]: struct.Slab.html#method.insert
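    ///
    /// For instance, with a configuration that leaves the top four bits of
    /// each index unused, user code could pack a small tag into them. This is
    /// plain bit arithmetic on a hypothetical layout, not a crate API:
    ///
    /// ```rust
    /// // Hypothetical: a config where only the low 60 bits are used.
    /// const USED: u32 = 60;
    /// let key: usize = 42;
    /// let tag: usize = 0b1010;
    ///
    /// let tagged = key | (tag << USED);
    /// assert_eq!(tagged & ((1 << USED) - 1), key); // recover the key
    /// assert_eq!(tagged >> USED, tag);             // recover the tag
    /// ```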
    pub const USED_BITS: usize = C::USED_BITS;

    /// Inserts a value into the slab, returning a key that can be used to
    /// access it.
    ///
    /// If this function returns `None`, then the shard for the current thread
    /// is full and no items can be added until some are removed, or the maximum
    /// number of shards has been reached.
    ///
    /// # Examples
    /// ```rust
    /// # use sharded_slab::Slab;
    /// let slab = Slab::new();
    ///
    /// let key = slab.insert("hello world").unwrap();
    /// assert_eq!(slab.get(key).unwrap(), "hello world");
    /// ```
    pub fn insert(&self, value: T) -> Option<usize> {
        let (tid, shard) = self.shards.current();
        test_println!("insert {:?}", tid);
        let mut value = Some(value);
        shard
            .init_with(|idx, slot| {
                let gen = slot.insert(&mut value)?;
                Some(gen.pack(idx))
            })
            .map(|idx| tid.pack(idx))
    }

    /// Return a handle to a vacant entry allowing for further manipulation.
    ///
    /// This function is useful when creating values that must contain their
    /// slab key. The returned `VacantEntry` reserves a slot in the slab and is
    /// able to query the associated key.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    pub fn vacant_entry(&self) -> Option<VacantEntry<'_, T, C>> {
        let (tid, shard) = self.shards.current();
        test_println!("vacant_entry {:?}", tid);
        shard.init_with(|idx, slot| {
            let inner = slot.init()?;
            let key = inner.generation().pack(tid.pack(idx));
            Some(VacantEntry {
                inner,
                key,
                _lt: PhantomData,
            })
        })
    }

    /// Remove the value associated with the given key from the slab, returning
    /// `true` if a value was removed.
    ///
    /// Unlike [`take`], this method does _not_ block the current thread until
    /// the value can be removed. Instead, if another thread is currently
    /// accessing that value, this marks it to be removed by that thread when it
    /// finishes accessing the value.
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Remove the item from the slab.
    /// assert!(slab.remove(key));
    ///
    /// // Now, the slot is empty.
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// ```rust
    /// use std::sync::Arc;
    ///
    /// let slab = Arc::new(sharded_slab::Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// let slab2 = slab.clone();
    /// let thread2 = std::thread::spawn(move || {
    ///     // Depending on when this thread begins executing, the item may
    ///     // or may not have already been removed...
    ///     if let Some(item) = slab2.get(key) {
    ///         assert_eq!(item, "hello world");
    ///     }
    /// });
    ///
    /// // The item will be removed by thread2 when it finishes accessing it.
    /// assert!(slab.remove(key));
    ///
    /// thread2.join().unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// [`take`]: #method.take
    pub fn remove(&self, idx: usize) -> bool {
        // The `Drop` impl for `Entry` calls `remove_local` or `remove_remote` based
        // on where the guard was dropped from. If the dropped guard was the last one, this will
        // call `Slot::remove_value` which actually clears storage.
        let tid = C::unpack_tid(idx);

        test_println!("rm_deferred {:?}", tid);
        let shard = self.shards.get(tid.as_usize());
        if tid.is_current() {
            shard.map(|shard| shard.remove_local(idx)).unwrap_or(false)
        } else {
            shard.map(|shard| shard.remove_remote(idx)).unwrap_or(false)
        }
    }

    /// Removes the value associated with the given key from the slab, returning
    /// it.
    ///
    /// If the slab does not contain a value for that key, `None` is returned
    /// instead.
    ///
    /// If the value associated with the given key is currently being
    /// accessed by another thread, this method will block the current thread
    /// until the item is no longer accessed. If this is not desired, use
    /// [`remove`] instead.
    ///
    /// **Note**: This method blocks the calling thread by spinning until the
    /// currently outstanding references are released. Spinning for long periods
    /// of time can result in high CPU time and power consumption. Therefore,
    /// `take` should only be called when other references to the slot are
    /// expected to be dropped soon (e.g., when all accesses are relatively
    /// short).
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Remove the item from the slab, returning it.
    /// assert_eq!(slab.take(key), Some("hello world"));
    ///
    /// // Now, the slot is empty.
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// ```rust
    /// use std::sync::Arc;
    ///
    /// let slab = Arc::new(sharded_slab::Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// let slab2 = slab.clone();
    /// let thread2 = std::thread::spawn(move || {
    ///     // Depending on when this thread begins executing, the item may
    ///     // or may not have already been removed...
    ///     if let Some(item) = slab2.get(key) {
    ///         assert_eq!(item, "hello world");
    ///     }
    /// });
    ///
    /// // The item will only be removed when the other thread finishes
    /// // accessing it.
    /// assert_eq!(slab.take(key), Some("hello world"));
    ///
    /// thread2.join().unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// [`remove`]: #method.remove
    pub fn take(&self, idx: usize) -> Option<T> {
        let tid = C::unpack_tid(idx);

        test_println!("rm {:?}", tid);
        let shard = self.shards.get(tid.as_usize())?;
        if tid.is_current() {
            shard.take_local(idx)
        } else {
            shard.take_remote(idx)
        }
    }

    /// Return a reference to the value associated with the given key.
    ///
    /// If the slab does not contain a value for the given key, or if the
    /// maximum number of concurrent references to the slot has been reached,
    /// `None` is returned instead.
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// assert_eq!(slab.get(key).unwrap(), "hello world");
    /// assert!(slab.get(12345).is_none());
    /// ```
    pub fn get(&self, key: usize) -> Option<Entry<'_, T, C>> {
        let tid = C::unpack_tid(key);

        test_println!("get {:?}; current={:?}", tid, Tid::<C>::current());
        let shard = self.shards.get(tid.as_usize())?;
        shard.with_slot(key, |slot| {
            let inner = slot.get(C::unpack_gen(key))?;
            let value = ptr::NonNull::from(slot.value().as_ref().unwrap());
            Some(Entry {
                inner,
                value,
                shard,
                key,
            })
        })
    }

    /// Return an owned reference to the value associated with the given key.
    ///
    /// If the slab does not contain a value for the given key, `None` is
    /// returned instead.
    ///
    /// Unlike [`get`], which borrows the slab, this method _clones_ the `Arc`
    /// around the slab. This means that the returned [`OwnedEntry`] can be held
    /// for an arbitrary lifetime. However, this method requires that the slab
    /// itself be wrapped in an `Arc`.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use std::sync::Arc;
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let value = slab.clone().get_owned(key).unwrap();
    ///
    /// // Now, the original `Arc` clone of the slab may be dropped, but the
    /// // returned `OwnedEntry` can still access the value.
    /// assert_eq!(value, "hello world");
    /// ```
    ///
    /// Unlike [`Entry`], an `OwnedEntry` may be stored in a struct which must live
    /// for the `'static` lifetime:
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use sharded_slab::OwnedEntry;
    /// use std::sync::Arc;
    ///
    /// pub struct MyStruct {
    ///     entry: OwnedEntry<&'static str>,
    ///     // ... other fields ...
    /// }
    ///
    /// // Suppose this is some arbitrary function which requires a value that
    /// // lives for the 'static lifetime...
    /// fn function_requiring_static<T: 'static>(t: &T) {
    ///     // ... do something extremely important and interesting ...
    /// }
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let entry = slab.clone().get_owned(key).unwrap();
    /// let my_struct = MyStruct {
    ///     entry,
    ///     // ...
    /// };
    ///
    /// // We can use `my_struct` anywhere where it is required to have the
    /// // `'static` lifetime:
    /// function_requiring_static(&my_struct);
    /// ```
    ///
    /// `OwnedEntry`s may be sent between threads:
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use std::{thread, sync::Arc};
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let value = slab.clone().get_owned(key).unwrap();
    ///
    /// thread::spawn(move || {
    ///     assert_eq!(value, "hello world");
    ///     // ...
    /// }).join().unwrap();
    /// ```
    ///
    /// [`get`]: Slab::get
    /// [`OwnedEntry`]: crate::OwnedEntry
    /// [`Entry`]: crate::Entry
    pub fn get_owned(self: Arc<Self>, key: usize) -> Option<OwnedEntry<T, C>> {
        let tid = C::unpack_tid(key);

        test_println!("get_owned {:?}; current={:?}", tid, Tid::<C>::current());
        let shard = self.shards.get(tid.as_usize())?;
        shard.with_slot(key, |slot| {
            let inner = slot.get(C::unpack_gen(key))?;
            let value = ptr::NonNull::from(slot.value().as_ref().unwrap());
            Some(OwnedEntry {
                inner,
                value,
                slab: self.clone(),
                key,
            })
        })
    }

    /// Returns `true` if the slab contains a value for the given key.
    ///
    /// # Examples
    ///
    /// ```
    /// let slab = sharded_slab::Slab::new();
    ///
    /// let key = slab.insert("hello world").unwrap();
    /// assert!(slab.contains(key));
    ///
    /// slab.take(key).unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    pub fn contains(&self, key: usize) -> bool {
        self.get(key).is_some()
    }

    /// Returns an iterator over all the items in the slab.
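    ///
    /// Note that this method requires mutable access to the slab. A sketch of
    /// its use, assuming the iterator yields references to the stored items:
    ///
    /// ```rust
    /// let mut slab = sharded_slab::Slab::new();
    /// slab.insert(1).unwrap();
    /// slab.insert(2).unwrap();
    ///
    /// let mut items: Vec<i32> = slab.unique_iter().copied().collect();
    /// items.sort();
    /// assert_eq!(items, vec![1, 2]);
    /// ```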
unique_iter(&mut self) -> iter::UniqueIter<'_, T, C>763     pub fn unique_iter(&mut self) -> iter::UniqueIter<'_, T, C> {
764         let mut shards = self.shards.iter_mut();
765         let shard = shards.next().expect("must be at least 1 shard");
766         let mut pages = shard.iter();
767         let slots = pages.next().and_then(page::Shared::iter);
768         iter::UniqueIter {
769             shards,
770             slots,
771             pages,
772         }
773     }
774 }
775 
776 impl<T> Default for Slab<T> {
default() -> Self777     fn default() -> Self {
778         Self::new()
779     }
780 }
781 
782 impl<T: fmt::Debug, C: cfg::Config> fmt::Debug for Slab<T, C> {
fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result783     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
784         f.debug_struct("Slab")
785             .field("shards", &self.shards)
786             .field("config", &C::debug())
787             .finish()
788     }
789 }
790 
791 unsafe impl<T: Send, C: cfg::Config> Send for Slab<T, C> {}
792 unsafe impl<T: Sync, C: cfg::Config> Sync for Slab<T, C> {}
793 
794 // === impl Entry ===
795 
796 impl<'a, T, C: cfg::Config> Entry<'a, T, C> {
797     /// Returns the key used to access the guard.
    pub fn key(&self) -> usize {
        self.key
    }

    #[inline(always)]
    fn value(&self) -> &T {
        unsafe {
            // Safety: this is always going to be valid, as it's projected from
            // the safe reference to `self.value` --- this is just to avoid
            // having to `expect` an option in the hot path when dereferencing.
            self.value.as_ref()
        }
    }
}

impl<'a, T, C: cfg::Config> std::ops::Deref for Entry<'a, T, C> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        self.value()
    }
}

impl<'a, T, C: cfg::Config> Drop for Entry<'a, T, C> {
    fn drop(&mut self) {
        let should_remove = unsafe {
            // Safety: calling `slot::Guard::release` is unsafe, since the
            // `Guard` value contains a pointer to the slot that may outlive the
            // slab containing that slot. Here, the `Entry` guard owns a
            // borrowed reference to the shard containing that slot, which
            // ensures that the slot will not be dropped while this `Guard`
            // exists.
            self.inner.release()
        };
        if should_remove {
            self.shard.clear_after_release(self.key)
        }
    }
}

impl<'a, T, C> fmt::Debug for Entry<'a, T, C>
where
    T: fmt::Debug,
    C: cfg::Config,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(self.value(), f)
    }
}

impl<'a, T, C> PartialEq<T> for Entry<'a, T, C>
where
    T: PartialEq<T>,
    C: cfg::Config,
{
    fn eq(&self, other: &T) -> bool {
        self.value().eq(other)
    }
}

// === impl VacantEntry ===

impl<'a, T, C: cfg::Config> VacantEntry<'a, T, C> {
    /// Insert a value in the entry.
    ///
    /// To get the key associated with the value, use `key` prior to calling
    /// `insert`.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    pub fn insert(mut self, val: T) {
        let value = unsafe {
            // Safety: this `VacantEntry` only lives as long as the `Slab` it was
            // borrowed from, so it cannot outlive the entry's slot.
            self.inner.value_mut()
        };
        debug_assert!(
            value.is_none(),
            "tried to insert into a slot that already had a value!"
        );
        *value = Some(val);
        let _released = unsafe {
            // Safety: again, this `VacantEntry` only lives as long as the
            // `Slab` it was borrowed from, so it cannot outlive the entry's
            // slot.
            self.inner.release()
        };
        debug_assert!(
            !_released,
            "removing a value before it was inserted should be a no-op"
        )
    }

    /// Return the key associated with this entry.
    ///
    /// A value stored in this entry will be associated with this key.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::*;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    pub fn key(&self) -> usize {
        self.key
    }
}

// === impl OwnedEntry ===

impl<T, C> OwnedEntry<T, C>
where
    C: cfg::Config,
{
    /// Returns the key used to access this guard.
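    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Arc;
    ///
    /// let slab = Arc::new(sharded_slab::Slab::new());
    /// let key = slab.insert("hello").unwrap();
    ///
    /// // Unlike `get`, `get_owned` clones the `Arc` into the returned
    /// // guard, so the guard may outlive this borrow of the slab.
    /// let entry = slab.clone().get_owned(key).unwrap();
    /// assert_eq!(entry.key(), key);
    /// ```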
    pub fn key(&self) -> usize {
        self.key
    }

    #[inline(always)]
    fn value(&self) -> &T {
        unsafe {
            // Safety: this is always going to be valid, as it's projected from
            // the safe reference to `self.value` --- this is just to avoid
            // having to `expect` an option in the hot path when dereferencing.
            self.value.as_ref()
        }
    }
}

impl<T, C> std::ops::Deref for OwnedEntry<T, C>
where
    C: cfg::Config,
{
    type Target = T;

    fn deref(&self) -> &Self::Target {
        self.value()
    }
}

impl<T, C> Drop for OwnedEntry<T, C>
where
    C: cfg::Config,
{
    fn drop(&mut self) {
        test_println!("drop OwnedEntry: try clearing data");
        let should_clear = unsafe {
            // Safety: calling `slot::Guard::release` is unsafe, since the
            // `Guard` value contains a pointer to the slot that may outlive the
            // slab containing that slot. Here, the `OwnedEntry` owns an `Arc`
            // clone of the slab, which keeps it alive as long as the
            // `OwnedEntry` exists.
            self.inner.release()
        };
        if should_clear {
            let shard_idx = Tid::<C>::from_packed(self.key);
            test_println!("-> shard={:?}", shard_idx);
            if let Some(shard) = self.slab.shards.get(shard_idx.as_usize()) {
                shard.clear_after_release(self.key)
            } else {
                test_println!("-> shard={:?} does not exist! THIS IS A BUG", shard_idx);
                debug_assert!(std::thread::panicking(), "[internal error] tried to drop an `OwnedEntry` to a slot on a shard that never existed!");
            }
        }
    }
}

impl<T, C> fmt::Debug for OwnedEntry<T, C>
where
    T: fmt::Debug,
    C: cfg::Config,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(self.value(), f)
    }
}

impl<T, C> PartialEq<T> for OwnedEntry<T, C>
where
    T: PartialEq<T>,
    C: cfg::Config,
{
    fn eq(&self, other: &T) -> bool {
        *self.value() == *other
    }
}

unsafe impl<T, C> Sync for OwnedEntry<T, C>
where
    T: Sync,
    C: cfg::Config,
{
}

unsafe impl<T, C> Send for OwnedEntry<T, C>
where
    T: Sync,
    C: cfg::Config,
{
}

// === pack ===

pub(crate) trait Pack<C: cfg::Config>: Sized {
    // ====== provided by each implementation =================================

    /// The number of bits occupied by this type when packed into a usize.
    ///
    /// This must be provided to determine the number of bits into which to pack
    /// the type.
    const LEN: usize;

    /// The type packed on the less significant side of this type.
    ///
    /// If this type is packed into the least significant bits of a usize, this
    /// should be `()`, which occupies no bytes.
    ///
    /// This is used to calculate the shift amount for packing this value.
    type Prev: Pack<C>;

    // ====== calculated automatically ========================================

    /// A number consisting of `Self::LEN` 1 bits, starting at the least
    /// significant bit.
    ///
    /// This is the highest value this type can represent. This number is
    /// shifted left by `Self::SHIFT` bits to calculate this type's `MASK`.
    ///
    /// This is computed automatically based on `Self::LEN`.
    const BITS: usize = {
        let shift = 1 << (Self::LEN - 1);
        shift | (shift - 1)
    };

    /// The number of bits to shift a number to pack it into a usize with other
    /// values.
    ///
    /// This is calculated automatically based on the `LEN` and `SHIFT`
    /// constants of the previous value.
    const SHIFT: usize = Self::Prev::SHIFT + Self::Prev::LEN;

    /// The mask to extract only this type from a packed `usize`.
    ///
    /// This is calculated by shifting `Self::BITS` left by `Self::SHIFT`.
    const MASK: usize = Self::BITS << Self::SHIFT;

    fn as_usize(&self) -> usize;
    fn from_usize(val: usize) -> Self;

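    /// Packs this value into `to`, replacing this type's bits while leaving
    /// the bits belonging to other types unchanged.
    ///
    /// A worked example of the arithmetic, with illustrative values (assuming
    /// a type with `LEN = 4` packed at `SHIFT = 8`):
    ///
    /// ```ignore
    /// // BITS = (1 << 3) | ((1 << 3) - 1) = 0b1111 (highest value: 15)
    /// // MASK = 0b1111 << 8 = 0xF00
    /// assert_eq!((0xABC & !0xF00) | (5 << 8), 0x5BC); // pack(5, 0xABC)
    /// assert_eq!((0x5BC & 0xF00) >> 8, 5);            // from_packed(0x5BC)
    /// ```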
    #[inline(always)]
    fn pack(&self, to: usize) -> usize {
        let value = self.as_usize();
        debug_assert!(value <= Self::BITS);

        (to & !Self::MASK) | (value << Self::SHIFT)
    }

    #[inline(always)]
    fn from_packed(from: usize) -> Self {
        let value = (from & Self::MASK) >> Self::SHIFT;
        debug_assert!(value <= Self::BITS);
        Self::from_usize(value)
    }
}

impl<C: cfg::Config> Pack<C> for () {
    const BITS: usize = 0;
    const LEN: usize = 0;
    const SHIFT: usize = 0;
    const MASK: usize = 0;

    type Prev = ();

    fn as_usize(&self) -> usize {
        unreachable!()
    }
    fn from_usize(_val: usize) -> Self {
        unreachable!()
    }

    fn pack(&self, _to: usize) -> usize {
        unreachable!()
    }

    fn from_packed(_from: usize) -> Self {
        unreachable!()
    }
}

#[cfg(test)]
pub(crate) use self::tests::util as test_util;

#[cfg(test)]
mod tests;