`folly/Synchronized.h`
----------------------

`folly/Synchronized.h` introduces a simple abstraction for mutex-
based concurrency. It replaces convoluted, unwieldy, and just
plain wrong code with simple constructs that are easy to get
right and difficult to get wrong.

### Motivation

Many of our multithreaded C++ programs use shared data structures
associated with locks. This follows the time-honored adage of
mutex-based concurrency control: "associate mutexes with data, not code".
Consider the following example:

``` Cpp

    class RequestHandler {
      ...
      RequestQueue requestQueue_;
      SharedMutex requestQueueMutex_;

      std::map<std::string, Endpoint> requestEndpoints_;
      SharedMutex requestEndpointsMutex_;

      HandlerState workState_;
      SharedMutex workStateMutex_;
      ...
    };
```

Whenever the code needs to read or write some of the protected
data, it acquires the mutex for reading or for reading and
writing. For example:

``` Cpp
    void RequestHandler::processRequest(const Request& request) {
      stop_watch<> watch;
      checkRequestValidity(request);
      SharedMutex::WriteHolder lock(requestQueueMutex_);
      requestQueue_.push_back(request);
      stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
      LOG(INFO) << "enqueued request ID " << request.getID();
    }
```

However, the correctness of the technique is entirely predicated on
convention.  Developers manipulating these data members must take care
to explicitly acquire the correct lock for the data they wish to access.
There is no ostensible error for code that:

* manipulates a piece of data without acquiring its lock first
* acquires a different lock instead of the intended one
* acquires a lock in read mode but modifies the guarded data structure
* acquires a lock in read-write mode although it only has `const` access
  to the guarded data
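
None of these mistakes is caught by the compiler. For instance, the following
sketch (a hypothetical variation on the example above, not code from any real
handler) compiles cleanly even though it guards the queue with the wrong mutex:

``` Cpp
    // Hypothetical illustration: compiles without complaint even though the
    // wrong mutex is held while the request queue is mutated.
    void RequestHandler::enqueueBroken(const Request& request) {
      SharedMutex::WriteHolder lock(requestEndpointsMutex_);  // wrong mutex!
      requestQueue_.push_back(request);                       // unguarded write
    }
```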

### Introduction to `folly/Synchronized.h`

The same code sample could be rewritten with `Synchronized`
as follows:

``` Cpp
    class RequestHandler {
      ...
      Synchronized<RequestQueue> requestQueue_;
      Synchronized<std::map<std::string, Endpoint>> requestEndpoints_;
      Synchronized<HandlerState> workState_;
      ...
    };

    void RequestHandler::processRequest(const Request& request) {
      stop_watch<> watch;
      checkRequestValidity(request);
      requestQueue_.wlock()->push_back(request);
      stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
      LOG(INFO) << "enqueued request ID " << request.getID();
    }
```

The rewrite does exactly what needs to be done, at maximum efficiency:
it acquires the lock associated with the `RequestQueue` object, writes to
the queue, and releases the lock immediately thereafter.

On the face of it, that's not much to write home about, and not
an obvious improvement over the previous state of affairs. But
the features invisibly at work in the code above are as important
as those that are visible:

* Unlike before, the data and the mutex protecting it are
  inextricably encapsulated together.
* If you tried to use `requestQueue_` without acquiring the lock you
  wouldn't be able to; it is virtually impossible to access the queue
  without acquiring the correct lock.
* The lock is released immediately after the insert operation is
  performed, and is not held for operations that do not need it.

If you need to perform several operations while holding the lock,
`Synchronized` provides several options for doing this.

The `wlock()` method (or `lock()` if you have a non-shared mutex type)
returns a `LockedPtr` object that can be stored in a variable.  The lock
will be held for as long as this object exists, similar to a
`std::unique_lock`.  This object can be used as if it were a pointer to
the underlying locked object:

``` Cpp
    {
      auto lockedQueue = requestQueue_.wlock();
      lockedQueue->push_back(request1);
      lockedQueue->push_back(request2);
    }
```

The `rlock()` function is similar to `wlock()`, but acquires a shared lock
rather than an exclusive lock.

We recommend explicitly opening a new nested scope whenever you store a
`LockedPtr` object, to help visibly delineate the critical section, and
to ensure that the `LockedPtr` is destroyed as soon as it is no longer
needed.

Alternatively, `Synchronized` also provides mechanisms to run a function while
holding the lock.  This makes it possible to use lambdas to define brief
critical sections:

``` Cpp
    void RequestHandler::processRequest(const Request& request) {
      stop_watch<> watch;
      checkRequestValidity(request);
      requestQueue_.withWLock([&](auto& queue) {
        // withWLock() automatically holds the lock for the
        // duration of this lambda function
        queue.push_back(request);
      });
      stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
      LOG(INFO) << "enqueued request ID " << request.getID();
    }
```

One advantage of the `withWLock()` approach is that it forces a new
scope to be used for the critical section, making the critical section
more obvious in the code, and helping to encourage code that releases
the lock as soon as possible.

### Template class `Synchronized<T>`

#### Template Parameters

`Synchronized` is a template with two parameters, the data type and a
mutex type: `Synchronized<T, Mutex>`.

If not specified, the mutex type defaults to `folly::SharedMutex`.  However, any
mutex type supported by `folly::LockTraits` can be used instead.
`folly::LockTraits` can be specialized to support other custom mutex
types that it does not know about out of the box.
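
As a quick illustration (a sketch, not taken verbatim from the library docs),
the second template parameter selects the mutex; `std::mutex` and
`std::shared_mutex` are assumed here to be usable through their standard
interfaces:

``` Cpp
    // Default mutex type: folly::SharedMutex (shared-lock APIs available)
    folly::Synchronized<std::vector<int>> withDefaultMutex;

    // Plain exclusive mutex: only the lock()/withLock() style APIs apply
    folly::Synchronized<std::vector<int>, std::mutex> withStdMutex;

    // A standard shared mutex, detected through its lock_shared() member
    folly::Synchronized<std::vector<int>, std::shared_mutex> withStdSharedMutex;
```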

`Synchronized` provides slightly different APIs when instantiated with a
shared mutex type or an upgrade mutex type than with a plain exclusive mutex.
When instantiated with either of those two mutex types (detected by the
presence of a `lock_shared()` member function), the `Synchronized` object has
corresponding `wlock()`, `rlock()`, or `ulock()` methods to acquire different
lock types.  When using a shared or upgrade mutex type, these APIs ensure that
callers make an explicit choice to acquire a shared, exclusive, or upgrade
lock and that callers do not unintentionally lock the mutex in the incorrect
mode.  The `rlock()` APIs only provide `const` access to the underlying data,
ensuring that it cannot be modified while only holding a shared lock.
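
For instance (an illustrative sketch):

``` Cpp
    folly::Synchronized<std::map<std::string, int>> counts;

    // OK: size() is a const member function
    auto n = counts.rlock()->size();

    // Would not compile: rlock() only grants const access
    // counts.rlock()->clear();
```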

#### Constructors

The default constructor default-initializes the data and its
associated mutex.

The copy constructor locks the source for reading and copies its
data into the target. (The target is not locked as an object
under construction is only accessed by one thread.)

Finally, `Synchronized<T>` defines an explicit constructor that
takes an object of type `T` and copies it. For example:

``` Cpp
    // Default constructed
    Synchronized<map<string, int>> syncMap1;

    // Copy constructed
    Synchronized<map<string, int>> syncMap2(syncMap1);

    // Initializing from an existing map
    map<string, int> init;
    init["world"] = 42;
    Synchronized<map<string, int>> syncMap3(init);
    EXPECT_EQ(syncMap3.rlock()->size(), 1);
```

#### Assignment, swap, and copying

The copy assignment operator copies the underlying source data
into a temporary with the source mutex locked, and then moves the
temporary into the destination data with the destination mutex
locked. This technique avoids the need to lock both mutexes at
the same time. Mutexes are not copied or moved.
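
Conceptually, copy assignment behaves roughly like the following sketch (an
illustration of the technique just described, not the actual implementation):

``` Cpp
    void copyAssignSketch(Synchronized<vector<string>>& dest,
                          const Synchronized<vector<string>>& src) {
      // Copy while only the source mutex is held...
      vector<string> temp = *src.rlock();
      // ...then move while only the destination mutex is held.
      *dest.wlock() = std::move(temp);
    }
```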

The move assignment operator assumes the source object is a true
rvalue and does not lock the source mutex. It moves the source
data into the destination data with the destination mutex locked.

`swap` acquires locks on both mutexes in increasing order of
object address, and then swaps the underlying data. This avoids
potential deadlock, which may otherwise happen should one thread
do `a.swap(b)` while another thread does `b.swap(a)`.

The data copy assignment operator copies the parameter into the
destination data while the destination mutex is locked.

The data move assignment operator moves the parameter into the
destination data while the destination mutex is locked.

To get a copy of the guarded data, there are two methods
available: `void copy(T*)` and `T copy()`. The first copies data
to a provided target and the second returns a copy by value. Both
operations are done under a read lock. Example:

``` Cpp
    Synchronized<vector<string>> syncVec1, syncVec2;
    vector<string> vec;

    // Assign
    syncVec1 = syncVec2;
    // Assign straight from vector
    syncVec1 = vec;

    // Swap
    syncVec1.swap(syncVec2);
    // Swap with vector
    syncVec1.swap(vec);

    // Copy to given target
    syncVec1.copy(&vec);
    // Get a copy by value
    auto copy = syncVec1.copy();
```

#### `lock()`

If the mutex type used with `Synchronized` is a simple exclusive mutex
type (as opposed to a shared mutex), `Synchronized<T>` provides a
`lock()` method that returns a `LockedPtr<T>` to access the data while
holding the lock.

The `LockedPtr` object returned by `lock()` holds the lock for as long
as it exists.  Whenever possible, prefer declaring a separate inner
scope for storing this variable, to make sure the `LockedPtr` is
destroyed as soon as the lock is no longer needed:

``` Cpp
    void fun(Synchronized<vector<string>, std::mutex>& vec) {
      {
        auto locked = vec.lock();
        locked->push_back("hello");
        locked->push_back("world");
      }
      LOG(INFO) << "successfully added greeting";
    }
```

#### `wlock()` and `rlock()`

If the mutex type used with `Synchronized` is a shared mutex type,
`Synchronized<T>` provides a `wlock()` method that acquires an exclusive
lock, and an `rlock()` method that acquires a shared lock.

The `LockedPtr` returned by `rlock()` only provides const access to the
internal data, to ensure that it cannot be modified while only holding a
shared lock.

``` Cpp
    int computeSum(const Synchronized<vector<int>>& vec) {
      int sum = 0;
      auto locked = vec.rlock();
      for (int n : *locked) {
        sum += n;
      }
      return sum;
    }

    void doubleValues(Synchronized<vector<int>>& vec) {
      auto locked = vec.wlock();
      for (int& n : *locked) {
        n *= 2;
      }
    }
```

This example brings us to a cautionary discussion.  The `LockedPtr`
object returned by `lock()`, `wlock()`, or `rlock()` only holds the lock
as long as it exists.  This object makes it difficult to access the data
without holding the lock, but not impossible.  In particular you should
never store a raw pointer or reference to the internal data for longer
than the lifetime of the `LockedPtr` object.

For instance, if we had written the following code in the examples
above, it would have continued accessing the vector after the lock had
been released:

``` Cpp
    // No. NO. NO!
    for (int& n : *vec.wlock()) {
      n *= 2;
    }
```

The `vec.wlock()` return value is destroyed in this case as soon as the
internal range iterators are created.  The range iterators point into
the vector's data, but the lock is released immediately, before the
loop body executes.

Needless to say, this is a crime punishable by long debugging nights.

Range-based for loops are slightly subtle about the lifetime of objects
used in the initializer statement.  Most other problematic use cases are
a bit easier to spot than this, since the lifetime of the `LockedPtr` is
more explicitly visible.
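
The safe way to write such a loop with the `lock()`-style API is to keep the
`LockedPtr` alive in a named variable for the whole loop, as `doubleValues()`
above does:

``` Cpp
    {
      auto locked = vec.wlock();
      for (int& n : *locked) {  // the lock is held until `locked` goes out of scope
        n *= 2;
      }
    }
```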

#### `withLock()`

As an alternative to the `lock()` API, `Synchronized` also provides a
`withLock()` method that executes a function or lambda expression while
holding the lock.  The function receives a reference to the data as its
only argument.

This has a few benefits compared to `lock()`:

* The lambda expression requires its own nested scope, making critical
  sections more visible in the code.  Callers are encouraged to define
  a new scope when using `lock()`, but it is not required.
  `withLock()` ensures that a new scope must always be defined.
* Because a new scope is required, `withLock()` also helps encourage
  users to release the lock as soon as possible.  Because the critical
  section scope is easily visible in the code, it is harder to
  accidentally put extraneous code inside the critical section without
  realizing it.
* The separate lambda scope makes it more difficult to store raw
  pointers or references to the protected data and continue using those
  pointers outside the critical section.

For example, `withLock()` makes the range-based for loop mistake from
above much harder to accidentally run into:

``` Cpp
    vec.withLock([](auto& locked) {
      for (int& n : locked) {
        n *= 2;
      }
    });
```

This code does not have the same problem as the counter-example with
`wlock()` above, since the lock is held for the duration of the loop.

When using `Synchronized` with a shared mutex type, it provides separate
`withWLock()` and `withRLock()` methods instead of `withLock()`.
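
For example, `computeSum()` from above could be written with `withRLock()`
(a sketch of the same logic):

``` Cpp
    int computeSum(const Synchronized<vector<int>>& vec) {
      return vec.withRLock([](const auto& data) {
        int sum = 0;
        for (int n : data) {
          sum += n;
        }
        return sum;
      });
    }
```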

#### `ulock()` and `withULockPtr()`

`Synchronized` also supports upgrading and downgrading mutex lock levels as
long as the mutex type used to instantiate the `Synchronized` type has the
same interface as the mutex types in the C++ standard library, or if
`LockTraits` is specialized for the mutex type and the specialization is
visible. See below for an intro to upgrade mutexes.

An upgrade lock can be acquired as usual either with the `ulock()` method or
the `withULockPtr()` method, like so:

``` Cpp
    {
      // only const access allowed to the underlying object when an upgrade lock
      // is acquired
      auto ulock = vec.ulock();
      auto newSize = ulock->size();
    }

    auto newSize = vec.withULockPtr([](auto ulock) {
      // only const access allowed to the underlying object when an upgrade lock
      // is acquired
      return ulock->size();
    });
```

An upgrade lock acquired via `ulock()` or `withULockPtr()` can be upgraded or
downgraded by calling any of the following methods on the `LockedPtr` proxy:

* `moveFromUpgradeToWrite()`
* `moveFromWriteToUpgrade()`
* `moveFromWriteToRead()`
* `moveFromUpgradeToRead()`

Calling these leaves the `LockedPtr` object on which the method was called in
an invalid `null` state and returns another `LockedPtr` proxy holding the
specified lock.  The upgrade or downgrade is done atomically - the
`Synchronized` object is never in an unlocked state during the lock state
transition.  For example:

``` Cpp
    auto ulock = obj.ulock();
    if (ulock->needsUpdate()) {
      auto wlock = ulock.moveFromUpgradeToWrite();

      // ulock is now null

      wlock->updateObj();
    }
```

This "move" can also occur in the context of a `withULockPtr()`
(`withWLockPtr()` or `withRLockPtr()` work as well!) function, like so:

``` Cpp
    auto newSize = obj.withULockPtr([](auto ulock) {
      if (ulock->needsUpdate()) {

        // release upgrade lock and acquire write lock atomically
        auto wlock = ulock.moveFromUpgradeToWrite();
        // ulock is now null
        wlock->updateObj();

        // release write lock and acquire read lock atomically
        auto rlock = wlock.moveFromWriteToRead();
        // wlock is now null
        return rlock->newSize();

      } else {

        // release upgrade lock and acquire read lock atomically
        auto rlock = ulock.moveFromUpgradeToRead();
        // ulock is now null
        return rlock->newSize();
      }
    });
```

#### Intro to upgrade mutexes

An upgrade mutex is a shared mutex with an extra state called `upgrade` and an
atomic state transition from `upgrade` to `unique`. The `upgrade` state is more
powerful than the `shared` state but less powerful than the `unique` state.

An upgrade lock permits only const access to shared state for doing reads. It
does not permit mutable access to shared state for doing writes. Only a unique
lock permits mutable access for doing writes.

An upgrade lock may be held concurrently with any number of shared locks on the
same mutex. An upgrade lock is exclusive with other upgrade locks and unique
locks on the same mutex - only one upgrade lock or unique lock may be held at a
time.

The upgrade mutex solves the problem of doing a read of shared state and then
optionally doing a write to shared state efficiently under contention. Consider
this scenario with a shared mutex:

``` Cpp
    struct MyObject {
      bool isUpdateRequired() const;
      void doUpdate();
    };

    struct MyContainingObject {
      folly::Synchronized<MyObject> sync;

      void mightHappenConcurrently() {
        // first check
        if (!sync.rlock()->isUpdateRequired()) {
          return;
        }
        sync.withWLock([&](auto& state) {
          // second check
          if (!state.isUpdateRequired()) {
            return;
          }
          state.doUpdate();
        });
      }
    };
```

Here, the second `isUpdateRequired` check happens under a unique lock. This
means that the second check cannot be done concurrently with other threads doing
first `isUpdateRequired` checks under the shared lock, even though the second
check, like the first check, is read-only and requires only const access to the
shared state.

This may even introduce unnecessary blocking under contention. Since the default
mutex type, `folly::SharedMutex`, has write priority, the unique lock protecting
the second check may introduce unnecessary blocking to all the other threads
that are attempting to acquire a shared lock to protect the first check. This
problem is called reader starvation.

One solution is to use a shared mutex type with read priority, such as
`folly::SharedMutexReadPriority`. That can introduce less blocking under
contention to the other threads attempting to acquire a shared lock to do the
first check. However, that may backfire and cause threads which are attempting
to acquire a unique lock (for the second check) to stall, waiting for a moment
in time when there are no shared locks held on the mutex, a moment in time that
may never even happen. This problem is called writer starvation.

Starvation is a tricky problem to solve in general. But we can partially side-
step it in our case.

An alternative solution is to use an upgrade lock for the second check. Threads
attempting to acquire an upgrade lock for the second check do not introduce
unnecessary blocking to all other threads that are attempting to acquire a
shared lock for the first check. Only after the second check passes, and the
upgrade lock transitions atomically from an upgrade lock to a unique lock, does
the unique lock introduce *necessary* blocking to the other threads attempting
to acquire a shared lock. With this solution, unlike the solution without the
upgrade lock, the second check may be done concurrently with all other first
checks rather than blocking or being blocked by them.

The example would then look like:

``` Cpp
    struct MyObject {
      bool isUpdateRequired() const;
      void doUpdate();
    };

    struct MyContainingObject {
      folly::Synchronized<MyObject> sync;

      void mightHappenConcurrently() {
        // first check
        if (!sync.rlock()->isUpdateRequired()) {
          return;
        }
        sync.withULockPtr([&](auto ulock) {
          // second check
          if (!ulock->isUpdateRequired()) {
            return;
          }
          auto wlock = ulock.moveFromUpgradeToWrite();
          wlock->doUpdate();
        });
      }
    };
```

Note: Some shared mutex implementations offer an atomic state transition from
`shared` to `unique` and some upgrade mutex implementations offer an atomic
state transition from `shared` to `upgrade`. These atomic state transitions are
dangerous, however, and can deadlock when done concurrently on the same mutex.
For example, if threads A and B both hold shared locks on a mutex and are both
attempting to transition atomically from shared to upgrade locks, the threads
are deadlocked. Likewise if they are both attempting to transition atomically
from shared to unique locks, or one is attempting to transition atomically from
shared to upgrade while the other is attempting to transition atomically from
shared to unique. Therefore, `LockTraits` does not expose either of these
dangerous atomic state transitions even when the underlying mutex type supports
them. Likewise, `Synchronized`'s `LockedPtr` proxies do not expose these
dangerous atomic state transitions either.

#### Timed Locking

When `Synchronized` is used with a mutex type that supports timed lock
acquisition, `lock()`, `wlock()`, and `rlock()` can all take an optional
`std::chrono::duration` argument.  This argument specifies a timeout to
use for acquiring the lock.  If the lock is not acquired before the
timeout expires, a null `LockedPtr` object will be returned.  Callers
must explicitly check the return value before using it:

``` Cpp
    void fun(Synchronized<vector<string>>& vec) {
      {
        // 10ms assumes `using namespace std::chrono_literals;`
        auto locked = vec.wlock(10ms);
        if (!locked) {
          throw std::runtime_error("failed to acquire lock");
        }
        locked->push_back("hello");
        locked->push_back("world");
      }
      LOG(INFO) << "successfully added greeting";
    }
```
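
The shared-lock variant works the same way. For instance (an illustrative
sketch):

``` Cpp
    void logSize(const Synchronized<vector<string>>& vec) {
      // rlock(10ms) returns a null LockedPtr if the lock is not acquired in time
      if (auto locked = vec.rlock(10ms)) {
        LOG(INFO) << "the vector contains " << locked->size() << " entries";
      } else {
        LOG(WARNING) << "timed out waiting for the lock";
      }
    }
```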

#### `unlock()` and `scopedUnlock()`

`Synchronized` is a good mechanism for enforcing scoped
synchronization, but it has the inherent limitation that it
requires the critical section to be, well, scoped. Sometimes the
code structure requires a fleeting "escape" from the iron fist of
synchronization, while still inside the critical section scope.

One common pattern is releasing the lock early on error code paths,
prior to logging an error message.  The `LockedPtr` class provides an
`unlock()` method that makes this possible:

``` Cpp
    Synchronized<map<int, string>> dic;
    ...
    {
      auto locked = dic.rlock();
      auto iter = locked->find(0);
      if (iter == locked->end()) {
        locked.unlock();  // don't hold the lock while logging
        LOG(ERROR) << "key 0 not found";
        return false;
      }
      processValue(*iter);
    }
    LOG(INFO) << "succeeded";
```

For more complex nested control flow scenarios, `scopedUnlock()` returns
an object that will release the lock for as long as it exists, and will
reacquire the lock when it goes out of scope.

``` Cpp

    Synchronized<map<int, string>> dic;
    ...
    {
      auto locked = dic.wlock();
      auto iter = locked->find(0);
      if (iter == locked->end()) {
        {
          auto unlocker = locked.scopedUnlock();
          LOG(INFO) << "Key 0 not found, inserting it.";
        }
        locked->emplace(0, "zero");
      } else {
        iter->second = "zero";
      }
    }
```

Clearly `scopedUnlock()` comes with specific caveats and
liabilities. You must assume that during the `scopedUnlock()`
section, other threads might have changed the protected structure
in arbitrary ways. In the example above, you cannot use the
iterator `iter` and you cannot assume that the key `0` is not in the
map; another thread might have inserted it while you were
bragging on `LOG(INFO)`.

Whenever a `LockedPtr` object has been unlocked, whether with `unlock()`
or `scopedUnlock()`, it will behave as if it is null.  `isNull()` will
return true.  Dereferencing an unlocked `LockedPtr` is not allowed and
will result in undefined behavior.
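
For example (an illustrative sketch):

``` Cpp
    auto locked = dic.wlock();
    locked.unlock();
    CHECK(locked.isNull());  // !locked is also true at this point
    // locked->size();       // undefined behavior: the LockedPtr is null
```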

#### `Synchronized` and `std::condition_variable`

When used with a `std::mutex`, `Synchronized` supports using a
`std::condition_variable` with its internal mutex.  This allows a
`condition_variable` to be used to wait for a particular change to occur
in the internal data.

The `LockedPtr` returned by `Synchronized<T, std::mutex>::lock()` has an
`as_lock()` method that returns a reference to a
`std::unique_lock<std::mutex>`, which can be given to the
`std::condition_variable`:

``` Cpp
    Synchronized<vector<string>, std::mutex> vec;
    std::condition_variable emptySignal;

    // Assuming some other thread will put data on vec and signal
    // emptySignal, we can then wait on it as follows:
    auto locked = vec.lock();
    emptySignal.wait(locked.as_lock(),
                     [&] { return !locked->empty(); });
```
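
The producing side might look like this (a sketch assuming the same `vec` and
`emptySignal`):

``` Cpp
    // Producer: append an element under the lock, then wake the waiter.
    vec.lock()->push_back("hello");
    emptySignal.notify_one();
```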

### `acquireLocked()`

Sometimes locking just one object won't cut the mustard. Consider a
function that needs to lock two `Synchronized` objects at the
same time - for example, to copy some data from one to the other.
At first sight, it looks like sequential `wlock()` calls will work just
fine:

``` Cpp
    void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
      auto lockedA = a.wlock();
      auto lockedB = b.wlock();
      ... use lockedA and lockedB ...
    }
```

This code compiles and may even run most of the time, but embeds
a deadly peril: if one thread calls `fun(x, y)` and another
thread calls `fun(y, x)`, then the two threads are liable to
deadlock, as each thread will be waiting for a lock the other
is holding. This issue is a classic that applies regardless of
the fact that the objects involved have the same type.

This classic problem has a classic solution: all threads must
acquire locks in the same order. The actual order is not
important, just the fact that the order is the same in all
threads. Many libraries simply acquire mutexes in increasing
order of their address, which is what we'll do, too. The
`acquireLocked()` function takes care of all details of proper
locking of two objects and offering their innards.  It returns a
`std::tuple` of `LockedPtr`s:

``` Cpp
    void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
      auto ret = folly::acquireLocked(a, b);
      auto& lockedA = std::get<0>(ret);
      auto& lockedB = std::get<1>(ret);
      ... use lockedA and lockedB ...
    }
```

Note that C++17 introduces
[structured binding syntax](http://wg21.link/P0144r2)
which makes the returned tuple more convenient to use:

``` Cpp
    void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
      auto [lockedA, lockedB] = folly::acquireLocked(a, b);
      ... use lockedA and lockedB ...
    }
```

An `acquireLockedPair()` function is also available, which returns a
`std::pair` instead of a `std::tuple`.  This is more convenient to use
in many situations, until compiler support for structured bindings is
more widely available.
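
For example (an illustrative sketch):

``` Cpp
    void appendAll(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
      auto ret = folly::acquireLockedPair(a, b);
      // Both vectors stay locked while a copies b's contents.
      ret.first->insert(ret.first->end(),
                        ret.second->begin(), ret.second->end());
    }
```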

### Synchronizing several data items with one mutex

The library is geared at protecting one object of a given type
with a mutex. However, sometimes we'd like to protect two or more
members with the same mutex. Consider for example a bidirectional
map, i.e. a map that holds an `int` to `string` mapping and also
the converse `string` to `int` mapping. The two maps would need
to be manipulated simultaneously. There are at least two designs
that come to mind.

#### Using a nested `struct`

You can easily pack the needed data items in a little struct.
For example:

``` Cpp
    class Server {
      struct BiMap {
        map<int, string> direct;
        map<string, int> inverse;
      };
      Synchronized<BiMap> bimap_;
      ...
    };
    ...
    bimap_.withWLock([](auto& locked) {
      locked.direct[0] = "zero";
      locked.inverse["zero"] = 0;
    });
```

With this code in tow you get to use `bimap_` just like any other
`Synchronized` object, without much effort.
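
Reads work the same way. For instance, a lookup helper might look like this
(a sketch assuming the `BiMap` struct above and `std::optional`):

``` Cpp
    std::optional<string> findDirect(int key) {
      return bimap_.withRLock([&](const BiMap& m) -> std::optional<string> {
        auto it = m.direct.find(key);
        if (it == m.direct.end()) {
          return std::nullopt;
        }
        return it->second;
      });
    }
```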

#### Using `std::tuple`

If you won't stop short of using a spaceship-era approach,
`std::tuple` is there for you. The example above could be
rewritten for the same functionality like this:

``` Cpp
    class Server {
      Synchronized<tuple<map<int, string>, map<string, int>>> bimap_;
      ...
    };
    ...
    bimap_.withWLock([](auto& locked) {
      get<0>(locked)[0] = "zero";
      get<1>(locked)["zero"] = 0;
    });
```

The code uses `std::get` with compile-time integers to access the
fields in the tuple. The relative advantages and disadvantages of
using a local struct vs. `std::tuple` are quite obvious - in the
first case you need to invest in the definition, in the second
case you need to put up with slightly more verbose and less clear
access syntax.

### Summary

`Synchronized` and its supporting tools offer you a simple,
robust paradigm for mutual exclusion-based concurrency. Instead
of manually pairing data with the mutexes that protect it and
relying on convention to use them appropriately, you can benefit
from encapsulation and typechecking to offload a large part of
that task and to provide good guarantees.