Development notes regarding libbdev, by David van Moolenbroek.


GENERAL MODEL

This library is designed mainly for use by file servers. It essentially covers
two use cases: 1) use of the block device that contains the file system itself,
and 2) use of any block device for raw block I/O (on unmounted file systems)
performed by the root file server. In the first case, the file server is
responsible for opening and closing the block device, and recovery from a
driver restart involves reopening those minor devices. Regular file systems
should have one or at most two (for a separate journal) block devices open at
the same time, which is why NR_OPEN_DEVS is set to a value that is quite low.
In the second case, VFS is responsible for opening and closing the block device
(and performing IOCTLs), as well as reopening the block device on a driver
restart -- the root file server only gets raw I/O (and flush) requests.
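
For illustration, the sketch below shows roughly what the first use case looks
like from a file server's point of view: the file server opens the device
itself and then performs I/O through libbdev. The bdev_*() prototypes and
flags are assumed to match <minix/bdev.h>, and the superblock offset and size
are made up for the example.

    /* Sketch of use case 1): a file server opening the device that holds
     * its own file system and reading a (hypothetical) superblock from it.
     */
    #include <minix/drivers.h>
    #include <minix/bdev.h>
    #include <errno.h>

    #define SUPER_OFFSET    1024L       /* hypothetical superblock offset */
    #define SUPER_SIZE      512         /* hypothetical superblock size */

    static char super_buf[SUPER_SIZE];

    int read_super(dev_t dev)
    {
        ssize_t n;
        int r;

        /* In this use case, the file server itself opens (and eventually
         * closes) the minor device; libbdev records it so that it can be
         * reopened after a driver restart.
         */
        if ((r = bdev_open(dev, BDEV_R_BIT | BDEV_W_BIT)) != OK)
            return r;

        n = bdev_read(dev, SUPER_OFFSET, super_buf, SUPER_SIZE,
            BDEV_NOFLAGS);

        if (n != SUPER_SIZE) {
            bdev_close(dev);
            return (n < 0) ? (int)n : EIO;
        }

        return OK;
    }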

At this time, libbdev considers only clean crashes (a crash-only model), and
does not support recovery from behavioral errors. Protocol errors are passed to
the user process, and generally do not have an effect on the overall state of
the library.


RETRY MODEL

The philosophy for recovering from driver restarts in libbdev can be formulated
as follows: we want to tolerate an unlimited number of driver restarts over a
long time, but we do not want to keep retrying individual requests across
driver restarts. As such, we do not keep track of driver restarts on a per-
driver basis, because that would mean we put a hard limit on the number of
restarts for that driver in total. Instead, there are two limits: a driver
restart limit that is kept on a per-request basis, failing only that request
when the limit is reached, and a driver restart limit that is kept during
recovery, limiting the number of restarts and eventually giving up on the
entire driver when even the recovery keeps failing (as no progress is made in
that case).
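
The sketch below pictures these two limits. The constants, structure, and
helper functions are purely illustrative and do not reflect libbdev's actual
internals.

    /* Hypothetical sketch of the two restart limits described above. */
    #define RETRIES_PER_REQUEST     3   /* restarts one request survives */
    #define RETRIES_PER_RECOVERY    3   /* restarts within one recovery */

    struct call {                       /* one outstanding request */
        int driver_retries;             /* driver restarts seen by it */
    };

    /* Placeholders for the real recovery, reissue, and failure paths. */
    int recover(void);
    void give_up_on_driver(void);
    void reissue_request(struct call *cp);
    void fail_request(struct call *cp);

    /* Invoked when the driver is found to have restarted. */
    void driver_restarted(struct call *calls, int ncalls)
    {
        int i, recovery_tries = 0;

        /* Reopen our minor devices on the new driver instance. If the
         * recovery itself keeps failing, no progress is being made, so
         * give up on the entire driver rather than retrying forever.
         */
        while (!recover()) {
            if (++recovery_tries >= RETRIES_PER_RECOVERY) {
                give_up_on_driver();
                return;
            }
        }

        /* Each pending request carries its own restart count, so no
         * single request is retried across restarts indefinitely, while
         * the driver as a whole may restart an unlimited number of times.
         */
        for (i = 0; i < ncalls; i++) {
            if (++calls[i].driver_retries >= RETRIES_PER_REQUEST)
                fail_request(&calls[i]);
            else
                reissue_request(&calls[i]);
        }
    }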

Each transfer request also has a transfer retry count. The assumption here is
that when a transfer request returns EIO, it can be retried and possibly
succeed upon repetition. The driver restart and transfer retry counts are
tracked independently and thus the first to hit the limit will fail the
request. The behavior should be the same for synchronous and asynchronous
requests in this respect.
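
A minimal sketch of that decision, again with purely illustrative names and
limits:

    /* The two counts are independent; whichever hits its limit first
     * causes the request to be failed.
     */
    #define TRANSFER_RETRIES        4   /* illustrative EIO retry limit */
    #define RETRIES_PER_REQUEST     3   /* as in the previous sketch */

    struct transfer {
        int transfer_retries;           /* EIO replies seen so far */
        int driver_retries;             /* driver restarts seen so far */
    };

    /* Return 1 to retry the request, 0 to fail it; the same rule applies
     * to synchronous and asynchronous requests. 'driver_died' is nonzero
     * if the failure was a driver restart rather than an EIO reply.
     */
    int may_retry(struct transfer *tp, int driver_died)
    {
        if (driver_died)
            return ++tp->driver_retries < RETRIES_PER_REQUEST;

        /* An EIO reply is assumed to be possibly transient, so the
         * transfer may succeed upon repetition.
         */
        return ++tp->transfer_retries < TRANSFER_RETRIES;
    }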

It could happen that a new driver gets loaded after we have decided that the
current driver is unusable. This could be due to a race condition (VFS sends a
new-driver request after we've given up) or due to user interaction (the user
loads a replacement driver). The latter case may occur legitimately with raw
I/O on the root file server, so we must not mark the driver as unusable
forever. On the other hand, in the former case, we must not continue to send
I/O without first reopening the minor devices. For this reason, we do not clean
up the record of the minor devices when we mark a driver as unusable.
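
In the public interface, the caller announces a (new) driver through the
bdev_driver() call; the sketch below only illustrates the internal consequence
of the decision above, with a hypothetical table layout and helper names.

    /* Hypothetical sketch of accepting a new driver after we gave up on
     * the old one. Only the design point -- the open-minor records are
     * never cleaned up, so they can be reopened here -- is taken from
     * these notes.
     */
    #include <sys/types.h>

    #define NR_OPEN_DEVS    4            /* illustrative; see libbdev */

    struct driver_state {
        int usable;                      /* cleared when we give up */
        dev_t open_minors[NR_OPEN_DEVS]; /* minors opened through us */
        int nr_open;                     /* number of valid entries */
    };

    void reopen_minor(struct driver_state *dp, dev_t min_dev);

    /* Invoked when the caller announces a (new) driver endpoint. */
    void new_driver_announced(struct driver_state *dp)
    {
        int i;

        /* The new driver may be a legitimate replacement loaded by the
         * user, or the result of a stale new-driver request from VFS;
         * accept it in both cases rather than staying unusable forever.
         */
        dp->usable = 1;

        /* Reopen the recorded minor devices before any further I/O is
         * sent to the new driver instance.
         */
        for (i = 0; i < dp->nr_open; i++)
            reopen_minor(dp, dp->open_minors[i]);
    }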