#   @(#)TODO    5.10 (Berkeley) 08/02/92

NOTE: Changed the lookup on a page of inodes to search from the back
in case the same inode gets written twice on the same page.

Make sure that if you are writing a file, but not all the blocks
make it into a single segment, that you do not write the inode in
that segment.

Keith:
        Why not delete the lfs_bmapv call, and just mark everything dirty
        that isn't deleted/truncated?  Get some numbers on what
        percentage of the data the cleaner thinks might be live
        actually is live.  If it's high, get rid of lfs_bmapv.

        There is a nasty problem in that it may take *more* room to write
        the data to clean a segment than is returned by the new segment,
        because indirect blocks in segment 2 are dirtied by the data
        being copied into the log from segment 1.  The suggested solution
        at this point is to detect it when we have no space left on the
        filesystem, write the extra data into the last segment (leaving
        no clean ones), make it a checkpoint, and shut down the file system
        for repair by a utility that reads the raw partition.  The argument
        is that this should never happen and is practically impossible to
        fix, since the cleaner would theoretically have to build a model of
        the entire filesystem in memory to detect the condition occurring.
        A file-coalescing cleaner will help avoid the problem, and one
        that reads/writes from the raw disk could fix it.

DONE    Currently, inodes are being flushed to disk synchronously upon
        creation -- see ufs_makeinode.  However, only the inode
        is flushed; the directory "name" is written using VOP_BWRITE,
        so it's not synchronous.  Possible solutions:  1: get some
        ordering in the writes so that inode/directory entries get
        stuffed into the same segment.  2: do both synchronously.
        3: add Mendel's information into the stream so we log
        creation/deletion of inodes.  4: do some form of partial
        segment when changing the inode (creation/deletion/rename).
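The back-to-front lookup described in the NOTE above might be sketched as
follows.  This is a sketch only: the struct layout and function name are
hypothetical, not the actual kernel code; the point is that scanning from
the end of the page makes the most recently appended copy of a twice-written
inode win.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical on-disk inode: only the field this sketch needs. */
struct dinode {
        uint32_t di_inumber;            /* inode number */
};

/*
 * Find an inode on a page of inodes, scanning from the back so that
 * if the same inode got written twice on the page, the later (valid)
 * copy is the one returned.
 */
static struct dinode *
find_inode(struct dinode *page, size_t ninodes, uint32_t inumber)
{
        for (size_t i = ninodes; i-- > 0; )
                if (page[i].di_inumber == inumber)
                        return (&page[i]);
        return (NULL);
}
```

A forward scan would return the stale first copy; the reverse scan costs
nothing extra and is correct in both cases.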
DONE    Fix i_block increment for indirect blocks.
        If the file system is tar'd, extracted on top of another LFS, the
        IFILE ain't worth diddly.  Is the cleaner writing the IFILE?
        If not, let's make it read-only.
DONE    Delete unnecessary source from utils in main-line source tree.
DONE    Make sure that we're counting meta blocks in the inode i_block count.
        Overlap the version and nextfree fields in the IFILE.
DONE    Vinvalbuf (Kirk):
        Why write blocks that are no longer useful?
        Are the semantics of close such that blocks have to be flushed?
        How do we specify in the buf chain the blocks that don't need
        to be written?  (Different numbering of indirect blocks.)

Margo:
        Change so that we search only one sector of the inode block file
        for the inode, by using sector addresses in the ifile instead of
        logical disk addresses.
        Fix the use of the ifile version field to use the generation
        number instead.
DONE    Unmount; not doing a bgetvp (VHOLD) in lfs_newbuf call.
DONE    Document in the README file where the checkpoint information is
        on disk.
        Variable block sizes (Margo/Keith).
        Switch the byte accounting to sector accounting.
DONE    Check lfs.h and make sure that the #defines/structures are all
        actually needed.
DONE    Add a check in lfs_segment.c so that if the segment is empty,
        we don't write it.
        Need to keep vnode v_numoutput up to date for pending writes?
DONE    USENIX paper (Carl/Margo).


Evelyn:
        lfsck: If we delete a file that's being executed, the version
        number isn't updated, and lfsck has to figure this out; the case
        is the same as having an inode that no directory references,
        so the file should be reattached into lost+found.
        Recovery/fsck.

Carl:
        Investigate: clustering of reads (if blocks in the segment are
        ordered, we should read them all) and writes (McVoy paper).
        Investigate: should the access time be part of the IFILE?
                pro: theoretically, saves disk writes
                con: caching inodes should obviate this advantage;
                     the IFILE is already humongous
        Cleaner.
        Port to OSF/1 (Carl/Keith).
        Currently there's no notion of write error checking.
        + Failed data/inode writes should be rescheduled (kernel-level
          bad blocking).
        + Failed superblock writes should cause selection of a new
          superblock for checkpointing.

FUTURE FANTASIES:       ============

+ unrm, versioning
+ transactions
+ extended cleaner policies (hot/cold data, data placement)

==============================
Problem with the concept of multiple buffer headers referencing the segment:
Positives:
        Don't lock down 1 segment per file system of physical memory.
        Don't copy from buffers to segment memory.
        Don't tie down the bus to transfer 1M.
        Works on controllers supporting less than large transfers.
        Disk can start writing immediately, instead of waiting 1/2 rotation
        and the full transfer.
Negatives:
        Have to do the segment write and then the segment summary write,
        since the latter is what verifies that the segment is okay.  (Is
        there another way to do this?)
==============================

The algorithm for selecting the disk addresses of the super-blocks
has to be available to the user program which checks the file system.
(Currently in newfs; should become a common subroutine.)
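A sketch of what that common subroutine might look like, usable by both
newfs and the checker.  Everything here is an assumption for illustration:
the names, the cap on copies, and the placement policy (one copy at the
start of every interval-th segment) are not the actual newfs algorithm.

```c
#include <stdint.h>

#define MAXNUMSB 10     /* hypothetical cap on super-block copies */

/*
 * Compute the disk addresses (in sectors) of the replicated
 * super-blocks for a file system of nsegs segments of seg_sectors
 * sectors each, whose first super-block lives at "offset".  Fills
 * in addrs[] and returns the number of copies placed.  Sketch only:
 * the real policy lives in newfs and must match this exactly for
 * the checker to find the copies.
 */
static int
sb_addresses(uint32_t nsegs, uint32_t seg_sectors, uint32_t offset,
    uint32_t addrs[MAXNUMSB])
{
        uint32_t interval = nsegs / MAXNUMSB;
        int n = 0;

        if (interval == 0)
                interval = 1;
        for (uint32_t seg = 0; seg < nsegs && n < MAXNUMSB; seg += interval)
                addrs[n++] = offset + seg * seg_sectors;
        return (n);
}
```

Factoring this into a shared subroutine means newfs and the checker cannot
drift apart: both call the same code instead of duplicating the layout rule.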