/dragonfly/sys/vfs/hammer/ |
H A D | hammer_undo.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
* Implement a per-directory cache of new object IDs. Up to 128 directories will be managed in LRU fashion. The cache provides a pool of object IDs to better localize the object IDs of files created in a directory, so parallel operations on the filesystem do not create a fragmented object ID space.
* Cache numerous fields in the root volume's header to avoid creating undo records for them, greatly improving performance (ultimately we can sync an undo space representing the volume header using a direct comparison mechanic, but for now we assume the write of the volume header to be atomic).
* Implement a zone limit for the blockmap which newfs_hammer can install. The blockmap zones have an ultimate limit of 2^60 bytes, or around one million terabytes. If you create a 100G filesystem there is no reason to let the blockmap iterate over its entire range, as that would result in a lot of fragmentation and blockmap overhead. By default newfs_hammer sets the zone limit to 100x the size of the filesystem.
* Fix a bug in the crash recovery code. Do not sync newly added inodes once the flusher is running, otherwise the volume header can get out of sync. Just create a dummy marker structure and move it to the tail of the inode flush_list when the flush starts, and stop when we hit it.
* Adjust hammer_vfs_sync() to sync twice. The second sync is needed to update the volume header's undo fifo indices, otherwise HAMMER will believe that it must undo the last fully synchronized flush.
|
H A D | hammer_transaction.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_recover.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_blockmap.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_flusher.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_disk.h | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_vfsops.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer_inode.c | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|
H A D | hammer.h | 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
|