
Searched hist:"0729 c8c8" (Results 1 – 9 of 9) sorted by relevance

/dragonfly/sys/vfs/hammer/
hammer_undo.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

* Implement a per-directory cache of new object IDs. Up to 128 directories
will be managed in LRU fashion. The cache provides a pool of object
IDs to better localize the object IDs of files created in a directory,
so parallel operations on the filesystem do not create a fragmented
object ID space.
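The per-directory pool can be sketched in user space as follows; the structure names, the pool span, and the base-id arithmetic are illustrative assumptions, not HAMMER's actual data structures:

```c
#include <assert.h>
#include <stdint.h>

#define OBJID_CACHE_SIZE 128   /* directories tracked, evicted in LRU order */
#define OBJID_POOL_SPAN  128   /* hypothetical span of ids carved per directory */

struct objid_cache_entry {
    uint64_t dir_id;    /* directory this pool belongs to */
    uint64_t next_id;   /* next object id to hand out */
    uint64_t count;     /* ids remaining in this pool */
    uint64_t lru_tick;  /* larger = more recently used */
};

static struct objid_cache_entry cache[OBJID_CACHE_SIZE];
static uint64_t tick;
static uint64_t next_pool_base = 0x10000;  /* arbitrary illustrative base */

/* Allocate an object id localized to dir_id; on a miss, evict the
 * least recently used entry and carve a fresh contiguous pool. */
static uint64_t objid_alloc(uint64_t dir_id)
{
    struct objid_cache_entry *e, *victim = &cache[0];
    int i;

    for (i = 0; i < OBJID_CACHE_SIZE; ++i) {
        e = &cache[i];
        if (e->dir_id == dir_id && e->count) {
            e->lru_tick = ++tick;
            --e->count;
            return e->next_id++;
        }
        if (e->lru_tick < victim->lru_tick)
            victim = e;
    }
    victim->dir_id = dir_id;
    victim->next_id = next_pool_base;
    victim->count = OBJID_POOL_SPAN - 1;
    victim->lru_tick = ++tick;
    next_pool_base += OBJID_POOL_SPAN;
    return victim->next_id++;
}
```

Because each directory draws from its own contiguous span, files created concurrently in different directories get ids that cluster per directory instead of interleaving.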

* Cache numerous fields in the root volume's header to avoid creating
undo records for them, greatly improving performance.

(ultimately we can sync an undo space representing the volume header
using a direct comparison mechanic but for now we assume the write of
the volume header to be atomic).

* Implement a zone limit for the blockmap which newfs_hammer can install.
The blockmap zones have an ultimate limit of 2^60 bytes, or around
one million terabytes. If you create a 100G filesystem there is no
reason to let the blockmap iterate over its entire range as that would
result in a lot of fragmentation and blockmap overhead. By default
newfs_hammer sets the zone limit to 100x the size of the filesystem.
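The sizing rule above amounts to a simple clamp; a minimal sketch (the helper name and the overflow handling are assumptions, not newfs_hammer's code):

```c
#include <assert.h>
#include <stdint.h>

#define HAMMER_ZONE_MAX ((uint64_t)1 << 60)  /* architectural blockmap zone limit */

/* Hypothetical helper mirroring the described default: limit the zone
 * to 100x the filesystem size, capped at the 2^60-byte maximum. */
static uint64_t zone_limit(uint64_t fs_bytes)
{
    uint64_t limit = fs_bytes * 100;

    /* Cap on multiplication overflow or past the architectural limit. */
    if (limit / 100 != fs_bytes || limit > HAMMER_ZONE_MAX)
        limit = HAMMER_ZONE_MAX;
    return limit;
}
```

For a 100G filesystem this confines blockmap iteration to roughly 10T of zone space rather than the full 2^60-byte range.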

* Fix a bug in the crash recovery code. Do not sync newly added inodes
once the flusher is running, otherwise the volume header can get out
of sync. Just create a dummy marker structure and move it to the tail
of the inode flush_list when the flush starts, and stop when we hit it.

* Adjust hammer_vfs_sync() to sync twice. The second sync is needed to
update the volume header's undo fifo indices, otherwise HAMMER will
believe that it must undo the last fully synchronized flush.
hammer_transaction.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_recover.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_blockmap.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_flusher.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_disk.h 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_vfsops.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer_inode.c 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations

hammer.h 0729c8c8 Tue Apr 29 01:10:37 GMT 2008 Matthew Dillon <dillon@dragonflybsd.org> HAMMER 39/Many: Parallel operations optimizations
