History log of /dragonfly/sbin/jscan/jscan.c (Results 1 – 16 of 16)
Revision        Date        Author        Comments
Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2, v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.7.0, v3.4.3, v3.4.2, v3.4.0, v3.4.1, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0
# 86d7f5d3 26-Nov-2011 John Marino <draco@marino.st>

Initial import of binutils 2.22 on the new vendor branch

Future versions of binutils will also reside on this branch rather
than continuing to create new binutils branches for each new version.


Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0
# a276dc6b 21-Aug-2009 Matthew Dillon <dillon@apollo.backplane.com>

AMD64 - AUDIT RUN - Fix format strings, size_t, and other issues


Revision tags: v2.3.2, v2.3.1, v2.2.1, v2.2.0
# 8360e2de 08-Feb-2009 Thomas Nikolajsen <thomas@dragonflybsd.org>

jscan: Fix SYNOPSIS and sync usage()


Revision tags: v2.3.0, v2.1.1, v2.0.1
# 3641b7ca 05-Jun-2008 Sascha Wildner <swildner@dragonflybsd.org>

* Fix some cases where NULL was used but 0 was meant (and vice versa).

* Remove some bogus casts of NULL to (void *).


# 71d5b5d4 09-Aug-2007 Sascha Wildner <swildner@dragonflybsd.org>

Fix typo.

Reported by: Michael Neumann <mneumann@ntecs.de>


# dc4cb79a 26-Jan-2007 Matthew Dillon <dillon@dragonflybsd.org>

Implement -D

Submitted-by: "Steve O'Hara-Smith" <steve@sohara.org>


# c0c3e80b 06-Nov-2005 Sascha Wildner <swildner@dragonflybsd.org>

Cleanup:

- In function definitions, move the type onto a line of its own.


# 36456e49 07-Sep-2005 Matthew Dillon <dillon@dragonflybsd.org>

Rework and expand the algorithms in JSCAN, part 6/?.

Preliminary support for UNDO (-u) on mirrors. We now have the ability
to roll a mirror forwards or backwards by a specific number of raw journaling
records (-c), or to roll it backwards completely. However, only basic
remove and write UNDOs currently work, UNDOs for other transaction types
and UNDOs that start mid-transaction are untested.


# a9cb3889 07-Sep-2005 Matthew Dillon <dillon@dragonflybsd.org>

Rework and expand the algorithms in JSCAN, part 5/?.

Revamp the scanning code, cleaning up the API considerably and allowing
forward or backwards reads relative to a supplied record, which makes
scanning a lot more intuitive.

Implement the -c option to specify the number of raw records to process.
This is primarily intended for running a mirror forwards or backwards by
a certain number of records.

Implement mid-transaction recovery. Mid-transaction recovery is needed
when a batch operation has stopped at a raw record that is in the middle
of a meta-transaction (a meta-transaction can consist of many raw records).
Stoppage can occur for many reasons, such as the batch being run while
the system is still writing out the records related to a meta-transaction,
or due to journal restarts, unexpected program termination, or command-line
limited record counts with -c.

This feature has been tested a little with -m -c 1 to iterate a mirror
forwards in time one raw record at a time across a large WRITE transaction
that covers many raw records. It is a pre-requisite for being able to
iterate mirrors forwards and backwards in time.
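
An illustrative sketch only (not part of the commit): the journal prefix
path is hypothetical, and the trailing-argument form is assumed from the
mirroring example in the 05-Jul-2005 entry further down.

# hypothetical: advance the mirror in the current directory by exactly one
# raw record per run, repeating the command to step forwards through a
# large WRITE transaction
cd /usr_mirror
jscan -m -c 1 /var/journal/usr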


# cbeb73b9 06-Sep-2005 Matthew Dillon <dillon@dragonflybsd.org>

Rework and expand the algorithms in JSCAN, part 3/?.

* Have the debug dump print out raw record transaction ids.
* Fix a number of bugs in the prefix set code (the files were being
strtoul'd in base 10 rather than base 16).
* Make multiple commands on the command line work (e.g. -w and -m together).
* Fix a few jseek() bugs.
* Add '-f' which operates like tail -f and allows a batch operation on e.g.
a prefix file set to poll for more data.

Still TODO: ACK protocol, raw output (-o/-O) protocol, temporary mode for
record files (-W), and many other things.
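
For illustration only (not from the commit): a poll-style run using the -f
option added here; the prefix path is hypothetical and the flag combination
is an assumption based on the descriptions above.

# hypothetical: keep the mirror in the current directory following a live
# prefix file set, polling for new records as they arrive (tail -f style)
cd /usr_mirror
jscan -f -m /var/journal/usr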


# bb406b71 06-Sep-2005 Matthew Dillon <dillon@dragonflybsd.org>

Rework and expand the algorithms in JSCAN, part 2/?.

* Implement session tracking so multiple output commands can be independently
tracked. However, currently only one output command may be specified
(e.g. -w or -m, not both, yet)
* The session code now updates transaction id files, giving us a general
restart capability on the userland side.
* Prefix set record mode (-w) should now work, including journal restarts.
* Running a mirror (-m) on an active prefix set should now operate properly
in batch mode.

Many things are still not working yet.
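
For illustration only (not from the commit): a sketch of the decoupled
record/replay workflow these pieces aim at, per the part 1/2 entry below.
The paths are hypothetical, and whether -w takes the prefix set as its
argument in this form is an assumption.

# hypothetical: one jscan records the live journal into a prefix set
mountctl -a /usr:test | jscan -w /var/journal/usr stdin
# hypothetical: a second jscan, run later in batch, replays that prefix
# set onto a mirror (restart points are kept in transaction id files,
# per the session tracking described above)
cd /usr_mirror
jscan -m /var/journal/usr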


# c2b2044a 06-Sep-2005 Matthew Dillon <dillon@dragonflybsd.org>

Rework and expand the algorithms in JSCAN, part 1/2. Implement a new
infrastructure in the userland jscan program to create a framework for
supporting the following:

* Pipe-throughs (without storing data in a buffering file).
* File-based buffering
* Streamed replication of journaled data.
* Sequenced output files limited to a specific size (to match against
backup media)
* Transaction id tracking to support journal restarts and resynchronization.
This is needed to make off-site and networked backups reliable.
* Decoupled journal operations. e.g. (once finished) we will be able to run
a jscan in realtime to record journal data, and another one in batch to
take that data and use it to maintain a mirror or to send it to another
machine in a restartable and resynchronizable manner.
* Infrastructure to be able to scan a journal forwards or backwards by
an arbitrary number of raw records and maintain a mirror in both
the forwards (redo) and backwards (undo) direction based on that ability.
* Infrastructure that will aid in the preferred backup methodology of
maintaining a real time mirror and UNDO/REDO journal on a 100% full
backup partition such that one is able to free space on the partition
simply by removing the oldest journaling file.
* Infrastructure that will make it easier to solve the overlapping
meta-transaction problem illustrated below:

current synchronization point
V
[ ][ ][ ][ ][ ][ ][ ][ ][ ][ ][ ][ ] <= Raw records
A <= meta transaction A
B B B B <= meta transaction B
C C C <= meta transaction C
D D <= meta transaction D
E E <= meta transaction E


As you can see, the point at which a journal is 'broken' could
be right smack in the middle of several meta transactions at
once, due to the fact that meta transactions are laid out as
their records can be generated and multiple processes could be
modifying the filesystem at the same time.

If we are attempting to mirror this data, only transactions A and E
will have been completed and issued to the mirror. If the journal
is broken at the synchronization point and then restarted, in
order to pick up where we left off we have to scan existing
journal records all the way back to the first 'B' in order to
recover transactions B and C, then continue recording the journal
from the synchronization point until those transactions are complete
and can be issued to the mirror. The difficulty here is that we
have to carefully track exactly what needs to be synchronized.
In recovering transaction 'B' and 'C' we do not want to reissue
transaction E, for example, which had already been completed prior to
the break.

The problem exists in both the forwards and reverse direction, and
we need to be able to go in both directions to control the 'snapshot'
view the mirror is presenting to us.

NOTE: this is just part 1/2. This code and the new options are
NON WORKING at the moment.


# 5adba82e 06-Jul-2005 Matthew Dillon <dillon@dragonflybsd.org>

Add an option and test implementation for the full-duplex ack protocol.


# 712e03b0 05-Jul-2005 Matthew Dillon <dillon@dragonflybsd.org>

Major continuing work on jscan, the userland backend for the journaling
system.

* Fix bugs in the stream demultiplexing code. Only the first stream header
is supposed to be passed to the virtualized stream code. We were passing
the stream header and trailer for each meta record and the extra meta-data
really confused the virtualized stream scanning code.

* Add support for unknown nesting record sizes. This occurs when the
nesting record in the virtual stream is too large to fit in the kernel's
in-memory journal FIFO. This allows us to arbitrarily push and pop
elements of a transaction without having to know the size of the completed
transaction in advance.

* Use a 64 bit integer to calculate the completed transaction size. Note
however that currently the stream reconstruction code uses malloc so
there are serious limits to what it can reconstruct.

* Implement (partial) support for mirroring. Only basic file and directory
creation, write, remove, and rename are currently decoded.

The TODO list is long and varied. We need to use mmap() instead of malloc()
to reference journal data. We need to finish implementing mirror mode. We
need a catch-up or restart mode for the mirror. We need raw journal data
logging, we need stream acknowledgements, and a thousand other things.

This example will maintain a mirror of /usr in /usr_mirror in real time.
Again, remember that there is a lot more work to do.... the mirroring isn't
perfect by a long shot. We don't do symlinks for example, yet, and there
are many other issues.

# Warning: do not ^Z mountctl!
cpdup /usr /usr_mirror
cd /usr_mirror
mountctl -a /usr:test | jscan -m stdin
[does not return]
[to terminate, type 'mountctl -d test' in another shell window]


# 9c118cb2 07-Mar-2005 Matthew Dillon <dillon@dragonflybsd.org>

Cleanup the debug output, fix a few data conversion issues.


# ce5e5ac4 07-Mar-2005 Matthew Dillon <dillon@dragonflybsd.org>

First cut of the jscan utility. This will become the core utility for
scanning journaling files (e.g. as created by mountctl). The utility is
currently able to dump a journaling file in human readable format forwards
or backwards. It will eventually be capable of tracking and mirroring,
undo, security audits, partial restores, sanity checks, and other operations.

Most of the scanning infrastructure is in as of this commit, but the code
currently tries to cache the entire transaction into memory which will
fail for large (e.g. multi-gigabyte) transactions. The APIs are abstracted
with the intent of being able to page out or do on-the-fly mmapping of the
underlying data in the future rather than copying it into memory.
