ToDo List for Slony-I
-----------------------------------------



Short Term Items
---------------------------

- Improve the script that tries to run UPDATE FUNCTIONS across
  versions to verify that upgrades work properly.

- Clone Node - use pg_dump/PITR to populate a new subscriber node

  Jan is working on this.

- UPDATE FUNCTIONS needs to be able to reload version-specific
  functions in v2.0, so that if we do an upgrade via:
    "pg_dump -p $OLDVERPORT dbname | psql -p $NEWVERPORT -d dbname"
  we may then run UPDATE FUNCTIONS to make the instance aware of the
  new PostgreSQL version.

  This probably involves refactoring the code that loads the
  version-specific SQL into a function that is called by both STORE
  NODE and UPDATE FUNCTIONS.
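
  For illustration only, a minimal sketch of that upgrade sequence in
  Python (the cluster name, port numbers, and database name below are
  assumptions, not part of any existing tooling):

      # Illustration only: dump the node from the old PostgreSQL build,
      # restore it into the new one, then ask slonik to reload the
      # version-specific support functions.
      import subprocess

      OLD_PORT, NEW_PORT, DBNAME, NODE_ID = "5432", "5433", "dbname", 1

      # pg_dump on the old server piped into psql on the new server
      dump = subprocess.Popen(["pg_dump", "-p", OLD_PORT, DBNAME],
                              stdout=subprocess.PIPE)
      subprocess.check_call(["psql", "-p", NEW_PORT, "-d", DBNAME],
                            stdin=dump.stdout)
      dump.stdout.close()
      if dump.wait() != 0:
          raise RuntimeError("pg_dump failed")

      # Reload the version-specific functions on the restored node.
      slonik_script = """
      cluster name = mycluster;
      node %d admin conninfo = 'dbname=%s port=%s';
      update functions (id = %d);
      """ % (NODE_ID, DBNAME, NEW_PORT, NODE_ID)
      subprocess.run(["slonik"], input=slonik_script.encode(), check=True)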

- Need to bring some of the "ducttape" tests into the NG tests

   - Need to add a MERGE SET test; it should give MERGE SET a pretty
     mean torture!

   - Duplicate duct tape test #6 - create 6 nodes:
          - #2 and #3 subscribe to #1
          - #4 subscribes to #3
          - #5 and #6 subscribe to #4

   - Have a test that does a bunch of subtransactions
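
     For illustration (treating "subtransactions" as SAVEPOINTs issued
     on the origin; psycopg2, the table, and the database name are
     assumptions), such a test might generate load like this and then
     run the usual origin/subscriber comparison:

         import psycopg2

         conn = psycopg2.connect("dbname=slonyregress1")
         cur = conn.cursor()
         for i in range(100):
             cur.execute("INSERT INTO table1 (data) VALUES (%s)",
                         ("kept-%d" % i,))
             cur.execute("SAVEPOINT sp")
             cur.execute("INSERT INTO table1 (data) VALUES (%s)",
                         ("discarded-%d" % i,))
             if i % 2 == 0:
                 # aborted subtransaction: this row must not replicate
                 cur.execute("ROLLBACK TO SAVEPOINT sp")
             else:
                 cur.execute("RELEASE SAVEPOINT sp")
         conn.commit()
         conn.close()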

- Need an upgrade path

Longer Term Items
---------------------------

- Windows-compatible version of tools/slony1_dump.sh

- Consider pulling the lexer from psql

  http://developer.postgresql.org/cvsweb.cgi/pgsql/src/bin/psql/psqlscan.l?rev=1.21;content-type=text%2Fx-cvsweb-markup

Wishful Thinking
----------------------------

SYNC pipelining

  - the notion here is to open two connections to the source DB and
    to start running the queries that generate the next LOG cursor
    while the previous SYNC is still pushing its INSERT/UPDATE/DELETE
    statements to the subscriber.
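
    A minimal sketch of the idea (psycopg2, the "_mycluster" schema
    name, and the grossly simplified sl_log_1 query are all
    assumptions made for illustration; the real logic lives in slon's
    C remote worker):

        import threading, queue
        import psycopg2

        provider_extra = psycopg2.connect("dbname=origin")  # 2nd provider connection
        subscriber     = psycopg2.connect("dbname=replica")

        def fetch_log_rows(conn, sync_id):
            # Run the (expensive) LOG-cursor query for one SYNC group.
            # The predicate is grossly simplified; slon really selects
            # by the xid ranges recorded with the SYNC events.
            cur = conn.cursor()
            cur.execute("SELECT log_cmdtype, log_cmddata "
                        "FROM _mycluster.sl_log_1 "
                        "WHERE log_origin = %s ORDER BY log_actionseq", (1,))
            return cur.fetchall()

        def apply_rows(conn, rows):
            cur = conn.cursor()
            for cmdtype, cmddata in rows:
                # Simplified: slon really reconstructs the full
                # INSERT/UPDATE/DELETE statement from the log row.
                cur.execute(cmddata)
            conn.commit()

        pipeline = queue.Queue(maxsize=1)

        def prefetcher(sync_ids):
            # Generate the next LOG cursor on the extra provider
            # connection while the main loop is still pushing the
            # previous SYNC to the subscriber.
            for sync_id in sync_ids:
                pipeline.put(fetch_log_rows(provider_extra, sync_id))
            pipeline.put(None)

        threading.Thread(target=prefetcher, args=([101, 102, 103],),
                         daemon=True).start()

        while True:
            rows = pipeline.get()
            if rows is None:
                break
            apply_rows(subscriber, rows)  # overlaps the fetch of the next SYNC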

COPY pipelining

  - the notion here is to try to parallelize the data load at
    SUBSCRIBE time.  Suppose we decide we can process 4 tables at a
    time; we set up 4 threads and then iterate thus:

    For each table
       - acquire a thread (waiting as needed)
       - submit COPY TO stdout on the provider, and feed it to
         COPY FROM stdin on the subscriber
       - submit the REINDEX request on the subscriber

    Even with a fairly small number of threads, we should be able to
    process the whole subscription in about as long as it takes to
    process the single largest table.

    This, alas, introduces a risk of locking problems that does not
    exist at present: today the subscription process is able to demand
    exclusive locks on all tables up front, which is no longer
    possible once the copies are split across multiple connections.
    In addition, the updates will COMMIT across some period of time on
    the subscriber rather than appearing at one instant.

    The timing improvement is probably still worthwhile.

    http://lists.slony.info/pipermail/slony1-hackers/2007-April/000000.html
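
    For illustration, a sketch using psycopg2 and a thread pool (the
    connection strings and table names are made up, and a real
    implementation in slon would stream the COPY data directly rather
    than spooling it through a temporary file):

        import tempfile
        from concurrent.futures import ThreadPoolExecutor
        import psycopg2

        TABLES = ["public.accounts", "public.branches",
                  "public.tellers", "public.history"]

        def copy_one_table(table):
            provider = psycopg2.connect("dbname=origin")
            subscriber = psycopg2.connect("dbname=replica")
            try:
                with tempfile.TemporaryFile(mode="w+") as spool:
                    # COPY TO stdout on the provider ...
                    provider.cursor().copy_expert(
                        "COPY %s TO STDOUT" % table, spool)
                    spool.seek(0)
                    # ... fed to COPY FROM stdin on the subscriber
                    sub_cur = subscriber.cursor()
                    sub_cur.copy_expert("COPY %s FROM STDIN" % table, spool)
                    # rebuild the indexes once the data is loaded
                    sub_cur.execute("REINDEX TABLE %s" % table)
                    subscriber.commit()
            finally:
                provider.close()
                subscriber.close()

        # Four worker threads: total elapsed time approaches the time
        # needed for the single largest table.
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(copy_one_table, TABLES))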

Slonik ALTER TABLE event

    This would permit passing through changes targeted at a single
    table, and require much less extensive locking than traditional
    EXECUTE SCRIPT.

Compress DELETE/UPDATE/INSERT requests

    Some performance benefit could be gained by compressing sets of
    DELETEs against the same table into a single DELETE statement.
    This doesn't help the time it takes to fire triggers on the
    origin, but it can speed up "mass" deletion of records on
    subscribers.

    <http://lists.slony.info/pipermail/slony1-general/2007-July/006249.html>

    Unfortunately, this would complicate the code that applies
    changes, which people agreed would be a net loss...

    <http://lists.slony.info/pipermail/slony1-general/2007-July/006267.html>
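
    For illustration, a sketch of the apply-side idea (the helper name
    and the single-column key are assumptions made to keep the example
    short):

        from itertools import groupby

        def compress_deletes(log_rows):
            # log_rows: (table, key_column, key_value) tuples, one per
            # logged DELETE, in log order.  Consecutive runs against
            # the same table collapse into one statement.
            for (table, key_col), group in groupby(
                    log_rows, key=lambda r: (r[0], r[1])):
                keys = [r[2] for r in group]
                if len(keys) == 1:
                    yield "DELETE FROM %s WHERE %s = %s" % (
                        table, key_col, keys[0])
                else:
                    yield "DELETE FROM %s WHERE %s IN (%s)" % (
                        table, key_col, ", ".join(str(k) for k in keys))

        # 10,000 logged single-row deletes become one statement:
        stmts = list(compress_deletes(
            ("public.history", "id", n) for n in range(10000)))
        print(len(stmts), stmts[0][:60])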

Data Transformations on Subscriber

    Have an alternative "logtrigger()" scheme which permits creating a
    custom logtrigger function that can read both OLD.* and NEW.* and,
    among other things:

    - Omit columns on a subscriber
    - Omit tuples
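
    A purely conceptual sketch of such a per-table filter hook (the
    hook name and rules below are invented; nothing like this exists
    in Slony-I today, and the real mechanism would be a variant of the
    logtrigger function itself):

        OMIT_COLUMNS = {"public.customers": {"credit_card_number"}}
        ROW_FILTERS  = {"public.orders":
                        lambda old, new: new.get("status") != "internal"}

        def filter_change(table, old, new):
            # old/new are dicts of column -> value (OLD.* / NEW.*).
            keep = ROW_FILTERS.get(table, lambda o, n: True)
            if not keep(old, new):
                return None                # omit the tuple entirely
            drop = OMIT_COLUMNS.get(table, set())
            return {c: v for c, v in new.items() if c not in drop}

        # the credit card column never reaches the subscriber:
        print(filter_change("public.customers", old=None,
                            new={"id": 1, "name": "x",
                                 "credit_card_number": "4111..."}))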

SL-Set

  - Could it have some policy in it as to preferred failover targets?
