src/backend/access/gist/README

GiST Indexing
=============

This directory contains an implementation of GiST indexing for Postgres.

GiST stands for Generalized Search Tree. It was introduced in the seminal paper
"Generalized Search Trees for Database Systems", 1995, Joseph M. Hellerstein,
Jeffrey F. Naughton, Avi Pfeffer:

    http://www.sai.msu.su/~megera/postgres/gist/papers/gist.ps

and implemented by J. Hellerstein and P. Aoki in an early version of
PostgreSQL (more details are available from The GiST Indexing Project
at Berkeley at http://gist.cs.berkeley.edu/). As a "university"
project it had a limited number of features and was rarely used.
The current implementation of GiST supports:

  * Variable length keys
  * Composite keys (multi-key)
  * Ordered search (nearest-neighbor search)
  * A NULL-safe interface to the GiST core
  * Concurrency
  * Recovery support via WAL logging
  * Buffering build algorithm
The support for concurrency implemented in PostgreSQL was developed based on
the paper "Access Methods for Next-Generation Database Systems" by
Marcel Kornacker:

    http://www.sai.msu.su/~megera/postgres/gist/papers/concurrency/access-methods-for-next-generation.pdf.gz

The buffering build algorithm for GiST was developed based on the paper
"Efficient Bulk Operations on Dynamic R-trees" by Lars Arge, Klaus Hinrichs,
Jan Vahrenhold and Jeffrey Scott Vitter:

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.135.9894&rep=rep1&type=pdf
The original algorithms were modified in several ways:

* They had to be adapted to PostgreSQL conventions. For example, the SEARCH
  algorithm was considerably changed, because in PostgreSQL the search function
  should return one tuple (the next one), not all tuples at once. Also, it
  should release page locks between calls.
* Since we added support for variable length keys, it's not possible to
  guarantee enough free space for all keys on pages after splitting. The
  user-defined function picksplit has no information about the sizes of
  tuples and pages (each tuple may contain several keys, as in a multicolumn
  index, while picksplit works with only one key).
* We modified the original INSERT algorithm for performance reasons. In
  particular, it is now a single-pass algorithm.
* Since the papers were theoretical, some details were omitted and we
  had to figure out ourselves how to solve some specific problems.

Because of the above reasons, we have revised the interaction of GiST
core and PostgreSQL WAL system. Moreover, we encountered (and solved)
a problem of incomplete insertions when recovering after a crash, which
was not addressed in the paper.

Search Algorithm
----------------

The search code maintains a queue of unvisited items, where an "item" is
either a heap tuple known to satisfy the search conditions, or an index
page that is consistent with the search conditions according to inspection
of its parent page's downlink item.  Initially the root page is searched
to find unvisited items in it.  Then we pull items from the queue.  A
heap tuple pointer is just returned immediately; an index page entry
causes that page to be searched, generating more queue entries.

The queue is kept ordered with heap tuple items at the front, then
index page entries, with any newly-added index page entry inserted
before existing index page entries.  This ensures depth-first traversal
of the index, and in particular causes the first few heap tuples to be
returned as soon as possible.  That is helpful in case there is a LIMIT
that requires only a few tuples to be produced.

To implement nearest-neighbor search, the queue entries are augmented
with distance data: heap tuple entries are labeled with exact distance
from the search argument, while index-page entries must be labeled with
the minimum distance that any of their children could have.  Then,
queue entries are retrieved in smallest-distance-first order, with
entries having identical distances managed as stated in the previous
paragraph.
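
As an illustration, here is a minimal C sketch of that ordering rule. The
type and field names (QueueItem, is_heap_tuple, distance, seq) are
hypothetical stand-ins, not the data structures actually used in gistget.c;
a smaller comparison result means "visit first":

#include <stdbool.h>

typedef struct QueueItem
{
    bool    is_heap_tuple;  /* heap tuple vs. index page entry */
    double  distance;       /* exact for heap tuples; lower bound for pages */
    long    seq;            /* insertion counter, grows over time */
} QueueItem;

/* Returns < 0 if "a" should be visited before "b". */
int
queue_item_cmp(const QueueItem *a, const QueueItem *b)
{
    /* Smallest distance first; plain searches use distance 0 for all. */
    if (a->distance != b->distance)
        return (a->distance < b->distance) ? -1 : 1;

    /* At equal distance, heap tuples go before index page entries. */
    if (a->is_heap_tuple != b->is_heap_tuple)
        return a->is_heap_tuple ? -1 : 1;

    /* Among page entries, newest-added first: depth-first traversal. */
    if (!a->is_heap_tuple && a->seq != b->seq)
        return (a->seq > b->seq) ? -1 : 1;

    return 0;
}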

The search algorithm keeps an index page locked only long enough to scan
its entries and queue those that satisfy the search conditions.  Since
insertions can occur concurrently with searches, it is possible for an
index child page to be split between the time we make a queue entry for it
(while visiting its parent page) and the time we actually reach and scan
the child page.  To avoid missing the entries that were moved to the right
sibling, we detect whether a split has occurred by comparing the child
page's NSN (node sequence number, a special-purpose LSN) to the LSN that
the parent had when visited.  If the NSN is greater, a split has occurred,
and the right sibling page is immediately added to the front of the queue,
ensuring that its items will be scanned in the same order as if they were
still on the original child page.

As is usual in Postgres, the search algorithm only guarantees to find index
entries that existed before the scan started; index entries added during
the scan might or might not be visited.  This is okay as long as all
searches use MVCC snapshot rules to reject heap tuples newer than the time
of scan start.  In particular, this means that we need not worry about
cases where a parent page's downlink key is "enlarged" after we look at it.
Any such enlargement would be to add child items that we aren't interested
in returning anyway.


Insert Algorithm
----------------

INSERT guarantees that the GiST tree remains balanced. The user-defined key
method Penalty is used for choosing the subtree to insert into; method
PickSplit is used for the node splitting algorithm; method Union is used
for propagating changes upward to maintain the tree properties.

To insert a tuple, we first have to find a suitable leaf page to insert it
into. The algorithm walks down the tree, starting from the root, along the
path of smallest Penalty. At each step (a toy illustration of steps 3 and 4
follows the list):

1. Has this page been split since we looked at the parent? If so, it's
possible that we should be inserting to the other half instead, so retreat
to the parent.
2. If this is a leaf node, we've found our target node.
3. Otherwise use Penalty to pick a new target subtree.
4. Check the key representing the target subtree. If it doesn't already cover
the key we're inserting, replace it with the Union of the old downlink key
and the key being inserted. (Actually, we always call Union, and just skip
the replacement if the Unioned key is the same as the existing key.)
5. Replacing the key in step 4 might cause the page to be split. In that case,
propagate the change upwards and restart the algorithm from the first parent
that didn't need to be split.
6. Walk down to the target subtree, and goto 1.

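To make steps 3 and 4 concrete, here is a small self-contained C example
using one-dimensional intervals as keys: Penalty is modeled as the amount a
downlink interval must grow to cover the new key, and Union as interval
union. This is a toy model, not the datatype-specific opclass code
PostgreSQL actually calls:

#include <stdio.h>

typedef struct
{
    double  lo, hi;
} Interval;

/* Penalty: how much "subtree" must grow to cover "key"; 0 = already fits. */
double
penalty(Interval subtree, Interval key)
{
    double  lo = subtree.lo < key.lo ? subtree.lo : key.lo;
    double  hi = subtree.hi > key.hi ? subtree.hi : key.hi;

    return (hi - lo) - (subtree.hi - subtree.lo);
}

/* Union: the smallest interval covering both inputs. */
Interval
union_keys(Interval a, Interval b)
{
    Interval    u;

    u.lo = a.lo < b.lo ? a.lo : b.lo;
    u.hi = a.hi > b.hi ? a.hi : b.hi;
    return u;
}

int
main(void)
{
    Interval    downlinks[] = {{0, 10}, {20, 30}, {40, 50}};
    Interval    key = {12, 14};
    int         best = 0;

    /* Step 3: pick the subtree with the smallest Penalty. */
    for (int i = 1; i < 3; i++)
        if (penalty(downlinks[i], key) < penalty(downlinks[best], key))
            best = i;

    /* Step 4: enlarge the chosen downlink if it doesn't cover the key. */
    if (penalty(downlinks[best], key) > 0)
        downlinks[best] = union_keys(downlinks[best], key);

    printf("chose subtree %d, downlink now [%g, %g]\n",
           best, downlinks[best].lo, downlinks[best].hi);   /* [0, 14] */
    return 0;
}
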
This differs from the insertion algorithm in the original paper. In the
original paper, you first walk down the tree until you reach a leaf page, and
then you adjust the downlink in the parent, and propagate the adjustment up,
all the way up to the root in the worst case. But we adjust the downlinks to
cover the new key as we walk down, so that when we reach the leaf page, we
don't need to update the parents anymore, except to insert the downlinks if
we have to split the page. This makes crash recovery simpler: after inserting
a key to the page, the tree is immediately self-consistent without having to
update the parents. Even if we split a page and crash before inserting the
downlink to the parent, the tree is self-consistent because the right half of
the split is accessible via the rightlink of the left page (which replaced
the original page).

Note that the algorithm can walk up and down the tree before reaching a leaf
page, if internal pages need to split while adjusting the downlinks for the
new key. Eventually, you should reach the bottom, and proceed with the
insertion of the new tuple.

Once we've found the target page to insert into, we check if there's room
for the new tuple. If there is, the tuple is inserted, and we're done.
If it doesn't fit, however, the page needs to be split. Note that it is
possible that a page needs to be split into more than two pages, if keys have
different lengths or more than one key is being inserted at a time (which can
happen when inserting downlinks for a page split that resulted in more than
two pages at the lower level). After splitting a page, the parent page needs
to be updated. The downlink for the new page needs to be inserted, and the
downlink for the old page, which became the left half of the split, needs to
be updated to only cover those tuples that stayed on the left page. Inserting
the downlink in the parent can again lead to a page split, recursing up to the
root page in the worst case.

gistplacetopage is the workhorse function that performs one step of the
insertion. If the tuple fits, it inserts it into the given page; otherwise
it splits the page, and constructs the new downlink tuples for the split
pages. The caller must then call gistplacetopage() on the parent page to
insert the downlink tuples. The parent page that holds the downlink to
the child might have migrated as a result of concurrent splits of the
parent; gistFindCorrectParent() is used to find the parent page.

Splitting the root page works slightly differently. At a root split,
gistplacetopage() allocates the new child pages and replaces the old root
page with the new root containing downlinks to the new children, all in one
operation.


findPath is a subroutine of findParent, used when the correct parent page
can't be found by following the rightlinks at the parent level:

findPath( stack item )
	push stack, [root, 0, 0] // page, LSN, parent
	while( stack )
		ptr = top of stack
		latch( ptr->page, S-mode )
		if ( ptr->parent->page->lsn < ptr->page->nsn )
			push stack, [ ptr->page->rightlink, 0, ptr->parent ]
		end
		for( each tuple on page )
			if ( tuple->pagepointer == item->page )
				return stack
			else
				add to stack at the end [ tuple->pagepointer, 0, ptr ]
			end
		end
		unlatch( ptr->page )
		pop stack
	end


gistFindCorrectParent is used to re-find the parent of a page during
insertion. It might have migrated to the right since we traversed down the
tree because of page splits.

findParent( stack item )
	parent = item->parent
	if ( parent->page->lsn != parent->lsn )
		while(true)
			search for the parent tuple on parent->page, if found then return
			rightlink = parent->page->rightlink
			unlatch( parent->page )
			if ( rightlink is incorrect )
				break loop
			end
			parent->page = rightlink
			latch( parent->page, X-mode )
		end
		newstack = findPath( item->parent )
		replace part of stack with the new one
		latch( parent->page, X-mode )
		return findParent( item )
	end

pageSplit function decides how to distribute keys to the new pages after
a page split:

pageSplit(page, allkeys)
	newkeys = []
	(lkeys, rkeys) = pickSplit( allkeys )
	if ( page is root )
		lpage = new page
	else
		lpage = page
	rpage = new page
	if ( no space left on rpage )
		push newkeys, pageSplit( rpage, rkeys )
	else
		push newkeys, union(rkeys)
	end
	if ( no space left on lpage )
		push newkeys, pageSplit( lpage, lkeys )
	else
		push newkeys, union(lkeys)
	end
	return newkeys
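
As a runnable toy illustration of how one oversized key set can fan out into
more than two pages, here is a C sketch in which keys are plain integers, a
page holds a fixed number of keys, and pickSplit is modeled as naive halving
(the real user-defined PickSplit is of course smarter):

#include <stdio.h>

#define PAGE_CAPACITY 4     /* arbitrary toy page size, in keys */

static void
page_split(const int *keys, int nkeys, int *npages)
{
    if (nkeys <= PAGE_CAPACITY)
    {
        ++*npages;
        printf("page %d gets %d keys\n", *npages, nkeys);
        return;
    }
    /* "pickSplit": naive halving of the key set */
    page_split(keys, nkeys / 2, npages);
    page_split(keys + nkeys / 2, nkeys - nkeys / 2, npages);
}

int
main(void)
{
    int     keys[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int     npages = 0;

    page_split(keys, 10, &npages);  /* 10 keys don't fit on two pages */
    printf("split produced %d pages\n", npages);    /* prints 4 */
    return 0;
}

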
Concurrency control
-------------------
As a rule of thumb, if you need to hold a lock on multiple pages at the
same time, the locks should be acquired in the following order: child page
before parent, and left-to-right at the same level. Always acquiring the
locks in the same order avoids deadlocks.

The search algorithm only looks at and locks one page at a time. Consequently
there's a race condition between a search and a page split. A page split
happens in two phases: 1. the page is split; 2. the downlink is inserted into
the parent. If a search looks at the parent page between those steps, before
the downlink is inserted, it will still find the new right half by following
the rightlink on the left half. But it must not follow the rightlink if it
saw the downlink in the parent, or the page will be visited twice!

A split initially marks the left page with the F_FOLLOW_RIGHT flag. If a scan
sees that flag set, it knows that the right page is missing the downlink, and
should be visited too. When the split inserts the downlink to the parent, it
clears the F_FOLLOW_RIGHT flag in the child, and sets the NSN field in the
child page header to match the LSN of the insertion on the parent. If the
F_FOLLOW_RIGHT flag is not set, a scan compares the NSN on the child and the
LSN it saw in the parent. If the NSN is greater than that LSN, the scan
looked at the parent page before the downlink was inserted, so it should
follow the rightlink. Otherwise the scan saw the downlink in the parent page,
and will/did follow that as usual.
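
The following C sketch restates that decision. The struct and names are
illustrative stand-ins for the real page-header fields (the F_FOLLOW_RIGHT
flag and the NSN), not the actual PostgreSQL API:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t LSN;

typedef struct ChildPageState
{
    bool    follow_right;   /* F_FOLLOW_RIGHT still set: split incomplete */
    LSN     nsn;            /* LSN of the downlink insertion on the parent */
} ChildPageState;

/*
 * parent_lsn is the LSN the parent page had when the scan read the
 * downlink.  Returns true if the right sibling must be reached through
 * the rightlink rather than through a downlink in the parent.
 */
bool
must_follow_rightlink(const ChildPageState *child, LSN parent_lsn)
{
    if (child->follow_right)
        return true;    /* the downlink for the right half doesn't exist */
    if (child->nsn > parent_lsn)
        return true;    /* parent was read before the downlink appeared */
    return false;       /* scan saw (or will see) the downlink itself */
}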

A scan can't normally see a page with the F_FOLLOW_RIGHT flag set, because
a page split keeps the child pages locked until the downlink has been inserted
into the parent and the flag cleared again. But if a crash happens in the
middle of a page split, before the downlinks are inserted into the parent,
that will leave a page with F_FOLLOW_RIGHT in the tree. Scans handle that just
fine, but we'll eventually want to fix it for performance reasons. More
importantly, dealing with pages whose downlink is missing from the parent
would complicate the insertion algorithm. So when an insertion sees a page
with F_FOLLOW_RIGHT set, it immediately completes the interrupted split by
adding the downlink in the parent.

Buffering build algorithm
-------------------------

In the buffering index build algorithm, some or all internal nodes have a
buffer attached to them. When a tuple is inserted at the top, the descent
down the tree is stopped as soon as a buffer is reached, and the tuple is
pushed into the buffer. When a buffer gets too full, all the tuples in it are
flushed to the lower level, where they again hit lower level buffers or leaf
pages. This makes the insertions happen in more of a breadth-first than
depth-first order, which greatly reduces the amount of random I/O required.

In the algorithm, levels are numbered so that leaf pages have level zero,
and internal node levels count up from 1. This numbering ensures that a page's
level number never changes, even when the root page is split.

Level                    Tree

3                         *
                      /       \
2                *                 *
              /  |  \           /  |  \
1          *     *     *     *     *     *
          / \   / \   / \   / \   / \   / \
0        o   o o   o o   o o   o o   o o   o

* - internal page
o - leaf page

Internal pages that belong to certain levels have buffers associated with
them. Leaf pages never have buffers. Which levels have buffers is controlled
by the "level step" parameter: level numbers that are multiples of level_step
have buffers, while others do not. For example, if level_step = 2, then
pages on levels 2, 4, 6, ... have buffers. If level_step = 1 then every
internal page has a buffer.

Level        Tree (level_step = 1)                Tree (level_step = 2)

3                      *                                     *
                   /       \                             /       \
2             *(b)              *(b)                *(b)              *(b)
           /  |  \           /  |  \             /  |  \           /  |  \
1       *(b)  *(b)  *(b)  *(b)  *(b)  *(b)    *     *     *     *     *     *
       / \   / \   / \   / \   / \   / \     / \   / \   / \   / \   / \   / \
0     o   o o   o o   o o   o o   o o   o   o   o o   o o   o o   o o   o o   o

(b) - buffer

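Here is a tiny runnable C demonstration of the level_step rule, with
invented helper names. Note that in the figures above the root level carries
no buffer, so the sketch excludes it as well:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helper: does a page at "level" carry a buffer? */
bool
level_has_buffer(int level, int level_step, int root_level)
{
    /* Leaves (level 0) never have buffers, and per the figures above
     * the root level is not buffered either. */
    if (level <= 0 || level >= root_level)
        return false;
    return level % level_step == 0;
}

int
main(void)
{
    for (int step = 1; step <= 2; step++)
        for (int level = 3; level >= 0; level--)
            printf("level_step=%d, level %d: %s\n", step, level,
                   level_has_buffer(level, step, 3) ? "buffered" : "no buffer");
    return 0;
}
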
Logically, a buffer is just a bunch of tuples. Physically, it is divided
into pages, backed by a temporary file. Each buffer can be in one of two
states:
a) The last page of the buffer is kept in main memory. A node buffer is
automatically switched to this state when a new index tuple is added to it,
or a tuple is removed from it.
b) All pages of the buffer are swapped out to disk. When a buffer becomes too
full, and we start to flush it, all other buffers are switched to this state.
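
A hypothetical C sketch of those two states and the transitions between
them; field and function names are invented for illustration (the real
structures live in gistbuildbuffers.c):

#include <stdbool.h>

typedef struct NodeBuffer
{
    int     pages_on_disk;      /* pages written to the temporary file */
    bool    last_page_loaded;   /* state (a): last page in main memory */
} NodeBuffer;

/* Adding (or removing) a tuple forces the buffer into state (a). */
void
buffer_add_tuple(NodeBuffer *buf)
{
    if (!buf->last_page_loaded)
    {
        buf->pages_on_disk--;       /* read the last page back in */
        buf->last_page_loaded = true;
    }
    /* ... append the tuple to the in-memory last page ... */
}

/* Switch to state (b): done for all other buffers before emptying one. */
void
buffer_swap_out(NodeBuffer *buf)
{
    if (buf->last_page_loaded)
    {
        buf->pages_on_disk++;       /* write the last page out */
        buf->last_page_loaded = false;
    }
}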

When an index tuple is inserted, its initial processing can end at one of
the following places:
1) A leaf page, if the depth of the index is <= level_step, meaning that
   none of the internal pages have buffers associated with them.
2) The buffer of the topmost-level page that has buffers.

New index tuples are processed until one of the buffers in the topmost
buffered level becomes half-full. When a buffer becomes half-full, it's added
to the emptying queue, and will be emptied before a new tuple is processed.

Emptying a buffer means that the index tuples in it are moved into buffers
at a lower level, or into leaf pages. First, all the other buffers are
swapped to disk to free up the memory. Then tuples are popped from the buffer
one by one, and cascaded down the tree to the next buffer or leaf page below
the buffered node.

Emptying a buffer has the interesting dynamic property that any intermediate
pages between the buffer being emptied, and the next buffered or leaf level
below it, become cached. If there are no more buffers below the node, the
leaf pages where the tuples finally land get cached too. If there are, the
last buffer page of each buffer below is kept in memory. This is illustrated
in the figures below:

   Buffer being emptied to
     lower-level buffers               Buffer being emptied to leaf pages

               +(fb)                                 +(fb)
            /     \                                /     \
        +             +                        +             +
      /   \         /   \                    /   \         /   \
    *(ab)   *(ab) *(ab)   *(ab)            x       x     x       x

+    - cached internal page
x    - cached leaf page
*    - non-cached internal page
(fb) - buffer being emptied
(ab) - buffers being appended to, with last page in memory

At the beginning of the index build, level_step is chosen so that all the
pages involved in emptying one buffer fit in cache, so after each of those
pages has been accessed once and cached, emptying a buffer doesn't involve
any more I/O. This locality is where the speedup of the buffering algorithm
comes from.

Emptying one buffer can fill up one or more of the lower-level buffers,
triggering emptying of them as well. Whenever a buffer becomes too full, it's
added to the emptying queue, and will be emptied after the current buffer has
been processed.

To keep the size of each buffer limited even in the worst case, buffer
emptying is scheduled as soon as a buffer becomes half-full, and emptying it
continues until 1/2 of the nominal buffer size worth of tuples has been
emptied. This guarantees that when buffer emptying begins, all the
lower-level buffers are at most half-full. Even in the worst case, where all
the tuples cascade down to the same lower-level buffer, that buffer therefore
has enough space to accommodate all the tuples emptied from the upper-level
buffer. There is no hard size limit in any of the data structures used,
though, so this only needs to be approximate; small overfilling of some
buffers doesn't matter.

If an internal page that has a buffer associated with it is split, the buffer
needs to be split too. All tuples in the buffer are scanned and relocated to
the correct sibling buffers, using the penalty function to decide which
buffer each tuple should go to.

After all tuples from the heap have been processed, there are still some index
tuples in the buffers. At this point, final buffer emptying starts. All buffers
are emptied in top-down order. This is slightly complicated by the fact that
new buffers can be allocated during the emptying, due to page splits. However,
the new buffers will always be siblings of buffers that haven't been fully
emptied yet; tuples never move upwards in the tree. The final emptying loops
through buffers at a given level until all buffers at that level have been
emptied, and then moves down to the next level.


Authors:
	Teodor Sigaev	<teodor@sigaev.ru>
	Oleg Bartunov	<oleg@sai.msu.su>