src/backend/optimizer/README

Optimizer
=========

These directories take the Query structure returned by the parser, and
generate a plan used by the executor.  The /plan directory generates the
actual output plan, the /path code generates all possible ways to join the
tables, and /prep handles various preprocessing steps for special cases.
/util is utility stuff.  /geqo is the separate "genetic optimization" planner
--- it does a semi-random search through the join tree space, rather than
exhaustively considering all possible join trees.  (But each join considered
by /geqo is given to /path to create paths for, so we consider all possible
implementation paths for each specific join pair even in GEQO mode.)


Paths and Join Pairs
--------------------

During the planning/optimizing process, we build "Path" trees representing
the different ways of doing a query.  We select the cheapest Path that
generates the desired relation and turn it into a Plan to pass to the
executor.  (There is pretty nearly a one-to-one correspondence between the
Path and Plan trees, but Path nodes omit info that won't be needed during
planning, and include info needed for planning that won't be needed by the
executor.)

The optimizer builds a RelOptInfo structure for each base relation used in
the query.  Base rels are either primitive tables, or subquery subselects
that are planned via a separate recursive invocation of the planner.  A
RelOptInfo is also built for each join relation that is considered during
planning.  A join rel is simply a combination of base rels.  There is only
one join RelOptInfo for any given set of baserels --- for example, the join
{A B C} is represented by the same RelOptInfo no matter whether we build it
by joining A and B first and then adding C, or joining B and C first and
then adding A, etc.  These different means of building the joinrel are
represented as Paths.  For each RelOptInfo we build a list of Paths that
represent plausible ways to implement the scan or join of that relation.
Once we've considered all the plausible Paths for a rel, we select the one
that is cheapest according to the planner's cost estimates.  The final plan
is derived from the cheapest Path for the RelOptInfo that includes all the
base rels of the query.

Possible Paths for a primitive table relation include plain old sequential
scan, plus index scans for any indexes that exist on the table, plus bitmap
index scans using one or more indexes.  Specialized RTE types, such as
function RTEs, may have only one possible Path.

Joins always occur using two RelOptInfos.  One is outer, the other inner.
Outers drive lookups of values in the inner.  In a nested loop, lookups of
values in the inner occur by scanning the inner path once per outer tuple
to find each matching inner row.  In a mergejoin, inner and outer rows are
ordered, and are accessed in order, so only one scan is required to perform
the entire join: both inner and outer paths are scanned in-sync.  (There's
not a lot of difference between inner and outer in a mergejoin...)  In a
hashjoin, the inner is scanned first and all its rows are entered in a
hashtable, then the outer is scanned and for each row we look up the join
key in the hashtable.
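
The three join methods described above can be sketched in Python (an
illustration only, not the executor's actual C implementation; rows are
modeled as dicts and the join condition as a simple key equality):

```python
# Toy models of the three join methods.  Rows are dicts; okey/ikey name
# the join columns on the outer and inner side respectively.

def nestloop_join(outer, inner, okey, ikey):
    # Rescan the inner once per outer tuple.
    return [{**o, **i} for o in outer for i in inner if o[okey] == i[ikey]]

def merge_join(outer, inner, okey, ikey):
    # Both inputs must already be sorted on the join key; each is scanned
    # once, in sync.
    out, j = [], 0
    for o in outer:
        while j < len(inner) and inner[j][ikey] < o[okey]:
            j += 1
        k = j
        while k < len(inner) and inner[k][ikey] == o[okey]:
            out.append({**o, **inner[k]})
            k += 1
    return out

def hash_join(outer, inner, okey, ikey):
    # Load the inner side into a hash table, then probe with each outer row.
    table = {}
    for i in inner:
        table.setdefault(i[ikey], []).append(i)
    return [{**o, **i} for o in outer for i in table.get(o[okey], [])]
```

On sorted equi-join inputs all three produce the same rows; they differ
only in how much scanning and setup work they do, which is exactly what the
planner's cost estimates weigh against each other.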

A Path for a join relation is actually a tree structure, with the topmost
Path node representing the last-applied join method.  It has left and right
subpaths that represent the scan or join methods used for the two input
relations.


Join Tree Construction
----------------------

The optimizer generates optimal query plans by doing a more-or-less
exhaustive search through the ways of executing the query.  The best Path
tree is found by a recursive process:

1) Take each base relation in the query, and make a RelOptInfo structure
for it.  Find each potentially useful way of accessing the relation,
including sequential and index scans, and make Paths representing those
ways.  All the Paths made for a given relation are placed in its
RelOptInfo.pathlist.  (Actually, we discard Paths that are obviously
inferior alternatives before they ever get into the pathlist --- what
ends up in the pathlist is the cheapest way of generating each potentially
useful sort ordering and parameterization of the relation.)  Also create a
RelOptInfo.joininfo list including all the join clauses that involve this
relation.  For example, the WHERE clause "tab1.col1 = tab2.col1" generates
entries in both tab1 and tab2's joininfo lists.
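
For illustration, the restriction/join split and the joininfo bookkeeping
might be modeled like this (a hypothetical Python sketch; the real planner
does this with RestrictInfo nodes while distributing the query's quals):

```python
# Each qual is modeled as (set of relations it references, clause text).
quals = [
    (frozenset({"tab1"}), "tab1.col2 = 1"),                  # restriction
    (frozenset({"tab1", "tab2"}), "tab1.col1 = tab2.col1"),  # join clause
]

restrictions = {}  # rel -> clauses testable while scanning that rel alone
joininfo = {}      # rel -> join clauses involving that rel

for rels, clause in quals:
    if len(rels) == 1:
        restrictions.setdefault(min(rels), []).append(clause)
    else:
        # a join clause is entered in every involved rel's joininfo list
        for rel in rels:
            joininfo.setdefault(rel, []).append(clause)
```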

If we have only a single base relation in the query, we are done.
Otherwise we have to figure out how to join the base relations into a
single join relation.

2) Normally, any explicit JOIN clauses are "flattened" so that we just
have a list of relations to join.  However, FULL OUTER JOIN clauses are
never flattened, and other kinds of JOIN might not be either, if the
flattening process is stopped by join_collapse_limit or from_collapse_limit
restrictions.  Therefore, we end up with a planning problem that contains
lists of relations to be joined in any order, where any individual item
might be a sub-list that has to be joined together before we can consider
joining it to its siblings.  We process these sub-problems recursively,
bottom up.  Note that the join list structure constrains the possible join
orders, but it doesn't constrain the join implementation method at each
join (nestloop, merge, hash), nor does it say which rel is considered outer
or inner at each join.  We consider all these possibilities in building
Paths.  We generate a Path for each feasible join method, and select the
cheapest Path.

For each planning problem, therefore, we will have a list of relations
that are either base rels or joinrels constructed per sub-join-lists.
We can join these rels together in any order the planner sees fit.
The standard (non-GEQO) planner does this as follows:

Consider joining each RelOptInfo to each other RelOptInfo for which there
is a usable joinclause, and generate a Path for each possible join method
for each such pair.  (If we have a RelOptInfo with no join clauses, we have
no choice but to generate a clauseless Cartesian-product join; so we
consider joining that rel to each other available rel.  But in the presence
of join clauses we will only consider joins that use available join
clauses.  Note that join-order restrictions induced by outer joins and
IN/EXISTS clauses are also checked, to ensure that we find a workable join
order in cases where those restrictions force a clauseless join to be done.)

If we have only two relations in the list, we are done: we just pick
the cheapest path for the join RelOptInfo.  If we have more than two, we now
need to consider ways of joining join RelOptInfos to each other to make
join RelOptInfos that represent more than two list items.

The join tree is constructed using a "dynamic programming" algorithm:
in the first pass (already described) we consider ways to create join rels
representing exactly two list items.  The second pass considers ways
to make join rels that represent exactly three list items; the next pass,
four items, etc.  The last pass considers how to make the final join
relation that includes all list items --- obviously there can be only one
join rel at this top level, whereas there can be more than one join rel
at lower levels.  At each level we use joins that follow available join
clauses, if possible, just as described for the first level.

For example:

    SELECT  *
    FROM    tab1, tab2, tab3, tab4
    WHERE   tab1.col = tab2.col AND
        tab2.col = tab3.col AND
        tab3.col = tab4.col

    Tables 1, 2, 3, and 4 are joined as:
    {1 2},{2 3},{3 4}
    {1 2 3},{2 3 4}
    {1 2 3 4}
    (other possibilities will be excluded for lack of join clauses)

    SELECT  *
    FROM    tab1, tab2, tab3, tab4
    WHERE   tab1.col = tab2.col AND
        tab1.col = tab3.col AND
        tab1.col = tab4.col

    Tables 1, 2, 3, and 4 are joined as:
    {1 2},{1 3},{1 4}
    {1 2 3},{1 3 4},{1 2 4}
    {1 2 3 4}

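The level-by-level search behind the first example can be sketched as
follows (a toy Python model of the set-of-relids bookkeeping only --- no
costing, and each joinrel is just a frozenset standing in for its
RelOptInfo).  Because the sets are unordered, pairing level k1 with level
k-k1 for k1 up to k/2 covers left-handed, right-handed, and bushy shapes
alike:

```python
# Toy planner state for: t1.col = t2.col AND t2.col = t3.col AND
# t3.col = t4.col.  Each join clause is the set of rels it references.
base_rels = ['t1', 't2', 't3', 't4']
join_clauses = [frozenset({'t1', 't2'}),
                frozenset({'t2', 't3'}),
                frozenset({'t3', 't4'})]

def connected(s1, s2):
    # A usable join clause must reference rels from both sides.
    return any(c & s1 and c & s2 for c in join_clauses)

# levels[k] holds every joinrel covering exactly k base rels
levels = {1: {frozenset({r}) for r in base_rels}}
for k in range(2, len(base_rels) + 1):
    rels = set()
    for k1 in range(1, k // 2 + 1):       # pair lower levels: k1 + (k - k1) = k
        for s1 in levels[k1]:
            for s2 in levels[k - k1]:
                if not (s1 & s2) and connected(s1, s2):
                    rels.add(s1 | s2)     # one entry per set of base rels
    levels[k] = rels
```

Running this reproduces the level contents shown in the first example:
three two-rel joinrels, two three-rel joinrels, and a single top-level
joinrel.
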
We consider left-handed plans (the outer rel of an upper join is a joinrel,
but the inner is always a single list item); right-handed plans (outer rel
is always a single item); and bushy plans (both inner and outer can be
joins themselves).  For example, when building {1 2 3 4} we consider
joining {1 2 3} to {4} (left-handed), {4} to {1 2 3} (right-handed), and
{1 2} to {3 4} (bushy), among other choices.  Although the jointree
scanning code produces these potential join combinations one at a time,
all the ways to produce the same set of joined base rels will share the
same RelOptInfo, so the paths produced from different join combinations
that produce equivalent joinrels will compete in add_path().

The dynamic-programming approach has an important property that's not
immediately obvious: we will finish constructing all paths for a given
relation before we construct any paths for relations containing that rel.
This means that we can reliably identify the "cheapest path" for each rel
before higher-level relations need to know that.  Also, we can safely
discard a path when we find that another path for the same rel is better,
without worrying that maybe there is already a reference to that path in
some higher-level join path.  Without this, memory management for paths
would be much more complicated.

Once we have built the final join rel, we use either the cheapest path
for it or the cheapest path with the desired ordering (if that's cheaper
than applying a sort to the cheapest other path).

If the query contains one-sided outer joins (LEFT or RIGHT joins), or
IN or EXISTS WHERE clauses that were converted to semijoins or antijoins,
then some of the possible join orders may be illegal.  These are excluded
by having join_is_legal consult a side list of such "special" joins to see
whether a proposed join is illegal.  (The same consultation allows it to
see which join style should be applied for a valid join, i.e., JOIN_INNER,
JOIN_LEFT, etc.)


Valid OUTER JOIN Optimizations
------------------------------

The planner's treatment of outer join reordering is based on the following
identities:

1.	(A leftjoin B on (Pab)) innerjoin C on (Pac)
	= (A innerjoin C on (Pac)) leftjoin B on (Pab)

where Pac is a predicate referencing A and C, etc (in this case, clearly
Pac cannot reference B, or the transformation is nonsensical).

2.	(A leftjoin B on (Pab)) leftjoin C on (Pac)
	= (A leftjoin C on (Pac)) leftjoin B on (Pab)

3.	(A leftjoin B on (Pab)) leftjoin C on (Pbc)
	= A leftjoin (B leftjoin C on (Pbc)) on (Pab)

Identity 3 only holds if predicate Pbc must fail for all-null B rows
(that is, Pbc is strict for at least one column of B).  If Pbc is not
strict, the first form might produce some rows with nonnull C columns
where the second form would make those entries null.
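
The strictness requirement can be demonstrated with a toy left-join
evaluator (illustrative Python; tables are lists of dicts, and the Pbc
below is deliberately non-strict in B):

```python
def left_join(left, right, right_cols, pred):
    # Naive LEFT JOIN: emit matches, or a null-extended row if none.
    nulls = {c: None for c in right_cols}
    out = []
    for l in left:
        matches = [{**l, **r} for r in right if pred(l, r)]
        out.extend(matches if matches else [{**l, **nulls}])
    return out

A, B, C = [{'a': 2}], [{'b': 1}], [{'c': 1}]
pab = lambda l, r: l['a'] == r['b']
pbc = lambda l, r: l['b'] is None or l['b'] == r['c']   # NOT strict in b

# first form: (A leftjoin B on Pab) leftjoin C on Pbc
form1 = left_join(left_join(A, B, ['b'], pab), C, ['c'], pbc)
# second form: A leftjoin (B leftjoin C on Pbc) on Pab
form2 = left_join(A, left_join(B, C, ['c'], pbc), ['b', 'c'], pab)
```

Here the first form joins the null-extended B row to C (Pbc succeeds on a
null b), producing a nonnull C column that the second form nulls out, so
the two forms disagree exactly as described above.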

RIGHT JOIN is equivalent to LEFT JOIN after switching the two input
tables, so the same identities work for right joins.

An example of a case that does *not* work is moving an innerjoin into or
out of the nullable side of an outer join:

	A leftjoin (B join C on (Pbc)) on (Pab)
	!= (A leftjoin B on (Pab)) join C on (Pbc)

SEMI joins work a little bit differently.  A semijoin can be reassociated
into or out of the lefthand side of another semijoin, left join, or
antijoin, but not into or out of the righthand side.  Likewise, an inner
join, left join, or antijoin can be reassociated into or out of the
lefthand side of a semijoin, but not into or out of the righthand side.

ANTI joins work approximately like LEFT joins, except that identity 3
fails if the join to C is an antijoin (even if Pbc is strict, and in
both the cases where the other join is a leftjoin and where it is an
antijoin).  So we can't reorder antijoins into or out of the RHS of a
leftjoin or antijoin, even if the relevant clause is strict.

The current code does not attempt to re-order FULL JOINs at all.
FULL JOIN ordering is enforced by not collapsing FULL JOIN nodes when
translating the jointree to "joinlist" representation.  Other types of
JOIN nodes are normally collapsed so that they participate fully in the
join order search.  To avoid generating illegal join orders, the planner
creates a SpecialJoinInfo node for each non-inner join, and join_is_legal
checks this list to decide if a proposed join is legal.

What we store in SpecialJoinInfo nodes are the minimum sets of Relids
required on each side of the join to form the outer join.  Note that
these are minimums; there's no explicit maximum, since joining other
rels to the OJ's syntactic rels may be legal.  Per identities 1 and 2,
non-FULL joins can be freely associated into the lefthand side of an
OJ, but in some cases they can't be associated into the righthand side.
So the restriction enforced by join_is_legal is that a proposed join
can't join a rel within or partly within an RHS boundary to one outside
the boundary, unless the proposed join is a LEFT join that can associate
into the SpecialJoinInfo's RHS using identity 3.
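
A much-simplified model of that check looks like this (illustrative
Python with relid sets as frozensets; it ignores the identity-3 escape
hatch and the other refinements the real join_is_legal applies):

```python
def join_is_legal(s1, s2, special_joins):
    # special_joins: list of (min_lhs, min_rhs) frozenset pairs.
    u = s1 | s2
    for lhs, rhs in special_joins:
        if (lhs | rhs) <= s1 or (lhs | rhs) <= s2:
            continue                  # OJ already complete inside one input
        if u & rhs and not u <= rhs:  # join mixes RHS rels with outside rels,
            # which is only okay if it forms this OJ properly: the whole
            # min RHS on one side, at least the min LHS on the other.
            if not ((lhs <= s1 and rhs <= s2) or (lhs <= s2 and rhs <= s1)):
                return False
    return True

# A leftjoin (B innerjoin C): min LHS {A}, min RHS {B, C}
sj = [(frozenset('A'), frozenset('BC'))]
```

With this single SpecialJoinInfo, joining {A} to {B C} (forming the OJ),
joining {B} to {C} (entirely inside the RHS), or joining {A B C} to some
{D} (OJ already built) all pass, while joining {B} to {D} is rejected for
crossing the RHS boundary.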

The use of minimum Relid sets has some pitfalls; consider a query like
	A leftjoin (B leftjoin (C innerjoin D) on (Pbcd)) on Pa
where Pa doesn't mention B/C/D at all.  In this case a naive computation
would give the upper leftjoin's min LHS as {A} and min RHS as {C,D} (since
we know that the innerjoin can't associate out of the leftjoin's RHS, and
enforce that by including its relids in the leftjoin's min RHS).  And the
lower leftjoin has min LHS of {B} and min RHS of {C,D}.  Given such
information, join_is_legal would think it's okay to associate the upper
join into the lower join's RHS, transforming the query to
	B leftjoin (A leftjoin (C innerjoin D) on Pa) on (Pbcd)
which yields totally wrong answers.  We prevent that by forcing the min RHS
for the upper join to include B.  This is perhaps overly restrictive, but
such cases don't arise often so it's not clear that it's worth developing a
more complicated system.


Pulling Up Subqueries
---------------------

As we described above, a subquery appearing in the range table is planned
independently and treated as a "black box" during planning of the outer
query.  This is necessary when the subquery uses features such as
aggregates, GROUP, or DISTINCT.  But if the subquery is just a simple
scan or join, treating the subquery as a black box may produce a poor plan
compared to considering it as part of the entire plan search space.
Therefore, at the start of the planning process the planner looks for
simple subqueries and pulls them up into the main query's jointree.

Pulling up a subquery may result in FROM-list joins appearing below the top
of the join tree.  Each FROM-list is planned using the dynamic-programming
search method described above.

If pulling up a subquery produces a FROM-list as a direct child of another
FROM-list, then we can merge the two FROM-lists together.  Once that's
done, the subquery is an absolutely integral part of the outer query and
will not constrain the join tree search space at all.  However, that could
result in unpleasant growth of planning time, since the dynamic-programming
search has runtime exponential in the number of FROM-items considered.
Therefore, we don't merge FROM-lists if the result would have too many
FROM-items in one list.


Optimizer Functions
-------------------

The primary entry point is planner().

planner()
set up for recursive handling of subqueries
-subquery_planner()
 pull up sublinks and subqueries from rangetable, if possible
 canonicalize qual
     Attempt to simplify WHERE clause to the most useful form; this includes
     flattening nested AND/ORs and detecting clauses that are duplicated in
     different branches of an OR.
 simplify constant expressions
 process sublinks
 convert Vars of outer query levels into Params
--grouping_planner()
  preprocess target list for non-SELECT queries
  handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,
	ORDER BY, DISTINCT, LIMIT
---query_planner()
   make list of base relations used in query
   split up the qual into restrictions (a=1) and joins (b=c)
   find qual clauses that enable merge and hash joins
----make_one_rel()
     set_base_rel_pathlists()
      find seqscan and all index paths for each base relation
      find selectivity of columns used in joins
     make_rel_from_joinlist()
      hand off join subproblems to a plugin, GEQO, or standard_join_search()
-----standard_join_search()
      call join_search_one_level() for each level of join tree needed
      join_search_one_level():
        For each joinrel of the prior level, do make_rels_by_clause_joins()
        if it has join clauses, or make_rels_by_clauseless_joins() if not.
        Also generate "bushy plan" joins between joinrels of lower levels.
      Back at standard_join_search(), generate gather paths if needed for
      each newly constructed joinrel, then apply set_cheapest() to extract
      the cheapest path for it.
      Loop back if this wasn't the top join level.
  Back at grouping_planner:
  do grouping (GROUP BY) and aggregation
  do window functions
  make unique (DISTINCT)
  do sorting (ORDER BY)
  do limit (LIMIT/OFFSET)
Back at planner():
convert finished Path tree into a Plan tree
do final cleanup after planning


Optimizer Data Structures
-------------------------

PlannerGlobal   - global information for a single planner invocation

PlannerInfo     - information for planning a particular Query (we make
                  a separate PlannerInfo node for each sub-Query)

RelOptInfo      - a relation or joined relations

 RestrictInfo   - WHERE clauses, like "x = 3" or "y = z"
                  (note the same structure is used for restriction and
                   join clauses)

 Path           - every way to generate a RelOptInfo (sequential, index, joins)
  SeqScan       - represents a sequential scan plan
  IndexPath     - index scan
  BitmapHeapPath - top of a bitmapped index scan
  TidPath       - scan by CTID
  SubqueryScanPath - scan a subquery-in-FROM
  ForeignPath   - scan a foreign table, foreign join or foreign upper-relation
  CustomPath    - for custom scan providers
  AppendPath    - append multiple subpaths together
  MergeAppendPath - merge multiple subpaths, preserving their common sort order
  ResultPath    - a childless Result plan node (used for FROM-less SELECT)
  MaterialPath  - a Material plan node
  UniquePath    - remove duplicate rows (either by hashing or sorting)
  GatherPath    - collect the results of parallel workers
  GatherMergePath - collect parallel results, preserving their common sort order
  ProjectionPath - a Result plan node with child (used for projection)
  ProjectSetPath - a ProjectSet plan node applied to some sub-path
  SortPath      - a Sort plan node applied to some sub-path
  GroupPath     - a Group plan node applied to some sub-path
  UpperUniquePath - a Unique plan node applied to some sub-path
  AggPath       - an Agg plan node applied to some sub-path
  GroupingSetsPath - an Agg plan node used to implement GROUPING SETS
  MinMaxAggPath - a Result plan node with subplans performing MIN/MAX
  WindowAggPath - a WindowAgg plan node applied to some sub-path
  SetOpPath     - a SetOp plan node applied to some sub-path
  RecursiveUnionPath - a RecursiveUnion plan node applied to two sub-paths
  LockRowsPath  - a LockRows plan node applied to some sub-path
  ModifyTablePath - a ModifyTable plan node applied to some sub-path(s)
  LimitPath     - a Limit plan node applied to some sub-path
  NestPath      - nested-loop joins
  MergePath     - merge joins
  HashPath      - hash joins

 EquivalenceClass - a data structure representing a set of values known equal

 PathKey        - a data structure representing the sort ordering of a path

The optimizer spends a good deal of its time worrying about the ordering
of the tuples returned by a path.  The reason this is useful is that by
knowing the sort ordering of a path, we may be able to use that path as
the left or right input of a mergejoin and avoid an explicit sort step.
Nestloops and hash joins don't really care what the order of their inputs
is, but mergejoin needs suitably ordered inputs.  Therefore, all paths
generated during the optimization process are marked with their sort order
(to the extent that it is known) for possible use by a higher-level merge.

It is also possible to avoid an explicit sort step to implement a user's
ORDER BY clause if the final path has the right ordering already, so the
sort ordering is of interest even at the top level.  grouping_planner() will
look for the cheapest path with a sort order matching the desired order,
then compare its cost to the cost of using the cheapest-overall path and
doing an explicit sort on that.

When we are generating paths for a particular RelOptInfo, we discard a path
if it is more expensive than another known path that has the same or better
sort order.  We will never discard a path that is the only known way to
achieve a given sort order (without an explicit sort, that is).  In this
way, the next level up will have the maximum freedom to build mergejoins
without sorting, since it can pick from any of the paths retained for its
inputs.
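
That retention rule can be sketched like so (a toy Python version of the
add_path() idea; cost is a single number, sort order a list of sort-key
names, and "same or better sort order" is modeled as prefix containment
--- the real code also weighs startup cost, parameterization, and uses
fuzzy cost comparison):

```python
def at_least_as_sorted(pk1, pk2):
    # pk1's ordering subsumes pk2's if pk2 is a prefix of pk1.
    return pk1[:len(pk2)] == pk2

def add_path(pathlist, new):
    # Discard `new` if some existing path is no more expensive and at
    # least as well sorted; otherwise keep it and drop paths it dominates.
    for old in pathlist:
        if (old['cost'] <= new['cost']
                and at_least_as_sorted(old['pathkeys'], new['pathkeys'])):
            return pathlist
    return [p for p in pathlist
            if not (new['cost'] <= p['cost']
                    and at_least_as_sorted(new['pathkeys'], p['pathkeys']))] + [new]

paths = []
paths = add_path(paths, {'name': 'seqscan',  'cost': 100, 'pathkeys': []})
paths = add_path(paths, {'name': 'idxscan',  'cost': 120, 'pathkeys': ['x']})
paths = add_path(paths, {'name': 'idxscan2', 'cost': 150, 'pathkeys': ['x']})
```

The more expensive index path survives the cheaper seqscan because it is
the only way to get the 'x' ordering without a sort, while the second,
costlier index path with the same ordering is dominated and dropped.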


EquivalenceClasses
------------------

During the deconstruct_jointree() scan of the query's qual clauses, we look
for mergejoinable equality clauses A = B whose applicability is not delayed
by an outer join; these are called "equivalence clauses".  When we find
one, we create an EquivalenceClass containing the expressions A and B to
record this knowledge.  If we later find another equivalence clause B = C,
we add C to the existing EquivalenceClass for {A B}; this may require
merging two existing EquivalenceClasses.  At the end of the scan, we have
sets of values that are known all transitively equal to each other.  We can
therefore use a comparison of any pair of the values as a restriction or
join clause (when these values are available at the scan or join, of
course); furthermore, we need test only one such comparison, not all of
them.  Therefore, equivalence clauses are removed from the standard qual
distribution process.  Instead, when preparing a restriction or join clause
list, we examine each EquivalenceClass to see if it can contribute a
clause, and if so we select an appropriate pair of values to compare.  For
example, if we are trying to join A's relation to C's, we can generate the
clause A = C, even though this appeared nowhere explicitly in the original
query.  This may allow us to explore join paths that otherwise would have
been rejected as requiring Cartesian-product joins.
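
The class-building step, including the merge of two existing classes, can
be sketched in a few lines of Python (illustrative only; each class is
just a set of expression strings):

```python
# Toy EC machinery: finding an equivalence clause a = b merges the classes
# containing a and b, creating singleton entries as needed.

def add_equivalence(classes, a, b):
    ca = next((c for c in classes if a in c), None)
    cb = next((c for c in classes if b in c), None)
    if ca is None and cb is None:
        classes.append({a, b})
    elif ca is cb:
        pass                      # already known equal
    elif ca is None:
        cb.add(a)
    elif cb is None:
        ca.add(b)
    else:
        ca |= cb                  # merge two existing EquivalenceClasses
        classes.remove(cb)
    return classes

ecs = []
for x, y in [("a.x", "b.y"), ("c.z", "d.w"), ("b.y", "c.z")]:
    add_equivalence(ecs, x, y)
# The third clause merges the two classes into one: from that single class
# we may later emit e.g. a.x = c.z, a clause nowhere in the query text.
```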


Sometimes an EquivalenceClass may contain a pseudo-constant expression
(i.e., one not containing Vars or Aggs of the current query level, nor
volatile functions).  In this case we do not follow the policy of
dynamically generating join clauses: instead, we dynamically generate
restriction clauses "var = const" wherever one of the variable members of
the class can first be computed.  For example, if we have A = B and B = 42,
we effectively generate the restriction clauses A = 42 and B = 42, and then
we need not bother with explicitly testing the join clause A = B when the
relations are joined.  In effect, all the class members can be tested at
relation-scan level and there's never a need for join tests.

The precise technical interpretation of an EquivalenceClass is that it
asserts that at any plan node where more than one of its member values
can be computed, output rows in which the values are not all equal may
be discarded without affecting the query result.  (We require all levels
of the plan to enforce EquivalenceClasses, hence a join need not recheck
equality of values that were computable by one of its children.)  For an
ordinary EquivalenceClass that is "valid everywhere", we can further infer
that the values are all non-null, because all mergejoinable operators are
strict.  However, we also allow equivalence clauses that appear below the
nullable side of an outer join to form EquivalenceClasses; for these
classes, the interpretation is that either all the values are equal, or
all (except pseudo-constants) have gone to null.  (This requires a
limitation that non-constant members be strict, else they might not go
to null when the other members do.)  Consider for example

	SELECT *
	  FROM a LEFT JOIN
	       (SELECT * FROM b JOIN c ON b.y = c.z WHERE b.y = 10) ss
	       ON a.x = ss.y
	  WHERE a.x = 42;

We can form the below-outer-join EquivalenceClass {b.y c.z 10} and thereby
apply c.z = 10 while scanning c.  (The reason we disallow outerjoin-delayed
clauses from forming EquivalenceClasses is exactly that we want to be able
to push any derived clauses as far down as possible.)  But once above the
outer join it's no longer necessarily the case that b.y = 10, and thus we
cannot use such EquivalenceClasses to conclude that sorting is unnecessary
(see discussion of PathKeys below).

In this example, notice also that a.x = ss.y (really a.x = b.y) is not an
equivalence clause because its applicability to b is delayed by the outer
join; thus we do not try to insert b.y into the equivalence class {a.x 42}.
But since we see that a.x has been equated to 42 above the outer join, we
are able to form a below-outer-join class {b.y 42}; this restriction can be
added because no b/c row not having b.y = 42 can contribute to the result
of the outer join, and so we need not compute such rows.  Now this class
will get merged with {b.y c.z 10}, leading to the contradiction 10 = 42,
which lets the planner deduce that the b/c join need not be computed at all
because none of its rows can contribute to the outer join.  (This gets
implemented as a gating Result filter, since more usually the potential
contradiction involves Param values rather than just Consts, and thus has
to be checked at runtime.)

To aid in determining the sort ordering(s) that can work with a mergejoin,
we mark each mergejoinable clause with the EquivalenceClasses of its left
and right inputs.  For an equivalence clause, these are of course the same
EquivalenceClass.  For a non-equivalence mergejoinable clause (such as an
outer-join qualification), we generate two separate EquivalenceClasses for
the left and right inputs.  This may result in creating single-item
equivalence "classes", though of course these are still subject to merging
if other equivalence clauses are later found to bear on the same
expressions.

Another way that we may form a single-item EquivalenceClass is in creation
of a PathKey to represent a desired sort order (see below).  This is a bit
different from the above cases because such an EquivalenceClass might
contain an aggregate function or volatile expression.  (A clause containing
a volatile function will never be considered mergejoinable, even if its top
operator is mergejoinable, so there is no way for a volatile expression to
get into EquivalenceClasses otherwise.  Aggregates are disallowed in WHERE
altogether, so will never be found in a mergejoinable clause.)  This is just
a convenience to maintain a uniform PathKey representation: such an
EquivalenceClass will never be merged with any other.  Note in particular
that a single-item EquivalenceClass {a.x} is *not* meant to imply an
assertion that a.x = a.x; the practical effect of this is that a.x could
be NULL.

An EquivalenceClass also contains a list of btree opfamily OIDs, which
determines what the equalities it represents actually "mean".  All the
equivalence clauses that contribute to an EquivalenceClass must have
equality operators that belong to the same set of opfamilies.  (Note: most
of the time, a particular equality operator belongs to only one family, but
it's possible that it belongs to more than one.  We keep track of all the
families to ensure that we can make use of an index belonging to any one of
the families for mergejoin purposes.)

An EquivalenceClass can contain "em_is_child" members, which are copies
of members that contain appendrel parent relation Vars, transposed to
contain the equivalent child-relation variables or expressions.  These
members are *not* full-fledged members of the EquivalenceClass and do not
affect the class's overall properties at all.  They are kept only to
simplify matching of child-relation expressions to EquivalenceClasses.
Most operations on EquivalenceClasses should ignore child members.


PathKeys
--------

The PathKeys data structure represents what is known about the sort order
of the tuples generated by a particular Path.  A path's pathkeys field is a
list of PathKey nodes, where the n'th item represents the n'th sort key of
the result.  Each PathKey contains these fields:

	* a reference to an EquivalenceClass
	* a btree opfamily OID (must match one of those in the EC)
	* a sort direction (ascending or descending)
	* a nulls-first-or-last flag

The EquivalenceClass represents the value being sorted on.  Since the
various members of an EquivalenceClass are known equal according to the
opfamily, we can consider a path sorted by any one of them to be sorted by
any other too; this is what justifies referencing the whole
EquivalenceClass rather than just one member of it.
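
That justification can be made concrete with a tiny model (illustrative
Python; a PathKey is reduced to an (EquivalenceClass, direction) pair,
with the EC as a frozenset of expression names):

```python
# a.x = b.y was found to be an equivalence clause, so both expressions
# live in one EquivalenceClass:
ec_xy = frozenset({"a.x", "b.y"})

def satisfies(path_keys, wanted):
    # A path's ordering satisfies a request if the requested keys are a
    # prefix of the path's keys, comparing whole ECs rather than the
    # particular member expressions that happen to be named.
    return path_keys[:len(wanted)] == wanted

# A path known to be sorted by b.y ascending...
path_keys = [(ec_xy, "asc")]
# ...satisfies a request to sort by a.x ascending, since a.x and b.y
# share the same EquivalenceClass:
wanted = [(ec_xy, "asc")]
```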
562
563In single/base relation RelOptInfo's, the Paths represent various ways
564of scanning the relation and the resulting ordering of the tuples.
565Sequential scan Paths have NIL pathkeys, indicating no known ordering.
566Index scans have Path.pathkeys that represent the chosen index's ordering,
567if any.  A single-key index would create a single-PathKey list, while a
568multi-column index generates a list with one element per key index column.
569Non-key columns specified in the INCLUDE clause of covering indexes don't
570have corresponding PathKeys in the list, because the have no influence on
571index ordering.  (Actually, since an index can be scanned either forward or
572backward, there are two possible sort orders and two possible PathKey lists
573it can generate.)

Note that a bitmap scan has NIL pathkeys since we can say nothing about
the overall order of its result.  Also, an indexscan on an unordered type
of index generates NIL pathkeys.  However, we can always create a pathkey
by doing an explicit sort.  The pathkeys for a Sort plan's output just
represent the sort key fields and the ordering operators used.

Things get more interesting when we consider joins.  Suppose we do a
mergejoin between A and B using the mergeclause A.X = B.Y.  The output
of the mergejoin is sorted by X --- but it is also sorted by Y.  Again,
this can be represented by a PathKey referencing an EquivalenceClass
containing both X and Y.

With a little further thought, it becomes apparent that nestloop joins
can also produce sorted output.  For example, if we do a nestloop join
between outer relation A and inner relation B, then any pathkeys relevant
to A are still valid for the join result: we have not altered the order of
the tuples from A.  Even more interesting, if there was an equivalence clause
A.X=B.Y, and A.X was a pathkey for the outer relation A, then we can assert
that B.Y is a pathkey for the join result; X was ordered before and still
is, and the joined values of Y are equal to the joined values of X, so Y
must now be ordered too.  This is true even though we used neither an
explicit sort nor a mergejoin on Y.  (Note: hash joins cannot be counted
on to preserve the order of their outer relation, because the executor
might decide to "batch" the join, so we always set pathkeys to NIL for
a hashjoin path.)  Exception: a RIGHT or FULL join doesn't preserve the
ordering of its outer relation, because it might insert nulls at random
points in the ordering.

In general, we can justify using EquivalenceClasses as the basis for
pathkeys because, whenever we scan a relation containing multiple
EquivalenceClass members or join two relations each containing
EquivalenceClass members, we apply restriction or join clauses derived from
the EquivalenceClass.  This guarantees that any two values listed in the
EquivalenceClass are in fact equal in all tuples emitted by the scan or
join, and therefore that if the tuples are sorted by one of the values,
they can be considered sorted by any other as well.  It does not matter
whether the test clause is used as a mergeclause, or merely enforced
after-the-fact as a qpqual filter.

Note that there is no particular difficulty in labeling a path's sort
order with a PathKey referencing an EquivalenceClass that contains
variables not yet joined into the path's output.  We can simply ignore
such entries as not being relevant (yet).  This makes it possible to
use the same EquivalenceClasses throughout the join planning process.
In fact, by being careful not to generate multiple identical PathKey
objects, we can reduce comparison of EquivalenceClasses and PathKeys
to simple pointer comparison, which is a huge savings because add_path
has to make a large number of PathKey comparisons in deciding whether
competing Paths are equivalently sorted.
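The "one canonical object per distinct sort key" idea can be sketched with a
small interning cache.  This is illustrative pseudocode only (the real code
is C in pathkeys.c, and the helper name and field layout here are invented):

```python
# Illustrative sketch of canonical-PathKey interning.  Handing out one
# shared object per distinct (EC, opfamily, direction, nulls-first)
# combination lets sort orders be compared by simple pointer (identity)
# comparison, as described above.

class PathKey:
    def __init__(self, ec, opfamily, descending, nulls_first):
        self.ec, self.opfamily = ec, opfamily
        self.descending, self.nulls_first = descending, nulls_first

_canonical = {}

def make_canonical_pathkey(ec, opfamily, descending, nulls_first):
    key = (ec, opfamily, descending, nulls_first)
    if key not in _canonical:
        _canonical[key] = PathKey(ec, opfamily, descending, nulls_first)
    return _canonical[key]

pk1 = make_canonical_pathkey('EC{a.x,b.y}', 1976, False, False)
pk2 = make_canonical_pathkey('EC{a.x,b.y}', 1976, False, False)
pk3 = make_canonical_pathkey('EC{a.x,b.y}', 1976, True, False)
assert pk1 is pk2       # same sort key: identity comparison suffices
assert pk1 is not pk3   # the DESC variant is a distinct canonical object
```

(The opfamily number is an arbitrary placeholder.)  Comparing whole pathkey
lists then reduces to element-wise identity checks, which is what makes the
many comparisons in add_path cheap.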

Pathkeys are also useful to represent an ordering that we wish to achieve,
since they are easily compared to the pathkeys of a potential candidate
path.  So, SortGroupClause lists are turned into pathkeys lists for use
inside the optimizer.

An additional refinement we can make is to insist that canonical pathkey
lists (sort orderings) do not mention the same EquivalenceClass more than
once.  For example, in all these cases the second sort column is redundant,
because it cannot distinguish values that are the same according to the
first sort column:
	SELECT ... ORDER BY x, x
	SELECT ... ORDER BY x, x DESC
	SELECT ... WHERE x = y ORDER BY x, y
Although a user probably wouldn't write "ORDER BY x,x" directly, such
redundancies are more probable once equivalence classes have been
considered.  Also, the system may generate redundant pathkey lists when
computing the sort ordering needed for a mergejoin.  By eliminating the
redundancy, we save time and improve planning, since the planner will more
easily recognize equivalent orderings as being equivalent.

Another interesting property is that if the underlying EquivalenceClass
contains a constant and is not below an outer join, then the pathkey is
completely redundant and need not be sorted by at all!  Every row must
contain the same constant value, so there's no need to sort.  (If the EC is
below an outer join, we still have to sort, since some of the rows might
have gone to null and others not.  In this case we must be careful to pick
a non-const member to sort by.  The assumption that all the non-const
members go to null at the same plan level is critical here, else they might
not produce the same sort order.)  This might seem pointless because users
are unlikely to write "... WHERE x = 42 ORDER BY x", but it allows us to
recognize when particular index columns are irrelevant to the sort order:
if we have "... WHERE x = 42 ORDER BY y", scanning an index on (x,y)
produces correctly ordered data without a sort step.  We used to have very
ugly ad-hoc code to recognize that in limited contexts, but discarding
constant ECs from pathkeys makes it happen cleanly and automatically.
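The two pruning rules above (drop a pathkey whose EC already appeared, and
drop a pathkey whose EC contains a constant) can be sketched as a single
pass over the list.  This is a toy model with invented names, not the real
logic in pathkeys.c:

```python
# Illustrative sketch of pruning redundant pathkeys.  A sort column is
# redundant if its EquivalenceClass already appeared earlier in the list,
# or if the EC contains a constant (and is not below an outer join).

def prune_pathkeys(pathkeys):
    # pathkeys: list of (ec_name, ec_has_const) pairs, in sort order.
    seen = set()
    result = []
    for ec, has_const in pathkeys:
        if has_const:
            continue    # every row has the same value: no sort needed
        if ec in seen:
            continue    # cannot distinguish rows the earlier key ties
        seen.add(ec)
        result.append(ec)
    return result

# "WHERE x = 42 ORDER BY x, y" --- the x key drops out entirely:
assert prune_pathkeys([('EC{x,42}', True), ('EC{y}', False)]) == ['EC{y}']
# "WHERE x = y ORDER BY x, y" --- x and y share an EC, so y is redundant:
assert prune_pathkeys([('EC{x,y}', False), ('EC{x,y}', False)]) == ['EC{x,y}']
```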

You might object that a below-outer-join EquivalenceClass doesn't always
represent the same values at every level of the join tree, and so using
it to uniquely identify a sort order is dubious.  This is true, but we
can avoid dealing with the fact explicitly because we always consider that
an outer join destroys any ordering of its nullable inputs.  Thus, even
if a path was sorted by {a.x} below an outer join, we'll re-sort if that
sort ordering was important; and so using the same PathKey for both sort
orderings doesn't create any real problem.


Order of processing for EquivalenceClasses and PathKeys
-------------------------------------------------------

As alluded to above, there is a specific sequence of phases in the
processing of EquivalenceClasses and PathKeys during planning.  During the
initial scanning of the query's quals (deconstruct_jointree followed by
reconsider_outer_join_clauses), we construct EquivalenceClasses based on
mergejoinable clauses found in the quals.  At the end of this process,
we know all we can know about equivalence of different variables, so
subsequently there will be no further merging of EquivalenceClasses.
At that point it is possible to consider the EquivalenceClasses as
"canonical" and build canonical PathKeys that reference them.  At this
time we construct PathKeys for the query's ORDER BY and related clauses.
(Any ordering expressions that do not appear elsewhere will result in
the creation of new EquivalenceClasses, but this cannot result in merging
existing classes, so canonical-ness is not lost.)

Because all the EquivalenceClasses are known before we begin path
generation, we can use them as a guide to which indexes are of interest:
if an index's column is not mentioned in any EquivalenceClass then that
index's sort order cannot possibly be helpful for the query.  This allows
short-circuiting of much of the processing of create_index_paths() for
irrelevant indexes.

There are some cases where planner.c constructs additional
EquivalenceClasses and PathKeys after query_planner has completed.
In these cases, the extra ECs/PKs are needed to represent sort orders
that were not considered during query_planner.  Such situations should be
minimized since it is impossible for query_planner to return a plan
producing such a sort order, meaning an explicit sort will always be needed.
Currently this happens only for queries involving multiple window functions
with different orderings, for which extra sorts are needed anyway.


Parameterized Paths
-------------------

The naive way to join two relations using a clause like WHERE A.X = B.Y
is to generate a nestloop plan like this:

	NestLoop
		Filter: A.X = B.Y
		-> Seq Scan on A
		-> Seq Scan on B

We can make this better by using a merge or hash join, but it still
requires scanning all of both input relations.  If A is very small and B is
very large, but there is an index on B.Y, it can be enormously better to do
something like this:

	NestLoop
		-> Seq Scan on A
		-> Index Scan using B_Y_IDX on B
			Index Condition: B.Y = A.X

Here, we are expecting that for each row scanned from A, the nestloop
plan node will pass down the current value of A.X into the scan of B.
That allows the indexscan to treat A.X as a constant for any one
invocation, and thereby use it as an index key.  This is the only plan type
that can avoid fetching all of B, and for small numbers of rows coming from
A, that will dominate every other consideration.  (As A gets larger, this
gets less attractive, and eventually a merge or hash join will win instead.
So we have to cost out all the alternatives to decide what to do.)

It can be useful for the parameter value to be passed down through
intermediate layers of joins, for example:

	NestLoop
		-> Seq Scan on A
		Hash Join
			Join Condition: B.Y = C.W
			-> Seq Scan on B
			-> Index Scan using C_Z_IDX on C
				Index Condition: C.Z = A.X

If all joins are plain inner joins then this is usually unnecessary,
because it's possible to reorder the joins so that a parameter is used
immediately below the nestloop node that provides it.  But in the
presence of outer joins, such join reordering may not be possible.

Also, the bottom-level scan might require parameters from more than one
other relation.  In principle we could join the other relations first
so that all the parameters are supplied from a single nestloop level.
But if those other relations have no join clause in common (which is
common in star-schema queries for instance), the planner won't consider
joining them directly to each other.  In such a case we need to be able
to create a plan like

    NestLoop
        -> Seq Scan on SmallTable1 A
        NestLoop
            -> Seq Scan on SmallTable2 B
            -> Index Scan using XYIndex on LargeTable C
                 Index Condition: C.X = A.AID and C.Y = B.BID

so we should be willing to pass down A.AID through a join even though
there is no join order constraint forcing the plan to look like this.

Before version 9.2, Postgres used ad-hoc methods for planning and
executing nestloop queries of this kind, and those methods could not
handle passing parameters down through multiple join levels.

To plan such queries, we now use a notion of a "parameterized path",
which is a path that makes use of a join clause to a relation that's not
scanned by the path.  In the example above involving relations A, B, and C,
we would construct a path representing the possibility of doing this:

	-> Index Scan using C_Z_IDX on C
		Index Condition: C.Z = A.X

This path will be marked as being parameterized by relation A.  (Note that
this is only one of the possible access paths for C; we'd still have a
plain unparameterized seqscan, and perhaps other possibilities.)  The
parameterization marker does not prevent joining the path to B, so one of
the paths generated for the joinrel {B C} will represent

	Hash Join
		Join Condition: B.Y = C.W
		-> Seq Scan on B
		-> Index Scan using C_Z_IDX on C
			Index Condition: C.Z = A.X

This path is still marked as being parameterized by A.  When we attempt to
join {B C} to A to form the complete join tree, such a path can only be
used as the inner side of a nestloop join: it will be ignored for other
possible join types.  So we will form a join path representing the query
plan shown above, and it will compete in the usual way with paths built
from non-parameterized scans.

While all ordinary paths for a particular relation generate the same set
of rows (since they must all apply the same set of restriction clauses),
parameterized paths typically generate fewer rows than less-parameterized
paths, since they have additional clauses to work with.  This means we
must consider the number of rows generated as an additional figure of
merit.  A path that costs more than another, but generates fewer rows,
must be kept since the smaller number of rows might save work at some
intermediate join level.  (It would not save anything if joined
immediately to the source of the parameters.)

To keep cost estimation rules relatively simple, we make an implementation
restriction that all paths for a given relation of the same parameterization
(i.e., the same set of outer relations supplying parameters) must have the
same rowcount estimate.  This is justified by insisting that each such path
apply *all* join clauses that are available with the named outer relations.
Different paths might, for instance, choose different join clauses to use
as index clauses; but they must then apply any other join clauses available
from the same outer relations as filter conditions, so that the set of rows
returned is held constant.  This restriction doesn't degrade the quality of
the finished plan: it amounts to saying that we should always push down
movable join clauses to the lowest possible evaluation level, which is a
good thing anyway.  The restriction is useful in particular to support
pre-filtering of join paths in add_path_precheck.  Without this rule we
could never reject a parameterized path in advance of computing its rowcount
estimate, which would greatly reduce the value of the pre-filter mechanism.
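The value of the equal-rowcount rule for pre-filtering can be sketched as
follows.  This is a toy model of the idea only, with invented names; the
real add_path_precheck logic in C also considers startup cost and pathkeys:

```python
# Illustrative sketch: because all paths of the same parameterization are
# guaranteed to return the same number of rows, a new candidate path can
# be rejected on cost alone, before its rowcount is ever computed.

def precheck(existing_paths, new_param_rels, new_total_cost):
    # existing_paths: list of (param_rels, total_cost) already accepted.
    for param_rels, total_cost in existing_paths:
        if param_rels == new_param_rels and total_cost <= new_total_cost:
            return False    # dominated: reject without estimating rowcount
    return True             # worth building and costing fully

kept = [(frozenset({'A'}), 100.0)]
assert precheck(kept, frozenset({'A'}), 150.0) is False  # same params, costlier
assert precheck(kept, frozenset({'A'}), 80.0) is True    # cheaper: keep it
assert precheck(kept, frozenset({'B'}), 150.0) is True   # different params:
                                                         # rowcounts may differ
```

Without the equal-rowcount restriction, the costlier path in the first case
might still be worth keeping (it could return fewer rows), so no early
rejection would be possible.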

To limit planning time, we have to avoid generating an unreasonably large
number of parameterized paths.  We do this by only generating parameterized
relation scan paths for index scans, and then only for indexes for which
suitable join clauses are available.  There are also heuristics in join
planning that try to limit the number of parameterized paths considered.

In particular, there's been a deliberate policy decision to favor hash
joins over merge joins for parameterized join steps (those occurring below
a nestloop that provides parameters to the lower join's inputs).  While we
do not ignore merge joins entirely, joinpath.c does not fully explore the
space of potential merge joins with parameterized inputs.  Also, add_path
treats parameterized paths as having no pathkeys, so that they compete
only on cost and rowcount; they don't get preference for producing a
special sort order.  This creates additional bias against merge joins,
since we might discard a path that could have been useful for performing
a merge without an explicit sort step.  Since a parameterized path must
ultimately be used on the inside of a nestloop, where its sort order is
uninteresting, these choices do not affect any requirement for the final
output order of a query --- they only make it harder to use a merge join
at a lower level.  The savings in planning work justifies that.

Similarly, parameterized paths do not normally get preference in add_path
for having cheap startup cost; that's seldom of much value when on the
inside of a nestloop, so it seems not worth keeping extra paths solely for
that.  An exception occurs for parameterized paths for the RHS relation of
a SEMI or ANTI join: in those cases, we can stop the inner scan after the
first match, so it's primarily startup not total cost that we care about.


LATERAL subqueries
------------------

As of 9.3 we support SQL-standard LATERAL references from subqueries in
FROM (and also functions in FROM).  The planner implements these by
generating parameterized paths for any RTE that contains lateral
references.  In such cases, *all* paths for that relation will be
parameterized by at least the set of relations used in its lateral
references.  (And in turn, join relations including such a subquery might
not have any unparameterized paths.)  All the other comments made above for
parameterized paths still apply, though; in particular, each such path is
still expected to enforce any join clauses that can be pushed down to it,
so that all paths of the same parameterization have the same rowcount.

We also allow LATERAL subqueries to be flattened (pulled up into the parent
query) by the optimizer, but only when this does not introduce lateral
references into JOIN/ON quals that would refer to relations outside the
lowest outer join at/above that qual.  The semantics of such a qual would
be unclear.  Note that even with this restriction, pullup of a LATERAL
subquery can result in creating PlaceHolderVars that contain lateral
references to relations outside their syntactic scope.  We still evaluate
such PHVs at their syntactic location or lower, but the presence of such a
PHV in the quals or targetlist of a plan node requires that node to appear
on the inside of a nestloop join relative to the rel(s) supplying the
lateral reference.  (Perhaps now that that stuff works, we could relax the
pullup restriction?)


Security-level constraints on qual clauses
------------------------------------------

To support row-level security and security-barrier views efficiently,
we mark qual clauses (RestrictInfo nodes) with a "security_level" field.
The basic concept is that a qual with a lower security_level must be
evaluated before one with a higher security_level.  This ensures that
"leaky" quals that might expose sensitive data are not evaluated until
after the security barrier quals that are supposed to filter out
security-sensitive rows.  However, many qual conditions are "leakproof",
that is, we trust the functions they use not to expose data.  To avoid
unnecessarily inefficient plans, a leakproof qual is not delayed by
security-level considerations, even if it has a higher syntactic
security_level than another qual.

In a query that contains no use of RLS or security-barrier views, all
quals will have security_level zero, so that none of these restrictions
kick in; we don't even need to check leakproofness of qual conditions.

If there are security-barrier quals, they get security_level zero (and
possibly higher, if there are multiple layers of barriers).  Regular quals
coming from the query text get a security_level one more than the highest
level used for barrier quals.
When new qual clauses are generated by EquivalenceClass processing,
they must be assigned a security_level.  This is trickier than it seems.
One's first instinct is that it would be safe to use the largest level
found among the source quals for the EquivalenceClass, but that isn't
safe at all, because it allows unwanted delays of security-barrier quals.
Consider a barrier qual "t.x = t.y" plus a query qual "t.x = constant",
and suppose there is another query qual "leaky_function(t.z)" that
we mustn't evaluate before the barrier qual has been checked.
We will have an EC {t.x, t.y, constant} which will lead us to replace
the EC quals with "t.x = constant AND t.y = constant".  (We do not want
to give up that behavior, either, since the latter condition could allow
use of an index on t.y, which we would never discover from the original
quals.)  If these generated quals are assigned the same security_level as
the query quals, then it's possible for the leaky_function qual to be
evaluated first, allowing leaky_function to see data from rows that
possibly don't pass the barrier condition.

Instead, our handling of security levels with ECs works like this:

* Quals are not accepted as source clauses for ECs in the first place
unless they are leakproof or have security_level zero.

* EC-derived quals are assigned the minimum (not maximum) security_level
found among the EC's source clauses.

* If the maximum security_level found among the EC's source clauses is
above zero, then the equality operators selected for derived quals must
be leakproof.  When no such operator can be found, the EC is treated as
"broken" and we fall back to emitting its source clauses without any
additional derived quals.

These rules together ensure that an untrusted qual clause (one with
security_level above zero) cannot cause an EC to generate a leaky derived
clause.  This makes it safe to use the minimum not maximum security_level
for derived clauses.  The rules could result in poor plans due to not
being able to generate derived clauses at all, but the risk of that is
small in practice because most btree equality operators are leakproof.
Also, by making exceptions for level-zero quals, we ensure that there is
no plan degradation when no barrier quals are present.
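The three rules above can be restated as a small Python sketch (invented
helper names, purely illustrative; the real implementation is C code in
equivclass.c):

```python
# Illustrative sketch of the EC security-level rules above.  A qual is
# modeled as just (security_level, leakproof) for simplicity.

def ec_accepts_source(level, leakproof):
    # Rule 1: only leakproof or level-zero quals become EC sources.
    return leakproof or level == 0

def derived_qual_level(source_levels):
    # Rule 2: derived quals get the MINIMUM source level, so they can
    # never be delayed past a security-barrier qual.
    return min(source_levels)

def derived_op_must_be_leakproof(source_levels):
    # Rule 3: if any source is above level zero, derived equality
    # operators must be leakproof (else the EC is treated as "broken").
    return max(source_levels) > 0

# Barrier qual at level 0 plus an ordinary query qual at level 1:
levels = [0, 1]
assert derived_qual_level(levels) == 0        # evaluated with the barrier
assert derived_op_must_be_leakproof(levels)   # derived op must not leak
```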

Once we have security levels assigned to all clauses, enforcement
of barrier-qual ordering restrictions boils down to two rules:

* Table scan plan nodes must not select quals for early execution
(for example, use them as index qualifiers in an indexscan) unless
they are leakproof or have security_level no higher than any other
qual that is due to be executed at the same plan node.  (Use the
utility function restriction_is_securely_promotable() to check
whether it's okay to select a qual for early execution.)

* Normal execution of a list of quals must execute them in an order
that satisfies the same security rule, ie higher security_levels must
be evaluated later unless leakproof.  (This is handled in a single place
by order_qual_clauses() in createplan.c.)

order_qual_clauses() uses a heuristic to decide exactly what to do with
leakproof clauses.  Normally it sorts clauses by security_level then cost,
being careful that the sort is stable so that we don't reorder clauses
without a clear reason.  But this could result in a very expensive qual
being done before a cheaper one that is of higher security_level.
If the cheaper qual is leaky we have no choice, but if it is leakproof
we could put it first.  We choose to sort leakproof quals as if they
have security_level zero, but only when their cost is less than 10X
cpu_operator_cost; that restriction alleviates the opposite problem of
doing expensive quals first just because they're leakproof.
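The ordering heuristic just described can be sketched like this (toy data
layout and names; the real code is C in createplan.c):

```python
# Illustrative sketch of the order_qual_clauses() heuristic.

CPU_OPERATOR_COST = 0.0025  # stand-in for the cpu_operator_cost setting

def order_quals(quals):
    # quals: list of dicts with 'security_level', 'leakproof', 'cost'.
    def effective_level(q):
        # Cheap leakproof quals are sorted as if at security level zero.
        if q['leakproof'] and q['cost'] < 10 * CPU_OPERATOR_COST:
            return 0
        return q['security_level']
    # Python's sort is stable, matching the requirement that clauses not
    # be reordered without a clear reason.  Sort by level, then cost.
    return sorted(quals, key=lambda q: (effective_level(q), q['cost']))

quals = [
    {'name': 'leaky_expensive', 'security_level': 1, 'leakproof': False, 'cost': 1.0},
    {'name': 'barrier', 'security_level': 0, 'leakproof': False, 'cost': 0.01},
    {'name': 'cheap_leakproof', 'security_level': 1, 'leakproof': True, 'cost': 0.0025},
]
result = [q['name'] for q in order_quals(quals)]
# cheap_leakproof is promoted to level 0 and is cheaper than the barrier
# qual; the leaky level-1 qual must run last:
assert result == ['cheap_leakproof', 'barrier', 'leaky_expensive']
```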

Additional rules will be needed to support safe handling of join quals
when there is a mix of security levels among join quals; for example, it
will be necessary to prevent leaky higher-security-level quals from being
evaluated at a lower join level than other quals of lower security level.
Currently there is no need to consider that since security-prioritized
quals can only be single-table restriction quals coming from RLS policies
or security-barrier views, and security-barrier view subqueries are never
flattened into the parent query.  Hence enforcement of security-prioritized
quals only happens at the table scan level.  With extra rules for safe
handling of security levels among join quals, it should be possible to let
security-barrier views be flattened into the parent query, allowing more
flexibility of planning while still preserving required ordering of qual
evaluation.  But that will come later.


Post scan/join planning
-----------------------

So far we have discussed only scan/join planning, that is, implementation
of the FROM and WHERE clauses of a SQL query.  But the planner must also
determine how to deal with GROUP BY, aggregation, and other higher-level
features of queries; and in many cases there are multiple ways to do these
steps and thus opportunities for optimization choices.  These steps, like
scan/join planning, are handled by constructing Paths representing the
different ways to do a step, then choosing the cheapest Path.

Since all Paths require a RelOptInfo as "parent", we create RelOptInfos
representing the outputs of these upper-level processing steps.  These
RelOptInfos are mostly dummy, but their pathlists hold all the Paths
considered useful for each step.  Currently, we may create these types of
additional RelOptInfos during upper-level planning:

UPPERREL_SETOP		result of UNION/INTERSECT/EXCEPT, if any
UPPERREL_PARTIAL_GROUP_AGG	result of partial grouping/aggregation, if any
UPPERREL_GROUP_AGG	result of grouping/aggregation, if any
UPPERREL_WINDOW		result of window functions, if any
UPPERREL_DISTINCT	result of "SELECT DISTINCT", if any
UPPERREL_ORDERED	result of ORDER BY, if any
UPPERREL_FINAL		result of any remaining top-level actions

UPPERREL_FINAL is used to represent any final processing steps, currently
LockRows (SELECT FOR UPDATE), LIMIT/OFFSET, and ModifyTable.  There is no
flexibility about the order in which these steps are done, and thus no need
to subdivide this stage more finely.

These "upper relations" are identified by the UPPERREL enum values shown
above, plus a relids set, which allows there to be more than one upperrel
of the same kind.  We use NULL for the relids if there's no need for more
than one upperrel of the same kind.  Currently, in fact, the relids set
is vestigial because it's always NULL, but that's expected to change in
the future.  For example, in planning set operations, we might need the
relids to denote which subset of the leaf SELECTs has been combined in a
particular group of Paths that are competing with each other.

The result of subquery_planner() is always returned as a set of Paths
stored in the UPPERREL_FINAL rel with NULL relids.  The other types of
upperrels are created only if needed for the particular query.


Parallel Query and Partial Paths
--------------------------------

Parallel query involves dividing up the work that needs to be performed
either by an entire query or some portion of the query in such a way that
some of that work can be done by one or more worker processes, which are
called parallel workers.  Parallel workers are a subtype of dynamic
background workers; see src/backend/access/transam/README.parallel for a
fuller description.  The academic literature on parallel query suggests
that parallel execution strategies can be divided into essentially two
categories: pipelined parallelism, where the execution of the query is
divided into multiple stages and each stage is handled by a separate
process; and partitioning parallelism, where the data is split between
multiple processes and each process handles a subset of it.  The
literature, however, suggests that gains from pipeline parallelism are
often very limited due to the difficulty of avoiding pipeline stalls.
Consequently, we do not currently attempt to generate query plans that
use this technique.

Instead, we focus on partitioning parallelism, which does not require
that the underlying table be partitioned.  It only requires that (1)
there is some method of dividing the data from at least one of the base
tables involved in the relation across multiple processes, (2) that each
process can handle its own portion of the data, and (3) that the results
can be collected.  Requirements (2) and (3) are satisfied by the
executor node Gather (or GatherMerge), which launches any number of worker
processes and executes its single child plan in all of them, and perhaps
in the leader also, if the children aren't generating enough data to keep
the leader busy.  Requirement (1) is handled by the table scan node: when
invoked with parallel_aware = true, this node will, in effect, partition
the table on a block-by-block basis, returning a subset of the tuples from
the relation in each worker where that scan node is executed.
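Requirement (1) can be pictured with a toy simulation: workers claim
successive block numbers from a shared counter, so each block is read by
exactly one worker.  This is an invented sketch of the idea only, not the
actual table-scan C code:

```python
# Illustrative sketch of block-by-block parallel scan partitioning.
from itertools import count

def run_parallel_scan(nblocks, nworkers):
    next_block = count()            # shared "next block to claim" counter
    assignments = {w: [] for w in range(nworkers)}
    # Round-robin simulation; in reality each worker claims another block
    # whenever it finishes its previous one, so the split is dynamic and
    # naturally balances uneven per-block work.
    while True:
        done = True
        for w in range(nworkers):
            b = next(next_block)
            if b >= nblocks:
                continue
            assignments[w].append(b)
            done = False
        if done:
            break
    return assignments

parts = run_parallel_scan(nblocks=10, nworkers=3)
all_blocks = sorted(b for blocks in parts.values() for b in blocks)
assert all_blocks == list(range(10))   # every block scanned exactly once
```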

Just as we do for non-parallel access methods, we build Paths to
represent access strategies that can be used in a parallel plan.  These
are, in essence, the same strategies that are available in the
non-parallel plan, but there is an important difference: a path that
will run beneath a Gather node returns only a subset of the query
results in each worker, not all of them.  To form a path that can
actually be executed, the (rather large) cost of the Gather node must be
accounted for.  For this reason among others, paths intended to run
beneath a Gather node --- which we call "partial" paths since they return
only a subset of the results in each worker --- must be kept separate from
ordinary paths (see RelOptInfo's partial_pathlist and the function
add_partial_path).

One of the keys to making parallel query effective is to run as much of
the query in parallel as possible.  Therefore, we expect it to generally
be desirable to postpone the Gather stage until as near to the top of the
plan as possible.  Expanding the range of cases in which more work can be
pushed below the Gather (and costing them accurately) is likely to keep us
busy for a long time to come.
Partitionwise joins
-------------------

A join between two similarly partitioned tables can be broken down into
joins between their matching partitions if there exists an equi-join
condition between the partition keys of the joining tables.  The equi-join
between partition keys implies that all join partners for a given row in
one partitioned table must be in the corresponding partition of the other
partitioned table.  Because of this, a join between partitioned tables can
be broken into joins between the matching partitions.  The resultant join
is partitioned in the same way as the joining relations, thus allowing an
N-way join between similarly partitioned tables having equi-join conditions
between their partition keys to be broken down into N-way joins between
their matching partitions.  This technique of breaking down a join between
partitioned tables into joins between their partitions is called
partitionwise join.  We will use the term "partitioned relation" for either
a partitioned table or a join between compatibly partitioned tables.
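The idea can be pictured with a toy data model (not planner code): with an
equi-join on the partition key, only matching partitions need to be joined,
and the result keeps the same partitioning.

```python
# Illustrative sketch of partitionwise join on a toy representation.

def join_partition(left_rows, right_rows):
    # Naive inner equi-join on the partition key (first column).
    return [(l, r) for l in left_rows for r in right_rows if l[0] == r[0]]

def partitionwise_join(left_parts, right_parts):
    # left_parts/right_parts: dict partition_bound -> rows.  Join
    # partition by partition; the output keeps the partitioning, so it
    # could itself participate in a further partitionwise join.
    return {bound: join_partition(left_parts[bound], right_parts[bound])
            for bound in left_parts if bound in right_parts}

A = {'p1': [(1, 'a'), (2, 'b')], 'p2': [(5, 'c')]}
B = {'p1': [(1, 'x')], 'p2': [(5, 'y'), (6, 'z')]}
result = partitionwise_join(A, B)
assert result['p1'] == [((1, 'a'), (1, 'x'))]
assert result['p2'] == [((5, 'c'), (5, 'y'))]
```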

The partitioning properties of a partitioned relation are stored in its
RelOptInfo.  The information about the data types of the partition keys is
stored in a PartitionSchemeData structure.  The planner maintains a list of
canonical partition schemes (distinct PartitionSchemeData objects) so that
the RelOptInfos of any two partitioned relations with the same partitioning
scheme point to the same PartitionSchemeData object.  This reduces the
memory consumed by PartitionSchemeData objects and makes it easy to compare
the partition schemes of joining relations.

Partitionwise aggregates/grouping
---------------------------------

If the GROUP BY clause contains all of the partition keys, all the rows
that belong to a given group must come from a single partition; therefore,
aggregation can be done completely separately for each partition.  Otherwise,
partial aggregates can be computed for each partition, and then finalized
after appending the results from the individual partitions.  This technique
of breaking down aggregation or grouping over a partitioned relation into
aggregation or grouping over its partitions is called partitionwise
aggregation.  Especially when the partition keys match the GROUP BY clause,
this can be significantly faster than the regular method.
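Both variants can be pictured with a toy counting aggregate (illustrative
only; this is not executor code):

```python
# Illustrative sketch of partitionwise aggregation using COUNT per group.
from collections import Counter

def full_agg_per_partition(partitions):
    # When the GROUP BY keys include the partition key, each group lives
    # entirely in one partition, so per-partition results can simply be
    # appended with no cross-partition combining.
    out = {}
    for rows in partitions.values():
        out.update(Counter(rows))   # groups never span partitions
    return out

def partial_then_finalize(partitions):
    # Otherwise, compute partial aggregates per partition, then combine
    # ("finalize") them after appending.
    total = Counter()
    for rows in partitions.values():
        total += Counter(rows)      # partial counts, then merge
    return dict(total)

parts = {'p1': ['a', 'a', 'b'], 'p2': ['c']}
assert full_agg_per_partition(parts) == {'a': 2, 'b': 1, 'c': 1}
parts2 = {'p1': ['a', 'a'], 'p2': ['a', 'b']}   # group 'a' spans partitions
assert partial_then_finalize(parts2) == {'a': 3, 'b': 1}
```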