src/backend/optimizer/README

Optimizer
=========

These directories take the Query structure returned by the parser, and
generate a plan used by the executor.  The /plan directory generates the
actual output plan, the /path code generates all possible ways to join the
tables, and /prep handles various preprocessing steps for special cases.
/util is utility stuff.  /geqo is the separate "genetic optimization" planner
--- it does a semi-random search through the join tree space, rather than
exhaustively considering all possible join trees.  (But each join considered
by /geqo is given to /path to create paths for, so we consider all possible
implementation paths for each specific join pair even in GEQO mode.)


Paths and Join Pairs
--------------------

During the planning/optimizing process, we build "Path" trees representing
the different ways of doing a query.  We select the cheapest Path that
generates the desired relation and turn it into a Plan to pass to the
executor.  (There is pretty nearly a one-to-one correspondence between the
Path and Plan trees, but Path nodes omit info that won't be needed during
planning, and include info needed for planning that won't be needed by the
executor.)

The optimizer builds a RelOptInfo structure for each base relation used in
the query.  Base rels are either primitive tables, or subquery subselects
that are planned via a separate recursive invocation of the planner.  A
RelOptInfo is also built for each join relation that is considered during
planning.  A join rel is simply a combination of base rels.  There is only
one join RelOptInfo for any given set of baserels --- for example, the join
{A B C} is represented by the same RelOptInfo no matter whether we build it
by joining A and B first and then adding C, or joining B and C first and
then adding A, etc.  These different means of building the joinrel are
represented as Paths.  For each RelOptInfo we build a list of Paths that
represent plausible ways to implement the scan or join of that relation.
Once we've considered all the plausible Paths for a rel, we select the one
that is cheapest according to the planner's cost estimates.  The final plan
is derived from the cheapest Path for the RelOptInfo that includes all the
base rels of the query.

Possible Paths for a primitive table relation include plain old sequential
scan, plus index scans for any indexes that exist on the table, plus bitmap
index scans using one or more indexes.  Specialized RTE types, such as
function RTEs, may have only one possible Path.

Joins always occur using two RelOptInfos.  One is outer, the other inner.
Outers drive lookups of values in the inner.  In a nested loop, lookups of
values in the inner occur by scanning the inner path once per outer tuple
to find each matching inner row.  In a mergejoin, inner and outer rows are
ordered, and are accessed in order, so only one scan is required to perform
the entire join: both inner and outer paths are scanned in-sync.  (There's
not a lot of difference between inner and outer in a mergejoin...)  In a
hashjoin, the inner is scanned first and all its rows are entered in a
hashtable, then the outer is scanned and for each row we look up the join
key in the hashtable.
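
The hashjoin strategy just described can be sketched in a few lines of
Python (a toy illustration only, not the executor's actual implementation;
all names here are invented):

```python
# Toy sketch of the hashjoin strategy: load the inner relation's rows
# into a hashtable keyed by the join column, then scan the outer once,
# probing the table for each row.
from collections import defaultdict

def hash_join(outer_rows, inner_rows, outer_key, inner_key):
    table = defaultdict(list)
    for row in inner_rows:              # inner is scanned first
        table[row[inner_key]].append(row)
    result = []
    for row in outer_rows:              # outer drives the lookups
        for match in table.get(row[outer_key], []):
            result.append({**row, **match})
    return result
```

The nested-loop and mergejoin strategies differ only in how the inner
lookup happens: rescanning the inner path once per outer tuple, or
advancing two sorted streams in step.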

A Path for a join relation is actually a tree structure, with the topmost
Path node representing the last-applied join method.  It has left and right
subpaths that represent the scan or join methods used for the two input
relations.


Join Tree Construction
----------------------

The optimizer generates optimal query plans by doing a more-or-less
exhaustive search through the ways of executing the query.  The best Path
tree is found by a recursive process:

1) Take each base relation in the query, and make a RelOptInfo structure
for it.  Find each potentially useful way of accessing the relation,
including sequential and index scans, and make Paths representing those
ways.  All the Paths made for a given relation are placed in its
RelOptInfo.pathlist.  (Actually, we discard Paths that are obviously
inferior alternatives before they ever get into the pathlist --- what
ends up in the pathlist is the cheapest way of generating each potentially
useful sort ordering and parameterization of the relation.)  Also create a
RelOptInfo.joininfo list including all the join clauses that involve this
relation.  For example, the WHERE clause "tab1.col1 = tab2.col1" generates
entries in both tab1 and tab2's joininfo lists.

If we have only a single base relation in the query, we are done.
Otherwise we have to figure out how to join the base relations into a
single join relation.

2) Normally, any explicit JOIN clauses are "flattened" so that we just
have a list of relations to join.  However, FULL OUTER JOIN clauses are
never flattened, and other kinds of JOIN might not be either, if the
flattening process is stopped by join_collapse_limit or from_collapse_limit
restrictions.  Therefore, we end up with a planning problem that contains
lists of relations to be joined in any order, where any individual item
might be a sub-list that has to be joined together before we can consider
joining it to its siblings.  We process these sub-problems recursively,
bottom up.  Note that the join list structure constrains the possible join
orders, but it doesn't constrain the join implementation method at each
join (nestloop, merge, hash), nor does it say which rel is considered outer
or inner at each join.  We consider all these possibilities in building
Paths.  We generate a Path for each feasible join method, and select the
cheapest Path.

For each planning problem, therefore, we will have a list of relations
that are either base rels or joinrels constructed per sub-join-lists.
We can join these rels together in any order the planner sees fit.
The standard (non-GEQO) planner does this as follows:

Consider joining each RelOptInfo to each other RelOptInfo for which there
is a usable joinclause, and generate a Path for each possible join method
for each such pair.  (If we have a RelOptInfo with no join clauses, we have
no choice but to generate a clauseless Cartesian-product join; so we
consider joining that rel to each other available rel.  But in the presence
of join clauses we will only consider joins that use available join
clauses.  Note that join-order restrictions induced by outer joins and
IN/EXISTS clauses are also checked, to ensure that we find a workable join
order in cases where those restrictions force a clauseless join to be done.)

If we have only two relations in the list, we are done: we just pick
the cheapest path for the join RelOptInfo.  If we have more than two, we now
need to consider ways of joining join RelOptInfos to each other to make
join RelOptInfos that represent more than two list items.

The join tree is constructed using a "dynamic programming" algorithm:
in the first pass (already described) we consider ways to create join rels
representing exactly two list items.  The second pass considers ways
to make join rels that represent exactly three list items; the next pass,
four items, etc.  The last pass considers how to make the final join
relation that includes all list items --- obviously there can be only one
join rel at this top level, whereas there can be more than one join rel
at lower levels.  At each level we use joins that follow available join
clauses, if possible, just as described for the first level.

For example:

    SELECT  *
    FROM    tab1, tab2, tab3, tab4
    WHERE   tab1.col = tab2.col AND
        tab2.col = tab3.col AND
        tab3.col = tab4.col

    Tables 1, 2, 3, and 4 are joined as:
    {1 2},{2 3},{3 4}
    {1 2 3},{2 3 4}
    {1 2 3 4}
    (other possibilities will be excluded for lack of join clauses)

    SELECT  *
    FROM    tab1, tab2, tab3, tab4
    WHERE   tab1.col = tab2.col AND
        tab1.col = tab3.col AND
        tab1.col = tab4.col

    Tables 1, 2, 3, and 4 are joined as:
    {1 2},{1 3},{1 4}
    {1 2 3},{1 3 4},{1 2 4}
    {1 2 3 4}

We consider left-handed plans (the outer rel of an upper join is a joinrel,
but the inner is always a single list item); right-handed plans (outer rel
is always a single item); and bushy plans (both inner and outer can be
joins themselves).  For example, when building {1 2 3 4} we consider
joining {1 2 3} to {4} (left-handed), {4} to {1 2 3} (right-handed), and
{1 2} to {3 4} (bushy), among other choices.  Although the jointree
scanning code produces these potential join combinations one at a time,
all the ways to produce the same set of joined base rels will share the
same RelOptInfo, so the paths produced from different join combinations
that produce equivalent joinrels will compete in add_path().
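
The level-by-level construction can be sketched as follows (Python; a
simplified illustration that tracks only which sets of base rels are
reachable via join clauses, ignoring cost, the clauseless-join fallback,
and join-order restrictions):

```python
def join_search(base_rels, join_edges):
    """Level k holds every joinable set of exactly k base rels; each such
    set corresponds to one RelOptInfo.  join_edges holds frozenset pairs
    {a, b} for which a join clause exists."""
    levels = {1: {frozenset([r]) for r in base_rels}}
    for k in range(2, len(base_rels) + 1):
        levels[k] = set()
        # i == 1 yields left/right-handed joins, i > 1 yields bushy joins
        for i in range(1, k // 2 + 1):
            for left in levels[i]:
                for right in levels[k - i]:
                    if left & right:
                        continue        # the two inputs must be disjoint
                    if any(frozenset([a, b]) in join_edges
                           for a in left for b in right):
                        levels[k].add(left | right)
    return levels
```

Run on the first example above (rels 1..4 with clauses 1-2, 2-3, 3-4),
this reproduces the level sets shown there.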

The dynamic-programming approach has an important property that's not
immediately obvious: we will finish constructing all paths for a given
relation before we construct any paths for relations containing that rel.
This means that we can reliably identify the "cheapest path" for each rel
before higher-level relations need to know that.  Also, we can safely
discard a path when we find that another path for the same rel is better,
without worrying that maybe there is already a reference to that path in
some higher-level join path.  Without this, memory management for paths
would be much more complicated.

Once we have built the final join rel, we use either the cheapest path
for it or the cheapest path with the desired ordering (if that's cheaper
than applying a sort to the cheapest other path).

If the query contains one-sided outer joins (LEFT or RIGHT joins), or
IN or EXISTS WHERE clauses that were converted to semijoins or antijoins,
then some of the possible join orders may be illegal.  These are excluded
by having join_is_legal consult a side list of such "special" joins to see
whether a proposed join is illegal.  (The same consultation allows it to
see which join style should be applied for a valid join, ie, JOIN_INNER,
JOIN_LEFT, etc.)


Valid OUTER JOIN Optimizations
------------------------------

The planner's treatment of outer join reordering is based on the following
identities:

1.	(A leftjoin B on (Pab)) innerjoin C on (Pac)
	= (A innerjoin C on (Pac)) leftjoin B on (Pab)

where Pac is a predicate referencing A and C, etc (in this case, clearly
Pac cannot reference B, or the transformation is nonsensical).

2.	(A leftjoin B on (Pab)) leftjoin C on (Pac)
	= (A leftjoin C on (Pac)) leftjoin B on (Pab)

3.	(A leftjoin B on (Pab)) leftjoin C on (Pbc)
	= A leftjoin (B leftjoin C on (Pbc)) on (Pab)

Identity 3 only holds if predicate Pbc must fail for all-null B rows
(that is, Pbc is strict for at least one column of B).  If Pbc is not
strict, the first form might produce some rows with nonnull C columns
where the second form would make those entries null.
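
The strictness requirement can be seen concretely with a toy left-join
evaluator (Python sketch over lists of dicts; the predicate below is
deliberately non-strict, i.e. it accepts all-null B rows):

```python
def left_join(left, right, pred):
    """Toy LEFT JOIN: unmatched left rows are null-extended on the right."""
    rcols = set().union(*(r.keys() for r in right)) if right else set()
    out = []
    for l in left:
        matches = [r for r in right if pred(l, r)]
        if matches:
            out.extend({**l, **r} for r in matches)
        else:
            out.append({**l, **{c: None for c in rcols}})
    return out

A, B, C = [{'a': 1}], [], [{'c': 7}]
pab = pbc = lambda l, r: True       # Pbc non-strict: true for null B rows

form1 = left_join(left_join(A, B, pab), C, pbc)  # (A lj B on Pab) lj C on Pbc
form2 = left_join(A, left_join(B, C, pbc), pab)  # A lj (B lj C on Pbc) on Pab
# form1 pairs the null-extended A row with C, while form2 finds no inner
# row at all, so the two associations disagree when Pbc is not strict.
```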

RIGHT JOIN is equivalent to LEFT JOIN after switching the two input
tables, so the same identities work for right joins.

An example of a case that does *not* work is moving an innerjoin into or
out of the nullable side of an outer join:

	A leftjoin (B join C on (Pbc)) on (Pab)
	!= (A leftjoin B on (Pab)) join C on (Pbc)

SEMI joins work a little bit differently.  A semijoin can be reassociated
into or out of the lefthand side of another semijoin, left join, or
antijoin, but not into or out of the righthand side.  Likewise, an inner
join, left join, or antijoin can be reassociated into or out of the
lefthand side of a semijoin, but not into or out of the righthand side.

ANTI joins work approximately like LEFT joins, except that identity 3
fails if the join to C is an antijoin (even if Pbc is strict, and in
both the cases where the other join is a leftjoin and where it is an
antijoin).  So we can't reorder antijoins into or out of the RHS of a
leftjoin or antijoin, even if the relevant clause is strict.

The current code does not attempt to re-order FULL JOINs at all.
FULL JOIN ordering is enforced by not collapsing FULL JOIN nodes when
translating the jointree to "joinlist" representation.  Other types of
JOIN nodes are normally collapsed so that they participate fully in the
join order search.  To avoid generating illegal join orders, the planner
creates a SpecialJoinInfo node for each non-inner join, and join_is_legal
checks this list to decide if a proposed join is legal.

What we store in SpecialJoinInfo nodes are the minimum sets of Relids
required on each side of the join to form the outer join.  Note that
these are minimums; there's no explicit maximum, since joining other
rels to the OJ's syntactic rels may be legal.  Per identities 1 and 2,
non-FULL joins can be freely associated into the lefthand side of an
OJ, but in some cases they can't be associated into the righthand side.
So the restriction enforced by join_is_legal is that a proposed join
can't join a rel within or partly within an RHS boundary to one outside
the boundary, unless the proposed join is a LEFT join that can associate
into the SpecialJoinInfo's RHS using identity 3.
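
The boundary rule can be sketched like this (Python; a deliberately
simplified model keeping only the min LHS/RHS sets, and omitting the
identity-3 exception just mentioned as well as the other details the real
join_is_legal handles):

```python
def join_is_legal(left_rels, right_rels, special_joins):
    """left_rels/right_rels are sets of base-rel ids for a proposed join;
    each entry of special_joins carries min_lefthand and min_righthand."""
    joined = left_rels | right_rels
    for sj in special_joins:
        lhs, rhs = sj['min_lefthand'], sj['min_righthand']
        # Forming the special join itself is fine: one input supplies the
        # whole minimum RHS, the other the whole minimum LHS.
        if (rhs <= right_rels and lhs <= left_rels) or \
           (rhs <= left_rels and lhs <= right_rels):
            continue
        # Otherwise, a join touching the RHS must stay entirely inside it:
        # rels within (or partly within) the boundary can't be combined
        # with rels outside it.
        if rhs & joined and not joined <= rhs:
            return False
    return True
```

With a SpecialJoinInfo of min LHS {A} and min RHS {B, C}, joining A to B
alone is rejected, joining B to C is allowed, and joining A to the
completed {B, C} is allowed.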

The use of minimum Relid sets has some pitfalls; consider a query like
	A leftjoin (B leftjoin (C innerjoin D) on (Pbcd)) on Pa
where Pa doesn't mention B/C/D at all.  In this case a naive computation
would give the upper leftjoin's min LHS as {A} and min RHS as {C,D} (since
we know that the innerjoin can't associate out of the leftjoin's RHS, and
enforce that by including its relids in the leftjoin's min RHS).  And the
lower leftjoin has min LHS of {B} and min RHS of {C,D}.  Given such
information, join_is_legal would think it's okay to associate the upper
join into the lower join's RHS, transforming the query to
	B leftjoin (A leftjoin (C innerjoin D) on Pa) on (Pbcd)
which yields totally wrong answers.  We prevent that by forcing the min RHS
for the upper join to include B.  This is perhaps overly restrictive, but
such cases don't arise often so it's not clear that it's worth developing a
more complicated system.


Pulling Up Subqueries
---------------------

As we described above, a subquery appearing in the range table is planned
independently and treated as a "black box" during planning of the outer
query.  This is necessary when the subquery uses features such as
aggregates, GROUP, or DISTINCT.  But if the subquery is just a simple
scan or join, treating the subquery as a black box may produce a poor plan
compared to considering it as part of the entire plan search space.
Therefore, at the start of the planning process the planner looks for
simple subqueries and pulls them up into the main query's jointree.

Pulling up a subquery may result in FROM-list joins appearing below the top
of the join tree.  Each FROM-list is planned using the dynamic-programming
search method described above.

If pulling up a subquery produces a FROM-list as a direct child of another
FROM-list, then we can merge the two FROM-lists together.  Once that's
done, the subquery is an absolutely integral part of the outer query and
will not constrain the join tree search space at all.  However, that could
result in unpleasant growth of planning time, since the dynamic-programming
search has runtime exponential in the number of FROM-items considered.
Therefore, we don't merge FROM-lists if the result would have too many
FROM-items in one list.


Optimizer Functions
-------------------

The primary entry point is planner().

planner()
set up for recursive handling of subqueries
-subquery_planner()
 pull up sublinks and subqueries from rangetable, if possible
 canonicalize qual
     Attempt to simplify WHERE clause to the most useful form; this includes
     flattening nested AND/ORs and detecting clauses that are duplicated in
     different branches of an OR.
 simplify constant expressions
 process sublinks
 convert Vars of outer query levels into Params
--grouping_planner()
  preprocess target list for non-SELECT queries
  handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,
	ORDER BY, DISTINCT, LIMIT
---query_planner()
   make list of base relations used in query
   split up the qual into restrictions (a=1) and joins (b=c)
   find qual clauses that enable merge and hash joins
----make_one_rel()
     set_base_rel_pathlists()
      find seqscan and all index paths for each base relation
      find selectivity of columns used in joins
     make_rel_from_joinlist()
      hand off join subproblems to a plugin, GEQO, or standard_join_search()
------standard_join_search()
      call join_search_one_level() for each level of join tree needed
      join_search_one_level():
        For each joinrel of the prior level, do make_rels_by_clause_joins()
        if it has join clauses, or make_rels_by_clauseless_joins() if not.
        Also generate "bushy plan" joins between joinrels of lower levels.
      Back at standard_join_search(), generate gather paths if needed for
      each newly constructed joinrel, then apply set_cheapest() to extract
      the cheapest path for it.
      Loop back if this wasn't the top join level.
  Back at grouping_planner:
  do grouping (GROUP BY) and aggregation
  do window functions
  make unique (DISTINCT)
  do sorting (ORDER BY)
  do limit (LIMIT/OFFSET)
Back at planner():
convert finished Path tree into a Plan tree
do final cleanup after planning


Optimizer Data Structures
-------------------------

PlannerGlobal   - global information for a single planner invocation

PlannerInfo     - information for planning a particular Query (we make
                  a separate PlannerInfo node for each sub-Query)

RelOptInfo      - a relation or joined relations

 RestrictInfo   - WHERE clauses, like "x = 3" or "y = z"
                  (note the same structure is used for restriction and
                   join clauses)

 Path           - every way to generate a RelOptInfo (sequential, index, joins)
  A plain Path node can represent several simple plans, per its pathtype:
    T_SeqScan   - sequential scan
    T_SampleScan - tablesample scan
    T_FunctionScan - function-in-FROM scan
    T_TableFuncScan - table function scan
    T_ValuesScan - VALUES scan
    T_CteScan   - CTE (WITH) scan
    T_NamedTuplestoreScan - ENR scan
    T_WorkTableScan - scan worktable of a recursive CTE
    T_Result    - childless Result plan node (used for FROM-less SELECT)
  IndexPath     - index scan
  BitmapHeapPath - top of a bitmapped index scan
  TidPath       - scan by CTID
  SubqueryScanPath - scan a subquery-in-FROM
  ForeignPath   - scan a foreign table, foreign join or foreign upper-relation
  CustomPath    - for custom scan providers
  AppendPath    - append multiple subpaths together
  MergeAppendPath - merge multiple subpaths, preserving their common sort order
  GroupResultPath - childless Result plan node (used for degenerate grouping)
  MaterialPath  - a Material plan node
  UniquePath    - remove duplicate rows (either by hashing or sorting)
  GatherPath    - collect the results of parallel workers
  GatherMergePath - collect parallel results, preserving their common sort order
  ProjectionPath - a Result plan node with child (used for projection)
  ProjectSetPath - a ProjectSet plan node applied to some sub-path
  SortPath      - a Sort plan node applied to some sub-path
  IncrementalSortPath - an IncrementalSort plan node applied to some sub-path
  GroupPath     - a Group plan node applied to some sub-path
  UpperUniquePath - a Unique plan node applied to some sub-path
  AggPath       - an Agg plan node applied to some sub-path
  GroupingSetsPath - an Agg plan node used to implement GROUPING SETS
  MinMaxAggPath - a Result plan node with subplans performing MIN/MAX
  WindowAggPath - a WindowAgg plan node applied to some sub-path
  SetOpPath     - a SetOp plan node applied to some sub-path
  RecursiveUnionPath - a RecursiveUnion plan node applied to two sub-paths
  LockRowsPath  - a LockRows plan node applied to some sub-path
  ModifyTablePath - a ModifyTable plan node applied to some sub-path(s)
  LimitPath     - a Limit plan node applied to some sub-path
  NestPath      - nested-loop joins
  MergePath     - merge joins
  HashPath      - hash joins

 EquivalenceClass - a data structure representing a set of values known equal

 PathKey        - a data structure representing the sort ordering of a path

The optimizer spends a good deal of its time worrying about the ordering
of the tuples returned by a path.  The reason this is useful is that by
knowing the sort ordering of a path, we may be able to use that path as
the left or right input of a mergejoin and avoid an explicit sort step.
Nestloops and hash joins don't really care what the order of their inputs
is, but mergejoin needs suitably ordered inputs.  Therefore, all paths
generated during the optimization process are marked with their sort order
(to the extent that it is known) for possible use by a higher-level merge.

It is also possible to avoid an explicit sort step to implement a user's
ORDER BY clause if the final path has the right ordering already, so the
sort ordering is of interest even at the top level.  grouping_planner() will
look for the cheapest path with a sort order matching the desired order,
then compare its cost to the cost of using the cheapest-overall path and
doing an explicit sort on that.

When we are generating paths for a particular RelOptInfo, we discard a path
if it is more expensive than another known path that has the same or better
sort order.  We will never discard a path that is the only known way to
achieve a given sort order (without an explicit sort, that is).  In this
way, the next level up will have the maximum freedom to build mergejoins
without sorting, since it can pick from any of the paths retained for its
inputs.
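
That retention rule can be sketched as follows (Python; a simplified model
of add_path() that weighs only total cost and sort order, whereas the real
code also considers startup cost, parameterization, and parallel safety):

```python
def ordering_dominates(keys_a, keys_b):
    # keys_a is "the same or better" sorted than keys_b if keys_b is a
    # prefix of keys_a
    return keys_a[:len(keys_b)] == keys_b

def add_path(pathlist, new_path):
    """Keep new_path unless some existing path is at least as cheap and
    at least as well sorted; symmetrically, drop old paths that the new
    one renders useless."""
    for old in pathlist:
        if (old['cost'] <= new_path['cost'] and
                ordering_dominates(old['pathkeys'], new_path['pathkeys'])):
            return pathlist             # new path is dominated: discard it
    return [old for old in pathlist
            if not (new_path['cost'] <= old['cost'] and
                    ordering_dominates(new_path['pathkeys'],
                                       old['pathkeys']))] + [new_path]
```

A pricier path survives when it is the only way to obtain its sort order,
exactly as described above.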


EquivalenceClasses
------------------

During the deconstruct_jointree() scan of the query's qual clauses, we look
for mergejoinable equality clauses A = B whose applicability is not delayed
by an outer join; these are called "equivalence clauses".  When we find
one, we create an EquivalenceClass containing the expressions A and B to
record this knowledge.  If we later find another equivalence clause B = C,
we add C to the existing EquivalenceClass for {A B}; this may require
merging two existing EquivalenceClasses.  At the end of the scan, we have
sets of values that are known all transitively equal to each other.  We can
therefore use a comparison of any pair of the values as a restriction or
join clause (when these values are available at the scan or join, of
course); furthermore, we need test only one such comparison, not all of
them.  Therefore, equivalence clauses are removed from the standard qual
distribution process.  Instead, when preparing a restriction or join clause
list, we examine each EquivalenceClass to see if it can contribute a
clause, and if so we select an appropriate pair of values to compare.  For
example, if we are trying to join A's relation to C's, we can generate the
clause A = C, even though this appeared nowhere explicitly in the original
query.  This may allow us to explore join paths that otherwise would have
been rejected as requiring Cartesian-product joins.
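
The merging process can be sketched briefly (Python; equivalence classes
modeled as plain sets of expression names, ignoring opfamilies, outer-join
delays, and constants):

```python
def build_equivalence_classes(clauses):
    """Each clause (a, b) asserts a = b; classes that a new clause
    bridges are merged into one."""
    classes = []
    for a, b in clauses:
        touching = [c for c in classes if a in c or b in c]
        merged = {a, b}.union(*touching)
        classes = [c for c in classes if c not in touching] + [merged]
    return classes
```

From A = B and B = C this yields the single class {A, B, C}, from which
the planner may emit the never-written clause A = C when joining those
relations.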

Sometimes an EquivalenceClass may contain a pseudo-constant expression
(i.e., one not containing Vars or Aggs of the current query level, nor
volatile functions).  In this case we do not follow the policy of
dynamically generating join clauses: instead, we dynamically generate
restriction clauses "var = const" wherever one of the variable members of
the class can first be computed.  For example, if we have A = B and B = 42,
we effectively generate the restriction clauses A = 42 and B = 42, and then
we need not bother with explicitly testing the join clause A = B when the
relations are joined.  In effect, all the class members can be tested at
relation-scan level and there's never a need for join tests.

The precise technical interpretation of an EquivalenceClass is that it
asserts that at any plan node where more than one of its member values
can be computed, output rows in which the values are not all equal may
be discarded without affecting the query result.  (We require all levels
of the plan to enforce EquivalenceClasses, hence a join need not recheck
equality of values that were computable by one of its children.)  For an
ordinary EquivalenceClass that is "valid everywhere", we can further infer
that the values are all non-null, because all mergejoinable operators are
strict.  However, we also allow equivalence clauses that appear below the
nullable side of an outer join to form EquivalenceClasses; for these
classes, the interpretation is that either all the values are equal, or
all (except pseudo-constants) have gone to null.  (This requires a
limitation that non-constant members be strict, else they might not go
to null when the other members do.)  Consider for example

	SELECT *
	  FROM a LEFT JOIN
	       (SELECT * FROM b JOIN c ON b.y = c.z WHERE b.y = 10) ss
	       ON a.x = ss.y
	  WHERE a.x = 42;

We can form the below-outer-join EquivalenceClass {b.y c.z 10} and thereby
apply c.z = 10 while scanning c.  (The reason we disallow outerjoin-delayed
clauses from forming EquivalenceClasses is exactly that we want to be able
to push any derived clauses as far down as possible.)  But once above the
outer join it's no longer necessarily the case that b.y = 10, and thus we
cannot use such EquivalenceClasses to conclude that sorting is unnecessary
(see discussion of PathKeys below).

In this example, notice also that a.x = ss.y (really a.x = b.y) is not an
equivalence clause because its applicability to b is delayed by the outer
join; thus we do not try to insert b.y into the equivalence class {a.x 42}.
But since we see that a.x has been equated to 42 above the outer join, we
are able to form a below-outer-join class {b.y 42}; this restriction can be
added because no b/c row not having b.y = 42 can contribute to the result
of the outer join, and so we need not compute such rows.  Now this class
will get merged with {b.y c.z 10}, leading to the contradiction 10 = 42,
which lets the planner deduce that the b/c join need not be computed at all
because none of its rows can contribute to the outer join.  (This gets
implemented as a gating Result filter, since more usually the potential
contradiction involves Param values rather than just Consts, and thus has
to be checked at runtime.)

To aid in determining the sort ordering(s) that can work with a mergejoin,
we mark each mergejoinable clause with the EquivalenceClasses of its left
and right inputs.  For an equivalence clause, these are of course the same
EquivalenceClass.  For a non-equivalence mergejoinable clause (such as an
outer-join qualification), we generate two separate EquivalenceClasses for
the left and right inputs.  This may result in creating single-item
equivalence "classes", though of course these are still subject to merging
if other equivalence clauses are later found to bear on the same
expressions.

Another way that we may form a single-item EquivalenceClass is in creation
of a PathKey to represent a desired sort order (see below).  This is a bit
different from the above cases because such an EquivalenceClass might
contain an aggregate function or volatile expression.  (A clause containing
a volatile function will never be considered mergejoinable, even if its top
operator is mergejoinable, so there is no way for a volatile expression to
get into EquivalenceClasses otherwise.  Aggregates are disallowed in WHERE
altogether, so will never be found in a mergejoinable clause.)  This is just
a convenience to maintain a uniform PathKey representation: such an
EquivalenceClass will never be merged with any other.  Note in particular
that a single-item EquivalenceClass {a.x} is *not* meant to imply an
assertion that a.x = a.x; the practical effect of this is that a.x could
be NULL.

An EquivalenceClass also contains a list of btree opfamily OIDs, which
determines what the equalities it represents actually "mean".  All the
equivalence clauses that contribute to an EquivalenceClass must have
equality operators that belong to the same set of opfamilies.  (Note: most
of the time, a particular equality operator belongs to only one family, but
it's possible that it belongs to more than one.  We keep track of all the
families to ensure that we can make use of an index belonging to any one of
the families for mergejoin purposes.)

An EquivalenceClass can contain "em_is_child" members, which are copies
of members that contain appendrel parent relation Vars, transposed to
contain the equivalent child-relation variables or expressions.  These
members are *not* full-fledged members of the EquivalenceClass and do not
affect the class's overall properties at all.  They are kept only to
simplify matching of child-relation expressions to EquivalenceClasses.
Most operations on EquivalenceClasses should ignore child members.


PathKeys
--------

The PathKeys data structure represents what is known about the sort order
of the tuples generated by a particular Path.  A path's pathkeys field is a
list of PathKey nodes, where the n'th item represents the n'th sort key of
the result.  Each PathKey contains these fields:

	* a reference to an EquivalenceClass
	* a btree opfamily OID (must match one of those in the EC)
	* a sort direction (ascending or descending)
	* a nulls-first-or-last flag

The EquivalenceClass represents the value being sorted on.  Since the
various members of an EquivalenceClass are known equal according to the
opfamily, we can consider a path sorted by any one of them to be sorted by
any other too; this is what justifies referencing the whole
EquivalenceClass rather than just one member of it.
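
A minimal model of these fields and of the prefix test built on them
(Python sketch; the opfamily field is omitted, and the real code compares
canonical PathKey pointers rather than structural equality):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PathKey:
    eclass: frozenset           # member expressions of the EquivalenceClass
    descending: bool = False
    nulls_first: bool = False

def pathkeys_contained_in(wanted, given):
    """A path sorted per 'given' satisfies the desired ordering 'wanted'
    iff wanted is a prefix of given."""
    return given[:len(wanted)] == wanted

# Because a PathKey references a whole EquivalenceClass, a path sorted
# by A.X also counts as sorted by B.Y once {A.X, B.Y} are known equal:
xy = PathKey(eclass=frozenset({'A.X', 'B.Y'}))
```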
572
573In single/base relation RelOptInfo's, the Paths represent various ways
574of scanning the relation and the resulting ordering of the tuples.
575Sequential scan Paths have NIL pathkeys, indicating no known ordering.
576Index scans have Path.pathkeys that represent the chosen index's ordering,
577if any.  A single-key index would create a single-PathKey list, while a
578multi-column index generates a list with one element per key index column.
579Non-key columns specified in the INCLUDE clause of covering indexes don't
580have corresponding PathKeys in the list, because the have no influence on
581index ordering.  (Actually, since an index can be scanned either forward or
582backward, there are two possible sort orders and two possible PathKey lists
583it can generate.)

Note that a bitmap scan has NIL pathkeys since we can say nothing about
the overall order of its result.  Also, an indexscan on an unordered type
of index generates NIL pathkeys.  However, we can always create a pathkey
by doing an explicit sort.  The pathkeys for a Sort plan's output just
represent the sort key fields and the ordering operators used.

Things get more interesting when we consider joins.  Suppose we do a
mergejoin between A and B using the mergeclause A.X = B.Y.  The output
of the mergejoin is sorted by X --- but it is also sorted by Y.  Again,
this can be represented by a PathKey referencing an EquivalenceClass
containing both X and Y.

With a little further thought, it becomes apparent that nestloop joins
can also produce sorted output.  For example, if we do a nestloop join
between outer relation A and inner relation B, then any pathkeys relevant
to A are still valid for the join result: we have not altered the order of
the tuples from A.  Even more interesting, if there was an equivalence clause
A.X=B.Y, and A.X was a pathkey for the outer relation A, then we can assert
that B.Y is a pathkey for the join result; X was ordered before and still
is, and the joined values of Y are equal to the joined values of X, so Y
must now be ordered too.  This is true even though we used neither an
explicit sort nor a mergejoin on Y.  (Note: hash joins cannot be counted
on to preserve the order of their outer relation, because the executor
might decide to "batch" the join, so we always set pathkeys to NIL for
a hashjoin path.)  Exception: a RIGHT or FULL join doesn't preserve the
ordering of its outer relation, because it might insert nulls at random
points in the ordering.

In general, we can justify using EquivalenceClasses as the basis for
pathkeys because, whenever we scan a relation containing multiple
EquivalenceClass members or join two relations each containing
EquivalenceClass members, we apply restriction or join clauses derived from
the EquivalenceClass.  This guarantees that any two values listed in the
EquivalenceClass are in fact equal in all tuples emitted by the scan or
join, and therefore that if the tuples are sorted by one of the values,
they can be considered sorted by any other as well.  It does not matter
whether the test clause is used as a mergeclause, or merely enforced
after-the-fact as a qpqual filter.

Note that there is no particular difficulty in labeling a path's sort
order with a PathKey referencing an EquivalenceClass that contains
variables not yet joined into the path's output.  We can simply ignore
such entries as not being relevant (yet).  This makes it possible to
use the same EquivalenceClasses throughout the join planning process.
In fact, by being careful not to generate multiple identical PathKey
objects, we can reduce comparison of EquivalenceClasses and PathKeys
to simple pointer comparison, which is a huge savings because add_path
has to make a large number of PathKey comparisons in deciding whether
competing Paths are equivalently sorted.

Pathkeys are also useful to represent an ordering that we wish to achieve,
since they are easily compared to the pathkeys of a potential candidate
path.  So, SortGroupClause lists are turned into pathkeys lists for use
inside the optimizer.

An additional refinement we can make is to insist that canonical pathkey
lists (sort orderings) do not mention the same EquivalenceClass more than
once.  For example, in all these cases the second sort column is redundant,
because it cannot distinguish values that are the same according to the
first sort column:
	SELECT ... ORDER BY x, x
	SELECT ... ORDER BY x, x DESC
	SELECT ... WHERE x = y ORDER BY x, y
Although a user probably wouldn't write "ORDER BY x,x" directly, such
redundancies are more probable once equivalence classes have been
considered.  Also, the system may generate redundant pathkey lists when
computing the sort ordering needed for a mergejoin.  By eliminating the
redundancy, we save time and improve planning, since the planner will more
easily recognize equivalent orderings as being equivalent.

Another interesting property is that if the underlying EquivalenceClass
contains a constant and is not below an outer join, then the pathkey is
completely redundant and need not be sorted by at all!  Every row must
contain the same constant value, so there's no need to sort.  (If the EC is
below an outer join, we still have to sort, since some of the rows might
have gone to null and others not.  In this case we must be careful to pick
a non-const member to sort by.  The assumption that all the non-const
members go to null at the same plan level is critical here, else they might
not produce the same sort order.)  This might seem pointless because users
are unlikely to write "... WHERE x = 42 ORDER BY x", but it allows us to
recognize when particular index columns are irrelevant to the sort order:
if we have "... WHERE x = 42 ORDER BY y", scanning an index on (x,y)
produces correctly ordered data without a sort step.  We used to have very
ugly ad-hoc code to recognize that in limited contexts, but discarding
constant ECs from pathkeys makes it happen cleanly and automatically.
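
As an illustrative sketch (not the real canonicalization code in
pathkeys.c), building a pathkey list can drop both kinds of redundancy
described above, a repeated EquivalenceClass and an EC containing a
constant, in one pass:

```python
def truncate_useless_sortkeys(sort_items):
    """Illustrative sketch: each item is (eclass_id, ec_has_const).
    Drop keys whose EC already appeared or whose EC contains a constant."""
    result, seen = [], set()
    for eclass_id, ec_has_const in sort_items:
        if ec_has_const:        # every row has the same value: no sort needed
            continue
        if eclass_id in seen:   # same EC twice: it can't distinguish rows
            continue
        seen.add(eclass_id)
        result.append(eclass_id)
    return result

# "WHERE x = 42 ORDER BY x, y": the EC for x contains a constant,
# so only y survives as a sort key.
assert truncate_useless_sortkeys([("x", True), ("y", False)]) == ["y"]
# "ORDER BY x, x DESC": the second reference to x's EC is redundant.
assert truncate_useless_sortkeys([("x", False), ("x", False)]) == ["x"]
```
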

You might object that a below-outer-join EquivalenceClass doesn't always
represent the same values at every level of the join tree, and so using
it to uniquely identify a sort order is dubious.  This is true, but we
can avoid dealing with the fact explicitly because we always consider that
an outer join destroys any ordering of its nullable inputs.  Thus, even
if a path was sorted by {a.x} below an outer join, we'll re-sort if that
sort ordering was important; and so using the same PathKey for both sort
orderings doesn't create any real problem.


Order of processing for EquivalenceClasses and PathKeys
-------------------------------------------------------

As alluded to above, there is a specific sequence of phases in the
processing of EquivalenceClasses and PathKeys during planning.  During the
initial scanning of the query's quals (deconstruct_jointree followed by
reconsider_outer_join_clauses), we construct EquivalenceClasses based on
mergejoinable clauses found in the quals.  At the end of this process,
we know all we can know about equivalence of different variables, so
subsequently there will be no further merging of EquivalenceClasses.
At that point it is possible to consider the EquivalenceClasses as
"canonical" and build canonical PathKeys that reference them.  At this
time we construct PathKeys for the query's ORDER BY and related clauses.
(Any ordering expressions that do not appear elsewhere will result in
the creation of new EquivalenceClasses, but this cannot result in merging
existing classes, so canonical-ness is not lost.)

Because all the EquivalenceClasses are known before we begin path
generation, we can use them as a guide to which indexes are of interest:
if an index's column is not mentioned in any EquivalenceClass then that
index's sort order cannot possibly be helpful for the query.  This allows
short-circuiting of much of the processing of create_index_paths() for
irrelevant indexes.

There are some cases where planner.c constructs additional
EquivalenceClasses and PathKeys after query_planner has completed.
In these cases, the extra ECs/PKs are needed to represent sort orders
that were not considered during query_planner.  Such situations should be
minimized since it is impossible for query_planner to return a plan
producing such a sort order, meaning an explicit sort will always be needed.
Currently this happens only for queries involving multiple window functions
with different orderings, for which extra sorts are needed anyway.


Parameterized Paths
-------------------

The naive way to join two relations using a clause like WHERE A.X = B.Y
is to generate a nestloop plan like this:

	NestLoop
		Filter: A.X = B.Y
		-> Seq Scan on A
		-> Seq Scan on B

We can make this better by using a merge or hash join, but it still
requires scanning all of both input relations.  If A is very small and B is
very large, but there is an index on B.Y, it can be enormously better to do
something like this:

	NestLoop
		-> Seq Scan on A
		-> Index Scan using B_Y_IDX on B
			Index Condition: B.Y = A.X

Here, we are expecting that for each row scanned from A, the nestloop
plan node will pass down the current value of A.X into the scan of B.
That allows the indexscan to treat A.X as a constant for any one
invocation, and thereby use it as an index key.  This is the only plan type
that can avoid fetching all of B, and for small numbers of rows coming from
A, that will dominate every other consideration.  (As A gets larger, this
gets less attractive, and eventually a merge or hash join will win instead.
So we have to cost out all the alternatives to decide what to do.)

It can be useful for the parameter value to be passed down through
intermediate layers of joins, for example:

	NestLoop
		-> Seq Scan on A
		-> Hash Join
			Join Condition: B.Y = C.W
			-> Seq Scan on B
			-> Index Scan using C_Z_IDX on C
				Index Condition: C.Z = A.X

If all joins are plain inner joins then this is usually unnecessary,
because it's possible to reorder the joins so that a parameter is used
immediately below the nestloop node that provides it.  But in the
presence of outer joins, such join reordering may not be possible.

Also, the bottom-level scan might require parameters from more than one
other relation.  In principle we could join the other relations first
so that all the parameters are supplied from a single nestloop level.
But if those other relations have no join clause in common (which is
common in star-schema queries for instance), the planner won't consider
joining them directly to each other.  In such a case we need to be able
to create a plan like

    NestLoop
        -> Seq Scan on SmallTable1 A
        NestLoop
            -> Seq Scan on SmallTable2 B
            -> Index Scan using XYIndex on LargeTable C
                 Index Condition: C.X = A.AID and C.Y = B.BID

so we should be willing to pass down A.AID through a join even though
there is no join order constraint forcing the plan to look like this.

Before version 9.2, Postgres used ad-hoc methods for planning and
executing nestloop queries of this kind, and those methods could not
handle passing parameters down through multiple join levels.

To plan such queries, we now use a notion of a "parameterized path",
which is a path that makes use of a join clause to a relation that's not
scanned by the path.  In the example two above, we would construct a
path representing the possibility of doing this:

	-> Index Scan using C_Z_IDX on C
		Index Condition: C.Z = A.X

This path will be marked as being parameterized by relation A.  (Note that
this is only one of the possible access paths for C; we'd still have a
plain unparameterized seqscan, and perhaps other possibilities.)  The
parameterization marker does not prevent joining the path to B, so one of
the paths generated for the joinrel {B C} will represent

	Hash Join
		Join Condition: B.Y = C.W
		-> Seq Scan on B
		-> Index Scan using C_Z_IDX on C
			Index Condition: C.Z = A.X

This path is still marked as being parameterized by A.  When we attempt to
join {B C} to A to form the complete join tree, such a path can only be
used as the inner side of a nestloop join: it will be ignored for other
possible join types.  So we will form a join path representing the query
plan shown above, and it will compete in the usual way with paths built
from non-parameterized scans.

While all ordinary paths for a particular relation generate the same set
of rows (since they must all apply the same set of restriction clauses),
parameterized paths typically generate fewer rows than less-parameterized
paths, since they have additional clauses to work with.  This means we
must consider the number of rows generated as an additional figure of
merit.  A path that costs more than another, but generates fewer rows,
must be kept since the smaller number of rows might save work at some
intermediate join level.  (It would not save anything if joined
immediately to the source of the parameters.)

To keep cost estimation rules relatively simple, we make an implementation
restriction that all paths for a given relation of the same parameterization
(i.e., the same set of outer relations supplying parameters) must have the
same rowcount estimate.  This is justified by insisting that each such path
apply *all* join clauses that are available with the named outer relations.
Different paths might, for instance, choose different join clauses to use
as index clauses; but they must then apply any other join clauses available
from the same outer relations as filter conditions, so that the set of rows
returned is held constant.  This restriction doesn't degrade the quality of
the finished plan: it amounts to saying that we should always push down
movable join clauses to the lowest possible evaluation level, which is a
good thing anyway.  The restriction is useful in particular to support
pre-filtering of join paths in add_path_precheck.  Without this rule we
could never reject a parameterized path in advance of computing its rowcount
estimate, which would greatly reduce the value of the pre-filter mechanism.
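
The dominance test this implies can be sketched roughly as follows (a
simplification: the real add_path also weighs pathkeys, startup cost, and
parallel safety):

```python
def dominates(a, b):
    """Illustrative sketch of path comparison with parameterized paths.
    Each path is (total_cost, rows, param_relids).  Path a dominates b only
    if it is no worse on cost AND rowcount AND needs no extra outer rels."""
    cost_a, rows_a, param_a = a
    cost_b, rows_b, param_b = b
    return (cost_a <= cost_b and rows_a <= rows_b
            and param_a <= param_b)     # subset of required outer relations

seqscan  = (1000.0, 10000, frozenset())
idx_on_a = (  50.0,    10, frozenset({"A"}))
# Neither dominates the other: the parameterized indexscan is cheaper and
# returns fewer rows, but it requires A's value; both paths are kept.
assert not dominates(seqscan, idx_on_a)
assert not dominates(idx_on_a, seqscan)
```
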

To limit planning time, we have to avoid generating an unreasonably large
number of parameterized paths.  We do this by only generating parameterized
relation scan paths for index scans, and then only for indexes for which
suitable join clauses are available.  There are also heuristics in join
planning that try to limit the number of parameterized paths considered.

In particular, there's been a deliberate policy decision to favor hash
joins over merge joins for parameterized join steps (those occurring below
a nestloop that provides parameters to the lower join's inputs).  While we
do not ignore merge joins entirely, joinpath.c does not fully explore the
space of potential merge joins with parameterized inputs.  Also, add_path
treats parameterized paths as having no pathkeys, so that they compete
only on cost and rowcount; they don't get preference for producing a
special sort order.  This creates additional bias against merge joins,
since we might discard a path that could have been useful for performing
a merge without an explicit sort step.  Since a parameterized path must
ultimately be used on the inside of a nestloop, where its sort order is
uninteresting, these choices do not affect any requirement for the final
output order of a query --- they only make it harder to use a merge join
at a lower level.  The savings in planning work justifies that.

Similarly, parameterized paths do not normally get preference in add_path
for having cheap startup cost; that's seldom of much value when on the
inside of a nestloop, so it seems not worth keeping extra paths solely for
that.  An exception occurs for parameterized paths for the RHS relation of
a SEMI or ANTI join: in those cases, we can stop the inner scan after the
first match, so it's primarily startup not total cost that we care about.


LATERAL subqueries
------------------

As of 9.3 we support SQL-standard LATERAL references from subqueries in
FROM (and also functions in FROM).  The planner implements these by
generating parameterized paths for any RTE that contains lateral
references.  In such cases, *all* paths for that relation will be
parameterized by at least the set of relations used in its lateral
references.  (And in turn, join relations including such a subquery might
not have any unparameterized paths.)  All the other comments made above for
parameterized paths still apply, though; in particular, each such path is
still expected to enforce any join clauses that can be pushed down to it,
so that all paths of the same parameterization have the same rowcount.

We also allow LATERAL subqueries to be flattened (pulled up into the parent
query) by the optimizer, but only when this does not introduce lateral
references into JOIN/ON quals that would refer to relations outside the
lowest outer join at/above that qual.  The semantics of such a qual would
be unclear.  Note that even with this restriction, pullup of a LATERAL
subquery can result in creating PlaceHolderVars that contain lateral
references to relations outside their syntactic scope.  We still evaluate
such PHVs at their syntactic location or lower, but the presence of such a
PHV in the quals or targetlist of a plan node requires that node to appear
on the inside of a nestloop join relative to the rel(s) supplying the
lateral reference.  (Perhaps now that that stuff works, we could relax the
pullup restriction?)


Security-level constraints on qual clauses
------------------------------------------

To support row-level security and security-barrier views efficiently,
we mark qual clauses (RestrictInfo nodes) with a "security_level" field.
The basic concept is that a qual with a lower security_level must be
evaluated before one with a higher security_level.  This ensures that
"leaky" quals that might expose sensitive data are not evaluated until
after the security barrier quals that are supposed to filter out
security-sensitive rows.  However, many qual conditions are "leakproof",
that is, we trust the functions they use not to expose data.  To avoid
unnecessarily inefficient plans, a leakproof qual is not delayed by
security-level considerations, even if it has a higher syntactic
security_level than another qual.

In a query that contains no use of RLS or security-barrier views, all
quals will have security_level zero, so that none of these restrictions
kick in; we don't even need to check leakproofness of qual conditions.

If there are security-barrier quals, they get security_level zero (and
possibly higher, if there are multiple layers of barriers).  Regular quals
coming from the query text get a security_level one more than the highest
level used for barrier quals.

When new qual clauses are generated by EquivalenceClass processing,
they must be assigned a security_level.  This is trickier than it seems.
One's first instinct is that it would be safe to use the largest level
found among the source quals for the EquivalenceClass, but that isn't
safe at all, because it allows unwanted delays of security-barrier quals.
Consider a barrier qual "t.x = t.y" plus a query qual "t.x = constant",
and suppose there is another query qual "leaky_function(t.z)" that
we mustn't evaluate before the barrier qual has been checked.
We will have an EC {t.x, t.y, constant} which will lead us to replace
the EC quals with "t.x = constant AND t.y = constant".  (We do not want
to give up that behavior, either, since the latter condition could allow
use of an index on t.y, which we would never discover from the original
quals.)  If these generated quals are assigned the same security_level as
the query quals, then it's possible for the leaky_function qual to be
evaluated first, allowing leaky_function to see data from rows that
possibly don't pass the barrier condition.

Instead, our handling of security levels with ECs works like this:
* Quals are not accepted as source clauses for ECs in the first place
unless they are leakproof or have security_level zero.
* EC-derived quals are assigned the minimum (not maximum) security_level
found among the EC's source clauses.
* If the maximum security_level found among the EC's source clauses is
above zero, then the equality operators selected for derived quals must
be leakproof.  When no such operator can be found, the EC is treated as
"broken" and we fall back to emitting its source clauses without any
additional derived quals.

These rules together ensure that an untrusted qual clause (one with
security_level above zero) cannot cause an EC to generate a leaky derived
clause.  This makes it safe to use the minimum not maximum security_level
for derived clauses.  The rules could result in poor plans due to not
being able to generate derived clauses at all, but the risk of that is
small in practice because most btree equality operators are leakproof.
Also, by making exceptions for level-zero quals, we ensure that there is
no plan degradation when no barrier quals are present.
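
The three rules can be sketched like this (purely illustrative; the real
logic lives in equivclass.c, and the function name below is made up):

```python
def derived_qual_security_level(source_quals, have_leakproof_operator):
    """Illustrative sketch.  Each source qual is (security_level, leakproof).
    Returns the security_level to assign to a derived qual, or None if the
    EC must be treated as "broken" (emit only the source clauses)."""
    # Rule 1 is assumed already enforced: every accepted source qual is
    # leakproof or has security_level zero.
    assert all(leakproof or level == 0 for level, leakproof in source_quals)
    max_level = max(level for level, _ in source_quals)
    if max_level > 0 and not have_leakproof_operator:
        return None                 # rule 3: no safe operator, EC is broken
    # Rule 2: use the minimum, so derived quals never delay barrier quals.
    return min(level for level, _ in source_quals)

# A barrier qual at level 0 plus a leakproof query qual at level 1:
assert derived_qual_security_level([(0, False), (1, True)], True) == 0
# Same sources, but no leakproof equality operator is available:
assert derived_qual_security_level([(0, False), (1, True)], False) is None
```
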

Once we have security levels assigned to all clauses, enforcement
of barrier-qual ordering restrictions boils down to two rules:

* Table scan plan nodes must not select quals for early execution
(for example, use them as index qualifiers in an indexscan) unless
they are leakproof or have security_level no higher than any other
qual that is due to be executed at the same plan node.  (Use the
utility function restriction_is_securely_promotable() to check
whether it's okay to select a qual for early execution.)

* Normal execution of a list of quals must execute them in an order
that satisfies the same security rule, ie higher security_levels must
be evaluated later unless leakproof.  (This is handled in a single place
by order_qual_clauses() in createplan.c.)

order_qual_clauses() uses a heuristic to decide exactly what to do with
leakproof clauses.  Normally it sorts clauses by security_level then cost,
being careful that the sort is stable so that we don't reorder clauses
without a clear reason.  But this could result in a very expensive qual
being done before a cheaper one that is of higher security_level.
If the cheaper qual is leaky we have no choice, but if it is leakproof
we could put it first.  We choose to sort leakproof quals as if they
have security_level zero, but only when their cost is less than 10X
cpu_operator_cost; that restriction alleviates the opposite problem of
doing expensive quals first just because they're leakproof.
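
A minimal sketch of that heuristic (the real order_qual_clauses() works on
RestrictInfo nodes; the tuple representation here is an assumption for
illustration):

```python
CPU_OPERATOR_COST = 0.0025   # default value of the cpu_operator_cost GUC

def order_qual_clauses(quals):
    """Illustrative sketch: each qual is (security_level, leakproof, cost).
    Sort by security_level then cost, stably; cheap leakproof quals are
    treated as if they had security_level zero."""
    def effective_level(q):
        level, leakproof, cost = q
        if leakproof and cost < 10 * CPU_OPERATOR_COST:
            return 0
        return level
    return sorted(quals, key=lambda q: (effective_level(q), q[2]))

quals = [
    (1, False, 0.01),    # leaky qual from the query text, level 1
    (0, False, 5.0),     # expensive security-barrier qual, level 0
    (1, True,  0.001),   # cheap leakproof qual: promoted to level 0
]
ordered = order_qual_clauses(quals)
assert ordered[0] == (1, True, 0.001)   # cheap leakproof qual runs first
assert ordered[-1] == (1, False, 0.01)  # leaky level-1 qual runs last
```
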

Additional rules will be needed to support safe handling of join quals
when there is a mix of security levels among join quals; for example, it
will be necessary to prevent leaky higher-security-level quals from being
evaluated at a lower join level than other quals of lower security level.
Currently there is no need to consider that since security-prioritized
quals can only be single-table restriction quals coming from RLS policies
or security-barrier views, and security-barrier view subqueries are never
flattened into the parent query.  Hence enforcement of security-prioritized
quals only happens at the table scan level.  With extra rules for safe
handling of security levels among join quals, it should be possible to let
security-barrier views be flattened into the parent query, allowing more
flexibility of planning while still preserving required ordering of qual
evaluation.  But that will come later.


Post scan/join planning
-----------------------

So far we have discussed only scan/join planning, that is, implementation
of the FROM and WHERE clauses of a SQL query.  But the planner must also
determine how to deal with GROUP BY, aggregation, and other higher-level
features of queries; and in many cases there are multiple ways to do these
steps and thus opportunities for optimization choices.  These steps, like
scan/join planning, are handled by constructing Paths representing the
different ways to do a step, then choosing the cheapest Path.

Since all Paths require a RelOptInfo as "parent", we create RelOptInfos
representing the outputs of these upper-level processing steps.  These
RelOptInfos are mostly dummy, but their pathlist lists hold all the Paths
considered useful for each step.  Currently, we may create these types of
additional RelOptInfos during upper-level planning:

UPPERREL_SETOP		result of UNION/INTERSECT/EXCEPT, if any
UPPERREL_PARTIAL_GROUP_AGG	result of partial grouping/aggregation, if any
UPPERREL_GROUP_AGG	result of grouping/aggregation, if any
UPPERREL_WINDOW		result of window functions, if any
UPPERREL_DISTINCT	result of "SELECT DISTINCT", if any
UPPERREL_ORDERED	result of ORDER BY, if any
UPPERREL_FINAL		result of any remaining top-level actions

UPPERREL_FINAL is used to represent any final processing steps, currently
LockRows (SELECT FOR UPDATE), LIMIT/OFFSET, and ModifyTable.  There is no
flexibility about the order in which these steps are done, and thus no need
to subdivide this stage more finely.

These "upper relations" are identified by the UPPERREL enum values shown
above, plus a relids set, which allows there to be more than one upperrel
of the same kind.  We use NULL for the relids if there's no need for more
than one upperrel of the same kind.  Currently, in fact, the relids set
is vestigial because it's always NULL, but that's expected to change in
the future.  For example, in planning set operations, we might need the
relids to denote which subset of the leaf SELECTs has been combined in a
particular group of Paths that are competing with each other.

The result of subquery_planner() is always returned as a set of Paths
stored in the UPPERREL_FINAL rel with NULL relids.  The other types of
upperrels are created only if needed for the particular query.


Parallel Query and Partial Paths
--------------------------------

Parallel query involves dividing up the work that needs to be performed
either by an entire query or some portion of the query in such a way that
some of that work can be done by one or more worker processes, which are
called parallel workers.  Parallel workers are a subtype of dynamic
background workers; see src/backend/access/transam/README.parallel for a
fuller description.  The academic literature on parallel query suggests
that parallel execution strategies can be divided into essentially two
categories: pipelined parallelism, where the execution of the query is
divided into multiple stages and each stage is handled by a separate
process; and partitioning parallelism, where the data is split between
multiple processes and each process handles a subset of it.  The
literature, however, suggests that gains from pipeline parallelism are
often very limited due to the difficulty of avoiding pipeline stalls.
Consequently, we do not currently attempt to generate query plans that
use this technique.

Instead, we focus on partitioning parallelism, which does not require
that the underlying table be partitioned.  It only requires that (1)
there is some method of dividing the data from at least one of the base
tables involved in the relation across multiple processes, (2) each
process can handle its own portion of the data, and (3) the results can
be collected.  Requirements (2) and (3) are satisfied by the
executor node Gather (or GatherMerge), which launches any number of worker
processes and executes its single child plan in all of them, and perhaps
in the leader also, if the children aren't generating enough data to keep
the leader busy.  Requirement (1) is handled by the table scan node: when
invoked with parallel_aware = true, this node will, in effect, partition
the table on a block by block basis, returning a subset of the tuples from
the relation in each worker where that scan node is executed.
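
Requirement (1), dividing a table's blocks among cooperating processes, can
be sketched with a shared counter (a deliberate simplification of the
shared-memory scan state the real parallel-aware scan uses, and using
threads rather than processes):

```python
import threading

class ParallelBlockScan:
    """Illustrative sketch: workers claim table blocks one at a time from a
    shared counter, so each block is scanned by exactly one worker."""
    def __init__(self, nblocks):
        self.nblocks = nblocks
        self.next_block = 0
        self.lock = threading.Lock()

    def claim_block(self):
        with self.lock:
            if self.next_block >= self.nblocks:
                return None          # scan finished
            block = self.next_block
            self.next_block += 1
            return block

scan = ParallelBlockScan(nblocks=100)
claimed = [[] for _ in range(4)]

def worker(i):
    # Each worker processes the subset of blocks it manages to claim.
    while (block := scan.claim_block()) is not None:
        claimed[i].append(block)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every block was scanned exactly once across all workers.
assert sorted(b for blocks in claimed for b in blocks) == list(range(100))
```
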

Just as we do for non-parallel access methods, we build Paths to
represent access strategies that can be used in a parallel plan.  These
are, in essence, the same strategies that are available in the
non-parallel plan, but there is an important difference: a path that
will run beneath a Gather node returns only a subset of the query
results in each worker, not all of them.  To form a path that can
actually be executed, the (rather large) cost of the Gather node must be
accounted for.  For this reason among others, paths intended to run
beneath a Gather node - which we call "partial" paths since they return
only a subset of the results in each worker - must be kept separate from
ordinary paths (see RelOptInfo's partial_pathlist and the function
add_partial_path).

One of the keys to making parallel query effective is to run as much of
the query in parallel as possible.  Therefore, we expect it to generally
be desirable to postpone the Gather stage until as near to the top of the
plan as possible.  Expanding the range of cases in which more work can be
pushed below the Gather (and costing them accurately) is likely to keep us
busy for a long time to come.

Partitionwise joins
-------------------

A join between two similarly partitioned tables can be broken down into joins
between their matching partitions if there exists an equi-join condition
between the partition keys of the joining tables.  The equi-join between
partition keys implies that all join partners for a given row in one
partitioned table must be in the corresponding partition of the other
partitioned table.  Because of this, the join between partitioned tables can
be broken into joins between the matching partitions.  The resultant join is
partitioned in the same way as the joining relations, thus allowing an N-way
join between similarly partitioned tables having an equi-join condition
between their partition keys to be broken down into N-way joins between their
matching partitions.  This technique of breaking down a join between
partitioned tables into joins between their partitions is called
partitionwise join.  We will use the term "partitioned relation" for either a
partitioned table or a join between compatibly partitioned tables.
1109
Even if the joining relations don't have exactly the same partition bounds,
partitionwise join can still be applied by using an advanced
partition-matching algorithm.  For both of the joining relations, the
algorithm checks whether every partition of one joining relation matches at
most one partition of the other joining relation.  In such a case the join
between the joining relations can be broken down into joins between the
matching partitions.  The join relation can then be considered partitioned.
The algorithm produces the pairs of matching partitions, plus the partition
bounds for the join relation, to allow partitionwise join to be used for
computing the join.  The algorithm is implemented in partition_bounds_merge().
For an N-way join relation considered partitioned this way, not every pair of
joining relations can use partitionwise join.  For example:

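The matching step can be sketched for list partitioning as follows
(illustrative Python with invented names; the real partition_bounds_merge()
is C code and also handles the range and hash strategies):

```python
# Hypothetical sketch of one-to-one partition matching for list-partitioned
# relations.  Each relation's bounds are given as a list of frozensets of
# accepted values, one per partition.  Returns (pairs, merged bounds), or
# None when some partition matches more than one partition of the other
# relation, in which case partitionwise join is not possible.
def merge_list_bounds(bounds_a, bounds_b):
    pairs, merged, used_b = [], [], set()
    for ia, va in enumerate(bounds_a):
        matches = [ib for ib, vb in enumerate(bounds_b) if va & vb]
        if len(matches) > 1:
            return None                 # A partition matches several: give up
        if matches:
            ib = matches[0]
            if ib in used_b:
                return None             # B partition already matched: give up
            used_b.add(ib)
            pairs.append((ia, ib))
            merged.append(va | bounds_b[ib])
        else:
            pairs.append((ia, None))    # unmatched partition
            merged.append(va)
    return pairs, merged
```

For example, matching bounds [{1,2}, {3}, {4}] against [{1,2}, {3}] pairs
the first two partitions and leaves the third unmatched, while matching
[{1,2}] against [{1}, {2}] fails because one partition would need two
partners.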
	(A leftjoin B on (Pab)) innerjoin C on (Pac)

where A, B, and C are partitioned tables, and A has an extra partition
compared to B and C.  When considering partitionwise join for the join {A B},
the extra partition of A doesn't have a matching partition on the nullable
side, which is the case that the current implementation of partitionwise join
can't handle.  So {A B} is not considered partitioned, and the pair of {A B}
and C considered for the 3-way join can't use partitionwise join.  On the
other hand, the pair of {A C} and B can use partitionwise join because {A C}
is considered partitioned by eliminating the extra partition (see identity 1
on outer join reordering).  Whether an N-way join can use partitionwise join
is determined based on the first pair of joining relations that are both
partitioned and can use partitionwise join.

The partitioning properties of a partitioned relation are stored in its
RelOptInfo.  Information about the data types of the partition keys is stored
in a PartitionSchemeData structure.  The planner maintains a list of
canonical partition schemes (distinct PartitionSchemeData objects) so that
the RelOptInfos of any two partitioned relations with the same partitioning
scheme point to the same PartitionSchemeData object.  This reduces the memory
consumed by PartitionSchemeData objects and makes it easy to compare the
partition schemes of joining relations.

Partitionwise aggregates/grouping
---------------------------------

If the GROUP BY clause contains all of the partition keys, all the rows
that belong to a given group must come from a single partition; therefore,
aggregation can be done completely separately for each partition.  Otherwise,
partial aggregates can be computed for each partition, and then finalized
after appending the results from the individual partitions.  This technique
of breaking down aggregation or grouping over a partitioned relation into
aggregation or grouping over its partitions is called partitionwise
aggregation.  Especially when the partition keys match the GROUP BY clause,
this can be significantly faster than the regular method.

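The two shapes described above can be sketched as follows (hypothetical
Python, not planner code): grouping on the partition key finishes each
partition independently, while grouping on a non-partition column produces
per-partition partials that are combined afterwards:

```python
from collections import defaultdict

# A table partitioned on column "k"; each row is (k, g, x), and we
# aggregate SUM(x).
parts = [
    [("p1", "g1", 1), ("p1", "g2", 2)],                    # partition 0
    [("p2", "g1", 3), ("p2", "g2", 4), ("p2", "g1", 5)],   # partition 1
]

def sum_by(rows, keyfn):
    groups = defaultdict(int)
    for row in rows:
        groups[keyfn(row)] += row[2]
    return dict(groups)

# GROUP BY k (contains the partition key): every group lives in a single
# partition, so each partition is aggregated to completion and the
# finished results are simply appended.
by_k = {}
for part in parts:
    by_k.update(sum_by(part, lambda r: r[0]))

# GROUP BY g (no partition key): groups span partitions, so compute a
# partial sum per partition, then finalize by combining the partials.
by_g = defaultdict(int)
for part in parts:
    for g, s in sum_by(part, lambda r: r[1]).items():
        by_g[g] += s
```

The second shape works because SUM's partial states combine by addition;
an aggregate without a combine step could not be split this way.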