1 //===- Attributor.h --- Module-wide attribute deduction ---------*- C++ -*-===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
// Attributor: An interprocedural (abstract) "attribute" deduction framework.
10 //
// The Attributor framework is an interprocedural abstract analysis (fixpoint
12 // iteration analysis). The goal is to allow easy deduction of new attributes as
13 // well as information exchange between abstract attributes in-flight.
14 //
15 // The Attributor class is the driver and the link between the various abstract
16 // attributes. The Attributor will iterate until a fixpoint state is reached by
// all abstract attributes in-flight, or until it enforces a pessimistic
// fixpoint because an iteration limit is reached.
19 //
20 // Abstract attributes, derived from the AbstractAttribute class, actually
21 // describe properties of the code. They can correspond to actual LLVM-IR
22 // attributes, or they can be more general, ultimately unrelated to LLVM-IR
// attributes. The latter is useful when an abstract attribute provides
// information to other abstract attributes in-flight but we might not want to
// manifest the information. The Attributor allows querying in-flight abstract
26 // attributes through the `Attributor::getAAFor` method (see the method
27 // description for an example). If the method is used by an abstract attribute
28 // P, and it results in an abstract attribute Q, the Attributor will
29 // automatically capture a potential dependence from Q to P. This dependence
30 // will cause P to be reevaluated whenever Q changes in the future.
31 //
32 // The Attributor will only reevaluate abstract attributes that might have
// changed since the last iteration. That means that the Attributor will not
34 // revisit all instructions/blocks/functions in the module but only query
35 // an update from a subset of the abstract attributes.
36 //
37 // The update method `AbstractAttribute::updateImpl` is implemented by the
38 // specific "abstract attribute" subclasses. The method is invoked whenever the
39 // currently assumed state (see the AbstractState class) might not be valid
40 // anymore. This can, for example, happen if the state was dependent on another
41 // abstract attribute that changed. In every invocation, the update method has
42 // to adjust the internal state of an abstract attribute to a point that is
43 // justifiable by the underlying IR and the current state of abstract attributes
44 // in-flight. Since the IR is given and assumed to be valid, the information
45 // derived from it can be assumed to hold. However, information derived from
46 // other abstract attributes is conditional on various things. If the justifying
47 // state changed, the `updateImpl` has to revisit the situation and potentially
// find another justification or limit the optimistic assumptions made.
49 //
50 // Change is the key in this framework. Until a state of no-change, thus a
51 // fixpoint, is reached, the Attributor will query the abstract attributes
52 // in-flight to re-evaluate their state. If the (current) state is too
// optimistic, that is, it cannot be justified anymore through other abstract
54 // attributes or the state of the IR, the state of the abstract attribute will
55 // have to change. Generally, we assume abstract attribute state to be a finite
56 // height lattice and the update function to be monotone. However, these
57 // conditions are not enforced because the iteration limit will guarantee
// termination. If an optimistic fixpoint is reached, or a pessimistic
// fixpoint is enforced after a timeout, the abstract attributes are tasked to
60 // manifest their result in the IR for passes to come.
61 //
62 // Attribute manifestation is not mandatory. If desired, there is support to
// generate one or multiple LLVM-IR attributes through the helper struct
64 // IRAttribute. In the simplest case, a subclass inherits from IRAttribute with
65 // a proper Attribute::AttrKind as template parameter. The Attributor
66 // manifestation framework will then create and place a new attribute if it is
67 // allowed to do so (based on the abstract state). Other use cases can be
68 // achieved by overloading AbstractAttribute or IRAttribute methods.
69 //
70 //
// The "mechanics" of adding a new "abstract attribute" (an illustrative
// sketch follows this list):
72 // - Define a class (transitively) inheriting from AbstractAttribute and one
73 //   (which could be the same) that (transitively) inherits from AbstractState.
74 //   For the latter, consider the already available BooleanState and
//   {Inc,Dec,Bit}IntegerState if they fit your needs, e.g., if you only
//   require number tracking or bit encoding.
// - Implement all pure methods. Also use overloading if the attribute does
//   not conform to the "default" behavior: A (set of) LLVM-IR attribute(s) for
79 //   an argument, call site argument, function return value, or function. See
80 //   the class and method descriptions for more information on the two
81 //   "Abstract" classes and their respective methods.
82 // - Register opportunities for the new abstract attribute in the
83 //   `Attributor::identifyDefaultAbstractAttributes` method if it should be
84 //   counted as a 'default' attribute.
85 // - Add sufficient tests.
86 // - Add a Statistics object for bookkeeping. If it is a simple (set of)
87 //   attribute(s) manifested through the Attributor manifestation framework, see
88 //   the bookkeeping function in Attributor.cpp.
89 // - If instructions with a certain opcode are interesting to the attribute, add
90 //   that opcode to the switch in `Attributor::identifyAbstractAttributes`. This
91 //   will make it possible to query all those instructions through the
92 //   `InformationCache::getOpcodeInstMapForFunction` interface and eliminate the
93 //   need to traverse the IR repeatedly.
94 //
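// Purely as an illustration, and not as a definitive recipe, a new abstract
// attribute following the steps above could be sketched as shown below. The
// name AAMyProperty, the choice of BooleanState, and the shown members are
// assumptions made for this example; the authoritative set of methods to
// implement is given by the AbstractAttribute interface declared in this
// header:
//
//   struct AAMyProperty : public AbstractAttribute, public BooleanState {
//     // Factory method used by the Attributor to create this abstract
//     // attribute for a given position.
//     static AAMyProperty &createForPosition(const IRPosition &IRP,
//                                            Attributor &A);
//
//     // Re-justify the assumed state against the IR and the other abstract
//     // attributes in-flight; return CHANGED if the state was adjusted.
//     ChangeStatus updateImpl(Attributor &A) override;
//
//     // Unique identifier used, together with the IRPosition, as the lookup
//     // key inside the Attributor.
//     static const char ID;
//
//     // ... remaining pure methods of AbstractAttribute ...
//   };
//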
95 //===----------------------------------------------------------------------===//
96 
97 #ifndef LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
98 #define LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
99 
100 #include "llvm/ADT/DenseSet.h"
101 #include "llvm/ADT/GraphTraits.h"
102 #include "llvm/ADT/MapVector.h"
103 #include "llvm/ADT/STLExtras.h"
104 #include "llvm/ADT/SetVector.h"
105 #include "llvm/Analysis/AssumeBundleQueries.h"
106 #include "llvm/Analysis/CFG.h"
107 #include "llvm/Analysis/CGSCCPassManager.h"
108 #include "llvm/Analysis/LazyCallGraph.h"
109 #include "llvm/Analysis/LoopInfo.h"
110 #include "llvm/Analysis/MustExecute.h"
111 #include "llvm/Analysis/PostDominators.h"
112 #include "llvm/Analysis/TargetLibraryInfo.h"
113 #include "llvm/IR/AbstractCallSite.h"
114 #include "llvm/IR/ConstantRange.h"
115 #include "llvm/IR/PassManager.h"
116 #include "llvm/Support/Allocator.h"
117 #include "llvm/Support/Casting.h"
118 #include "llvm/Support/TimeProfiler.h"
119 #include "llvm/Transforms/Utils/CallGraphUpdater.h"
120 
121 namespace llvm {
122 
123 struct AADepGraphNode;
124 struct AADepGraph;
125 struct Attributor;
126 struct AbstractAttribute;
127 struct InformationCache;
128 struct AAIsDead;
129 
130 class AAManager;
131 class AAResults;
132 class Function;
133 
/// The value passed to the command line option that defines the maximal
/// initialization chain length.
136 extern unsigned MaxInitializationChainLength;
137 
138 ///{
139 enum class ChangeStatus {
140   CHANGED,
141   UNCHANGED,
142 };
143 
144 ChangeStatus operator|(ChangeStatus l, ChangeStatus r);
145 ChangeStatus operator&(ChangeStatus l, ChangeStatus r);
146 
147 enum class DepClassTy {
148   REQUIRED,
149   OPTIONAL,
150 };
151 ///}
152 
153 /// The data structure for the nodes of a dependency graph
154 struct AADepGraphNode {
155 public:
  virtual ~AADepGraphNode() {}
157   using DepTy = PointerIntPair<AADepGraphNode *, 1>;
158 
159 protected:
160   /// Set of dependency graph nodes which should be updated if this one
161   /// is updated. The bit encodes if it is optional.
162   TinyPtrVector<DepTy> Deps;
163 
164   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
165   static AbstractAttribute *DepGetValAA(DepTy &DT) {
166     return cast<AbstractAttribute>(DT.getPointer());
167   }
168 
169   operator AbstractAttribute *() { return cast<AbstractAttribute>(this); }
170 
171 public:
172   using iterator =
173       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
174   using aaiterator =
175       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetValAA)>;
176 
177   aaiterator begin() { return aaiterator(Deps.begin(), &DepGetValAA); }
178   aaiterator end() { return aaiterator(Deps.end(), &DepGetValAA); }
179   iterator child_begin() { return iterator(Deps.begin(), &DepGetVal); }
180   iterator child_end() { return iterator(Deps.end(), &DepGetVal); }
181 
182   virtual void print(raw_ostream &OS) const { OS << "AADepNode Impl\n"; }
183   TinyPtrVector<DepTy> &getDeps() { return Deps; }
184 
185   friend struct Attributor;
186   friend struct AADepGraph;
187 };
188 
189 /// The data structure for the dependency graph
190 ///
191 /// Note that in this graph if there is an edge from A to B (A -> B),
192 /// then it means that B depends on A, and when the state of A is
/// updated, node B should also be updated.
194 struct AADepGraph {
195   AADepGraph() {}
196   ~AADepGraph() {}
197 
198   using DepTy = AADepGraphNode::DepTy;
199   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
200   using iterator =
201       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
202 
203   /// There is no root node for the dependency graph. But the SCCIterator
/// requires a single entry point, so we maintain a fake ("synthetic") root
205   /// node that depends on every node.
206   AADepGraphNode SyntheticRoot;
207   AADepGraphNode *GetEntryNode() { return &SyntheticRoot; }
208 
209   iterator begin() { return SyntheticRoot.child_begin(); }
210   iterator end() { return SyntheticRoot.child_end(); }
211 
212   void viewGraph();
213 
214   /// Dump graph to file
215   void dumpGraph();
216 
217   /// Print dependency graph
218   void print();
219 };
220 
221 /// Helper to describe and deal with positions in the LLVM-IR.
222 ///
223 /// A position in the IR is described by an anchor value and an "offset" that
224 /// could be the argument number, for call sites and arguments, or an indicator
225 /// of the "position kind". The kinds, specified in the Kind enum below, include
/// the locations in the attribute list, e.g., function scope and return value,
227 /// as well as a distinction between call sites and functions. Finally, there
228 /// are floating values that do not have a corresponding attribute list
229 /// position.
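///
/// As a hedged illustration (Arg, CB, and V are assumed to be an Argument, a
/// CallBase, and a Value available in the caller's context), positions are
/// created through the static factory functions declared below:
/// \code
///   IRPosition ArgPos   = IRPosition::argument(Arg);
///   IRPosition CSArgPos = IRPosition::callsite_argument(CB, /* ArgNo */ 0);
///   IRPosition ValuePos = IRPosition::value(V); // may be a "floating" value
/// \endcode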
230 struct IRPosition {
231 
232   /// The positions we distinguish in the IR.
233   enum Kind : char {
234     IRP_INVALID,  ///< An invalid position.
235     IRP_FLOAT,    ///< A position that is not associated with a spot suitable
236                   ///< for attributes. This could be any value or instruction.
237     IRP_RETURNED, ///< An attribute for the function return value.
238     IRP_CALL_SITE_RETURNED, ///< An attribute for a call site return value.
239     IRP_FUNCTION,           ///< An attribute for a function (scope).
240     IRP_CALL_SITE,          ///< An attribute for a call site (function scope).
241     IRP_ARGUMENT,           ///< An attribute for a function argument.
242     IRP_CALL_SITE_ARGUMENT, ///< An attribute for a call site argument.
243   };
244 
245   /// Default constructor available to create invalid positions implicitly. All
246   /// other positions need to be created explicitly through the appropriate
247   /// static member function.
248   IRPosition() : Enc(nullptr, ENC_VALUE) { verify(); }
249 
250   /// Create a position describing the value of \p V.
251   static const IRPosition value(const Value &V) {
252     if (auto *Arg = dyn_cast<Argument>(&V))
253       return IRPosition::argument(*Arg);
254     if (auto *CB = dyn_cast<CallBase>(&V))
255       return IRPosition::callsite_returned(*CB);
256     return IRPosition(const_cast<Value &>(V), IRP_FLOAT);
257   }
258 
259   /// Create a position describing the function scope of \p F.
260   static const IRPosition function(const Function &F) {
261     return IRPosition(const_cast<Function &>(F), IRP_FUNCTION);
262   }
263 
264   /// Create a position describing the returned value of \p F.
265   static const IRPosition returned(const Function &F) {
266     return IRPosition(const_cast<Function &>(F), IRP_RETURNED);
267   }
268 
269   /// Create a position describing the argument \p Arg.
270   static const IRPosition argument(const Argument &Arg) {
271     return IRPosition(const_cast<Argument &>(Arg), IRP_ARGUMENT);
272   }
273 
274   /// Create a position describing the function scope of \p CB.
275   static const IRPosition callsite_function(const CallBase &CB) {
276     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE);
277   }
278 
279   /// Create a position describing the returned value of \p CB.
280   static const IRPosition callsite_returned(const CallBase &CB) {
281     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE_RETURNED);
282   }
283 
284   /// Create a position describing the argument of \p CB at position \p ArgNo.
285   static const IRPosition callsite_argument(const CallBase &CB,
286                                             unsigned ArgNo) {
287     return IRPosition(const_cast<Use &>(CB.getArgOperandUse(ArgNo)),
288                       IRP_CALL_SITE_ARGUMENT);
289   }
290 
291   /// Create a position describing the argument of \p ACS at position \p ArgNo.
292   static const IRPosition callsite_argument(AbstractCallSite ACS,
293                                             unsigned ArgNo) {
294     if (ACS.getNumArgOperands() <= ArgNo)
295       return IRPosition();
296     int CSArgNo = ACS.getCallArgOperandNo(ArgNo);
297     if (CSArgNo >= 0)
298       return IRPosition::callsite_argument(
299           cast<CallBase>(*ACS.getInstruction()), CSArgNo);
300     return IRPosition();
301   }
302 
303   /// Create a position with function scope matching the "context" of \p IRP.
304   /// If \p IRP is a call site (see isAnyCallSitePosition()) then the result
305   /// will be a call site position, otherwise the function position of the
306   /// associated function.
307   static const IRPosition function_scope(const IRPosition &IRP) {
308     if (IRP.isAnyCallSitePosition()) {
309       return IRPosition::callsite_function(
310           cast<CallBase>(IRP.getAnchorValue()));
311     }
312     assert(IRP.getAssociatedFunction());
313     return IRPosition::function(*IRP.getAssociatedFunction());
314   }
315 
316   bool operator==(const IRPosition &RHS) const { return Enc == RHS.Enc; }
317   bool operator!=(const IRPosition &RHS) const { return !(*this == RHS); }
318 
319   /// Return the value this abstract attribute is anchored with.
320   ///
321   /// The anchor value might not be the associated value if the latter is not
322   /// sufficient to determine where arguments will be manifested. This is, so
323   /// far, only the case for call site arguments as the value is not sufficient
324   /// to pinpoint them. Instead, we can use the call site as an anchor.
325   Value &getAnchorValue() const {
326     switch (getEncodingBits()) {
327     case ENC_VALUE:
328     case ENC_RETURNED_VALUE:
329     case ENC_FLOATING_FUNCTION:
330       return *getAsValuePtr();
331     case ENC_CALL_SITE_ARGUMENT_USE:
332       return *(getAsUsePtr()->getUser());
333     default:
      llvm_unreachable("Unknown encoding!");
    }
336   }
337 
338   /// Return the associated function, if any.
339   Function *getAssociatedFunction() const {
340     if (auto *CB = dyn_cast<CallBase>(&getAnchorValue())) {
      // We reuse the logic that associates callback callees to arguments of a
342       // call site here to identify the callback callee as the associated
343       // function.
344       if (Argument *Arg = getAssociatedArgument())
345         return Arg->getParent();
346       return CB->getCalledFunction();
347     }
348     return getAnchorScope();
349   }
350 
351   /// Return the associated argument, if any.
352   Argument *getAssociatedArgument() const;
353 
354   /// Return true if the position refers to a function interface, that is the
355   /// function scope, the function return, or an argument.
356   bool isFnInterfaceKind() const {
357     switch (getPositionKind()) {
358     case IRPosition::IRP_FUNCTION:
359     case IRPosition::IRP_RETURNED:
360     case IRPosition::IRP_ARGUMENT:
361       return true;
362     default:
363       return false;
364     }
365   }
366 
367   /// Return the Function surrounding the anchor value.
368   Function *getAnchorScope() const {
369     Value &V = getAnchorValue();
370     if (isa<Function>(V))
371       return &cast<Function>(V);
372     if (isa<Argument>(V))
373       return cast<Argument>(V).getParent();
374     if (isa<Instruction>(V))
375       return cast<Instruction>(V).getFunction();
376     return nullptr;
377   }
378 
379   /// Return the context instruction, if any.
380   Instruction *getCtxI() const {
381     Value &V = getAnchorValue();
382     if (auto *I = dyn_cast<Instruction>(&V))
383       return I;
384     if (auto *Arg = dyn_cast<Argument>(&V))
385       if (!Arg->getParent()->isDeclaration())
386         return &Arg->getParent()->getEntryBlock().front();
387     if (auto *F = dyn_cast<Function>(&V))
388       if (!F->isDeclaration())
389         return &(F->getEntryBlock().front());
390     return nullptr;
391   }
392 
393   /// Return the value this abstract attribute is associated with.
394   Value &getAssociatedValue() const {
395     if (getCallSiteArgNo() < 0 || isa<Argument>(&getAnchorValue()))
396       return getAnchorValue();
397     assert(isa<CallBase>(&getAnchorValue()) && "Expected a call base!");
398     return *cast<CallBase>(&getAnchorValue())
399                 ->getArgOperand(getCallSiteArgNo());
400   }
401 
402   /// Return the type this abstract attribute is associated with.
403   Type *getAssociatedType() const {
404     if (getPositionKind() == IRPosition::IRP_RETURNED)
405       return getAssociatedFunction()->getReturnType();
406     return getAssociatedValue().getType();
407   }
408 
409   /// Return the callee argument number of the associated value if it is an
410   /// argument or call site argument, otherwise a negative value. In contrast to
411   /// `getCallSiteArgNo` this method will always return the "argument number"
  /// from the perspective of the callee. This may not be the same as the call
  /// site if this is a callback call.
414   int getCalleeArgNo() const {
415     return getArgNo(/* CallbackCalleeArgIfApplicable */ true);
416   }
417 
418   /// Return the call site argument number of the associated value if it is an
419   /// argument or call site argument, otherwise a negative value. In contrast to
  /// `getCalleeArgNo` this method will always return the "operand number" from
  /// the perspective of the call site. This may not be the same as the callee
422   /// perspective if this is a callback call.
423   int getCallSiteArgNo() const {
424     return getArgNo(/* CallbackCalleeArgIfApplicable */ false);
425   }
426 
427   /// Return the index in the attribute list for this position.
428   unsigned getAttrIdx() const {
429     switch (getPositionKind()) {
430     case IRPosition::IRP_INVALID:
431     case IRPosition::IRP_FLOAT:
432       break;
433     case IRPosition::IRP_FUNCTION:
434     case IRPosition::IRP_CALL_SITE:
435       return AttributeList::FunctionIndex;
436     case IRPosition::IRP_RETURNED:
437     case IRPosition::IRP_CALL_SITE_RETURNED:
438       return AttributeList::ReturnIndex;
439     case IRPosition::IRP_ARGUMENT:
440     case IRPosition::IRP_CALL_SITE_ARGUMENT:
441       return getCallSiteArgNo() + AttributeList::FirstArgIndex;
442     }
443     llvm_unreachable(
444         "There is no attribute index for a floating or invalid position!");
445   }
446 
447   /// Return the associated position kind.
448   Kind getPositionKind() const {
449     char EncodingBits = getEncodingBits();
450     if (EncodingBits == ENC_CALL_SITE_ARGUMENT_USE)
451       return IRP_CALL_SITE_ARGUMENT;
452     if (EncodingBits == ENC_FLOATING_FUNCTION)
453       return IRP_FLOAT;
454 
455     Value *V = getAsValuePtr();
456     if (!V)
457       return IRP_INVALID;
458     if (isa<Argument>(V))
459       return IRP_ARGUMENT;
460     if (isa<Function>(V))
461       return isReturnPosition(EncodingBits) ? IRP_RETURNED : IRP_FUNCTION;
462     if (isa<CallBase>(V))
463       return isReturnPosition(EncodingBits) ? IRP_CALL_SITE_RETURNED
464                                             : IRP_CALL_SITE;
465     return IRP_FLOAT;
466   }
467 
468   /// TODO: Figure out if the attribute related helper functions should live
469   ///       here or somewhere else.
470 
  /// Return true if an attribute of any kind in \p AKs exists in the IR at a
  /// position that will affect this one. See also getAttrs(...).
473   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
474   ///                                 e.g., the function position if this is an
475   ///                                 argument position, should be ignored.
476   bool hasAttr(ArrayRef<Attribute::AttrKind> AKs,
477                bool IgnoreSubsumingPositions = false,
478                Attributor *A = nullptr) const;
479 
480   /// Return the attributes of any kind in \p AKs existing in the IR at a
481   /// position that will affect this one. While each position can only have a
482   /// single attribute of any kind in \p AKs, there are "subsuming" positions
483   /// that could have an attribute as well. This method returns all attributes
484   /// found in \p Attrs.
485   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
486   ///                                 e.g., the function position if this is an
487   ///                                 argument position, should be ignored.
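  ///
  /// A hedged usage sketch (the chosen attribute kinds are arbitrary examples
  /// and IRP is assumed to be an existing IRPosition):
  /// \code
  ///   SmallVector<Attribute, 4> Attrs;
  ///   IRP.getAttrs({Attribute::NoAlias, Attribute::NonNull}, Attrs);
  ///   for (const Attribute &Attr : Attrs)
  ///     ... // Inspect the collected attributes.
  /// \endcode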
488   void getAttrs(ArrayRef<Attribute::AttrKind> AKs,
489                 SmallVectorImpl<Attribute> &Attrs,
490                 bool IgnoreSubsumingPositions = false,
491                 Attributor *A = nullptr) const;
492 
  /// Remove the attributes of kinds \p AKs existing in the IR at this position.
494   void removeAttrs(ArrayRef<Attribute::AttrKind> AKs) const {
495     if (getPositionKind() == IRP_INVALID || getPositionKind() == IRP_FLOAT)
496       return;
497 
498     AttributeList AttrList;
499     auto *CB = dyn_cast<CallBase>(&getAnchorValue());
500     if (CB)
501       AttrList = CB->getAttributes();
502     else
503       AttrList = getAssociatedFunction()->getAttributes();
504 
505     LLVMContext &Ctx = getAnchorValue().getContext();
506     for (Attribute::AttrKind AK : AKs)
507       AttrList = AttrList.removeAttribute(Ctx, getAttrIdx(), AK);
508 
509     if (CB)
510       CB->setAttributes(AttrList);
511     else
512       getAssociatedFunction()->setAttributes(AttrList);
513   }
514 
515   bool isAnyCallSitePosition() const {
516     switch (getPositionKind()) {
517     case IRPosition::IRP_CALL_SITE:
518     case IRPosition::IRP_CALL_SITE_RETURNED:
519     case IRPosition::IRP_CALL_SITE_ARGUMENT:
520       return true;
521     default:
522       return false;
523     }
524   }
525 
526   /// Return true if the position is an argument or call site argument.
527   bool isArgumentPosition() const {
528     switch (getPositionKind()) {
529     case IRPosition::IRP_ARGUMENT:
530     case IRPosition::IRP_CALL_SITE_ARGUMENT:
531       return true;
532     default:
533       return false;
534     }
535   }
536 
537   /// Special DenseMap key values.
538   ///
539   ///{
540   static const IRPosition EmptyKey;
541   static const IRPosition TombstoneKey;
542   ///}
543 
544   /// Conversion into a void * to allow reuse of pointer hashing.
545   operator void *() const { return Enc.getOpaqueValue(); }
546 
547 private:
548   /// Private constructor for special values only!
549   explicit IRPosition(void *Ptr) { Enc.setFromOpaqueValue(Ptr); }
550 
  /// IRPosition anchored at \p AnchorVal with kind/argument number \p PK.
552   explicit IRPosition(Value &AnchorVal, Kind PK) {
553     switch (PK) {
554     case IRPosition::IRP_INVALID:
555       llvm_unreachable("Cannot create invalid IRP with an anchor value!");
556       break;
557     case IRPosition::IRP_FLOAT:
558       // Special case for floating functions.
559       if (isa<Function>(AnchorVal))
560         Enc = {&AnchorVal, ENC_FLOATING_FUNCTION};
561       else
562         Enc = {&AnchorVal, ENC_VALUE};
563       break;
564     case IRPosition::IRP_FUNCTION:
565     case IRPosition::IRP_CALL_SITE:
566       Enc = {&AnchorVal, ENC_VALUE};
567       break;
568     case IRPosition::IRP_RETURNED:
569     case IRPosition::IRP_CALL_SITE_RETURNED:
570       Enc = {&AnchorVal, ENC_RETURNED_VALUE};
571       break;
572     case IRPosition::IRP_ARGUMENT:
573       Enc = {&AnchorVal, ENC_VALUE};
574       break;
575     case IRPosition::IRP_CALL_SITE_ARGUMENT:
576       llvm_unreachable(
577           "Cannot create call site argument IRP with an anchor value!");
578       break;
579     }
580     verify();
581   }
582 
583   /// Return the callee argument number of the associated value if it is an
584   /// argument or call site argument. See also `getCalleeArgNo` and
585   /// `getCallSiteArgNo`.
586   int getArgNo(bool CallbackCalleeArgIfApplicable) const {
587     if (CallbackCalleeArgIfApplicable)
588       if (Argument *Arg = getAssociatedArgument())
589         return Arg->getArgNo();
590     switch (getPositionKind()) {
591     case IRPosition::IRP_ARGUMENT:
592       return cast<Argument>(getAsValuePtr())->getArgNo();
593     case IRPosition::IRP_CALL_SITE_ARGUMENT: {
594       Use &U = *getAsUsePtr();
595       return cast<CallBase>(U.getUser())->getArgOperandNo(&U);
596     }
597     default:
598       return -1;
599     }
600   }
601 
602   /// IRPosition for the use \p U. The position kind \p PK needs to be
  /// IRP_CALL_SITE_ARGUMENT; the anchor value is the user, the associated value
  /// is the used value.
605   explicit IRPosition(Use &U, Kind PK) {
606     assert(PK == IRP_CALL_SITE_ARGUMENT &&
607            "Use constructor is for call site arguments only!");
608     Enc = {&U, ENC_CALL_SITE_ARGUMENT_USE};
609     verify();
610   }
611 
612   /// Verify internal invariants.
613   void verify();
614 
  /// Return the attributes of kind \p AK existing in the IR as attributes.
616   bool getAttrsFromIRAttr(Attribute::AttrKind AK,
617                           SmallVectorImpl<Attribute> &Attrs) const;
618 
619   /// Return the attributes of kind \p AK existing in the IR as operand bundles
620   /// of an llvm.assume.
621   bool getAttrsFromAssumes(Attribute::AttrKind AK,
622                            SmallVectorImpl<Attribute> &Attrs,
623                            Attributor &A) const;
624 
625   /// Return the underlying pointer as Value *, valid for all positions but
626   /// IRP_CALL_SITE_ARGUMENT.
627   Value *getAsValuePtr() const {
628     assert(getEncodingBits() != ENC_CALL_SITE_ARGUMENT_USE &&
629            "Not a value pointer!");
630     return reinterpret_cast<Value *>(Enc.getPointer());
631   }
632 
633   /// Return the underlying pointer as Use *, valid only for
634   /// IRP_CALL_SITE_ARGUMENT positions.
635   Use *getAsUsePtr() const {
636     assert(getEncodingBits() == ENC_CALL_SITE_ARGUMENT_USE &&
637            "Not a value pointer!");
638     return reinterpret_cast<Use *>(Enc.getPointer());
639   }
640 
641   /// Return true if \p EncodingBits describe a returned or call site returned
642   /// position.
643   static bool isReturnPosition(char EncodingBits) {
644     return EncodingBits == ENC_RETURNED_VALUE;
645   }
646 
647   /// Return true if the encoding bits describe a returned or call site returned
648   /// position.
649   bool isReturnPosition() const { return isReturnPosition(getEncodingBits()); }
650 
651   /// The encoding of the IRPosition is a combination of a pointer and two
652   /// encoding bits. The values of the encoding bits are defined in the enum
653   /// below. The pointer is either a Value* (for the first three encoding bit
654   /// combinations) or Use* (for ENC_CALL_SITE_ARGUMENT_USE).
655   ///
656   ///{
657   enum {
658     ENC_VALUE = 0b00,
659     ENC_RETURNED_VALUE = 0b01,
660     ENC_FLOATING_FUNCTION = 0b10,
661     ENC_CALL_SITE_ARGUMENT_USE = 0b11,
662   };
663 
664   // Reserve the maximal amount of bits so there is no need to mask out the
665   // remaining ones. We will not encode anything else in the pointer anyway.
666   static constexpr int NumEncodingBits =
667       PointerLikeTypeTraits<void *>::NumLowBitsAvailable;
668   static_assert(NumEncodingBits >= 2, "At least two bits are required!");
669 
670   /// The pointer with the encoding bits.
671   PointerIntPair<void *, NumEncodingBits, char> Enc;
672   ///}
673 
674   /// Return the encoding bits.
675   char getEncodingBits() const { return Enc.getInt(); }
676 };
677 
678 /// Helper that allows IRPosition as a key in a DenseMap.
679 template <> struct DenseMapInfo<IRPosition> : DenseMapInfo<void *> {
680   static inline IRPosition getEmptyKey() { return IRPosition::EmptyKey; }
681   static inline IRPosition getTombstoneKey() {
682     return IRPosition::TombstoneKey;
683   }
684 };
685 
686 /// A visitor class for IR positions.
687 ///
/// Given a position P, the SubsumingPositionIterator allows visiting "subsuming
689 /// positions" wrt. attributes/information. Thus, if a piece of information
690 /// holds for a subsuming position, it also holds for the position P.
691 ///
692 /// The subsuming positions always include the initial position and then,
693 /// depending on the position kind, additionally the following ones:
694 /// - for IRP_RETURNED:
695 ///   - the function (IRP_FUNCTION)
696 /// - for IRP_ARGUMENT:
697 ///   - the function (IRP_FUNCTION)
698 /// - for IRP_CALL_SITE:
699 ///   - the callee (IRP_FUNCTION), if known
700 /// - for IRP_CALL_SITE_RETURNED:
701 ///   - the callee (IRP_RETURNED), if known
702 ///   - the call site (IRP_FUNCTION)
703 ///   - the callee (IRP_FUNCTION), if known
704 /// - for IRP_CALL_SITE_ARGUMENT:
705 ///   - the argument of the callee (IRP_ARGUMENT), if known
706 ///   - the callee (IRP_FUNCTION), if known
707 ///   - the position the call site argument is associated with if it is not
708 ///     anchored to the call site, e.g., if it is an argument then the argument
709 ///     (IRP_ARGUMENT)
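///
/// For illustration (a hedged sketch; IRP is assumed to be some IRPosition
/// and the loop body is a placeholder), the iterator is typically used in a
/// range-based for loop:
/// \code
///   for (const IRPosition &SubsumingIRP : SubsumingPositionIterator(IRP)) {
///     // Inspect attributes/information attached to SubsumingIRP.
///   }
/// \endcode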
710 class SubsumingPositionIterator {
711   SmallVector<IRPosition, 4> IRPositions;
712   using iterator = decltype(IRPositions)::iterator;
713 
714 public:
715   SubsumingPositionIterator(const IRPosition &IRP);
716   iterator begin() { return IRPositions.begin(); }
717   iterator end() { return IRPositions.end(); }
718 };
719 
/// Wrapper for FunctionAnalysisManager.
721 struct AnalysisGetter {
722   template <typename Analysis>
723   typename Analysis::Result *getAnalysis(const Function &F) {
724     if (!FAM || !F.getParent())
725       return nullptr;
726     return &FAM->getResult<Analysis>(const_cast<Function &>(F));
727   }
728 
729   AnalysisGetter(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
730   AnalysisGetter() {}
731 
732 private:
733   FunctionAnalysisManager *FAM = nullptr;
734 };
735 
736 /// Data structure to hold cached (LLVM-IR) information.
737 ///
738 /// All attributes are given an InformationCache object at creation time to
739 /// avoid inspection of the IR by all of them individually. This default
740 /// InformationCache will hold information required by 'default' attributes,
741 /// thus the ones deduced when Attributor::identifyDefaultAbstractAttributes(..)
742 /// is called.
743 ///
744 /// If custom abstract attributes, registered manually through
745 /// Attributor::registerAA(...), need more information, especially if it is not
746 /// reusable, it is advised to inherit from the InformationCache and cast the
747 /// instance down in the abstract attributes.
748 struct InformationCache {
749   InformationCache(const Module &M, AnalysisGetter &AG,
750                    BumpPtrAllocator &Allocator, SetVector<Function *> *CGSCC)
751       : DL(M.getDataLayout()), Allocator(Allocator),
752         Explorer(
753             /* ExploreInterBlock */ true, /* ExploreCFGForward */ true,
754             /* ExploreCFGBackward */ true,
755             /* LIGetter */
756             [&](const Function &F) { return AG.getAnalysis<LoopAnalysis>(F); },
757             /* DTGetter */
758             [&](const Function &F) {
759               return AG.getAnalysis<DominatorTreeAnalysis>(F);
760             },
761             /* PDTGetter */
762             [&](const Function &F) {
763               return AG.getAnalysis<PostDominatorTreeAnalysis>(F);
764             }),
765         AG(AG), CGSCC(CGSCC) {
766     if (CGSCC)
767       initializeModuleSlice(*CGSCC);
768   }
769 
770   ~InformationCache() {
771     // The FunctionInfo objects are allocated via a BumpPtrAllocator, we call
772     // the destructor manually.
773     for (auto &It : FuncInfoMap)
774       It.getSecond()->~FunctionInfo();
775   }
776 
777   /// Apply \p CB to all uses of \p F. If \p LookThroughConstantExprUses is
778   /// true, constant expression users are not given to \p CB but their uses are
779   /// traversed transitively.
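  ///
  /// A hedged usage sketch (Fn and the counting logic are assumptions made
  /// for illustration):
  /// \code
  ///   unsigned NumUses = 0;
  ///   InformationCache::foreachUse(Fn, [&](Use &U) { ++NumUses; });
  /// \endcode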
780   template <typename CBTy>
781   static void foreachUse(Function &F, CBTy CB,
782                          bool LookThroughConstantExprUses = true) {
783     SmallVector<Use *, 8> Worklist(make_pointer_range(F.uses()));
784 
785     for (unsigned Idx = 0; Idx < Worklist.size(); ++Idx) {
786       Use &U = *Worklist[Idx];
787 
788       // Allow use in constant bitcasts and simply look through them.
789       if (LookThroughConstantExprUses && isa<ConstantExpr>(U.getUser())) {
790         for (Use &CEU : cast<ConstantExpr>(U.getUser())->uses())
791           Worklist.push_back(&CEU);
792         continue;
793       }
794 
795       CB(U);
796     }
797   }
798 
  /// Initialize the ModuleSlice member based on \p SCC. ModuleSlice contains
  /// (a subset of) all functions that we can look at during this SCC traversal.
  /// This includes functions (transitively) called from the SCC and the
  /// (transitive) callers of SCC functions. We can also look at a function if
  /// there is a "reference edge", i.e., if the function somehow uses (!= calls)
  /// a function in the SCC or a caller of a function in the SCC.
805   void initializeModuleSlice(SetVector<Function *> &SCC) {
806     ModuleSlice.insert(SCC.begin(), SCC.end());
807 
808     SmallPtrSet<Function *, 16> Seen;
809     SmallVector<Function *, 16> Worklist(SCC.begin(), SCC.end());
810     while (!Worklist.empty()) {
811       Function *F = Worklist.pop_back_val();
812       ModuleSlice.insert(F);
813 
814       for (Instruction &I : instructions(*F))
815         if (auto *CB = dyn_cast<CallBase>(&I))
816           if (Function *Callee = CB->getCalledFunction())
817             if (Seen.insert(Callee).second)
818               Worklist.push_back(Callee);
819     }
820 
821     Seen.clear();
822     Worklist.append(SCC.begin(), SCC.end());
823     while (!Worklist.empty()) {
824       Function *F = Worklist.pop_back_val();
825       ModuleSlice.insert(F);
826 
827       // Traverse all transitive uses.
828       foreachUse(*F, [&](Use &U) {
829         if (auto *UsrI = dyn_cast<Instruction>(U.getUser()))
830           if (Seen.insert(UsrI->getFunction()).second)
831             Worklist.push_back(UsrI->getFunction());
832       });
833     }
834   }
835 
836   /// The slice of the module we are allowed to look at.
837   SmallPtrSet<Function *, 8> ModuleSlice;
838 
839   /// A vector type to hold instructions.
840   using InstructionVectorTy = SmallVector<Instruction *, 8>;
841 
842   /// A map type from opcodes to instructions with this opcode.
843   using OpcodeInstMapTy = DenseMap<unsigned, InstructionVectorTy *>;
844 
845   /// Return the map that relates "interesting" opcodes with all instructions
846   /// with that opcode in \p F.
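  ///
  /// A hedged usage sketch (F is assumed to be a function known to this cache
  /// and the opcode is an arbitrary example):
  /// \code
  ///   auto &OpcodeMap = InfoCache.getOpcodeInstMapForFunction(F);
  ///   if (auto *CallInsts = OpcodeMap.lookup(Instruction::Call))
  ///     for (Instruction *I : *CallInsts)
  ///       ... // Visit all call instructions in F.
  /// \endcode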
847   OpcodeInstMapTy &getOpcodeInstMapForFunction(const Function &F) {
848     return getFunctionInfo(F).OpcodeInstMap;
849   }
850 
851   /// Return the instructions in \p F that may read or write memory.
852   InstructionVectorTy &getReadOrWriteInstsForFunction(const Function &F) {
853     return getFunctionInfo(F).RWInsts;
854   }
855 
856   /// Return MustBeExecutedContextExplorer
857   MustBeExecutedContextExplorer &getMustBeExecutedContextExplorer() {
858     return Explorer;
859   }
860 
861   /// Return TargetLibraryInfo for function \p F.
862   TargetLibraryInfo *getTargetLibraryInfoForFunction(const Function &F) {
863     return AG.getAnalysis<TargetLibraryAnalysis>(F);
864   }
865 
866   /// Return AliasAnalysis Result for function \p F.
867   AAResults *getAAResultsForFunction(const Function &F);
868 
  /// Return true if \p Arg is involved in a must-tail call, i.e., it is an
  /// argument of the caller or the callee of such a call.
871   bool isInvolvedInMustTailCall(const Argument &Arg) {
872     FunctionInfo &FI = getFunctionInfo(*Arg.getParent());
873     return FI.CalledViaMustTail || FI.ContainsMustTailCall;
874   }
875 
876   /// Return the analysis result from a pass \p AP for function \p F.
877   template <typename AP>
878   typename AP::Result *getAnalysisResultForFunction(const Function &F) {
879     return AG.getAnalysis<AP>(F);
880   }
881 
882   /// Return SCC size on call graph for function \p F or 0 if unknown.
883   unsigned getSccSize(const Function &F) {
884     if (CGSCC && CGSCC->count(const_cast<Function *>(&F)))
885       return CGSCC->size();
886     return 0;
887   }
888 
  /// Return the DataLayout used in the module.
890   const DataLayout &getDL() { return DL; }
891 
  /// Return the map containing all the knowledge we have from `llvm.assume`s.
893   const RetainedKnowledgeMap &getKnowledgeMap() const { return KnowledgeMap; }
894 
  /// Return whether \p To is potentially reachable from \p From.
  /// If the same query was answered before, the cached result is returned.
897   bool getPotentiallyReachable(const Instruction &From, const Instruction &To) {
898     auto KeyPair = std::make_pair(&From, &To);
899     auto Iter = PotentiallyReachableMap.find(KeyPair);
900     if (Iter != PotentiallyReachableMap.end())
901       return Iter->second;
902     const Function &F = *From.getFunction();
903     bool Result = isPotentiallyReachable(
904         &From, &To, nullptr, AG.getAnalysis<DominatorTreeAnalysis>(F),
905         AG.getAnalysis<LoopAnalysis>(F));
906     PotentiallyReachableMap.insert(std::make_pair(KeyPair, Result));
907     return Result;
908   }
909 
  /// Check whether \p F is part of the module slice.
911   bool isInModuleSlice(const Function &F) {
912     return ModuleSlice.count(const_cast<Function *>(&F));
913   }
914 
915 private:
916   struct FunctionInfo {
917     ~FunctionInfo();
918 
919     /// A nested map that remembers all instructions in a function with a
920     /// certain instruction opcode (Instruction::getOpcode()).
921     OpcodeInstMapTy OpcodeInstMap;
922 
    /// The instructions of this function that may read or write memory.
925     InstructionVectorTy RWInsts;
926 
927     /// Function is called by a `musttail` call.
928     bool CalledViaMustTail;
929 
930     /// Function contains a `musttail` call.
931     bool ContainsMustTailCall;
932   };
933 
  /// A map type from functions to information about them.
935   DenseMap<const Function *, FunctionInfo *> FuncInfoMap;
936 
937   /// Return information about the function \p F, potentially by creating it.
938   FunctionInfo &getFunctionInfo(const Function &F) {
939     FunctionInfo *&FI = FuncInfoMap[&F];
940     if (!FI) {
941       FI = new (Allocator) FunctionInfo();
942       initializeInformationCache(F, *FI);
943     }
944     return *FI;
945   }
946 
947   /// Initialize the function information cache \p FI for the function \p F.
948   ///
  /// This method needs to be called for all functions that might be looked at
950   /// through the information cache interface *prior* to looking at them.
951   void initializeInformationCache(const Function &F, FunctionInfo &FI);
952 
953   /// The datalayout used in the module.
954   const DataLayout &DL;
955 
956   /// The allocator used to allocate memory, e.g. for `FunctionInfo`s.
957   BumpPtrAllocator &Allocator;
958 
959   /// MustBeExecutedContextExplorer
960   MustBeExecutedContextExplorer Explorer;
961 
962   /// A map with knowledge retained in `llvm.assume` instructions.
963   RetainedKnowledgeMap KnowledgeMap;
964 
965   /// Getters for analysis.
966   AnalysisGetter &AG;
967 
968   /// The underlying CGSCC, or null if not available.
969   SetVector<Function *> *CGSCC;
970 
971   /// Set of inlineable functions
972   SmallPtrSet<const Function *, 8> InlineableFunctions;
973 
974   /// A map for caching results of queries for isPotentiallyReachable
975   DenseMap<std::pair<const Instruction *, const Instruction *>, bool>
976       PotentiallyReachableMap;
977 
978   /// Give the Attributor access to the members so
979   /// Attributor::identifyDefaultAbstractAttributes(...) can initialize them.
980   friend struct Attributor;
981 };
982 
983 /// The fixpoint analysis framework that orchestrates the attribute deduction.
984 ///
985 /// The Attributor provides a general abstract analysis framework (guided
986 /// fixpoint iteration) as well as helper functions for the deduction of
/// (LLVM-IR) attributes. However, other code properties can also be deduced,
/// propagated, and ultimately manifested through the Attributor framework. This
/// is particularly useful if these properties interact with attributes and a
/// co-scheduled deduction allows improving the solution. Even if not, that is,
/// if attributes/properties are completely isolated, they should use the
992 /// Attributor framework to reduce the number of fixpoint iteration frameworks
993 /// in the code base. Note that the Attributor design makes sure that isolated
994 /// attributes are not impacted, in any way, by others derived at the same time
995 /// if there is no cross-reasoning performed.
996 ///
997 /// The public facing interface of the Attributor is kept simple and basically
/// allows abstract attributes to do one thing: query other abstract attributes
999 /// in-flight. There are two reasons to do this:
1000 ///    a) The optimistic state of one abstract attribute can justify an
///       optimistic state of another, allowing the framework to end up with an
1002 ///       optimistic (=best possible) fixpoint instead of one based solely on
1003 ///       information in the IR.
1004 ///    b) This avoids reimplementing various kinds of lookups, e.g., to check
///       for existing IR attributes, in favor of a single lookup interface
1006 ///       provided by an abstract attribute subclass.
1007 ///
1008 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
1009 ///       described in the file comment.
1010 struct Attributor {
1011   /// Constructor
1012   ///
1013   /// \param Functions The set of functions we are deriving attributes for.
1014   /// \param InfoCache Cache to hold various information accessible for
1015   ///                  the abstract attributes.
1016   /// \param CGUpdater Helper to update an underlying call graph.
1017   /// \param Allowed If not null, a set limiting the attribute opportunities.
1018   Attributor(SetVector<Function *> &Functions, InformationCache &InfoCache,
1019              CallGraphUpdater &CGUpdater,
1020              DenseSet<const char *> *Allowed = nullptr)
1021       : Allocator(InfoCache.Allocator), Functions(Functions),
1022         InfoCache(InfoCache), CGUpdater(CGUpdater), Allowed(Allowed) {}
1023 
1024   ~Attributor();
1025 
1026   /// Run the analyses until a fixpoint is reached or enforced (timeout).
1027   ///
  /// The attributes registered with this Attributor can be used afterwards as
  /// long as the Attributor is not destroyed (it owns the attributes now).
1030   ///
1031   /// \Returns CHANGED if the IR was changed, otherwise UNCHANGED.
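  ///
  /// A hedged sketch of a typical driver (M, FAM, and CGUpdater are assumed
  /// to be provided by the calling pass; error handling is omitted):
  /// \code
  ///   SetVector<Function *> Functions;
  ///   for (Function &F : M)
  ///     Functions.insert(&F);
  ///   AnalysisGetter AG(FAM);
  ///   BumpPtrAllocator Allocator;
  ///   InformationCache InfoCache(M, AG, Allocator, /* CGSCC */ nullptr);
  ///   Attributor A(Functions, InfoCache, CGUpdater);
  ///   for (Function *F : Functions)
  ///     if (!F->isDeclaration())
  ///       A.identifyDefaultAbstractAttributes(*F);
  ///   ChangeStatus Changed = A.run();
  /// \endcode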
1032   ChangeStatus run();
1033 
1034   /// Lookup an abstract attribute of type \p AAType at position \p IRP. While
  /// no abstract attribute is found, equivalent positions are checked, see
1036   /// SubsumingPositionIterator. Thus, the returned abstract attribute
1037   /// might be anchored at a different position, e.g., the callee if \p IRP is a
1038   /// call base.
1039   ///
1040   /// This method is the only (supported) way an abstract attribute can retrieve
1041   /// information from another abstract attribute. As an example, take an
  /// abstract attribute that determines the memory access behavior for an
1043   /// argument (readnone, readonly, ...). It should use `getAAFor` to get the
1044   /// most optimistic information for other abstract attributes in-flight, e.g.
1045   /// the one reasoning about the "captured" state for the argument or the one
1046   /// reasoning on the memory access behavior of the function as a whole.
1047   ///
  /// If the flag \p TrackDependence is set to false, the dependence from
  /// \p QueryingAA to the returned abstract attribute is not automatically
  /// recorded. This should only be used if the caller will record the
  /// dependence explicitly if necessary, that is, if the returned abstract
  /// attribute is used for reasoning. To record the dependences explicitly use
1053   /// the `Attributor::recordDependence` method.
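  ///
  /// A hedged example (the Attributor instance A, the querying attribute
  /// *this, and the value V are assumed to be available, e.g., inside an
  /// `updateImpl`; AANoCapture is used as an example of an in-flight abstract
  /// attribute type):
  /// \code
  ///   const auto &NoCaptureAA =
  ///       A.getAAFor<AANoCapture>(*this, IRPosition::value(V));
  ///   if (NoCaptureAA.isAssumedNoCapture())
  ///     ... // V is assumed not to be captured.
  /// \endcode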
1054   template <typename AAType>
1055   const AAType &getAAFor(const AbstractAttribute &QueryingAA,
1056                          const IRPosition &IRP, bool TrackDependence = true,
1057                          DepClassTy DepClass = DepClassTy::REQUIRED) {
1058     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, TrackDependence, DepClass,
1059                                     /* ForceUpdate */ false);
1060   }
1061 
  /// Similar to getAAFor but the returned abstract attribute will be updated
  /// (via
1063   /// `AbstractAttribute::update`) even if it is found in the cache. This is
1064   /// especially useful for AAIsDead as changes in liveness can make updates
1065   /// possible/useful that were not happening before as the abstract attribute
1066   /// was assumed dead.
1067   template <typename AAType>
1068   const AAType &getAndUpdateAAFor(const AbstractAttribute &QueryingAA,
1069                                   const IRPosition &IRP,
1070                                   bool TrackDependence = true,
1071                                   DepClassTy DepClass = DepClassTy::REQUIRED) {
1072     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, TrackDependence, DepClass,
1073                                     /* ForceUpdate */ true);
1074   }
1075 
  /// The version of getAAFor that allows omitting a querying abstract
1077   /// attribute. Using this after Attributor started running is restricted to
1078   /// only the Attributor itself. Initial seeding of AAs can be done via this
1079   /// function.
1080   /// NOTE: ForceUpdate is ignored in any stage other than the update stage.
1081   template <typename AAType>
1082   const AAType &getOrCreateAAFor(const IRPosition &IRP,
1083                                  const AbstractAttribute *QueryingAA = nullptr,
1084                                  bool TrackDependence = false,
1085                                  DepClassTy DepClass = DepClassTy::OPTIONAL,
1086                                  bool ForceUpdate = false) {
1087     if (AAType *AAPtr = lookupAAFor<AAType>(IRP, QueryingAA, TrackDependence)) {
1088       if (ForceUpdate && Phase == AttributorPhase::UPDATE)
1089         updateAA(*AAPtr);
1090       return *AAPtr;
1091     }
1092 
1093     // No matching attribute found, create one.
1094     // Use the static create method.
1095     auto &AA = AAType::createForPosition(IRP, *this);
1096 
    // If we are currently seeding attributes, enforce seeding rules.
1098     if (Phase == AttributorPhase::SEEDING && !shouldSeedAttribute(AA)) {
1099       AA.getState().indicatePessimisticFixpoint();
1100       return AA;
1101     }
1102 
1103     registerAA(AA);
1104 
1105     // For now we ignore naked and optnone functions.
1106     bool Invalidate = Allowed && !Allowed->count(&AAType::ID);
1107     const Function *FnScope = IRP.getAnchorScope();
1108     if (FnScope)
1109       Invalidate |= FnScope->hasFnAttribute(Attribute::Naked) ||
1110                     FnScope->hasFnAttribute(Attribute::OptimizeNone);
1111 
1112     // Avoid too many nested initializations to prevent a stack overflow.
1113     Invalidate |= InitializationChainLength > MaxInitializationChainLength;
1114 
1115     // Bootstrap the new attribute with an initial update to propagate
    // information, e.g., function -> call site. If it is not in the given
    // Allowed set, we will not perform updates at all.
1118     if (Invalidate) {
1119       AA.getState().indicatePessimisticFixpoint();
1120       return AA;
1121     }
1122 
1123     {
1124       TimeTraceScope TimeScope(AA.getName() + "::initialize");
1125       ++InitializationChainLength;
1126       AA.initialize(*this);
1127       --InitializationChainLength;
1128     }
1129 
    // Initialization and update are allowed for code outside of the current
    // function set, but only if it is part of the module slice we are allowed
    // to look at. The only exception is AAIsDeadFunction whose initialization
    // is prevented directly, since we don't want to compute it twice.
1134     if (FnScope && !Functions.count(const_cast<Function *>(FnScope))) {
1135       if (!getInfoCache().isInModuleSlice(*FnScope)) {
1136         AA.getState().indicatePessimisticFixpoint();
1137         return AA;
1138       }
1139     }
1140 
1141     // If this is queried in the manifest stage, we force the AA to indicate
1142     // pessimistic fixpoint immediately.
1143     if (Phase == AttributorPhase::MANIFEST) {
1144       AA.getState().indicatePessimisticFixpoint();
1145       return AA;
1146     }
1147 
1148     // Allow seeded attributes to declare dependencies.
1149     // Remember the seeding state.
1150     AttributorPhase OldPhase = Phase;
1151     Phase = AttributorPhase::UPDATE;
1152 
1153     updateAA(AA);
1154 
1155     Phase = OldPhase;
1156 
1157     if (TrackDependence && AA.getState().isValidState())
1158       recordDependence(AA, const_cast<AbstractAttribute &>(*QueryingAA),
1159                        DepClass);
1160     return AA;
1161   }
1162 
  /// Return the attribute of \p AAType for \p IRP if existing. This also
  /// allows non-AA users to perform lookups.
1165   template <typename AAType>
1166   AAType *lookupAAFor(const IRPosition &IRP,
1167                       const AbstractAttribute *QueryingAA = nullptr,
1168                       bool TrackDependence = false,
1169                       DepClassTy DepClass = DepClassTy::OPTIONAL) {
1170     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1171                   "Cannot query an attribute with a type not derived from "
1172                   "'AbstractAttribute'!");
1173     assert((QueryingAA || !TrackDependence) &&
1174            "Cannot track dependences without a QueryingAA!");
1175 
1176     // Lookup the abstract attribute of type AAType. If found, return it after
1177     // registering a dependence of QueryingAA on the one returned attribute.
1178     AbstractAttribute *AAPtr = AAMap.lookup({&AAType::ID, IRP});
1179     if (!AAPtr)
1180       return nullptr;
1181 
1182     AAType *AA = static_cast<AAType *>(AAPtr);
1183 
1184     // Do not register a dependence on an attribute with an invalid state.
1185     if (TrackDependence && AA->getState().isValidState())
1186       recordDependence(*AA, const_cast<AbstractAttribute &>(*QueryingAA),
1187                        DepClass);
1188     return AA;
1189   }
1190 
1191   /// Explicitly record a dependence from \p FromAA to \p ToAA, that is if
1192   /// \p FromAA changes \p ToAA should be updated as well.
1193   ///
1194   /// This method should be used in conjunction with the `getAAFor` method and
1195   /// with the TrackDependence flag passed to the method set to false. This can
1196   /// be beneficial to avoid false dependences but it requires the users of
1197   /// `getAAFor` to explicitly record true dependences through this method.
  /// The \p DepClass flag indicates if the dependence is strictly necessary.
  /// That means for required dependences, if \p FromAA changes to an invalid
  /// state, \p ToAA can be moved to a pessimistic fixpoint because it required
  /// information from \p FromAA but none is available anymore.
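  ///
  /// A hedged sketch of the intended pattern (AAType, IRP, and the condition
  /// are placeholders for illustration):
  /// \code
  ///   const auto &OtherAA =
  ///       A.getAAFor<AAType>(*this, IRP, /* TrackDependence */ false);
  ///   if (/* information from OtherAA was actually used */ true)
  ///     A.recordDependence(OtherAA, *this, DepClassTy::REQUIRED);
  /// \endcode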
1202   void recordDependence(const AbstractAttribute &FromAA,
1203                         const AbstractAttribute &ToAA, DepClassTy DepClass);
1204 
1205   /// Introduce a new abstract attribute into the fixpoint analysis.
1206   ///
  /// Note that ownership of the attribute is given to the Attributor. It will
  /// invoke delete for the attribute on destruction of the Attributor.
1209   ///
1210   /// Attributes are identified by their IR position (AAType::getIRPosition())
1211   /// and the address of their static member (see AAType::ID).
1212   template <typename AAType> AAType &registerAA(AAType &AA) {
1213     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1214                   "Cannot register an attribute with a type not derived from "
1215                   "'AbstractAttribute'!");
1216     // Put the attribute in the lookup map structure and the container we use to
1217     // keep track of all attributes.
1218     const IRPosition &IRP = AA.getIRPosition();
1219     AbstractAttribute *&AAPtr = AAMap[{&AAType::ID, IRP}];
1220 
1221     assert(!AAPtr && "Attribute already in map!");
1222     AAPtr = &AA;
1223 
1224     // Register AA with the synthetic root only before the manifest stage.
1225     if (Phase == AttributorPhase::SEEDING || Phase == AttributorPhase::UPDATE)
1226       DG.SyntheticRoot.Deps.push_back(
1227           AADepGraphNode::DepTy(&AA, unsigned(DepClassTy::REQUIRED)));
1228 
1229     return AA;
1230   }
1231 
1232   /// Return the internal information cache.
1233   InformationCache &getInfoCache() { return InfoCache; }
1234 
1235   /// Return true if this is a module pass, false otherwise.
1236   bool isModulePass() const {
1237     return !Functions.empty() &&
1238            Functions.size() == Functions.front()->getParent()->size();
1239   }
1240 
1241   /// Return true if we derive attributes for \p Fn
1242   bool isRunOn(Function &Fn) const {
1243     return Functions.empty() || Functions.count(&Fn);
1244   }
1245 
1246   /// Determine opportunities to derive 'default' attributes in \p F and create
1247   /// abstract attribute objects for them.
1248   ///
1249   /// \param F The function that is checked for attribute opportunities.
1250   ///
1251   /// Note that abstract attribute instances are generally created even if the
1252   /// IR already contains the information they would deduce. The most important
1253   /// reason for this is the single interface, the one of the abstract attribute
1254   /// instance, which can be queried without the need to look at the IR in
1255   /// various places.
1256   void identifyDefaultAbstractAttributes(Function &F);
1257 
  /// Determine whether the function \p F is IPO amendable.
  ///
  /// If a function is exactly defined or it has the alwaysinline attribute
  /// and is viable to be inlined, we say it is IPO amendable.
1262   bool isFunctionIPOAmendable(const Function &F) {
1263     return F.hasExactDefinition() || InfoCache.InlineableFunctions.count(&F);
1264   }
1265 
1266   /// Mark the internal function \p F as live.
1267   ///
1268   /// This will trigger the identification and initialization of attributes for
1269   /// \p F.
1270   void markLiveInternalFunction(const Function &F) {
1271     assert(F.hasLocalLinkage() &&
1272            "Only local linkage is assumed dead initially.");
1273 
1274     identifyDefaultAbstractAttributes(const_cast<Function &>(F));
1275   }
1276 
1277   /// Helper function to remove callsite.
1278   void removeCallSite(CallInst *CI) {
1279     if (!CI)
1280       return;
1281 
1282     CGUpdater.removeCallSite(*CI);
1283   }
1284 
  /// Record that \p U is to be replaced with \p NV after information was
  /// manifested. This also triggers deletion of trivially dead instructions.
1287   bool changeUseAfterManifest(Use &U, Value &NV) {
1288     Value *&V = ToBeChangedUses[&U];
1289     if (V && (V->stripPointerCasts() == NV.stripPointerCasts() ||
1290               isa_and_nonnull<UndefValue>(V)))
1291       return false;
1292     assert((!V || V == &NV || isa<UndefValue>(NV)) &&
1293            "Use was registered twice for replacement with different values!");
1294     V = &NV;
1295     return true;
1296   }
1297 
  /// Helper function to replace all uses of \p V with \p NV. Return true if
  /// there is any change. The flag \p ChangeDroppable indicates if droppable
  /// uses should be changed too.
1301   bool changeValueAfterManifest(Value &V, Value &NV,
1302                                 bool ChangeDroppable = true) {
1303     bool Changed = false;
1304     for (auto &U : V.uses())
1305       if (ChangeDroppable || !U.getUser()->isDroppable())
1306         Changed |= changeUseAfterManifest(U, NV);
1307 
1308     return Changed;
1309   }
1310 
1311   /// Record that \p I is to be replaced with `unreachable` after information
1312   /// was manifested.
1313   void changeToUnreachableAfterManifest(Instruction *I) {
1314     ToBeChangedToUnreachableInsts.insert(I);
1315   }
1316 
1317   /// Record that \p II has at least one dead successor block. This information
1318   /// is used, e.g., to replace \p II with a call, after information was
1319   /// manifested.
1320   void registerInvokeWithDeadSuccessor(InvokeInst &II) {
1321     InvokeWithDeadSuccessor.push_back(&II);
1322   }
1323 
  /// Record that \p I is deleted after information was manifested. This also
  /// triggers deletion of trivially dead instructions.
1326   void deleteAfterManifest(Instruction &I) { ToBeDeletedInsts.insert(&I); }
1327 
  /// Record that \p BB is deleted after information was manifested. This also
  /// triggers deletion of trivially dead instructions.
1330   void deleteAfterManifest(BasicBlock &BB) { ToBeDeletedBlocks.insert(&BB); }
1331 
1332   /// Record that \p F is deleted after information was manifested.
1333   void deleteAfterManifest(Function &F) { ToBeDeletedFunctions.insert(&F); }
1334 
  /// If \p V is assumed to be a constant, return it; if it is not clear yet,
  /// return None; otherwise return `nullptr`.
1337   Optional<Constant *> getAssumedConstant(const Value &V,
1338                                           const AbstractAttribute &AA,
1339                                           bool &UsedAssumedInformation);
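
  // Illustrative sketch (hypothetical caller, not prescribed by this
  // interface): a simplifying abstract attribute could act on the tri-state
  // result as follows:
  //
  //   bool UsedAssumedInformation = false;
  //   Optional<Constant *> C =
  //       A.getAssumedConstant(V, *this, UsedAssumedInformation);
  //   if (!C.hasValue())
  //     ; // Not clear yet, optimistically wait for more information.
  //   else if (!C.getValue())
  //     ; // Not assumed to be a constant, stop asking.
  //   else
  //     ; // Use the constant C.getValue().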
1340 
1341   /// Return true if \p AA (or its context instruction) is assumed dead.
1342   ///
1343   /// If \p LivenessAA is not provided it is queried.
1344   bool isAssumedDead(const AbstractAttribute &AA, const AAIsDead *LivenessAA,
1345                      bool CheckBBLivenessOnly = false,
1346                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1347 
1348   /// Return true if \p I is assumed dead.
1349   ///
1350   /// If \p LivenessAA is not provided it is queried.
1351   bool isAssumedDead(const Instruction &I, const AbstractAttribute *QueryingAA,
1352                      const AAIsDead *LivenessAA,
1353                      bool CheckBBLivenessOnly = false,
1354                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1355 
1356   /// Return true if \p U is assumed dead.
1357   ///
1358   /// If \p FnLivenessAA is not provided it is queried.
1359   bool isAssumedDead(const Use &U, const AbstractAttribute *QueryingAA,
1360                      const AAIsDead *FnLivenessAA,
1361                      bool CheckBBLivenessOnly = false,
1362                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1363 
1364   /// Return true if \p IRP is assumed dead.
1365   ///
1366   /// If \p FnLivenessAA is not provided it is queried.
1367   bool isAssumedDead(const IRPosition &IRP, const AbstractAttribute *QueryingAA,
1368                      const AAIsDead *FnLivenessAA,
1369                      bool CheckBBLivenessOnly = false,
1370                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1371 
1372   /// Check \p Pred on all (transitive) uses of \p V.
1373   ///
1374   /// This method will evaluate \p Pred on all (transitive) uses of the
1375   /// associated value and return true if \p Pred holds every time.
1376   bool checkForAllUses(function_ref<bool(const Use &, bool &)> Pred,
1377                        const AbstractAttribute &QueryingAA, const Value &V,
1378                        DepClassTy LivenessDepClass = DepClassTy::OPTIONAL);
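
  // Illustrative sketch (hypothetical predicate, not prescribed by this
  // interface): reject values whose (transitive) uses contain a store, while
  // asking to follow the uses of all other users:
  //
  //   auto UsePred = [&](const Use &U, bool &Follow) {
  //     if (isa<StoreInst>(U.getUser()))
  //       return false; // A store was found, the property does not hold.
  //     Follow = true;  // Also visit the transitive uses of this user.
  //     return true;
  //   };
  //   bool NoStores = A.checkForAllUses(UsePred, *this, V);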
1379 
1380   /// Helper struct used in the communication between an abstract attribute (AA)
1381   /// that wants to change the signature of a function and the Attributor which
1382   /// applies the changes. The struct is partially initialized with the
1383   /// information from the AA (see the constructor). All other members are
1384   /// provided by the Attributor prior to invoking any callbacks.
1385   struct ArgumentReplacementInfo {
1386     /// Callee repair callback type
1387     ///
    /// The function repair callback is invoked once to rewire the replacement
    /// arguments in the body of the new function. The argument replacement
    /// info is passed, as built from the registerFunctionSignatureRewrite
    /// call, as well as the replacement function and an iterator to the first
    /// replacement argument.
1393     using CalleeRepairCBTy = std::function<void(
1394         const ArgumentReplacementInfo &, Function &, Function::arg_iterator)>;
1395 
1396     /// Abstract call site (ACS) repair callback type
1397     ///
1398     /// The abstract call site repair callback is invoked once on every abstract
1399     /// call site of the replaced function (\see ReplacedFn). The callback needs
1400     /// to provide the operands for the call to the new replacement function.
1401     /// The number and type of the operands appended to the provided vector
1402     /// (second argument) is defined by the number and types determined through
1403     /// the replacement type vector (\see ReplacementTypes). The first argument
1404     /// is the ArgumentReplacementInfo object registered with the Attributor
1405     /// through the registerFunctionSignatureRewrite call.
1406     using ACSRepairCBTy =
1407         std::function<void(const ArgumentReplacementInfo &, AbstractCallSite,
1408                            SmallVectorImpl<Value *> &)>;
1409 
1410     /// Simple getters, see the corresponding members for details.
1411     ///{
1412 
1413     Attributor &getAttributor() const { return A; }
1414     const Function &getReplacedFn() const { return ReplacedFn; }
1415     const Argument &getReplacedArg() const { return ReplacedArg; }
1416     unsigned getNumReplacementArgs() const { return ReplacementTypes.size(); }
1417     const SmallVectorImpl<Type *> &getReplacementTypes() const {
1418       return ReplacementTypes;
1419     }
1420 
1421     ///}
1422 
1423   private:
1424     /// Constructor that takes the argument to be replaced, the types of
1425     /// the replacement arguments, as well as callbacks to repair the call sites
1426     /// and new function after the replacement happened.
1427     ArgumentReplacementInfo(Attributor &A, Argument &Arg,
1428                             ArrayRef<Type *> ReplacementTypes,
1429                             CalleeRepairCBTy &&CalleeRepairCB,
1430                             ACSRepairCBTy &&ACSRepairCB)
1431         : A(A), ReplacedFn(*Arg.getParent()), ReplacedArg(Arg),
1432           ReplacementTypes(ReplacementTypes.begin(), ReplacementTypes.end()),
1433           CalleeRepairCB(std::move(CalleeRepairCB)),
1434           ACSRepairCB(std::move(ACSRepairCB)) {}
1435 
1436     /// Reference to the attributor to allow access from the callbacks.
1437     Attributor &A;
1438 
1439     /// The "old" function replaced by ReplacementFn.
1440     const Function &ReplacedFn;
1441 
1442     /// The "old" argument replaced by new ones defined via ReplacementTypes.
1443     const Argument &ReplacedArg;
1444 
1445     /// The types of the arguments replacing ReplacedArg.
1446     const SmallVector<Type *, 8> ReplacementTypes;
1447 
1448     /// Callee repair callback, see CalleeRepairCBTy.
1449     const CalleeRepairCBTy CalleeRepairCB;
1450 
1451     /// Abstract call site (ACS) repair callback, see ACSRepairCBTy.
1452     const ACSRepairCBTy ACSRepairCB;
1453 
1454     /// Allow access to the private members from the Attributor.
1455     friend struct Attributor;
1456   };
1457 
1458   /// Check if we can rewrite a function signature.
1459   ///
1460   /// The argument \p Arg is replaced with new ones defined by the number,
1461   /// order, and types in \p ReplacementTypes.
1462   ///
1463   /// \returns True, if the replacement can be registered, via
1464   /// registerFunctionSignatureRewrite, false otherwise.
1465   bool isValidFunctionSignatureRewrite(Argument &Arg,
1466                                        ArrayRef<Type *> ReplacementTypes);
1467 
1468   /// Register a rewrite for a function signature.
1469   ///
1470   /// The argument \p Arg is replaced with new ones defined by the number,
1471   /// order, and types in \p ReplacementTypes. The rewiring at the call sites is
1472   /// done through \p ACSRepairCB and at the callee site through
1473   /// \p CalleeRepairCB.
1474   ///
1475   /// \returns True, if the replacement was registered, false otherwise.
1476   bool registerFunctionSignatureRewrite(
1477       Argument &Arg, ArrayRef<Type *> ReplacementTypes,
1478       ArgumentReplacementInfo::CalleeRepairCBTy &&CalleeRepairCB,
1479       ArgumentReplacementInfo::ACSRepairCBTy &&ACSRepairCB);
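
  // Illustrative sketch (hypothetical rewrite; the type `Int32Ty`, e.g.,
  // obtained via Type::getInt32Ty, and the callback bodies are assumptions):
  // splitting the argument \p Arg into two i32 arguments could be requested
  // as follows:
  //
  //   SmallVector<Type *, 2> ReplacementTypes(2, Int32Ty);
  //   auto CalleeRepairCB = [](const ArgumentReplacementInfo &ARI,
  //                            Function &NewFn,
  //                            Function::arg_iterator ArgIt) {
  //     // Rewire the body of NewFn to use *ArgIt and *std::next(ArgIt).
  //   };
  //   auto ACSRepairCB = [](const ArgumentReplacementInfo &ARI,
  //                         AbstractCallSite ACS,
  //                         SmallVectorImpl<Value *> &NewArgOperands) {
  //     // Append one value per entry in ARI.getReplacementTypes().
  //   };
  //   if (A.isValidFunctionSignatureRewrite(Arg, ReplacementTypes))
  //     A.registerFunctionSignatureRewrite(Arg, ReplacementTypes,
  //                                        std::move(CalleeRepairCB),
  //                                        std::move(ACSRepairCB));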
1480 
1481   /// Check \p Pred on all function call sites.
1482   ///
  /// This method will evaluate \p Pred on call sites and return
  /// true if \p Pred holds in every call site. However, this is only possible
  /// if all call sites are known, hence the function has internal linkage.
1486   /// If true is returned, \p AllCallSitesKnown is set if all possible call
1487   /// sites of the function have been visited.
1488   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1489                             const AbstractAttribute &QueryingAA,
1490                             bool RequireAllCallSites, bool &AllCallSitesKnown);
1491 
1492   /// Check \p Pred on all values potentially returned by \p F.
1493   ///
1494   /// This method will evaluate \p Pred on all values potentially returned by
1495   /// the function associated with \p QueryingAA. The returned values are
1496   /// matched with their respective return instructions. Returns true if \p Pred
1497   /// holds on all of them.
1498   bool checkForAllReturnedValuesAndReturnInsts(
1499       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred,
1500       const AbstractAttribute &QueryingAA);
1501 
1502   /// Check \p Pred on all values potentially returned by the function
1503   /// associated with \p QueryingAA.
1504   ///
1505   /// This is the context insensitive version of the method above.
1506   bool checkForAllReturnedValues(function_ref<bool(Value &)> Pred,
1507                                  const AbstractAttribute &QueryingAA);
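
  // Illustrative sketch (hypothetical predicate, not prescribed by this
  // interface): check that every potentially returned value is a constant:
  //
  //   auto RetValPred = [&](Value &RV) { return isa<Constant>(RV); };
  //   bool OnlyConstantsReturned =
  //       A.checkForAllReturnedValues(RetValPred, *this);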
1508 
1509   /// Check \p Pred on all instructions with an opcode present in \p Opcodes.
1510   ///
  /// This method will evaluate \p Pred on all instructions with an opcode
  /// present in \p Opcodes and return true if \p Pred holds on all of them.
1513   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
1514                                const AbstractAttribute &QueryingAA,
1515                                const ArrayRef<unsigned> &Opcodes,
1516                                bool CheckBBLivenessOnly = false);
1517 
  /// Check \p Pred on all call-like instructions (=CallBase derived).
  ///
  /// See checkForAllInstructions(...) for more information.
1521   bool checkForAllCallLikeInstructions(function_ref<bool(Instruction &)> Pred,
1522                                        const AbstractAttribute &QueryingAA) {
1523     return checkForAllInstructions(Pred, QueryingAA,
1524                                    {(unsigned)Instruction::Invoke,
1525                                     (unsigned)Instruction::CallBr,
1526                                     (unsigned)Instruction::Call});
1527   }
1528 
1529   /// Check \p Pred on all Read/Write instructions.
1530   ///
1531   /// This method will evaluate \p Pred on all instructions that read or write
1532   /// to memory present in the information cache and return true if \p Pred
1533   /// holds on all of them.
1534   bool checkForAllReadWriteInstructions(function_ref<bool(Instruction &)> Pred,
1535                                         AbstractAttribute &QueryingAA);
1536 
  /// Create a shallow wrapper for \p F such that \p F has internal linkage
  /// afterwards. It also sets the original \p F 's name to anonymous.
1539   ///
1540   /// A wrapper is a function with the same type (and attributes) as \p F
1541   /// that will only call \p F and return the result, if any.
1542   ///
  /// Assuming the declaration of \p F looks like:
1544   ///   rty F(aty0 arg0, ..., atyN argN);
1545   ///
1546   /// The wrapper will then look as follows:
1547   ///   rty wrapper(aty0 arg0, ..., atyN argN) {
1548   ///     return F(arg0, ..., argN);
1549   ///   }
1550   ///
1551   static void createShallowWrapper(Function &F);
1552 
1553   /// Return the data layout associated with the anchor scope.
1554   const DataLayout &getDataLayout() const { return InfoCache.DL; }
1555 
1556   /// The allocator used to allocate memory, e.g. for `AbstractAttribute`s.
1557   BumpPtrAllocator &Allocator;
1558 
1559 private:
1560   /// This method will do fixpoint iteration until fixpoint or the
1561   /// maximum iteration count is reached.
1562   ///
  /// If the maximum iteration count is reached, this method will
1564   /// indicate pessimistic fixpoint on attributes that transitively depend
1565   /// on attributes that were scheduled for an update.
1566   void runTillFixpoint();
1567 
1568   /// Gets called after scheduling, manifests attributes to the LLVM IR.
1569   ChangeStatus manifestAttributes();
1570 
1571   /// Gets called after attributes have been manifested, cleans up the IR.
1572   /// Deletes dead functions, blocks and instructions.
  /// Rewrites function signatures and updates the call graph.
1574   ChangeStatus cleanupIR();
1575 
1576   /// Identify internal functions that are effectively dead, thus not reachable
1577   /// from a live entry point. The functions are added to ToBeDeletedFunctions.
1578   void identifyDeadInternalFunctions();
1579 
1580   /// Run `::update` on \p AA and track the dependences queried while doing so.
1581   /// Also adjust the state if we know further updates are not necessary.
1582   ChangeStatus updateAA(AbstractAttribute &AA);
1583 
1584   /// Remember the dependences on the top of the dependence stack such that they
1585   /// may trigger further updates. (\see DependenceStack)
1586   void rememberDependences();
1587 
1588   /// Check \p Pred on all call sites of \p Fn.
1589   ///
  /// This method will evaluate \p Pred on call sites and return
  /// true if \p Pred holds in every call site. However, this is only possible
  /// if all call sites are known, hence the function has internal linkage.
1593   /// If true is returned, \p AllCallSitesKnown is set if all possible call
1594   /// sites of the function have been visited.
1595   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1596                             const Function &Fn, bool RequireAllCallSites,
1597                             const AbstractAttribute *QueryingAA,
1598                             bool &AllCallSitesKnown);
1599 
1600   /// Apply all requested function signature rewrites
1601   /// (\see registerFunctionSignatureRewrite) and return Changed if the module
1602   /// was altered.
1603   ChangeStatus
1604   rewriteFunctionSignatures(SmallPtrSetImpl<Function *> &ModifiedFns);
1605 
1606   /// Check if the Attribute \p AA should be seeded.
1607   /// See getOrCreateAAFor.
1608   bool shouldSeedAttribute(AbstractAttribute &AA);
1609 
  /// A map to look up abstract attributes based on the IR position they are
  /// anchored at and the address of their static member (see AAType::ID).
1613   ///{
1614   using AAMapKeyTy = std::pair<const char *, IRPosition>;
1615   DenseMap<AAMapKeyTy, AbstractAttribute *> AAMap;
1616   ///}
1617 
1618   /// Map to remember all requested signature changes (= argument replacements).
1619   DenseMap<Function *, SmallVector<std::unique_ptr<ArgumentReplacementInfo>, 8>>
1620       ArgumentReplacementMap;
1621 
1622   /// The set of functions we are deriving attributes for.
1623   SetVector<Function *> &Functions;
1624 
1625   /// The information cache that holds pre-processed (LLVM-IR) information.
1626   InformationCache &InfoCache;
1627 
1628   /// Helper to update an underlying call graph.
1629   CallGraphUpdater &CGUpdater;
1630 
1631   /// Abstract Attribute dependency graph
1632   AADepGraph DG;
1633 
1634   /// Set of functions for which we modified the content such that it might
1635   /// impact the call graph.
1636   SmallPtrSet<Function *, 8> CGModifiedFunctions;
1637 
1638   /// Information about a dependence. If FromAA is changed ToAA needs to be
1639   /// updated as well.
1640   struct DepInfo {
1641     const AbstractAttribute *FromAA;
1642     const AbstractAttribute *ToAA;
1643     DepClassTy DepClass;
1644   };
1645 
1646   /// The dependence stack is used to track dependences during an
1647   /// `AbstractAttribute::update` call. As `AbstractAttribute::update` can be
1648   /// recursive we might have multiple vectors of dependences in here. The stack
  /// size should be adjusted according to the expected recursion depth and the
1650   /// inner dependence vector size to the expected number of dependences per
1651   /// abstract attribute. Since the inner vectors are actually allocated on the
1652   /// stack we can be generous with their size.
1653   using DependenceVector = SmallVector<DepInfo, 8>;
1654   SmallVector<DependenceVector *, 16> DependenceStack;
1655 
1656   /// If not null, a set limiting the attribute opportunities.
1657   const DenseSet<const char *> *Allowed;
1658 
1659   /// A set to remember the functions we already assume to be live and visited.
1660   DenseSet<const Function *> VisitedFunctions;
1661 
  /// Uses we replace with a new value after manifest is done. We will then
  /// remove trivially dead instructions as well.
1664   DenseMap<Use *, Value *> ToBeChangedUses;
1665 
1666   /// Instructions we replace with `unreachable` insts after manifest is done.
1667   SmallDenseSet<WeakVH, 16> ToBeChangedToUnreachableInsts;
1668 
1669   /// Invoke instructions with at least a single dead successor block.
1670   SmallVector<WeakVH, 16> InvokeWithDeadSuccessor;
1671 
  /// A flag that indicates which stage of the process we are in. Initially, the
  /// phase is SEEDING. The phase is changed in `Attributor::run()`.
1674   enum class AttributorPhase {
1675     SEEDING,
1676     UPDATE,
1677     MANIFEST,
1678     CLEANUP,
1679   } Phase = AttributorPhase::SEEDING;
1680 
1681   /// The current initialization chain length. Tracked to avoid stack overflows.
1682   unsigned InitializationChainLength = 0;
1683 
1684   /// Functions, blocks, and instructions we delete after manifest is done.
1685   ///
1686   ///{
1687   SmallPtrSet<Function *, 8> ToBeDeletedFunctions;
1688   SmallPtrSet<BasicBlock *, 8> ToBeDeletedBlocks;
1689   SmallDenseSet<WeakVH, 8> ToBeDeletedInsts;
1690   ///}
1691 
1692   friend AADepGraph;
1693 };
1694 
1695 /// An interface to query the internal state of an abstract attribute.
1696 ///
1697 /// The abstract state is a minimal interface that allows the Attributor to
1698 /// communicate with the abstract attributes about their internal state without
1699 /// enforcing or exposing implementation details, e.g., the (existence of an)
1700 /// underlying lattice.
1701 ///
1702 /// It is sufficient to be able to query if a state is (1) valid or invalid, (2)
1703 /// at a fixpoint, and to indicate to the state that (3) an optimistic fixpoint
1704 /// was reached or (4) a pessimistic fixpoint was enforced.
1705 ///
1706 /// All methods need to be implemented by the subclass. For the common use case,
1707 /// a single boolean state or a bit-encoded state, the BooleanState and
1708 /// {Inc,Dec,Bit}IntegerState classes are already provided. An abstract
1709 /// attribute can inherit from them to get the abstract state interface and
/// additional methods to directly modify the state if needed. See the
1711 /// class comments for help.
1712 struct AbstractState {
1713   virtual ~AbstractState() {}
1714 
1715   /// Return if this abstract state is in a valid state. If false, no
1716   /// information provided should be used.
1717   virtual bool isValidState() const = 0;
1718 
1719   /// Return if this abstract state is fixed, thus does not need to be updated
1720   /// if information changes as it cannot change itself.
1721   virtual bool isAtFixpoint() const = 0;
1722 
1723   /// Indicate that the abstract state should converge to the optimistic state.
1724   ///
1725   /// This will usually make the optimistically assumed state the known to be
1726   /// true state.
1727   ///
1728   /// \returns ChangeStatus::UNCHANGED as the assumed value should not change.
1729   virtual ChangeStatus indicateOptimisticFixpoint() = 0;
1730 
1731   /// Indicate that the abstract state should converge to the pessimistic state.
1732   ///
1733   /// This will usually revert the optimistically assumed state to the known to
1734   /// be true state.
1735   ///
1736   /// \returns ChangeStatus::CHANGED as the assumed value may change.
1737   virtual ChangeStatus indicatePessimisticFixpoint() = 0;
1738 };
1739 
1740 /// Simple state with integers encoding.
1741 ///
/// The interface ensures that the known bits are always a subset of the assumed
/// bits. Users can only add known bits and, except through adding known bits,
/// they can only remove assumed bits. This should guarantee monotonicity and
/// thereby the existence of a fixpoint (if used correctly). The fixpoint is
/// reached when the assumed and known state/bits are equal. Users can
/// force/indicate a fixpoint. If an optimistic one is indicated, the known
1748 /// state will catch up with the assumed one, for a pessimistic fixpoint it is
1749 /// the other way around.
1750 template <typename base_ty, base_ty BestState, base_ty WorstState>
1751 struct IntegerStateBase : public AbstractState {
1752   using base_t = base_ty;
1753 
1754   IntegerStateBase() {}
1755   IntegerStateBase(base_t Assumed) : Assumed(Assumed) {}
1756 
1757   /// Return the best possible representable state.
1758   static constexpr base_t getBestState() { return BestState; }
1759   static constexpr base_t getBestState(const IntegerStateBase &) {
1760     return getBestState();
1761   }
1762 
1763   /// Return the worst possible representable state.
1764   static constexpr base_t getWorstState() { return WorstState; }
1765   static constexpr base_t getWorstState(const IntegerStateBase &) {
1766     return getWorstState();
1767   }
1768 
1769   /// See AbstractState::isValidState()
1770   /// NOTE: For now we simply pretend that the worst possible state is invalid.
1771   bool isValidState() const override { return Assumed != getWorstState(); }
1772 
1773   /// See AbstractState::isAtFixpoint()
1774   bool isAtFixpoint() const override { return Assumed == Known; }
1775 
1776   /// See AbstractState::indicateOptimisticFixpoint(...)
1777   ChangeStatus indicateOptimisticFixpoint() override {
1778     Known = Assumed;
1779     return ChangeStatus::UNCHANGED;
1780   }
1781 
1782   /// See AbstractState::indicatePessimisticFixpoint(...)
1783   ChangeStatus indicatePessimisticFixpoint() override {
1784     Assumed = Known;
1785     return ChangeStatus::CHANGED;
1786   }
1787 
1788   /// Return the known state encoding
1789   base_t getKnown() const { return Known; }
1790 
1791   /// Return the assumed state encoding.
1792   base_t getAssumed() const { return Assumed; }
1793 
1794   /// Equality for IntegerStateBase.
1795   bool
1796   operator==(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
1797     return this->getAssumed() == R.getAssumed() &&
1798            this->getKnown() == R.getKnown();
1799   }
1800 
1801   /// Inequality for IntegerStateBase.
1802   bool
1803   operator!=(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
1804     return !(*this == R);
1805   }
1806 
1807   /// "Clamp" this state with \p R. The result is subtype dependent but it is
1808   /// intended that only information assumed in both states will be assumed in
1809   /// this one afterwards.
1810   void operator^=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
1811     handleNewAssumedValue(R.getAssumed());
1812   }
1813 
1814   /// "Clamp" this state with \p R. The result is subtype dependent but it is
1815   /// intended that information known in either state will be known in
1816   /// this one afterwards.
1817   void operator+=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
1818     handleNewKnownValue(R.getKnown());
1819   }
1820 
1821   void operator|=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
1822     joinOR(R.getAssumed(), R.getKnown());
1823   }
1824 
1825   void operator&=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
1826     joinAND(R.getAssumed(), R.getKnown());
1827   }
1828 
1829 protected:
1830   /// Handle a new assumed value \p Value. Subtype dependent.
1831   virtual void handleNewAssumedValue(base_t Value) = 0;
1832 
1833   /// Handle a new known value \p Value. Subtype dependent.
1834   virtual void handleNewKnownValue(base_t Value) = 0;
1835 
  /// Join \p AssumedValue and \p KnownValue into the state via OR. Subtype
  /// dependent.
1837   virtual void joinOR(base_t AssumedValue, base_t KnownValue) = 0;
1838 
  /// Join \p AssumedValue and \p KnownValue into the state via AND. Subtype
  /// dependent.
1840   virtual void joinAND(base_t AssumedValue, base_t KnownValue) = 0;
1841 
1842   /// The known state encoding in an integer of type base_t.
1843   base_t Known = getWorstState();
1844 
1845   /// The assumed state encoding in an integer of type base_t.
1846   base_t Assumed = getBestState();
1847 };
1848 
1849 /// Specialization of the integer state for a bit-wise encoding.
1850 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
1851           base_ty WorstState = 0>
1852 struct BitIntegerState
1853     : public IntegerStateBase<base_ty, BestState, WorstState> {
1854   using base_t = base_ty;
1855 
1856   /// Return true if the bits set in \p BitsEncoding are "known bits".
1857   bool isKnown(base_t BitsEncoding) const {
1858     return (this->Known & BitsEncoding) == BitsEncoding;
1859   }
1860 
1861   /// Return true if the bits set in \p BitsEncoding are "assumed bits".
1862   bool isAssumed(base_t BitsEncoding) const {
1863     return (this->Assumed & BitsEncoding) == BitsEncoding;
1864   }
1865 
1866   /// Add the bits in \p BitsEncoding to the "known bits".
1867   BitIntegerState &addKnownBits(base_t Bits) {
1868     // Make sure we never miss any "known bits".
1869     this->Assumed |= Bits;
1870     this->Known |= Bits;
1871     return *this;
1872   }
1873 
1874   /// Remove the bits in \p BitsEncoding from the "assumed bits" if not known.
1875   BitIntegerState &removeAssumedBits(base_t BitsEncoding) {
1876     return intersectAssumedBits(~BitsEncoding);
1877   }
1878 
1879   /// Remove the bits in \p BitsEncoding from the "known bits".
1880   BitIntegerState &removeKnownBits(base_t BitsEncoding) {
1881     this->Known = (this->Known & ~BitsEncoding);
1882     return *this;
1883   }
1884 
  /// Keep only the "assumed bits" also set in \p BitsEncoding, but keep all
  /// "known bits".
1886   BitIntegerState &intersectAssumedBits(base_t BitsEncoding) {
    // Make sure we never lose any "known bits".
1888     this->Assumed = (this->Assumed & BitsEncoding) | this->Known;
1889     return *this;
1890   }
1891 
1892 private:
1893   void handleNewAssumedValue(base_t Value) override {
1894     intersectAssumedBits(Value);
1895   }
1896   void handleNewKnownValue(base_t Value) override { addKnownBits(Value); }
1897   void joinOR(base_t AssumedValue, base_t KnownValue) override {
1898     this->Known |= KnownValue;
1899     this->Assumed |= AssumedValue;
1900   }
1901   void joinAND(base_t AssumedValue, base_t KnownValue) override {
1902     this->Known &= KnownValue;
1903     this->Assumed &= AssumedValue;
1904   }
1905 };
1906 
1907 /// Specialization of the integer state for an increasing value, hence ~0u is
1908 /// the best state and 0 the worst.
1909 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
1910           base_ty WorstState = 0>
1911 struct IncIntegerState
1912     : public IntegerStateBase<base_ty, BestState, WorstState> {
1913   using super = IntegerStateBase<base_ty, BestState, WorstState>;
1914   using base_t = base_ty;
1915 
1916   IncIntegerState() : super() {}
1917   IncIntegerState(base_t Assumed) : super(Assumed) {}
1918 
1919   /// Return the best possible representable state.
1920   static constexpr base_t getBestState() { return BestState; }
1921   static constexpr base_t
1922   getBestState(const IncIntegerState<base_ty, BestState, WorstState> &) {
1923     return getBestState();
1924   }
1925 
1926   /// Take minimum of assumed and \p Value.
1927   IncIntegerState &takeAssumedMinimum(base_t Value) {
    // Make sure we never lose the "known value".
1929     this->Assumed = std::max(std::min(this->Assumed, Value), this->Known);
1930     return *this;
1931   }
1932 
1933   /// Take maximum of known and \p Value.
1934   IncIntegerState &takeKnownMaximum(base_t Value) {
    // Make sure we never lose the "known value".
1936     this->Assumed = std::max(Value, this->Assumed);
1937     this->Known = std::max(Value, this->Known);
1938     return *this;
1939   }
1940 
1941 private:
1942   void handleNewAssumedValue(base_t Value) override {
1943     takeAssumedMinimum(Value);
1944   }
1945   void handleNewKnownValue(base_t Value) override { takeKnownMaximum(Value); }
1946   void joinOR(base_t AssumedValue, base_t KnownValue) override {
1947     this->Known = std::max(this->Known, KnownValue);
1948     this->Assumed = std::max(this->Assumed, AssumedValue);
1949   }
1950   void joinAND(base_t AssumedValue, base_t KnownValue) override {
1951     this->Known = std::min(this->Known, KnownValue);
1952     this->Assumed = std::min(this->Assumed, AssumedValue);
1953   }
1954 };
1955 
1956 /// Specialization of the integer state for a decreasing value, hence 0 is the
1957 /// best state and ~0u the worst.
1958 template <typename base_ty = uint32_t>
1959 struct DecIntegerState : public IntegerStateBase<base_ty, 0, ~base_ty(0)> {
1960   using base_t = base_ty;
1961 
1962   /// Take maximum of assumed and \p Value.
1963   DecIntegerState &takeAssumedMaximum(base_t Value) {
    // Make sure we never lose the "known value".
1965     this->Assumed = std::min(std::max(this->Assumed, Value), this->Known);
1966     return *this;
1967   }
1968 
1969   /// Take minimum of known and \p Value.
1970   DecIntegerState &takeKnownMinimum(base_t Value) {
    // Make sure we never lose the "known value".
1972     this->Assumed = std::min(Value, this->Assumed);
1973     this->Known = std::min(Value, this->Known);
1974     return *this;
1975   }
1976 
1977 private:
1978   void handleNewAssumedValue(base_t Value) override {
1979     takeAssumedMaximum(Value);
1980   }
1981   void handleNewKnownValue(base_t Value) override { takeKnownMinimum(Value); }
1982   void joinOR(base_t AssumedValue, base_t KnownValue) override {
1983     this->Assumed = std::min(this->Assumed, KnownValue);
1984     this->Assumed = std::min(this->Assumed, AssumedValue);
1985   }
1986   void joinAND(base_t AssumedValue, base_t KnownValue) override {
1987     this->Assumed = std::max(this->Assumed, KnownValue);
1988     this->Assumed = std::max(this->Assumed, AssumedValue);
1989   }
1990 };
1991 
1992 /// Simple wrapper for a single bit (boolean) state.
1993 struct BooleanState : public IntegerStateBase<bool, 1, 0> {
1994   using super = IntegerStateBase<bool, 1, 0>;
1995   using base_t = IntegerStateBase::base_t;
1996 
1997   BooleanState() : super() {}
1998   BooleanState(base_t Assumed) : super(Assumed) {}
1999 
2000   /// Set the assumed value to \p Value but never below the known one.
2001   void setAssumed(bool Value) { Assumed &= (Known | Value); }
2002 
  /// Set the known and assumed value to \p Value.
2004   void setKnown(bool Value) {
2005     Known |= Value;
2006     Assumed |= Value;
2007   }
2008 
2009   /// Return true if the state is assumed to hold.
2010   bool isAssumed() const { return getAssumed(); }
2011 
2012   /// Return true if the state is known to hold.
2013   bool isKnown() const { return getKnown(); }
2014 
2015 private:
2016   void handleNewAssumedValue(base_t Value) override {
2017     if (!Value)
2018       Assumed = Known;
2019   }
2020   void handleNewKnownValue(base_t Value) override {
2021     if (Value)
2022       Known = (Assumed = Value);
2023   }
2024   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2025     Known |= KnownValue;
2026     Assumed |= AssumedValue;
2027   }
2028   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2029     Known &= KnownValue;
2030     Assumed &= AssumedValue;
2031   }
2032 };
2033 
2034 /// State for an integer range.
2035 struct IntegerRangeState : public AbstractState {
2036 
2037   /// Bitwidth of the associated value.
2038   uint32_t BitWidth;
2039 
2040   /// State representing assumed range, initially set to empty.
2041   ConstantRange Assumed;
2042 
2043   /// State representing known range, initially set to [-inf, inf].
2044   ConstantRange Known;
2045 
2046   IntegerRangeState(uint32_t BitWidth)
2047       : BitWidth(BitWidth), Assumed(ConstantRange::getEmpty(BitWidth)),
2048         Known(ConstantRange::getFull(BitWidth)) {}
2049 
2050   IntegerRangeState(const ConstantRange &CR)
2051       : BitWidth(CR.getBitWidth()), Assumed(CR),
2052         Known(getWorstState(CR.getBitWidth())) {}
2053 
2054   /// Return the worst possible representable state.
2055   static ConstantRange getWorstState(uint32_t BitWidth) {
2056     return ConstantRange::getFull(BitWidth);
2057   }
2058 
2059   /// Return the best possible representable state.
2060   static ConstantRange getBestState(uint32_t BitWidth) {
2061     return ConstantRange::getEmpty(BitWidth);
2062   }
2063   static ConstantRange getBestState(const IntegerRangeState &IRS) {
2064     return getBestState(IRS.getBitWidth());
2065   }
2066 
2067   /// Return associated values' bit width.
2068   uint32_t getBitWidth() const { return BitWidth; }
2069 
2070   /// See AbstractState::isValidState()
2071   bool isValidState() const override {
2072     return BitWidth > 0 && !Assumed.isFullSet();
2073   }
2074 
2075   /// See AbstractState::isAtFixpoint()
2076   bool isAtFixpoint() const override { return Assumed == Known; }
2077 
2078   /// See AbstractState::indicateOptimisticFixpoint(...)
2079   ChangeStatus indicateOptimisticFixpoint() override {
2080     Known = Assumed;
2081     return ChangeStatus::CHANGED;
2082   }
2083 
2084   /// See AbstractState::indicatePessimisticFixpoint(...)
2085   ChangeStatus indicatePessimisticFixpoint() override {
2086     Assumed = Known;
2087     return ChangeStatus::CHANGED;
2088   }
2089 
2090   /// Return the known state encoding
2091   ConstantRange getKnown() const { return Known; }
2092 
2093   /// Return the assumed state encoding.
2094   ConstantRange getAssumed() const { return Assumed; }
2095 
2096   /// Unite assumed range with the passed state.
2097   void unionAssumed(const ConstantRange &R) {
    // Don't lose a known range.
2099     Assumed = Assumed.unionWith(R).intersectWith(Known);
2100   }
2101 
2102   /// See IntegerRangeState::unionAssumed(..).
2103   void unionAssumed(const IntegerRangeState &R) {
2104     unionAssumed(R.getAssumed());
2105   }
2106 
2107   /// Unite known range with the passed state.
2108   void unionKnown(const ConstantRange &R) {
    // Don't lose a known range.
2110     Known = Known.unionWith(R);
2111     Assumed = Assumed.unionWith(Known);
2112   }
2113 
2114   /// See IntegerRangeState::unionKnown(..).
2115   void unionKnown(const IntegerRangeState &R) { unionKnown(R.getKnown()); }
2116 
2117   /// Intersect known range with the passed state.
2118   void intersectKnown(const ConstantRange &R) {
2119     Assumed = Assumed.intersectWith(R);
2120     Known = Known.intersectWith(R);
2121   }
2122 
2123   /// See IntegerRangeState::intersectKnown(..).
2124   void intersectKnown(const IntegerRangeState &R) {
2125     intersectKnown(R.getKnown());
2126   }
2127 
2128   /// Equality for IntegerRangeState.
2129   bool operator==(const IntegerRangeState &R) const {
2130     return getAssumed() == R.getAssumed() && getKnown() == R.getKnown();
2131   }
2132 
2133   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2134   /// intended that only information assumed in both states will be assumed in
2135   /// this one afterwards.
2136   IntegerRangeState operator^=(const IntegerRangeState &R) {
2137     // NOTE: `^=` operator seems like `intersect` but in this case, we need to
2138     // take `union`.
2139     unionAssumed(R);
2140     return *this;
2141   }
2142 
2143   IntegerRangeState operator&=(const IntegerRangeState &R) {
2144     // NOTE: `&=` operator seems like `intersect` but in this case, we need to
2145     // take `union`.
2146     unionKnown(R);
2147     unionAssumed(R);
2148     return *this;
2149   }
2150 };
2151 /// Helper struct necessary as the modular build fails if the virtual method
2152 /// IRAttribute::manifest is defined in the Attributor.cpp.
2153 struct IRAttributeManifest {
2154   static ChangeStatus manifestAttrs(Attributor &A, const IRPosition &IRP,
2155                                     const ArrayRef<Attribute> &DeducedAttrs);
2156 };
2157 
/// Helper to tie an abstract state implementation to an abstract attribute.
2159 template <typename StateTy, typename BaseType, class... Ts>
2160 struct StateWrapper : public BaseType, public StateTy {
2161   /// Provide static access to the type of the state.
2162   using StateType = StateTy;
2163 
2164   StateWrapper(const IRPosition &IRP, Ts... Args)
2165       : BaseType(IRP), StateTy(Args...) {}
2166 
2167   /// See AbstractAttribute::getState(...).
2168   StateType &getState() override { return *this; }
2169 
2170   /// See AbstractAttribute::getState(...).
2171   const StateType &getState() const override { return *this; }
2172 };
2173 
2174 /// Helper class that provides common functionality to manifest IR attributes.
2175 template <Attribute::AttrKind AK, typename BaseType>
2176 struct IRAttribute : public BaseType {
2177   IRAttribute(const IRPosition &IRP) : BaseType(IRP) {}
2178 
2179   /// See AbstractAttribute::initialize(...).
2180   virtual void initialize(Attributor &A) override {
2181     const IRPosition &IRP = this->getIRPosition();
2182     if (isa<UndefValue>(IRP.getAssociatedValue()) ||
2183         this->hasAttr(getAttrKind(), /* IgnoreSubsumingPositions */ false,
2184                       &A)) {
2185       this->getState().indicateOptimisticFixpoint();
2186       return;
2187     }
2188 
2189     bool IsFnInterface = IRP.isFnInterfaceKind();
2190     const Function *FnScope = IRP.getAnchorScope();
2191     // TODO: Not all attributes require an exact definition. Find a way to
2192     //       enable deduction for some but not all attributes in case the
2193     //       definition might be changed at runtime, see also
2194     //       http://lists.llvm.org/pipermail/llvm-dev/2018-February/121275.html.
2195     // TODO: We could always determine abstract attributes and if sufficient
2196     //       information was found we could duplicate the functions that do not
2197     //       have an exact definition.
2198     if (IsFnInterface && (!FnScope || !A.isFunctionIPOAmendable(*FnScope)))
2199       this->getState().indicatePessimisticFixpoint();
2200   }
2201 
2202   /// See AbstractAttribute::manifest(...).
2203   ChangeStatus manifest(Attributor &A) override {
2204     if (isa<UndefValue>(this->getIRPosition().getAssociatedValue()))
2205       return ChangeStatus::UNCHANGED;
2206     SmallVector<Attribute, 4> DeducedAttrs;
2207     getDeducedAttributes(this->getAnchorValue().getContext(), DeducedAttrs);
2208     return IRAttributeManifest::manifestAttrs(A, this->getIRPosition(),
2209                                               DeducedAttrs);
2210   }
2211 
2212   /// Return the kind that identifies the abstract attribute implementation.
2213   Attribute::AttrKind getAttrKind() const { return AK; }
2214 
2215   /// Return the deduced attributes in \p Attrs.
2216   virtual void getDeducedAttributes(LLVMContext &Ctx,
2217                                     SmallVectorImpl<Attribute> &Attrs) const {
2218     Attrs.emplace_back(Attribute::get(Ctx, getAttrKind()));
2219   }
2220 };
2221 
2222 /// Base struct for all "concrete attribute" deductions.
2223 ///
2224 /// The abstract attribute is a minimal interface that allows the Attributor to
/// orchestrate the abstract/fixpoint analysis. The design allows hiding away
/// implementation choices made for the subclasses but also structuring their
/// implementation and simplifying the use of other abstract attributes
/// in-flight.
2228 ///
2229 /// To allow easy creation of new attributes, most methods have default
/// implementations. The ones that do not are generally straightforward, except
2231 /// `AbstractAttribute::updateImpl` which is the location of most reasoning
2232 /// associated with the abstract attribute. The update is invoked by the
2233 /// Attributor in case the situation used to justify the current optimistic
2234 /// state might have changed. The Attributor determines this automatically
2235 /// by monitoring the `Attributor::getAAFor` calls made by abstract attributes.
2236 ///
2237 /// The `updateImpl` method should inspect the IR and other abstract attributes
2238 /// in-flight to justify the best possible (=optimistic) state. The actual
2239 /// implementation is, similar to the underlying abstract state encoding, not
2240 /// exposed. In the most common case, the `updateImpl` will go through a list of
2241 /// reasons why its optimistic state is valid given the current information. If
2242 /// any combination of them holds and is sufficient to justify the current
/// optimistic state, the method shall return UNCHANGED. If not, the optimistic
2244 /// state is adjusted to the situation and the method shall return CHANGED.
2245 ///
2246 /// If the manifestation of the "concrete attribute" deduced by the subclass
2247 /// differs from the "default" behavior, which is a (set of) LLVM-IR
2248 /// attribute(s) for an argument, call site argument, function return value, or
2249 /// function, the `AbstractAttribute::manifest` method should be overloaded.
2250 ///
2251 /// NOTE: If the state obtained via getState() is INVALID, thus if
2252 ///       AbstractAttribute::getState().isValidState() returns false, no
2253 ///       information provided by the methods of this class should be used.
2254 /// NOTE: The Attributor currently has certain limitations to what we can do.
2255 ///       As a general rule of thumb, "concrete" abstract attributes should *for
2256 ///       now* only perform "backward" information propagation. That means
2257 ///       optimistic information obtained through abstract attributes should
2258 ///       only be used at positions that precede the origin of the information
2259 ///       with regards to the program flow. More practically, information can
2260 ///       *now* be propagated from instructions to their enclosing function, but
2261 ///       *not* from call sites to the called function. The mechanisms to allow
2262 ///       both directions will be added in the future.
2263 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
2264 ///       described in the file comment.
2265 struct AbstractAttribute : public IRPosition, public AADepGraphNode {
2266   using StateType = AbstractState;
2267 
2268   AbstractAttribute(const IRPosition &IRP) : IRPosition(IRP) {}
2269 
2270   /// Virtual destructor.
2271   virtual ~AbstractAttribute() {}
2272 
2273   /// This function is used to identify if an \p DGN is of type
2274   /// AbstractAttribute so that the dyn_cast and cast can use such information
2275   /// to cast an AADepGraphNode to an AbstractAttribute.
2276   ///
  /// We eagerly return true here because all AADepGraphNodes except for the
  /// synthetic root node are of type AbstractAttribute.
2279   static bool classof(const AADepGraphNode *DGN) { return true; }
2280 
2281   /// Initialize the state with the information in the Attributor \p A.
2282   ///
2283   /// This function is called by the Attributor once all abstract attributes
  /// have been identified. It can and shall be used for tasks like:
2285   ///  - identify existing knowledge in the IR and use it for the "known state"
2286   ///  - perform any work that is not going to change over time, e.g., determine
2287   ///    a subset of the IR, or attributes in-flight, that have to be looked at
2288   ///    in the `updateImpl` method.
2289   virtual void initialize(Attributor &A) {}
2290 
2291   /// Return the internal abstract state for inspection.
2292   virtual StateType &getState() = 0;
2293   virtual const StateType &getState() const = 0;
2294 
2295   /// Return an IR position, see struct IRPosition.
2296   const IRPosition &getIRPosition() const { return *this; };
2297   IRPosition &getIRPosition() { return *this; };
2298 
2299   /// Helper functions, for debug purposes only.
2300   ///{
2301   void print(raw_ostream &OS) const override;
2302   virtual void printWithDeps(raw_ostream &OS) const;
2303   void dump() const { print(dbgs()); }
2304 
2305   /// This function should return the "summarized" assumed state as string.
2306   virtual const std::string getAsStr() const = 0;
2307 
2308   /// This function should return the name of the AbstractAttribute
2309   virtual const std::string getName() const = 0;
2310 
2311   /// This function should return the address of the ID of the AbstractAttribute
2312   virtual const char *getIdAddr() const = 0;
2313   ///}
2314 
2315   /// Allow the Attributor access to the protected methods.
2316   friend struct Attributor;
2317 
2318 protected:
2319   /// Hook for the Attributor to trigger an update of the internal state.
2320   ///
2321   /// If this attribute is already fixed, this method will return UNCHANGED,
2322   /// otherwise it delegates to `AbstractAttribute::updateImpl`.
2323   ///
2324   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
2325   ChangeStatus update(Attributor &A);
2326 
2327   /// Hook for the Attributor to trigger the manifestation of the information
2328   /// represented by the abstract attribute in the LLVM-IR.
2329   ///
2330   /// \Return CHANGED if the IR was altered, otherwise UNCHANGED.
2331   virtual ChangeStatus manifest(Attributor &A) {
2332     return ChangeStatus::UNCHANGED;
2333   }
2334 
2335   /// Hook to enable custom statistic tracking, called after manifest that
2336   /// resulted in a change if statistics are enabled.
2337   ///
2338   /// We require subclasses to provide an implementation so we remember to
2339   /// add statistics for them.
2340   virtual void trackStatistics() const = 0;
2341 
2342   /// The actual update/transfer function which has to be implemented by the
2343   /// derived classes.
2344   ///
2345   /// If it is called, the environment has changed and we have to determine if
2346   /// the current information is still valid or adjust it otherwise.
2347   ///
2348   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
2349   virtual ChangeStatus updateImpl(Attributor &A) = 0;
2350 };
2351 
2352 /// Forward declarations of output streams for debug purposes.
2353 ///
2354 ///{
2355 raw_ostream &operator<<(raw_ostream &OS, const AbstractAttribute &AA);
2356 raw_ostream &operator<<(raw_ostream &OS, ChangeStatus S);
2357 raw_ostream &operator<<(raw_ostream &OS, IRPosition::Kind);
2358 raw_ostream &operator<<(raw_ostream &OS, const IRPosition &);
2359 raw_ostream &operator<<(raw_ostream &OS, const AbstractState &State);
2360 template <typename base_ty, base_ty BestState, base_ty WorstState>
2361 raw_ostream &
2362 operator<<(raw_ostream &OS,
2363            const IntegerStateBase<base_ty, BestState, WorstState> &S) {
2364   return OS << "(" << S.getKnown() << "-" << S.getAssumed() << ")"
2365             << static_cast<const AbstractState &>(S);
2366 }
2367 raw_ostream &operator<<(raw_ostream &OS, const IntegerRangeState &State);
2368 ///}
2369 
2370 struct AttributorPass : public PassInfoMixin<AttributorPass> {
2371   PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM);
2372 };
2373 struct AttributorCGSCCPass : public PassInfoMixin<AttributorCGSCCPass> {
2374   PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
2375                         LazyCallGraph &CG, CGSCCUpdateResult &UR);
2376 };
2377 
2378 Pass *createAttributorLegacyPass();
2379 Pass *createAttributorCGSCCLegacyPass();
2380 
2381 /// ----------------------------------------------------------------------------
2382 ///                       Abstract Attribute Classes
2383 /// ----------------------------------------------------------------------------
2384 
2385 /// An abstract attribute for the returned values of a function.
2386 struct AAReturnedValues
2387     : public IRAttribute<Attribute::Returned, AbstractAttribute> {
2388   AAReturnedValues(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2389 
  /// Return an assumed unique return value if a single candidate is found. If
  /// there cannot be one, return nullptr. If it is not clear yet, return None
  /// (Optional::NoneType).
2393   Optional<Value *> getAssumedUniqueReturnValue(Attributor &A) const;
2394 
2395   /// Check \p Pred on all returned values.
2396   ///
2397   /// This method will evaluate \p Pred on returned values and return
2398   /// true if (1) all returned values are known, and (2) \p Pred returned true
2399   /// for all returned values.
2400   ///
2401   /// Note: Unlike the Attributor::checkForAllReturnedValuesAndReturnInsts
2402   /// method, this one will not filter dead return instructions.
2403   virtual bool checkForAllReturnedValuesAndReturnInsts(
2404       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred)
2405       const = 0;
2406 
2407   using iterator =
2408       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::iterator;
2409   using const_iterator =
2410       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::const_iterator;
2411   virtual llvm::iterator_range<iterator> returned_values() = 0;
2412   virtual llvm::iterator_range<const_iterator> returned_values() const = 0;
2413 
2414   virtual size_t getNumReturnValues() const = 0;
2415   virtual const SmallSetVector<CallBase *, 4> &getUnresolvedCalls() const = 0;
2416 
2417   /// Create an abstract attribute view for the position \p IRP.
2418   static AAReturnedValues &createForPosition(const IRPosition &IRP,
2419                                              Attributor &A);
2420 
2421   /// See AbstractAttribute::getName()
2422   const std::string getName() const override { return "AAReturnedValues"; }
2423 
2424   /// See AbstractAttribute::getIdAddr()
2425   const char *getIdAddr() const override { return &ID; }
2426 
2427   /// This function should return true if the type of the \p AA is
2428   /// AAReturnedValues
2429   static bool classof(const AbstractAttribute *AA) {
2430     return (AA->getIdAddr() == &ID);
2431   }
2432 
2433   /// Unique ID (due to the unique address)
2434   static const char ID;
2435 };
2436 
2437 struct AANoUnwind
2438     : public IRAttribute<Attribute::NoUnwind,
2439                          StateWrapper<BooleanState, AbstractAttribute>> {
2440   AANoUnwind(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2441 
2442   /// Returns true if nounwind is assumed.
2443   bool isAssumedNoUnwind() const { return getAssumed(); }
2444 
2445   /// Returns true if nounwind is known.
2446   bool isKnownNoUnwind() const { return getKnown(); }
2447 
2448   /// Create an abstract attribute view for the position \p IRP.
2449   static AANoUnwind &createForPosition(const IRPosition &IRP, Attributor &A);
2450 
2451   /// See AbstractAttribute::getName()
2452   const std::string getName() const override { return "AANoUnwind"; }
2453 
2454   /// See AbstractAttribute::getIdAddr()
2455   const char *getIdAddr() const override { return &ID; }
2456 
2457   /// This function should return true if the type of the \p AA is AANoUnwind
2458   static bool classof(const AbstractAttribute *AA) {
2459     return (AA->getIdAddr() == &ID);
2460   }
2461 
2462   /// Unique ID (due to the unique address)
2463   static const char ID;
2464 };
2465 
2466 struct AANoSync
2467     : public IRAttribute<Attribute::NoSync,
2468                          StateWrapper<BooleanState, AbstractAttribute>> {
2469   AANoSync(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2470 
2471   /// Returns true if "nosync" is assumed.
2472   bool isAssumedNoSync() const { return getAssumed(); }
2473 
2474   /// Returns true if "nosync" is known.
2475   bool isKnownNoSync() const { return getKnown(); }
2476 
2477   /// Create an abstract attribute view for the position \p IRP.
2478   static AANoSync &createForPosition(const IRPosition &IRP, Attributor &A);
2479 
2480   /// See AbstractAttribute::getName()
2481   const std::string getName() const override { return "AANoSync"; }
2482 
2483   /// See AbstractAttribute::getIdAddr()
2484   const char *getIdAddr() const override { return &ID; }
2485 
2486   /// This function should return true if the type of the \p AA is AANoSync
2487   static bool classof(const AbstractAttribute *AA) {
2488     return (AA->getIdAddr() == &ID);
2489   }
2490 
2491   /// Unique ID (due to the unique address)
2492   static const char ID;
2493 };
2494 
2495 /// An abstract interface for all nonnull attributes.
2496 struct AANonNull
2497     : public IRAttribute<Attribute::NonNull,
2498                          StateWrapper<BooleanState, AbstractAttribute>> {
2499   AANonNull(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2500 
2501   /// Return true if we assume that the underlying value is nonnull.
2502   bool isAssumedNonNull() const { return getAssumed(); }
2503 
2504   /// Return true if we know that underlying value is nonnull.
2505   bool isKnownNonNull() const { return getKnown(); }
2506 
2507   /// Create an abstract attribute view for the position \p IRP.
2508   static AANonNull &createForPosition(const IRPosition &IRP, Attributor &A);
2509 
2510   /// See AbstractAttribute::getName()
2511   const std::string getName() const override { return "AANonNull"; }
2512 
2513   /// See AbstractAttribute::getIdAddr()
2514   const char *getIdAddr() const override { return &ID; }
2515 
2516   /// This function should return true if the type of the \p AA is AANonNull
2517   static bool classof(const AbstractAttribute *AA) {
2518     return (AA->getIdAddr() == &ID);
2519   }
2520 
2521   /// Unique ID (due to the unique address)
2522   static const char ID;
2523 };
2524 
2525 /// An abstract attribute for norecurse.
2526 struct AANoRecurse
2527     : public IRAttribute<Attribute::NoRecurse,
2528                          StateWrapper<BooleanState, AbstractAttribute>> {
2529   AANoRecurse(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2530 
2531   /// Return true if "norecurse" is assumed.
2532   bool isAssumedNoRecurse() const { return getAssumed(); }
2533 
2534   /// Return true if "norecurse" is known.
2535   bool isKnownNoRecurse() const { return getKnown(); }
2536 
2537   /// Create an abstract attribute view for the position \p IRP.
2538   static AANoRecurse &createForPosition(const IRPosition &IRP, Attributor &A);
2539 
2540   /// See AbstractAttribute::getName()
2541   const std::string getName() const override { return "AANoRecurse"; }
2542 
2543   /// See AbstractAttribute::getIdAddr()
2544   const char *getIdAddr() const override { return &ID; }
2545 
2546   /// This function should return true if the type of the \p AA is AANoRecurse
2547   static bool classof(const AbstractAttribute *AA) {
2548     return (AA->getIdAddr() == &ID);
2549   }
2550 
2551   /// Unique ID (due to the unique address)
2552   static const char ID;
2553 };
2554 
2555 /// An abstract attribute for willreturn.
2556 struct AAWillReturn
2557     : public IRAttribute<Attribute::WillReturn,
2558                          StateWrapper<BooleanState, AbstractAttribute>> {
2559   AAWillReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2560 
2561   /// Return true if "willreturn" is assumed.
2562   bool isAssumedWillReturn() const { return getAssumed(); }
2563 
2564   /// Return true if "willreturn" is known.
2565   bool isKnownWillReturn() const { return getKnown(); }
2566 
2567   /// Create an abstract attribute view for the position \p IRP.
2568   static AAWillReturn &createForPosition(const IRPosition &IRP, Attributor &A);
2569 
2570   /// See AbstractAttribute::getName()
2571   const std::string getName() const override { return "AAWillReturn"; }
2572 
2573   /// See AbstractAttribute::getIdAddr()
2574   const char *getIdAddr() const override { return &ID; }
2575 
2576   /// This function should return true if the type of the \p AA is AAWillReturn
2577   static bool classof(const AbstractAttribute *AA) {
2578     return (AA->getIdAddr() == &ID);
2579   }
2580 
2581   /// Unique ID (due to the unique address)
2582   static const char ID;
2583 };
2584 
2585 /// An abstract attribute for undefined behavior.
2586 struct AAUndefinedBehavior
2587     : public StateWrapper<BooleanState, AbstractAttribute> {
2588   using Base = StateWrapper<BooleanState, AbstractAttribute>;
2589   AAUndefinedBehavior(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
2590 
2591   /// Return true if "undefined behavior" is assumed.
2592   bool isAssumedToCauseUB() const { return getAssumed(); }
2593 
2594   /// Return true if "undefined behavior" is assumed for a specific instruction.
2595   virtual bool isAssumedToCauseUB(Instruction *I) const = 0;
2596 
2597   /// Return true if "undefined behavior" is known.
2598   bool isKnownToCauseUB() const { return getKnown(); }
2599 
2600   /// Return true if "undefined behavior" is known for a specific instruction.
2601   virtual bool isKnownToCauseUB(Instruction *I) const = 0;
2602 
2603   /// Create an abstract attribute view for the position \p IRP.
2604   static AAUndefinedBehavior &createForPosition(const IRPosition &IRP,
2605                                                 Attributor &A);
2606 
2607   /// See AbstractAttribute::getName()
2608   const std::string getName() const override { return "AAUndefinedBehavior"; }
2609 
2610   /// See AbstractAttribute::getIdAddr()
2611   const char *getIdAddr() const override { return &ID; }
2612 
2613   /// This function should return true if the type of the \p AA is
2614   /// AAUndefinedBehavior
2615   static bool classof(const AbstractAttribute *AA) {
2616     return (AA->getIdAddr() == &ID);
2617   }
2618 
2619   /// Unique ID (due to the unique address)
2620   static const char ID;
2621 };
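
// Illustrative sketch (hypothetical user code; `F` is a placeholder
// Function): the per-instruction queries let a client treat UB-causing
// instructions specially, e.g. consider them candidates for removal:
//
//   const auto &UBAA =
//       A.getAAFor<AAUndefinedBehavior>(*this, IRPosition::function(F));
//   for (Instruction &I : instructions(F))
//     if (UBAA.isKnownToCauseUB(&I))
//       ; // e.g. treat I (and code dominated by it) as dead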
2622 
2623 /// An abstract interface to determine reachability of point A to B.
2624 struct AAReachability : public StateWrapper<BooleanState, AbstractAttribute> {
2625   using Base = StateWrapper<BooleanState, AbstractAttribute>;
2626   AAReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
2627 
2628   /// Returns true if the 'From' instruction is assumed to reach the 'To'
2629   /// instruction. Users should provide two positions they are interested
2630   /// in, and the class determines (and caches) reachability.
2631   bool isAssumedReachable(Attributor &A, const Instruction &From,
2632                           const Instruction &To) const {
2633     return A.getInfoCache().getPotentiallyReachable(From, To);
2634   }
2635 
2636   /// Returns true if the 'From' instruction is known to reach the 'To'
2637   /// instruction. Users should provide two positions they are interested
2638   /// in, and the class determines (and caches) reachability.
2639   bool isKnownReachable(Attributor &A, const Instruction &From,
2640                         const Instruction &To) const {
2641     return A.getInfoCache().getPotentiallyReachable(From, To);
2642   }
2643 
2644   /// Create an abstract attribute view for the position \p IRP.
2645   static AAReachability &createForPosition(const IRPosition &IRP,
2646                                            Attributor &A);
2647 
2648   /// See AbstractAttribute::getName()
2649   const std::string getName() const override { return "AAReachability"; }
2650 
2651   /// See AbstractAttribute::getIdAddr()
2652   const char *getIdAddr() const override { return &ID; }
2653 
2654   /// This function should return true if the type of the \p AA is
2655   /// AAReachability
2656   static bool classof(const AbstractAttribute *AA) {
2657     return (AA->getIdAddr() == &ID);
2658   }
2659 
2660   /// Unique ID (due to the unique address)
2661   static const char ID;
2662 };
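
// Illustrative sketch: reachability is usually combined with other
// attributes, e.g. "can this call execute after that load?". `LoadI` and
// `CallI` are placeholder instructions:
//
//   const auto &ReachabilityAA =
//       A.getAAFor<AAReachability>(*this, IRPosition::function(F));
//   if (!ReachabilityAA.isAssumedReachable(A, LoadI, CallI))
//     ...; // the call cannot execute after the load on any assumed path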
2663 
2664 /// An abstract interface for all noalias attributes.
2665 struct AANoAlias
2666     : public IRAttribute<Attribute::NoAlias,
2667                          StateWrapper<BooleanState, AbstractAttribute>> {
2668   AANoAlias(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2669 
2670   /// Return true if we assume that the underlying value is noalias.
2671   bool isAssumedNoAlias() const { return getAssumed(); }
2672 
2673   /// Return true if we know that the underlying value is noalias.
2674   bool isKnownNoAlias() const { return getKnown(); }
2675 
2676   /// Create an abstract attribute view for the position \p IRP.
2677   static AANoAlias &createForPosition(const IRPosition &IRP, Attributor &A);
2678 
2679   /// See AbstractAttribute::getName()
2680   const std::string getName() const override { return "AANoAlias"; }
2681 
2682   /// See AbstractAttribute::getIdAddr()
2683   const char *getIdAddr() const override { return &ID; }
2684 
2685   /// This function should return true if the type of the \p AA is AANoAlias
2686   static bool classof(const AbstractAttribute *AA) {
2687     return (AA->getIdAddr() == &ID);
2688   }
2689 
2690   /// Unique ID (due to the unique address)
2691   static const char ID;
2692 };
2693 
2694 /// An AbstractAttribute for nofree.
2695 struct AANoFree
2696     : public IRAttribute<Attribute::NoFree,
2697                          StateWrapper<BooleanState, AbstractAttribute>> {
2698   AANoFree(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2699 
2700   /// Return true if "nofree" is assumed.
2701   bool isAssumedNoFree() const { return getAssumed(); }
2702 
2703   /// Return true if "nofree" is known.
2704   bool isKnownNoFree() const { return getKnown(); }
2705 
2706   /// Create an abstract attribute view for the position \p IRP.
2707   static AANoFree &createForPosition(const IRPosition &IRP, Attributor &A);
2708 
2709   /// See AbstractAttribute::getName()
2710   const std::string getName() const override { return "AANoFree"; }
2711 
2712   /// See AbstractAttribute::getIdAddr()
2713   const char *getIdAddr() const override { return &ID; }
2714 
2715   /// This function should return true if the type of the \p AA is AANoFree
2716   static bool classof(const AbstractAttribute *AA) {
2717     return (AA->getIdAddr() == &ID);
2718   }
2719 
2720   /// Unique ID (due to the unique address)
2721   static const char ID;
2722 };
2723 
2724 /// An AbstractAttribute for noreturn.
2725 struct AANoReturn
2726     : public IRAttribute<Attribute::NoReturn,
2727                          StateWrapper<BooleanState, AbstractAttribute>> {
2728   AANoReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2729 
2730   /// Return true if the underlying object is assumed to never return.
2731   bool isAssumedNoReturn() const { return getAssumed(); }
2732 
2733   /// Return true if the underlying object is known to never return.
2734   bool isKnownNoReturn() const { return getKnown(); }
2735 
2736   /// Create an abstract attribute view for the position \p IRP.
2737   static AANoReturn &createForPosition(const IRPosition &IRP, Attributor &A);
2738 
2739   /// See AbstractAttribute::getName()
2740   const std::string getName() const override { return "AANoReturn"; }
2741 
2742   /// See AbstractAttribute::getIdAddr()
2743   const char *getIdAddr() const override { return &ID; }
2744 
2745   /// This function should return true if the type of the \p AA is AANoReturn
2746   static bool classof(const AbstractAttribute *AA) {
2747     return (AA->getIdAddr() == &ID);
2748   }
2749 
2750   /// Unique ID (due to the unique address)
2751   static const char ID;
2752 };
2753 
2754 /// An abstract interface for liveness abstract attribute.
2755 struct AAIsDead : public StateWrapper<BooleanState, AbstractAttribute> {
2756   using Base = StateWrapper<BooleanState, AbstractAttribute>;
2757   AAIsDead(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
2758 
2759 protected:
2760   /// The query functions are protected such that other attributes need to go
2761   /// through the Attributor interfaces: `Attributor::isAssumedDead(...)`
2762 
2763   /// Returns true if the underlying value is assumed dead.
2764   virtual bool isAssumedDead() const = 0;
2765 
2766   /// Returns true if the underlying value is known dead.
2767   virtual bool isKnownDead() const = 0;
2768 
2769   /// Returns true if \p BB is assumed dead.
2770   virtual bool isAssumedDead(const BasicBlock *BB) const = 0;
2771 
2772   /// Returns true if \p BB is known dead.
2773   virtual bool isKnownDead(const BasicBlock *BB) const = 0;
2774 
2775   /// Returns true if \p I is assumed dead.
2776   virtual bool isAssumedDead(const Instruction *I) const = 0;
2777 
2778   /// Returns true if \p I is known dead.
2779   virtual bool isKnownDead(const Instruction *I) const = 0;
2780 
2781   /// This method is used to check if at least one instruction in a collection
2782   /// of instructions is live.
2783   template <typename T> bool isLiveInstSet(T begin, T end) const {
2784     for (const auto &I : llvm::make_range(begin, end)) {
2785       assert(I->getFunction() == getIRPosition().getAssociatedFunction() &&
2786              "Instruction must be in the same anchor scope function.");
2787 
2788       if (!isAssumedDead(I))
2789         return true;
2790     }
2791 
2792     return false;
2793   }
2794 
2795 public:
2796   /// Create an abstract attribute view for the position \p IRP.
2797   static AAIsDead &createForPosition(const IRPosition &IRP, Attributor &A);
2798 
2799   /// Determine if \p F might catch asynchronous exceptions.
2800   static bool mayCatchAsynchronousExceptions(const Function &F) {
2801     return F.hasPersonalityFn() && !canSimplifyInvokeNoUnwind(&F);
2802   }
2803 
2804   /// Return true if the edge from \p From to \p To is assumed dead.
2805   /// This is specifically useful in AAReachability.
2806   virtual bool isEdgeDead(const BasicBlock *From, const BasicBlock *To) const {
2807     return false;
2808   }
2809 
2810   /// See AbstractAttribute::getName()
2811   const std::string getName() const override { return "AAIsDead"; }
2812 
2813   /// See AbstractAttribute::getIdAddr()
2814   const char *getIdAddr() const override { return &ID; }
2815 
2816   /// This function should return true if the type of the \p AA is AAIsDead
2817   static bool classof(const AbstractAttribute *AA) {
2818     return (AA->getIdAddr() == &ID);
2819   }
2820 
2821   /// Unique ID (due to the unique address)
2822   static const char ID;
2823 
2824   friend struct Attributor;
2825 };
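
// Illustrative sketch: because the query methods above are protected, other
// abstract attributes go through the Attributor, which consults AAIsDead
// internally. The exact overload set of Attributor::isAssumedDead varies
// between versions; `UserI` and `LivenessAA` are placeholders:
//
//   if (A.isAssumedDead(*UserI, this, &LivenessAA))
//     continue; // skip users that sit in assumed-dead code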
2826 
2827 /// State for dereferenceable attribute
2828 struct DerefState : AbstractState {
2829 
2830   static DerefState getBestState() { return DerefState(); }
2831   static DerefState getBestState(const DerefState &) { return getBestState(); }
2832 
2833   /// Return the worst possible representable state.
2834   static DerefState getWorstState() {
2835     DerefState DS;
2836     DS.indicatePessimisticFixpoint();
2837     return DS;
2838   }
2839   static DerefState getWorstState(const DerefState &) {
2840     return getWorstState();
2841   }
2842 
2843   /// State representing the dereferenceable bytes.
2844   IncIntegerState<> DerefBytesState;
2845 
2846   /// Map representing accessed memory offsets and sizes.
2847   /// A key is an offset and a value is a size.
2848   /// If there is a load/store instruction such as
2849   ///   p[offset] = v;
2850   /// (offset, sizeof(v)) will be inserted into this map.
2851   /// std::map is used because we want to iterate keys in ascending order.
2852   std::map<int64_t, uint64_t> AccessedBytesMap;
2853 
2854   /// Helper function to calculate dereferenceable bytes from current known
2855   /// bytes and accessed bytes.
2856   /// ```
2857   /// int f(int *A){
2858   ///    *A = 0;
2859   ///    *(A+2) = 2;
2860   ///    *(A+1) = 1;
2861   ///    *(A+10) = 10;
2862   /// }
2863   /// ```
2864   /// In that case, AccessedBytesMap is `{0:4, 4:4, 8:4, 40:4}`.
2865   /// AccessedBytesMap is std::map so it is iterated in ascending order on
2866   /// the key (Offset). So KnownBytes will be updated like this:
2867   ///
2868   /// |Access | KnownBytes
2869   /// |(0, 4)| 0 -> 4
2870   /// |(4, 4)| 4 -> 8
2871   /// |(8, 4)| 8 -> 12
2872   /// |(40, 4) | 12 (break)
2873   void computeKnownDerefBytesFromAccessedMap() {
2874     int64_t KnownBytes = DerefBytesState.getKnown();
2875     for (auto &Access : AccessedBytesMap) {
2876       if (KnownBytes < Access.first)
2877         break;
2878       KnownBytes = std::max(KnownBytes, Access.first + (int64_t)Access.second);
2879     }
2880 
2881     DerefBytesState.takeKnownMaximum(KnownBytes);
2882   }
2883 
2884   /// State representing whether the value is globally dereferenceable.
2885   BooleanState GlobalState;
2886 
2887   /// See AbstractState::isValidState()
2888   bool isValidState() const override { return DerefBytesState.isValidState(); }
2889 
2890   /// See AbstractState::isAtFixpoint()
2891   bool isAtFixpoint() const override {
2892     return !isValidState() ||
2893            (DerefBytesState.isAtFixpoint() && GlobalState.isAtFixpoint());
2894   }
2895 
2896   /// See AbstractState::indicateOptimisticFixpoint(...)
2897   ChangeStatus indicateOptimisticFixpoint() override {
2898     DerefBytesState.indicateOptimisticFixpoint();
2899     GlobalState.indicateOptimisticFixpoint();
2900     return ChangeStatus::UNCHANGED;
2901   }
2902 
2903   /// See AbstractState::indicatePessimisticFixpoint(...)
2904   ChangeStatus indicatePessimisticFixpoint() override {
2905     DerefBytesState.indicatePessimisticFixpoint();
2906     GlobalState.indicatePessimisticFixpoint();
2907     return ChangeStatus::CHANGED;
2908   }
2909 
2910   /// Update known dereferenceable bytes.
2911   void takeKnownDerefBytesMaximum(uint64_t Bytes) {
2912     DerefBytesState.takeKnownMaximum(Bytes);
2913 
2914     // Known bytes might increase.
2915     computeKnownDerefBytesFromAccessedMap();
2916   }
2917 
2918   /// Update assumed dereferenceable bytes.
2919   void takeAssumedDerefBytesMinimum(uint64_t Bytes) {
2920     DerefBytesState.takeAssumedMinimum(Bytes);
2921   }
2922 
2923   /// Add accessed bytes to the map.
2924   void addAccessedBytes(int64_t Offset, uint64_t Size) {
2925     uint64_t &AccessedBytes = AccessedBytesMap[Offset];
2926     AccessedBytes = std::max(AccessedBytes, Size);
2927 
2928     // Known bytes might increase.
2929     computeKnownDerefBytesFromAccessedMap();
2930   }
2931 
2932   /// Equality for DerefState.
2933   bool operator==(const DerefState &R) const {
2934     return this->DerefBytesState == R.DerefBytesState &&
2935            this->GlobalState == R.GlobalState;
2936   }
2937 
2938   /// Inequality for DerefState.
2939   bool operator!=(const DerefState &R) const { return !(*this == R); }
2940 
2941   /// See IntegerStateBase::operator^=
2942   DerefState operator^=(const DerefState &R) {
2943     DerefBytesState ^= R.DerefBytesState;
2944     GlobalState ^= R.GlobalState;
2945     return *this;
2946   }
2947 
2948   /// See IntegerStateBase::operator+=
2949   DerefState operator+=(const DerefState &R) {
2950     DerefBytesState += R.DerefBytesState;
2951     GlobalState += R.GlobalState;
2952     return *this;
2953   }
2954 
2955   /// See IntegerStateBase::operator&=
2956   DerefState operator&=(const DerefState &R) {
2957     DerefBytesState &= R.DerefBytesState;
2958     GlobalState &= R.GlobalState;
2959     return *this;
2960   }
2961 
2962   /// See IntegerStateBase::operator|=
2963   DerefState operator|=(const DerefState &R) {
2964     DerefBytesState |= R.DerefBytesState;
2965     GlobalState |= R.GlobalState;
2966     return *this;
2967   }
2968 
2969 protected:
2970   const AANonNull *NonNullAA = nullptr;
2971 };
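
// Illustrative sketch of how the members above interact (hypothetical
// offsets/sizes; the numbers follow computeKnownDerefBytesFromAccessedMap):
//
//   DerefState DS;
//   DS.addAccessedBytes(/*Offset=*/0, /*Size=*/4);  // known deref bytes: 4
//   DS.addAccessedBytes(/*Offset=*/4, /*Size=*/4);  // known deref bytes: 8
//   DS.addAccessedBytes(/*Offset=*/40, /*Size=*/4); // still 8, gap at [8, 40)
//   DS.takeKnownDerefBytesMaximum(40);              // 40, and the access at
//                                                   // offset 40 extends it to 44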
2972 
2973 /// An abstract interface for all dereferenceable attributes.
2974 struct AADereferenceable
2975     : public IRAttribute<Attribute::Dereferenceable,
2976                          StateWrapper<DerefState, AbstractAttribute>> {
2977   AADereferenceable(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2978 
2979   /// Return true if we assume that the underlying value is nonnull.
2980   bool isAssumedNonNull() const {
2981     return NonNullAA && NonNullAA->isAssumedNonNull();
2982   }
2983 
2984   /// Return true if we know that the underlying value is nonnull.
2985   bool isKnownNonNull() const {
2986     return NonNullAA && NonNullAA->isKnownNonNull();
2987   }
2988 
2989   /// Return true if we assume that the underlying value is
2990   /// dereferenceable(_or_null) globally.
2991   bool isAssumedGlobal() const { return GlobalState.getAssumed(); }
2992 
2993   /// Return true if we know that the underlying value is
2994   /// dereferenceable(_or_null) globally.
2995   bool isKnownGlobal() const { return GlobalState.getKnown(); }
2996 
2997   /// Return assumed dereferenceable bytes.
2998   uint32_t getAssumedDereferenceableBytes() const {
2999     return DerefBytesState.getAssumed();
3000   }
3001 
3002   /// Return known dereferenceable bytes.
3003   uint32_t getKnownDereferenceableBytes() const {
3004     return DerefBytesState.getKnown();
3005   }
3006 
3007   /// Create an abstract attribute view for the position \p IRP.
3008   static AADereferenceable &createForPosition(const IRPosition &IRP,
3009                                               Attributor &A);
3010 
3011   /// See AbstractAttribute::getName()
3012   const std::string getName() const override { return "AADereferenceable"; }
3013 
3014   /// See AbstractAttribute::getIdAddr()
3015   const char *getIdAddr() const override { return &ID; }
3016 
3017   /// This function should return true if the type of the \p AA is
3018   /// AADereferenceable
3019   static bool classof(const AbstractAttribute *AA) {
3020     return (AA->getIdAddr() == &ID);
3021   }
3022 
3023   /// Unique ID (due to the unique address)
3024   static const char ID;
3025 };
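
// Illustrative sketch (`Ptr` is a placeholder Value): consumers usually
// combine the byte count with the nonnull information to decide between
// dereferenceable and dereferenceable_or_null:
//
//   const auto &DerefAA =
//       A.getAAFor<AADereferenceable>(*this, IRPosition::value(*Ptr));
//   uint32_t Bytes = DerefAA.getAssumedDereferenceableBytes();
//   bool OrNull = !DerefAA.isAssumedNonNull();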
3026 
3027 using AAAlignmentStateType =
3028     IncIntegerState<uint32_t, Value::MaximumAlignment, 0>;
3029 /// An abstract interface for all align attributes.
3030 struct AAAlign : public IRAttribute<
3031                      Attribute::Alignment,
3032                      StateWrapper<AAAlignmentStateType, AbstractAttribute>> {
3033   AAAlign(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3034 
3035   /// Return assumed alignment.
3036   unsigned getAssumedAlign() const { return getAssumed(); }
3037 
3038   /// Return known alignment.
3039   unsigned getKnownAlign() const { return getKnown(); }
3040 
3041   /// See AbstractAttribute::getName()
3042   const std::string getName() const override { return "AAAlign"; }
3043 
3044   /// See AbstractAttribute::getIdAddr()
3045   const char *getIdAddr() const override { return &ID; }
3046 
3047   /// This function should return true if the type of the \p AA is AAAlign
3048   static bool classof(const AbstractAttribute *AA) {
3049     return (AA->getIdAddr() == &ID);
3050   }
3051 
3052   /// Create an abstract attribute view for the position \p IRP.
3053   static AAAlign &createForPosition(const IRPosition &IRP, Attributor &A);
3054 
3055   /// Unique ID (due to the unique address)
3056   static const char ID;
3057 };
3058 
3059 /// An abstract interface for all nocapture attributes.
3060 struct AANoCapture
3061     : public IRAttribute<
3062           Attribute::NoCapture,
3063           StateWrapper<BitIntegerState<uint16_t, 7, 0>, AbstractAttribute>> {
3064   AANoCapture(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3065 
3066   /// State encoding bits. A set bit in the state means the property holds.
3067   /// NO_CAPTURE is the best possible state, 0 the worst possible state.
3068   enum {
3069     NOT_CAPTURED_IN_MEM = 1 << 0,
3070     NOT_CAPTURED_IN_INT = 1 << 1,
3071     NOT_CAPTURED_IN_RET = 1 << 2,
3072 
3073     /// If we do not capture the value in memory or through integers we can only
3074     /// communicate it back as a derived pointer.
3075     NO_CAPTURE_MAYBE_RETURNED = NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT,
3076 
3077     /// If we do not capture the value in memory, through integers, or as a
3078     /// derived pointer we know it is not captured.
3079     NO_CAPTURE =
3080         NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT | NOT_CAPTURED_IN_RET,
3081   };
3082 
3083   /// Return true if we know that the underlying value is not captured in its
3084   /// respective scope.
3085   bool isKnownNoCapture() const { return isKnown(NO_CAPTURE); }
3086 
3087   /// Return true if we assume that the underlying value is not captured in its
3088   /// respective scope.
3089   bool isAssumedNoCapture() const { return isAssumed(NO_CAPTURE); }
3090 
3091   /// Return true if we know that the underlying value is not captured in its
3092   /// respective scope but we allow it to escape through a "return".
3093   bool isKnownNoCaptureMaybeReturned() const {
3094     return isKnown(NO_CAPTURE_MAYBE_RETURNED);
3095   }
3096 
3097   /// Return true if we assume that the underlying value is not captured in its
3098   /// respective scope but we allow it to escape through a "return".
3099   bool isAssumedNoCaptureMaybeReturned() const {
3100     return isAssumed(NO_CAPTURE_MAYBE_RETURNED);
3101   }
3102 
3103   /// Create an abstract attribute view for the position \p IRP.
3104   static AANoCapture &createForPosition(const IRPosition &IRP, Attributor &A);
3105 
3106   /// See AbstractAttribute::getName()
3107   const std::string getName() const override { return "AANoCapture"; }
3108 
3109   /// See AbstractAttribute::getIdAddr()
3110   const char *getIdAddr() const override { return &ID; }
3111 
3112   /// This function should return true if the type of the \p AA is AANoCapture
3113   static bool classof(const AbstractAttribute *AA) {
3114     return (AA->getIdAddr() == &ID);
3115   }
3116 
3117   /// Unique ID (due to the unique address)
3118   static const char ID;
3119 };
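
// Illustrative sketch (`Ptr` is a placeholder Value): the bit encoding forms
// a small lattice, so a value that escapes only through the return value
// still satisfies the weaker "maybe returned" query:
//
//   const auto &NoCaptureAA =
//       A.getAAFor<AANoCapture>(*this, IRPosition::value(*Ptr));
//   if (NoCaptureAA.isAssumedNoCapture())
//     ...; // the pointer does not escape at all
//   else if (NoCaptureAA.isAssumedNoCaptureMaybeReturned())
//     ...; // the pointer escapes only via the return value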
3120 
3121 /// An abstract interface for value simplify abstract attribute.
3122 struct AAValueSimplify : public StateWrapper<BooleanState, AbstractAttribute> {
3123   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3124   AAValueSimplify(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3125 
3126   /// Return an assumed simplified value if a single candidate is found. If
3127   /// there cannot be one, return the original value. If it is not clear yet,
3128   /// return llvm::None.
3129   virtual Optional<Value *> getAssumedSimplifiedValue(Attributor &A) const = 0;
3130 
3131   /// Create an abstract attribute view for the position \p IRP.
3132   static AAValueSimplify &createForPosition(const IRPosition &IRP,
3133                                             Attributor &A);
3134 
3135   /// See AbstractAttribute::getName()
3136   const std::string getName() const override { return "AAValueSimplify"; }
3137 
3138   /// See AbstractAttribute::getIdAddr()
3139   const char *getIdAddr() const override { return &ID; }
3140 
3141   /// This function should return true if the type of the \p AA is
3142   /// AAValueSimplify
3143   static bool classof(const AbstractAttribute *AA) {
3144     return (AA->getIdAddr() == &ID);
3145   }
3146 
3147   /// Unique ID (due to the unique address)
3148   static const char ID;
3149 };
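
// Illustrative sketch (`V` is a placeholder Value) of consuming the
// tri-state result: None = not clear yet, the original value = cannot be
// simplified, any other value = the simplified replacement:
//
//   const auto &SimplifyAA =
//       A.getAAFor<AAValueSimplify>(*this, IRPosition::value(V));
//   Optional<Value *> SimplifiedV = SimplifyAA.getAssumedSimplifiedValue(A);
//   if (!SimplifiedV.hasValue())
//     return ChangeStatus::UNCHANGED; // not known yet, wait for more updates
//   if (*SimplifiedV && *SimplifiedV != &V)
//     ...; // use the simplified value instead of V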
3150 
3151 struct AAHeapToStack : public StateWrapper<BooleanState, AbstractAttribute> {
3152   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3153   AAHeapToStack(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3154 
3155   /// Returns true if HeapToStack conversion is assumed to be possible.
3156   bool isAssumedHeapToStack() const { return getAssumed(); }
3157 
3158   /// Returns true if HeapToStack conversion is known to be possible.
3159   bool isKnownHeapToStack() const { return getKnown(); }
3160 
3161   /// Create an abstract attribute view for the position \p IRP.
3162   static AAHeapToStack &createForPosition(const IRPosition &IRP, Attributor &A);
3163 
3164   /// See AbstractAttribute::getName()
3165   const std::string getName() const override { return "AAHeapToStack"; }
3166 
3167   /// See AbstractAttribute::getIdAddr()
3168   const char *getIdAddr() const override { return &ID; }
3169 
3170   /// This function should return true if the type of the \p AA is AAHeapToStack
3171   static bool classof(const AbstractAttribute *AA) {
3172     return (AA->getIdAddr() == &ID);
3173   }
3174 
3175   /// Unique ID (due to the unique address)
3176   static const char ID;
3177 };
3178 
3179 /// An abstract interface for privatizability.
3180 ///
3181 /// A pointer is privatizable if it can be replaced by a new, private one.
3182 /// Privatizing a pointer reduces the use count and the interaction between
3183 /// unrelated code parts.
3184 ///
3185 /// In order for a pointer to be privatizable its value cannot be observed
3186 /// (=nocapture), it is (for now) not written (=readonly & noalias), we know
3187 /// what values are necessary to make the private copy look like the original
3188 /// one, and the values we need can be loaded (=dereferenceable).
3189 struct AAPrivatizablePtr
3190     : public StateWrapper<BooleanState, AbstractAttribute> {
3191   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3192   AAPrivatizablePtr(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3193 
3194   /// Returns true if pointer privatization is assumed to be possible.
3195   bool isAssumedPrivatizablePtr() const { return getAssumed(); }
3196 
3197   /// Returns true if pointer privatization is known to be possible.
3198   bool isKnownPrivatizablePtr() const { return getKnown(); }
3199 
3200   /// Return the type we can choose for a private copy of the underlying
3201   /// value. None means it is not clear yet, nullptr means there is none.
3202   virtual Optional<Type *> getPrivatizableType() const = 0;
3203 
3204   /// Create an abstract attribute view for the position \p IRP.
3205   static AAPrivatizablePtr &createForPosition(const IRPosition &IRP,
3206                                               Attributor &A);
3207 
3208   /// See AbstractAttribute::getName()
3209   const std::string getName() const override { return "AAPrivatizablePtr"; }
3210 
3211   /// See AbstractAttribute::getIdAddr()
3212   const char *getIdAddr() const override { return &ID; }
3213 
3214   /// This function should return true if the type of the \p AA is
3215   /// AAPrivatizablePtr
3216   static bool classof(const AbstractAttribute *AA) {
3217     return (AA->getIdAddr() == &ID);
3218   }
3219 
3220   /// Unique ID (due to the unique address)
3221   static const char ID;
3222 };
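
// Illustrative sketch (`Arg` is a placeholder Argument): argument-promotion
// style transformations first ask for the privatizable type and only rewrite
// call sites once it is known:
//
//   const auto &PrivAA =
//       A.getAAFor<AAPrivatizablePtr>(*this, IRPosition::argument(Arg));
//   Optional<Type *> PrivTy = PrivAA.getPrivatizableType();
//   if (!PrivTy.hasValue() || !*PrivTy)
//     ...; // not decided yet, or not privatizable at all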
3223 
3224 /// An abstract interface for memory access kind related attributes
3225 /// (readnone/readonly/writeonly).
3226 struct AAMemoryBehavior
3227     : public IRAttribute<
3228           Attribute::ReadNone,
3229           StateWrapper<BitIntegerState<uint8_t, 3>, AbstractAttribute>> {
3230   AAMemoryBehavior(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3231 
3232   /// State encoding bits. A set bit in the state means the property holds.
3233   /// BEST_STATE is the best possible state, 0 the worst possible state.
3234   enum {
3235     NO_READS = 1 << 0,
3236     NO_WRITES = 1 << 1,
3237     NO_ACCESSES = NO_READS | NO_WRITES,
3238 
3239     BEST_STATE = NO_ACCESSES,
3240   };
3241   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
3242 
3243   /// Return true if we know that the underlying value is not read or written
3244   /// in its respective scope.
3245   bool isKnownReadNone() const { return isKnown(NO_ACCESSES); }
3246 
3247   /// Return true if we assume that the underlying value is not read or written
3248   /// in its respective scope.
3249   bool isAssumedReadNone() const { return isAssumed(NO_ACCESSES); }
3250 
3251   /// Return true if we know that the underlying value is not written in its
3252   /// respective scope.
3253   bool isKnownReadOnly() const { return isKnown(NO_WRITES); }
3254 
3255   /// Return true if we assume that the underlying value is not written in its
3256   /// respective scope.
3257   bool isAssumedReadOnly() const { return isAssumed(NO_WRITES); }
3258 
3259   /// Return true if we know that the underlying value is not read in its
3260   /// respective scope.
3261   bool isKnownWriteOnly() const { return isKnown(NO_READS); }
3262 
3263   /// Return true if we assume that the underlying value is not read in its
3264   /// respective scope.
3265   bool isAssumedWriteOnly() const { return isAssumed(NO_READS); }
3266 
3267   /// Create an abstract attribute view for the position \p IRP.
3268   static AAMemoryBehavior &createForPosition(const IRPosition &IRP,
3269                                              Attributor &A);
3270 
3271   /// See AbstractAttribute::getName()
3272   const std::string getName() const override { return "AAMemoryBehavior"; }
3273 
3274   /// See AbstractAttribute::getIdAddr()
3275   const char *getIdAddr() const override { return &ID; }
3276 
3277   /// This function should return true if the type of the \p AA is
3278   /// AAMemoryBehavior
3279   static bool classof(const AbstractAttribute *AA) {
3280     return (AA->getIdAddr() == &ID);
3281   }
3282 
3283   /// Unique ID (due to the unique address)
3284   static const char ID;
3285 };
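
// Illustrative sketch (`F` is a placeholder Function): the queries map onto
// the readnone/readonly/writeonly attributes, and consumers typically only
// need the assumed variants:
//
//   const auto &MemAA =
//       A.getAAFor<AAMemoryBehavior>(*this, IRPosition::function(F));
//   if (MemAA.isAssumedReadNone())
//     ...; // no observable reads or writes
//   else if (MemAA.isAssumedReadOnly())
//     ...; // may read but does not write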
3286 
3287 /// An abstract interface for all memory location attributes
3288 /// (readnone/argmemonly/inaccessiblememonly/inaccessibleorargmemonly).
3289 struct AAMemoryLocation
3290     : public IRAttribute<
3291           Attribute::ReadNone,
3292           StateWrapper<BitIntegerState<uint32_t, 511>, AbstractAttribute>> {
3293   using MemoryLocationsKind = StateType::base_t;
3294 
3295   AAMemoryLocation(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3296 
3297   /// Encoding of different locations that could be accessed by a memory
3298   /// access.
3299   enum {
3300     ALL_LOCATIONS = 0,
3301     NO_LOCAL_MEM = 1 << 0,
3302     NO_CONST_MEM = 1 << 1,
3303     NO_GLOBAL_INTERNAL_MEM = 1 << 2,
3304     NO_GLOBAL_EXTERNAL_MEM = 1 << 3,
3305     NO_GLOBAL_MEM = NO_GLOBAL_INTERNAL_MEM | NO_GLOBAL_EXTERNAL_MEM,
3306     NO_ARGUMENT_MEM = 1 << 4,
3307     NO_INACCESSIBLE_MEM = 1 << 5,
3308     NO_MALLOCED_MEM = 1 << 6,
3309     NO_UNKOWN_MEM = 1 << 7,
3310     NO_LOCATIONS = NO_LOCAL_MEM | NO_CONST_MEM | NO_GLOBAL_INTERNAL_MEM |
3311                    NO_GLOBAL_EXTERNAL_MEM | NO_ARGUMENT_MEM |
3312                    NO_INACCESSIBLE_MEM | NO_MALLOCED_MEM | NO_UNKOWN_MEM,
3313 
3314     // Helper bit to track if we gave up or not.
3315     VALID_STATE = NO_LOCATIONS + 1,
3316 
3317     BEST_STATE = NO_LOCATIONS | VALID_STATE,
3318   };
3319   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
3320 
3321   /// Return true if we know that the associated function has no observable
3322   /// accesses.
3323   bool isKnownReadNone() const { return isKnown(NO_LOCATIONS); }
3324 
3325   /// Return true if we assume that the associated function has no observable
3326   /// accesses.
3327   bool isAssumedReadNone() const {
3328     return isAssumed(NO_LOCATIONS) | isAssumedStackOnly();
3329   }
3330 
3331   /// Return true if we know that the associated function has at most
3332   /// local/stack accesses.
3333   bool isKnowStackOnly() const {
3334     return isKnown(inverseLocation(NO_LOCAL_MEM, true, true));
3335   }
3336 
3337   /// Return true if we assume that the associated function has at most
3338   /// local/stack accesses.
3339   bool isAssumedStackOnly() const {
3340     return isAssumed(inverseLocation(NO_LOCAL_MEM, true, true));
3341   }
3342 
3343   /// Return true if we know that the underlying value will only access
3344   /// inaccessible memory (see Attribute::InaccessibleMemOnly).
3345   bool isKnownInaccessibleMemOnly() const {
3346     return isKnown(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
3347   }
3348 
3349   /// Return true if we assume that the underlying value will only access
3350   /// inaccessible memory (see Attribute::InaccessibleMemOnly).
3351   bool isAssumedInaccessibleMemOnly() const {
3352     return isAssumed(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
3353   }
3354 
3355   /// Return true if we know that the underlying value will only access
3356   /// argument pointees (see Attribute::ArgMemOnly).
3357   bool isKnownArgMemOnly() const {
3358     return isKnown(inverseLocation(NO_ARGUMENT_MEM, true, true));
3359   }
3360 
3361   /// Return true if we assume that the underlying value will only access
3362   /// argument pointees (see Attribute::ArgMemOnly).
3363   bool isAssumedArgMemOnly() const {
3364     return isAssumed(inverseLocation(NO_ARGUMENT_MEM, true, true));
3365   }
3366 
3367   /// Return true if we know that the underlying value will only access
3368   /// inaccessible memory or argument pointees (see
3369   /// Attribute::InaccessibleOrArgMemOnly).
3370   bool isKnownInaccessibleOrArgMemOnly() const {
3371     return isKnown(
3372         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
3373   }
3374 
3375   /// Return true if we assume that the underlying value will only access
3376   /// inaccessible memory or argument pointees (see
3377   /// Attribute::InaccessibleOrArgMemOnly).
3378   bool isAssumedInaccessibleOrArgMemOnly() const {
3379     return isAssumed(
3380         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
3381   }
3382 
3383   /// Return true if the underlying value may access memory through argument
3384   /// pointers of the associated function, if any.
3385   bool mayAccessArgMem() const { return !isAssumed(NO_ARGUMENT_MEM); }
3386 
3387   /// Return true if only the memory locations specified by \p MLK are assumed
3388   /// to be accessed by the associated function.
3389   bool isAssumedSpecifiedMemOnly(MemoryLocationsKind MLK) const {
3390     return isAssumed(MLK);
3391   }
3392 
3393   /// Return the locations that are assumed to be not accessed by the associated
3394   /// function, if any.
3395   MemoryLocationsKind getAssumedNotAccessedLocation() const {
3396     return getAssumed();
3397   }
3398 
3399   /// Return the inverse of location \p Loc, thus for NO_XXX the return
3400   /// describes ONLY_XXX. The flags \p AndLocalMem and \p AndConstMem determine
3401   /// if local (=stack) and constant memory are allowed as well. Most of the
3402   /// time we do want them to be included, e.g., argmemonly allows accesses via
3403   /// argument pointers or local or constant memory accesses.
3404   static MemoryLocationsKind
3405   inverseLocation(MemoryLocationsKind Loc, bool AndLocalMem, bool AndConstMem) {
3406     return NO_LOCATIONS & ~(Loc | (AndLocalMem ? NO_LOCAL_MEM : 0) |
3407                             (AndConstMem ? NO_CONST_MEM : 0));
3408   }
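
  // Worked example (for illustration): "argmemonly" corresponds to the
  // inverse of NO_ARGUMENT_MEM with local and constant memory still allowed:
  //   inverseLocation(NO_ARGUMENT_MEM, true, true)
  //     == NO_LOCATIONS & ~(NO_ARGUMENT_MEM | NO_LOCAL_MEM | NO_CONST_MEM)
  // i.e. every location except argument, local, and constant memory must be
  // assumed "not accessed" for isAssumedArgMemOnly() to hold.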
3409 
3410   /// Return the locations encoded by \p MLK as a readable string.
3411   static std::string getMemoryLocationsAsStr(MemoryLocationsKind MLK);
3412 
3413   /// Simple enum to distinguish read/write/read-write accesses.
3414   enum AccessKind {
3415     NONE = 0,
3416     READ = 1 << 0,
3417     WRITE = 1 << 1,
3418     READ_WRITE = READ | WRITE,
3419   };
3420 
3421   /// Check \p Pred on all accesses to the memory kinds specified by \p MLK.
3422   ///
3423   /// This method will evaluate \p Pred on all accesses (access instruction +
3424   /// underlying accessed memory pointer) and it will return true if \p Pred
3425   /// holds every time.
3426   virtual bool checkForAllAccessesToMemoryKind(
3427       function_ref<bool(const Instruction *, const Value *, AccessKind,
3428                         MemoryLocationsKind)>
3429           Pred,
3430       MemoryLocationsKind MLK) const = 0;
3431 
3432   /// Create an abstract attribute view for the position \p IRP.
3433   static AAMemoryLocation &createForPosition(const IRPosition &IRP,
3434                                              Attributor &A);
3435 
3436   /// See AbstractState::getAsStr().
3437   const std::string getAsStr() const override {
3438     return getMemoryLocationsAsStr(getAssumedNotAccessedLocation());
3439   }
3440 
3441   /// See AbstractAttribute::getName()
3442   const std::string getName() const override { return "AAMemoryLocation"; }
3443 
3444   /// See AbstractAttribute::getIdAddr()
3445   const char *getIdAddr() const override { return &ID; }
3446 
3447   /// This function should return true if the type of the \p AA is
3448   /// AAMemoryLocation
3449   static bool classof(const AbstractAttribute *AA) {
3450     return (AA->getIdAddr() == &ID);
3451   }
3452 
3453   /// Unique ID (due to the unique address)
3454   static const char ID;
3455 };
3456 
3457 /// An abstract interface for range value analysis.
3458 struct AAValueConstantRange
3459     : public StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t> {
3460   using Base = StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t>;
3461   AAValueConstantRange(const IRPosition &IRP, Attributor &A)
3462       : Base(IRP, IRP.getAssociatedType()->getIntegerBitWidth()) {}
3463 
3464   /// See AbstractAttribute::getState(...).
3465   IntegerRangeState &getState() override { return *this; }
3466   const IntegerRangeState &getState() const override { return *this; }
3467 
3468   /// Create an abstract attribute view for the position \p IRP.
3469   static AAValueConstantRange &createForPosition(const IRPosition &IRP,
3470                                                  Attributor &A);
3471 
3472   /// Return an assumed range for the associated value at a program point
3473   /// \p CtxI. If \p CtxI is nullptr, simply return an assumed range.
3474   virtual ConstantRange
3475   getAssumedConstantRange(Attributor &A,
3476                           const Instruction *CtxI = nullptr) const = 0;
3477 
3478   /// Return a known range for the associated value at a program point
3479   /// \p CtxI. If \p CtxI is nullptr, simply return a known range.
3480   virtual ConstantRange
3481   getKnownConstantRange(Attributor &A,
3482                         const Instruction *CtxI = nullptr) const = 0;
3483 
3484   /// Return an assumed constant for the associated value at a program point
3485   /// \p CtxI.
3486   Optional<ConstantInt *>
3487   getAssumedConstantInt(Attributor &A,
3488                         const Instruction *CtxI = nullptr) const {
3489     ConstantRange RangeV = getAssumedConstantRange(A, CtxI);
3490     if (auto *C = RangeV.getSingleElement())
3491       return cast<ConstantInt>(
3492           ConstantInt::get(getAssociatedValue().getType(), *C));
3493     if (RangeV.isEmptySet())
3494       return llvm::None;
3495     return nullptr;
3496   }
3497 
3498   /// See AbstractAttribute::getName()
3499   const std::string getName() const override { return "AAValueConstantRange"; }
3500 
3501   /// See AbstractAttribute::getIdAddr()
3502   const char *getIdAddr() const override { return &ID; }
3503 
3504   /// This function should return true if the type of the \p AA is
3505   /// AAValueConstantRange
3506   static bool classof(const AbstractAttribute *AA) {
3507     return (AA->getIdAddr() == &ID);
3508   }
3509 
3510   /// Unique ID (due to the unique address)
3511   static const char ID;
3512 };
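
// Illustrative sketch (`V` and `CtxI` are placeholders): range information
// is commonly used to fold comparisons or to prove a value constant:
//
//   const auto &RangeAA =
//       A.getAAFor<AAValueConstantRange>(*this, IRPosition::value(V));
//   ConstantRange CR = RangeAA.getAssumedConstantRange(A, CtxI);
//   if (const APInt *C = CR.getSingleElement())
//     ...; // V is assumed to always have the constant value *C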
3513 
3514 /// A class for a set state.
3515 /// The assumed boolean state indicates whether the corresponding set is full
3516 /// set or not. If the assumed state is false, this is the worst state. The
3517 /// worst state (invalid state) of set of potential values is when the set
3518 /// contains every possible value (i.e. we cannot in any way limit the value
3519 /// that the target position can take). That never happens naturally, we only
3520 /// force it. As for the conditions under which we force it, see
3521 /// AAPotentialValues.
3522 template <typename MemberTy, typename KeyInfo = DenseMapInfo<MemberTy>>
3523 struct PotentialValuesState : AbstractState {
3524   using SetTy = DenseSet<MemberTy, KeyInfo>;
3525 
3526   PotentialValuesState() : IsValidState(true), UndefIsContained(false) {}
3527 
3528   PotentialValuesState(bool IsValid)
3529       : IsValidState(IsValid), UndefIsContained(false) {}
3530 
3531   /// See AbstractState::isValidState(...)
3532   bool isValidState() const override { return IsValidState.isValidState(); }
3533 
3534   /// See AbstractState::isAtFixpoint(...)
3535   bool isAtFixpoint() const override { return IsValidState.isAtFixpoint(); }
3536 
3537   /// See AbstractState::indicatePessimisticFixpoint(...)
3538   ChangeStatus indicatePessimisticFixpoint() override {
3539     return IsValidState.indicatePessimisticFixpoint();
3540   }
3541 
3542   /// See AbstractState::indicateOptimisticFixpoint(...)
3543   ChangeStatus indicateOptimisticFixpoint() override {
3544     return IsValidState.indicateOptimisticFixpoint();
3545   }
3546 
3547   /// Return the assumed state
3548   PotentialValuesState &getAssumed() { return *this; }
3549   const PotentialValuesState &getAssumed() const { return *this; }
3550 
3551   /// Return this set. We should check whether this set is valid or not by
3552   /// isValidState() before calling this function.
3553   const SetTy &getAssumedSet() const {
3554     assert(isValidState() && "This set should not be used when it is invalid!");
3555     return Set;
3556   }
3557 
3558   /// Returns whether this state contains an undef value or not.
3559   bool undefIsContained() const {
3560     assert(isValidState() && "This flag should not be used when it is invalid!");
3561     return UndefIsContained;
3562   }
3563 
3564   bool operator==(const PotentialValuesState &RHS) const {
3565     if (isValidState() != RHS.isValidState())
3566       return false;
3567     if (!isValidState() && !RHS.isValidState())
3568       return true;
3569     if (undefIsContained() != RHS.undefIsContained())
3570       return false;
3571     return Set == RHS.getAssumedSet();
3572   }
3573 
3574   /// Maximum number of potential values to be tracked.
3575   /// This is set by -attributor-max-potential-values command line option
3576   static unsigned MaxPotentialValues;
3577 
3578   /// Return empty set as the best state of potential values.
3579   static PotentialValuesState getBestState() {
3580     return PotentialValuesState(true);
3581   }
3582 
3583   static PotentialValuesState getBestState(PotentialValuesState &PVS) {
3584     return getBestState();
3585   }
3586 
3587   /// Return full set as the worst state of potential values.
3588   static PotentialValuesState getWorstState() {
3589     return PotentialValuesState(false);
3590   }
3591 
3592   /// Union assumed set with the passed value.
3593   void unionAssumed(const MemberTy &C) { insert(C); }
3594 
3595   /// Union assumed set with assumed set of the passed state \p PVS.
3596   void unionAssumed(const PotentialValuesState &PVS) { unionWith(PVS); }
3597 
3598   /// Union assumed set with an undef value.
3599   void unionAssumedWithUndef() { unionWithUndef(); }
3600 
3601   /// "Clamp" this state with \p PVS.
3602   PotentialValuesState operator^=(const PotentialValuesState &PVS) {
3603     IsValidState ^= PVS.IsValidState;
3604     unionAssumed(PVS);
3605     return *this;
3606   }
3607 
3608   PotentialValuesState operator&=(const PotentialValuesState &PVS) {
3609     IsValidState &= PVS.IsValidState;
3610     unionAssumed(PVS);
3611     return *this;
3612   }
3613 
3614 private:
3615   /// Check the size of this set, and invalidate when the size is no
3616   /// less than the \p MaxPotentialValues threshold.
3617   void checkAndInvalidate() {
3618     if (Set.size() >= MaxPotentialValues)
3619       indicatePessimisticFixpoint();
3620   }
3621 
3622   /// If this state contains both undef and concrete values, we can drop
3623   /// undef and keep only the concrete values.
3624   void reduceUndefValue() { UndefIsContained = UndefIsContained & Set.empty(); }
3625 
3626   /// Insert an element into this set.
3627   void insert(const MemberTy &C) {
3628     if (!isValidState())
3629       return;
3630     Set.insert(C);
3631     checkAndInvalidate();
3632   }
3633 
3634   /// Take union with R.
3635   void unionWith(const PotentialValuesState &R) {
3636     /// If this is a full set, do nothing.
3637     if (!isValidState())
3638       return;
3639     /// If R is a full set, change this set to a full set.
3640     if (!R.isValidState()) {
3641       indicatePessimisticFixpoint();
3642       return;
3643     }
3644     for (const MemberTy &C : R.Set)
3645       Set.insert(C);
3646     UndefIsContained |= R.undefIsContained();
3647     reduceUndefValue();
3648     checkAndInvalidate();
3649   }
3650 
3651   /// Take union with an undef value.
3652   void unionWithUndef() {
3653     UndefIsContained = true;
3654     reduceUndefValue();
3655   }
3656 
3657   /// Take intersection with R.
3658   void intersectWith(const PotentialValuesState &R) {
3659     /// If R is a full set, do nothing.
3660     if (!R.isValidState())
3661       return;
3662     /// If this is a full set, change this to R.
3663     if (!isValidState()) {
3664       *this = R;
3665       return;
3666     }
3667     SetTy IntersectSet;
3668     for (const MemberTy &C : Set) {
3669       if (R.Set.count(C))
3670         IntersectSet.insert(C);
3671     }
3672     Set = IntersectSet;
3673     UndefIsContained &= R.undefIsContained();
3674     reduceUndefValue();
3675   }
3676 
3677   /// A helper state which indicate whether this state is valid or not.
3678   BooleanState IsValidState;
3679 
3680   /// Container for potential values
3681   SetTy Set;
3682 
3683   /// Flag for undef value
3684   bool UndefIsContained;
3685 };
3686 
3687 using PotentialConstantIntValuesState = PotentialValuesState<APInt>;
3688 
3689 raw_ostream &operator<<(raw_ostream &OS,
3690                         const PotentialConstantIntValuesState &R);
3691 
3692 /// An abstract interface for potential values analysis.
3693 ///
3694 /// This AA collects potential values for each IR position.
3695 /// An assumed set of potential values is initialized with the empty set (the
3696 /// best state) and it will grow monotonically as we find more potential values
3697 /// for this position.
3698 /// The set might be forced to the worst state, that is, to contain every
3699 /// possible value for this position in 2 cases.
3700 ///   1. We surpassed the \p MaxPotentialValues threshold. This includes the
3701 ///      case that this position is affected (e.g. because of an operation) by a
3702 ///      Value that is in the worst state.
3703 ///   2. We tried to initialize on a Value that we cannot handle (e.g. an
3704 ///      operator we do not currently handle).
3705 ///
3706 /// TODO: Support values other than constant integers.
3707 struct AAPotentialValues
3708     : public StateWrapper<PotentialConstantIntValuesState, AbstractAttribute> {
3709   using Base = StateWrapper<PotentialConstantIntValuesState, AbstractAttribute>;
3710   AAPotentialValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3711 
3712   /// See AbstractAttribute::getState(...).
3713   PotentialConstantIntValuesState &getState() override { return *this; }
3714   const PotentialConstantIntValuesState &getState() const override {
3715     return *this;
3716   }
3717 
3718   /// Create an abstract attribute view for the position \p IRP.
3719   static AAPotentialValues &createForPosition(const IRPosition &IRP,
3720                                               Attributor &A);
3721 
3722   /// Return assumed constant for the associated value
3723   Optional<ConstantInt *>
3724   getAssumedConstantInt(Attributor &A,
3725                         const Instruction *CtxI = nullptr) const {
3726     if (!isValidState())
3727       return nullptr;
3728     if (getAssumedSet().size() == 1)
3729       return cast<ConstantInt>(ConstantInt::get(getAssociatedValue().getType(),
3730                                                 *(getAssumedSet().begin())));
3731     if (getAssumedSet().size() == 0) {
3732       if (undefIsContained())
3733         return cast<ConstantInt>(
3734             ConstantInt::get(getAssociatedValue().getType(), 0));
3735       return llvm::None;
3736     }
3737 
3738     return nullptr;
3739   }
3740 
3741   /// See AbstractAttribute::getName()
3742   const std::string getName() const override { return "AAPotentialValues"; }
3743 
3744   /// See AbstractAttribute::getIdAddr()
3745   const char *getIdAddr() const override { return &ID; }
3746 
3747   /// This function should return true if the type of the \p AA is
3748   /// AAPotentialValues
3749   static bool classof(const AbstractAttribute *AA) {
3750     return (AA->getIdAddr() == &ID);
3751   }
3752 
3753   /// Unique ID (due to the unique address)
3754   static const char ID;
3755 };
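
// Illustrative sketch (`Cond` is a placeholder Value): once the state is
// valid, a consumer can enumerate the assumed constants, e.g. to prove that
// a switch condition only takes a handful of values:
//
//   const auto &PotentialAA =
//       A.getAAFor<AAPotentialValues>(*this, IRPosition::value(Cond));
//   if (PotentialAA.isValidState() && !PotentialAA.undefIsContained())
//     for (const APInt &C : PotentialAA.getAssumedSet())
//       ...; // Cond is assumed to take only these constant values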
3756 
3757 /// An abstract interface for all noundef attributes.
3758 struct AANoUndef
3759     : public IRAttribute<Attribute::NoUndef,
3760                          StateWrapper<BooleanState, AbstractAttribute>> {
3761   AANoUndef(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3762 
3763   /// Return true if we assume that the underlying value is noundef.
3764   bool isAssumedNoUndef() const { return getAssumed(); }
3765 
3766   /// Return true if we know that the underlying value is noundef.
3767   bool isKnownNoUndef() const { return getKnown(); }
3768 
3769   /// Create an abstract attribute view for the position \p IRP.
3770   static AANoUndef &createForPosition(const IRPosition &IRP, Attributor &A);
3771 
3772   /// See AbstractAttribute::getName()
3773   const std::string getName() const override { return "AANoUndef"; }
3774 
3775   /// See AbstractAttribute::getIdAddr()
3776   const char *getIdAddr() const override { return &ID; }
3777 
3778   /// This function should return true if the type of the \p AA is AANoUndef
3779   static bool classof(const AbstractAttribute *AA) {
3780     return (AA->getIdAddr() == &ID);
3781   }
3782 
3783   /// Unique ID (due to the unique address)
3784   static const char ID;
3785 };
3786 
3787 /// Run options, used by the pass manager.
3788 enum AttributorRunOption {
3789   NONE = 0,
3790   MODULE = 1 << 0,
3791   CGSCC = 1 << 1,
3792   ALL = MODULE | CGSCC
3793 };
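
// Illustrative sketch (hypothetical pass-pipeline code; `RunOptions`, `MPM`,
// and `CGPM` are placeholders): the pipeline can gate the module and CGSCC
// variants of the pass on a mask of these bits, roughly:
//
//   if (RunOptions & AttributorRunOption::MODULE)
//     MPM.addPass(AttributorPass());
//   if (RunOptions & AttributorRunOption::CGSCC)
//     CGPM.addPass(AttributorCGSCCPass());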
3794 
3795 } // end namespace llvm
3796 
3797 #endif // LLVM_TRANSFORMS_IPO_FUNCTIONATTRS_H
3798